| halid (string, len 8–12) | lang (string, 1 class) | domain (sequence, len 1–8) | timestamp (string, 938 values) | year (string, 55 values) | url (string, len 43–389) | text (string, len 16–2.18M) | size (int64, 16–2.18M) | authorids (sequence, len 1–102) | affiliations (sequence, len 0–229) |
|---|---|---|---|---|---|---|---|---|---|
01482107 | en | [
"chim",
"spi"
] | 2024/03/04 23:41:48 | 2008 | https://hal.univ-lorraine.fr/hal-01482107/file/Draft%20final.pdf | Laouès Guendouz
Sébastien Leclerc
Alain Retournard
Ahcène Hedjiedj
Daniel Canet
Single-sided radio-frequency field gradient with two unsymmetrical loops. Applications to Nuclear Magnetic Resonance
systems were verified (vs. theoretical predictions) by means of experiments employing gradients in view of the determination of the self-diffusion coefficients of liquids.
INTRODUCTION
Most Nuclear Magnetic Resonance (NMR) experiments nowadays necessarily involve the use of magnetic field gradients. These experiments include i) coherence pathway selection in pure spectroscopy, ii) measurements of self-diffusion coefficients, and of course iii) Magnetic Resonance Imaging (MRI) or NMR microscopy [Kimmich, NMR Tomography, Diffusometry, Relaxometry]. All these techniques generally rely on gradients of the static magnetic field (B0 gradients), for which enormous development efforts have been made over the last three decades. Nevertheless, they are still hampered by the issue of the so-called internal gradients [Price], which superimpose on the applied gradients and can therefore alter the spectroscopic or imaging data. These internal gradients arise from the magnetic susceptibility differences of the materials constituting the sample or the object under investigation. They occur at the interfaces between these materials and are especially detrimental in heterogeneous samples. Of course, many schemes have been devised to circumvent this drawback, including the use of stronger and stronger applied gradients. This latter remedy is however limited by the inevitable increase of the rise and fall times of B0 gradients, which are generally applied in the form of short pulses. On the other hand, gradients of the radio-frequency (rf) magnetic field (B1 gradients), the alternating magnetic field mandatory in any NMR experiment, have received much less attention 3. They prove however to be totally immune to internal (or background) gradients, since the latter are B0 gradients in nature (of course, internal B1 gradients exist but are totally negligible owing to the amplitude of B1 fields compared with that of B0 fields). Moreover, rise and fall times are very small in the case of B1 gradient pulses (i.e., oscillations at the beginning and at the end of the rf pulses never exceed a few hundreds of ns). All these features should render B1 gradients rather attractive and, indeed, some phenomena such as diffusive-diffractive peaks could be observed with B1 gradients in real porous media while they are totally missed with B0 gradients 4. Likewise, the variation of the apparent self-diffusion coefficient as a function of the diffusion interval could be properly observed in these systems with B1 gradients, whereas this variation was underestimated with B0 gradients. In the same vein, NMR images of heterogeneous objects can be devoid of blurring effects when produced with B1 gradients. These properties justify efforts to improve the quality of B1 gradients, which suffer from some drawbacks related to their strength or to their uniformity over an acceptable spatial zone. In the past, our group has been using a single loop to produce B1 gradients of acceptable strength and uniformity 3. Other groups have sacrificed uniformity to reach stronger gradients, either with toroidal cavities 5 or with especially shaped solenoids possibly combined with B0 gradients 6. A third approach makes use of two orthogonal loops for creating B1 gradients in two directions 7. Alternatively, a pair of anti-Helmholtz coils can generate an rf gradient, and this possibility, with two different currents in the two loops, has already been used in some way to achieve spatial localization (the so-called "straddle coil" 8,9); however, it involves a zero magnetic field near the center of the arrangement and does not generate a uniform gradient.
Therefore, this methodology cannot be used for our applications.
It is the aim of this paper to propose a coil arrangement circumventing some of the drawbacks mentioned above. From previous work dealing with double Helmholtz coils devised for improving the B1 field homogeneity 10, it appears that half of this arrangement is capable of generating stronger and more uniform B1 gradients. Such arrangements will be discussed and optimized here. Tests will be carried out with dedicated probes (antennas) adapted to horizontal magnets (devoted to MRI) and to vertical magnets (devoted to high-resolution NMR spectroscopy).
THEORY
The expression of the magnetic field produced by a single loop of radius a, along the symmetry axis of this loop (x in the following), is given by the well-known formula derived from the Biot and Savart law [Eq. (1)], where μ0 is the vacuum permeability and I the intensity of the electric current in the loop. Note that the origin of the x-axis has been chosen at the location a/2, where the first derivative (dB1/dx, in other words the B1 gradient in the x direction) is maximum while the second-order derivative is zero (inflexion point of the curve representing the B1 variation as a function of x). Note also that this expression is strictly valid in the dc case but constitutes an excellent approximation (quasi-static approximation) for NMR experiments carried out at relatively low frequencies (up to 200 MHz, with coil dimensions significantly smaller than the wavelength).
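Equation (1) itself is not reproduced in this copy; the well-known on-axis expression it refers to, and its first derivative, are presumably the standard results (written here with the axial coordinate X measured from the loop plane, so that the origin O above corresponds to X = a/2):

$$B_{1,\mathrm{axial}}(X)=\frac{\mu_0 I a^{2}}{2\,(a^{2}+X^{2})^{3/2}},\qquad \frac{dB_{1}}{dX}=-\frac{3\,\mu_0 I a^{2}X}{2\,(a^{2}+X^{2})^{5/2}}.$$

At X = a/2 these give B1 ≈ 4.50 × 10^-7 I/a T and |dB1/dX| ≈ 5.40 × 10^-7 I/a^2 T/m, which is consistent with the S_a and E_a values and the percentage increases quoted further below.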
The choice of the origin of the x-axis makes an expansion of Eq. (1) in powers of x/a straightforward:

B_axial ≈ (μ0 I / a) [ 0.358 + 0.429 (x/a) - 0.458 (x/a)^3 + ... ]   (2)

As indicated above, the second-order term vanishes, but the amplitude of the third-order term is slightly larger than that of the first-order term (the B1 gradient). This explains why the zone where the B1 gradient is uniform (within ±1% of its maximum value) is rather small: [-0.06 a, 0.06 a]. This interval is deduced, to a first approximation, from Eq. (2).
As shown previously 10, additional loops, whose dimensions and locations are to be optimized, can lead to the cancellation of higher-order terms in an expansion such as (2). Our aim will primarily be the suppression of the third-order term. Let us consider an arrangement made of two coaxial loops and an arbitrary origin O, which would advantageously correspond to the inflexion point of the B1 curve (see above for the case of a single loop). Owing to the axial symmetry of such an arrangement, it is convenient to have recourse to spherical coordinates. Indeed, Roméo and Hoult 11 have proposed a very useful expression which provides the axial component of the B1 field at a location M specified by the vector r = OM, this vector being defined by its length r and its angle θ with the x-axis. The Roméo and Hoult formula is expressed as follows for a single loop i of radius a_i carrying a current of intensity I_i, its position being defined by the vector R_i (of length R_i) joining the origin O to any point of the loop and by the angle α_i between R_i and the x-axis [Fig. 1(a)]:
B_1,axial = (μ0 I_i / 2 R_i) Σ_{n=0}^{∞} (n+1) [ P_n(cos α_i) - cos α_i P_{n+1}(cos α_i) ] (r / R_i)^n P_n(cos θ)   (3)

where P_n is a Legendre polynomial. For two loops carrying the same current (I_1 = I_2 = I), and limiting the calculations to the x-axis (θ = 0, so that r = x and P_n(cos θ) = 1), we obtain, simply by superposition,

B_1,axial = (μ0 I / 2) Σ_{i=1}^{2} (1 / R_i) Σ_{n=0}^{∞} (n+1) [ P_n(cos α_i) - cos α_i P_{n+1}(cos α_i) ] (x / R_i)^n   (4)
The coefficients of the second-, third- and fourth-order terms in Eq. (4), expressed as functions of X_i = cos α_i, are proportional to

Second order: Σ_i I (1 - 6 X_i^2 + 5 X_i^4) / R_i^3   (5)
Third order: Σ_i I X_i (3 - 10 X_i^2 + 7 X_i^4) / R_i^4   (6)
Fourth order: Σ_i I (1 - 15 X_i^2 + 35 X_i^4 - 21 X_i^6) / R_i^5   (7)

For the sake of simplicity, we shall limit the discussion about gradient uniformity to the cancellation of the second-order and third-order terms. Appropriate values of X_1 and X_2 are the roots of the two following equations:

R_21^3 (1 - 6 X_1^2 + 5 X_1^4) + (1 - 6 X_2^2 + 5 X_2^4) = 0   (8a)
R_21^4 X_1 (3 - 10 X_1^2 + 7 X_1^4) + X_2 (3 - 10 X_2^2 + 7 X_2^4) = 0   (8b)
where the ratio R_21 = R_2 / R_1 has been introduced. With the conventions of Fig. 1, one has R_21 > 0 and 0 ≤ X_i < 1 (or equivalently 0 < α_i ≤ π/2). From a general point of view, we can restrict ourselves to the case R_21 ≥ 1 because, for the inverse ratio R_21 ≤ 1, the two roots (X_1, X_2) simply become (X_2, X_1). On the other hand, possible roots can be separated into two classes: one with X_1 and X_2 of identical signs, the other with X_1 and X_2 of opposite signs. Indeed, much stronger and more uniform gradients are predicted for X_1 and X_2 of identical signs, hence the choice of the single-sided configuration as detailed below.
Two coaxial loops on a spherical surface
Choosing the origin at the sphere center, one has R_1 = R_2 = R, i.e. R_21 = 1 [Fig. 1(b)]. Although this arrangement yielded a very good result in terms of B_1 homogeneity when dealing with a pair of such two-loop systems 10, it can be shown numerically that here (with the goal of improving the homogeneity of the B_1 gradient) no physically meaningful roots of Eqs. (8a,b) exist. However, it is always possible to cancel the second-order term [Eq. (8a)] while minimizing the third-order term [Eq. (8b)]; we find the set (X_1 ≈ 0.2016, X_2 ≈ 0.827276). By reference to the single-loop system, dubbed SL_a, we shall define a as the radius of the main loop, a = a_1 = R sin α_1 (the radius of the second loop being a_2 = R sin α_2), and dub the present two-loop system (with the above values of X_1 and X_2) S_a. With these notations, and mimicking Eq. (2), we obtain an expansion of the same form [Eq. (9)]. Compared to Eq. (2), an increase of the second term (the B_1 gradient) and a significant decrease of the third term (thus a significant improvement of the gradient homogeneity) can be observed. From Eq. (9), the B_1 field at the origin is 7.84639 × 10^-7 I/a (in tesla) and the B_1 gradient is equal to 8.21910 × 10^-7 I/a^2 T/m.
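These numbers, and the percentage increases quoted in the next subsection, can be cross-checked directly from the on-axis Biot-Savart expression summed over the two loops, using the Table I geometries; the following minimal sketch (not part of the original paper) does so:

```python
import numpy as np

mu0 = 4e-7 * np.pi  # vacuum permeability (T m/A)

def loop_B_and_G(radius, dist, I=1.0, a=1.0):
    """On-axis field and gradient at O produced by one loop; radius and axial
    distance are given in units of the main-loop radius a."""
    r, d = radius * a, dist * a
    B = mu0 * I * r**2 / (2 * (r**2 + d**2) ** 1.5)
    G = 3 * mu0 * I * r**2 * d / (2 * (r**2 + d**2) ** 2.5)
    return B, G

# (radius, distance from O) pairs taken from Table I; with a = 1 m and I = 1 A
# the printed numbers equal the coefficients of I/a and I/a^2 quoted in the text.
arrangements = {
    "SL_a": [(1.0, 0.5)],
    "S_a":  [(1.0, 0.2058), (0.5736, 0.8446)],
    "E_a":  [(1.0, 0.0402), (0.6561, 0.6030)],
}
for name, loops in arrangements.items():
    B = sum(loop_B_and_G(r, d)[0] for r, d in loops)
    G = sum(loop_B_and_G(r, d)[1] for r, d in loops)
    print(f"{name}: B1(O) = {B:.5e} I/a T, gradient = {G:.5e} I/a^2 T/m")
# S_a reproduces ~7.846e-7 and ~8.22e-7 as quoted above; the ratios to SL_a give
# the 74.5%/52.3% (S_a) and 124.4%/75.4% (E_a) increases of the next subsection.
```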
Two coaxial loops on an ellipsoidal surface
We consider the non-spherical case (R_21 ≠ 1). For one of the two classes of roots, only one set of physically acceptable roots can be found; it is however without interest as far as the gradient and its uniformity are concerned.
Comparison of the three arrangements
Referring to SL a , for a given current I in all coils, the B 1 value for the S a and E a arrangements increases by 74.5% and 124.4%, respectively. Likewise, and more importantly in the present context, the rf field gradient increases by 52.3% and 75.4%, respectively. Useful dimensions of the SL a , S a and E a arrangements are given in Table I.
SIMULATIONS
B_1 field and B_1 gradient along the symmetry axis
Calculations are performed according to Eq. (1), adding the contributions of the two loops. B_1 profiles are shown in Fig. 2 for the three arrangements SL_a, S_a and E_a, assumed to carry the same current I. Data are normalized with respect to the B_1 value at the origin chosen for the reference loop SL_a. As expected, the linearity zone is larger for S_a than for SL_a, and still larger for E_a than for S_a. B_1 gradient profiles are shown in Fig. 3. They have been obtained by differentiating (numerically) B_1 with respect to the variable x. The vertical scale of Fig. 3 has been normalized, for each arrangement, with respect to the gradient value at the origin O. The improvement of the gradient uniformity is thus confirmed for a two-loop arrangement, especially for the arrangement E_a. The gradient homogeneity can be quantified by the width of the region corresponding to a ±1% variation. These widths are [-0.0580 a, 0.0542 a], [-0.1175 a, 0.1479 a] and [-0.2011 a, 0.1659 a] for SL_a, S_a and E_a, respectively. Even though the latter two regions are not centered on the origin O, it appears that the uniformity zone has more than doubled when going from SL_a to S_a and more than tripled when going from SL_a to E_a. With 10% in place of 1%, the new intervals are [-0.2030 a, 0.1628 a], [-0.3107 a, 0.2859 a] and [-0.4170 a, 0.2863 a]; the improvement associated with the use of two loops is here less significant.
B_1 field and B_1 gradient off the symmetry axis
From the expression (in cylindrical coordinates) given by Smythe [Smythe, Static and Dynamic Electricity] for the components of the field produced by a single loop, we were able to calculate the field components in the case of two loops with the help of the Mathematica software [Wolfram, The Mathematica Book]. Results are displayed in Fig. 4. Let us recall that axial symmetry prevails here, x denoting the symmetry axis (as above) and y one axis in the transverse plane. The axial component of B_1 (B_1x) is seen to be rather uniform in the transverse plane, over a zone roughly equal to the diameter of the main loop [Fig. 4(a)].
By contrast, the transverse component (B 1y ) is far from being uniform in the transverse plane and even exhibits an almost uniform slope [Fig. 4(b)]. In fact, it is zero on the symmetry axis and changes sign when it crosses the symmetry axis. Although its value is smaller than that of the axial component, it could be a problem since this transverse rf field gradient adds to the axial rf field gradient (the main gradient). However, the transverse rf field component is zero at the sample center, consequently unable to excite the spins in the major part of the sample.
Moreover, as this component has opposite signs on both sides of the sample symmetry axis, its possible effects should cancel on average. These considerations lead to the conclusion that we essentially have to consider the gradient of the axial component. As a matter of fact, one of the objectives of this work is precisely to obtain the best uniformity for the axial gradient. The contour plots of Fig. 5 clearly show the size of the uniformity zone for the three arrangements considered here. It can be noticed that this homogeneity zone is (surprisingly) larger in the transverse direction than in the axial direction. The dimensions of the object under investigation are therefore primarily dictated by the gradient uniformity along the axial (x) direction; the above considerations indicate that comparable dimensions can be accepted along the other two directions.
ELECTRICAL CIRCUIT MODEL (TOTAL EQUIVALENT INDUCTANCE)
In order to optimize the performances of an rf antenna, it is necessary to calculate the values of the tuning and matching capacitors, which implies establishing an electrical model of the antenna. Owing to the frequencies considered here, the dimensions of the device are small with respect to the wavelength, so that the problem amounts to treating two series loops magnetically coupled and to determining their global inductance.
Working at high frequencies (which implies a negligible skin depth), the self-inductance of a circular loop i of radius a_i made of a conductor of radius ρ_i can be approximated as 14

L_i ≈ μ0 a_i [ ln(8 a_i / ρ_i) - 2 ]   (11)

Likewise, the mutual inductance of two coaxial loops of radii a_1 and a_2, separated by a distance d, can be written as 12

M_12 = μ0 √(a_1 a_2) [ (2/k - k) K(k) - (2/k) E(k) ]   (12)

where k^2 = 4 a_1 a_2 / [ (a_1 + a_2)^2 + d^2 ], K and E being the complete elliptic integrals of the first and second kind. Eq. (12) can be expressed more simply as 14

M_12 = μ0 √(a_1 a_2) m_12   (13)
The quantity m_12 is dimensionless and depends solely on the geometry of the two-loop system. Relevant numerical values are reported in Table II. From these values, we can express the global inductance

L = L_1 + L_2 + 2 M_12

for our two arrangements S_a and E_a (the two loops in a serial configuration):

L(S_a) = μ0 a [ 0.389639 + 1.57357 ln(a/ρ) ]   (14)
L(E_a) = μ0 a [ 0.697573 + 1.65609 ln(a/ρ) ]   (15)
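As a numerical cross-check (not in the original), Eqs. (11)-(15) can be evaluated with SciPy's complete elliptic integrals and compared with Tables II and III; a minimal sketch, assuming the wire radius is half the quoted wire diameter:

```python
import numpy as np
from scipy.special import ellipk, ellipe  # complete elliptic integrals K(m), E(m) with m = k^2

mu0 = 4e-7 * np.pi

def self_inductance(a, rho):
    """Eq. (11): self-inductance of a single loop (radius a, wire radius rho)."""
    return mu0 * a * (np.log(8 * a / rho) - 2)

def m12(a1, a2, d):
    """Dimensionless mutual-inductance factor of Eqs. (12)-(13)."""
    k2 = 4 * a1 * a2 / ((a1 + a2) ** 2 + d ** 2)
    k = np.sqrt(k2)
    return (2 / k - k) * ellipk(k2) - (2 / k) * ellipe(k2)

a = 1.3e-2                                # main-loop radius (m)
print(m12(a, 0.5736 * a, 0.6389 * a))     # ~0.385, Table II value for S_a
print(m12(a, 0.6561 * a, 0.5628 * a))     # ~0.520, Table II value for E_a

# Total inductance L = L1 + L2 + 2*M12 for the S_a probe (wire diameter 1.63 mm)
rho = 1.63e-3 / 2
L_Sa = (self_inductance(a, rho) + self_inductance(0.5736 * a, rho)
        + 2 * mu0 * np.sqrt(a * 0.5736 * a) * m12(a, 0.5736 * a, 0.6389 * a))
print(L_Sa * 1e9)                         # ~78 nH, close to the 77.6 nH of Table III
```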
The interest of these calculations lies, among other things, in a first evaluation of the tuning capacitor C_A, placed in parallel with the two loops in a serial configuration:
C_A = 1 / (L ω_0^2)   (16)

where ω_0 is the NMR measurement frequency expressed in rad s^-1. Of course, this value has to be slightly modified according to i) the impedance matching conditions and ii) the parasitic capacitances inevitably involved in the actual circuitry.
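As a numerical illustration (not from the original), applying Eq. (16) to the E_a prototype of Table III (L ≈ 99.5 nH) at the proton frequency of 100.3 MHz gives

$$C_A=\frac{1}{L\,\omega_0^{2}}=\frac{1}{99.5\times10^{-9}\times\left(2\pi\times100.3\times10^{6}\right)^{2}}\approx 25\ \mathrm{pF},$$

consistent with the value C_S = 2 C_A ≈ 50.7 pF reported in Table III, before the matching and parasitic corrections mentioned above.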
MATERIAL AND METHODS
Gradient coil prototypes
Owing to the available space and to the actual location of the sample (in the zone of gradient uniformity), our best two-loop arrangement (E a ) could be easily tested with a horizontal superconducting magnet with a usable aperture of 20 cm diameter (Bruker Biospec miniimager 2.34 T, operating at the proton NMR frequency of 100.3 MHz). The main loop radius a is equal to 1.3 cm. The gradient coil assembly is complemented by two Helmholtz-type coils, perpendicular to the gradient coils (thus magnetically decoupled from the gradient coils). These Helmholtz coils are used for the NMR signal detection and possibly for transmitting pulses of homogeneous rf field (in the following this type of coil will be dubbed "receive coil"). All coils are machined with a copper wire of 1mm diameter; they must be perpendicular to the static magnetic field (B 0 ) direction. The sample is a 7 mm o.d. tube containing the substance under investigation and is placed vertically. It should be noted that the Helmholtz-type coils and, evidently, the sample are partly inside the two gradient coils so as to fully benefit from the gradient uniformity zone (see Fig. 5). Finally, for the sake of comparison, we have also built an antenna of the SL a type with a single gradient coil identical to the main loop of E a (denoted in the following by SL Ea ).
Standard NMR spectrometers are generally equipped with a vertical superconducting magnet.
The sample (generally a 5 mm o.d. NMR tube) is positioned vertically in the probe by means of a pneumatic device, which also serves to remove the sample. This means that the sample cannot be placed inside the gradient coil system, as was the case for the E_a configuration.
For this reason, we turned to the S a configuration (see Fig. 5) which is however less efficient in terms of gradient strength and gradient uniformity, although it provides an enlarged gradient uniformity zone and an improved gradient strength with respect to the arrangement involving a single loop (see Figs. 2 and3). Again, for accommodating sample tubes of 5 mm o.d., the main loop has a radius of 1.3 cm but the copper wire diameter has been slightly increased (1.63 mm) in order to improve the quality factor of the gradient system. Due to the necessity of having rf coils perpendicular to the static magnetic field (B 0 ), the receive coil (used as well for producing homogeneous rf pulses) is of the saddle-shaped type. This is shown on the photograph of Fig. 6 (wide-bore vertical 4.7 T magnet: aperture of 89 mm).
Note that here, it is possible to install a temperature regulation device. As this was done for the E a arrangement, the homologous configuration with a single loop has also been tested
(denoted SL Sa in the following).
Tuning and matching
The main problem is the possible leakage between the gradient coils and the receive coil (Helmholtz-type for the horizontal magnet, saddle-shaped for the vertical magnet). In spite of their perpendicularity, residual couplings could appear if these coils are not part of a balanced circuit. Although an inductive coupling would be a straightforward solution [Decorps][16], due to space limitations we had recourse instead to a capacitive coupling using a balanced tuning and matching network. The corresponding circuit is schematized in Fig. 7 and follows well-known principles [Mispelter, NMR Probeheads for Biophysical and Biomedical Experiments - Theoretical Principles and Practical Guidelines]. The coil(s) is (are) assumed to possess a resistance r. The capacitors C_T and C_M contribute mainly to tuning and matching, respectively, while the capacitor C_S leads to the electrical balancing of the probe coil. It compensates, to a first approximation, half the coil impedance
C_S = 2 C_A = 1 / [ (L/2) ω_0^2 ]   (17)

ensuring that the voltages at the two extremities of the coil assembly are opposite in sign. Table III reports the theoretical value of the global inductance L for all the arrangements considered here, along with the values of the different capacitors used for the considered proton Larmor frequency and yielding a quality factor Q in the range 100-300.
Capacitors must be non-magnetic and capable of handling high voltages, because of the usual high intensity of rf pulses, implying power amplifiers with an output of 300 W or even 1 kW.
The capacitors used in this work are of commercial origin: fixed non-magnetic capacitors (100E series, American Technical Ceramics, Huntington Station, NY) for adjustment and for C_S, and 0.8-10 pF variable non-magnetic capacitors (RP series, Polyflon, Norwalk, CT) for fine tuning and matching.
Different tests have been performed outside the NMR magnet in view of determining the Q factor and the possible influence of the liquid contained in the sample tube (load). The load does not lead to significant modifications while tests repeated inside the magnet show that the latter produces negligible effects. The Q values for all the coils devised for this work are reported in Table IV.
From a practical point of view, in spite of the electrical balancing of the gradient coil and the NMR receiving coil, it remains nevertheless a residual coupling between the coils because of their construction and positioning. However, the isolation between the two rf coils can be improved by adjusting finely their relative position (in principle, without calling into question their orthogonality). For all the tested probes, the isolation factor between the gradient coil and the receiving coil was at least 35 dB, and we have not observed any (unwanted) rf power transmitted to the amplifiers. Overall, no particular measure (e.g., active decoupling using pin diodes) was required.
NMR EXPERIMENTAL VERIFICATIONS
An especially severe experiment for assessing the quality of an rf gradient is the measurement of a self-diffusion coefficient 3. The experiment itself is very simple: it starts with a first gradient pulse (of duration δ) which defocuses the nuclear magnetization (or rather achieves a spatial labeling); then comes an interval Δ (with Δ >> δ), generally called the diffusion interval, and the sequence ends with a second gradient pulse, identical to the first, which refocuses half of the nuclear magnetization provided that the molecules bearing the nuclear spins have not moved during Δ. If diffusion occurred, the nuclear magnetization decreases, so that the measured NMR signal (as obtained after a standard 90° observing pulse produced by the receive coil) is of the form

S(δ, g) ≈ exp(-γ^2 g^2 δ^2 D Δ)   (18)
where γ is the gyromagnetic ratio of the considered nucleus, g is the gradient strength and D the self-diffusion coefficient. We shall assume that the gradient pulses are perfectly identical, meaning that the interval Δ is sufficiently long to allow the amplifier to fully recover. Moreover, we assume that the product δ × g is sufficient to produce a significant decay. The experiments discussed below have been performed by varying (incrementing) δ. Observing a decay which does not perfectly follow Eq. (18) indicates exclusively gradient non-uniformity. Of course, these discrepancies will manifest themselves primarily for large values of the product δ × g. Moreover, if the gradient is not perfectly uniform, the first gradient pulse will defocus the nuclear magnetization imperfectly and, if Δ is relatively short, the effect of these imperfections does not vanish through relaxation phenomena. This can be a more indirect consequence of gradient non-uniformity.
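To make the procedure concrete, the sketch below (not from the paper; the gradient strength, diffusion interval and noise level are hypothetical) simulates such a decay for a water-like sample and recovers D by a least-squares fit of Eq. (18):

```python
import numpy as np
from scipy.optimize import curve_fit

gamma = 2.675e8          # 1H gyromagnetic ratio (rad s^-1 T^-1)
g, Delta = 0.1, 0.2      # hypothetical gradient strength (T/m) and diffusion interval (s)

def decay(delta, D):
    """Eq. (18): normalized signal versus gradient pulse duration delta."""
    return np.exp(-gamma**2 * g**2 * delta**2 * D * Delta)

delta = np.linspace(0.0, 4e-3, 25)                   # incremented pulse durations (s)
signal = decay(delta, 2.3e-9) * (1 + 0.01 * np.random.randn(delta.size))  # synthetic data
popt, _ = curve_fit(decay, delta, signal, p0=[1e-9])
print(f"fitted D = {popt[0]:.2e} m^2/s")             # close to the simulated 2.3e-9 m^2/s
```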
Results shown in Fig. 8 demonstrate clearly the advantages of the arrangement E a over a simple loop identical to the main loop of E a (SL Ea ). Not only has the gradient strength increased as evidenced by a faster decay, but also the gradient uniformity is perfect for E a and poor for the single loop. This can be appreciated, in the E a case, by the coincidence of experimental data points with the theoretical curve.
The last example (Fig. 9) demonstrates the performances of the arrangement S_a, which is appropriate for a vertical NMR magnet. It is interesting to notice that the diffusion coefficient of octanol (1.3 × 10^-6 cm^2 s^-1, 20 times smaller than that of water) can be properly measured with a relatively weak gradient and a probe which has not been (for practical reasons) completely optimized (junction wires could have been shortened to minimize losses and improve the signal-to-noise ratio of the coil systems). Nevertheless, the good agreement of the experimental data with the theoretical curve demonstrates again the improvement of gradient uniformity with an asymmetric two-loop system. Finally, it appears that, because its size is sufficiently large with respect to the rf gradient coil dimensions, the metallic shield installed around the S_a probe does not significantly disturb the rf gradient field.
CONCLUSION
It has been demonstrated here that, concerning gradients of the NMR rf field (B 1 gradients), it was possible to go well beyond the performances of a single loop which, for years, seemed to be an acceptable compromise. This is a first attempt to use a multi-loop system in order to improve both gradient strength and gradient uniformity. In this work, we tried to obtain the best performances from a simple system of two asymmetric loops. Thanks to an appropriate theoretical approach, we were able to predict, for the more efficient arrangement, that the gradient strength is roughly twice the one which can be obtained with a single loop (for the same current intensity) while the gradient uniformity is multiplied by a factor of three (zone where the axial field gradient does not vary by more than ±1%). However, this implies a single-sided configuration with the object under investigation located inside the loop system.
For the usual applications of NMR, this configuration is not suitable due to the size of standard NMR tubes and to the way they are manipulated. We were nevertheless able to devise another two-loop arrangement, still single-sided, but such that the uniformity zone can accommodate a standard NMR tube, at the expense of slightly degraded performances. Adding more loops would probably lift this limitation; work on this issue is presently underway.
ACKNOWLEDGMENTS
This work is part of the ANR project "Instrumentation in Magnetic Resonance" (Grant Blan06-2_139020). We thank J.F. Pautex for assistance in the preparation of the figures for this paper.
FIG. 5 (caption, end): (see Fig. 1). Note the increase of the gradient uniformity zone when going from SL_a to S_a and to E_a. Both axes have the same scale (relative to the radius of the main loop).
Tables
FIG. 3. Normalized gradient profile (with respect to the value at the origin O) along the axial direction.
FIG. 7. Schematic diagram of the (balanced) tuning and matching network.
TABLE I .
I. Dimensions (relative to the radius a of the main loop) of the SL_a, S_a and E_a gradient coils.
Coil
SL a S a E a
Main loop Radius 1 1 1
Distance from O 0.5 0.2058 0.0402
Secondary Radius --- 0.5736 0.6561
loop Distance from O --- 0.8446 0.6030
TABLE II .
II. Inter-loop distance d (relative to a, the radius of the main coil) and values of the dimensionless factor m_12 in the expression of the mutual inductance M_12 [see Eqs. (12) and (13)] for the arrangements S_a and E_a.
Coil
S a E a
d / a 0.6389 0.5628
m 12 0.3852 0.5201
TABLE III .
III Theoretical values for the gradient coil prototypes: the total inductance L, the capacitor C A for an isolated system, and the capacitors (C S , C T , C M ) in the case of a capacitive coupling (see Fig.7) and a quality factor Q lying in the 100-300 range.
Circuit Two-loop coil One-loop coil
elements E a S a SL Ea SL Sa
L (nH) 99.5 77.6 54.5 46.5
C S =2 C A (pF) 50.7 16.5 92.5 27.5
C T (pF) 45.1-47.4 14.2-15.2 85.0-88.1 24.6-25.8
C M (pF) 5.76-3.29 2.32-1.33 7.90-4.47 3.02-1.72
TABLE IV .
IV. Experimental values of the quality factor Q of all the prototypes of the rf coils employed in this work (measured outside the magnet).
Figure captions
FIG. 1. (a) Notations for characterizing the location and the geometry of a single loop (SL in the text). (b) Assembly of two loops on a spherical surface (S). (c) Assembly of two loops on an ellipsoidal surface (E).
FIG. 2. Computed normalized B_1 axial field profiles for the coils SL_a, S_a and E_a (from bottom to top). Normalization is based on the value at the center of the reference coil SL_a. The horizontal scale represents a relative distance (with respect to the radius of the main coil).
Two-loop coil One-loop coil Receive coil
E a S a SL Ea SL Sa Helmholtz-type Saddle-type
Load water octanol water octanol water octanol
Tube outer diameter (mm) 7 5 7 5 7 5
Quality factor Q 180 145 152 111 138 155 | 29,221 | [
"1212619",
"12647",
"760035"
] | [
"24254",
"441232",
"129683",
"24254",
"129683"
] |
01482250 | en | [
"info"
] | 2024/03/04 23:41:48 | 2016 | https://hal.science/hal-01482250/file/Data%20schema%20does%20matter%2C%20even%20in%20NoSQL%20systems%21.pdf | Paola Gómez
Rubby Casallas
email: rcasalla@uniandes.edu.co
Claudia Roncancio
email: claudia.roncancio@imag.fr
Data Schema Does Matter, Even in NoSQL Systems!
A Schema-less NoSQL system refers to solutions where users do not declare a database schema and, in fact, its management is moved to the application code. This paper presents a study that allows us to evaluate, to some extent, the data structuring impact. The decision of how to structure data in semi-structured databases has an enormous impact on data size, query performance and readability of the code, which influences software debugging and maintainability. This paper presents an experiment performed using MongoDB along with several alternatives of data structuring and a set of queries having increasing complexity. This paper introduces an analysis regarding the findings of such an experiment.
I. INTRODUCTION
The data management landscape has become extremely rich and complex. The large variety of requirements in current information systems has led to the emergence of many heterogeneous data management solutions 1. Their strengths are varied. Among the most outstanding are data modeling flexibility, scalability and high performance in cost-effective ways. It is becoming usual that an organisation manages structured, semi-structured and non-structured data: relational as well as NoSQL systems are used [Sadalage, NoSQL distilled: a brief guide to the emerging world of polyglot persistence]. NoSQL systems refer to a variety of solutions that use non-relational data models and "schema-free databases" [Leavitt, Will NoSQL databases live up to their promise?]. NoSQL systems are commonly classified into column-oriented, key-value, graph and document-oriented systems. There is no standard data model and few formalization efforts. Such models combine concepts used in the past by non-first-normal-form relational models, complex values and object-oriented systems [Han, Survey on NoSQL database]. Concerning transactional support and consistency enforcement, NoSQL solutions are also heterogeneous. They range from full support to almost nothing, which implies delegating the responsibility to the application layer. Moreover, the absence of powerful declarative query languages in several NoSQL systems increases the responsibility of the developers.
Data modelling has an impact on querying performance, consistency, usability, software debugging and maintainability, and many other aspects [Kanade, A study of normalization and embedding in MongoDB]. What would be good data structuring in NoSQL systems? What is the price to pay by developers and users for flexible data structuring? Is the impact of data structuring on querying complexity high? There are of course no definitive and complete answers to these questions. (1 Great map on https://451research.com/state-of-the-database-landscape) This paper presents an analysis of the impact of data structuring on data size and, for representative queries, on performance. It compares structuring alternatives through an experiment using different schemes, which vary mainly in their embedded structure, and many queries with different access patterns and increasing complexity. During the experiment we also analyse the impact of using indexes on the collections. Furthermore, we discuss the results, some of which are intuitively expected while others point out unexpected aspects. We performed the experiment and analysis using MongoDB, a document-oriented database, which is a popular open-source system to store semi-structured data. It is flexible, as it allows many data structuring alternatives, and it does not require an explicit database schema definition, nor are integrity constraints enforced.
The rest of the paper is organized as follows. Section II provides a background on MongoDB. In Section III, we define different representative "document schemes" for the same data. They are used in the experience presented in Section IV. We performed a systematic evaluation that considered usual access patterns with increasing complexity. In Section V, we discuss the results and how the experiences confirm or not the expected impact of the document scheme choices. Related works are briefly described in Section VI. Our conclusions and research perspectives are presented in Section VII.
II. DOCUMENT DB BACKGROUND
In this paper, we focus on the use of the MongoDB document db to store semi-structured data. Its "data model" is based on JSON 2 . Therefore, the supported data types are also largely used by other systems [START_REF] Alsubaiee | Asterix: scalable warehouse-style web data integration[END_REF]. In Section II-A, we provide an overview of Mongo's data model.
Concerning system aspects, MongoDB supports BSON serialization, indexes, map/reduce operations, master-slaves replication and data sharding. These features contribute to provide horizontal scalability and high availability. The analysis of all these features is out the scope of this paper. We will mainly focus on query access patterns and the impact of the data structures on their performances. In Section II-B, we introduce Mongo's query capabilities. MongoDB does not support an explicit data base schema that has to be created in advance. It provides data modelling flexibility, as users can create collections including BSON documents that have the same or different structures. A document is a structure composed by a set of field:value pairs. Any document has the _id identifier, which value is either automatically assigned by the system or explicitly given by the user, such as in our example. The type system includes atomic types (string, int, double, boolean), documents and arrays of atomic values or documents. Integrity constraints are not supported.
Note that Mongo's type system supports two ways to relate documents: embedding or referencing. The first one allows one or many documents to be embedded within another document. The second one refers to documents using one or several fields. Figure 1 proposes two choices to represent the one-to-many relationship between employees and departments: employees can be "completely" embedded in the Departments@S2 collection, or each employee document in Employees@S1 can reference the corresponding department using the field dept.
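To make the two options concrete, here is a minimal sketch (field names are illustrative, not taken from the paper) of the same employee/department data stored by referencing (as in S1) and by embedding (as in S2):

```python
# Referencing (S1-style): two flat collections, the employee points to its department
employee_s1 = {"_id": "e1", "name": "Ada", "salary": 1003, "dept": "d1"}
department_s1 = {"_id": "d1", "name": "R&D", "company": "c1"}

# Embedding (S2-style): the department document carries its employees as an array
department_s2 = {
    "_id": "d1",
    "name": "R&D",
    "company": "c1",          # reference to the company (cardinality 1)
    "employees": [            # embedded documents (cardinality N)
        {"_id": "e1", "name": "Ada", "salary": 1003},
    ],
}
```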
This type system opens the door to many "modeling" possibilities that sometimes are compared to the normalized and de-normalized alternatives of the relational data model [START_REF] Kanade | A study of normalization and embedding in mongodb[END_REF]. As will be evident in the following sections, even if NoSQL trends are prone to using schema-less data bases, there is an "implicit" schema and its choice is important.
B. Query language
MongoDB 3 provides a document-based query language to retrieve data. A query is always applied on a concrete document collection. Filters, projections, selections and other operators can be used to retrieve particular information for each document. MongoDB provides many operators in order to compare data, find out the existence of a particular field or find elements within arrays. 3 Version 3.1.9
The implementation of a query depends on the data structure and can be performed in many different ways. The complexity of a query program is related to the number of collections involved and to the embedding level where the required data is located.
The performance and the readability of the query programs rely on the application developer's skills and knowledge.
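As an illustration of this query style (a hedged sketch using the pymongo driver; database, collection and field names are assumptions), a selection combined with a projection looks like:

```python
from pymongo import MongoClient

db = MongoClient()["company"]

# Employees of department "d1" earning more than 1003, keeping only name and salary
cursor = db.employees.find(
    {"dept": "d1", "salary": {"$gt": 1003}},
    {"_id": 0, "name": 1, "salary": 1},
)
for doc in cursor:
    print(doc)
```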
III. DATA STRUCTURING ALTERNATIVES
This section explores the data structuring alternatives that are possible in MongoDB, in order to analyse their advantages and possible drawbacks according to the application needs. We used a simple example called the Company, which was already mentioned. Figure 1 shows the entity-relationship model that involves Company, Department and Employee. Relationships are 1..* with the usual semantics. A company is organized in one or several departments. A department is part of exactly one company and has several employees. Each employee works for a single department. This example does not consider entities with several relationships nor many to many relationships.
We created a set of Mongo databases for the Company case. As there is no database schema definition in Mongo, a database is a set of data collections that contains documents according to different data structuring. As embedding documents and sets of documents are particular features, we worked with six document schemes (named S1 up to S6), and mixed those choices thus leading to different access levels. Figure 2 illustrates the six schemes. A circle represents a collection whereas circles with several lines correspond to several sets of documents embedded into several documents. Next, we introduce the data structuring alternatives.
Separate collections & no document embedding: Document schemes S1 and S4 create a separate collection for each entity type. Each document has a field expected to be an identifier. Such identifiers are used for the references among documents to represent the relationships 4 . In S1, the references are in the collections of the entity participating with cardinality 1 to the relationships. In S4, the references are in a separated collection CDES. No documents are embedded.
Full embedded single collection: in S3 and S5, all relationships are materialized with embedded documents. In the Company case, the traversal of the relationships leads to one single collection embedding documents up to level 3. In S3, company documents are at the first level of the collection. Each document embeds n department documents (level 2) which in turn embed n employee documents (level 3). The embedding choice in S3 reflects the 1 to many direction of the relationship. In S5, the rational is similar, but it uses the many to 1 direction. Employee documents are at level 1 of the collection whereas company documents are embedded at level 3. This choice introduces redundancy of the embedded documents in a collection (e.g. company and department).
Embedding & referencing: S2 and S6 combine collections with embedded documents and with references. In S2, Department is a collection and its relationship with employee is treated by embedding the employee documents (i.e. array of documents) into the corresponding department document. This represents the 1 to many direction of the relationship. The relationship between Department and Company is treated by referencing the company in each department through the company id (i.e. atomic field as the cardinality is 1). Companies is an independent collection.
Embedding & replicating: S6 has been created with the approach presented in [START_REF] Zhao | Schema conversion model of sql database to nosql[END_REF]. It uses the many to 1 direction of the relationships to embed the documents (as in S5) but it also replicates all the documents so as to have a "first level" collection for each entity type. Figure 2 shows the three collections of S6, where for instance, Company exists as 1) a separate collection, 2) as embedded documents in the Departments collection and 3) embedded at the third level in the Employees collection. There is also redundancy for the embedded documents, as was previously mentioned for S5.
IV. EXPERIMENT
This section is devoted to the experiment conducted with the data structuring alternatives introduced in Section III. Section IV-A presents the experiment setup. In Section IV-B, we discuss memory requirements regarding the data structuring choices. Section IV-C presents the queries implemented for different document schemes. For our study, we created six Mongo databases using the schemes S1 to S6. Each of them has been populated with the same data: 10 companies, 50 departments per company and 1000 employees per department. Data is consistent.
We implemented the queries introduced in Section IV-C for all the databases. In some cases, the implementation of a query differs a lot from one schema to another. For each db, each query has been executed 31 times with indexes, and 31 times without indexes. The first execution of each sequence was separated in order to avoid load in memory effects. The experiment was run on a workstation (Intel Core i7 Processor with 1.8 GHz and 8GB of RAM).
B. Data size evaluation
Figures 3a shows the size of each database and figure 3b shows the size of each collection. Note the large size of S5 and S6 databases with respect to the other options. The size of S5 and S6 are mainly dominated by collection Employee which has a fully embedded structure following the many-toone relationships. Company and department documents are replicated in Employees. S6 is a little larger than S5, because of collections Companies and Departments which are extra copies in S6.
S4 and S1 do not embed documents. The size of S4 is twice the size of S1 because of the size of the cdes collection (representing the relationships) which is meaningful in our set-up.
The size of embedded collections tends to grow even if they do not contain any replication (see S2 and S3 with respect to S1): for instance, size(Departments@S2) > size(Employees@S1) + size(Departments@S1). With respect to S1, both S2 and S3 have more complex structures, with arrays of embedded documents at different levels.
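These sizes can be read directly from MongoDB with the collStats command; a sketch, reusing the db handle from the previous snippet (collection names are assumptions):

```python
for coll in ("companies", "departments", "employees"):
    stats = db.command("collstats", coll)
    print(coll, stats["count"], stats["size"], stats["storageSize"])  # documents, data size, size on disk
```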
C. Queries
To analyse the impact of the data structuring, we established a set of queries Q to be executed and evaluated using each of the schemes. Q includes queries with increasing complexity: a selection with a simple equality/inequality predicate on a single entity type (see Q1, Q2, Q6), queries requiring partitioning (see Q4), queries involving several entity types (see Q5, Q7) and aggregate functions (see Q3).
[Q1] Employees with a salary equal to $1003.
[Q2] Employees with a salary higher than $1003.
[Q3] Employees with the highest salary.
[Q4] Employees with the highest salary per company and the company id.
[Q5] Employees with the highest salary per company and the company name.
[Q6] The highest salary.
[Q7] Information about the companies including the name of their departments.
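To illustrate how the same query changes with the schema (a sketch with assumed field names, not the paper's actual code), Q1 is a one-line find on the flat S1 collection but needs an aggregation that unwinds two embedding levels on S3:

```python
from pymongo import MongoClient

db = MongoClient()["company"]

# Q1 on S1: employees form a flat collection
q1_s1 = db.employees.find({"salary": 1003})

# Q1 on S3: employees are embedded at level 3 (company -> departments -> employees)
q1_s3 = db.companies.aggregate([
    {"$unwind": "$departments"},
    {"$unwind": "$departments.employees"},
    {"$match": {"departments.employees.salary": 1003}},
    {"$project": {"_id": 0, "employee": "$departments.employees"}},
])
```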
These queries have been used to evaluate the performance in each scheme.

In this section, we analyse the results of the experiment from the point of view of data structure and query performance (Section V-A). In Section V-B, we focus on the impact of indexing. In Section V-C, we propose a discussion around the facts that were confirmed or uncovered after performing this experiment.
A. Databases without indexes
As was previously mentioned, the experiment includes the execution of queries Q1 to Q7 on the 6 databases -S1 to S6 -. In order to ease the analysis, Figure 4 shows the median execution time for each query in the corresponding schema. In addition, Figure 4b depicts the relative performance for each query with respect to all schemes. There is a line per query. Schemes are ordered from left to right, starting with the one where performances are the best. For example, for Q7, the best performance was obtained in S3 and the worst in S5. For Q2, the performances in S1, S4, S5 and S6 are very close and clearly better than the performance in S3.
Let's first focus on schemes with full embedded single collections. For such schemes, performance is very good when the query access pattern is in the same direction of the embedded structure. This can be appreciated in S3 and S5 for queries Q4, Q5 and Q7. When Q4 and Q5 search for data of employees by company, S5 is not good because the data is dispersed at levels 1 and 3, respectively. This implies traversing down and up several times into the nested structure and discarding the unnecessary information involved in the intermediate level.
We will use the term intrajoins to refer to the process of traversing down and up nested structures in order to find and relate data.
The case is different in S3. Here, employees are at level 3 and companies at level 1, thus matching with the access pattern of Q7. This implies dealing with an intermediate level as in S5; however, in S3, it is not necessary to go up and down between levels. Therefore, S3 performs the best.
A similar behaviour occurs in Q7, which requires company data and the names of its departments. S5 has the required data at levels 3 and 2 respectively -in an inverse orderand they are not grouped. This implies crossing through each document at level 1, discarding useless information, and moving down to levels 2 and 3. All this to group the data located at level 2 based on the data at level 3. The worst performance of Q7 appears in S5.
Regarding S3, the access pattern of Q7 matches its embedding structure perfectly. Here the companies' data are at level 1 and the departments' data are grouped at level 2. S3 performs best again. However, this is not true for all the queries. Q3 on S3 has the worst performance because the required data are nested at level 3.
In order to favor certain access, several copies of the same data can be created using collections with different structures, as is done in S6. S6 extends S5 by including additional collections such as Departments, where company is embedded. This collection corresponds exactly to the data required by Q7. Q7 does not use employees. In this case, even if intrajoins are necessary, and the query access pattern is in the other direction, Q7 performs much better in S6 than in S5. The main reason is that the required data are at levels 1 and 2 in S6, thus avoiding any intermediate or superior levels. In S5, the employees have to be traversed even if their data is not useful for Q7.
When considering schemes with separate collections without document embedding such as S1 and S4, it is evident that they perform the best for queries Q1, Q2 and Q3. This is explained by the fact that the read-set of these queries perfectly matches a collection. No useless data are read, nor complex structures need to be manipulated. This is not the case for Q4 and Q5, which perform badly in S1 and S4. In both cases, the queries require data concerning employees and companies -or their relationship-, which are stored in separate collections -i.e., Cdes@S4 and Departments@S1-. Therefore in S1 and S4, the query evaluation requires join and grouping operations, whereas in S3, the Company collection already reflects that.
Q7 is also very inefficient in S4. The overhead is due to the search of the departments-company relationship in the Cdes collection.
It should be stressed that the hypothesis on data consistency is important for query implementation. For example, for Q7 in S5 and S6, this hypothesis allows the extraction of the name of the first department we found without scanning the whole extension of department.
B. Indexes impact
We complete our study by creating indexes in all databases and analyzing the benefit on the performances of Q1 up to Q7. We created indexes on the identifiers (e.g., company id) and on the salary. Figures 5a and 5b provide a synthesis of the results. Figure 6 shows the speedup obtained using the index with respect to the setup without index.
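Such indexes can be created as follows (sketch; field paths are assumptions, and note that an index on a field inside an embedded array is a multikey index):

```python
from pymongo import MongoClient

db = MongoClient()["company"]
db.employees.create_index("salary")                          # flat schemes (S1, S4)
db.companies.create_index("departments.employees.salary")    # fully embedded scheme (S3)
db.departments.create_index("company")                       # reference field (S1/S2)
```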
As expected, indexes improve performances. In particular, the improvement for Q4 and Q5 is high in all schemes, and reaches 96% in S1 (see Figure 6). Some schemes, such as S1 and S5, which perform the worst without indexes, perform very well with indexes (see Figure 5b). Interestingly, the benefit of the indexes in S3, is less important than for the other schemes. S3 is relegated to the last performance position when compared to the other schemes.
Furthermore, indexes improve the performances of Q7 in all schemes. Nevertheless, the improvement obtained in S4 and S5 is not enough to overtake S3. This shows that even if indexes are efficient, there are cases where they cannot compensate for an unsuitable schema.
C. Discussion
As was previously discussed, the experiment allowed us to confirm some of the intuitive ideas on data modeling with MongoDB, but it also pointed out unexpected aspects. These aspects concern the performances but also the readability of the code used for implementing the queries.
The embedding level of the data has an impact on performances. Accessing data at the first level of a collection is faster and easier than accessing data in deeper levels.
Querying data stored at different embedding levels in a collection may require complex manipulations. For example, when the structure embeds arrays of documents, the algorithms to manage them are similar to intra-collection joins. They affect performances, but also require more elaborate programming.
The structure of the results impacts performance. This point may be an issue when working with complex data. The structure of the selected data is pre-formatted by the structure of the queried collection. For example, when extracting a field A appearing in documents embedded at level k, the result will maintain the embedded structure if no restructuring operations are performed. This means that the answer may have useless embedding levels and unrequested information issued from any of the k-1 levels that have been traversed to access field A. Changing such a structure to provide data in another format requires extra processing. The cost of this extra processing should not be neglected.
Concerning storage requirements, our experiment revealed that using collections with embedded documents tends to require more storage than using separate "flat" collections and references for the same data.
VI. RELATED WORK
Many works focus on automatically transforming a relational schema into a data model supported by a NoSQL system [6] [7]. Some works compare performance, scalability and other properties between relational and NoSQL systems [Cattell, Scalable SQL and NoSQL data stores] [9] [Boicea, MongoDB vs Oracle - database comparison]. Others, such as [Wang, Schema management for document stores] [12], introduce schema management on top of schema-less document systems.
Zhao et al. in [Zhao, Schema conversion model of SQL database to NoSQL] present a systematic schema conversion that follows a de-normalization approach where: 1) each table is a new collection, 2) each foreign key in a table is transformed into an embedded structure where the keys are replaced by the actual data values. They pre-calculate some natural joins with embedded schemes to improve the query performance with respect to a relational DBMS. Additionally, the paper shows that, even though query performance for some "joins" is better, space performance is worse due to data replication. In our experiment, schema S6 corresponds to Zhao's strategy. According to our results, if the queries follow the embedded structure, performance is better than in other schemes; in the other cases performance is poorer.
Lombardo et al. in [Lombardo, Issues in handling complex data structures with NoSQL databases] propose a framework to determine the key-value tables that are most suited to optimize the execution of the queries. From an entity-relationship diagram and the most used queries, they propose to create redundant tables to improve performance (they do not discuss the cost in storage). The goal of their framework is similar to ours: to help NoSQL developers choose the most suitable data structuring according to the needs of the application. The approach is different because we intend to create several schemes and evaluate them before making a final decision.
Mior in [Mior, Automated schema design for NoSQL databases] follows an approach similar to ours but focuses on Cassandra. He proposes a static analysis of the storage impact, comparing a set of queries in different schemes. We compare and analyse based on experimentation.
VII. CONCLUSION AND FUTURE WORK
This paper reports a study on the impact of data structuring in document-based systems such as MongoDB. This system is scheme-less, as are several NoSQL systems. We worked with several data structuring schemes in order to evaluate size performance, and with several queries in order to analyse the execution time with respect to the schemes. The experiments demonstrated that data structure choices do matter. Collections with embedded documents have a positive impact on queries that follow the embedding order. However, there is no benefit -or there is the possibility of bad performance-for queries accessing the data in another embedding order or requiring to access data embedded at different levels in the same collection. The reason for the latter is that the required manipulations are of similar complexity than that of joins of several collections. Also, collections with embedded documents -even those without replication-tend to require more storage than the same data represented on separate collections.
Future work includes extending our study to cover more complex data scenarios. For instance, relationships with 0 and N..M cardinality and several relationships for an entity type. Considering the semi-structured data model, it would also be interesting to explore more modeling alternatives. For example, the use of fragmented documents and partial replication. Our long-term objective is to provide developers with a design assistant tool to help them solve trade-offs.
Fig. 1 :
1 Fig. 1: Embedded and Referenced documents
Fig. 2 :
2 Fig. 2: Representative set of document schemes
Fig. 3: Data size
Fig. 4 :
4 Fig. 4: Synthesis of executions
Fig. 5 :
5 Fig. 5: Synthesis of executions using indexes
Fig. 6 :
6 Fig. 6: Impact index
https://www.mongodb.com/json-and-bson
This is similar to normalized relational schemes with primary and foreign keys
Data of full results are provided in: https://undertracks.imag.fr/php/studies/study.php/data schemes in nosql systems
ACKNOWLEDGMENT
The authors would like to thank N. Mandran, C. Labbé and C. Vega for their helpful comments on this work. | 26,140 | [
"174217",
"16964"
] | [
"1041987",
"129862",
"1041987"
] |
01371951 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2014 | https://hal.science/hal-01371951/file/LARRIEU2014ApprehendingWorksOfComputerMusic.pdf | de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
Apprehending works of computer music through a representation of the code
Maxence Larrieu, Université Paris-Est, LISAA, UPEM, France
Theoretical proposition
I believe that a representation of the code can be helpful for the analysis.
Here are the main points of the desired representation :
• Independent of programming languages: it must be applicable in all cases
• "Above" the programming languages, i.e. at a higher abstraction level
• Readable: not all the code has to be included, but only its salient elements
• Personal: the representation is built by the analyst
• Dynamic: the representation must evolve at the same time as the music

Most of these points can be illustrated with the Graphic User Interfaces (GUI) often present in the code of works. Mainly, GUIs are used to give a global view of the code so that the user can easily control the computation. In the same way, the representation I need must be built above the code to facilitate understanding.
Practical proposition
I have developed a piece of software that allows this representation to be built. The implementation is done with Processing, a programming language dedicated to graphic rendering. To build the representation, the user has to identify the salient elements of the code, construct OSC messages, and send them to the software.
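As a rough illustration of this workflow, the Processing sketch below (written in Processing's Java syntax) listens for such OSC messages and draws one tracked element. It is only a sketch of the idea, not the actual tool: the oscP5/netP5 libraries, the port number and the address pattern are assumptions of ours.

import oscP5.*;
import netP5.*;

OscP5 osc;
float grainDensity = 0;   // hypothetical salient element reported by the music code

void setup() {
  size(400, 200);
  osc = new OscP5(this, 12000);   // listen for incoming OSC messages on port 12000
}

void draw() {
  background(0);
  fill(200);
  // Render the current value of the tracked element as a simple bar.
  rect(20, 80, grainDensity * 360, 40);
}

// Called by oscP5 whenever an OSC message arrives from the music code.
void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/representation/grainDensity")) {
    grainDensity = m.get(0).floatValue();
  }
}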
Introduction
At the end of the 20th century Jean-Claude Risset highlighted a novel aspect of computer music composition: sound production through calculation (Risset 1999). Pursuing this line of thought, I view composition as a process that defines the different calculations needed to produce music. Since these calculations are implemented in programming languages, I name them 'code'.
Code is rich
First, I notice that the relation between the code and the work is new in music: the same code makes it possible to compose, to actualize and to reconstruct the work. This omnipresence of the code reveals its richness: it contains a lot of information. Thus, as Matteo Meneghini showed in 2006 with Stria, a study based on this code can be helpful for the analysis.
Main difficulties
Nevertheless, this approach is not yet widespread in musical analysis. This can be understood by considering the main difficulties of studying the code:
• Code is hermetic: it is dedicated to the computer, not to humans
• Code is plural: several programming languages can be used in a single piece
• Codes of works are heterogeneous: many languages are used in computer music
• Languages are not perennial: they evolve with technological obsolescence (a corollary of Moore's law)
Interactivity
I dedicated these representations to "interactive works" (i.e. works where an external phenomenon is acquired and linked to the computation). Thus, the representation includes three layers: the inputs, the connections and the computation. I chose to work on interactive pieces because the representations of their code are more meaningful.
Example
Conclusion
I introduce a new way of using the code of works in an analytical approach. By identifying and representing the evolution of the most salient elements of the code, we would like to offer another way of apprehending electroacoustic and computer music works.
This research also leads us to question the relation between the work and its code, which brings us to the ontology of computer music works.
Capture of a representation given by Dodécalite Intérieur, Jean Michel Bossini, 2009, for vibraphone and live electronics
Illustration of the device (poster, SysMus 2014): the sound computing environment (PureData, Max, Csound, OpenMusic, etc.) produces the music and sends OSC messages to the graphic rendering built with Processing, which displays the representation. Maxence Larrieu, Université Paris-Est, LISAA laboratory.
Summary
My research is based on the code that exists in computer music works. I suggest that this code can be helpful for musical analysis. Having seen the main difficulties of studying this code, I introduce a tool that provides a representation of the code while the work unfolds. By representing the elements of the code that produce the music, I suggest a new way to apprehend works of computer music. | 4,277 | [
"9867"
] | [
"105538"
] |
01482402 | en | [
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01482402/file/978-3-642-34691-0_15_Chapter.pdf | Joanna Strug
email: pestrug@cyf-kr.edu.pl
Barbara Strug
email: barbara.strug@uj.edu.pl
Machine Learning Approach in Mutation Testing
Keywords: mutation testing, machine learning, graph distance, classification, test evaluation
This paper deals with an approach based on the similarity of mutants. This similarity is used to reduce the number of mutants to be executed. In order to calculate such a similarity among mutants their structure is used. Each mutant is converted into a hierarchical graph, which represents the program's flow, variables and conditions. On the basis of this graph form a special graph kernel is defined to calculate similarity among programs. It is then used to predict whether a given test would detect a mutant or not. The prediction is carried out with the help of a classification algorithm. This approach should help to lower the number of mutants which have to be executed. An experimental validation of this approach is also presented in this paper. An example of a program used in experiments is described and the results obtained, especially classification errors, are presented.
Introduction
Software testing is a very important part of building an application. It can also be described as a process aimed at checking whether the application meets the initial requirements, works as expected and satisfies the needs of all parties involved.
Software testing, depending on the testing method employed, can be applied at different stages of the application development process. Traditionally most of the testing happens during the coding process and after it has been completed, but there exist approaches (for example agile), where testing is on-going. Thus the methodology of testing depends on the software development approach selected. This paper deals with mutation testing, called also mutation analysis or program mutation -a method of software testing, which involves introducing small changes in the source code (or for some programming languages byte code) of programs. Then the mutants are executed and tested by collections of tests called test suites. A test suite which does not detect mutant(s) is considered defective. The mutants are generated by using a set of mutation operators which try to mimic typical programming errors. This method aims at helping the tester assess the quality and derive effective tests.
One of the problems with mutation testing concerns the number of mutants generated for even a small program or method, which leads to the need to compile and execute a large number of copies of the program. This problem has reduced the use of mutation testing in practice. Over the years many tools supporting mutation testing have been proposed, but reducing the number of mutants is still an important aspect of mutation testing and there is a lot of research in this domain, which is briefly reviewed in the next section.
In this paper an approach based on the similarity of mutants is used. This similarity is used to reduce the number of mutants to be executed. In order to calculate such a similarity among mutants, their structure is used. Each mutant is converted into a hierarchical control flow graph, which represents the program's flow, variables and conditions. On the basis of this graph form, a similarity is calculated among programs. It is then used to predict whether a given test would detect a mutant or not. The prediction is carried out with the help of a classification algorithm. This approach should help to lower the number of mutants which have to be executed and, at the same time, help to assess the quality of test suites without the need to run them. This approach shows some proximity to the mutant clustering approach [START_REF] Ji | A Novel Method of Mutation Clustering Based on Domain Analysis[END_REF][START_REF] Hussain | Mutation Clustering[END_REF] as it also attempts to measure the similarity of mutants, but we represent mutants in a graph form and use a graph-based measure rather than converting them to a special space which allows for the use of the Hamming distance. Graphs have long been considered to have too high a computational cost to be of practical use in many domains, but recently there has been a large growth of research on them, which has resulted in the development of many algorithms and theoretical frameworks. Much of this research, which is briefly reviewed in the next section, deals with bio- and chemoinformatics, but some other domains were also touched upon.
The main contribution of this paper is a method to reduce the number of mutants that have to be executed in a dynamic way i.e. depending on the program for which they are generated rather than statically for a given language or the operator. Moreover this paper introduces a representation of programs that allows for comparing programs and it also proposes several measures of such a comparison. This approach was applied to two examples and the results seem to be encouraging.
The paper is organized in the following way: in the next section related work concerning mutation testing and different approaches to graph analysis is briefly presented. Then, in Section 3, some preliminary notions concerning classification, graphs, edit distance and graph kernels are presented. It is followed by Section 4, which presents the fundamental components of our approach, i.e. the hierarchical control flow graph and methods for calculating the edit distance and kernel for such graphs. In Section 5 the experiments, including their setup, are presented and the results are discussed. Finally, Section 6 summarizes the paper by presenting conclusions drawn from the research presented here as well as some possible extensions, improvements and directions for future work.
In this paper a number of issues from different domains are discussed. Thus, this review of related work takes several domains into account: mutation testing, and especially reduction approaches; different approaches to the classification problem; and graph analysis, in particular different approaches to calculating distances among graphs (such as the edit distance and kernel methods for graph data) and learning methods based on them.
Mutation testing goes back to the 70s [START_REF] Demillo | Hints on Test Data Selection: Help for the Practicing Programmer[END_REF] and it can be used at different stages of software development. It has also been applied to many programming languages including Java [START_REF] Chevalley | Applying Mutation Analysis for Object-oriented Programs Using a Reflective Approach[END_REF][START_REF] Chevalley | A Mutation Analysis Tool for Java Programs[END_REF][START_REF] Kim | Assessing Test Set Adequacy for Object Oriented Programs Using Class Mutation[END_REF][START_REF] Kim | The Rigorous Generation of Java Mutation Operators Using HAZOP[END_REF][START_REF] Kim | Class Mutation: Mutation Testing for Object-oriented Programs[END_REF][START_REF] Kim | Investigating the effectiveness of objectoriented testing strategies using the mutation method[END_REF] (used in this paper). A lot of research work has also been concerned with defining mutation operators that would mimic typical errors [START_REF] King | A Fortran Language System for Mutation-Based Software Testing[END_REF]. As mentioned in the introduction one of the main problem of mutation testing is the cost of executing large number of mutants so there has been a great research effort concerning reduction costs. Two main approaches to reduction can be divided into two groups: the first containing methods attempting to reduce the number of mutants and the second -those aimed at reducing the execution costs.
One of the methods used to mutant number reduction is sampling. It was first proposed firstly by Acree [START_REF] Acree | On Mutation[END_REF] and Budd [START_REF] Budd | Mutation Analysis of Program Test Data[END_REF]. They still generate all possible mutants but then a percentage of these mutants is then selected randomly to be executed, and all other are discarded. Many studies of this approach were carried out, for example Wong and Mathurs [START_REF] Mathur | An Empirical Comparison of Mutation and Data Flow Based Test Adequacy Criteria[END_REF][START_REF] Wong | On Mutation and Data Flow[END_REF] conducted an experiment using a random percentage of mutants 10% to 40% in steps of 5%.
Another approach to mutant number reduction used clustering [START_REF] Ji | A Novel Method of Mutation Clustering Based on Domain Analysis[END_REF][START_REF] Hussain | Mutation Clustering[END_REF]. It was proposed by Hussain [START_REF] Hussain | Mutation Clustering[END_REF] and instead of selecting mutants randomly, a subset is selected by a clustering algorithm. The process starts by generating all first order mutants, then clustering algorithm is used to put these mutants into clusters depending on the killable test cases. Mutants put into the same cluster are killed by a similar set of test cases, so a small selection of mutants is used from each cluster. All the other are then discarded.
Third approach to reduction was based on selective mutation, which consists in selecting only a subset of mutation operators thus producing smaller number of mutants [START_REF] Mathur | Performance, Effectiveness, and Reliability Issues in Software Testing[END_REF][START_REF] Offutt | An Experimental Evaluation of Selective Mutation[END_REF]. A much wider survey of the domain of mutation testing, including approaches to reduction was carried out by Jia et al. [START_REF] Jia | An Analysis and Survey of the Development of Mutation Testing[END_REF].
The approach proposed in this paper is partially similar to the first two described above, as it also generates all mutants, but then only a randomly selected subset of them is executed, and the test performance for the others is assessed on the basis of their similarity to the executed mutants, for which the performance of the test suites is already known.
The similarity of mutants is measured using a graph representation of each mutant. The use of graphs as a means of object representation has been widely researched. They are used in engineering, system modeling and testing, bioinformatics, chemistry and other domains of science to represent objects and the relations between them or their parts. For use in computer aided design different types of graphs were researched, not only simple ones but also hierarchical graphs (also called nested graphs [START_REF] Chein | Nested Graphs: A Graph-based Knowledge Representation Model with FOL Semantics[END_REF]).
In this paper a machine learning approach based on similarity is used to analyse graphs. The need to analyze and compare graph data appeared in many domains and thus there has been a significant amount of research in this direction. Three distinctive, although partially overlapping, approaches can be noticed in the literature.
The first one is mainly based on using standard graph algorithms, like finding a maximal subgraph or mining for frequently occurring subgraphs to compare or classify graphs. The frequent pattern mining approach to graph analysis has been researched mainly in the domain of bioinformatics and chemistry [START_REF] Agrawal | Mining association rules between sets of items in large databases[END_REF][START_REF] Han | Mining Frequent Patterns without Candidate Generation: A Frequent-pattern Tree Approach[END_REF][START_REF] Inokuchi | An Apriori-Based Algorithm for Mining Frequent Substructures from Graph Data[END_REF][START_REF] Yan | Substructure Similarity Search in Graph Databases[END_REF]47]. The main problem with this approach is its computational cost, and a huge number of frequent substructures usually found.
The second approach is based on transforming graphs into vectors by finding some descriptive features Among others Bunke and Riesen ([4,[START_REF] Richiardi | Vector Space Embedding of Undirected Graphs with Fixed-cardinality Vertex Sequences for Classification[END_REF][START_REF] Riesen | Cluster Ensembles Based on Vector Space Embeddings of Graphs[END_REF][START_REF] Riesen | Dissimilarity Based Vector Space Embedding of Graphs Using Prototype Reduction Schemes[END_REF][START_REF] Riesen | Reducing the dimensionality of dissimilarity space embedding graph kernels[END_REF]) have done a lot of research on vector space embedding of graphs, where as features different substructures of graphs are selected. Then their number is counted in each graph and these numerical values combined in a predefined order result in a vector that captures some of the characteristics of a graph it represents. Having a graph encoded in a vector a standard statistical learning algorithms can be applied. The main problem is in finding appropriate features/substructures and in enumerating them in each graph. It usually leads to problems similar to those in frequent pattern mining (which is often used to find features counted in vector representation). Nevertheless, this approach has successfully been applied in many domains like image recognition [START_REF] Bunke | Recent advances in graph-based pattern recognition with applications in document analysis[END_REF], and especially the recognition of handwritten texts [START_REF] Liwicki | Combining diverse systems for handwritten text line recognition[END_REF][START_REF] Liwicki | Automatic gender detection using on-line and off-line information[END_REF].
The third direction, which was proposed, among others, by Kashima and Gartner ([13,[START_REF] Kashima | Marginalized Kernels Between Labeled Graphs[END_REF]), is based on the theory of positive defined kernels and kernel methods [START_REF] Schlkopf | A Short Introduction to Learning with Kernels[END_REF][START_REF] Schlkopf | Learning with kernels[END_REF]. There has been a lot of research on different kernels for structured data, including tree and graph kernels [START_REF] Borgwardt | Shortest-path kernels on graphs[END_REF][START_REF] Gartner | A survey of kernels for structured data[END_REF][START_REF] Gartner | Kernels for structured data[END_REF][START_REF] Kashima | Marginalized Kernels Between Labeled Graphs[END_REF]. Tree kernels were proposed by Collins and Duffy [START_REF] Collins | New Ranking Algorithms for Parsing and Tagging: Kernels over Discrete Structures, and the Voted Perceptron[END_REF] and applied to natural language processing. The basic idea is to consider all subtrees of the tree, where a subtree is defined as a connected subgraph of a tree containing either all children of a vertex or none. This kernel is computable in O(|V 1 ||V 2 |), where |V i | is the number of nodes in the i -th tree [START_REF] Gartner | Kernels for structured data[END_REF].
In case of graph kernels there is a choice of several different ones proposed so far. One of them is based on enumerating all subgraphs of graphs G i and calculating the number of isomorphic ones. An all subgraph kernel was shown to be NP-hard by Gartner et al [START_REF] Gartner | A survey of kernels for structured data[END_REF]. Although, taking into account that in case of labelled graphs the computational time is significantly lower such a kernel is feasible in design applications. Another interesting group of graph kernels is based on computing random walks on both graphs. It includes the product graph kernel [START_REF] Gartner | A survey of kernels for structured data[END_REF] and the marginalized kernels [START_REF] Kashima | Marginalized Kernels Between Labeled Graphs[END_REF]. In product graph kernel a number of common walks in two graphs is counted. The marginalized kernel on the other hand is defined as the expectation of a kernel over all pairs of label sequences from two graphs. These kernels are computable in polynomial time, (O(n 6 ) [START_REF] Gartner | Kernels for structured data[END_REF]), although for small graphs it may be worse then 2 n , when the neglected constant factors contribute stronger.
The main research focus is on finding faster algorithms to compute kernels for simple graphs, mainly in bio- and chemoinformatics. Yet, to the authors' best knowledge, no research has been done in the area of defining and testing kernels for different types of graphs, such as the hierarchical control flow graphs proposed in this paper.
Preliminaries
Classification is one of the main tasks of machine learning. It consists in identifying to which of a given set of classes a new element (often called an observation) belongs. This decision is based on a so-called training set, which contains data about other elements (often called instances) whose class membership is known. The elements to be classified are analysed on the basis of their properties, called features. These features can be of different types (categorical, ordinal, integer-valued or real-valued), but some known algorithms work only if the data is real-valued or integer-valued. An algorithm which implements classification is known as a classifier. In machine learning, classification is considered a supervised learning task, i.e. the learning process uses a training set of correctly classified elements.
There is a number of known classification algorithms. One of them is k-NN (k nearest neighbours, where k is a parameter), which is used in this paper. In the majority of known classification algorithms, including k-NN, an instance to classify is described by a feature vector containing the properties of this instance. As in this paper graphs, not vectors, are used to represent the objects to classify, a way of calculating the distance between two graphs is needed. Two such methods, the graph edit distance and a graph kernel, are briefly presented in the following, together with some basic notions. Then, in the next section, we show how these concepts can be extended to deal with the hierarchical flow graphs proposed in this paper.
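As an illustration of how such a distance-based classifier can be organized, the following minimal Java sketch (our own, not the authors' implementation; class and method names are ours) implements k-NN over an arbitrary pairwise distance function, which can later be instantiated with a graph edit distance or a kernel-induced distance.

import java.util.*;
import java.util.function.BiFunction;

/** Minimal k-NN classifier that works with any pairwise distance (e.g. a graph distance). */
public class KnnClassifier<T> {
    private final List<T> trainingItems = new ArrayList<>();
    private final List<String> trainingLabels = new ArrayList<>();
    private final BiFunction<T, T, Double> distance;
    private final int k;

    public KnnClassifier(int k, BiFunction<T, T, Double> distance) {
        this.k = k;
        this.distance = distance;
    }

    public void add(T item, String label) {
        trainingItems.add(item);
        trainingLabels.add(label);
    }

    public String classify(T query) {
        // Sort training indices by distance to the query element.
        Integer[] idx = new Integer[trainingItems.size()];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparingDouble(
                (Integer i) -> distance.apply(query, trainingItems.get(i))));
        // Majority vote among the k nearest neighbours.
        Map<String, Integer> votes = new HashMap<>();
        for (int i = 0; i < Math.min(k, idx.length); i++)
            votes.merge(trainingLabels.get(idx[i]), 1, Integer::sum);
        return Collections.max(votes.entrySet(), Map.Entry.comparingByValue()).getKey();
    }
}

A kernel K can be turned into such a distance in the standard way, e.g. d(x, y) = sqrt(K(x, x) - 2K(x, y) + K(y, y)).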
Graphs
A simple graph G is a set of nodes (also called vertices) V and edges E, where $E \subset V^2$. Each node and edge can be labeled by a function ξ, which assigns labels to nodes and edges. A walk w of length k-1 in a graph is a sequence of nodes $w = (v_1, v_2, \dots, v_k)$ where $(v_i, v_{i+1}) \in E$ for $1 \le i \le k-1$. If $v_i \neq v_j$ for $i \neq j$, then the walk w is called a path.
Graph Edit Distance. A graph edit distance (GED) approach is based on the fact that a graph can be transformed into another one by performing a finite number of graph edit operations, which may be defined in different ways depending on the algorithm. GED is then defined as the least-cost sequence of such edit operations. Typical edit operations include node and edge insertion, node and edge deletion, and node and edge substitution (label change). A cost function has to be defined for each of the operations, and the cost of an edit operation sequence is defined as the sum of the costs of all operations present in the given sequence. It has to be noticed that the sequence of edit operations, and thus the cost of the transformation of a graph into another one, is not necessarily unique, but the lowest cost is, and it is used as the GED. For any given domain of application the two main issues are thus the way in which the similarity of atoms (nodes and edges) is defined and what the cost of each operation is. For labelled graphs, i.e. graphs having labels on nodes, edges, or both of them, the deletion/insertion/substitution costs in the GED computations may depend on these labels.
Graph Kernels. Another approach to using traditional classification algorithms for non-vector data is based on the so-called kernel trick, which consists in mapping elements from a given set A into an inner product space S (having a natural norm) without ever having to actually compute the mapping, i.e. graphs do not have to be mapped into objects of the space S; only the way of calculating the inner product in that space has to be well defined. Linear classifications in the target space are then equivalent to classifications in the source space A. The trick allowing one to avoid the actual mapping consists in using learning algorithms that need only inner products between the elements (vectors) in the target space, and defining the mapping in such a way that these inner products can be computed on the objects in the original source space by means of a kernel function. For the classifiers a kernel matrix K must be positive semi-definite (PSD), although there are empirical results showing that some kernels not satisfying this requirement may still do reasonably well if they approximate well the intuitive idea of similarity among the given objects. Formally, a positive definite kernel on a space X is a symmetric function $K : X^2 \to \mathbb{R}$ which satisfies $\sum_{i,j=1}^{n} a_i a_j K(x_i, x_j) \ge 0$ for any points $x_1, \dots, x_n \in X$ and coefficients $a_1, \dots, a_n \in \mathbb{R}$.
The first approach of defining kernels for graphs was based on comparing all subgraphs of two graphs. The value of such a kernel usually equals to the number of identical subgraphs. While this is a good similarity measure, the enumeration of all subgraphs is a costly process. Another approach is based on comparing all paths in both graphs. It was used by Kashima [START_REF] Kashima | Marginalized Kernels Between Labeled Graphs[END_REF] who proposed the following equation:
$$K(G_1, G_2) = \sum_{path_1, path_2 \in V_1^* \times V_2^*} p_1(path_1)\, p_2(path_2)\, K_L(lab(path_1), lab(path_2)), \qquad (1)$$
where $p_i$ is a probability distribution on $V_i^*$, and $K_L$ is a kernel on the sequences of labels of nodes and edges along the path $path_i$. It is usually defined as a product of subsequent edge and node kernels. This equation can be seen as a marginalized kernel and thus is a positive definite kernel [START_REF] Tsuda | Marginalized kernels for biological sequences[END_REF]. Although computing this kernel requires summing over an infinite number of paths, it can be done efficiently by using the product graph and matrix inversion [START_REF] Gartner | A survey of kernels for structured data[END_REF]. Another approach uses convolution kernels [START_REF] Haussler | Convolutional kernels on discrete structures[END_REF], which are a general method for structured data (and thus very useful for graphs). Convolution kernels are based on the assumption that a structured object can be decomposed into components; kernels are then defined for those components and the final kernel is calculated over all possible decompositions.
Data Preparation
To carry out the experiments a number of steps was needed to prepare the data. Firstly, two relatively simple, but nevertheless representative, examples were selected and mutants for them were generated using the MuJava tool [START_REF] Ma | MuJava: a mutation system for java[END_REF]. One of the examples is the simple search method presented in Fig. 1.
Hierarchical Control Flow Graphs
Although a well-known method of representing programs or their components (methods) is the control flow diagram (CFD), it cannot be directly used to compare programs, as we need to compare each element of any expression or condition separately, while a traditional CFD labels its elements with whole expressions. So in this paper a combination of the CFD and hierarchical graphs is proposed. It adds a hierarchy to the diagram, enabling us to represent each element of a program in a single node and thus making the graphs better suited to comparison. An example of such a hierarchical control flow graph (HCFG) is depicted in Figs. 2a and b. It represents the method search(...) and its mutant depicted in Fig. 1a and b, respectively. It can be noticed that the insertion of ++ before variable i in the for loop is represented by an appropriate expression tree replacing the simple node labelled i inside the node labelled for.
Let for the rest of this paper R V and R E be the sets of node and edge labels, respectively. Let ϵ be a special symbol used for unlabelled edges. The set of node labels consists of the set of all possible keywords, names of variables, operators, numbers and some additional grouping labels (like for example declare or array shown in Fig. 2. The set of edge labels contains Y and N .
Definition 1. (Labelled hierarchical control flow graph) A labelled hierarchical control flow graph HCFG is defined as a 5-tuple $(V, E, \xi_V, \xi_E, ch)$ where:
1. V is a set of nodes,
2. E is a set of edges, $E \subset V \times V$,
3. $\xi_V : V \to R_V$ is a node labelling function,
4. $\xi_E : E \to R_E \cup \{\epsilon\}$ is an edge labelling function,
5. $ch : V \to P(V)$ is a function assigning to each node the set of its children, i.e. the nodes directly nested in v.
Let, for the rest of this paper, ch(v) denote the set of children of v, and |ch(v)| the size of this set. Let anc be a function assigning to each node its ancestor, and let λ be a special empty symbol (different from ϵ), $anc : V \to V \cup \{\lambda\}$, such that anc(v) = w if v ∈ ch(w), and anc(v) = λ otherwise.
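A direct, illustrative encoding of Definition 1 in Java could look as follows (this is our own sketch, not code from the paper; unlabelled edges are stored with an empty string standing for ϵ).

import java.util.*;

/** A node of a labelled hierarchical control flow graph (Definition 1), illustrative only. */
class HcfgNode {
    final String label;                                  // element of R_V: keyword, variable, operator, ...
    HcfgNode ancestor;                                   // anc(v); null plays the role of the empty symbol lambda
    final List<HcfgNode> children = new ArrayList<>();   // ch(v): nodes directly nested in v
    final Map<HcfgNode, String> outgoing = new LinkedHashMap<>();  // control-flow edges, labels from R_E or ""

    HcfgNode(String label) { this.label = label; }

    void nest(HcfgNode child) {                          // maintain ch together with anc
        child.ancestor = this;
        children.add(child);
    }

    void connect(HcfgNode target, String edgeLabel) {    // e.g. "Y"/"N" edges leaving a condition node
        outgoing.put(target, edgeLabel);
    }
}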
Hierarchical Control Flow Graphs Distance
HCFG Edit Distance. To define the edit cost for a particular graph, a cost function for the edit operations must be defined. In the case of HCF graphs it was defined to mimic as closely as possible the influence of a given operation on the similarity. Costs for changing labels were set separately for all pairs of possible keywords, variable names and operators. For example, the cost of changing the operator in a condition from < into <= is lower than that of changing == into !=, as the perceived difference between the latter is higher. The cost of changing the conditional expression into an arbitrary true or false is even higher, and this is well represented in the edit distance concept, as replacing the expression tree with a single node requires significantly more delete/insert operations.
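The paper does not list numeric costs, but the ordering described above can be captured by a small substitution-cost table; the values below are placeholder assumptions that merely respect that ordering.

import java.util.*;

/** Illustrative label-substitution costs for the HCFG edit distance; the numbers are assumptions. */
class HcfgEditCosts {
    private final Map<String, Double> substitution = new HashMap<>();

    HcfgEditCosts() {
        // Perceptually close operators get a low substitution cost...
        put("<", "<=", 0.2);
        put(">", ">=", 0.2);
        // ...while semantically opposite ones are more expensive.
        put("==", "!=", 0.8);
    }

    private void put(String a, String b, double cost) {
        substitution.put(a + "->" + b, cost);
        substitution.put(b + "->" + a, cost);
    }

    double substitutionCost(String from, String to) {
        if (from.equals(to)) return 0.0;
        return substitution.getOrDefault(from + "->" + to, 1.0);  // default cost for unrelated labels
    }
}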
HCFG Kernel
The edit distance does not take into account the additional information contained in the hierarchical structure of an HCFG. To incorporate this information into the similarity calculations, a hierarchical substructure kernel $K_{HCFG}$ is proposed in this paper. It takes into account the label of a given node, the number of its children (and thus its internal complexity), the label of its hierarchical ancestor (and thus its position within the structure of the program), and the number and labels of the edges connecting this node with its neighbouring nodes (both incoming and outgoing edges are taken into account). This substructure kernel uses node, edge and tree kernels. The node and edge kernels are defined below. The tree kernel, used within the node kernel to compare expression trees, is a standard one [START_REF] Collins | New Ranking Algorithms for Parsing and Tagging: Kernels over Discrete Structures, and the Voted Perceptron[END_REF].
Definition 2. A node kernel, denoted $k_V(v, w)$, where v and w are nodes of hierarchical control flow graphs, is defined in the following way:
$$k_V(v, w) = \begin{cases} 1 & \xi_V(v) = \xi_V(w) \wedge |ch(v)| = |ch(w)| = 0 \\ k_V(ch(v), ch(w)) & \xi_V(v) = \xi_V(w) \wedge |ch(v)| = |ch(w)| = 1 \\ K_T(ch(v), ch(w)) & |ch(v)| > 1 \vee |ch(w)| > 1 \\ 0 & \xi_V(v) \neq \xi_V(w). \end{cases}$$
It can be observed that for nodes having more than one child, and thus containing an expression tree, the tree kernel $K_T$ is used to compute the actual similarity.
For nodes having different labels the kernel returns 0, while for nodes containing exactly one child the node kernel is called recursively.
Definition 3. An edge kernel, denoted $k_E(e_i, e_j)$, where $e_i$ and $e_j$ are edges of hierarchical flow graphs, is defined in the following way:
$$k_E(e_i, e_j) = \begin{cases} 1 & \xi_E(e_i) = \xi_E(e_j) \\ 0 & \xi_E(e_i) \neq \xi_E(e_j). \end{cases}$$
On the basis of the above kernel a similarity for HCFG is computed.
Definition 4.
$$K_{HCFG}(G_i, G_j) = \sum_{i=1}^{m} \sum_{j=1}^{n} K_S(S_i, S_j), \qquad (2)$$
where m and n are the numbers of hierarchical nodes in each graph and
$$K_S(S_i, S_j) = k_{node}(v_i, v_j) + k_{node}(anc(v_i), anc(v_j)) + \sum_{r=1}^{C_n} \sum_{t=1}^{C_m} k_{node}(c_r(v_i), c_t(v_j)) + \sum_{w_i \in Nb(v_i)} \sum_{w_j \in Nb(v_j)} k_{edge}((v_i, w_i), (v_j, w_j))\, k_{node}(w_i, w_j), \qquad (3)$$
where each $S_i$ is a substructure of $G_i$ consisting of a node $v_i$, its direct ancestor $anc(v_i)$, all its children $ch(v_i)$ (where $C_n$ is the number of children and $c_n(v_i)$ the n-th child of $v_i$), and its neighbourhood $Nb(v_i)$.
This kernel is based on the decomposition of a graph into substructures according to the concept of R-convolution kernels and thus is positive semidefinite [START_REF] Haussler | Convolutional kernels on discrete structures[END_REF], and so acceptable as a kernel function [START_REF] Schlkopf | A Short Introduction to Learning with Kernels[END_REF].
Remark on computational costs. Both the edit distance and graph kernels are known to have a high computational cost, as was mentioned in Sections 1 and 2. But in the case of HCFGs we have a special situation: as each graph represents a first-order mutant, any two graphs can differ in at most two places. Moreover, we know a priori where the change happened, and all the remaining elements of both graphs are identical. As a result, the actual computation of both the edit distance and the HCFG kernel can be done much more efficiently than in the general case of two arbitrarily chosen graphs.
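For illustration, the case analysis of the node and edge kernels can be transcribed almost literally into Java. The sketch below reuses the HcfgNode class sketched earlier; the tree kernel $K_T$ of Collins and Duffy is left as a crude stub, so this is a simplified approximation of the kernel, not the exact definition.

/** Sketch of the node and edge kernels defined above (K_T is only stubbed). */
class HcfgKernels {

    // Edge kernel: 1 when the two edge labels match, 0 otherwise.
    static double edgeKernel(String edgeLabel1, String edgeLabel2) {
        return edgeLabel1.equals(edgeLabel2) ? 1.0 : 0.0;
    }

    // Node kernel following the case analysis of Definition 2.
    static double nodeKernel(HcfgNode v, HcfgNode w) {
        if (!v.label.equals(w.label)) return 0.0;
        int cv = v.children.size(), cw = w.children.size();
        if (cv > 1 || cw > 1) return treeKernel(v, w);                     // expression trees
        if (cv == 1 && cw == 1) return nodeKernel(v.children.get(0), w.children.get(0));
        return (cv == 0 && cw == 0) ? 1.0 : 0.0;                           // leaves with equal labels
    }

    // Placeholder for the standard tree kernel K_T; this pairwise sum is a crude stand-in only.
    static double treeKernel(HcfgNode v, HcfgNode w) {
        double sum = 0.0;
        for (HcfgNode cv : v.children)
            for (HcfgNode cw : w.children)
                sum += nodeKernel(cv, cw);
        return sum;
    }
}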
Experiments and Results
For each set of mutants the k-NN classification algorithm was run using two different distance measures: the edit distance and a distance computed from the HCFG kernel. For the first example three test suites were used, and the set of mutants was randomly divided into three parts of similar size; the first was used as a training set and the others as instances to classify. The classification was then repeated using the subsequent subsets as training sets. The whole process was repeated five times using different partitions of the set of mutants and the results obtained were averaged. Table 1 presents the results obtained for this example using the HCFG edit distance to compute distances in the k-NN classifier. The parameter k was, after some experimental tuning, set to 5 for all experiments. The first column of the table shows the percentage of instances classified correctly. The results for mutants classified incorrectly are presented separately for those classified as detectable while actually they are not (column labelled incorrect killed) and for those classified as not detected while they actually are detected by a given test suite (column labelled incorrect not killed). Calculating these results separately was motivated by the meaning of these misclassifications. While classifying a mutant as not detected leads to overtesting, the misclassification of the second type can result in missing some errors in the code, which is more dangerous. As the results are also used to evaluate the quality of the test suites used, incorrectly classifying a mutant as not detected leads to giving a test suite a lower score than its actual one, while the second misclassification leads to overvaluation of a given test suite. Again, while the first situation is surely not desired, the second one poses more problems, especially as it may lead to a situation where a mutant not detected by any test suite would be labelled as detected, thus resulting in undetected errors in the code.
It can be observed that the classification performed reasonably well for all test suites, with the exception of TS1. A deeper analysis of this case seems to suggest that it results from the random partition of the set of mutants for this test suite, in which the training set contained a disproportionately large number of undetectable mutants. This situation also suggests performing the partition of mutants in a "smarter" way instead of randomly. One possible way to do it is to select a proportional number of mutants of each type (generated by a given type of mutation operator). The results obtained with the use of the HCFG kernel, presented in Table 2, are slightly better in general; especially the classification for TS1 improved significantly, although it may be due to a better choice of training sets. It can also be noticed that, while the percentage of correctly classified mutants for test suite 3 is a bit lower (but the difference is small), fewer mutants were incorrectly classified as detectable, although this gain happened at the expense of a larger classification error in the last column. The results show that the classification improvements for the kernel method are not very significant, but more experiments are needed to decide whether this approach is worth its slightly higher computational cost. For the second example five test suites were used and, as there were more mutants, their set was divided into four parts of similar size, with, as in the first example, the first part being used as a training set and the others as instances to classify. The classification was then repeated using the subsequent subsets as training sets. The whole process was also repeated five times with different partitions of the set of mutants and the results obtained were averaged. Table 3 presents the results obtained with the use of the edit distance and Table 4 those obtained with the use of the kernel-based distance. Similarly to the first example, the kernel-based approach produced slightly better results for correct classifications, with the exception of TS 2, where the error is slightly higher, but only by 0.2%. However, a slightly larger improvement can be observed in the lower percentage of mutants incorrectly classified as detectable. It can also be noticed that the results for TS 3 were visibly worse than for the other suites. Closer inspection seems to suggest that this is also a problem with randomly partitioning the set of mutants. As TS 2 detects only 22 out of 87 mutants, an over-representation of detectable mutants in the training set may occur, thus leading to incorrectly classifying many mutants as detectable. As in the first example, this suggests replacing random partitioning with another scheme. Here a useful idea seems to be selecting mutants into the training set in such a way as to preserve the proportion of both detectable and undetectable mutants close to that in the whole set.
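One simple way to implement such a proportion-preserving partition is a stratified split that deals shuffled mutants of each class round-robin into the parts. The following Java sketch is an assumption about how this could be done; it is not part of the reported experiments.

import java.util.*;

/** Stratified split of mutants into parts, preserving the killed/not-killed proportion. */
class StratifiedSplit {
    static <T> List<List<T>> split(List<T> killed, List<T> notKilled, int parts, long seed) {
        Random rnd = new Random(seed);
        Collections.shuffle(killed, rnd);
        Collections.shuffle(notKilled, rnd);
        List<List<T>> result = new ArrayList<>();
        for (int p = 0; p < parts; p++) result.add(new ArrayList<>());
        // Deal each class round-robin so every part keeps roughly the global proportion.
        for (int i = 0; i < killed.size(); i++) result.get(i % parts).add(killed.get(i));
        for (int i = 0; i < notKilled.size(); i++) result.get(i % parts).add(notKilled.get(i));
        return result;
    }
}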
Conclusions and Future Work
In this paper an approach to the classification of mutants was proposed as a tool to reduce the number of mutants to be executed and to evaluate the quality of test suites without executing them against all possible mutants. This method deals with reducing the number of mutants that have to be executed in a dynamic way, i.e. depending on the program for which they are generated rather than statically for a given language or operator. The approach still needs more experiments to fully confirm its validity, but the results obtained so far are encouraging.
However, several problems were noticed during the experiments that require further research. Firstly, random selection, although performing reasonably well, causes problems for some test suites. Possible solutions, as suggested in the discussion of the results, include selecting mutants so as to ensure they represent the diversity of mutation operations, thus avoiding selecting into the training set mutants generated by the same type of operation. The second solution is to select the numbers of detectable and undetectable mutants so as to preserve the proportions of the full set. We plan to investigate both approaches to check whether they improve the results in a significant way.
Another direction for future research is connected with the use of kernels. To make better use of them, one of the kernel-based classifiers, for example support vector machines, could be used instead of k-NN. The kernel itself also offers some possibilities for improvement. The node kernel proposed in this paper is based on the label of the node independently of its position ("depth") in the hierarchy; adding a factor proportional to the depth of the node is also planned to be researched.
In the k-NN classifier the training set consists of vectors in a multidimensional space for which the class membership is known. Thus the training stage of the classifier consists only in storing the vectors and class labels of the elements of the training set. Then, during the actual classification of elements of unknown class membership, the distance from the new element to all elements of the training set is calculated and the element is assigned to the class which is most frequent among the k training examples nearest to it.
Fig. 2. Examples of flow graphs: a) a graph for the program from Fig. 1a, b) a flow graph for one of the AOIS mutants (from Fig. 1b)
For this example MuJava generated 38 mutants; for the second example there were 87 mutants. The mutants were then converted into the graph form described below.
public int search(int v){
  int i;
  for(i=0;i<size;i=i+1)
    if(values[i]==v) return i;
  return -1;
}

public int search(int v){
  int i;
  for(i=0;++i<size;i=i+1)
    if(values[i]==v) return i;
  return -1;
}
Fig. 1. A simple search method and one of its AOIS (Arithmetic Operator Insertion
[29]) mutants
Table 1. The classification of mutants of example 1 with the use of GED
correct incorrect killed incorrect not killed
TS 1 65.2% 13.06% 21.74%
TS 2 78.25% 8.7% 13.5%
TS 3 82.6% 8.7% 8.7%
Table 2. The classification of mutants of example 1 with the use of kernel
correct incorrect killed incorrect not killed
TS 1 75.55% 5.45% 19.00%
TS 2 84.1% 6.65% 9.25%
TS 3 82.2% 4.7% 12.7%
Table 3. The classification of mutants of example 2 with the use of GED
correct incorrect killed incorrect not killed
TS 1 75.7% 12.1% 12.2%
TS 2 73.4% 6.5% 20.1%
TS 3 60.5% 26.2% 16.3%
TS 4 78.2% 10.3% 11.5%
TS 5 76.4% 11.3% 12.3%
Table 4. The classification of mutants of example 2 with the use of kernel
correct incorrect killed incorrect not killed
TS 1 79.1% 6.3% 14.6%
TS 2 73.2% 4.5% 22.3%
TS 3 61.5% 22.6% 20.9%
TS 4 85.1% 4.6% 10.5%
TS 5 79.2% 9.53% 11.3% | 37,571 | [
"1003382",
"1003383"
] | [
"487486",
"303988"
] |
01482405 | en | [
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01482405/file/978-3-642-34691-0_17_Chapter.pdf | Akihito Hiromori
email: hiromori@ist.osaka-u.ac.jp
Takaaki Umedu
email: umedu@ist.osaka-u.ac.jp
Hirozumi Yamaguchi
Teruo Higashino
email: higashino@ist.osaka-u.ac.jp
Protocol Testing and Performance Evaluation for MANETs with Non-uniform Node Density Distribution
Keywords: Protocol testing, Performance evaluation, MANET, Mobility, VANET, DTN, Rural postman problem
In this paper, we focus on Mobile Ad-hoc Networks (MANETs) with non-uniform node density distribution such as Vehicular Ad-hoc Networks (VANETs) and Delay Tolerant Networks (DTNs), and propose a technique for protocol testing and performance evaluation. In such MANETs, node density varies depending on locations and time, and it dynamically changes every moment. In the proposed method, we designate node density distributions and their dynamic variations in a target area. Then, we construct a graph called TestEnvGraph where all node density distributions are treated as its nodes and they are connected by edges whose weights denote differences of two node density distributions. We specify a set of edges to be tested in the graph, formulate a problem for efficiently reproducing all the given node density distributions and their dynamic variations as a rural postman problem, find its solution and use it as the order of reproduction of designated node density distributions and their variations. Protocol testing is carried out by reproducing node density distributions in the derived order. We have designed and developed a method and its tool for mobility generation on MANETs, which can reproduce any designated node density distribution and its dynamic variations in a target area. From our experiments for a VANET protocol, we have shown that our method can give a similar trend in network throughput and packet loss rates compared with realistic trace based protocol testing.
Introduction
With the advance of mobile wireless communication technology, recently several types of mobile wireless communication systems have been designed and developed. Smart phones and car navigation systems can be used for communicating neighboring people and vehicles, respectively. Mobile Ad-hoc Network (MANET) applications such as Vehicular Networks (VANETs) and Delay Tolerant Networks (DTNs) are becoming popular. VANET is the most promising MANET applications. Also, several DTN systems using smart phones and car navigation systems have been proposed as emergency communication means in disaster situations. Those systems can be used as social systems and they require high reliability and sustainability. In general, sensor networks are stable and they are often used in areas with uniform node density distributions. However, unlike stable sensor networks, VANET and DTN applications are used under nonuniform node density distributions. Node density varies depending on locations and time, and it dynamically changes every moment. It is well-known that node mobility and density affect reliability and performance of MANET applications [START_REF] Boudec | Perfect Simulation and Stationarity of a Class of Mobility Models[END_REF][START_REF] Tracy | A Survey of Mobility Models for Ad Hoc Network Research[END_REF]. In order to improve reliability and performance of MANET applications, it is important to reproduce several types of node density distributions efficiently and carry out their testing in simulation using network simulators and/or emulation using real mobile devices (e.g. mobile robots).
In this paper, we propose a protocol testing method for such MANET protocols and applications. In the proposed method, first we designate a set of node density distributions and their dynamic variations for a target area for which we want to carry out protocol testing and performance evaluation. For example, in VANET applications, node densities near intersections might become high when their signals are red while they might become low when the signals become green. Here, we assume that protocol designers can designate such node density distributions and their variations for a target area through simulation and real trace data. Then, we construct a graph called TestEnvGraph where all node density distributions are treated as its nodes and they are connected by edges whose weights denote differences of two node density distributions. The graph TestEnvGraph represents a testing environment and its dynamic change of node density distributions to be tested. As shown in [START_REF] Boudec | Perfect Simulation and Stationarity of a Class of Mobility Models[END_REF], it is known that it takes time to reproduce MANET with designated mobility and make it stable. Thus, it is desirable that we can reproduce all the designated node density distributions and their variations with a small cost. In this paper we formulate a problem for efficiently reproducing all the designated density distributions and their variations as a rural postman problem [START_REF] Pearn | Algorithms for the Rural Postman Problem[END_REF] of the graph TestEnvGraph, find its solution using a heuristic algorithm and use it as an efficient order to reproduce all the designated node density distributions and their variations.
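The paper does not fix a particular difference measure for the edge weights of TestEnvGraph. One natural instantiation, sketched below in Java with hypothetical density maps, is the sum of absolute per-cell differences between two density distributions, so that the weight counts how much the density pattern has to change along a transition; this is our own assumption, offered only for illustration.

/** Sketch: edge weight of TestEnvGraph as the difference between two density distributions. */
class TestEnvGraphSketch {
    // A density distribution is a per-cell density level (e.g. 0 = low, 1 = middle, 2 = high).
    static int weight(int[] densityA, int[] densityB) {
        int diff = 0;
        for (int c = 0; c < densityA.length; c++)
            diff += Math.abs(densityA[c] - densityB[c]);   // cells whose density must change
        return diff;
    }

    public static void main(String[] args) {
        int[] redSignal   = {2, 2, 0, 0, 1, 0};   // hypothetical six-cell map around an intersection
        int[] greenSignal = {0, 1, 0, 0, 1, 0};
        System.out.println("edge weight = " + weight(redSignal, greenSignal));
        // Low-weight edges are cheap transitions; a rural postman tour over the edges to be
        // tested then gives an order that reproduces every designated variation at small cost.
    }
}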
On the other hand, in [START_REF] Ueno | A Simple Mobility Model Realizing Designated Node Distributions and Natural Node Movement[END_REF], we have proposed a method for generating a waypoint mobility model with designated node density distributions for a target area. In this paper, we slightly extend its method and use it to reproduce designated node density distributions and their dynamic variations mechanically. Fig. 1 denotes an example of a designated node density distribution and its mobility patterns. The dark gray cells in Fig. 1 (a) denote high node density while the light gray cells denote low node density. Fig. 1 (b) denotes example mobility patterns. Using a rural postman tour for the graph TestEnvGraph, we reproduce a testing environment which can treat any designated node density distributions and their dynamic variations with a small cost.
In order to show the effectiveness of the proposed method, we have compared the network throughput and packet loss rates of VANET applications obtained with our approach and with realistic vehicular mobility. In ITS research communities, it is known that vehicular densities strongly affect the performance of vehicle-to-vehicle (V2V) communication. Therefore, trace-based data and microscopic vehicular mobility models are often used. They are useful to reproduce typical traffic patterns. Here, we have generated vehicular mobility patterns using both the proposed method and a microscopic traffic simulator VISSIM [START_REF] Ptv | VISSIM[END_REF], and compared their performance. We have generated 10 typical patterns of node density distributions and their dynamic variations near an intersection. Then, we have evaluated the performance of a protocol. The results are shown in Section 6. Our experiments have shown that the performance based on node density distributions and their dynamic variations derived using our proposed method and tool is rather close to that based on real trace based and microscopic vehicular mobility based traffic patterns.
Real traces and those obtained from microscopic traffic simulators can reproduce typical traffic patterns easily. However, it is difficult to reproduce peculiar traffic patterns using such methods. In general, it takes much time and cost to reproduce rare cases. On the other hand, the proposed method can designate any node density distribution and its variations. It can help to improve the performance and reliability of MANET protocols and applications. As far as the authors know, this is the first approach in which we can designate any node density distributions and their variations and use them for protocol testing. By finding a rural postman tour for the graph TestEnvGraph, we minimize the cost for reproducing the designated node density distributions and their variations.
Related Work
It has been recognized that node mobility and density affect the performance of mobile wireless networks [START_REF] Royer | An Analysis of the Optimum Node Density for Ad Hoc Mobile Networks[END_REF][START_REF] Zhang | Study of a Bus-based Disruption-Tolerant Network: Mobility Modeling and Impact on Routing[END_REF], and many mobility models have been proposed so far [START_REF] Camp | A Survey of Mobility Models for Ad Hoc Network Research[END_REF][START_REF] Tracy | A Survey of Mobility Models for Ad Hoc Network Research[END_REF]. Random-based mobility models such as the Random Waypoint (RWP) model and the Random Direction (RD) model are often used, and some analytical researches have revealed their properties [START_REF] Chu | Node Density and Connectivity Properties of the Random Waypoint Model[END_REF][START_REF] Rojas | Experimental Validation of the Random Waypoint Mobility Model through a Real World Mobility Trace for Large Geographical Areas[END_REF]. The results have shown that the node density distribution is not uniform; e.g. there is a high-density peak at the central point of the target area. There are several works for protocol testing of MANET. For example, Ref. [START_REF] Zakkuidin | Towards a Game Theoretic Understanding of Ad Hoc Routing[END_REF] proposed a game theory based approach for formalizing testing of MANET routing protocols. Ref. [START_REF] Maag | A Step-wise Validation Approach for a Wireless Routing Protocol[END_REF] proposed a method for conformance testing and applied it to Dynamic Source Routing (DSR). For details, see a survey of Ref. [START_REF] Carneiro | One Step Forward: Linking Wireless Self-Organizing Network Validation Techniques with Formal Testing Approaches[END_REF].
On the other hand, if we want to design MANET applications for pedestrians with smart phones and/or running vehicles in urban districts, we need more realistic mobility. For example, in VANET application areas, Refs. [START_REF] Khelil | An Epidemic Model for Information Diffusion in MANETs[END_REF] and [START_REF] Saito | Design and Evaluation of Inter-Vehicle Dissemination Protocol for Propagation of Preceding Traffic Information[END_REF] proposed adaptive protocols for efficient data dissemination from vehicles by considering neighboring vehicular density so that we can avoid the so-called broadcast storm problem. In [START_REF] Artimy | Assignment of Dynamic Transmission Range Based on Estimation of Vehicle Density[END_REF], the authors proposed a method for estimation of vehicular density. Ref. [START_REF] Sommer | Progressing Towards Realistic Mobility Models in VANET Simulations[END_REF] argued the need for combining a specific road traffic generator and a wireless network simulator. They need to be coupled bidirectionally when a target VANET protocol may influence the behavior of vehicles on streets. Recently, several microscopic vehicular mobilities are proposed as the means for reproducing realistic vehicular mobility [START_REF] Halati | CORSIM-corridor Traffic Simulation Model[END_REF][START_REF]Quadstone Paramics: Paramics[END_REF][START_REF] Umedu | An Inter-vehicular Communication Protocol for Distributed Detection of Dangerous Vehicles[END_REF][START_REF] Ptv | VISSIM[END_REF]. A traffic simulator VISSIM [START_REF] Ptv | VISSIM[END_REF] adopts a microscopic vehicular mobility. Ref. [START_REF] Umedu | An Inter-vehicular Communication Protocol for Distributed Detection of Dangerous Vehicles[END_REF] also proposed a microscopic vehicular mobility which can reproduce a vehicular mobility close to real traffic traces obtained from aerial photographs of Google Earth.
MANET applications for pedestrians with smart phones call for similar analysis. In [START_REF] Maeda | Urban Pedestrian Mobility for Mobile Wireless Network Simulation[END_REF], we have shown that there are large variations in the performance and packet loss rates of multi-hop communications depending on node density distributions. In DTNs, it is known that node mobility and density strongly affect the reliability and performance of DTN applications (e.g. see [START_REF] Zhang | Study of a Bus-based Disruption-Tolerant Network: Mobility Modeling and Impact on Routing[END_REF]). Especially, if there are no relay nodes, in many DTN protocols intermediate nodes store their received data and forward them to their preceding nodes when they are found. In order to show that the proposed store-and-forward mechanisms can work well, we need to check sustainability for several types of node density distributions.
All the above research works show that reproduction of node mobility and density distributions is very important. However, there are very few works about testing of MANET protocols, which consider non-uniform node density distributions and their dynamic variations. This paper is motivated to give a solution for protocol testing on such a MANET.
MANETs with Non-uniform Node Density Distribution
In general, the dissemination intervals of many VANET protocols are autonomously adjusted depending on the observed node density so that the probability of packet collisions can be reduced. Many DTN protocols have store-and-forward mechanisms so that packets can reach their destinations even if the node density on a part of their routes is very low for some period. The performance of such MANET applications cannot be evaluated with general random-based mobility models.
In Fig. 2, we show node density distributions and average speeds of moving vehicles near an intersection where we divide a target road segment between intersections into three cells of 200 meters and show their node densities with three categories: "0(low)" (white cells), "1(middle)" (gray cells) and "2(high)" (black cells). In this figure, on the horizontal road, the densities of two cells close to the intersection are "high" and the other cells are very low, while the densities of the vertical road are "middle" or "low". It is a typical situation where We have generated one hour's traffic trace data of 1 km 2 square area with 5×5 checked roads using the microscopic traffic simulator VISSIM [START_REF] Ptv | VISSIM[END_REF], and analyzed their node density distributions (note that we have removed first 20 minutes' trace data in simulation since the simulated traffic has not been stable at first). In the analysis, we have made a density map like Fig. 2 at each intersection for every unit time period where the unit time period is 60 sec. Here, totally 1025 patterns of node density distributions are derived. In Fig. 3, we have shown typical 10 traffic patterns and their dynamic change representing a loop where an ID number is given for each pattern. We have classified the obtained patterns by density distribution patterns for horizontal roads, and found that the most emergent top 14 patterns can cover about 25 % of traffic situations. Fig. 4 denotes the transitions among the top 14 typical patterns. When we execute typical VANET based dissemination protocols and multi-hop communications among running vehicles, the typical variations of their node density distributions correspond to transitions (sequences of edges) whose lengths are three or four in Fig. 4. Thus, by reproducing all such transitions in Fig. 4 and carrying out protocol testing for their transitions, we can check their reliability.
Fig. 4. Transitions among top 14 typical states
For example, in [START_REF] Saito | Design and Evaluation of Inter-Vehicle Dissemination Protocol for Propagation of Preceding Traffic Information[END_REF] we have proposed a dissemination protocol for propagating preceding traffic information. This protocol can gather real-time traffic information from 2-3 km ahead within 3 minutes with 60-80 % probability. Most of such preceding traffic information is sent from neighboring vehicles within a few hundred meters. Suppose that Fig. 4 shows the variations of node density distributions for such a road section and that each edge corresponds to a one-minute variation of node density distributions. Then, each sequence of three edges starting from a state corresponds to the dynamic change of node density distributions in the target road section over 3 minutes. Thus, by collecting all sequences of three edges from all states and testing performance measures such as packet loss rates and buffer lengths over them, we can evaluate the performance characteristics and reliability of the protocol. However, the number of all sequences of three transitions from the states corresponding to typical node density distributions is rather large; in Section 5, we will propose an efficient testing method. We give another example. In [START_REF] Nakamura | Realistic Mobility Aware Information Gathering in Disaster Areas[END_REF], we have proposed a protocol for realistic mobility-aware information gathering in disaster areas, where we combine the notion of store-and-forward mechanisms in DTNs with geographical routing on MANETs. In the proposed protocol, if intermediate nodes cannot relay safety information to its home cells by multi-hop communication, they hold it until they meet preceding nodes and re-transmit it as proxies. If shortest paths to home cells are not available, detours are found autonomously. However, in [START_REF] Nakamura | Realistic Mobility Aware Information Gathering in Disaster Areas[END_REF], we have only evaluated the performance of DTN protocols for fixed disaster situations such as Fig. 5 (b) and (c), where the white cells and gray cells represent movable areas and obstacle areas, respectively. On the other hand, in an early stage of a disaster, obstacle cells might be small as in Fig. 5 (a) and then expand gradually as in Fig. 5 (b) and (c). Since there are many such situations to be tested, the total time necessary for testing should be minimized.
In the following sections, we will propose a testing method that takes these conditions into account.
Mobility Generation with Designated Node Density Distribution
In [START_REF] Ueno | A Simple Mobility Model Realizing Designated Node Distributions and Natural Node Movement[END_REF], we have proposed a method for generating a waypoint mobility model with designated node density distributions for a target area, where each node repeats the process of (i) choosing a destination point in the target area, (ii) moving straight toward the destination point with a constant velocity, and (iii) staying at that point for a certain time period. The goal of this work is to synthesize mobility patterns that capture real (or intended) node density distributions. Fig. 6 shows typical node density distributions, where dark and light gray colors denote high and low node densities, respectively. A target area is divided into several subregions called cells, and we can designate a desired node density for each cell. In order to automatically generate natural mobility patterns realizing the designated node density distributions, the method determines the probabilities of choosing waypoints from those cells so that the given node density distributions are satisfied. Fig. 7 shows example mobility traces for the four types of node density distributions. The problem is formulated as an optimization problem of minimizing the error from the designated node density distributions, and the probabilities of choosing waypoints at each cell are determined. Since the problem has nonlinear constraints, a heuristic algorithm generates near-optimal solutions. Here, we extend the method in [START_REF] Ueno | A Simple Mobility Model Realizing Designated Node Distributions and Natural Node Movement[END_REF] so that we can treat variations of node density distributions. First, we outline how the method determines the probabilities of choosing waypoints from each cell so that the given node density distributions are satisfied. Assume that the target area is divided into m × n square cells and that these cells are numbered sequentially from top left (0) to bottom right (m•n-1) as in Fig. 8 (a). Suppose that each node in cell i selects a destination cell (say j) with probability p i,j , called the destination probability. These probabilities must satisfy the following equation.
m•n-1 ∑ j=0 p i,j = 1 (0 ≤ i ≤ m • n -1) (1)
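As an illustration, the following is a minimal Python sketch of the waypoint process driven by the destination probabilities p i,j; the cell layout, the speed V and the pause time T_pause follow the definitions given below, and all function and variable names are ours, not part of the original tool.

```python
import random

def simulate_node(p, m, n, cell_size, V, T_pause, sim_time):
    """Move one node according to destination probabilities p[i][j] (row-major m x n cells)."""
    def random_point(cell):
        r, c = divmod(cell, n)
        return (c + random.random()) * cell_size, (r + random.random()) * cell_size

    cell = random.randrange(m * n)          # initial cell
    x, y = random_point(cell)
    t = 0.0
    while t < sim_time:
        # (i) choose a destination cell j with probability p[cell][j]
        dest = random.choices(range(m * n), weights=p[cell])[0]
        xd, yd = random_point(dest)         # waypoint inside the destination cell
        # (ii) move straight toward the waypoint with constant speed V
        dist = ((xd - x) ** 2 + (yd - y) ** 2) ** 0.5
        t += dist / V
        x, y, cell = xd, yd, dest
        # (iii) pause at the destination
        t += T_pause
```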
!" In a steady state, the number of nodes moving from an origin cell to a destination cell per unit time is We call it as a flow rate and denote it as f j . The flow rate f j must satisfy the following equation.
#" $! %! &! ! !"%! #!"%! ! ! ! ! ! ! ! ! !! &!"%! '#"%("!! ) %*$! ) %*!! ) %*&! ) %*!+%! ) %*!+&! ! ! ! ! !! "! #! ! ! ! ! ! ! " # ! ! " # ! " # $ $ ! ! " # ! " # $ $ ! ! " # ! " # ! ! " # ! " # ! !"## $ % & ' ! ! ! ! ! ! ! !! "! #! ! ! ! ! ! " # $ % & % & & ! ! ! " # "! $ % "! $!& #'(( ) ! ! " # "! $ * "! $!& #'(( (a
f j = m•n-1 ∑ i=0 f i • p i,j (0 ≤ j ≤ m • n -1) (2)
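Eq. (2) states that the flow rates form a stationary distribution of the Markov chain defined by the destination probabilities. One simple way to obtain them numerically, assuming the chain is irreducible, is fixed-point (power) iteration; the sketch and its names are illustrative only.

```python
def flow_rates(p, iters=1000):
    """Solve f_j = sum_i f_i * p[i][j] (Eq. 2) by power iteration."""
    n_cells = len(p)
    f = [1.0 / n_cells] * n_cells           # start from a uniform guess
    for _ in range(iters):
        nxt = [sum(f[i] * p[i][j] for i in range(n_cells)) for j in range(n_cells)]
        s = sum(nxt)
        f = [v / s for v in nxt]            # keep the rates normalized
    return f
```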
Next, we define cell transit time representing time necessary for traveling from cell i to cell j and cell transit number representing the number of nodes moving through a cell (say, cell k). As shown in Fig. 8 (b), we denote an origin point in cell i and a destination point in cell j as (x i , y i ) and (x j , y j ), respectively. The average transit distance (denoted by L i,j ) between these two points is represented as
√ (x j -x i ) 2 + (y j -y i ) 2 .
Similarly, the transit distance on cell k (denoted L pass i,j,k ) for nodes traveling from cell i to cell j is shown in Fig. 8 (b). Here, (x k1 , y k1 ) and (x k2 , y k2 ) denote the intersection points of the line segment between (x i , y i ) and (x j , y j ) with the two sides of cell k. Assume that all nodes move at the same speed (denoted V ) and that they stop for the same pause time T pause after arriving at their destination cells. Hereafter, T pass i,j,k denotes the average cell transit time on cell k for nodes moving from cell i to cell j. T pass i,j,k is given by the following equation. Note that the value of T pass i,j,k is zero if cell k has no intersection with the line segment (i.e., L pass i,j,k = 0).
T pass i,j,k = L pass i,j,k / V (if j ≠ k), and T pass i,j,k = L pass i,j,k / V + T pause (if j = k) (3)
Hereafter, we show how to calculate the cell transit number from the destination probabilities. The number of nodes moving from cell i to cell j per unit time can be represented as f i • p i,j . The transit time on cell k for these nodes can be represented as T pass i,j,k . Thus, among the nodes moving from cell i to cell j, the number of nodes present on cell k is calculated as f i • p i,j • T pass i,j,k (see Fig. 8 (c)). Since nodes might pass through cell k for different combinations of origin-destination cells, the total number of nodes on cell k (the cell transit number d k ) can be represented as follows.
d k = m•n-1 ∑ i=0 m•n-1 ∑ j=0 f i • p i,j • T pass i,j,k (4)
Here, in order to treat the cell transit number of cell k as the node density of cell k, we assume that the total number of nodes is 1, as expressed in Eq. (5). By substituting the flow rates f i into Eq. (4), we obtain the node density distribution induced by the destination probabilities p i,j .
m•n-1 ∑ k=0 d k = 1 (5)
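The following sketch computes the cell transit numbers of Eq. (4) and normalizes them as in Eq. (5); it assumes the transit times T pass i,j,k have already been computed (e.g., by clipping the line segment of Fig. 8 (b) against each cell), and the function name is ours.

```python
def cell_transit_numbers(p, f, t_pass):
    """d[k] = sum_{i,j} f[i] * p[i][j] * t_pass[i][j][k], normalized to sum to 1 (Eqs. 4-5)."""
    n_cells = len(p)
    d = [0.0] * n_cells
    for i in range(n_cells):
        for j in range(n_cells):
            w = f[i] * p[i][j]
            for k in range(n_cells):
                d[k] += w * t_pass[i][j][k]
    total = sum(d)
    return [v / total for v in d]
```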
Since the problem described above has non-linear constraints, we give a heuristic algorithm to derive a solution. A trivial solution satisfying the above constraints is that all nodes move only within their initially assigned cells. In [START_REF] Ueno | A Simple Mobility Model Realizing Designated Node Distributions and Natural Node Movement[END_REF], we prove that, for any node density distribution, we can generate a corresponding non-trivial waypoint mobility satisfying the designated node density distributions, as in Fig. 7. For details about how to solve the problem with the above non-linear constraints, see [START_REF] Ueno | A Simple Mobility Model Realizing Designated Node Distributions and Natural Node Movement[END_REF].
In order to treat dynamic changes of node density distributions, we impose the following additional criterion when a new (next) node density distribution is generated from the current one. Let p i,j and p' i,j denote the destination probabilities from cell i to cell j at the current and next time slots, respectively, and let Dif f denote the sum of the differences between the destination probabilities of the current and next time slots over all cells. If Dif f is small, the next node mobility can be generated relatively easily from the current node mobility. Therefore, when searching for a solution satisfying the above constraints, we use the following objective function Dif f and look for a solution minimizing its value. In our experience, if the value of Dif f is small, the time necessary to reach a steady next node density distribution is also small. Thus, in this paper, we generate the next node density distribution by sequentially generating slightly different intermediate node density distributions starting from the current one. By using this method, we can generate any dynamic variation of node density distributions.
Dif f = m•n-1 ∑ i=0 m•n-1 ∑ j=0 |p' i,j -p i,j | (6)
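For reference, this is how Diff of Eq. (6) and the intermediate distributions mentioned above could be computed; linear interpolation between the two probability matrices is just one simple choice, not necessarily the one used in the original implementation.

```python
def diff(p_cur, p_next):
    """Eq. (6): total absolute change of the destination probabilities."""
    return sum(abs(p_next[i][j] - p_cur[i][j])
               for i in range(len(p_cur)) for j in range(len(p_cur[i])))

def intermediate_matrices(p_cur, p_next, steps):
    """Slightly different intermediate matrices leading from p_cur to p_next."""
    for s in range(1, steps + 1):
        a = s / steps
        yield [[(1 - a) * p_cur[i][j] + a * p_next[i][j]
                for j in range(len(p_cur[i]))] for i in range(len(p_cur))]
```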
In Table 1, the designated node density distributions and the measured density distributions of the generated mobilities are shown in the left-side and right-side tables, respectively. Although the derived mobility cannot reflect the designated node density distributions perfectly, the errors are mostly within 0.1 % (relatively larger errors occur when reproducing empty node density distributions).
Table 1. Designated (left) and measured (right) density distributions (%)

3.00 4.00 3.00 4.00 3.00 | 3.01 3.92 2.94 4.00 3.00
4.00 6.00 4.00 6.00 4.00 | 4.14 6.04 4.07 6.07 4.03
3.00 4.00 3.00 4.00 3.00 | 2.98 3.99 3.94 4.01 2.92
4.00 6.00 4.00 6.00 4.00 | 3.94 5.99 4.02 6.01 3.91
3.00 4.00 3.00 4.00 3.00 | 3.01 3.99 3.01 4.00 3.00
Efficient Protocol Testing
In general, protocol testing is classified into two categories: simulation-based testing and real-machine-based testing. Real-machine-based testing is not realistic for VANET applications. In that case, using wireless network simulators and reproducing several node density distributions is one possibility. As far as the authors know, however, most wireless network simulators can only reproduce random-based and trace-based mobility. In contrast, our method can reproduce node density distributions corresponding to several types of vehicular mobility patterns and carry out simulation-based testing. The proposed method can also be used for real-machine-based testing: for example, if multiple mobile robots follow the mobility patterns generated by the method described in the previous section, the movement of these robots satisfies the designated node density distributions.
Here, we construct a graph TestEnvGraph representing the testing environment in a target area. Let T patterns denote the set of typical dynamic transition patterns of node density distributions to be tested. For example, suppose that the node density distributions of three cells vary from "201" to "211" and then to "222", where "0", "1" and "2" denote low, middle and high node densities, respectively, and that we want to carry out testing for this sequence of node density distributions in this order. In such a case, we give a transition pattern ID "p h " to the transition pattern "201 → 211 → 222" and form the pair < p h , "201 → 211 → 222" >. We call this pair the transition pattern with ID "p h ". We assume that the set T patterns of transition patterns includes all node density distributions and variations for which we want to carry out testing.
Then, we construct the following graph G = (V, E) from the transition patterns with IDs: the node < p h , n i > (1 ≤ i ≤ k) belongs to V if and only if a transition pattern < p h , "n 1 → n 2 → ... → n k " > is included in T patterns , and the edge < p h , n i > → < p h , n i+1 > (1 ≤ i ≤ k -1) belongs to E if and only if < p h , "n 1 → n 2 → ... → n k " > is included in T patterns .
Here, we define the difference between node density distributions. For the transition pattern "201 → 211 → 222", the difference of node density distributions between "201" and "211" is 1, since only the low node density "0" of the second cell is changed to middle "1". On the other hand, the difference between "211" and "222" is 2, since the node densities of the second and third cells are both changed from "1" to "2" and the sum of these changes is 2. We treat such a difference as the weight of the corresponding edge in E. Since only the target transition patterns are represented in the graph G = (V, E), G = (V, E) is not always connected in general. Thus, we construct the graphs G' = (V + V', E + E') and G" = (V + V' + V", E + E' + E") as follows.
For the graph G' = (V + V', E + E'), let V' denote the set of nodes such that < * , n 1 > and < * , n k > belong to V' if < p h , "n 1 → n 2 → ... → n k " > is included in T patterns , and let < * , n 1 > → < p h , n 1 > and < p h , n k > → < * , n k > belong to E' if < p h , "n 1 → n 2 → ... → n k " > is included in T patterns .
Here, we treat the weights of the edges in E' as zero. Then, we construct the graph G" = (V + V' + V", E + E' + E") as follows and treat it as the graph TestEnvGraph representing the testing environment. For each pair of < * , n i > and < * , n j > whose difference of node density distributions is d, if there does not exist a path from < * , n i > to < * , n j > whose total sum of edge weights is d in G' = (V + V', E + E'), we add < * , n i1 >, ..., < * , n i(d-1) > to the set of nodes V", and add (i) the edge < * , n i > → < * , n i1 >, (ii) the edges < * , n ip > → < * , n i(p+1) > (1 ≤ p ≤ d -2), and (iii) the edge < * , n i(d-1) > → < * , n j > to the set of edges E", where the weights of the edges (i), (ii) and (iii) are all one. We treat the resulting graph G" = (V + V' + V", E + E' + E") as the graph TestEnvGraph. Note that there might be several choices for the intermediate nodes of < * , n ip > → < * , n i(p+1) > (1 ≤ p ≤ d -2); any such choice is acceptable as TestEnvGraph.
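To make the construction concrete, here is an illustrative Python sketch of the density-difference weight and of a simplified TestEnvGraph builder; it uses plain dictionaries instead of the full G/G'/G" notation, replaces the chains of intermediate nodes by single edges of the same total weight, and all names are ours.

```python
def density_diff(a, b):
    """Weight between two density strings, e.g. density_diff("211", "222") == 2."""
    return sum(abs(int(x) - int(y)) for x, y in zip(a, b))

def build_test_env_graph(t_patterns):
    """t_patterns: dict pattern_id -> list of density strings, e.g. {"p1": ["222", "221"]}."""
    edges = {}                                   # (src, dst) -> weight
    entry_exit = set()
    for pid, seq in t_patterns.items():
        for a, b in zip(seq, seq[1:]):           # target edges (the set E)
            edges[((pid, a), (pid, b))] = density_diff(a, b)
        # entry/exit nodes with zero-weight edges (the sets V' and E')
        entry_exit.update({("*", seq[0]), ("*", seq[-1])})
        edges[(("*", seq[0]), (pid, seq[0]))] = 0
        edges[((pid, seq[-1]), ("*", seq[-1]))] = 0
    # connect entry/exit nodes; intermediate V" nodes are collapsed into one edge here
    for u in entry_exit:
        for v in entry_exit:
            if u != v and (u, v) not in edges:
                edges[(u, v)] = density_diff(u[1], v[1])
    return edges
```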
Fig. 9 shows an example TestEnvGraph. Here, we assume T patterns = {< p 1 , "222 → 221" >, < p 2 , "111 → 121" >, < p 3 , "221 → 111" >}. In this example, there are three cells. The node density distribution "221" denotes that the node density of the first two cells is "2" (high) and that of the third cell is "1" (middle). In T patterns , three variations of node density distributions are required to be tested. The variation "222 → 221" requires that the node density distribution be changed from "222" to "221". In Fig. 9, we first construct the graph G = (V, E). The nodes and edges drawn with thick lines denote V and E, respectively. The value on each edge denotes its weight (the difference of node density distributions). The nodes and edges drawn with fine lines denote those belonging to V' + V" and E' + E" of TestEnvGraph = (V + V' + V", E + E' + E"), respectively.
[Property of TestEnvGraph]
The graph TestEnvGraph holds the following properties.
(a) For each pair of < * , n i > and < * , n j > in T estEnvGraph = (V + V' + V", E + E' + E") whose difference of node density distributions is d, there exists a path from < * , n i > to < * , n j > (and also a path from < * , n j > to < * , n i >) whose total sum of edge weights is d in TestEnvGraph. (b) If we carry out testing for all edges in E, then we can conclude that all the transition patterns representing the designated node density distributions and their variations in T patterns have been tested.
[Rural Postman Problem (RPP)]
For a given directed graph G = (V, E) and a subset E' ⊆ E of edges (we call E' the set of target edges to be traversed), the Rural Postman Problem (RPP) is the problem of finding a cheapest closed tour containing each edge in the set E' of target edges to be traversed (and possibly other edges in E). The problem is shown to be NP-complete [START_REF] Pearn | Algorithms for the Rural Postman Problem[END_REF]. We call such a cheapest tour a rural postman tour for the graph.
Note that there are several heuristic algorithms for efficiently solving the Rural Postman Problem (RPP), although such heuristic algorithms might not find the optimal solution [START_REF] Pearn | Algorithms for the Rural Postman Problem[END_REF].
[Assumptions for protocol testing] [Problem to find the most efficient testing order for TestEnvGraph] For a given T estEnvGraph = (V + V' + V", E + E' + E"), where E denotes the set of all the transition patterns of node density distributions and their variations belonging to T patterns , if the above assumptions hold, then the problem of finding the most efficient testing order for TestEnvGraph is formulated as the problem of finding a rural postman tour for TestEnvGraph, where E is treated as the set of target edges to be traversed.
In order to solve the Rural Postman Problem (RPP), we have used a simulated annealing (SA) based heuristic algorithm and found an efficient testing order for TestEnvGraph.
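For illustration, a minimal simulated-annealing sketch for ordering the target edges is given below; it scores an ordering by summing the edge weights plus precomputed shortest-path distances between consecutive target edges, which is only one possible formulation and not necessarily the one used in our tool.

```python
import math, random

def tour_cost(order, weight, shortest):
    """order: list of target edges (u, v); shortest[a][b]: shortest-path distance a -> b."""
    cost = sum(weight[e] for e in order)
    for (u1, v1), (u2, v2) in zip(order, order[1:]):
        cost += shortest[v1][u2]                      # connect consecutive target edges
    cost += shortest[order[-1][1]][order[0][0]]       # close the tour
    return cost

def anneal(target_edges, weight, shortest, steps=20000, t0=5.0):
    order = list(target_edges)
    random.shuffle(order)
    best, best_cost = order[:], tour_cost(order, weight, shortest)
    cur_cost = best_cost
    for s in range(steps):
        t = t0 * (1 - s / steps) + 1e-6
        i, j = sorted(random.sample(range(len(order)), 2))
        order[i], order[j] = order[j], order[i]       # swap two target edges
        c = tour_cost(order, weight, shortest)
        if c < cur_cost or random.random() < math.exp((cur_cost - c) / t):
            cur_cost = c
            if c < best_cost:
                best, best_cost = order[:], c
        else:
            order[i], order[j] = order[j], order[i]   # undo the swap
    return best, best_cost
```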
For the TestEnvGraph shown in Fig. 9, suppose that we start testing from node < * , "111" >. Then, the rural postman tour < * , "111" > → < p2, "111" > → < p2, "121" > → < * , "121" > → < * , "221" > → < * , "222" > → < p1, "222" > → < p1, "221" > → < * , "221" > → < p3, "221" > → < p3, "111" > → < * , "111" > is a shortest tour for this testing, and its total weight (the sum of the differences of node density distributions) is 6. If MANET designers generate variations of node density distributions in this order and carry out protocol testing, they can do so with minimum cost.
[Problem to find the most efficient testing order] Let V = {n 1 , n 2 , ..., n k } denote the set of all designated node density distributions for which we want to carry out testing (and/or performance evaluation). Then, we designate < p i , "n i → n i " > (1 ≤ i ≤ k) as T patterns . Using the above algorithm, we construct TestEnvGraph and find a rural postman tour where E = {< p i , "n i → n i " > | 1 ≤ i ≤ k} is treated as the set of target edges to be traversed. This modified rural postman problem (RPP) corresponds to the problem of finding the most efficient testing order for carrying out tests of all designated node density distributions [START_REF] Pearn | Algorithms for the Rural Postman Problem[END_REF].

Table 2. The number of packet losses for trace data generated by VISSIM
Pattern [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] Total
293      0   0   0   3   0   1   0   0   0   0    0    1     5
446      0   0   0   0   0   1   0   0   0   0    1    0     2
517      0   0   0   2   0   0   1   0   0   0    1    0     4
246      1   1   0   2   0   1   0   0   0   1    2    9    17
80       0   0   0   0   0   0   0   0   0   0    6    7    13
369      0   1   0   0   0   0   0   0   0   1    7    4    13
1        0   1   0   0   0   0   0   0   0   0    4    6    11
71       0   0   0   0   0   0   1   1   0   0    0    2     4
45       1   0   0   0   0   1   1   1   0   0    1    1     6
152      0   1   0   1   0   1   1   0   0   0    3    3    10

Table 3. The number of packet losses for proposed method
Pattern [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] Total
293      0   0   0   2   0   2   0   0   0   0    0    2     6
446      0   0   0   0   0   1   0   0   0   0    0    0     1
517      0   0   0   2   0   0   0   0   0   0    1    0     3
246      0   0   0   2   0   1   0   0   0   1    3    8    15
80       0   0   0   0   0   0   0   0   0   0    7    6    13
369      0   0   0   0   0   0   0   0   0   0    6    6    12
1        0   0   0   0   0   0   0   0   0   0    4    6    10
71       0   0   0   0   0   0   1   1   0   0    0    2     4
45       0   0   0   0   0   1   1   1   0   0    1    1     5
152      0   0   0   1   0   1   1   0   0   0    3    3     9
Experimental Results and Analysis
Here, we show some experimental results and their analysis using a case study on VANET. The method described in Section 4 can generate a similar node density distribution for given traffic trace data; however, its mobility is not the same as that of the real trace data. Therefore, we use the microscopic traffic simulator VISSIM and measure node density distributions from the obtained typical trace data shown in Fig. 3. We have conducted a simulation for a trace composed of these 10 patterns to evaluate multi-hop communications with the AODV protocol [START_REF] Perkins | Ad Hoc On-demand Distance Vector (AODV)[END_REF] over the intersection. We have transmitted packets from left to right through the intersection every 1 second. Table 2 shows the number of packet losses at each cell in Fig. 3 for these patterns. Each column [n] denotes the number of packet losses at cell [n] on the horizontal road in Fig. 2. The routes constructed by the AODV protocol usually lay over the cells on the horizontal road, and since the packets are transmitted from left to right and the vehicles in these cells also move in the same direction, there are relatively few packet losses at the upside cells even though their densities are high. We have also measured the node density distributions at those cells. Based on the measured values, we have reproduced their node density distributions using the method described in Section 4, where the "Manhattan1" mobility in Fig. 6 (c) is used for generating vehicular mobility. Then, we have evaluated the packet losses through network simulation. Table 3 shows the packet losses for the corresponding 10 patterns in Fig. 3. Although the results in Table 2 and Table 3 are not identical, their similarity is rather high. Therefore, our proposed mobility model with designated node density distributions can be expected to show a similar trend in network throughput/reliability, which is related to how many packet losses occur. Note that since the packet losses happen at different timings in the two methods, it might be difficult to reproduce and evaluate time-sensitive protocols with our proposed method.
In general, trace-based testing can represent realistic situations more accurately, and typical traffic patterns can be obtained easily. However, it is difficult to obtain unusual traffic patterns, and doing so takes much time and cost. In order to improve the reliability and sustainability of target protocols such as VANET protocols, it is very important to reproduce not only typical traffic patterns but also unusual ones and to evaluate reliability and sustainability in those situations.
Conclusion
In this paper, we have proposed a method for efficiently carrying out protocol testing for a set of designated node density distributions and their variations. The method formulates the problem of finding the most efficient testing order as the problem of finding a rural postman tour of a graph called TestEnvGraph. The experimental results show that our method can easily reproduce node density distributions and their variations for VANET applications, and that the resulting network throughput and packet loss rates are rather similar to those based on real trace-based traffic data.
One direction for future work is to collect several types of real trace data and evaluate the effectiveness and applicability of the proposed method.
Fig. 1. Example of node density distribution and its mobility patterns

In ITS research communities, it is known that vehicular densities strongly affect the performance of vehicle-to-vehicle (V2V) communication. Therefore, trace-based data and microscopic vehicular mobility models are often used; they are useful for reproducing typical traffic patterns. Here, we have generated vehicular mobility patterns using both the proposed method and the microscopic traffic simulator VISSIM[START_REF] Ptv | VISSIM[END_REF], and compared their performance with those obtained in real trace based (microscopic mobility based) approaches. We have generated 10 typical patterns of node density distributions and their dynamic variations near an intersection, and then evaluated the performance of a protocol. The results are shown in Section 6. Our experiments have shown that the performance based on node density distributions and their dynamic variations derived using our proposed method and tool is rather close to that based on real trace based and microscopic vehicular mobility based traffic patterns. Real traces and those obtained from microscopic traffic simulators can reproduce typical traffic patterns easily. However, it is difficult to reproduce peculiar traffic patterns using such methods; in general, it takes much time and cost to reproduce rare cases. On the other hand, the proposed method can designate any node density distribution and its variations. It can help to improve the performance and reliability of MANET protocols and applications. As far as the authors know, it is the first approach in which we can designate any node density distributions and their variations and use them for protocol testing. By finding a rural postman tour of the graph TestEnvGraph, we minimize the cost of reproducing the designated node density distributions and their variations.
Fig. 2. Node density and average speed at an intersection in typical conditions
Fig. 3.
Fig. 5. Expansion of obstacle cells in disaster
Fig. 6. Snapshots for four types of node density distributions
Fig. 7. Example mobility traces for the four types of node density distributions
Fig. 8. Calculation of node transition probabilities
(a) Checkerboard (designated | measured, %)
4.00 4.50 5.00 4.50 4.00 | 3.95 4.41 4.96 4.41 3.95
4.50 0.00 5.50 0.00 4.50 | 4.39 0.41 5.34 0.41 4.39
5.00 5.50 6.00 5.50 5.00 | 5.00 5.43 5.92 5.43 5.00
4.50 0.00 5.50 0.00 4.50 | 4.39 0.41 5.34 0.41 4.39
4.00 4.50 5.00 4.50 4.00 | 3.95 4.41 4.96 4.41 3.95
(b) Manhattan1
Fig. 9. Example TestEnvGraph
(a) We can generate a waypoint-based mobility model with any designated node density distribution using the method described in the previous section. (b) Generating the waypoint-based mobility model for a designated node density distribution requires some cost (time). (c) Changing a given node density distribution to another one whose difference of node density distributions is d requires a cost (time) proportional to d.
| 41,462 | [
"1003385",
"1003386",
"1003387",
"1003388"
] | [
"206804",
"206804",
"206804",
"206804"
] |
01482410 | en | [
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01482410/file/978-3-642-34691-0_5_Chapter.pdf | Martin Fagereng Johansen
email: martin.fagereng.johansen@sintef.no
Øystein Haugen
email: oystein.haugen@sintef.no
Franck Fleurey
email: franck.fleurey@sintef.no
Erik Carlson
email: erik.carlson@no.abb.com
Jan Endresen
email: jan.endresen@no.abb.com
Tormod Wien
email: tormod.wien@no.abb.com
A Technique for Agile and Automatic Interaction Testing for Product Lines
Keywords: Product Lines, Testing, Agile, Continuous Integration, Automatic, Combinatorial Interaction Testing
Product line developers must ensure that existing and new features work in all products. Adding to or changing a product line might break some of its features. In this paper, we present a technique for automatic and agile interaction testing for product lines. The technique enables developers to know if features work together with other features in a product line, and it blends well into a process of continuous integration. The technique is evaluated with two industrial applications, testing a product line of safety devices and the Eclipse IDEs. The first case shows how existing test suites are applied to the products of a 2wise covering array to identify two interaction faults. The second case shows how over 400,000 test executions are performed on the products of a 2-wise covering array using over 40,000 existing automatic tests to identify potential interactions faults.
Introduction
A product line is a collection of products with a considerable amount of hardware or code in common. The commonality and differences between the products are usually modeled as a feature model. A product of a product line is given by a configuration of the feature model, constructed by specifying whether features are including or not. Testing product lines is a challenge since the number of possible products grows exponentially with the number of choices in the feature model. Yet, it is desirable to ensure that the valid products function correctly.
One approach for testing product lines is combinatorial interaction testing [START_REF] Cohen | Constructing interaction test suites for highlyconfigurable systems in the presence of constraints: A greedy approach[END_REF]. Combinatorial interaction testing is to first construct a small set of products, called a covering array, in which interaction faults are most likely to show up and then to test these products normally. We have previously advanced this approach by showing that generating covering arrays from realistic features models is tractable [START_REF] Johansen | Properties of realistic feature models make combinatorial testing of product lines feasible[END_REF] and by providing an algorithm that allows generating covering arrays for product lines of the size and complexity found in industry [START_REF] Johansen | An Algorithm for Generating t-wise Covering Arrays from Large Feature Models[END_REF].
In its current form, the application of combinatorial interaction testing to testing product lines is neither fully automatic nor agile; a technique for automatic and agile testing of product lines based on combinatorial interaction testing is the contribution of this paper, presented in Section 3. The technique is evaluated by applying it to test two industrial product lines, a product line of safety devices and the Eclipse IDEs; this is presented in Section 4.
In Section 4.1 it is shown how the technique can be implemented using the Common Variability Language (CVL) [START_REF] Haugen | Adding standardized variability to domain specific languages[END_REF] tool suite. (CVL is the language of the ongoing standardization effort of variability languages by OMG.) Five test suites were executed on 11 strategically selected products, the pair-wise covering array, of a product line of safety devices to uncover two unknown and previously undetected bugs.
In Section 4.3 it is shown how the technique can be implemented using the Eclipse Platform plug-in system. More than 40,000 existing automatic tests were executed on 13 strategically selected products, the pair-wise covering array, of the Eclipse IDE product line, producing more than 400,000 test results that reveal a multitude of potential interaction faults.
2 Background and Related Work
Product Lines
A product line [START_REF] Pohl | Software Product Line Engineering: Foundations, Principles and Techniques[END_REF] is a collection of products with a considerable amount of hardware or code in common. The primary motivation for structuring one's products as a product line is to allow customers to have a system tailored for their purpose and needs, while still avoiding redundancy of code. It is common for customers to have conflicting requirements. In that case, it is not even possible to ship one product for all customers.
The Eclipse IDE products [START_REF] Rivieres | Eclipse Platform Technical Overview[END_REF] can be seen as a software product line. Today, Eclipse lists 12 products (which configurations are shown in Table 1a 4 ) on their download page 5 .
One way to model the commonalities and differences in a product line is using a feature model [START_REF] Kang | Feature-oriented domain analysis (foda) feasibility study[END_REF]. A feature model sets up the commonalities and differences of a product line in a tree such that configuring the product line proceeds from the root of the tree. Figure 1 shows the part of the feature model for the Eclipse IDEs that is sufficient to configure all official versions of the Eclipse IDE. The figure uses the common notation for feature models; for a detailed explanation of feature models, see Czarnecki and Eisenecker 2000 [START_REF] Czarnecki | Generative programming: methods, tools, and applications[END_REF].
Product Line Testing
Testing a product line poses a number of new challenges compared to testing single systems. It has to be ensured that each possible configuration of the product line functions correctly. One way to validate a product line is through testing, but testing is done on a running system, and the software product line is simply a collection of many products. One cannot test each possible product, since the number of products in general grows exponentially with the number of features in the product line. For the feature model in Figure 1, there are 356,352 possible configurations.

Fig. 1: Feature Model for the Eclipse IDE Product Line
Reusable Component Testing In a survey of empirics of what is done in industry for testing software product lines [START_REF] Johansen | A Survey of Empirics of Strategies for Software Product Line Testing[END_REF], we found that the technique with considerable empirics showing benefits is reusable component testing. Given a product line where each product is built by bundling a number of features implemented in components, reusable component testing is to test each component in isolation. The empirics have later been strengthened; Ganesan et al. 2012 [START_REF] Ganesan | An analysis of unit tests of a flight software product line[END_REF] is a report on the test practices at NASA for testing their Core Flight Software System (CFS) product line. They report that the chief testing done on this system is reusable component testing [START_REF] Ganesan | An analysis of unit tests of a flight software product line[END_REF].
Interaction Testing There is no single recommended approach available today for testing interactions between features in product lines efficiently [START_REF] Engström | Software product line testing -a systematic mapping study[END_REF], but there are many suggestions. Some of the more promising suggestions are combinatorial interaction testing [START_REF] Cohen | Constructing interaction test suites for highlyconfigurable systems in the presence of constraints: A greedy approach[END_REF], discussed below; a technique called ScenTED, where the idea is to express the commonalities and differences on the UML model of the product line and then derive concrete test cases by analyzing it [START_REF] Reuys | The scented method for testing software product lines[END_REF]; and incremental testing, where the idea is to automatically adapt a test case from one product to the next using the specification of similarities and differences between the products [START_REF] Uzuncaova | Incremental test generation for software product lines[END_REF]. Kim et al. 2011 [14] presented a technique where they can identify irrelevant features for a test case using static analysis.
Combinatorial Interaction Testing: Combinatorial interaction testing [START_REF] Cohen | Constructing interaction test suites for highlyconfigurable systems in the presence of constraints: A greedy approach[END_REF] is one of the most promising approaches. The benefits of this approach is that it deals directly with the feature model to derive a small set of products (a covering array) which products can then be tested using single system testing techniques, of which there are many good ones [START_REF] Binder | Testing object-oriented systems: models, patterns, and tools[END_REF].
Table 1: Eclipse IDE Products, Instances of the Feature Model in Figure 1 (a) Official Eclipse IDE products
Feature\Product 1 2 3 4 5 6 7 8 9 101112 EclipseIDE XXXXXXXXX X X X RCP Platform XXXXXXXXX X X X CVS XXXXXXXXX X X X EGit --XXXX ------ EMF XX ---XX ----- GEF XX ---XX ----- JDT XX --XXX -X --X Mylyn XXXXXXXXX X X - WebTools -X ----X ---X - RSE -XXX --XX ---- EclipseLink -X ----X --X -- PDE -X --XXX -X --X Datatools -X ----X ----- CDT --XX ---X ---- BIRT ------X ----- GMF -----X ------ PTP -------X ---- Scout --------X --- Jubula ---------X -- RAP ----X ------- WindowBuilder X ----------- Maven X ----------- SVN ------------ SVN15 ------------ SVN16 ------------ (b) Pair-wise Covering Array Feature\Product 1 2 3 4 5 6 7 8 9 10111213 EclipseIDE XXXXXXXXX X X X X RCP Platform XXXXXXXXX X X X X CVS -X -X -X --X ---- EGit -X --XXX --X --- EMF -XXXX --XX X X X - GEF --XXX -XXX --X - JDT -XXXX -X -X X -X - Mylyn -X -X --XX ----- WebTools --XXX -X --X X -- RSE -XX -XX ------- EclipseLink -XX --X -XX ---- PDE -X -XX -X -X --X - Datatools -XXXX ---X -X X - CDT --XX -X -XX ---- BIRT ---XX ---X --X - GMF --XXX --XX ---- PTP --X -XX -XX ---- Scout -X -X --X -X ---- Jubula --XX -X -X -X --- RAP -XX --XX -X ---- WindowBuilder -X -X -X -X ----- Maven -XX -----X X --- SVN -X --XXXXX -X -X SVN15 -X --X --X ----X SVN16 -----XX -X -X --
There are three main stages in the application of combinatorial interaction testing to a product line. First, the feature model of the system must be made. Second, the t-wise covering array must be generated. We have developed an algorithm that can generate such arrays from large features models [3]6 . These products must then be generated or physically built. Last, a single system testing technique must be selected and applied to each product in this covering array.
Table 1b shows the 13 products that must be tested to ensure that every pairwise interaction between the features in the running example functions correctly. Each row represents one feature and every column one product. 'X' means that the feature is included for the product, '-' means that the feature is not included. Some features are included for every product because they are core features, and some pairs are not covered since they are invalid according to the feature model.
Testing the products in a pair-wise covering array is called 2-wise testing, or pair-wise testing. This is a special case of t-wise testing where t = 2. t-wise testing is to test the products in a covering array of strength t. 1-wise coverage means that every feature is both included and excluded in at least one product, 2-wise coverage means that every valid combination of two feature assignments appears in the covering array, etc. For our running example, 3, 13 and 40 products are sufficient to achieve 1-wise, 2-wise and 3-wise coverage, respectively.
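To make the coverage notion concrete, here is a small illustrative checker (not part of the cited generation algorithm) that reports which pairs of feature assignments realizable by some valid product are missing from a candidate covering array; enumerating all valid products is only feasible for small feature models, whereas real generators work from the feature model directly.

```python
from itertools import combinations

def uncovered_pairs(covering_array, valid_products, features):
    """Pairs of feature assignments occurring in some valid product but in no covering-array product.
    Each configuration is a dict feature -> bool."""
    def pairs(config):
        return {((f1, config[f1]), (f2, config[f2]))
                for f1, f2 in combinations(features, 2)}
    required = set().union(*(pairs(c) for c in valid_products))
    covered = set().union(*(pairs(c) for c in covering_array))
    return required - covered
```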
Empirical Motivation An important motivation for combinatorial interaction testing is a paper by Kuhn et al. 2004 [16]. They indicated empirically that most bugs are found for 6-wise coverage, and that for 1-wise one is likely to find on average around 50%, for 2-wise on average around 70%, and for 3-wise around 95%, etc.
Garvin and Cohen 2011 [START_REF] Garvin | Feature interaction faults revisited: An exploratory study[END_REF] did an exploratory study on two open source product lines. They extracted 28 faults that could be analyzed and which was configuration dependent. They found that three of these were true interaction faults which require at least two specific features to be present in a product for the fault to occur. Even though this number is low, they did experience that interaction testing also improves feature-level testing, that testing for interaction faults exercised the features better. These observations strengthen the case for combinatorial interaction testing.
Steffens et al. 2012 [START_REF] Steffens | Industrial evaluation of pairwise spl testing with moso-polite[END_REF] did an experiment at Danfoss Power Electronics. They tested the Danfoss Automation Drive, which has a total of 432 possible configurations. They generated a 2-wise covering array of 57 products and compared testing it to testing all 432 products, which is possible because of the relatively small size of the product line. They mutated each feature with a number of mutations and ran the test suites for all products and for the 2-wise covering array. They found that 97.48% of the mutated faults are found with 2-wise coverage.
Proposed Technique
We address two problems with combinatorial interaction testing of software product lines in our proposed technique. A generic algorithm for automatically performing the technique is presented in Section 3.2, an evaluation of it is presented in Section 4 and a discussion of benefits and limitations presented in Section 5.
-The functioning of created test artifacts is sensitive to changes in the feature model: The configurations in a covering array can be drastically different with the smallest change to the feature model. Thus, each product must be built anew and the single system test suites changed manually. Thus, plain combinatorial interaction testing of software product lines is not agile. This limits it from effectively being used during development.
-Which tests should be executed on the generated products: In ordinary combinatorial interaction testing, a new test suite must be made for a unique product. It does not specify how to generate a complete test suite for a product.
Idea
Say we have a product line in which two features, A and B, are both optional and mutually optional. This means that there are four situations possible: Both A and B are in the product, only A or only B is in the product and neither is in the product. These four possibilities are shown in Table 2a.
Feature\Situation 1 2 3 4
A                 X X - -
B                 X - X -
(b) Triples
Feature\Situation 1 2 3 4 5 6 7 8
A                 X X X X - - - -
B                 X X - - X X - -
C                 X - X - X - X -
If we have a test suite that tests feature A, T estA, and another test suite that tests feature B, T estB, the following is what we expect: (1) When both feature A and B are present, we expect T estA and T estB to succeed. (2) When just feature A is present, we expect T estA to succeed. (3) Similarly, when just feature B is present, we expect T estB to succeed. (4) Finally, when neither feature is present, we expect the product to continue to function correctly. In all four cases we expect the product to build and start successfully.
Similar reasoning can be made for 3-wise and higher testing, which cases are shown in Table 2b. For example, for situation 1, we expect T estA, T estB and T estC to pass, in situation 2, we expect T estA and T estB to pass, which means that A and B work in each other's presence and that both work without C. This kind of reasoning applies to the rest of the situations in Table 2b and to higher orders of combinations.
Algorithm for Implementation
The theory from Section 3.1 combined with existing knowledge about combinatorial interaction testing can be utilized to construct a testing technique. Algorithm 1 shows the pseudo-code for the technique.
The general idea is, for each product in a t-wise covering array, to execute the test suites related to the included features. If a test suite fails for one configuration, but succeeds for another, we can know that there must be some kind of interaction disturbing the functionality of the feature.
In Algorithm 1, line 1, the covering array of strength t of the feature model F M is generated and the set of configurations are placed in CA t . At line 2, the algorithm iterates through each configuration. At line 3, a product is constructed from the configuration c. P G is an object that knows how to construct a product from a configuration; making this object is a one-time effort. The product is placed in p. If the construction of the product failed, the result is placed in the result table, ResultT able. The put operation on ResultT able takes three parameters, the result, the column and the row. The row parameter can be an asterisk, '*', indicating that the result applies to all rows.
If the build succeeded, the algorithm continues at line 7 where the algorithm iterates through each test suite, test, of the product line, provided in a set T ests. At line 8, the algorithm takes out the feature, f , that is tested by the test suite test. The algorithm finds that in the object containing the Test-Feature-Mapping, T F M . At line 9, if this feature f is found to be included in the current Algorithm 1 Pseudo Code of the Automatic and Agile Testing Algorithm end for 14:
configuration, c, then, at line 10, the test suite is run. The results from running the test are placed in the result table 7 , line 11.
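As a reading aid, here is a Python-style sketch that follows the line-by-line description of Algorithm 1 given above; PG, TFM, Tests and ResultTable are the objects named in the text, while the exact method names are illustrative assumptions, not the paper's own code.

```python
def run_interaction_tests(FM, t, Tests, PG, TFM, ResultTable):
    CA_t = generate_covering_array(FM, t)          # line 1: t-wise covering array of FM
    for c in CA_t:                                 # line 2: each configuration
        p = PG.build(c)                            # line 3: construct the product
        if p is None:                              # build failed
            ResultTable.put("build failed", c, "*")
            continue
        for test in Tests:                         # line 7: each test suite
            f = TFM.feature_for(test)              # line 8: feature tested by this suite
            if f in c:                             # line 9: feature included in c?
                result = test.run(p)               # line 10: run the test suite
                ResultTable.put(result, c, test)   # line 11: record the result
```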
Result Analysis
Results stored in a result table constructed by Algorithm 1 allow us to do various kinds of analysis to identify the possible causes of the problems.
Attributing the cause of a fault These examples show how analysis of the result can proceed:
-If we have a covering array of strength 1, CA 1 , of a feature model F M : If a build fails whenever f 1 is not included, we know that f 1 is a core feature.
-If we have a covering array of strength 2, CA 2 , of a feature model F M in which feature f 1 and f 2 are independent on each other: If, ∀c ∈ CA 2 where both f 1 and f 2 are included, the test suite for f 1 fails, while where f 1 is included and f 2 is not, then the test suite of f 1 succeeds, we know that the cause of the problem is a disruption of f 1 caused by the inclusion f 2 .
-If we have a covering array of strength 2, CA 2 , of a feature model F M in which feature f 1 and f 2 are not dependent on each other: If, ∀c ∈ CA 2 where both f 1 and f 2 are included, the test suites for both f 1 and f 2 succeed, while where f 1 is included and f 2 is not, then the test suite of f 1 fails, we know that the cause of the problem is a hidden dependency from f 1 to f 2 .
These kinds of analysis are possible for all the combinations of successes and failures of the features for the various kinds of interaction-coverages.
Of course, if there are many problems with the product line, then several problems might overshadow each other. In that case, the tester must look carefully at the error given by the test case to find out what the problem is. For example, if every build with f 1 included fails that will overshadow a second problem that f 2 is dependent on f 1 .
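As an illustration of such reasoning, a small helper along the following lines could scan a result table for the hidden-dependency pattern described above; the data layout (configurations as frozensets of included features, results as a dict mapping (configuration, feature) to "pass"/"fail") is our own assumption.

```python
def hidden_dependencies(results, configs, features):
    """Report pairs (f1, f2) where f1's tests pass whenever f2 is present
    but fail in some product where f2 is absent (a hidden dependency f1 -> f2)."""
    suspects = []
    for f1 in features:
        for f2 in features:
            if f1 == f2:
                continue
            with_f2 = [results.get((c, f1)) for c in configs if f1 in c and f2 in c]
            without_f2 = [results.get((c, f1)) for c in configs if f1 in c and f2 not in c]
            if with_f2 and all(r == "pass" for r in with_f2) and any(r == "fail" for r in without_f2):
                suspects.append((f1, f2))
    return suspects
```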
Guarantees It is uncommon for a testing technique to have guarantees, but there are certain errors in the feature model that will be detected.
-Feature f is not specified to be a core feature in the feature model but is in the implementation. This is guaranteed to be identified using a 1-wise covering array: There will be a product in the covering array with f not included that will not successfully build, start or run. -Feature f 1 is not dependent on feature f 2 in the feature model, but there is a dependency in the code. This is guaranteed to be identified using a 2-wise covering array. There will be a product in the 2-wise covering array with f 1 included and f 2 not included that will not pass the test suite for feature f 1 .
Evaluation with two Applications and Results
Application to ABB's "Safety Module"
About the ABB Safety Module The ABB Safety Module is a physical component that is used in, among other things, cranes and assembly lines, to ensure safe reaction to events that should not occur, such as the motor running too fast, or that a requested stop is not handled as required. It includes various software configurations to adapt it to its particular use and safety requirements. A simulated version of the ABB Safety Module was built-independently of the work in this paper-for experimenting with testing techniques. It is this version of the ABB Safety Module which testing is reported in this paper.
Basic Testing of Sample Products Figure 2a shows the feature model of the Safety Module. There are in total 640 possible configurations. Three of these are set up in the lab for testing purposes during development. These are shown in Figure 2b and are, of course, valid configurations of the feature model of the ABB Safety Module, Figure 2a.
The products are tested thoroughly before they are delivered to a customer. Five test suites are named in the left part of Table 3a; the right side names the feature that the test suite tests.
We ran the relevant tests from Table 3a. The results from running the relevant test suite of each relevant product are shown in Table 3b. The table shows a test suite in each row and a product in each column. When the test suite tests a feature not present in the product, the entry is blank. When the test suite tests a feature in the product, the error count is shown. All test runs gave zero errors, meaning that they were successful for the three sample products. This is also what we expected since these three test products have been used during development of the simulation model to see that it functions correctly. Testing Interactions Systematically The three sample products are three out of 640 possible products. Table 4a shows the 11 products that need to be tested to ensure that every pair of features is tested for interaction faults; that is, the 2-wise covering array of Figure 2a. We built these products automatically and ran the relevant automatic test suite on them. Table 4b shows the result from running each relevant test suite on each product of Table 4a. If the features interact correctly, we expect that there would be no error.
Feature\Product a1 a2 a3 SafetyDrive X X X SafetyModule X X X CommunicationBus X X X SafetyFunctions X X X StoppingFunctions X X X STO X X X SS1 X X X Limit Values --- Other X X X SSE X X X SAR X X - SLS X X X SBC X X X SBC Present X -X SBC during STO --- SBC after STO --X SBC before STO X -- SBC Absent -X - SMS X X X SIL X X X Level2 -X - Level3 X -X
As we can see, products 2, 3, 7 and 8 did not compile correctly. This proved to be because for certain configurations, the CVL variability model was built incorrectly, producing a faulty code that does not compile.
For product 9, the test suite for the SMS ("Safe Maximum Speed") feature failed. This is interesting, because it succeeded for products 4 and 5. We investigated the problem and found that the SMS feature does not work if the brake is removed from the ABB Safety Module. This is another example of an interaction fault. It occurs when SMS is present and the brake is absent. The inclusion of SBC Absent means that there is no brake within the implementation.
XXXXXXXXXX X SafetyModule XXXXXXXXXX X CommunicationBus XXXXXXXXXX X SafetyFunctions XXXXXXXXXX X StoppingFunctions XXXXXXXXXX X STO XXXXXXXXXX X SS1 --XX -X -XXX - Limit Values -X -X -XXX --X Other -XX -XXXXXX X SSE --X --XXXX -- SAR -X --XXX ---X SBC -XX -X -XXXX X SBC Present -XX -X --XX -X SBC after STO ----X --X --- SBC during STO -X ------X -- SBC before STO --X -------X SBC Absent ------X --X - SMS --X -XX --XX - SLS -XX --X -X -X - SIL XXXXXXXXXX X Level2 --XX --XXX -- Level3 XX --XX ---X X (
Implementation with CVL
The pseudo-algorithm for implementing the technique with the CVL [START_REF] Haugen | Adding standardized variability to domain specific languages[END_REF] tool suite is shown as Algorithm 2. It is this implementation that was used to test the ABB Safety Module8 . The algorithm assumes that the following is given to it: a CVL variability model object, V M ; a coverage strength, t; a list of tests, tests; and that a mapping between the tests and the features, T F M . 9The algorithm proceeds by first generating a t-wise covering array and setting them up as resolution models in the CVL model, VM. The CVL model contains bindings to the executable model artifacts for the ABB Safety Module. Everything that is needed is reachable from the CVL model. It can thus be used to generate the executable product simulation models; the set of product models is placed in P . The algorithm then loops through each product p. For each product, it sees if the build succeeded. If it did not, that is noted in resultT able. If the build succeeded, the algorithm runs through each test from the test set provided. If the feature the test tests is present in the product, run the test and record the result in the proper entry in resultT able. The result table we got in the experiment with the ABB Safety Module is shown in Table 4b.
Algorithm 2 Pseudo Code of CVL-based version of Algorithm 1
1: VM.GenerateCoveringArray(t)
2: P ← VM.GenerateProducts()
3: for each product p in P do
4:   if p build failed then
5:     resultTable.put("build failed", p, *)
Application to the Eclipse IDEs
The Eclipse IDE product line was introduced earlier in this paper: The feature model is shown in Figure 1, and a 2-wise covering array was shown in Table 1b.
The different features of the Eclipse IDE are developed by different teams, and each team has test suites for their feature. Thus, the mapping between the features and the test suites are easily available.
The Eclipse Platform comes with built-in facilities for installing new features. We can start from a new copy of the bare Eclipse Platform, which is an Eclipse IDE with just the basic features. When all features of a product have been installed, we can run the test suite associated with each feature.
We implemented Algorithm 1 for the Eclipse Platform plug-in system and created a feature mapping for 36 test suites. The result of this execution is shown in Table 5b. This experiment10 took in total 10.8 GiB of disk space; it consisted of 40,744 tests and resulted in 417,293 test results that took over 23 hours to produce on our test machine.
In Table 5b, the first column contains the results from running the 36 test suites on the released version of the Eclipse IDE for Java EE developers. As expected, all tests pass, as would be expected since the Eclipse project did test this version with these tests before releasing it.
The next 13 columns show the result from running the tests of the products of the complete 2-wise covering array of the Eclipse IDE product line. The blank cells are cells where the feature was not included in the product. The cells with a '-' show that the feature was included, but there were no tests in the test setup for this feature. The cells with numbers show the number of errors produced by running the tests available for that feature. The algorithm assumes that the following is given: a feature model, F M , and a coverage strength, t.
In the experiment in the previous section we provided the feature model in Figure 1. The algorithm loops through each configuration in the covering array; in the experiment, this was the covering array given in Table 1b. For each configuration, a version of Eclipse is constructed: the basic Eclipse Platform is distributed as a package, and this package can be extracted into a new folder and is then ready to use. It contains the capabilities that allow each feature and test suite to be installed automatically using the following command:
<eclipse executable> -application org.eclipse.equinox.p2.director -repository <repository1,...> -installIU <feature name>
Similar commands allow tests to be executed.
A mapping file provides the links between the features and the test suites. This allows Algorithm 3 to select the relevant tests for each product and run them against the build of the Eclipse IDE. The results are put into its entry in the result table. The results from the algorithm are in a table like the one given in the experiment, shown in Table 5b.
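To give an impression of how such a driver can be scripted around the p2 director command quoted above, here is a hedged Python sketch; the paths, repository URLs, test-launch arguments and result collection are placeholders, not the actual scripts used in the experiment.

```python
import shutil
import subprocess

def build_and_test(config, feature_repos, test_for_feature, eclipse_zip, workdir):
    """Extract a fresh platform, install the features of one configuration, run the mapped tests."""
    shutil.unpack_archive(eclipse_zip, workdir)            # fresh bare Eclipse Platform
    eclipse = f"{workdir}/eclipse/eclipse"                 # placeholder executable path
    results = {}
    for feature in config:                                 # install each included feature
        subprocess.run([eclipse, "-application", "org.eclipse.equinox.p2.director",
                        "-repository", feature_repos[feature],
                        "-installIU", feature], check=True)
    for feature in config:                                 # run the mapped test suite, if any
        test_app = test_for_feature.get(feature)
        if test_app:
            # the exact test-launch arguments differ per suite; this is only a placeholder
            proc = subprocess.run([eclipse, "-application", test_app], capture_output=True)
            results[feature] = proc.returncode
    return results
```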
Limitations
-Emergent features: Emergent features are features that emerge from the combination of two or more features. Our technique does not test that an emergent feature works in relation to other features. -Manual Hardware Product Lines: Product line engineering is also used for hardware systems. Combinatorial interaction testing is also a useful technique to use for these products lines [START_REF] Johansen | Generating Better Partial Covering Arrays by Modeling Weights on Sub-Product Lines[END_REF]; however, the technique described in this paper is not fully automatic when the products must be set up manually. -Quality of the automated tests: The quality of the results of the technique is dependent on the quality of the automated tests that are run for the features of the products. -Feature Interactions: A problem within the field of feature interaction testing is how to best create tests as to identify interaction faults occur between two or more concrete features, the feature interaction problem [START_REF] Bowen | The feature interaction problem in telecommunications systems[END_REF].
Although an important problem, it is not what our technique is for. Our technique covers all simple interactions and gives insight into how they work together.
Conclusion
In this paper we presented a new technique for agile and automatic interaction testing for product lines. The technique allows developers of product lines to set up automatic testing as a part of their continuous integration framework to gain insight into potential interaction faults in their product line.
The technique was evaluated by presenting the results from two applications of it: one to a simulation model of the ABB safety module product line using the CVL tool suite, and one to the Eclipse IDE product lines using the Eclipse Platform plug-in system. The cases show how the technique can identify interaction faults in product lines of the size and complexity found in industry.
Fig. 2: ABB Safety Module Product Line (a) Feature Model
Table 2: Feature Assignment Combinations
(a) Pairs
Table 3: (a) Feature-Test Mapping; (b) Test errors
Unit-Test Suite    | Feature        || Test\Product      | a1 a2 a3
GeneralStartUp     | SafetyDrive    || GeneralStartUp    | 0  0  0
Level3StartUpTest  | Level3         || Level3StartUpTest | 0     0
TestSBC After      | SBC after STO  || TestSBC After     |       0
TestSBC Before     | SBC before STO || TestSBC Before    | 0
TestSMS            | SMS            || TestSMS           | 0  0  0
Table 4: Test Products and Results for Testing the Safety Module
(a) 2-wise Covering Array
Feature\Product 0 1 2 3 4 5 6 7 8 9 10
SafetyDrive
Table 5: Tests and Results for Testing the Eclipse IDE Product Line (Figure 1), Using the 2-wise Covering Array of Table 1b
(a) Tests
Test Suite Tests Time(s)
EclipseIDE 0 0
RCP Platform 6,132 1,466
CVS 19 747
EGit 0 0
EMF 0 0
GEF 0 0
JDT 33,135 6,568
Mylyn 0 0
WebTools 0 0
RSE 0 0
EclipseLink 0 0
PDE 1,458 5,948
Datatools 0 0
CDT 0 0
BIRT 0 0
GMF 0 0
PTP 0 0
Scout 0 0
Jubula 0 0
RAP 0 0
WindowBuilder 0 0
Maven 0 0
SVN 0 0
SVN15 0 0
SVN16 0 0
Total 40,744 14,729
5 Benefits and Limitations

Benefits

- Usable: The technique is a fully usable software product line testing technique: it scales, and free, open-source algorithms and software exist for doing all the automatic parts of the technique. 13
- Agile: The technique is agile in that, once set up, a change in a part of the product line or to the feature model will not cause any additional manual work. The product line tests can be rerun with one click throughout development. (Of course, if a new feature is added, a test suite for that feature should be developed.)
- Continuous Integration: The technique fits well into a continuous integration framework. At any point in time, the product line can be checked out from the source code repository, built, and the testing technique run. For example, the Eclipse project uses Hudson [19] to check out, build and test the Eclipse IDE and its dependencies at regular intervals. Our technique can be set up to run on Hudson and every night produce a result table with possible interaction faults in a few hours on suitable hardware.
- Tests the feature model: The technique tests the feature model in that errors might be found after a change of it; for example, that a mandatory relationship is missing, causing a feature to fail.
- Automatic: The technique is fully automatic except for making the test suites for each feature, a linear effort with respect to the number of features, and making the build scripts of a custom product, a one-time effort.
- Implemented: The technique has been implemented and used for CVL-based product lines and for Eclipse-based product lines, as described in Section 4.
- Run even if incomplete: The technique can be run even if the product line test suites are not fully developed yet. It supports running a partial test suite; for example, when only half of the test suites for the features are present, one still gets some level of verification. Likewise, if a new feature is added to the product line, a new test suite is not needed to be able to analyze the interactions between it and the other features, using the other features' test suites.
- Parallel: The technique is intrinsically parallel. Each product in the covering array can be tested by itself on a separate node (see the sketch below). For example, executing the technique for the Eclipse IDE could have taken approximately 1/13th of the time if executed on 13 nodes, taking approximately 2 hours instead of 23.
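The parallel execution noted in the last bullet needs very little machinery. The sketch below only illustrates the idea and is not the paper's implementation: `build` and `run_suite` are placeholder callables standing in for the product line's own build scripts and per-feature unit-test suites, and a covering-array product is represented as a feature-to-boolean mapping. Each returned dictionary corresponds to one column of a result table such as Table 4b or 5b.

```python
from functools import partial
from multiprocessing import Pool

def test_product(product, build, run_suite):
    """Build one covering-array product and run the test suite of every feature it includes.
    Returns a result-table row: number of test errors per included feature."""
    build(product)
    return {feature: run_suite(feature)
            for feature, selected in product.items() if selected}

def test_product_line(products, build, run_suite, workers=4):
    """Test every product of the covering array, one product per worker process."""
    with Pool(workers) as pool:
        return pool.map(partial(test_product, build=build, run_suite=run_suite), products)
```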
http://www.eclipse.org/downloads/compare.php, retrieved 2012-04-12.
http://eclipse.org/downloads/, retrieved 2012-04-12.
See [START_REF] Johansen | An Algorithm for Generating t-wise Covering Arrays from Large Feature Models[END_REF] for a definition of covering arrays and for an algorithm for generating them.
Two examples of result tables are shown later, in Tables 4b and 5b.
The source code for this implementation including its dependencies is found on the paper's resource website: http://heim.ifi.uio.no/martifag/ictss2012/.
All these are available on the paper's resource website.
The experiment was performed on Eclipse Indigo 3.7.0. The computer on which we did the measurements had an Intel Q9300 CPU @2.53GHz, 8 GiB, 400MHz RAM and the disk ran at 7200 RPM.
We will report the failing test cases and the relevant configuration to the Eclipse project, along with the technique used to identify them.
The source code for this implementation including its dependencies is available through the paper's resource website, along with the details of the test execution and detailed instructions and scripts to reproduce the experiment.
Software and links available on the paper's resource website: http://heim.ifi.uio. no/martifag/ictss2012/.
Acknowledgments. The work presented here has been developed within the VERDE project ITEA 2 -ip8020. VERDE is a project within the ITEA 2 -Eureka framework.
The authors would like to thank the anonymous reviewers for their helpful feedback.
Products 4-5, 7 and 11-12 pass all relevant tests. For both features CVS and PDE, all products pass all tests. For products 2-3 and 9-10, the JDT test suites produce 11, 8, 5 and 3 errors, respectively. For the RCP Platform test suites, there are varying numbers of errors for products 1-3, 6, 8-10 and 13.
We executed the test several times to ensure that the results were not coincidental, and we did look at the execution log to make sure that the problems were not caused by the experimental set up such as file permissions, lacking disk space or lacking memory. We did not try to identify the concrete bugs behind the failing test cases, as this would require extensive domain knowledge that was not available to us during our research. 11
Implementation with Eclipse Platform's Plug-in System
Algorithm 3 shows the algorithm of our testing technique for the Eclipse Platform plug-in system 12 . | 36,711 | [
"1003390",
"1003391",
"830592",
"1003392",
"1003393",
"1003394"
] | [
"86695",
"50791",
"86695",
"86695",
"487500",
"487500",
"487500"
] |
01482426 | en | [
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01482426/file/978-3-642-34691-0_11_Chapter.pdf | Hengyi Yang
Bardh Hoxha
email: bhoxha@asu.edu
Georgios Fainekos
email: fainekos@asu.edu
Querying Parametric Temporal Logic Properties on Embedded Systems
In Model Based Development (MBD) of embedded systems, it is often desirable to not only verify/falsify certain formal system specifications, but also to automatically explore the properties that the system satisfies. Namely, given a parametric specification, we would like to automatically infer the ranges of parameters for which the property holds/does not hold on the system. In this paper, we consider parametric specifications in Metric Temporal Logic (MTL). Using robust semantics for MTL, the parameter estimation problem can be converted into an optimization problem which can be solved by utilizing stochastic optimization methods. The framework is demonstrated on some examples from the literature.
Introduction
Software development for embedded control systems is particularly challenging. The software may be distributed with real time constraints and must interact with the physical environment in non trivial ways. Multiple incidents and accidents of safety critical systems [START_REF] Lions | Ariane 5, flight 501 failure[END_REF][START_REF] Hoffman | The near rendezvous burn anomaly of december 1998[END_REF] reinforce the need for design, verification and validation methodologies that provide a certain level of confidence in the system correctness and robustness.
Recently, there has been a trend to develop software for safety critical embedded control systems using the Model Based Design (MBD) paradigm. Among the benefits of the MBD approach is that it provides the possibility for automatic code generation. Based on a level of confidence on the automatic code generation process, some of the system verification and validation can be performed at earlier design stages using only models of the system. Due to the importance of the problem, there has been a substantial level of research on testing and verification of models of embedded and hybrid systems (see [START_REF] Tripakis | Modeling, Verification and Testing using Timed and Hybrid Automata[END_REF] for an overview).
In [START_REF] Nghiem | Monte-carlo techniques for falsification of temporal properties of non-linear hybrid systems[END_REF], we investigated a new approach for testing embedded and hybrid systems against formal requirements in Metric Temporal Logic (MTL) [START_REF] Koymans | Specifying real-time properties with metric temporal logic[END_REF]. Our work was premised on the need to express complex design requirements in a formal logic for both requirements analysis and requirements verification. Based on the concept of robustness of MTL specifications [START_REF] Fainekos | Robustness of temporal logic specifications for continuous-time signals[END_REF], we were able to pose the property falsification/testing problem as an optimization problem. In particular, robust MTL semantics provide the user with an application depended measure of how far a system behavior is from failing to satisfy a requirement. Therefore, the goal of an automatic test generator is to produce a sequence of tests by gradually reducing that positive measure until a system behavior with a negative robustness measure is produced. In other words, we are seeking to detect system behaviors that minimize the specification robustness measure.
Unfortunately, the resulting optimization problem is non-linear and nonconvex, in general. Moreover, embedded system models frequently contain black boxes as subcomponents. Thus, only stochastic optimization techniques can be employed for solving the optimization problem and, in turn, for solving the initial falsification problem. In our previous research [START_REF] Sankaranarayanan | Falsification of temporal properties of hybrid systems using the cross-entropy method[END_REF][START_REF] Annapureddy | Ant colonies for temporal logic falsification of hybrid systems[END_REF][START_REF] Nghiem | Monte-carlo techniques for falsification of temporal properties of non-linear hybrid systems[END_REF], we have explored the applicability of various stochastic optimization methods to the MTL falsification problem with great success.
In this work, we take the MTL falsification method one step further. Namely, not only would we like to detect a falsifying behavior if one exists, but also we would like to be able to explore and determine system properties. Such a property exploration framework can be of great help to the practitioner. In many cases, the system requirements are not well formalized or understood at the initial system design stages. Therefore, if the specification can be falsified, then it is natural to ask for what parameter values the system still falsifies the specification.
In more detail, given an MTL specification with an unknown or uncertain parameter [START_REF] Asarin | Parametric identification of temporal properties[END_REF], we automatically formulate an optimization problem whose solution provides a range of values for the parameter such that the specification does not hold on the system. In order to solve the resulting optimization problem, we utilize our MTL falsification toolbox S-TaLiRo [START_REF] Annapureddy | S-taliro: A tool for temporal logic falsification for hybrid systems[END_REF], which contains a number of stochastic optimization methods [START_REF] Sankaranarayanan | Falsification of temporal properties of hybrid systems using the cross-entropy method[END_REF][START_REF] Annapureddy | Ant colonies for temporal logic falsification of hybrid systems[END_REF][START_REF] Nghiem | Monte-carlo techniques for falsification of temporal properties of non-linear hybrid systems[END_REF]. Finally, we demonstrate our framework on a challenge problem from the industry [START_REF] Chutinan | Dynamic analysis of hybrid system models for design validation[END_REF] and we present some experimental results on a small number of benchmark problems.
Problem Formulation
In this work, we take a general approach in modeling real-time embedded systems that interact with physical systems that have non-trivial dynamics. In the following, we will be using the term hybrid systems or Cyber-Physical Systems (CPS) for such systems to stress the interconnection between the embedded system and the physical world.
We fix N ⊆ N, where N is the set of natural numbers, to be a finite set of indexes for the finite representation of a system behavior. In the following, given two sets A and B, B^A denotes the set of all functions from A to B. That is, for any f ∈ B^A we have f : A → B.
We view a system Σ as a mapping from a compact set of initial operating conditions X 0 and input signals U ⊆ U N to output signals Y N and timing (or sampling) functions T ⊆ R N + . Here, U is a compact set of possible input values at each point in time (input space), Y is the set of output values (output space), R is the set of real numbers and R + the set of positive reals.
We impose three assumptions / restrictions on the systems that we consider:
1. The input signals (if any) must be parameterizable using a finite number of parameters. That is, there exists a function U such that for any u ∈ U there exist two parameter vectors λ = [λ_1 ... λ_m]^T ∈ Λ, where Λ is a compact set, and t = [t_1 ... t_m]^T ∈ R^m_+, such that m ≪ max N and, for all i ∈ N, u(i) = U(λ, t)(i).
2. The output space Y must be equipped with a generalized metric d which contains a subspace Z equipped with a metric d. 3. For a specific initial condition x 0 and input signal u, there must exist a unique output signal y defined over the time domain R. That is, the system Σ is deterministic.
Further details on the necessity and implications of the aforementioned assumptions can be found in [START_REF] Abbas | Probabilistic temporal logic falsification of cyber-physical systems[END_REF]. Under Assumption 3, a system Σ can be viewed as a function ∆ Σ : X 0 ×U → Y N × T which takes as an input an initial condition x 0 ∈ X 0 and an input signal u ∈ U and it produces as output a signal y : N → Y (also referred to as trajectory) and a timing function τ : N → R + . The only restriction on the timing function τ is that it must be a monotonic function, i.e., τ (i) < τ (j) for i < j. The pair µ = (y, τ ) is usually referred to as a timed state sequence, which is a widely accepted model for reasoning about real time systems [START_REF] Alur | Real-Time Logics: Complexity and Expressiveness[END_REF]. A timed state sequence can represent a computer simulated trajectory of a CPS or the sampling process that takes place when we digitally monitor physical systems. We remark that a timed state sequence can represent both the internal state of the software/hardware (usually through an abstraction) and the state of the physical system. The set of all timed state sequences of a system Σ will be denoted by L(Σ). That is,
L(Σ) = {(y, τ ) | ∃x 0 ∈ X 0 . ∃u ∈ U . (y, τ ) = ∆ Σ (x 0 , u)}.
Our high level goal is to explore and infer properties that the system Σ satisfies by observing its response (output signals) to particular input signals and initial conditions. We assume that the system designer has some partial understanding about the properties that the system satisfies or does not satisfy and he/she would like to be able to precisely determine these properties. In particular, we assume that the system developer can formalize the system properties in Metric Temporal Logic (MTL) [START_REF] Koymans | Specifying real-time properties with metric temporal logic[END_REF], but some parameters are unknown. Such parameters could be unknown threshold values for the continuous state variables of the hybrid system or some unknown real time constraints.
Example 1 As a motivating example, we will consider a slightly modified version of the Automatic Transmission model provided by Mathworks as a Simulink demo 1 . Further details on this example can be found in [START_REF] Zhao | Generating test inputs for embedded control systems[END_REF][START_REF] Fainekos | Verification of automotive control applications using s-taliro[END_REF][START_REF] Abbas | Probabilistic temporal logic falsification of cyber-physical systems[END_REF].
The only input u to the system is the throttle schedule, while the break schedule is set simply to 0 for the duration of the simulation which is T = 30 sec.
The physical system has two continuous-time state variables which are also its outputs: the speed of the engine ω (RPM) and the speed of the vehicle v, i.e., Y = R 2 and y(t) = [ω(t) v(t)] T for all t ∈ [0, 30]. Initially, the vehicle is at rest at time 0, i.e., X 0 = {[0 0] T } and x 0 = y(0) = [0 0] T . Therefore, the output trajectories depend only on the input signal u which models the throttle, i.e., (y, τ ) = ∆ Σ (u). The throttle at each point in time can take any value between 0 (fully closed) to 100 (fully open). Namely, u(i) ∈ U = [0, 100] for each i ∈ N . The model also contains a Stateflow chart with two concurrently executing Finite State Machines (FSMs) with 4 and 3 states, respectively. The FSMs model the logic that controls the switching between the gears in the transmission system. We remark that the system is deterministic, i.e., under the same input u, we will always observe the same output y.
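As an illustration of assumption 1 and of the input shown in Fig. 1, a piecewise-constant throttle schedule can be generated from a small parameter vector. This is only a sketch: the level values below are made up for illustration and the helper is not part of the authors' toolchain.

```python
import numpy as np

def piecewise_constant_input(levels, switch_times, t_grid):
    """u(t) = levels[k] for switch_times[k] <= t < switch_times[k+1]; the last level holds to the end."""
    idx = np.searchsorted(switch_times, t_grid, side="right") - 1
    return np.asarray(levels, dtype=float)[np.clip(idx, 0, len(levels) - 1)]

tau = np.linspace(0.0, 30.0, 301)              # sampling instants tau(i) over T = 30 s
levels = [55.0, 20.0, 99.0, 35.0, 80.0, 10.0]  # lambda in [0, 100]^6 (hypothetical values)
u = piecewise_constant_input(levels, [0, 5, 10, 15, 20, 25], tau)
```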
In our previous work [START_REF] Abbas | Probabilistic temporal logic falsification of cyber-physical systems[END_REF][START_REF] Annapureddy | S-taliro: A tool for temporal logic falsification for hybrid systems[END_REF][START_REF] Sankaranarayanan | Falsification of temporal properties of hybrid systems using the cross-entropy method[END_REF], on such models, we demonstrated how to falsify requirements like: "The vehicle speed v is always under 120km/h or the engine speed ω is always below 4500RPM." A falsifying system trajectory appears in Fig. 1. In this work, we provide answers to queries like "What is the fastest time that ω can exceed 3250 RPM" or "For how long can ω be below 4500 RPM".
Formally, in this work, we solve the following problem.
Problem 1 (Temporal Logic Parameter Estimation Problem) Given an MTL formula φ[θ] with a single unknown parameter θ ∈ Θ = [θ_m, θ_M] ⊆ R, a hybrid system Σ, and a maximum testing time T, find an optimal range Θ* = [θ*_m, θ*_M] such that for any ζ ∈ Θ*, φ[ζ] does not hold on Σ, i.e., Σ ⊭ φ[ζ].

An overview of our proposed solution to Problem 1 appears in Fig. 2. The sampler produces a point x_0 from the set of initial conditions, a parameter vector λ that characterizes the control input signal u, and a parameter θ. The vectors x_0 and λ are passed to the system simulator, which returns an execution trace (output trajectory and timing function). The trace is then analyzed by the MTL robustness analyzer, which returns a robustness value representing the best estimate for the robustness found so far. In turn, the robustness score computed is used by the stochastic sampler to decide on the next input to analyze. The process terminates after a maximum number of tests or when no improvement on the parameter estimate θ has been made after a number of tests.
Robustness of Metric Temporal Logic Formulas
Metric Temporal Logic (MTL) was introduced in [START_REF] Koymans | Specifying real-time properties with metric temporal logic[END_REF] in order to reason about the quantitative timing properties of boolean signals. In the following, we present directly MTL in Negation Normal Form (NNF) since this is needed for the presentation of the new results in Section 5. We denote the extended real number line by R = R ∪ {±∞}.
Definition 1 (Syntax of MTL in NNF) Let R be the set of truth degree constants, AP be the set of atomic propositions and I be a non-empty non-singular interval of R ≥0 . The set M T L of all well-formed formulas (wff ) is inductively defined using the following rules:
- Terms: True (⊤), false (⊥), all constants r ∈ R and propositions p, ¬p for p ∈ AP are terms.
- Formulas: if φ_1 and φ_2 are terms or formulas, then φ_1 ∨ φ_2, φ_1 ∧ φ_2, φ_1 U_I φ_2 and φ_1 R_I φ_2 are formulas.
The atomic propositions in our case label subsets of the output space Y . In other words, each atomic proposition is a shorthand for an arithmetic expression of the form p ≡ g(y) ≤ c, where g : Y → R and c ∈ R. We define an observation map O : AP → P(Y ) such that for each p ∈ AP the corresponding set is
O(p) = {y | g(y) ≤ c} ⊆ Y .
In the above definition, U_I is the timed until operator and R_I the timed release operator. The subscript I imposes timing constraints on the temporal operators. The interval I can be open, half-open or closed, bounded or unbounded, but it must be non-empty (I ≠ ∅) (and, practically speaking, non-singular (I ≠ {t})). In the case where I = [0, +∞), we remove the subscript I from the temporal operators, i.e., we just write U and R. Also, we can define eventually (3_I φ ≡ ⊤ U_I φ) and always (2_I φ ≡ ⊥ R_I φ).
Before proceeding to the actual definition of the robust semantics, we introduce some auxiliary notation. A metric space is a pair (X, d) such that the topology of the set X is induced by a metric d. Using a metric d, we can define the distance of a point x ∈ X from a set S ⊆ X. Intuitively, this distance is the shortest distance from x to all the points in S. In a similar way, the depth of a point x in a set S is defined to be the shortest distance of x from the boundary of S. Both the notions of distance and depth will play a fundamental role in the definition of the robustness degree.
Definition 2 (Signed Distance) Let x ∈ X be a point, S ⊆ X be a set and d be a metric on X. Then, we define the Signed Distance from x to S to be
Dist_d(x, S) := -dist_d(x, S) := -inf{d(x, y) | y ∈ S}   if x ∉ S
Dist_d(x, S) := depth_d(x, S) := dist_d(x, X\S)          if x ∈ S
We remark that we use the extended definition of the supremum and infimum, i.e., sup ∅ := -∞ and inf ∅ := +∞. MTL formulas are interpreted over timed state sequences µ. In the past [START_REF] Fainekos | Robustness of temporal logic specifications for continuous-time signals[END_REF], we proposed multi-valued semantics for MTL where the valuation function on the predicates takes values over the totally ordered set R according to a metric d operating on the output space Y . For this purpose, we let the valuation function be the depth (or the distance) of the current point of the signal y(i) in a set O(p) labeled by the atomic proposition p. Intuitively, this distance represents how robustly is the point y(i) within a set O(p). If this metric is zero, then even the smallest perturbation of the point can drive it inside or outside the set O(p), dramatically affecting membership.
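For the scalar predicates used here, p ≡ g(y) ≤ c with the usual metric d(a, b) = |a - b|, the signed distance reduces to a one-liner. The sketch below covers only that special case; it is not a general implementation of Definition 2.

```python
def signed_distance_le(v, c):
    """Signed distance of the scalar value v = g(y) to the set O(p) = {v | v <= c}:
    positive depth when v lies inside the set, negative distance when it lies outside."""
    return c - v   # e.g. omega = 3000, c = 3250 -> +250 (robustly inside); omega = 4000 -> -750
```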
For the purposes of the following discussion, we use the notation [[φ]] to denote the robustness estimate with which the timed state sequence µ satisfies the specification φ. Formally, the valuation function for a given formula φ is
[[φ]] : (Y N × T) × N → R.
In the definition below, we also use the following notation : for Q ⊆ R, the preimage of Q under τ is defined as :
τ -1 (Q) := {i ∈ N | τ (i) ∈ Q}.
Definition 3 (Robustness Estimate) Let µ = (y, τ ) ∈ L(Σ), r ∈ R and i, j, k ∈ N , then the robustness estimate of any formula MTL φ with respect to µ is recursively defined as follows
[[r]](µ, i) := r
[[⊤]](µ, i) := +∞
[[⊥]](µ, i) := -∞
[[p]](µ, i) := Dist_d(y(i), O(p))
[[¬p]](µ, i) := -Dist_d(y(i), O(p))
[[φ_1 ∨ φ_2]](µ, i) := max([[φ_1]](µ, i), [[φ_2]](µ, i))
[[φ_1 ∧ φ_2]](µ, i) := min([[φ_1]](µ, i), [[φ_2]](µ, i))
[[φ_1 U_I φ_2]](µ, i) := sup_{j ∈ τ^{-1}(τ(i)+I)} min([[φ_2]](µ, j), inf_{i ≤ k < j} [[φ_1]](µ, k))
[[φ_1 R_I φ_2]](µ, i) := inf_{j ∈ τ^{-1}(τ(i)+I)} max([[φ_2]](µ, j), sup_{i ≤ k < j} [[φ_1]](µ, k))
Recall that we use the extended definition of supremum and infimum. When i = 0, then we simply write [[φ]](µ). The robustness of an MTL formula with respect to a timed state sequence can be computed using several existing algorithms [START_REF] Fainekos | Robustness of temporal logic specifications for continuous-time signals[END_REF][START_REF] Fainekos | Verification of automotive control applications using s-taliro[END_REF][START_REF] Donze | Robust satisfaction of temporal logic over real-valued signals[END_REF].
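For formulas of the simple shape used in the examples that follow, 2_[a,b](ω ≤ c), Definition 3 collapses to a minimum of predicate robustness values over a time window. The helper below is a sketch of exactly that special case (it is not one of the cited monitoring algorithms) and reuses the signed distance sketched above.

```python
def rob_always_le(tau, omega, c, t_lo, t_hi):
    """Robustness of 2_[t_lo, t_hi] (omega <= c) at index 0 for the timed state sequence (omega, tau)."""
    window = [signed_distance_le(w, c) for t, w in zip(tau, omega) if t_lo <= t <= t_hi]
    return min(window) if window else float("inf")   # inf over the empty set is +infinity
```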
Parametric Metric Temporal Logic over Signals
In many cases, it is important to be able to describe an MTL specification with unknown parameters and, then, infer the parameters that make the specification true/false. In [START_REF] Asarin | Parametric identification of temporal properties[END_REF], Asarin et. al. introduce Parametric Signal Temporal Logic (PSTL) and present two algorithms for computing approximations for parameters over a given signal. Here, we review some of the results in [START_REF] Asarin | Parametric identification of temporal properties[END_REF] while adapting them in the notation and formalism that we use in this paper.
We will restrict the occurrences of unknown parameters in the specification to a single parameter that may appear either in the timing constraints of a temporal operator or in the atomic propositions.
Definition 4 (Syntax of Parametric MTL (PMTL)) Let λ be a parameter, then the set of all well formed PMTL formulas is the set of all well formed MTL formulas where either λ appears in an arithmetic expression, i.e., p[λ] ≡ g(y) ≤ λ, or in the timing constraint of a temporal operator, i.e., I[λ].
We will denote a PMTL formula φ with parameter λ by φ[λ]. Given some value θ ∈ Θ, then the formula φ[θ] is an MTL formula.
Since the valuation function of an MTL formula is a composition of minimum and maximum operations quantified over time intervals, a formula φ[λ] is monotonic with respect to λ.
Example 2 Consider the PMTL formula φ[λ] = 2_[0,λ] p where p ≡ (ω ≤ 3250). Given a timed state sequence µ = (y, τ) with τ(0) = 0, for θ_1 ≤ θ_2 we have [0, θ_1] ⊆ [0, θ_2], and hence τ^{-1}([0, θ_1]) ⊆ τ^{-1}([0, θ_2]). Therefore,
[[φ[θ_1]]](µ) = inf_{i ∈ τ^{-1}([0,θ_1])} (-Dist_d(y(i), O(p))) ≥ inf_{i ∈ τ^{-1}([0,θ_2])} (-Dist_d(y(i), O(p))) = [[φ[θ_2]]](µ).
That is, the function [[φ[θ]]](µ) is non-increasing with θ. See Fig. 3 for an example using an output trajectory from the system in Example 1.
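Running the helper sketched above on a made-up engine-speed trace shows this monotonicity numerically; the sample values are illustrative only.

```python
tau   = [0.0, 1.0, 2.0, 3.0, 4.0]
omega = [900.0, 2100.0, 3000.0, 3400.0, 3600.0]
for theta in (1.0, 2.0, 3.0, 4.0):
    print(theta, rob_always_le(tau, omega, 3250.0, 0.0, theta))
# prints 1150.0, 250.0, -150.0, -350.0: the robustness never increases as theta grows
```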
The previous example can be formalized in the following result.
Proposition 1 Consider a PMTL formula φ[λ] that contains a subformula φ_1 Op_{I[λ]} φ_2 where Op ∈ {U, R}. Then, given a timed state sequence µ = (y, τ), for θ_1, θ_2 ∈ R_{≥0} such that θ_1 ≤ θ_2, and for i ∈ N, we have:
1. if (i) Op = U and sup I[λ] = λ, or (ii) Op = R and inf I[λ] = λ, then [[φ[θ_1]]](µ, i) ≤ [[φ[θ_2]]](µ, i), i.e., the function [[φ[λ]]](µ, i) is non-decreasing with respect to λ, and
2. if (i) Op = R and sup I[λ] = λ, or (ii) Op = U and inf I[λ] = λ, then [[φ[θ_1]]](µ, i) ≥ [[φ[θ_2]]](µ, i), i.e., the function [[φ[λ]]](µ, i) is non-increasing with respect to λ.
Proof (Sketch). The proof is by induction on the structure of the formula and is similar to the proofs that appear in [START_REF] Fainekos | Robustness of temporal logic specifications for continuous-time signals[END_REF].
For completeness, we present the case [[φ_1 U_⟨α,λ⟩ φ_2]](µ, i), where ⟨ stands for a left delimiter in {[, (} and ⟩ for a right delimiter in {], )}. The other cases are either similar or follow from the monotonicity of the operators max and min. Let θ_1 ≤ θ_2; then
[[φ_1 U_⟨α,θ_1⟩ φ_2]](µ, i) ≤ max([[φ_1 U_⟨α,θ_1⟩ φ_2]](µ, i), [[φ_1 U_⟨θ_1,θ_2⟩ φ_2]](µ, i)) = [[φ_1 U_⟨α,θ_2⟩ φ_2]](µ, i),
where the delimiters of ⟨θ_1, θ_2⟩ are chosen such that ⟨α, θ_1⟩ ∩ ⟨θ_1, θ_2⟩ = ∅ and ⟨α, θ_1⟩ ∪ ⟨θ_1, θ_2⟩ = ⟨α, θ_2⟩.
We can derive similar results when the parameter appears in the numerical expression of the atomic proposition.
Proposition 2 Consider a PMTL formula φ[λ] such that it contains a parametric atomic proposition p[λ] in a subformula. Then, given a timed state sequence µ = (y, τ ), for θ 1 , θ 2 ∈ R ≥0 , such that θ 1 ≤ θ 2 , and for i ∈ N , we have:
1. if p[λ] ≡ g(x) ≤ λ, then [[φ[θ_1]]](µ, i) ≤ [[φ[θ_2]]](µ, i), i.e., the function [[φ[λ]]](µ, i) is non-decreasing with respect to λ, and
2. if p[λ] ≡ g(x) ≥ λ, then [[φ[θ_1]]](µ, i) ≥ [[φ[θ_2]]](µ, i), i.e., the function [[φ[λ]]](µ, i) is non-increasing with respect to λ.
Proof (Sketch). The proof is by induction on the structure of the formula and it is similar to the proofs that appear in [START_REF] Fainekos | Robustness of temporal logic specifications for continuous-time signals[END_REF].
For completeness, we present the base case [[p[λ]]](µ, i) where p[λ] ≡ g(x) ≤ λ. Since θ_1 ≤ θ_2, O(p[θ_1]) ⊆ O(p[θ_2]). We will only present the case for which y(i) ∉ O(p[θ_2]). We have:
O(p[θ_1]) ⊆ O(p[θ_2]) ⟹ dist_d(y(i), O(p[θ_1])) ≥ dist_d(y(i), O(p[θ_2])) ⟹ Dist_d(y(i), O(p[θ_1])) ≤ Dist_d(y(i), O(p[θ_2])) ⟹ [[p[θ_1]]](µ, i) ≤ [[p[θ_2]]](µ, i).
The results presented in this section can be easily extended to multiple parameters. However, in this work, we will focus on a single parameter in order to derive a more tractable optimization problem.
Temporal Logic Parameter Bound Computation
The notion of robustness of temporal logics will enable us to pose the parameter estimation problem as an optimization problem. In order to solve the resulting optimization problem, falsification methods and S-TaLiRo can be utilized in order to estimate Θ * for Problem 1.
As described in the previous section, the parametric robustness functions that we are considering are monotonic with respect to the search parameter. Therefore, if we are searching for a parameter over an interval Θ = [θ m , θ M ], we know that Θ * is going to be either of the form [θ m , θ * ] or [θ * , θ M ]. In other words, depending on the structure of φ[λ], we are either trying to minimize or maximize θ * such that for all θ ∈ Θ * , we have However, [[φ[θ]]](Σ) neither can be computed using reachability analysis algorithms nor is known in closed form for the systems that we are considering. Therefore, we will have to compute an under-approximation of Θ * . Our focus will be to formulate an optimization problem that can be solved using stochastic search methods. In particular, we will reformulate optimization problem (1) into a new one where the constraints due to the specification are incorporated into the cost function:
[[φ[θ]]](Σ) = min µ∈Lτ (Σ) [[φ[θ]]](µ) ≤ 0.
optimize_{θ∈Θ}  θ + { γ ± [[φ[θ]]](Σ)  if [[φ[θ]]](Σ) ≥ 0;  0 otherwise }     (2)
where the sign (±) and the parameter γ depend on whether the problem is a maximization or a minimization problem. The parameter γ must be properly chosen so that the optimum of problem ( 2) is in Θ if and only if [[φ[θ]]](Σ) ≤ 0.
In other words, we must avoid the case where for some θ, we have [[φ[θ]]](Σ) > 0 and (θ + [[φ[θ]]](Σ)) ∈ Θ. Therefore, if the problem in Eq. ( 1) is feasible, then the optimum of equations ( 1) and ( 2) is the same.
Non-increasing Robustness Functions
First, we consider the case of non-increasing robustness functions [[φ[θ]]](Σ) with respect to the search variable θ. In this case, the optimization problem is a minimization problem.
To see why this is the case, assume that [[φ[θ_M]]](Σ) ≤ 0. Since for θ ≤ θ_M we have [[φ[θ]]](Σ) ≥ [[φ[θ_M]]](Σ), we need to find the minimum θ such that we still have [[φ[θ]]](Σ) ≤ 0. That θ will be θ*, since for all θ' ∈ [θ*, θ_M] we will have [[φ[θ']]](Σ) ≤ 0.
We will reformulate the problem of Eq. ( 2) so that we do not have to solve two separate optimization problems. From (2), we have:
min_{θ∈Θ} θ + { γ + min_{µ∈L_τ(Σ)} [[φ[θ]]](µ)  if min_{µ∈L_τ(Σ)} [[φ[θ]]](µ) ≥ 0;  0 otherwise }
  = min_{θ∈Θ} θ + min_{µ∈L_τ(Σ)} { γ + [[φ[θ]]](µ)  if [[φ[θ]]](µ) ≥ 0;  0 otherwise }
  = min_{θ∈Θ} min_{µ∈L_τ(Σ)} θ + { γ + [[φ[θ]]](µ)  if [[φ[θ]]](µ) ≥ 0;  0 otherwise }     (3)
where γ ≥ max(θ M , 0). The previous discussion is formalized in the following result.
Proposition 3 Let θ* and µ* be the parameters returned by an optimization algorithm that is applied to the problem in Eq. (3). If [[φ[θ*]]](µ*) ≤ 0, then for all θ ∈ Θ* = [θ*, θ_M], we have [[φ[θ]]](Σ) ≤ 0.
Proof. If [[φ[θ*]]](µ*) ≤ 0, then [[φ[θ*]]](Σ) ≤ 0. Since [[φ[θ]]](Σ) is non-increasing with respect to θ, for all θ ∈ [θ*, θ_M] we also have [[φ[θ]]](Σ) ≤ 0.
Since we are utilizing stochastic optimization methods [START_REF] Sankaranarayanan | Falsification of temporal properties of hybrid systems using the cross-entropy method[END_REF][START_REF] Annapureddy | S-taliro: A tool for temporal logic falsification for hybrid systems[END_REF][START_REF] Annapureddy | Ant colonies for temporal logic falsification of hybrid systems[END_REF][START_REF] Nghiem | Monte-carlo techniques for falsification of temporal properties of non-linear hybrid systems[END_REF] to solve problem (3), if [[φ[θ * ]]](µ * ) > 0, then we cannot infer that the system is correct for all parameter values in Θ.
Example 4 Using Eq. (3) as a cost function, we can now compute the optimal parameter for Example 3 using our toolbox S-TaLiRo [START_REF] Annapureddy | S-taliro: A tool for temporal logic falsification for hybrid systems[END_REF]. In particular, using Simulated Annealing as a stochastic optimization function, S-TaLiRo returns θ * ≈ 2.45 as optimal parameter for constant input u(t) = 99.8046. The corresponding temporal logic robustness for the specification 2 [0,2.45] (ω ≤ 4500) is -0.0445. The total number of tests performed for this example was 500 and, potentially, the accuracy of estimating θ * can be improved if we increase the maximum number of tests. However, we remark that based on several tests the algorithm converges to a good approximation within 200 tests.
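A rough rendering of how Eq. (3) can be attacked with a black-box simulator is sketched below. It uses plain uniform random sampling in place of the Simulated Annealing applied by S-TaLiRo, and the `simulate` and `robustness` callables are assumptions standing in for the Simulink model and an MTL monitor; this is not S-TaLiRo's API.

```python
import random

def estimate_theta_min(simulate, robustness, theta_lo, theta_hi, n_tests=500):
    """Monte-Carlo search for the minimization case of Eq. (3).
    simulate(u) returns a timed state sequence; robustness(trace, theta) returns its MTL robustness."""
    gamma = max(theta_hi, 0.0)                       # gamma >= max(theta_M, 0)
    best_cost, best_theta, best_rob = float("inf"), None, None
    for _ in range(n_tests):
        theta = random.uniform(theta_lo, theta_hi)
        u = random.uniform(0.0, 100.0)               # constant throttle level, as in Example 4
        rob = robustness(simulate(u), theta)
        cost = theta + (gamma + rob if rob >= 0 else 0.0)
        if cost < best_cost:
            best_cost, best_theta, best_rob = cost, theta, rob
    return best_theta, best_rob                      # theta is a valid estimate only if best_rob <= 0
```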
Non-decreasing Robustness Functions
The case of non-decreasing robustness functions is symmetric to the case of non-increasing robustness functions. In particular, the optimization problem is a maximization problem. We will reformulate the problem of Eq. ( 2) so that we do not have to solve two separate optimization problems. From (2), we have:
max_{θ∈Θ} θ + { γ - min_{µ∈L_τ(Σ)} [[φ[θ]]](µ)  if min_{µ∈L_τ(Σ)} [[φ[θ]]](µ) ≥ 0;  0 otherwise }
  = max_{θ∈Θ} θ + { γ + max_{µ∈L_τ(Σ)} (-[[φ[θ]]](µ))  if max_{µ∈L_τ(Σ)} (-[[φ[θ]]](µ)) ≤ 0;  0 otherwise }
  = max_{θ∈Θ} θ + max_{µ∈L_τ(Σ)} { γ - [[φ[θ]]](µ)  if -[[φ[θ]]](µ) ≤ 0;  0 otherwise }
  = max_{θ∈Θ} max_{µ∈L_τ(Σ)} θ + { γ - [[φ[θ]]](µ)  if [[φ[θ]]](µ) ≥ 0;  0 otherwise }     (4)
where γ ≤ min(θ m , 0). The previous discussion is formalized in the following result.
Proposition 4 Let θ* and µ* be the parameters returned by an optimization algorithm that is applied to the problem in Eq. (4). If [[φ[θ*]]](µ*) ≤ 0, then for all θ ∈ Θ* = [θ_m, θ*], we have [[φ[θ]]](Σ) ≤ 0.
Proof. If [[φ[θ*]]](µ*) ≤ 0, then [[φ[θ*]]](Σ) ≤ 0. Since [[φ[θ]]](Σ) is non-decreasing with respect to θ, for all θ ∈ [θ_m, θ*] we also have [[φ[θ]]](Σ) ≤ 0.
Again, if [[φ[θ*]]](µ*) > 0, then we cannot infer that the system is correct for all parameter values in Θ.
Example 5 Let us consider the specification φ[λ] = 2 [λ,30] (ω ≤ 4500) on our running example. The specification robustness [[φ[θ]]](∆ Σ (u)) as a function of θ and the input u appears in Fig. 5 (left) for constant input signals. The creation of the graph required 100 × 30 = 3, 000 tests. The contour under the surface indicates the zero level set of the robustness surface, i.e., the θ and u values for which we get [[φ[θ]]](∆ Σ (u)) = 0. We remark that the contour is actually an approximation of the zero level set computed by a linear interpolation using the neighboring points on the grid. From the graph, we could infer that θ * ≈ 13.8 and that for any θ ∈ [0, 13.8], we would have [[φ[θ]]](Σ) ≤ 0. Again, the approximate value of θ * is a rough estimate based on the granularity of the grid.
Using Eq. ( 4) as a cost function, we can now compute the optimal parameter for Example 3 using our toolbox S-TaLiRo [START_REF] Annapureddy | S-taliro: A tool for temporal logic falsification for hybrid systems[END_REF]. S-TaLiRo returns θ * ≈ 12.59 as optimal parameter for constant input u(t) = 90.88 within 250 tests. The temporal logic robustness for the specification 2 [12.59,30] (ω ≤ 4500) with respect to the input u appears in Fig. 5 (right). Some observations: (i) The θ * ≈ 12.59 computed by S-TaLiRo is actually very close to the optimal value since for θ * ≈ 12.79 the system does not falsify any more. (ii) The systematic testing that was used in order to generate the graph was not able to accurately compute a good approximation to the parameter unless even more tests (> 3000) are generated.
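For completeness, the objective of Eq. (4) for the non-decreasing case differs from the earlier sketch only in the sign conventions; a minimal sketch of the per-sample cost (to be maximized) follows, with γ ≤ min(θ_m, 0).

```python
def cost_eq4(theta, rob, theta_lo):
    """Penalized objective of Eq. (4): maximize over (theta, trace); valid estimates need rob <= 0."""
    gamma = min(theta_lo, 0.0)
    return theta + (gamma - rob if rob >= 0 else 0.0)
```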
Experiments and a Case Study
The parametric MTL exploration of embedded systems was motivated by a challenge problem published by Ford in 2002 [START_REF] Chutinan | Dynamic analysis of hybrid system models for design validation[END_REF]. In particular, the report provided a simple -but still realistic -model of a powertrain system (both the physical system and the embedded control logic) and posed the question whether there are constant operating conditions that can cause a transition from gear two to gear one and then back to gear two. Such a sequence would imply that the transition was not necessary in the first place.
The system is modeled in Checkmate [START_REF] Silva | Formal verification of hybrid systems using CheckMate: a case study[END_REF]. It has 6 continuous state variables and 2 Stateflow charts with 4 and 6 states, respectively. The Stateflow chart for the shift scheduler appears in Fig. 6. The system dynamics and switching conditions are linear. However, some switching conditions depend on the inputs to the system. The latter makes the application of standard hybrid system verification tools not a straightforward task.
In [START_REF] Fainekos | Verification of automotive control applications using s-taliro[END_REF], we demonstrated that S-TaLiRo [START_REF] Annapureddy | S-taliro: A tool for temporal logic falsification for hybrid systems[END_REF] can successfully solve the challenge problem (see Fig. 6) by formalizing the requirement as an MTL specification φ e1 = ¬3(g 2 ∧ 3(g 1 ∧ 3g 2 )) where g i is a proposition that is true when the system is in gear i. Stochastic search methods can be applied to solve the resulting optimization problem where the cost function is the robustness of the specification. Moreover, inspired by the success of S-TaLiRo on the challenge problem, we tried to ask a more complex question. Namely, does a transition exists from gear two to gear one and back to gear two in less than 2.5 sec? An MTL specification that can capture this requirement is φ e2 = 2((¬g 1 ∧ Xg 1 ) → 2 [0,2.5] ¬g 2 ).
The natural question that arises is what would be the smallest time for which such a transition can occur? We can formulate a parametric MTL formula to query the model of the powertrain system: φ e3 [λ] = 2((¬g 1 ∧Xg 1 ) → 2 [0,λ] ¬g 2 ). We have extended S-TaLiRo to be able to handle parametric MTL specifications. The total simulation time of the model was 60 sec and the search interval was Θ = [0, 30]. S-TaLiRo returned θ * ≈ 0.4273 as the minimum parameter found (See Fig. 6) using about 300 tests of the system.
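On a sampled gear trace, the quantity that θ* estimates for φ_e3 can be read off directly: the shortest observed time from a sample where gear one is about to be engaged until gear two is next engaged. The sketch below is a literal reading of the formula on discrete samples and is not part of the S-TaLiRo implementation.

```python
def min_back_to_gear2_time(tau, gear):
    """Smallest lambda witnessed against phi_e3[lambda] = 2((not g1 and X g1) -> 2_[0,lambda] not g2)."""
    best = float("inf")
    for i in range(len(gear) - 1):
        if gear[i] != 1 and gear[i + 1] == 1:        # antecedent (not g1) and X g1 holds at sample i
            for j in range(i, len(gear)):            # the window [0, lambda] starts at sample i itself
                if gear[j] == 2:
                    best = min(best, tau[j] - tau[i])
                    break
    return best   # on a falsifying trace this is at most the theta* ~ 0.4273 reported above
```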
In Table 6, we present some experimental results. Since no other technique can solve the parameter estimation problem for MTL formulas over hybrid systems, we compare our method with the falsification methods that we have developed in the past [START_REF] Abbas | Probabilistic temporal logic falsification of cyber-physical systems[END_REF][START_REF] Sankaranarayanan | Falsification of temporal properties of hybrid systems using the cross-entropy method[END_REF]. A detailed description of the benchmark problems can be found in [START_REF] Abbas | Probabilistic temporal logic falsification of cyber-physical systems[END_REF][START_REF] Sankaranarayanan | Falsification of temporal properties of hybrid systems using the cross-entropy method[END_REF] and the benchmarks can be downloaded with the S-TaLiRo distribution 2 . In order to be able to compare the two methods, when performing parameter estimation, we regard a parameter value less than the constant in the MTL formula as falsification. Notably, for benchmark problems that are easier to falsify, the parameter estimation method incurs additional cost in the sense of reduced number of falsifications. On the other hand, on hard problem instances, the parameter estimation method provides us with parameter ranges for which the system fails the specification. Moreover, on the powertrain challenge problem, the parameter estimation method actually helps in falsifying the system. We conjecture that the reason for this improved performance is that the timing requirements on this problem are more important than the state constraints.
Related Work
The topic of testing embedded software and, in particular, embedded control software is a well studied problem that involves many subtopics well beyond the scope of this paper. We refer the reader to specialized book chapters and textbooks for further information [START_REF] Conrad | Testing automotive control software[END_REF][START_REF] Koopman | Better Embedded System Software[END_REF]. Similarly, a lot of research has been invested on testing methods for Model Based Development (MBD) of embedded systems [START_REF] Tripakis | Modeling, Verification and Testing using Timed and Hybrid Automata[END_REF]. However, the temporal logic testing of embedded and hybrid systems has not received much attention [START_REF] Plaku | Falsification of ltl safety properties in hybrid systems[END_REF][START_REF] Tan | Model-based testing and monitoring for hybrid embedded systems[END_REF][START_REF] Nghiem | Monte-carlo techniques for falsification of temporal properties of non-linear hybrid systems[END_REF][START_REF] Zuliani | Bayesian statistical model checking with application to simulink/stateflow verification[END_REF]. Parametric temporal logics were first defined over traces of finite state machines [START_REF] Alur | Parametric temporal logic for model measuring[END_REF]. In parametric temporal logics, some of the timing constraints of the temporal operators are replaced by parameters. Then, the goal is to develop algorithms that will compute the values of the parameters that make the specification true under some optimality criteria. That line of work has been extended to real-time systems and in particular to timed automata [START_REF] Di Giampaolo | Parametric metric interval temporal logic[END_REF] and continuoustime signals [START_REF] Asarin | Parametric identification of temporal properties[END_REF]. The authors in [START_REF] Fages | On temporal logic constraint solving for analyzing numerical data time series[END_REF][START_REF] Rizk | On a continuous degree of satisfaction of temporal logic formulae with applications to systems biology[END_REF] define a parametric temporal logic called quantifier free LTL over real valued signals. However, they focus on the problem of determining system parameters such that the system satisfies a given property rather than on the problem of exploring the properties of a given system.
Another related research topic is the problem of Temporal Logic Queries [START_REF] Chan | Temporal-logic queries[END_REF][START_REF] Chechik | Tlqsolver: A temporal logic query checker[END_REF]. In detail, given a model of the system and a temporal logic formula φ, a subformula in φ is replaced with a special symbol ?. Then, the problem is to determine a set of Boolean formulas such that if these formulas are placed into the placeholder ?, then φ holds on the model.
Conclusions
An important stage in Model Based Development (MBD) of embedded control software is the formalization of system requirements. We advocate that Metric Temporal Logic (MTL) is an excellent candidate for formalizing interesting design requirements. In this paper, we have presented a solution on how we can explore system properties using Parametric MTL (PMTL) [START_REF] Asarin | Parametric identification of temporal properties[END_REF]. Based on the notion of robustness of MTL [START_REF] Fainekos | Robustness of temporal logic specifications for continuous-time signals[END_REF], we have converted the parameter estimation problem into an optimization problem which we solve using S-TaLiRo [START_REF] Annapureddy | S-taliro: A tool for temporal logic falsification for hybrid systems[END_REF]. Even though this paper presents a method for estimating the range for a single parameter, the results can be easily extended to multiple parameters as long as the robustness function has the same monotonicity with respect to all the parameters. Finally, we have demonstrated that the our method can provide interesting insights to the powertrain challenge problem [START_REF] Chutinan | Dynamic analysis of hybrid system models for design validation[END_REF].
Fig. 1. Example 1: A piecewise constant input signal u parameterized with Λ ∈ [0, 100]^6 and t = [0, 5, 10, 15, 20, 25], and the corresponding output signals that falsify the specification.
Ideally, by solving Problem 1, we would also like to have the property that for any ζ ∈ Θ - Θ*, φ[ζ] holds on Σ, i.e., Σ |= φ[ζ]. However, even for a given ζ, the problem of algorithmically computing whether Σ |= φ[ζ] is not easy to solve for the classes of hybrid systems that we consider in this work.
Fig. 2. Overview of the solution to the MTL parameter estimation problem on CPS.
Fig. 3. Example 2. Left: Engine speed ω(t) for constant throttle u(t) = 50. Right: The robustness of the specification 2_[0,θ](ω ≤ 3250) with respect to θ.
Example 3 Let us consider again the automotive transmission example and the specification φ[λ] = 2_[0,λ] p where p ≡ (ω ≤ 4500). The specification robustness [[φ[θ]]](∆_Σ(u)) as a function of θ and the input u appears in Fig. 4 (left) for constant input signals. The creation of the graph required 100 × 30 = 3,000 tests. The contour under the surface indicates the zero level set of the robustness surface, i.e., the θ and u values for which we get [[φ[θ]]](∆_Σ(u)) = 0. From the graph, we can infer that θ* ≈ 2.8 and that for any θ ∈ [2.8, 30], we have [[φ[θ]]](Σ) ≤ 0. The approximate value of θ* is a rough estimate based on the granularity of the grid that we used to plot the surface. In summary, in order to solve Problem 1, we would have to solve the following optimization problem:
optimize θ     (1)
subject to θ ∈ Θ and [[φ[θ]]](Σ) = min_{µ∈L_τ(Σ)} [[φ[θ]]](µ) ≤ 0
Table 1. Experimental Comparison of Falsification (FA) vs. Parameter Estimation (PE). Each instance was run for 100 times and each run was executed for a maximum of 1000 tests. Legend: #Fals.: the number of runs falsified, Parameter Estimate: min, average, max of the parameter value computed, dnf: did not finish.
  Specification                                  Instance       #Fals. FA   #Fals. PE   Parameter Estimate (PE)
  φ_AT2[λ] = ¬3(p_AT1 ∧ 3_[0,λ] p_AT2)           φ_AT2[10]          96          84      7.7, 9.56, 16.84
  φ_AT3[λ] = ¬3(p_AT1 ∧ 3_[0,λ] p_AT3)           φ_AT3[10]          51           0      10.00, 10.22, 14.66
  φ_AT4[λ] = ¬3(p_AT1 ∧ 3_[0,λ] p_AT2)           φ_AT4[7.5]          0           0      7.57, 7.7, 8.56
  φ_AT5[λ] = ¬3(p_AT1 ∧ 3_[0,λ] p_AT2)           φ_AT5[5]            0           0      7.56, 7.74, 9.06
                                                 φ_e3[2.5]         dnf          93      1.28, 2.26, 6.82
Available at: http://www.mathworks.com/products/simulink/demos.html
https://sites.google.com/a/asu.edu/s-taliro/
Acknowledgments This work was partially supported by a grant from the NSF Industry/University Cooperative Research Center (I/UCRC) on Embedded Systems at Arizona State University and NSF awards CNS-1116136 and CNS-1017074. | 42,174 | [
"1003399",
"1003400",
"1003401"
] | [
"251827",
"251827",
"251827"
] |
01482457 | en | [
"sdv"
] | 2024/03/04 23:41:48 | 2016 | https://hal.science/hal-01482457/file/692-2182-1-PB.pdf | Mie Elholm Birkbak
Nina Kolln Wittig
Malene Laugesen
Alexandra Pacureanu
Jesper Annemarie Brüel
Jesper Skovhus Thomsen
Françoise Peyrin
Henrik Birkedal
Mie Elholm Birkbak
Nina Kølln
Alexandra Parcureanu
Francoise Peyrin
COMPLEX ARCHITECTURE OF THE OSTEOCYTE LACUNAR-CANALICULAR NETWORK IN MICE
Introduction
The osteocyte network in bone has attracted great interest due to the role of osteocytes in mechanosensing and regulation of bone remodeling. Osteocytes reside in lacunae and are interconnected by cellular processes running through a network of canaliculi; canals roughly 200 nm in diameter. The canalicular network plays a vital role in the communication between osteocytes and facilitates a way for osteocytes to orchestrate bone remodelling. Rodents are widely used as model organisms to study experimentally induced effects in bone. Human and rodent bone does, however, display large structural variations with the largest difference being the absence of harversian remodeling in rodents, which has profound implications for bone microstructure [1].
Here we have studied the lacunar-canalicular network in mouse bone to describe the communication network and the structural features found on the sub-micrometer length scale. Describing the hierarchical structure of bone demands multiscale imaging techniques [2][3][4], and advances in high-resolution X-ray imaging have paved the way for characterization of the lacunar-canalicular network [5][6][7]. Herein we apply X-ray holotomography with a 25 nm voxel size to mouse bone.
Methods
Cortical bone from the femoral mid-diaphysis of 3 NMRI mice was cut into 0.4 × 0.4 × 3 mm³ rods with a diamond saw. Local nano-tomography was performed at beamline ID16A, ESRF. Radiographs were collected at four different sample-to-detector distances, resulting in a final voxel size of 25 nm and a field of view of 50 µm. Phase reconstruction was performed as described in [5], followed by tomographic reconstruction yielding 3D phase maps of the imaged volume.
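The paper does not spell out how the reconstructed volumes were processed further. Purely as an illustration, a reconstructed 3D phase map could be thresholded into pore space and its connected components sized as sketched below; the threshold choice and the equivalent-diameter measure are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy import ndimage

def label_pores(phase_volume, pore_threshold, voxel_nm=25.0):
    """Segment pore space (lacunae, canaliculi, voids) and return an equivalent diameter per component."""
    pores = phase_volume < pore_threshold                            # low phase value = void space (assumption)
    labels, n = ndimage.label(pores)
    sizes = ndimage.sum(pores, labels, index=np.arange(1, n + 1))    # component volumes in voxels
    diameters_um = (6.0 * sizes / np.pi) ** (1.0 / 3.0) * voxel_nm / 1000.0
    return labels, diameters_um                                      # components near 1 um would match the voids
```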
Results
The tomographic results readily allowed visualizing the osteocyte-canalicular network with high fidelity. An example is shown in Figure 1, which shows the void space around an osteocyte. The spaghetti-like network extending from it are the canaliculi. The samples contained features in the canalicular network not seen in humans [4][5][6][7]. Approximately 1 µm large voids were observed in all animals throughout the probed volume. The voids are roughly spherical, tending to prolate in shape, and well connected with the canalicular network.
The voids are predominantly centered around junctions between multiple canaliculi.
Discussion
The communication between the osteocytes has been speculated to be enabled by fluid flow through the canaliculi [8]. While the role of the voids reported herein remains unclear, their presence is bound to influence the information flow through the network. These voids have not been observed in samples of human origin, exemplifying another difference between human and rodent bone [1,9]. This further stresses the need for a better understanding of the bone communication network.
Figure 1: Osteocyte and the connecting canaliculi. Encircled in red, a void can be seen well interconnected with the surrounding canaliculi.
Acknowledgements
We thank the ESRF for beam time through LTP MD-830. | 3,512 | [
"20272",
"967050"
] | [
"421324",
"421324",
"421324",
"2568",
"464567",
"464567",
"530748",
"2568",
"421324"
] |
01483305 | en | [
"info"
] | 2024/03/04 23:41:48 | 2016 | https://hal.science/hal-01483305/file/CSCI4581_Final_Bouras_Zainal_Education_Ontologies%20%281%29.pdf | Abdelaziz Bouras
email: abdelaziz.bouras|azainal@qu.edu.qa
Alanood A Zainal
Education Ontology Modeling for Competency Gap Analysis
Keywords: Knowledge, Competency models, Education, Ontology
Competency-based education was initially developed in response to growing criticism towards education considered as more and more disconnected from the societal evolutions, especially changes within the workplaces. To better address the problem, knowledge about the gap between the university curricula outcome and the industry requirements is important. This paper describes how ontology concept could be a relevant tool for an initial analysis and focuses on the assessment of the competences needed by the Information Technology market. It illustrates the use of ontologies for three identified end users: Employers, Educators and students.
Introduction
The increasing development of both technical and social infrastructures has created new needs for highly qualified labour (transportation, banking systems, health care systems, etc.). Students therefore need to be better prepared for the new, complex nature of the world of work. The Pro-Skima project aims at contributing to this challenge. Some output from the project has already been reported in previous publications [START_REF] Bouras | Cooperative education development: Towards ICT reference models[END_REF] [START_REF] Veillard | Designing a competency framework for graduate levels in computing sciences: the Middle-East context[END_REF]. Such a study needs technical tools for proof of concept. In this publication we mainly focus on the use of one such tool: the ontology concept.
Defining the skills and competencies
The need to define the skills and competencies demanded and supplied by industry, given their importance in job placements, emerged several decades ago [START_REF] Markowitsch | Descriptors for competence: towards an international standard classification for skills and competences[END_REF]. Many national initiatives have been led in order to formalize the definitions of competency and skills in industry. Examples include O*NET [START_REF]O*NET OnLine[END_REF] in the United States, "AMS-Qualifikationsklassifikation" in Austria, "Kompetenzenkatalog" in Germany and "ROME" in France [START_REF] Markowitsch | Descriptors for competence: towards an international standard classification for skills and competences[END_REF]. These efforts can be grouped into three main approaches to defining competencies and skills. The first approach, used by psychologists, specifies that skills and competencies are measured by comparing portfolios in a quantifiable way; this method is highly standardized and basic, and does not cover the identification of competencies in depth. The second approach relies on building individual portfolios by collecting documents such as reports and certificates; contrary to the first method, it is highly individual, non-standardized and can be used by any individual regardless of their qualification. The third method simply uses a comprehensive list of competencies and skills to describe the profiles of individuals; this is considered a standardized method that can apply universally to all individuals.
Modeling competencies generated by the academic programs
Academic programs are generally developed following one of two approaches, related respectively to applied science and to workplace requirements. In developing curricula, the first approach uses applied sciences as an input, whereas the second uses workplace skill requirements as an input. The first approach bases the curriculum on teaching the basic and core knowledge of the relevant science discipline, in the belief that the learned knowledge will help students acquire the basic skills needed in their workplace after they graduate. The second approach starts by analyzing the job skills required to perform a task, and a curriculum is then developed to generate the competencies needed for a job in that specific field. This second approach to curricula design requires an assessment of current local and global trends in technology and in the demand for competencies to perform the different sets of jobs. The outcome of the assessment should lead to the design of a competency framework, which programs in different disciplines can refer to when designing their curricula.
Higher education institutions use different methods to model competencies in order to start developing the curricula of their academic programs. One of the most well-known methods is DACUM (Designing A CUrriculuM) [START_REF] Norton | DACUM Handbook[END_REF]. The DACUM model was born in Canada and then disseminated at the international level [START_REF] Rauner | Qualification and curriculum research. Handbook of technical and vocational education and training research[END_REF]. It consists of a top-down analysis of a profession, a function or a family of occupations or functions. First, the subject of analysis is determined; then the different responsibilities or constituent tasks of these occupations or functions are defined, and they are in turn broken down into tasks, subtasks and actions, each with an analysis of the knowledge, skills, standards and resources to be mobilized. The originality of this method is that it relies only on small groups of professional experts who come from the same professional domain. The experts are considered to be well positioned to describe their own work. Moreover, the analysis is not made exclusively by the experts themselves; it also includes representatives of trade unions, employers, academics, policy makers, etc. This is necessary because the outcome is not only a technical analysis, but also an agreement between different social partners: companies, schools (or universities), states, and representatives from trade unions [8].
Hence, the principle of the DACUM method relies on the knowledge of experts who perform the daily tasks of the job that the assessors are interested in analyzing. Educators interact one-to-one with them in workshops, which helps them understand the competency requirements and answer the question "what needs to be taught?" when developing a new academic program. One of the main reasons this method is effective is that a gap has been identified between what education programs offer and the skills that are actually needed by employers [START_REF] Norton | DACUM Handbook[END_REF].
Another model is the European e-Competence Framework (e-CF), which was established as a tool to support mutual understanding and provide transparency of language through the articulation of the competences required and deployed by ICT professionals [9]. The framework has been developed, maintained and supported in practical implementation by a large number of European ICT and HR experts. Its Information Security Management part, for instance, related to cybersecurity, is very informative: it ensures that security risks are analyzed and managed at all levels with respect to enterprise data and information strategy.
Another contribution, which uses a pyramidal representation of layers to organize the information, is The Information Technology Competency Model [10]. The arrangement of the tiers in this shape implies that competencies at the top correspond to a higher level of skill. Other models exist, but the models summarized in this section are among the closest to our needs. They are rather generic and do not clearly tackle the specific nature of some particularities of graduate ICT degree levels and their dynamic issues, such as cybersecurity problems. For these specific issues, complementary field expertise is necessary (interviews of experts).
Modeling of the ontology
The ontology is represented as a taxonomy that helps describe employee, education and industry-defined competencies. A superclass is further divided into subclasses that provide a finer classification of the individuals. All classes under the root class (called "Thing") are set to be disjoint, which implies that an individual cannot belong to two of these classes at the same time. For example, a Course cannot also be a Learning Outcome; it can only be one of them. Declaring the classes disjoint is essential because not stating that an individual belongs to a class does not, by itself, mean that it does not belong to it. Some of the main terms used to represent the education domain are: Institution, Department, Course, Program, Learning Outcome, Grade and Study Plan. Terms representing the industry domain include: Employee, Competencies, and Occupation.
Defining the classes and the class hierarchy
Classes are groups of individuals that are placed in the same class because they fulfill the same membership requirements [11]. Classes usually exist in a hierarchy, often referred to as a taxonomy. The hierarchy is used to infer inheritance, which allows the Reasoners to fulfill their purpose [11]. The classes were defined using a combination development process, i.e., using both top-down and bottom-up approaches. Prominent terms were first coined, followed by further generalization and specialization that created the hierarchy shown in Figure (1).
A. Defining the properties of classes (slots)
Properties typically come from the verbs we use in the domain to describe the classes. Some of the verbs that describe the enumerated terms in step 3 are: enrolled, generates, has applied, has course, has gained, has selected, is a, is equivalent to, is part of, is selected by, lacks and requires. Properties serve the purpose of linking two individuals together; each slot mentioned was therefore used to describe these links and to create the internal concept structure. The defined object properties of the ontology are listed in Figure (2). There are no data properties defined yet, as no need for them has been identified in the model. The property chains shown in Table (1) have been added to the ontology. The second facet classifies an Employee as "fit for the job" if the employee has gained all the Skills, Knowledge and Abilities required by the intended occupation. The object properties lacksAbilities, lacksKnowledge and lacksSkills were added to the ontology in order to obtain the set of competencies the employee lacks. By comparing the set of competencies the employee lacks with the negation of the set of competencies the occupation requires, we can infer whether the Employee is fit for the Occupation or not. In order for the three added object properties to function, they need to be declared disjoint with the properties hasGainedSkill, hasGainedAbility and hasGainedKnowledge. Since these properties already have object property chains asserted to them, this cannot be done within the same ontology due to the constraint explained previously. Hence, to solve this challenge, we apply these changes in a second ontology that mimics the first one but only has the disjointness added, without the property chains.
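The inferences described in this subsection can be mirrored with ordinary set operations, which is a useful sanity check on the property chains and the "fit for the job" facet. The sketch below is a plain Python illustration: the property and class names follow the ontology, but the study plan, course codes and competency sets are invented.

```python
# hasSelected: Employee -> Study_Plan; hasCourse: Study_Plan -> Courses;
# generatesSkill: Course -> Skills (all data here is made up for the example).
has_selected = {"Noor": "Option_2"}
has_course = {"Option_2": ["CMPS_350", "CMPS_405"]}
generates_skill = {"CMPS_350": {"SQL"}, "CMPS_405": {"Systems_Analysis"}}
required_skills = {"Database_Administrators": {"SQL", "Systems_Analysis", "Troubleshooting"}}

def enrolled_in_course(employee):
    # Property chain: hasSelected o hasCourse SubPropertyOf EnrolledInCourse
    return has_course.get(has_selected.get(employee), [])

def gained_skills(employee):
    # Property chain: EnrolledInCourse o generatesSkill SubPropertyOf hasGainedSkill
    skills = set()
    for course in enrolled_in_course(employee):
        skills |= generates_skill.get(course, set())
    return skills

def lacking_skills(employee, occupation):
    return required_skills[occupation] - gained_skills(employee)

def fit_for_occupation(employee, occupation):
    # Mirrors the "Fit_for_..." facet: fit iff the set of lacking competencies is empty.
    return not lacking_skills(employee, occupation)

print(gained_skills("Noor"))                                   # {'SQL', 'Systems_Analysis'} (order may vary)
print(lacking_skills("Noor", "Database_Administrators"))       # {'Troubleshooting'}
print(fit_for_occupation("Noor", "Database_Administrators"))   # False
```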
C. Create instances
The ontology creation is concluded by adding the required individuals (instances) to the classes of the hierarchy. The process requires choosing a certain class, adding the individual and then completing the necessary slot values, in other words asserting the properties of the individual. For example, as shown in Figure (7), Noor is added as an individual of the class Employee. She has two object properties asserted to her: the first property, "hasAppliedForOccupation", is asserted to an instance of the Occupation class, Information_Security_Analysts, which implies that Noor has applied for that job title. The second property, "hasSelected", has been asserted the value Option_2, which denotes Noor's selection of a Study Plan.
Evaluation of the Ontology Output
To prove the usefulness of the proposed ontology, we are currently working on three scenarios (employers, educators and students) to show how it serves the objective of performing the gap analysis in each case.
The scenarios will show the outcome of each ontology run and how each user can make use of the output. The output can be used in two different ways:
-For seeking more information about the knowledge domains: to be informed about the actual situation and to be able to measure the gap.
-For decision making: to take actions based on the results and draw new plans/apply enhancements based on assessments.
The data used to feed the job occupations, competencies and the mapping of each job to each competency is derived from the real data published on O*NET [START_REF]O*NET OnLine[END_REF].
Data used to feed the courses were derived from the Computer Science and Engineering (Qatar University) online curricula [12]; we focus on the data related to the study plan of a student who wants to complete the requirements to graduate from the Bachelor of Computer Science program.
Conclusion
Understanding the gap between the supply and demand of competencies is a rich area that is worth exploring. Finding an efficient and accurate way of exploring it has proved to be a challenging endeavor for educators, employers and job seekers. The examples given in this paper show how the proposed ontology can be used by the identified users to obtain the needed gap analysis of competencies. The proposed ontology is intended to be a means of technical communication between all of these stakeholders. Such a link could serve as a solid base for fruitful collaboration, which may lead to an efficient mechanism that helps them build a solid competency model.
Figure 1: Ontology Classes
Most classes derived from the Education domain have remained general, such as Study_Plan and Learning_Outcomes. Courses have been categorized into further subclasses to reflect their classification. As for classes derived from the Industry domain, Competencies has been classified according to O*NET's [5] native classification into Skills, Abilities and Knowledge. The Employee class has a subclass created for every job occupation on which the employee would like to run the gap analysis. These classes are used by the Reasoner to derive inferences about how fit the Employees who have applied are for the jobs they seek.
Figure 2: Ontology Properties
Figure 3: Ontology object property diagram
-Object Property Chains: Property chains help us infer information about classes from how they are linked to each other. For example, if we would like to obtain the list of courses Mariam has enrolled in by only
Figure 4: Ontology Property Chain
Figure (6) outlines how the implemented classes and object properties interact to achieve the purpose of the ontology.
Figure 6: Ontology Design
-Disjoint Properties: Some of the facets on classes introduced in the ontology are shown in Table (2). The logic behind the first facet is that all employees who intend to apply for a specific job are classified under one class. The way the facet expression is written ensures that any individual under the class Employee that has the property assertion "hasAppliedForOccupation value Database_Administrators" gets classified by the Reasoner under this class.
Figure 7: Employee Object Property Assertion
Due to the open world assumption (OWA) that OWL adopts, individuals are not distinguished merely by the way they are named: two individuals with different names may be assumed to be equivalent, and individuals are not assumed to be different unless this is stated explicitly. This requires us to explicitly define all the individuals in the class Employee as different individuals. This prevents the Reasoner from assuming that distinct individuals are equal and would prevent inconsistent inheritance.
Figure (8) shows all required competencies for the Class Occupation based on the inference result.
Figure 8: Closing instances under Class Competency
Not all of the data available on the website for all occupations, competencies and university courses were entered into the ontology, due to the limitations of the inference engines of the ontology tool used. The entered data were selected based on criteria that help illustrate different examples of the ontology's uses. A Learning Outcome mapping and the selection of student study plans are under preparation, in an effort to mimic the exercise educators should follow in order to assess their programs.
Table 1: A list of Asserted Property Chains (Name Object Property / Asserted Property Chain)
- EnrolledInCourse: hasSelected o hasCourse SubPropertyOf EnrolledInCourse
- hasGainedSkill: EnrolledInCourse o generatesSkill SubPropertyOf hasGainedSkill
- hasGainedAbility: EnrolledInCourse o generatesAbility SubPropertyOf hasGainedAbility
- hasGainedKnowledge: EnrolledInCourse o generatesKnowledge SubPropertyOf hasGainedKnowledge
- generatesSkill: generateLO o isEqualToSkill SubPropertyOf generatesSkill
- generatesAbility: generateLO o isEqualToAbility SubPropertyOf generatesAbility
- generatesKnowledge: generateLO o isEqualToKnowledge SubPropertyOf generatesKnowledge
Table 2: Ontology Facets (Facet / Class Name)
- Facet: Employee and (hasAppliedForOccupation value Database_Administrators); Class Name: Applied_for_Database_Administrators_Occupation
- Facet: Employee and (lacksAbilities only (not (is-An-AbilityRequiredFor value Database_Administrators))) and (lacksKnowledge only (not (is-A-KnowledgeRequiredFor value Database_Administrators))) and (lacksSkills only (not (is-A-SkillRequiredFor value Database_Administrators))); Class Name: Fit_for_Database_Administrators_Occupation
Acknowledgement
This publication was made possible by NPRP grant # NPRP 7-1883-5-289 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
"1003422"
] | [
"257464",
"145304",
"33804",
"257464"
] |
01483315 | en | [
"info"
Shankara Narayanan
Khushraj Madnani
email: khushraj@cse.iitb.ac.in
Paritosh Pandya
email: pandya@tifr.res.in
A Regular Metric Temporal Logic
Keywords:
We study an extension of MTL in pointwise time with regular expression guarded modality Reg I (re) where re is a regular expression over subformulae. We study the decidability and expressiveness of this extension, called RegMTL, as well as its fragment SfrMTL where only star-free extended regular expressions are allowed. Using the technique of temporal projections, we show that RegMTL has decidable satisfiability by giving an equisatisfiable reduction to MTL. Moreover, we identify a subset MITL[UReg] for which our (polynomial time computable) equi-satisfiable reduction gives rise to formulae of MITL. Thus, MITL[UReg] has elementary decidability. As our second main result, we show that SfrMTL is equivalent to partially ordered (or very weak) 1-clock alternating timed automata. We also identify natural fragments of logic TPTL which correspond to our logics.
Introduction
Temporal logics provide constructs to specify qualitative ordering between events in time. Real time logics are quantitative extensions of temporal logics with the ability to specify real time constraints amongst events. The main modality in Metric Temporal Logic (MTL) is the until modality a U I b which, when asserted at a point specifies that there is a future within a time distance in I where b holds, and a holds continuously till then. Two notions of MTL have been studied in the literature : continuous and pointwise. It is known [START_REF] Alur | The benefits of relaxing punctuality[END_REF] that satisfiability checking of MTL is undecidable in the continuous semantics even for finite words, while for the pointwise case, this is decidable [START_REF] Ouaknine | On the decidability of metric temporal logic[END_REF]. The complexity of the satisfiability problem for MTL over finite timed words is known to be non-primitive recursive (NPR) in the pointwise semantics, while if the intervals I allowed in the until modalities are non-punctual, then the complexity drops to EXPSPACE in both the pointwise and continuous semantics. The fragment of MTL with only non-punctual intervals is denoted MITL, and was introduced in [START_REF] Alur | The benefits of relaxing punctuality[END_REF]. A non-punctual interval has the form x, y where x < y, x ∈ N, y ∈ N ∪ {∞}.
There are various natural extensions of temporal logics have been studied both in classical and timed logic domain. Wolper extended LTL with certain grammar operators to achieve MSO completeness. Baziramwabo, McKenzie and Thérien extended LTL with modular and group modalities, and showed that the latter is as expressive as regular languages [START_REF] Baziramwabo | Modular temporal logic[END_REF]. Counting LTL is an extension of LTL with threshold counting. It has been shown that this extension does not increase the expressive power of LTL [START_REF] Laroussinie | Counting ltl. In TIME[END_REF]. As another extension, LTL with just modulo counting modalities has been studied by [START_REF] Lodaya | Ltl can be more succinct[END_REF]. In timed logics, Raskin's Ph.D thesis studied various extensions of MITL with the ability to count over the entire model. Rabinovich et. al. extended continuous MITL with counting (called the C modality) and Pnueli modalities [START_REF] Rabinovich | Complexity of metric temporal logic with counting and pnueli modalities[END_REF] and showed that these extensions are more expressive than MITL. The counting modalities C n (φ), specify that the number of points that satisfy φ within the next unit interval is at least n. The Pnueli modality, P n k is a generalization of the threshold counting modality : P n k (φ 1 , . . . , φ k ) specifies that there is an increasing sequence of timestamps t 1 , . . . , t k in the next unit interval such that φ i is true at t i .
Contributions This paper is on extensions of MTL in the point-wise semantics. Contributions of this paper are as follows:
Generalizations: We generalize some of these extended modalities(Pnueli, modulo counting) that has been studied in the literature with a Reg I and UReg I modality which allows us to specify a regular expression over subformulae within some time interval in the future. Let re(φ 1 , . . . , φ k ) be a regular expression over formulae φ 1 , . . . , φ k . The Reg I (re(φ 1 , . . . , φ k )) modality specifies that the pattern of the behaviour of the subformulae, φ 1 , . . . , φ k , in the time segment within interval I in the future is in accordance with re(φ 1 , . . . , φ k ), while the ψ 1 UReg I,re(φ1,...,φ k ) ψ 2 modality asserts that there exist a point j in the future within interval I where ψ 2 is true, and at all the points strictly between the present point and j, ψ 1 is true and the behaviour of φ 1 , . . . , φ k in this region is in accordance with re(φ 1 , . . . , φ k ). This extension of MTL is denoted as RegMTL. Satisfiability Checking: We show that RegMTL is decidable over finite timed words with non primitive recursive complexity using the technique of oversampled temporal projections. The check Reg I (re) at each point in the timed word is taken care of by annotating the timed word with an encoding of the runs of the DFA corresponding to the re. We show that the runs of the automaton can be captured in a way requiring only bounded amount of information, and that this can be captured in MTL, giving rise to an equisatisfiable MTL formula.
Automata-Logic Connection and Expressiveness:
We show that SfrMTL, the subclass of RegMTL where the regular expressions are star-free, characterizes exactly 1-clock partially ordered alternating timed automata. If K is the maximum constant used in the automaton, we show that the behaviour of each location of the automaton over time can be asserted using LTL formulae over the timed regions [0, 0], (0, 1), . . . , [K, K], (K, ∞). This enables us to assert the behaviour of the automaton starting at any location as a Reg I (re) formula where re is captured by an LTL formula. This also implies that SfrMTL is exactly equivalent to 1-TPTL (the most expressive decidable fragment of TPTL in pointwise semantics). To the best of our knowledge, this is the first such equivalence between a logic with interval constraints (SfrMTL) and a logic with freeze quantification (1-TPTL) in the pointwise semantics.
Complexity: We focus on non-punctual fragments of RegMTL, and show that satisfiability with only the UReg modality has a 2EXPSPACE upper bound, while, surprisingly, if one considers a special case of the Reg I modality which only specifies the parity of a proposition in the next unit interval (the iseven modality), the complexity is F ω ω -hard. Finally, we also explore the complexity with UM, a restricted form of UReg that allows one to specify only modulo counting constraints, and show its satisfiability to be EXPSPACE-complete.
It is important to note that, in spite of being a special case, UM is exponentially more succinct than UReg.
Novel Proof Techniques: The logic RegMTL uses modalities that can assert the truth of a regular expression within a time interval. The satisfiability of RegMTL requires one to check the truth of these regular expressions at arbitrary points of the model; we do this by encoding the runs of the automaton corresponding to the regular expression starting at each point in the model. We show that the information pertaining to the potentially unbounded number of runs originating from the unboundedly many points of the model can be stored using bounded memory, by merging runs as soon as they reach the same state. This idea of merging the runs and encoding them in the model is new, to the best of our knowledge. The other novelty in terms of proof technique appears while proving that RegMTL is at least as expressive as partially ordered 1-clock alternating timed automata. The timed behaviours enforced by any state of the automaton are captured by writing LTL formulae over the clock regions, and putting them together as RegMTL formulae Reg I (re), where re is the star-free expression corresponding to the LTL formula asserted over clock region I.
Preliminaries
Timed Temporal Logics
We first describe the syntax and semantics of the timed temporal logics needed in this paper: MTL and TPTL. Let Σ be a finite set of propositions. A finite timed word over Σ is a tuple ρ = (σ, τ ). σ and τ are sequences σ 1 σ 2 . . . σ n and t 1 t 2 . . . t n respectively, with σ i ∈ 2 Σ -∅ and t i ∈ R ≥0 for 1 ≤ i ≤ n, and t i ≤ t i+1 for all i, i + 1 ∈ dom(ρ), where dom(ρ) is the set of positions {1, 2, . . . , n} in the timed word. Given Σ = {a, b}, ρ = ({a, b}, 0.8)({a}, 0.99)({b}, 1.1) is a timed word. ρ is strictly monotonic iff t i < t i+1 for all i, i + 1 ∈ dom(ρ). Otherwise, it is weakly monotonic. The set of finite timed words over Σ is denoted T Σ * . Metric Temporal Logic (MTL) extends linear temporal logic (LTL) by adding timing constraints to the "until" modality of LTL. MTL is parameterized by a permitted set of open, half-open or closed time intervals, denoted by Iν. The end points of these intervals are in N ∪ {0, ∞}. Such an interval is denoted ⟨a, b⟩, where ⟨ stands for ( or [ and ⟩ stands for ) or ]; for example, [3, ∞). For t ∈ R ≥0 and an interval ⟨a, b⟩, t + ⟨a, b⟩ stands for the interval ⟨t + a, t + b⟩.
Metric Temporal Logic
Given a finite alphabet Σ, the formulae of MTL are built from Σ using boolean connectives and time constrained version of the modality U as follows:
ϕ ::= a(∈ Σ) |true |ϕ ∧ ϕ | ¬ϕ | ϕ U I ϕ, where I ∈ Iν. For a timed word ρ = (σ, τ ) ∈ T Σ * , a position i ∈ dom(ρ) ∪ {0}
, and an MTL formula ϕ, the satisfaction of ϕ at a position i of ρ is denoted (ρ, i) |= ϕ, and is defined as follows:
ρ, i |= a ↔ a ∈ σ i
ρ, i |= ¬ϕ ↔ ρ, i ⊭ ϕ
ρ, i |= ϕ 1 ∧ ϕ 2 ↔ ρ, i |= ϕ 1 and ρ, i |= ϕ 2
ρ, i |= ϕ 1 U I ϕ 2 ↔ ∃j > i, ρ, j |= ϕ 2 , t j -t i ∈ I, and ρ, k |= ϕ 1 ∀ i < k < j
We assume the existence of a special point called 0, outside dom(ρ). The time stamp of this point is 0 (t 0 = 0). ρ satisfies ϕ, denoted ρ |= ϕ, iff ρ, 1 |= ϕ. The language of an MTL formula ϕ is L(ϕ) = {ρ | ρ, 0 |= ϕ}. Two formulae ϕ and φ are said to be equivalent, denoted ϕ ≡ φ, iff L(ϕ) = L(φ). Additional temporal connectives are defined in the standard way: we have the constrained future eventuality operator ♦ I a ≡ true U I a and its dual □ I a ≡ ¬♦ I ¬a. We also define the next operator as O I φ ≡ ⊥ U I φ. Weak versions of the operators are defined as ♦ ns I a ≡ a ∨ ♦ I a, □ ns I a ≡ a ∧ □ I a, a U ns I b ≡ b ∨ [a ∧ (a U I b)] if 0 ∈ I, and a U ns I b ≡ a ∧ (a U I b) if 0 ∉ I.
Also, a Wb is a shorthand for a ∨ (a Ub). The subclass of MTL obtained by restricting the intervals I in the until modality to non-punctual intervals is denoted MITL. Theorem 1 ([10]). Satisfiability checking of MTL is decidable over finite timed words and is non-primitive recursive.
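To make the pointwise semantics above concrete, here is a small evaluator for MTL over finite timed words, written as a plain Python sketch. The tuple-based formula encoding, the 0-indexed positions and the half-open reading of the timing interval are conveniences of the sketch only, not part of the logic's definition.

```python
def holds(rho, i, phi):
    sigma, tau = rho                       # sigma: list of event sets, tau: list of timestamps
    kind = phi[0]
    if kind == "ap":
        return phi[1] in sigma[i]
    if kind == "not":
        return not holds(rho, i, phi[1])
    if kind == "and":
        return holds(rho, i, phi[1]) and holds(rho, i, phi[2])
    if kind == "U":                        # ("U", (l, u), phi1, phi2); interval read as [l, u) here
        (l, u), phi1, phi2 = phi[1], phi[2], phi[3]
        for j in range(i + 1, len(sigma)):
            if l <= tau[j] - tau[i] < u and holds(rho, j, phi2):
                if all(holds(rho, k, phi1) for k in range(i + 1, j)):
                    return True
        return False
    raise ValueError(kind)

rho = ([{"a"}, {"a"}, {"b"}], [0.1, 0.4, 0.8])
print(holds(rho, 0, ("U", (0, 1), ("ap", "a"), ("ap", "b"))))   # True: b at 0.8, a in between
print(holds(rho, 0, ("U", (1, 2), ("ap", "a"), ("ap", "b"))))   # False: no b at the right distance
```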
Timed Propositional Temporal Logic (TPTL)
In this section, we recall the syntax and semantics of TPTL. A prominent real time extension of linear temporal logic is TPTL, where timing constraints are specified with the help of freeze clocks. The set of TPTL formulas is defined inductively as
ϕ ::= a(∈ Σ) | true | ϕ ∧ ϕ | ¬ϕ | ϕ Uϕ | y.ϕ | y ∈ I
where y ranges over a set C of clock variables and I ∈ Iν. TPTL is interpreted over finite timed words over Σ. The truth of a formula is interpreted at a position i ∈ N along the word. For a timed word ρ = (σ 1 , t 1 ) . . . (σ n , t n ), we define the satisfaction relation ρ, i, ν |= φ, saying that the formula φ is true at position i of the timed word ρ with valuation ν of all the clock variables.
1. ρ, i, ν |= a ↔ a ∈ σ i
2. ρ, i, ν |= ¬ϕ ↔ ρ, i, ν ⊭ ϕ
3. ρ, i, ν |= ϕ 1 ∧ ϕ 2 ↔ ρ, i, ν |= ϕ 1 and ρ, i, ν |= ϕ 2
4. ρ, i, ν |= x.ϕ ↔ ρ, i, ν[x ← t i ] |= ϕ
5. ρ, i, ν |= x ∈ I ↔ t i -ν(x) ∈ I
6. ρ, i, ν |= ϕ 1 Uϕ 2 ↔ ∃j > i, ρ, j, ν |= ϕ 2 , and ρ, k, ν |= ϕ 1 ∀ i < k < j
ρ satisfies φ, denoted ρ |= φ, iff ρ, 1, 0 |= φ.
Here 0 is the valuation obtained by setting all clock variables to 0. We denote by k-TPTL the fragment of TPTL using at most k clock variables.
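The freeze quantifier can be made similarly concrete. The sketch below follows clauses 1-6 above over finite timed words; the tuple encoding of formulae, the 0-indexed positions and the half-open reading of the interval in the x ∈ I clause are assumptions made only to keep the example short.

```python
def tptl(rho, i, nu, phi):
    sigma, tau = rho
    kind = phi[0]
    if kind == "true":
        return True
    if kind == "ap":
        return phi[1] in sigma[i]
    if kind == "not":
        return not tptl(rho, i, nu, phi[1])
    if kind == "and":
        return tptl(rho, i, nu, phi[1]) and tptl(rho, i, nu, phi[2])
    if kind == "freeze":                     # x.phi : remember the current timestamp in x
        _, x, sub = phi
        return tptl(rho, i, {**nu, x: tau[i]}, sub)
    if kind == "in":                         # x ∈ [l, u), half-open only for brevity
        _, x, (l, u) = phi
        return l <= tau[i] - nu[x] < u
    if kind == "U":                          # untimed, strict-future until
        _, p1, p2 = phi
        return any(tptl(rho, j, nu, p2) and
                   all(tptl(rho, k, nu, p1) for k in range(i + 1, j))
                   for j in range(i + 1, len(sigma)))
    raise ValueError(kind)

# x.(true U (b ∧ x ∈ [1, 2))): some strictly later b lies 1 to 2 time units from now.
phi = ("freeze", "x", ("U", ("true",), ("and", ("ap", "b"), ("in", "x", (1, 2)))))
rho = ([{"a"}, {"a"}, {"b"}], [0.0, 0.5, 1.4])
print(tptl(rho, 0, {}, phi))   # True: the b at time 1.4 satisfies the frozen constraint
```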
MTL with Regular Expressions (RegMTL)
In this section, we introduce the extension of MTL with regular expressions that forms the core of the paper. These modalities can assert the truth of a regular expression within a particular time interval with respect to the present point. For example, Reg (0,1) (ϕ 1 .ϕ 2 ) * , when evaluated at a point i, asserts that either τ i+1 ≥ τ i + 1 (corresponding to the empty word ε) or there exist 2k points
τ i < τ i1 < τ i2 < • • • < τ i 2k < τ i+1 , k > 0, 0 < τ i+1 -τ i < 1, such that ϕ 1
evaluates to true at τ i2j+1 , and ϕ 2 evaluates to true at τ i2j+2 , for all j ≥ 0. RegMTL Syntax: Formulae of RegMTL are built from Σ (atomic propositions) as follows:
ϕ ::= a(∈ Σ) | true | ϕ ∧ ϕ | ¬ϕ | Reg I re | ϕ UReg I,re ϕ
re ::= ϕ | re.re | re + re | re * , where I ∈ Iν.
An atomic regular expression re is any well-formed formula ϕ ∈ RegMTL. For a regular expression re, let Γ be the set of all subformulae and their negations appearing in re. For example, if re = aUReg (0,1),Reg [1,2] [Reg (0,1) b] b, then Γ consists of Reg [1,2] [Reg (0,1) b], Reg (0,1) b, b and their negations. Let Cl(Γ) denote consistent sets 2 in P(Γ). L(re) is the set of strings over Cl(Γ) defined as follows. Let S ∈ Cl(Γ).
L(re) = {S | a ∈ S} if re = a,
        {S | ϕ 1 , ϕ 2 ∈ S} if re = ϕ 1 ∧ ϕ 2 ,
        {S | ϕ ∉ S} if re = ¬ϕ,
        L(re 1 ).L(re 2 ) if re = re 1 .re 2 ,
        L(re 1 ) ∪ L(re 2 ) if re = re 1 + re 2 ,
        [L(re 1 )] * if re = (re 1 ) * .
If re is not an atomic regular expression, but has the form re 1 + re For ρ=({a}, 0.1)({a}, 0.3)({a, b}, 1.01), ρ, 1 |= ϕ, since a∈σ 2 , b∈σ 3 , τ 3 -τ 1 ∈(0, 1) and the untimed word obtained at position 2 is a which is in L(ab * ). For ρ = ({a}, 0.1)({a}, 0.3)({a}, 0.5)({a}, 0.9)({b}, 1.01), we know that ρ, 1 ϕ, since the untimed word obtained is aaa / ∈ L(ab * ). Example 2. Consider the formula ϕ = Reg (0,1) [¬Reg (0,1) a]. Then Γ = {¬Reg (0,1) a, Reg (0,1) a, a, ¬a}. 1. For the word ρ = ({a, b}, 0.1)({a, b}, 1.01)({a}, 1.2), TSeg(Γ, (0, 1), 1) = {a, Reg (0,1) a} is the marking of position 2. ρ, 2 |= Reg (0,1) a since ρ, 3 |= a. Hence, ρ, 1 ϕ. 2. For ρ = ({a, b}, 0.1)({b}, 0.7)({a, b}, 1.01)({a}, 1.2), TSeg(Γ, (0, 1), 1)={b}.{a, b, Reg (0,1) a}.
ρ, 1 ⊭ ϕ.
2 a set S is consistent iff ϕ ∈ S ↔ ¬ϕ ∉ S
3. Lastly, for ρ = ({a, b}, 0.1)({a, b}, 1.01)({b}, 1.2), we obtain ρ, 1 |= ϕ, since ρ, 3 ⊭ a, and hence position 2 is not marked Reg (0,1) a.
Example 3. Consider the formula ϕ = Reg (0,1) [Reg (0,1) a] * . For ρ = ({a, b}, 0.1)({a, b}, 0.8)({b}, 0.99)({a, b}, 1.5), we have ρ, 1 ⊭ Reg (0,1) [Reg (0,1) a] * , since point 2 is not marked Reg (0,1) a, even though point 3 is.
The language accepted by a RegMTL formula ϕ is given by L(ϕ) = {ρ | ρ, 0 |= ϕ}.
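As a concrete check of the Reg I modality, the following sketch evaluates Reg I (re) at a position of a timed word for the simple case where the regular expression is over plain propositions and every event carries exactly one letter; intervals are taken to be open, as in the (0, 1) examples above, and positions are 0-indexed. The general case, where atoms of re are arbitrary subformulae, would first need the points to be marked with the subformulae of Γ they satisfy.

```python
import re

def reg_holds(rho, i, interval, pattern):
    # Build the untimed word of the events whose timestamps lie strictly inside
    # tau_i + interval, then match it against the regular expression.
    sigma, tau = rho
    l, u = interval
    word = "".join(next(iter(sigma[j]))            # one letter per event in this sketch
                   for j in range(len(sigma))
                   if l < tau[j] - tau[i] < u)
    return re.fullmatch(pattern, word) is not None

rho1 = ([{"a"}, {"a"}, {"a"}, {"b"}], [0.1, 0.3, 0.5, 1.2])
print(reg_holds(rho1, 0, (0, 1), "a*"))    # True: the segment in (0.1, 1.1) spells "aa"
rho2 = ([{"a"}, {"a"}, {"b"}, {"b"}], [0.1, 0.3, 0.5, 1.2])
print(reg_holds(rho2, 0, (0, 1), "a*"))    # False: the segment spells "ab"
```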
Subclasses of RegMTL
As a special subclass of RegMTL, we consider the case when the regular expressions do only mod counting. With this restriction, the ϕUReg I,re ϕ modality is written as ϕUM I,θ ϕ where θ has the form #ψ = k%n, while the Reg I modality is written as MC k%n I . In both cases, k, n ∈ N and 0 ≤ k ≤ n -1. This restriction of RegMTL, written MTL mod has the form ϕ ::
= a(∈ Σ) |true |ϕ ∧ ϕ | ¬ϕ | ϕ | MC k%n I ϕ | ϕUM I,θ ϕ. The obvious semantics of ρ, i |= MC k%n I ϕ checks if the number of times ϕ is true in τ i + I is M (n) + k, where M (n)
denotes a non-negative integer multiple of n, and 0 ≤ k ≤ n -1. ρ, i |= ϕ 1 UM I,#ψ=k%n ϕ 2 checks the existence of j > i such that τ j -τ i ∈ I, and the number of times ψ is true in between i, j is
M (n) + k, 0 ≤ k ≤ n -1.
Example 4. The formula ϕ = ns (a → MC 0%2 (0,1) b) says that whenever there is an a at a time point t, the number of b's in the interval (t, t + 1) is even. The formula ψ = (a → trueUM (0,1),#b=0%2 (a ∨ b)) when asserted at a point i checks the existence of a point j > i such that a or b ∈ σ j , τ j -τ i ∈ (0, 1), a ∈ σ k for all i < k < j, and the number of points between i, j where b is true is even.
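The modulo-counting modality of Example 4 can be evaluated in the same spirit; the sketch below simply counts the witnesses of a proposition in the (open) interval τ i + I and compares the count modulo n. The word and the indices are invented for illustration, and positions are 0-indexed.

```python
def mc_holds(rho, i, interval, prop, k, n):
    # MC^{k%n}_I prop at position i: the number of points j with
    # tau_j - tau_i in I and prop in sigma_j is congruent to k modulo n.
    sigma, tau = rho
    l, u = interval
    count = sum(1 for j in range(len(sigma))
                if l < tau[j] - tau[i] < u and prop in sigma[j])
    return count % n == k

rho = ([{"a"}, {"b"}, {"b"}, {"b"}], [0.0, 0.2, 0.6, 1.3])
print(mc_holds(rho, 0, (0, 1), "b", 0, 2))   # True: the two b's in (0, 1) give an even count
print(mc_holds(rho, 0, (0, 1), "b", 1, 2))   # False: the count is not odd
```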
The subclass of RegMTL using only the UReg modality is denoted RegMTL[UReg]. Likewise, the subclass of MTL mod with only UM is denoted MTL mod [UM], while MTL mod [MC] denotes the subclass using just MC.
Temporal Projections
In this section, we discuss the technique of temporal projections used to show the satisfiability of RegMTL. Let Σ, X be finite sets of propositions such that Σ ∩ X = ∅. (Σ, X)-simple extensions and Simple Projections: A (Σ, X)-simple extension is a timed word ρ' = (σ', τ') over X ∪ Σ such that at any point i ∈ dom(ρ'), σ' i ∩ Σ ≠ ∅. For Σ = {a, b}, X = {c}, ({a}, 0.2)({a, c}, 0.3)({b, c}, 1.1) is a (Σ, X)-simple extension while ({a}, 0.2)({c}, 0.3)({b}, 1.1) is not. Given a (Σ, X)-simple extension ρ', the simple projection of ρ' with respect to X, denoted ρ'\X, is the word obtained by deleting the elements of X from each σ' i . For Σ = {a, b}, X = {c} and ρ' = ({a}, 0.1)({b}, 0.9)({a, c}, 1.1), ρ' \ X = ({a}, 0.1)({b}, 0.9)({a}, 1.1). (Σ, X)-oversampled behaviours and Oversampled Projections: A (Σ, X)-oversampled behaviour is a timed word ρ' = (σ', τ') over X ∪ Σ such that σ' 1 ∩ Σ ≠ ∅ and σ' |dom(ρ')| ∩ Σ ≠ ∅. Oversampled behaviours are more general than simple extensions since they allow occurrences of new points between the first and the last position. These new points are called oversampled points. All other points are called action points. For Σ = {a, b}, X = {c}, ({a}, 0.2)({c}, 0.3)({b}, 0.7)({a}, 1.1) is a (Σ, X)-oversampled behaviour, while ({a}, 0.2)({c}, 0.3)({c}, 1.1) is not. Given a (Σ, X)-oversampled behaviour ρ' = (σ', τ'), the oversampled projection of ρ' with respect to Σ, denoted ρ' ↓ X, is defined as the timed word obtained by removing the oversampled points and then erasing the symbols of X from the action points. ρ = ρ' ↓ X is a timed word over Σ.
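The two projections can be phrased directly as list manipulations. In the sketch below, a timed word is a pair of a list of event sets and a list of timestamps; the function names are ours, and the examples mirror the ones given above but are not taken from them verbatim.

```python
def simple_projection(rho, X):
    # Erase the letters of X at every point; every point is assumed to be an action point.
    sigma, tau = rho
    return ([s - X for s in sigma], list(tau))

def oversampled_projection(rho, X, Sigma):
    # Drop the oversampled points (those carrying no Sigma letter), then erase X.
    sigma, tau = rho
    kept = [(s - X, t) for s, t in zip(sigma, tau) if s & Sigma]
    return ([s for s, _ in kept], [t for _, t in kept])

Sigma, X = {"a", "b"}, {"c"}
rho = ([{"a"}, {"c"}, {"b", "c"}, {"a"}], [0.2, 0.3, 0.7, 1.1])
print(oversampled_projection(rho, X, Sigma))
# ([{'a'}, {'b'}, {'a'}], [0.2, 0.7, 1.1])
print(simple_projection(([{"a"}, {"a", "c"}, {"b", "c"}], [0.1, 0.9, 1.1]), X))
# ([{'a'}, {'a'}, {'b'}], [0.1, 0.9, 1.1])
```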
A temporal projection is either a simple projection or an oversampled projection. We now define equisatisfiability modulo temporal projections. Given MTL formulae ψ and φ, we say that φ is equisatisfiable to ψ modulo temporal projections iff there exist disjoint sets X, Σ such that (1) φ is over Σ, and ψ over Σ ∪ X, (2) For any timed word ρ over Σ such that ρ |= φ, there exists a timed word ρ such that ρ |= ψ, and ρ is a temporal projection of ρ with respect to X, (3) For any behaviour ρ over Σ ∪ X, if ρ |= ψ then the temporal projection ρ of ρ with respect to X is well defined and ρ |= φ. If the temporal projection used above is a simple projection, we call it equisatisfiability modulo simple projections and denote it by φ = ∃X.ψ. If the projection in the above definition is an oversampled projection, then it is called equisatisfiability modulo oversampled projections and is denoted φ ≡ ∃ ↓ X.ψ. Equisatisfiability modulo simple projections are studied extensively [START_REF] Kini | On construction of safety signal automata for MITL[U, S] using temporal projections[END_REF][START_REF] Prabhakar | On the expressiveness of MTL with past operators[END_REF][START_REF] Francois | Logics, Automata and Classical Theories for Deciding Real Time[END_REF]. It can be seen that if
ϕ 1 = ∃X 1 .ψ 1 and ϕ 2 = ∃X 2 .ψ 2 , with X 1 , X 2 disjoint, then ϕ 1 ∧ ϕ 2 = ∃(X 1 ∪ X 2 ).(ψ 1 ∧ ψ 2 ) [8].
Unlike simple projections, when one considers oversampled projections, there is a need to relativize the formula with respect to the original alphabet Σ to preserve satisfiability. As an example, let φ = (0,1) a be a formula over Σ = {a}, and let
ψ 1 = (b ↔ ¬a) ∧ (¬b U (0,1) b), ψ 2 = (c ↔ □ [0,1) a) ∧ c be two formulae over Σ 1 = Σ ∪ {b} and Σ 2 = Σ ∪ {c} respectively. Clearly, φ = ∃ ↓ {b}ψ 1 and φ = ∃ ↓ {c}ψ 2 . However, φ ≠ ∃ ↓ {b, c}(ψ 1 ∧ ψ 2 ), since the non-action point b contradicts the condition □ [0,1) a corresponding to c. However, if ψ 1 , ψ 2 are relativized with respect to Σ 1 , Σ 2 respectively, then we will not have this problem.
Relativizing
ψ 1 , ψ 2 with respect to Σ 1 , Σ 2 gives Rel(ψ 1 , Σ 1 ), Rel(ψ 1 , Σ 2 ) as (act 1 →(b↔¬a))∧[(act 1 →¬b) U (0,1) (b∧act 1 )], and (act 2 → (c ↔ [0,1) (act 2 → a))) ∧ (act 2 ∧ c).
This resolves the problem and indeed
φ = ∃ ↓ {b, c}(Rel(ψ 1 , Σ 1 ) ∧ Rel(ψ 2 , Σ 2 )).
Satisfiability, Complexity, Expressiveness
The main results of this section are as follows.
Theorem 2. 1. Satisfiability of RegMTL is decidable.
2. Satisfiability of MITL mod [UM] is EXPSPACE-complete. 3. Satisfiability of MITL[UReg] is in 2EXPSPACE. 4. Satisfiability of MITL mod [MC] is F ω ω -hard.
We will use equisatisfiability modulo oversampled projections in the proof of Theorem 2. This technique is used to show the decidability of RegMTL, 2EXPSPACE-hardness of RegMITL[UReg], and Ackermannian-hardness of MITL mod [MC]. The proof of Theorem 2.1 follows from Lemmas 4 and 5, and from Theorem 1. Details of Theorems 2.2, 2.3, 2.4 can be found in Appendices B.2, B.3 and 3.3.
Theorem 3. RegMTL[UReg] ⊆ RegMTL[Reg], MTL mod [UM] ⊆ MTL mod [MC].
Theorem 3 shows that the Reg modality can capture UReg (and likewise, MC captures UM). Thus, RegMTL ≡ RegMTL[Reg]. The proofs can be seen in Appendix C.
Equisatisfiable Reduction
In this section, we describe the steps to obtain an equisatisfiable reduction from RegMTL to MTL which shows that satisfiability checking of RegMTL is decidable. Starting from a RegMTL formula ϕ, the following steps are taken.
1.
Flattening. Each of the modalities Reg I , UReg that appear in the formula ϕ are replaced with fresh witness propositions to obtain a flattened formula. For example, if
ϕ = Reg (0,1) [aUReg (1,2),Reg (0,1) (a+b) * b], then flattening yields ns [w 1 ↔ Reg (0,1) w 2 ] ∧ ns [w 2 ↔ aUReg (1,2),w3 b] ∧ ns [w 3 ↔ Reg (0,1) (a + b) * ],
where w 1 , w 2 , w 3 are fresh witness propositions. Let W be the set of fresh witness propositions such that Σ ∩ W = ∅. After flattening, the modalities Reg I , UReg appear only within temporal definitions. Temporal definitions are of the form ns [a ↔ Reg I atom] or ns [a ↔ xUReg I ,atom y], where atom is a regular expression over Σ ∪ W , W being the set of fresh witness propositions used in the flattening, and I is either a unit length interval or an unbounded interval. 2. Consider any temporal definition T and a timed word ρ over Σ ∪ W . Each of the regular expression atom has a corresponding minimal DFA recognizing it. We first construct a simple extension ρ which marks each position of ρ using the run information from the minimal DFA that accepts the regular expression atom. However, to check that the regular expression atom holds good in a particular time interval from a point in the timed word, we need to oversample ρ by introducing some extra points. Based on this oversampling, each point of ρ can be marked a as a witness of Reg I atom (or xUReg I ,atom y). The construction of the simple extension ρ is in section 3.2, while details of the elimination of Reg I atom, xUReg I ,atom y using oversampling are in the lemmas 4 and 5.
Construction of Simple Extension ρ
For any given ρ over Σ ∪ W , where W is the set of witness propositions used in the temporal definitions T of the forms ns [a ↔ Reg I atom] or ns [a ↔ xUReg I ,atom y], we construct a simple extension ρ that marks points of ρ with the run information of the minimal DFA accepting atom. This results in the extended alphabet Σ ∪ W ∪ Threads ∪ Merge for ρ . The behaviour of Threads and Merge are explained below.
Let AP denote the (sub)set of propositions over which atom is defined. Let A atom = (Q, 2 AP , δ, q 1 , Q F ) be the minimal DFA that accepts atom and let Q = {q 1 , q 2 , . . . , q m }. Let In = {1, 2, . . . , m} be the indices of the states. We have to mark every point i in dom(ρ ) with a or ¬a depending on the truth of Reg I atom or xUReg I,atom y at i. To do this, we "run" A atom starting from each point i in dom(ρ ). At any point i of dom(ρ ), we thus have the states reached in A atom after processing the i -1 prefixes of ρ , and we also start a new thread at position i. This way of book-keeping will lead to maintaining unbounded number of threads of the run of A atom . To avoid this, we "merge" threads i, j if the states reached at points i, j are the same, and maintain the information of the merge. It can be seen then that we need to maintain at most m distinct threads at each point, given m is the number of states of A atom . We mark the points in ρ with the state information on each thread and the information about which threads are being merged (if any), with the following set of propositions : 1. Let Th i (q x ) be a proposition that denotes that the ith thread is active and is in state q x , while Th i (⊥) be a proposition that denotes that the ith thread is not active. The set Threads consists of propositions Th i (q x ), Th i (⊥) for 1 ≤ i, x ≤ m. 2. If at a position e, we have Th i (q x ) and Th j (q y ) for i < j, and if δ(q x , σ e ) = δ(q y , σ e ), then we can merge the threads i, j at position e + 1. Let (i, j) be a proposition that signifies that threads i, j have been merged. In this case, (i, j) is true at position e + 1. Let Merge be the set of all propositions (i, j) for 1 ≤ i < j ≤ m. At most m threads can be running at any point e of the word. We now describe the conditions to be checked in ρ .
Figure 2: Encoding runs and merging of threads.
Figure 3: Linking of R pref and R suf .
Initial condition-At the first point of the word, we start the first thread and initialize all other threads as ⊥. This could be specified as ϕ init = Th 1 (q 1 ) ∧ ⋀ i>1 Th i (⊥).
Initiating runs at all points-To check the regular expression within an arbitrary interval, we need to start a new run from every point: ϕ start = □ ns (⋁ i≤m Th i (q 1 )).
Disallowing Redundancy-At any point of the word, if i ≠ j and Th i (q x ) and Th j (q y ) are both true, then q x ≠ q y : ϕ no-red = ⋀ x∈In □ ns [¬ ⋁ 1≤i<j≤m (Th i (q x ) ∧ Th j (q x ))].
Merging Runs-If two different threads Th i , Th j (i < j) reach the same state q x on reading the input at the present point, then we just keep one copy and merge thread Th j with Th i . We remember the merge with the proposition (i, j). We define a macro Nxt(Th i (q x )) which is true at a point e if and only if Th i (q y ) is true at e and δ(q y , σ e ) = q x , where σ e ⊆ AP is the maximal set of propositions true at e. Nxt(Th i (q x )) is true at e iff thread Th i reaches q x after reading the input at e. Nxt(Th i (q x ))= (qy,prop)∈{(q,p)|δ(q,p)=qx}
[prop∧Th i (q y )].
Let ψ(i, j, k, q x ) be a formula that says that at the next position, Th i (q x ) and Th k (q x ) are true for k > i, but for all j < i, Th j (q x ) is not. ψ(i, j, k, q x ) is given by Nxt(Th i (q x ))∧ j<i ¬Nxt(Th j (q x ))∧Nxt(Th k (q x )). In this case, we merge threads Th i , Th k , and either restart Th k in the initial state, or deactivate the kth thread at the next position. This is given by the formula
NextMerge(i, k) O[ (i, k) ∧ (Th k (⊥) ∨ Th k (q 1 )) ∧ Th i (q x )]. ϕ is defined as x,i,k∈In∧k>i ns [ψ(i, j, k, q x ) → NextMerge(i, k)]
Propagating runs-If Nxt(Th i (q x )) is true at a point, and if for all j < i, ¬Nxt(Th j (q x )) is true, then at the next point, we have Th i (q x ). Let NextTh(i, j, q x ) denote the formula Nxt(Th i (q x )) ∧ ¬Nxt(Th j (q x )). The formula ϕ pro is given by i,j∈In∧i<j ns [NextTh(i, j, q x )→O[Th i (q x ))∧¬ (i, j)]]. If Th i (⊥) is true at the current point, then at the next point, either Th i (⊥) or Th i (q 1 ). The latter condition corresponds to starting a new run on thread Th i .
ϕ N O-pro = i∈In ns {Th i (⊥)→O(Th i (⊥) ∨ Th i (q 1 ))}
Once we construct the extension ρ , checking whether the regular expression atom holds in some interval I in the timed word ρ, is equivalent to checking that if a thread Th i is at q 1 at the first action point in I, then the corresponding thread is at q f at the last point in I. But the main challenge is that the indices of a particular thread might change because of merges. Thus the above condition reduces to checking that at the first action point u within I, if Th i (q 1 ) holds, then after a series of merges of the form (i 1 , i), (i 2 , i 1 ), . . . (j, i n ), at the last point v in the interval I, Th j (q f ) is true, for some final state q f . Note that the number of possible sequences are bounded(and a function of size of the DFA). Figure 2 illustrates the threads and merging. Let Run be the formula obtained by conjuncting all formulae explained above. This captures the run information of A atom . The formula Run then correctly captures the run information on ρ.
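The bounded-thread argument above can be illustrated independently of the MTL encoding. The sketch below starts a run of a DFA at every position of an (untimed) word and merges runs as soon as they reach the same state, so the number of live threads never exceeds the number of DFA states; the particular DFA (for the language (ab)*) is only a stand-in for A atom.

```python
def simulate_threads(word, delta, q0, final):
    threads = {}                 # current state -> set of start positions it represents
    sizes = []
    for pos, letter in enumerate(word):
        threads.setdefault(q0, set()).add(pos)    # a fresh run starts at every position
        sizes.append(len(threads))                # number of live threads, always <= |Q|
        nxt = {}
        for state, starts in threads.items():
            target = delta[(state, letter)]
            nxt.setdefault(target, set()).update(starts)   # runs reaching the same state merge
        threads = nxt
    accepted = sorted(p for q, starts in threads.items() if q in final for p in starts)
    return accepted, sizes

# DFA for (ab)* over {a, b}; "rej" is a sink state.
delta = {("q0", "a"): "qa", ("q0", "b"): "rej",
         ("qa", "a"): "rej", ("qa", "b"): "q0",
         ("rej", "a"): "rej", ("rej", "b"): "rej"}
accepted, sizes = simulate_threads("abab", delta, "q0", final={"q0"})
print(sizes)      # [1, 2, 2, 3]: never more than the 3 DFA states
print(accepted)   # [0, 2]: the suffixes starting at positions 0 and 2 are in (ab)*
```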
We can easily write a 1-TPTL formula that will check the truth of Reg [l,u) atom at a point v on the simple extension ρ (see Appendix A). However, to write an MTL formula that checks the truth of Reg [l,u) atom at a point v, we need to oversample ρ as shown below.
Lemma 4. Let T = ns [a ↔ Reg I atom] be a temporal definition built from Σ ∪ W . Then we synthesize a formula ψ ∈ MTL over Σ ∪ W ∪ X such that T ≡ ∃ ↓ X.ψ.
Proof. Lets first consider the case when the interval I is bounded of the form [l, u). Starting with the simple extension ρ having the information about the runs of A atom , we explain the construction of the oversampled extension ρ as follows:
We first oversample ρ at all the integer timestamps and mark them with propositions in C = {c 0 , . . . , c max-1 } where max is the maximum constant used in timing constraints of the input formulae. An integer timestamp k is marked c i if and only if k = M (max) + i where M (max) denotes a non-negative integral multiple of max and 0
≤ i ≤ max -1.
This can be done easily by the formula
c 0 ∧ i∈{0,...max-1} ns (c i → ¬♦ (0,1) ( C)∧♦ (0,1] c i⊕1 )
where x⊕y is addition of x, y modulo max.
Next, a new point marked ovs is introduced at all time points τ whenever τ -l or τ -u is marked with Σ. This ensures that for any time point t in ρ , the points t + l, t + u are also available in ρ . After the addition of integer time points, and points marked ovs, we obtain the oversampled extension (Σ ∪ W ∪ Threads ∪ Merge, C ∪ {ovs}) ρ of ρ . To check the truth of Reg [l,u) atom at a point v, we need to assert the following: starting from the time point τ v + l, we have to check the existence of an accepting run R in A atom such that the run starts from the first action point in the interval [τ v + l, τ v + u), is a valid run which goes through some possible sequence of merging of threads, and witnesses a final state at the last action point in [τ v + l, τ v + u). To capture this, we start at the first action point in [τ v + l, τ v + u) with initial state q 1 in some thread Th i , and proceed for some time with Th i active, until we reach a point where Th i is merged with some Th i1 . This is followed by Th i1 remaining active until we reach a point where Th i1 is merged with some other thread Th i2 and so on, until we reach the last such merge where some thread say Th n witnesses a final state at the last action point in [τ v + l, τ v + u). A nesting of until formulae captures this sequence of merges of the threads, starting with Th i in the initial state q 1 . Starting at v, we have the point marked ovs at τ v + l, which helps us to anchor there and start asserting the existence of the run.
The issue is that the nested until can not keep track of the time elapse since τ v + l. However, note that the greatest integer point in [τ v + l, τ v + u) is uniquely marked with c i⊕u whenever c i ≤ τ v ≤ c i⊕1 are the closest integer points to τ v . We make use of this by (i) asserting the run of A atom until we reach c i⊕u from τ v + l. Let the part of the run R that has been witnessed until c i⊕u be R pref . Let R = R pref .R suf be the accepting run. (ii) From τ v + l, we jump to τ v + u, and assert the reverse of R suf till we reach c i⊕u . This ensures that
R = R pref .R suf is a valid run in the interval [τ v + l, τ v + u). Let Mrg(i) = [ j<i (j, i) ∨ c i⊕u ].
We first write a formula that captures R pref . Given a point v, the formula captures a sequence of merges through threads i > i 1 > • • • > i k1 , and m is the number of states of A atom .
Let
ϕ P ref,k1 = m≥i>i1>•••>i k 1 MergeseqPref(k 1 )
where MergeseqPref(k 1 ) is the formula
♦ [l,l] {¬( Σ ∨ c i⊕u ) U[Th i (q 1 ) ∧ (¬Mrg(i) U[ (i 1 , i)∧ (¬Mrg(i 1 ) U[ (i 2 , i 1 ) ∧ . . . (¬Mrg(i k1 ) Uc i⊕u )])])]}
Note that this asserts the existence of a run till c i⊕u going through a sequence of merges starting at τ v + l. Also, Th i k 1 is the guessed last active thread till we reach c i⊕u which will be merged in the continuation of the run from c i⊕u . Now we start at τ v + u and assert that we witness a final state sometime as part of some thread Th i k , and walk backwards such that some thread i t got merged to i k , and so on, we reach a thread Th ic to which thread Th i k 1 merges with. Note that Th i k 1 was active when we reached c i⊕u . This thread Th i k 1 is thus the "linking point" of the forward and reverse runs. See Figure 3.
Let
ϕ Suf,k,k1 = 1≤i k <•••<i k 1 ≤m MergeseqSuf(k, k 1 ) where MergeseqSuf(k, k 1 ) is the for- mula ♦ [u,u] {¬( Σ∨c i⊕u )S[(Th i k (q f ))∧(¬Mrg(i k )S [ (i k , i k-1 )∧(¬Mrg(i k-1 )S[ (i k-1 , i k-2 )∧ • • • (i c , i k1 ) ∧ (¬Mrg(i k1 ) Sc i⊕u )])])]}. For a fixed sequence of merges, the formula ϕ k,k1 = k≥k1≥1 [MergeseqPref(k 1 )∧MergeseqSuf(k, k 1 )
] captures an accepting run using the merge sequence. Disjuncting over all possible sequences for a starting thread Th i , and disjuncting over all possible starting threads gives the required formula capturing an accepting run. Note that this resultant formulae is also relativized with respect to Σ and also conjuncted with Rel(Σ, Run) (where Run is the formula capturing the run information in ρ as seen in section 3.2) to obtain the equisatisfiable MTL formula. Note that S can be eliminated obtaining an equisatisfiable MTL[ U I ] formula modulo simple projections [START_REF] Prabhakar | On the expressiveness of MTL with past operators[END_REF].
If I was an unbounded interval of the form [l, ∞), then in formula ϕ k,k1 , we do not require MergeseqSuf(k, k 1 ); instead, we will go all the way till the end of the word, and assert Th i k (q f ) at the last action point of the word. Thus, for unbounded intervals, we do not need any oversampling at integer points.
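The oversampling used in the proof can be pictured with a little bookkeeping code. The sketch below adds the integer points carrying the c i marks (i being the timestamp modulo the maximal constant) and the ovs points at t + l and t + u for every action timestamp t; it only builds the extended set of time points and their labels, and is not the MTL encoding itself.

```python
import math

def oversample(action_times, l, u, max_const):
    marks = {t: {"act"} for t in action_times}
    horizon = math.floor(max(action_times) + u)
    for k in range(0, horizon + 1):                 # integer timestamps get c_{k mod max}
        marks.setdefault(float(k), set()).add("c%d" % (k % max_const))
    for t in action_times:
        for new in (t + l, t + u):                  # anchors for tau_v + l and tau_v + u
            if new not in marks:
                marks[new] = {"ovs"}
    return sorted(marks.items())

for ts, labels in oversample([0.25, 0.75, 1.5], l=1, u=2, max_const=2):
    print(ts, sorted(labels))
```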
Lemma 5. Let T = ns [a ↔ xUReg I,re y] be a temporal definition built from Σ ∪ W . Then we synthesize a formula ψ ∈ MTL over Σ ∪ W ∪ X such that T ≡ ∃ ↓ X.ψ.
Proof. We discuss the case of bounded intervals here; the unbounded interval case can be seen in Appendix B. The proof technique is very similar to Lemma 4. The differences that arise are as below. 1. Checking re in Reg I re at point v is done at all points j such that τ j -τ v ∈ I. To ensure this, we needed the punctual modalities ♦ [u,u] , ♦ [l,l] . On the other hand, to check UReg I,re from a point v, the check on re is done from the first point after τ v , and ends at some point within [τ v + l, τ v + u). Assuming τ v lies between integer points c i , c i⊕1 , we can witness the forward run in MergeseqPref from the next point after τ v till c i⊕1 , and for the reverse run, go to some point in τ v + I where the final state is witnessed, and walk back till c i⊕1 . The punctual modalities are hence not required and we do not need points marked ovs.
2.
The formulae MergeseqPref(k 1 ), MergeseqSuf(k, k 1 ) of the lemma 4 are replaced as follows:
MergeseqPref(k 1 ) : {¬( Σ ∨ c i⊕1 ) U[Th i (q 1 ) ∧ (¬Mrg(i) U[ (i 1 , i) ∧ (¬Mrg(i 1 ) U [ (i 2 , i 1 ) ∧ . . . (¬Mrg(i k1 ) Uc i⊕1 )])])]}. MergeseqSuf(k, k 1 ) : ♦ I {[(Th i k (q f ))∧(¬Mrg(i k )S [ (i k , i k-1 )∧(¬Mrg(i k-1 )S[ (i k-1 , i k-2 )∧ • • • (i c , i k1 ) ∧ (¬Mrg(i k1 ) Sc i⊕1 )])]
)]} The above takes care of re in xUReg I,re y : we also need to say that x holds continously from the current point to some point in I. This is done by pushing x into re (see the translation of ϕ 1 UReg I,re ϕ 2 to Reg I re in Appendix C). The resultant formulae is relativized with respect to Σ and also conjuncted with Rel(Σ, Run) to obtain the equisatisfiable MTL formula.
The equisatisfiable reduction in Lemma 5 above hence gives an elementary upper bound for satisfiability checking when we work on MITL with UReg, since after elimination of UReg, we obtain an equisatisfiable MITL formula. This is very interesting since it shows an application of the oversampling technique : without oversampling, we can eliminate UReg using 1-TPTL as shown in Appendix A. However, 1-TPTL does not enjoy the benefits of non-punctuality. In particular, Appendix F.2 shows that non punctual 1-TPTL is already non-primitive recursive.
Complexity
In this section, we discuss the complexity of MITL mod [MC], proving Theorem 2.4. To prove this, we obtain a reduction from the reachability problem of Insertion Channel Machines with Emptiness Testing (ICMET). We now show how to encode the reachability problem of ICMET in MITL mod [MC].
We first define error-free channel machines. Given A as above, a configuration of A is a pair (q, U) where q ∈ S and U ∈ (M * ) C gives the contents of each channel. Let Conf denote the configurations of A. The rules in ∆ induce an Op-labelled transition relation on Conf, as follows.
(a) (q, c!a, q') ∈ ∆ yields a transition (q, U) --c!a--> (q', U') where U'(c) = U(c).a, and U'(d) = U(d) for d ≠ c.
(b) (q, c?a, q') ∈ ∆ yields a transition (q, U) --c?a--> (q', U') where U(c) = a.U'(c), and U'(d) = U(d) for d ≠ c.
(c) (q, c = ∅, q') ∈ ∆ yields a transition (q, U) --c=∅--> (q', U') provided U(c) is the empty word.
All other channel contents remain the same. If the only transitions allowed are as above, then we call A an error-free channel machine. Error-free channel machines are Turing-powerful. We now look at channel machines with insertion errors. These augment the transition relation on Conf with the following rule: (d) Insertion errors are introduced by extending the transition relation on configurations with the following clause: if (q, U) --α--> (q', V), U' ⊑ U and V ⊑ V', then (q, U') --α--> (q', V'). Here U ⊑ U' if U can be obtained from U' by deleting any number of letters.
The channel machines as above are called ICMET. A run of an ICMET is a sequence of transitions γ 0 --op 0 --> γ 1 • • • --op n-1 --> γ n . . . that is consistent with the above operational semantics. Consider any ICMET C = (S, M, ∆, C), with set of states S = {s 0 , . . . , s n } and channels C = {c 1 , . . . , c k }. Let M be a finite set of messages used for communication in the channels.
We encode the set of all possible configurations of C with a timed language over the alphabet Σ = M a ∪ M b ∪ ∆ ∪ S ∪ {H}, where M a = {m a | m ∈ M }, M b = {m b | m ∈ M }, and k refers to the number of channels.
2. At time (2k + 2)j + (2k -1), the current state s w of the ICMET at configuration j is encoded by the truth of the proposition s w .
3. The jth configuration begins at the time point (2k + 2)j. At a distance [2i -1, 2i] from this point, 1 ≤ i ≤ k, the contents of the i th channel are encoded as shown in point 7.
The intervals of the form (2i, 2i + 1), 0 ≤ i ≤ k + 1 from the start of any configuration are time intervals within which no action takes place. 4. Lets look at the encoding of the contents of channel i in the jth configuration. Let m hi be the message at the head of the channel i. Each message m i is encoded using consecutive occurrences of symbols m i,a and m i,b . In our encoding of channel i, the first point marked m hi,a in the interval (2k + 2)j + [2i -1, 2i] is the head of the channel i and denotes that m hi is the message at the head of the channel. The last point marked m ti,b in the interval is the tail of the channel, and denotes that message m ti is the message stored at the tail of the channel. 5. Exactly at 2k + 1 time units after the start of the j th configuration, we encode the transition from the state at the j th configuration to the (j + 1) st configuration (starts at (2k + 2)(j + 1)). Note that the transition has the form (s, c!m, s ) or (s, c?m, s ) or (s, c = , s ). 6. We introduce a special symbol H, which acts as separator between the head of the message and the remaining contents, for each channel. 1. All the states must be at distance 2k + 2 from the previous state (first one being at 0) and all the propositions encoding transitions must be at the distance 2k + 1 from the start of the configuration.
ϕ S =s 0 ∧ [S ⇒ {♦ (0,2k+2] (S)∧ (0,2k+2) (¬S)∧♦ (0,2k+1] α∧ [0,2k+1) (¬α)∧♦ (2k+1,2k+2) (¬α)}]
2. All the messages are in the interval [2i -1, 2i] from the start of configuration. No action takes place at (2i -2, 2i -1) from the start of any configuration.
ϕ m = {S⇒ k i=1 [2i-1,2i] (M ∨H)∧ (2i-2,2i-1) (¬action)} 3.
Consecutive source and target states must be in accordance with a transition α. For example, s j appears consecutively after s i reading α i iff α i is of the form (s i , y, s j ) ∈ ∆, with y ∈ {c i !m, c i ?m, c i = ∅}. ϕ ∆ = s,s ∈S {(s∧♦ (0,2k+2] s )⇒(♦ (0,2k+1] ∆ s,s )} where ∆ s,s are possible α i between s, s .
XX:14 A Regular Metric Temporal Logic 4. We introduce a special symbol H along with other channel contents which acts as a separator between the head of the channel and rest of the contents. Thus H has following properties There is one and only one time-stamp in the interval (2i -1, 2i) from the start of the configuration where H is true. The following formula says that there is an occurrence of a H:
ϕ H1 = [(S∧♦ (2i-1,2i) M )⇒( k i=1 ♦ (2i-1,2i) (H))]
The following formula says that there can be only one H: ϕ H2 = (H⇒¬♦ (0,1) H) Every message m x is encoded by truth of proposition m x,a immediately followed by m x,b . Thus for any message m x , the configuration encoding the channel contents has a sub-string of the form (m x,a m x,b ) * where m x is some message in M .
ϕ m = [m x,a ⇒O (0,1] m x,b ]∧ [m x,b ⇒O (0,1) M a ∨O( ∆ ∨ H)]∧(¬M b UM a )
If the channel is not empty (there is at least one message m a m b in the interval (2i-1, 2i) corresponding to channel i contents) then there is one and only one m b before H. The following formula says that there can be at most one m b before H.
ϕ H3 = [¬{M b ∧ ♦ (0,1) (M a ∧ ♦ (0,1) H)}]
The following formula says that there is one M b before H in the channel, if the channel is non-empty.
ϕ H4 = [S⇒{ k j=1 (♦ [2j-1,2j] (M b )⇒ ♦ [2j-1,2j] (M b ∧ ♦ (0,1) H))}] Let ϕ H =ϕ H1 ∧ ϕ H2 ∧ ϕ H3 ∧ ϕ H4 .
Encoding transitions:
We first define a macro for copying the contents of the i th channel to the next configuration with insertion errors. If there were some m x,a , m
[2g-1,2g] [ mx∈M (m x,a ∧iseven (0,2k+2) (m x,b ))⇒O(iseven (0,2k+2) (m x,b ))] ∧ [2i-1,2i] [ mx∈M (m x,a ∧¬iseven (0,2k+2) (m x,b ))⇒O(¬iseven (0,2k+2) (m x,b ))]
If the transition is of the form c i = . The following formulae checks that there are no events in the interval (2i -1, 2i) corresponding to channel i, while all the other channel contents are copied.
ϕ ci= =S ∧ (2i-1,2i) (¬action)∧ k g=1 copy g
If the transition is of the form c i !m x where m ∈ M . An extra message is appended to the tail of channel i, and all the m a m b 's are copied to the next configuration. M b ∧ (0,1) (¬M )) denotes the last time point of channel i; if this occurs at time t, we know that this is copied at a timestamp strictly less than 2k + 2 + t.Thus we assert that truth of ♦ (2k+2,2k+3) m x,b at t.
ϕ ci!m =S∧ k g=1 copy g ∧♦ [2i-1,2i) {(M ∧ (0,1) (¬M ))⇒(♦ (2k+2,2k+3) (m x,b ))}
If the transition is of the form c i ?m where m ∈ M . The contents of all channels other than i are copied to the intervals encoding corresponding channel contents in the next configuration. We also check the existence of a first message in channel i; such a message has a H at distance (0, 1) from it.
ϕ ci?mx =S∧ k j =i,g=1 copy g ∧♦ (2i-1,2i) {m x,b ∧♦ (0,1) (H)}∧ [2i-1,2i] [ mx∈M (m x,a ∧ iseven (0,2k+2) (m x,b )∧¬♦ (0,1) H)⇒O(iseven (0,2k+2) (m x,b ))]∧ [2i-1,2i] [ mx∈M (m x,a ∧¬iseven (0,2k+2) (m x,b )∧¬♦ (0,1) H)⇒O(¬iseven (0,2k+2) (m x,b ))] 6.
Channel contents must change in accordance to the relevant transition. Let L be a set of labels (names) for the transitions. Let l ∈ L and α l be a transition labeled l.
ϕ C = [S ⇒ l∈L (♦ (0,2k+1] ( α l ⇒ φ l ))]
where φ l are the formulae as seen in 5. 7. Let t be a state of the ICMET whose reachability we are interested in. Check s t is reachable from s 0 . φ reach = ♦(s t ) Thus the formula encoding ICMET is:
ϕ 3 = ϕ S ∧ ϕ ∆ ∧ ϕ m ∧ ϕ H ∧ ϕ C ∧ ϕ reach
Main Equivalences
In this section, we discuss the two equivalences : the equivalence between po-1-clock ATA and 1-TPTL, and that between po-1-clock ATA and SfrMTL. SfrMTL is the fragment of RegMTL where the regular expressions are all star-free. This gives the equivalence between 1-TPTL and SfrMTL.
Automaton-Logic Characterization
In this section, we show that partially ordered 1-clock alternating timed automata (po-1-clock ATA) capture exactly the same class of languages as 1-TPTL. We also show that 1-TPTL is equivalent to the class of RegMTL formulae where the regular expressions re involved are star-free. We denote this subclass of RegMTL by SfrMTL. This also shows, for the first time in pointwise timed logics, an equivalence between freeze point logics and logics with interval constraints. A 1-clock ATA [START_REF] Ouaknine | On the decidability of metric temporal logic[END_REF] is a tuple A = (Σ, S, s 0 , F, δ), where Σ is a finite alphabet, S is a finite set of locations, s 0 ∈ S is the initial location and F ⊆ S is the set of final locations. Let x denote the clock variable in the 1-clock ATA, and x ∼ c denote a clock constraint where c ∈ N and ∼ ∈ {<, ≤, >, ≥}. Let X denote a finite set of clock constraints of the form x ∼ c. The transition function is defined as δ : S × Σ → Φ(S ∪ Σ ∪ X), where Φ(S ∪ Σ ∪ X) is a set of formulae defined by the grammar below. Let s ∈ S. The grammar is defined as
ϕ ::= ⊤ | ⊥ | ϕ 1 ∧ ϕ 2 | ϕ 1 ∨ ϕ 2 | s | x ∼ c | x.ϕ
x.ϕ is a binding construct corresponding to resetting the clock x to 0.
The notation Φ(S ∪ Σ ∪ X) thus allows boolean combinations, as defined above, of locations, symbols of Σ, clock constraints and ⊤, ⊥, with or without the binding construct (x.). A configuration of a 1-clock ATA is a set consisting of locations along with their clock valuations. Given a configuration C, we denote by δ(C, a) the configuration D obtained by applying δ(s, a) to each location s such that (s, ν) ∈ C. A run of the 1-clock ATA starts from the initial configuration {(s 0 , 0)}, and proceeds with alternating time elapse transitions and discrete transitions obtained on reading a symbol from Σ. A configuration is accepting iff it is either empty, or is of the form {(s, ν) | s ∈ F }. The language accepted by a 1-clock ATA A, denoted L(A), is the set of all timed words ρ such that starting from {(s 0 , 0)}, reading ρ leads to an accepting configuration. A po-1-clock ATA is one in which there is a partial order, denoted ≺, on the locations, such that whenever s j appears in Φ(s i ), either s j ≺ s i or s j = s i . Let ↓ s i = {s j | s j ≺ s i }.
x.s does not appear in δ(s, a) for all s ∈ S, a ∈ Σ.
Example. Consider A = ({a, b}, {s_0, s_a, s_⊤}, s_0, {s_0, s_⊤}, δ) with transitions δ(s_0, b) = s_0, δ(s_0, a) = (s_0 ∧ x.s_a) ∨ s_⊤, δ(s_a, a) = δ(s_a, b) = (s_a ∧ x < 1) ∨ (x > 1), δ(s_⊤, b) = s_⊤, and δ(s_⊤, a) = ⊥.
The automaton accepts all timed words in which every non-last a has no symbol at distance exactly 1 from it. Note that this is a po-1-clock ATA, with s_a ≺ s_0 and s_⊤ ≺ s_0.
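As a sanity check on this description, the following small Python sketch (mine, not part of the paper) tests membership in the language described informally above by inspecting the timed word directly, rather than by simulating the alternating automaton; timestamps are kept as exact fractions so that "distance exactly 1" is well defined.

from fractions import Fraction  # exact arithmetic, so "distance exactly 1" is meaningful

def accepts(word):
    """word: list of (symbol, timestamp) pairs with strictly increasing timestamps."""
    times = [t for (_, t) in word]
    for i, (sym, t) in enumerate(word):
        if sym == 'a' and i < len(word) - 1:      # a non-last a ...
            if (t + 1) in times:                  # ... must have no event at distance exactly 1
                return False
    return True

w1 = [('a', Fraction(0)), ('b', Fraction(1, 2)), ('b', Fraction(3, 2)), ('a', Fraction(2))]
w2 = [('a', Fraction(0)), ('b', Fraction(1)), ('a', Fraction(2))]
print(accepts(w1), accepts(w2))   # True False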
Lemma 6. po-1-clock ATA and 1-TPTL are equivalent in expressive power.
po-1-clock ATA to 1-TPTL
In this section, we explain the algorithm which converts a po-1-clock ATA A into a 1-TPTL formula ϕ such that L(A) = L(ϕ). The translation from 1-TPTL to po-1-clock ATA is easy, as in the translation from MTL to po-1-clock ATA. We illustrate the key steps of the reverse direction, and apply it on the example above, while the step by step details can be seen in Appendix D. There are 4 main steps.
1.
In step 1, we write each transition δ(s, a) into a disjunction
C 1 ∨ C 2 or C 1 or C 2 , where C 1 = s ∧ ϕ 1 , with ϕ 1 ∈ Φ(↓ s ∪ {a} ∪ X), and C 2 = ϕ 2 , where ϕ 2 ∈ Φ(↓ s ∪ {a} ∪ X). 2.
In step 2, we combine all transitions possible from a location s by disjuncting them, and denote the result ∆(s). In the example above, we obtain ∆(s_0) = (s_0 ∧ [(a ∧ x.s_a) ∨ b]) ∨ (a ∧ s_⊤).
3. In step 3, we take the first step towards obtaining a 1-TPTL formula corresponding to each location, by replacing every location s′ appearing in ∆(s) with Os′. The result is denoted N(s). Continuing with the example, we obtain
N(s_0) = (Os_0 ∧ [(a ∧ x.Os_a) ∨ b]) ∨ (a ∧ Os_⊤),  N(s_a) = (Os_a ∧ x < 1) ∨ (x > 1),  N(s_⊤) = Os_⊤ ∧ b.
4. In the last step, we solve each N(s), starting with the lowest location in the partial order.
We make use of the fact that for the lowest locations s n in the partial order, we have
N (s n ) = (Os n ∧ ϕ 1 ) ∨ ϕ 2 , where ϕ 1 , ϕ 2 ∈ Φ(Σ, X).
Hence, a solution to this, denoted F (s n ) is ϕ 1 Wϕ 2 if s n is an accepting location, and as ϕ 1 U ns ϕ 2 if s n is non-accepting. This is recursively continued as we go up the partial order, where each N (s i ) has the form (Os i ∧ ϕ 1 ) ∨ ϕ 2 such that F (s ) is computed for all locations s appearing in ϕ 1 , ϕ 2 . Solving for s i is then similar to that of s n . F (s 0 ) then gives the TPTL formula that we are looking for.
In our example,
F(s_⊤) = □^ns b and F(s_a) = (x < 1) U^ns (x > 1). Finally, F(s_0) = [(a ∧ x.O F(s_a)) ∨ b] W (a ∧ O F(s_⊤)), which unfolds to ((a ∧ x.O[(x < 1) U^ns (x > 1)]) ∨ b) W (a ∧ O □^ns b).
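The bottom-up solving of step 4 can be mimicked mechanically. The following Python sketch is only an illustration of the bookkeeping (the string manipulation and the names s_0, s_a, s_top are mine): each normal form N(s) = (O s ∧ ϕ1) ∨ ϕ2 is stored as a pair of formula strings plus an accepting flag, and locations are solved along the partial order, substituting already-computed solutions for the placeholders {s'}. Note that (b) W (false) is just another way of writing □^ns b.

def solve(normal_forms, order):
    """normal_forms: s -> (phi1, phi2, accepting); order lists lower locations first."""
    F = {}
    for s in order:
        phi1, phi2, accepting = normal_forms[s]
        for s2, f2 in F.items():                      # plug in solutions of lower locations
            phi1 = phi1.replace('{%s}' % s2, '(%s)' % f2)
            phi2 = phi2.replace('{%s}' % s2, '(%s)' % f2)
        op = 'W' if accepting else 'U'                # W for accepting, non-strict U otherwise
        F[s] = '(%s) %s (%s)' % (phi1, op, phi2)
    return F

nf = {
    's_top': ('b', 'false', True),                    # N(s_top) = O s_top ∧ b;  b W false = □ b
    's_a':   ('x<1', 'x>1', False),                   # N(s_a) = (O s_a ∧ x<1) ∨ (x>1)
    's_0':   ('(a ∧ x.O{s_a}) ∨ b', 'a ∧ O{s_top}', True),
}
print(solve(nf, ['s_top', 's_a', 's_0'])['s_0'])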
1-TPTL and SfrMTL
In this section, we prove the following result.
Theorem 7. 1-TPTL and SfrMTL are equivalent.
The proof uses Lemmas 8 and 9. We first show that starting from a SfrMTL formula ϕ, we can construct an equivalent 1-TPTL formula ψ.
Assume that there exists some k such that E_j = C_k. In this case, the LTL formula that is satisfied in region R_y is P_k (U|W) Q_j; thus the y-th element of the sequence is P_k (U|W) Q_j.
Assume that there is no C_k such that E_j = C_k. Then the LTL formula that is satisfied in R_y is Q_j; thus the y-th element of the sequence is Q_j.
For every sequence that has R_y as one of the above, we have:
* The assertion in all regions < R_i is ⊤, as there is no restriction on the region before the present point, since we only consider future temporal modalities. Similarly, the formulae in regions R_z > R_y are also set to ⊤, as there are no restrictions on the behaviour once we come out of the state s.
* For all C_g = x ∈ R_w, where R_y > R_w > R_i, the region R_w will satisfy □^ns P_g ∨ □^ns ⊥. Thus the assertion in R_w in every sequence is □^ns P_g or □^ns ⊥, depending on whether or not we have points lying in R_w. Recall that □^ns ⊥ is the LTL formula whose only model is the empty word. If for some C_g such that E_j = C_g we have C_g = x ∈ R_i, then in region R_i we assert □^ns P_g; thus the i-th entry is □^ns P_g.
* Note that all the remaining regions (if any) are between i and y. There is no behaviour allowed at these points; there □^ns ⊥ is asserted, as only the empty string is accepted.
Boolean combinations of Beh: Given two locations s_1, s_2, with F(s_1) = ϕ_1 and F(s_2) = ϕ_2, we construct Beh(F(s_1), R) and Beh(F(s_2), R) as shown above for all R ∈ R. Given these Beh's, we now define boolean operations ∧ and ∨ on these sets, such that
Beh(ϕ 1 , R) ∧ Beh(ϕ 2 , R) = Beh(ϕ 1 ∧ ϕ 2 , R). 1. For every R i ∈ R, we first take the cross product Beh(ϕ 1 , R i ) × Beh(ϕ 2 , R i ), obtaining
a set consisting of ordered pairs of BDs. All the possible behaviours of ϕ 1 ∧ ϕ 2 starting in region R i is equivalent to the conjunction of all possible behaviours of ϕ 1 conjuncted with all the possible behaviours of ϕ 2 .
For every pair (BD_1, BD_2) ∈ Beh(ϕ_1, R_i) × Beh(ϕ_2, R_i), construct a behaviour BD ∈ Beh(ϕ_1 ∧ ϕ_2, R_i) such that the i-th entry of BD is equal to the conjunction of the i-th entry of BD_1 with that of BD_2. This ensures that we take all the possible behaviours of F(s_1) at region R_i and conjunct them with all the possible behaviours of F(s_2) in the same region. In a similar way we can also compute Beh(ϕ_1 ∨ ϕ_2, R).
Elimination of nested Beh: Given any F(s) of the form
ϕ = [(P_1 ∧ C_1) ∨ (P_2 ∧ C_2) ∨ … ∨ (P_n ∧ C_n)] (U|W) [(Q_1 ∧ E_1) ∨ (Q_2 ∧ E_2) ∨ … ∨ (Q_m ∧ E_m)],
with P_i, Q_j ∈ Φ(Σ ∪ OS) and C_i, E_j clock constraints of the form x ∈ R, assume that we have calculated Beh(F(s_i), R) for all s_i ∈ ↓s. We construct Beh(F(s), R) as shown above. After the construction, there might be some propositions of the form O(s_j) as a conjunct in some of the BDs in Beh(F(s), R). This occurrence of s_j is eliminated by stitching Beh(F(s_j)) with BD as follows:
1. Given a sequence BD=[X 0 , . . . , X g-1 , Q j ∧ O(T j ), X g+1 , . . . , X 2K ],
we show how to eliminate T j in the g th entry. 2. There are 2K -g + 1 possibilities, depending on which region ≥ g the next point lies with respect to Q j ∧ O(T j ). 3. Suppose the next point can be taken in R g itself. This means that from the next point, all the possible behaviours described by Beh(F (T j ), R g ) would apply along with the behaviour in this sequence BD. Thus, we first take a cross product BD × Beh(F (T j ), R g ) which will give us pairs of sequences of the form [X 0 , . . . ,
X g-1 , Q j ∧ O(T j ), X g+1 , . . . , X 2K ], [Y 0 , . . . , Y 2K ].
We define a binary operation combine which combines two sequences. Let [X 0 , . . . , X 2K ] denote the combined sequence. To combine
BD 1 =[X 0 , . . . , X g-1 , α 1 U ns α 2 , X g+1 , . . . , X 2K ] BD 2 =[X 0 , . . . , X g-1 , ns (α 1 ), X g+1 , . . . , X 2K ].
For BD_1 and BD_2, we apply the operations defined previously. Finally, we show, given Beh for F(s), how to construct an SfrMTL formula Expr(s) equivalent to x.O(s); that is, ρ, i |= Expr(s) if and only if ρ, i, ν |= x.O(F(s)), for any ν. We give a constructive proof as follows:
Assume ρ, i, ν |= x.O(F (s)). Note that according to the syntax of TPTL, every constraint x ∈ I checks the time elapse between the last point where x was frozen. Thus satisfaction of formulae of the form x.φ at a point is independent of the clock valuation.
ρ, i, ν |= x.O(F (s)) iff ρ, i, ν[x ← τ i ] |= OF (s). We have precomputed Beh(F (s), R) for all regions R. Thus, ρ, i, ν |= x.O(F (s)) iff for all w ∈ 0, . . . , 2K, ρ, i + 1, τ i |= (x ∈ R w ).
This implies that there exists BD ∈ Beh(F (s), R w ) such that for all j ∈ {0, . . . , 2K}, the jth entry BD[j] of BD is the LTL formulae satisfied within region R j . Note that,
ρ, i + 1, τ_i |= (x ∈ R_w) is true iff ρ, i |= ⋀_{g∈{1,…,w−1}} [Reg_{R_g} ∅] ∧ Reg_{R_w} Σ⁺, with
* ψ_1 = ⋀_{g∈{1,…,w−1}} Reg_{R_g} ∅ ∧ Reg_{R_w} Σ⁺ and
* ψ_2 = ⋁_{BD∈Beh(F(s),R_w)} ⋀_{j∈{1,…,2K}} Reg_{R_j}(re(BD[j])),
where E is the set of regions such that, for all e ∈ E, Beh(F (s), R e ) is an empty set. The SfrMTL formula Expr(s 0 ) is such that ρ, 1 |= F (s 0 ) iff ρ, 0 |= Expr(s 0 ).
Discussion
Generalization of other Extensions:
In this paper, we study extensions of MTL with ability to specify periodic properties by adding constructs which can specify regular expressions over subformulae within a time interval. This construct also generalizes most of the extensions studied in the literature (for example, Pnueli modalities, threshold counting, modulo counting and so on) still retaining decidability. To the best of our knowledge this is the most expressive decidable extension of MTL in the literature in point-wise semantics.
Automaton Logic Connection:
We give an interval logic characterization for po-1-clock ATA. The only other such equivalence we know of is that of [START_REF] Paritosh | The unary fragments of metric interval temporal logic: Bounded versus lower bound constraints[END_REF], between the logic MITL with only unary future and past modalities restricted to unbounded intervals, and partially ordered 2-way deterministic timed automata. Unlike interval logics, automata and logics with freeze quantifiers do not enjoy the perks of non-punctuality; see Appendix F.2.
Interval Constraint vs. Freeze point quantification:
This was always an interesting question in the literature. Ours is the first such equivalence in pointwise semantics. In continuous semantics, these logics are equivalent if we extend them with a counting modality [START_REF] Hunter | When is metric temporal logic expressively complete?[END_REF].
Exploiting Non-punctuality: We also give two natural non-punctual fragments
RegMITL[UReg] and MITL mod [UM] of our logic having elementary complexity for satisfiability over both finite and infinite words proving the benefits of characterization using interval logics. We claim that these logics are the most expressive logics in pointwise semantics which have elementary satisfiability checking for both finite and infinite timed words. Finally, we show that if we allow mod counting within the next unit interval, we fail to achieve benefits of relaxing punctuality.
ϕ_chk1 = cond1 ∧ cond2 ∧ x.♦(x < l ∧ O[(x ≥ l) ∧ GoodRun]),  ϕ_chk2 = cond1 ∧ cond2 ∧ x.(O[(x ≥ l) ∧ GoodRun]),  ϕ_chk3 = cond1 ∧ cond2 ∧
x.GoodRun where GoodRun is the formula which describes the run starting in q 1 , going through a sequence of merges, and witnesses q f at a point when x ∈ [l, u), and is the maximal point in [l, u). GoodRun is given by Th
i (q 1 ) ∧ [{¬Mrg(i)} U[ (i n , i) ∧ {¬Mrg(i n )} U[ (i n-1 , i n ) . . . {¬Mrg(i 2 )} U[ (i 1 , i 2 ) ∧ q∈Q F Nxt(Th i1 (q)) ∧ x ∈ [l, u) ∧ O(x > u)] . . .]]]] where Mrg(i) = j<i (j, i).
The idea is to freeze the clock at the current point e, and start checking a good run from the first point in the interval [l, u). ϕ chk1 is the case when the first point in [l, u) is not the next point from the current point e, while ϕ chk2 handles the case when the next point is in [l, u). In both cases, l > 0. Let Th i be the thread having the initial state q 1 in the start of the interval I. Let i 1 be the index of the thread to which Th i eventually merged (at the last point in the interval [l, u) from e). The next expected state of thread Th i1 is one of the final states if and only if the sub-string within the interval [l, u) from the point e satisfies the regular expression atom. Note that when the frozen clock is ≥ l, we start the run with Th i (q 1 ), go through the merges, and check that x ∈ I when we encounter a thread Th i1 (q), with q being a final state. To ensure that we have covered checking all points in τ e + I, we ensure that at the next point after Th i1 (q), x > u. The decidability of 1-TPTL gives the decidability of RegMTL.
B Proof of Lemma 5 : Unbounded Intervals
The major challenge for the unbounded case is that the point from where we start asserting Th i (q f ) (call this point w) and the point from where we start the counting, (this point is v) may be arbitrarily far. This may result in more than one point marked c i⊕1 . In the bounded interval case, the unique point marked c i⊕1 was used as the "linking point" to stitch the sequences of the run after v till c i⊕1 , and from some point in τ v + I witnessing a final state back to c i⊕1 . The possible non-uniqueness of c i⊕1 thus poses a problem in reusing what we did in the bounded interval case. Thus we consider two cases: Case 1: In this case, we assume that our point w lies within [τ v + l, τ v + l ). Note that τ v + l is the nearest point from v marked with c i⊕l⊕1 . This can be checked by asserting ¬c i⊕l⊕1 all the way till c i⊕1 while walking backward from w, where Th i k (q f ) is witnessed.
The formula MergeseqPref(k 1 ) does not change. MergeseqSuf(k, k 1 ) is as follows:
♦ [l,l+1) {[(Th i k (q f )) ∧ (¬Mrg (i k ) S[ (i k , i k-1 ) ∧ (¬Mrg (i k-1 ) S[ (i k-1 , i k-2 ) ∧ • • • (i c , i k1 ) ∧ (¬Mrg (i k1 ) Sc i⊕1 )])])]}
where
Mrg (i) = [ j<i (j, i) ∨ c i⊕l⊕1 ]
Case 2: In this case, we assume the complement. That is the point w occurs after τ v + l .
In this case, we assert the prefix till c i⊕l⊕1 and then continue asserting the suffix from this point in the forward fashion unlike other cases. The changed MergeseqPref and MergeseqSuf are as follows:
MergeseqPref(k 1 ):
{¬( Σ ∨ c i⊕l⊕1 ) U[Th i (q 1 ) ∧ (¬Mrg(i) U[ (i 1 , i)∧ (¬Mrg(i 1 ) U[ (i 2 , i 1 ) ∧ . . . (¬Mrg(i k1 ) Uc i⊕l⊕1 )])])]} MergeseqSuf(k, k 1 ): ♦ [l+1,l+2) {[c i⊕l⊕1 ∧ (¬Mrg(i k1 ) U[ (i c , i k1 ) ∧ (¬Mrg(i c ) U[ (i c , i k1 ) ∧ • • • (i k-1 , i k-2 ) ∧ (¬Mrg(i k-1 ) U (Th i k (q f ))])])]} where Mrg(i) = [ j<i (j, i)]
B.1 Complexity of RegMTL Fragments
To prove the complexity results we need the following lemma.
Lemma 10. Given any MITL formulae ϕ with O(2 n ) modalities and maximum constant used in timing intervals K, the satisfiability checking for ϕ is EXPSPACE in n, K.
Proof. Given any MITL formula with expn = O(2 n ) number of modalities, we give a satisfiability preserving reduction from ϕ to ψ ∈ MITL[ U 0,∞ , S] as follows:
(a) Break each U_I formula, where I is a bounded interval, into a disjunction of U_{I_i} modalities, where each I_i is a unit-length interval and the union of all I_i equals I. That is,
φ_1 U_⟨l,u⟩ φ_2 ≡ φ_1 U_⟨l,l+1) φ_2 ∨ φ_1 U_[l+1,l+2) φ_2 ∨ … ∨ φ_1 U_[u−1,u⟩ φ_2.
This at most increases the number of modalities from expn to expn × K.
(b) Next, we flatten all the modalities containing bounded intervals. This replaces subformulae of the form φ_1 U_[l,l+1) φ_2 with new witness variables and conjoins temporal definitions of the form □^ns[a ↔ φ_1 U_[l,l+1) φ_2] to the formula. This results in a linear blow-up in the number of temporal modalities (2 × expn × K).
(c) Now consider any temporal definition □^ns[a ↔ φ_1 U_[l,l+1) φ_2]. We show a reduction to an equisatisfiable MITL formula containing only intervals of the form ⟨0, u⟩ or ⟨l, ∞).
First we oversample the words at integer points with C = {c_0, c_1, c_2, …, c_{K−1}}. An integer timestamp k is marked c_i if and only if k = M(K) + i, where M(K) denotes a non-negative integer multiple of K and 0 ≤ i ≤ K − 1. This can be done easily by the formula
c_0 ∧ ⋀_{i∈{0,…,K−1}} □^ns(c_i → ¬♦_(0,1)(⋁C) ∧ ♦_(0,1] c_{i⊕1}),
where x ⊕ y denotes (x + y)%K (recall that any non-negative integer z can be written as M(K) + (z%K), with 0 ≤ z%K ≤ K − 1).
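The following small Python sketch (an illustration only) shows the effect of this oversampling step: every integer timestamp k receives the mark c_{k%K}, so the marks cycle through c_0, …, c_{K−1}.

def integer_marks(K, horizon):
    """Oversampling points (integer timestamp, mark) up to `horizon`."""
    return [(k, 'c_%d' % (k % K)) for k in range(horizon + 1)]

print(integer_marks(3, 7))
# [(0, 'c_0'), (1, 'c_1'), (2, 'c_2'), (3, 'c_0'), (4, 'c_1'), (5, 'c_2'), (6, 'c_0'), (7, 'c_1')]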
Consider any point i within a unit integer interval marked c_{i−1}, c_i. Then φ_1 U_[l,l+1) φ_2 is true at that point i if and only if φ_1 is true on all the action points till a point j in the future, such that either j occurs within [l, ∞) from i and there is no c_{i⊕l} between i and j (τ_j ∈ [τ_i + l, ⌈τ_i⌉ + l)):
φ_{C1,i} = (φ_1 ∧ ¬c_{i⊕l}) U_[l,∞) φ_2
or j occurs within [0, l + 1) from i, and j is within a unit interval marked c_{i⊕l} and c_{i⊕l⊕1} (τ_j ∈ [⌈τ_i⌉ + l, τ_i + l + 1)):
φ_{C2,i} = φ_1 U_[0,l+1) (φ_2 ∧ (¬(⋁C)) S c_{i⊕l}).
The temporal definition □^ns[a ↔ φ_1 U_[l,l+1) φ_2] is then captured by
⋀_{i=1}^{K−1} □^ns[{a ∧ (¬(⋁C) U c_i)} ↔ φ_{C1,i} ∨ φ_{C2,i}].
To eliminate each bounded interval modality as seen (a)-(c) above, we need O(K) modalities. Thus the total number of modalities is O(2 n ) × O(K) × O(K) and the total number of propositions 2 Σ ∪ {c 0 , . . . , c K-1 }. Assuming binary encoding for K, we get a MITL[ U 0,∞ , S] formulae of exponential size. As the satisfiability checking for MITL[ U 0,∞ , S] is in PSPACE [START_REF] Alur | The benefits of relaxing punctuality[END_REF], we get EXPSPACE upper bound. EXPSPACE hardness of MITL can be found in [START_REF] Alur | The benefits of relaxing punctuality[END_REF].
B.2 Proof of Theorem 2.2
Starting from an MITL mod [UM] formula, we first show how to obtain an equisatisfiable MITL formula modulo simple projections.
Elimination of UM
In this section, we show how to eliminate UM from MTL_mod[UM] over strictly monotonic timed words. This can be extended to weakly monotonic timed words. Given any MTL_mod[UM] formula ϕ over Σ, we first "flatten" the UM modalities of ϕ and obtain a flattened formula. Example. The formula ϕ = [a U(e ∧ (f UM_{(2,3),#b=2%5} y))] can be flattened by replacing the UM with a fresh witness proposition w, obtaining
ϕ_flat = [a U(e ∧ w)] ∧ □^ns{w ↔ (f UM_{(2,3),#b=2%5} y)}.
Starting from χ ∈ MTL mod [UM], in the following, we now show how to obtain equisatisfiable MTL formulae corresponding to each temporal projection containing a UM modality. 1. Flattening : Flatten χ obtaining χ f lat over Σ ∪ W , where W is the set of witness propositions used, Σ ∩ W = ∅.
2. Eliminate Counting: Consider, one by one, each temporal definition T_i of χ_flat. Let Σ_i = Σ ∪ W ∪ X_i, where X_i is a set of fresh propositions with X_i ∩ X_j = ∅ for i ≠ j.
For each temporal projection T_i containing a UM modality of the form x UM_{I,#b=k%n} y, Lemma 11 gives ζ_i ∈ MTL over Σ_i such that T_i ≡ ∃X_i.ζ_i.
3. Putting it all together: The formula ζ = ⋀_{i=1}^{k} ζ_i ∈ MTL is such that ⋀_{i=1}^{k} T_i ≡ ∃X. ⋀_{i=1}^{k} ζ_i, where X = ⋃_{i=1}^{k} X_i.
For elimination of UM, marking witnesses correctly is ensured using an extra set of symbols B = {b_0, …, b_n}, which act as counters incremented in a circular fashion. Each time a witness of the formula being counted is encountered, the counter is incremented; otherwise it stays the same. The evaluation of the mod-counting formulae can then be reduced to checking the difference between the indices of the symbols from B at the first and the last point of the time region where the counting constraint is checked.
Let us consider trueUReg_{I,re′} φ_2 when I = [l, l + 1). Let Γ be the set of subformulae and their negations occurring in re′. When evaluating trueUReg_{[l,l+1),re′} φ_2 at a point i, we know that φ_2 holds good at some point j such that τ_j − τ_i ∈ [l, l + 1), and that Seg(re′, i, j) ∈ L(re′). We know by the above lemma that for any word σ ∈ L(re′) and any decomposition σ = σ_1.σ_2, there exists an i ∈ {1, 2, …, n} such that σ_1 ∈ L(R^i_1) and σ_2 ∈ L(R^i_2). Thus we decompose at j with every possible
R^k_1.R^k_2 pair such that τ_j ∈ τ_i + [l, l + 1), TSeg(Γ, (0, l), i) ∈ L(R^k_1), and TSeg(Γ, [l, l + 1), i) ∈ L(R^k_2 . S . Σ*), where φ_2 ∈ S, S ∈ Cl(Γ). Note that φ_2 holds good at the point j with τ_j ∈ [τ_i + l, τ_i + l + 1), and in [l, τ_j) the expression R^k_2 evaluates to true; we simply assert Σ* on the remaining part (τ_j, l + 1) of the interval. Thus
trueUReg_{[l,l+1),re′} φ_2 ≡ ⋁_{i∈{1,2,…,n}} Reg_(0,l) R^i_1 ∧ Reg_[l,l+1) R^i_2 . φ_2 . Σ*.
2. We first show that the UM modality can be captured by MC. Consider any formula φ_1 UM_{I,#φ_3=k%n} φ_2. At any point i this formula is true if and only if there exists a point j in the future such that τ_j − τ_i ∈ I, the number of points between i and j where φ_3 is true is k%n, and φ_1 is true at all points between i and j. To count between i and j, we can first count the behaviour φ_3 from i to the last point of the word, then count from j to the last point of the word, and check that the difference between these counts is k%n.
Let cnt_φ(x, φ_3) = {φ ∧ MC^{x%n}_{(0,∞)}(φ_3)}. Using this macro, φ_1 UM_{I,#φ_3=k%n} φ_2 is equivalent to ⋁_{k_1=0}^{n−1}[ψ_1 ∨ ψ_2], where
ψ_1 = {cnt_true(k_1, φ_3) ∧ (φ_1 U_I cnt_{φ_2∧¬φ_3}(k_2, φ_3))},
ψ_2 = {cnt_true(k_1, φ_3) ∧ (φ_1 U_I cnt_{φ_2∧φ_3}(k_2 − 1, φ_3))},  and k_1 − k_2 = k.
The only difference between ψ_1 and ψ_2 is that in one, φ_3 holds at position j, while in the other it does not; the k_2 − 1 avoids double counting in the case where φ_3 holds at j.
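As a sanity check of the intended semantics (not the paper's construction), the following Python sketch evaluates φ1 UM_{I,#φ3=k%n} φ2 directly on a finite timed word, with φ1, φ2, φ3 treated as atomic propositions. Counting the φ3-positions strictly after i and up to j is an assumption of mine; that endpoint convention is exactly the detail the ψ1/ψ2 case split above takes care of.

def holds_UM(word, i, phi1, phi2, phi3, I, k, n):
    """word: list of (set_of_propositions, timestamp); I = (lo, hi) treated as an open interval."""
    lo, hi = I
    for j in range(i + 1, len(word)):
        props_j, tau_j = word[j]
        if not (lo < tau_j - word[i][1] < hi):
            continue                                     # j not within the interval I from i
        if phi2 not in props_j:
            continue                                     # phi2 must hold at j
        if any(phi1 not in word[m][0] for m in range(i + 1, j)):
            continue                                     # phi1 must hold strictly between i and j
        count = sum(1 for m in range(i + 1, j + 1) if phi3 in word[m][0])
        if count % n == k:
            return True
    return False

w = [({'a'}, 0.0), ({'a', 'b'}, 0.5), ({'a', 'b'}, 1.2), ({'a', 'b', 'y'}, 2.5)]
print(holds_UM(w, 0, 'a', 'y', 'b', (2, 3), 1, 2))   # True: three b's counted, 3 % 2 == 1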
D
po-1-clock ATA to 1-TPTL
In this section, we explain the algorithm which converts a po-1-clock ATA A into a 1-TPTL formula ϕ such that L(A) = L(ϕ). 1.
Step 1. Rewrite the transitions of the automaton. Each δ(s, a) can be written in an equivalent form C_1 ∨ C_2, or C_1, or C_2, where C_1 has the form s ∧ ϕ_1 with ϕ_1 ∈ Φ(↓s ∪ {a} ∪ X), and C_2 has the form ϕ_2 with ϕ_2 ∈ Φ(↓s ∪ {a} ∪ X). In particular, if s is the lowest location in the partial order, then ϕ_1, ϕ_2 ∈ Φ({a} ∪ X). Denote this equivalent form by δ′(s, a). For the example above, we obtain
δ′(s_0, a) = (s_0 ∧ (a ∧ x.s_a)) ∨ (a ∧ s_⊤), δ′(s_0, b) = s_0 ∧ b, δ′(s_a, a) = δ′(s_a, b) = (s_a ∧ x < 1) ∨ (x > 1), δ′(s_⊤, b) = s_⊤ ∧ b.
2. Step 2.
E
Proof of Lemma 8
Proof. Let ρ be a timed word such that ρ, i |= Reg I re. Note that re could be a compound regular expression containing formulae of the form Reg I re . As a first step, we introduce an atomic proposition w I which evaluates to true at all points j in ρ such that τ j -τ i ∈ I. In case I is an unbounded interval, then we need not concatenate (¬w I ) * at the end of (w ∧ w I ). The rest of the proof is the same.
F Example
Example. Consider the example of the po-1-clock ATA in the main paper. We now work out a few steps to illustrate the construction. The lowest locations are s , s a and we know F (s ) = ns b, F (s a ) = (x < 1) U ns (x > 1) and F (s 0 ) = [(a ∧ x.OF (s a )) ∨ b] W(a ∧ OF (s )).
The regions are R 0 , R 1 , R 2 , R 3 .
From this, we obtain Beh(s_a, R) = {[⊤, ⊤, □^ns ⊥, ⊤]} for all R ∈ R \ {R_2}. Note that, as there is no constraint of the form x = 1 in the formulae, Beh(s_a, R_2) = ∅. As there are no clock constraints in s_⊤, there is no need to compute Beh for it; we do it here just for the purpose of illustration. Beh(s_⊤, R_0) consists of the BDs
F.1 Two Counter Machines
Incremental Error Counter Machine
An incremental error counter machine is a counter machine where a particular configuration can have counter values with arbitrary positive error. Formally, an incremental error kcounter machine is a k + 1 tuple M = (P, C 1 , . . . , C k ) where P is a set of instructions like above and C 1 to C k are the counters. The difference between a counter machine with and without incremental counter error is as follows:
1. Let (l, c 1 , c 2 . . . , c k ) → (l , c 1 , c 2 . . . , c k ) be a move of a counter machine without error when executing l th instruction.
2.
The corresponding move in the increment error counter machine is
(l, c 1 , c 2 . . . , c k ) → {(l , c 1 , c 2 . . . , c k )|c i ≥ c i , 1 ≤ i ≤ k}
Thus the value of the counters are non deterministic.
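A minimal Python sketch of this difference follows, with the instruction encoding chosen by me for illustration: step computes the error-free successor of a configuration, and allowed_with_error accepts any successor whose counters have only drifted upwards from it.

def step(config, program):
    """Error-free move. config = (label, counters); program maps labels to (kind, counter, targets)."""
    l, counters = config
    kind, c, targets = program[l]
    counters = list(counters)
    if kind == 'inc':
        counters[c] += 1
        return (targets[0], tuple(counters))
    if kind == 'dec':
        counters[c] = max(0, counters[c] - 1)     # simplification: Dec on zero is a no-op here
        return (targets[0], tuple(counters))
    if kind == 'jz':                              # if counter c == 0 goto targets[0] else targets[1]
        return (targets[0] if counters[c] == 0 else targets[1], tuple(counters))
    return config                                 # 'halt'

def allowed_with_error(config, succ, program):
    """An incremental-error move may only add to the counters of the exact successor."""
    exact_label, exact_counters = step(config, program)
    label, counters = succ
    return label == exact_label and all(x >= y for x, y in zip(counters, exact_counters))

prog = {'p1': ('inc', 0, ('p2',)), 'p2': ('halt', None, ())}
print(step(('p1', (0, 0)), prog))                                # ('p2', (1, 0))
print(allowed_with_error(('p1', (0, 0)), ('p2', (3, 5)), prog))  # True: upward drift only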
Theorem 15. [START_REF] Minsky | Finite and Infinite Machines[END_REF] The halting problem for deterministic k counter machines is undecidable for k ≥ 2. Theorem 16. [START_REF] Demri | LTL with the freeze quantifier and register automata[END_REF] The halting problem for incremental error k-counter machines is non primitive recursive.
Figure 1 Big picture of the paper. The interval logic star-free MTL, denoted SfrMTL, is equivalent to the freeze logic 1-TPTL, which is equivalent to po-1-clock-ATA. All the logics in blue have an elementary complexity, while SfMITL[UReg] is strictly more expressive than MITL, and RegMITL[UReg] is more expressive than its star-free counterpart SfMITL[UReg].
clock variables progressing at the same rate, y ∈ C, and I is an interval of the form <a, b> a, b ∈ N with <∈ {(, [} and >∈ {], )}.
ICMET
A channel machine is a tuple A = (S, M, ∆, C), where S is a finite set of states, M is a finite channel alphabet, C is a finite set of channel names, and ∆ ⊆ S × Op × S is the transition relation, where Op = {c!a, c?a, c = ∅ | c ∈ C, a ∈ M} is the set of transition operations: c!a corresponds to writing message a to the tail of channel c, c?a denotes reading the message a from the head of channel c, and c = ∅ tests channel c for emptiness.
and H is a new symbol.
1. The j-th configuration, for j ≥ 0, is encoded in the interval [(2k + 2)j, (2k + 2)(j + 1) − 1).
7. A sequence of messages w_1 w_2 w_3 … w_z in any channel is encoded as the sequence w_{1,a} w_{1,b} H w_{2,a} w_{2,b} w_{3,a} w_{3,b} … w_{z,a} w_{z,b}. Let S = ⋁_{i=0}^{n} s_i denote the states of the ICMET and α = ⋁_{i=0}^{m} α_i the transitions, where each α_i is of the form (s, c!m, s′), (s, c?m, s′) or (s, c = ∅, s′). Let action = true and let M_a = ⋁_{m_x∈M}(m_{x,a}), M_b = ⋁_{m_x∈M}(m_{x,b}), with M = M_a ∨ M_b.
3. For each location s, construct ∆(s), which combines δ′(s, a) for all a ∈ Σ by disjuncting them and again putting them in the form of step 1. Thus we obtain ∆(s) = D_1 ∨ D_2, or D_1, or D_2, where D_1, D_2 have the forms s ∧ ϕ_1 and ϕ_2 respectively, with ϕ_1, ϕ_2 ∈ Φ(↓s ∪ Σ ∪ X). For the example above, we obtain ∆(s_0) = (s_0 ∧ [(a ∧ x.s_a) ∨ b]) ∨ (a ∧ s_⊤), ∆(s_a) = (s_a ∧ x < 1) ∨ (x > 1), ∆(s_⊤) = s_⊤ ∧ b.
Step 3. We now convert each ∆(s) into a normal form N(s). N(s) is obtained from ∆(s) as follows.
Then it is easy to see that ρ, i |= Reg_I re iff ρ, i |= Reg_I(re ∧ w_I), since Reg_I covers exactly all points which are within the interval I from i. As the next step, we replace re with an atomic proposition w, obtaining the formula Reg_I[w ∧ w_I]. Assume that I is a bounded interval. Reg_I[w ∧ w_I] is equivalent to Reg[(w ∧ w_I).(¬w_I)*], since Reg[] covers the entire suffix of ρ starting at point i. Now, replace w_I with the clock constraint x ∈ I, and rewrite the formula as x.[(w ∧ (x ∈ I)).(¬(x ∈ I))], which is in 1-TPTL. Note that this step also preserves equivalence of the formulae. Replacing w with re now eliminates one level of the Reg operator in the above formula. Applying the same technique to re′, which has the form Reg_I′(re″), will eliminate one more level of Reg, and so on. Continuing this process results in a 1-TPTL formula which has k freeze quantifications iff the starting SfrMTL formula had k nestings of the Reg modality.
BD_1 = [□^ns b, □^ns b, □^ns b, □^ns b], BD_2 = [⊤, □^ns b, □^ns b, □^ns b], BD_3 = [⊤, ⊤, □^ns b, □^ns b] and BD_4 = [⊤, ⊤, ⊤, □^ns b]. Beh(s_⊤, R_1) consists of BD_1, BD_2, BD_3; Beh(s_⊤, R_2) consists of BD_2, BD_3; while Beh(s_⊤, R_3) consists of BD_3.
It can be seen that Expr(s_a) is given by the disjunction of
[Reg_{R_0}(Σ*) ∧ Reg_{R_1}(Σ*) ∧ Reg_{R_2}(∅) ∧ Reg_{R_3}(Σ*)],
[Reg_{R_0}(∅) ∧ Reg_{R_1}(Σ⁺)] → [Reg_{R_1}(Σ*) ∧ Reg_{R_2}(∅) ∧ Reg_{R_3}(Σ*)],
[Reg_{R_0}(∅) ∧ Reg_{R_1}(∅) ∧ Reg_{R_3}(Σ⁺)] → Reg_{R_3}(Σ*).
It can be seen that Expr(s_⊤) is equivalent to O □^ns b. Let F(s_0) be the formula we get by substituting s_a with Expr(s_a) and s_⊤ with □^ns b. Again, note that there are no clock constraints in F(s_0), so we do not need to compute Beh for it. The final formula, which is asserted at 0, is x.O(F(s_0)), which is equivalent to O(F(s_0)) (as there are no timing constraints in F(s_0)). Observe the behaviour of O(F(s_0)) = O[[(a ∧ Expr(s_a)) ∨ b] W(a ∧ O(□^ns b))]. Note that a point satisfies Expr(s_a) if and only if there is no action point exactly one time unit after it. Thus, O(F(s_0)) = O[[(a ∧ Expr(s_a)) ∨ b] W(a ∧ O(□^ns b))] implies that, from the beginning of the timed word till the last occurrence of a, there is no a which has an action point exactly one unit of time after it — which is exactly what the input ATA specifies.
A deterministic k-counter machine is a k + 1 tuple M = (P, C_1, …, C_k), where 1. C_1, …, C_k are counters taking values in N ∪ {0} (their initial values are set to zero); 2. P is a finite set of instructions with labels p_1, …, p_{n−1}, p_n. There is a unique instruction labelled HALT. For E ∈ {C_1, …, C_k}, the instructions P are of the following forms: a. p_g: Inc(E), goto p_h, b. p_g: If E = 0, goto p_h, else goto p_d, c. p_g: Dec(E), goto p_h, d. p_n: HALT. A configuration W = (i, c_1, …, c_k) of M is given by the value of the current program counter i and the values c_1, c_2, …, c_k of the counters C_1, C_2, …, C_k. A move of the counter machine (l, c_1, c_2, …, c_k) → (l′, c′_1, c′_2, …, c′_k) denotes that configuration (l′, c′_1, …, c′_k) is obtained from (l, c_1, …, c_k) by executing the l-th instruction p_l. If p_l is an increment or decrement instruction on counter C_j, then c′_j = c_j + 1 or c_j − 1 respectively, c′_i = c_i for i ≠ j, and p_{l′} is the respective next instruction; if p_l is a zero-check instruction on C_j, then c′_i = c_i for all i, and p_{l′} = p_h if c_j = 0 and p_d otherwise.
. ϕ = Reg I re. As above, let Γ be the set of all subformulae appearing in re. Then for
RegMTL formula ϕ, we define the satisfaction of ϕ at a position i as follows. Consider the formula ϕ = aUReg (0,1),ab * b. Then re=ab * and Γ={a, b, ¬a, ¬b}.
1. ϕ = ϕ 1 UReg I,re ϕ 2 .
Consider first the case when re is any atomic regular expression. Let Γ be the set of
all subformulae appearing in re. For positions i < j ∈ dom(ρ), let Seg(Γ, i, j) denote
the untimed word over Cl(Γ) obtained by marking the positions k ∈ {i + 1, . . . , j -1}
of ρ with ψ ∈ Γ iff ρ, k |= ψ. Then ρ, i |= ϕ 1 UReg I,re ϕ 2 ↔ ∃j>i, ρ, j|= ϕ 2 , t j -t i ∈I,
ρ, k |= ϕ 1 ∀i<k<j and, Seg(Γ, i, j) ∈ L(re), where L(re) is the language of the regular
expression re.
2a
position i∈dom(ρ) and an interval I, let TSeg(Γ, I, i) denote the untimed word over Cl(Γ)
obtained by marking all the positions k such that τ k -τ i ∈ I of ρ with ψ ∈ Γ iff ρ, k |= ψ.
Then ρ, i |= Reg I re ↔ TSeg(Γ, I, i) ∈ L(re).
Example 1.
2 or re 1 .re 2 or (re 1 ) * , then we use the standard definition of L(re) as L(re 1 ) ∪ L(re 2 ), L(re 1 ).L(re 2 ) and [L(re 1 )] * respectively. RegMTL Semantics: For a timed word ρ = (σ, τ ) ∈ T Σ * , a position i ∈ dom(ρ) ∪ {0}, and a
Lemma 8. SfrMTL ⊆ 1 -TPTL
The proof can be found in Appendix E. We illustrate the technique on an example here. Example. Consider ϕ = Reg (0,1) [Reg (1,2) (a+b) * ]. We first obtain Reg (0,1) (w (0,1) ∧[Reg [START_REF] Alur | The benefits of relaxing punctuality[END_REF][START_REF] Baziramwabo | Modular temporal logic[END_REF] (a+ b) * ]), followed by Reg (0,1) (w (0,1) ∧ w) where w is a witness for Reg (1,2) (a + b) * . This is then rewritten as Reg((w (0,1) ∧ w).(¬w (0,1) ) * ), and subsequently as Reg((x ∈ (0, 1) ∧ w) ∧ [¬(x ∈ (0, 1)])). This is equivalent to x.[x ∈ (0, 1) ∧ w ∧ [¬(x ∈ (0, 1)]]. Now we replace w, and do one more application of this technique to obtain the formula x.[x ∈ (0, 1) ∧ [x.(ψ ∧ x ∈ (1, 2) ∧ [¬(x ∈ (1, 2))])] ∧ [¬(x ∈ (0, 1))]], where ψ is the LTL formula equivalent to (a + b) * .
po-1-clock ATA to SfrMTL
Lemma 9. Given a po-1-clock ATA A, we can construct a SfrMTL formula ϕ such that L(A) = L(ϕ).
Let A be a po-1-clock ATA with locations S = {s 0 , s 1 , . . . , s n }. Let K be the maximal constant used in the guards x ∼ c occurring in the transitions. The idea of the proof is to partition the behaviour of each location s i across the regions R 0 =[0, 0], R 1 =(0, 1), . . . , R 2K =[K, K], R + K =(K, ∞) with respect to the last reset of the clock. Let R denote the set of regions. Let R h < R g denote that the region R h comes before R g .
The behaviour in each region is captured as an LTL formula that is invariant in each region. From this, we obtain an SfrMTL formula that represents the behaviour starting from each region while at location s. The fact that the behaviours are captured by LTL formulae asserts the star-freeness of the regular expressions in the constructed RegMTL formulae. In the following, we describe this construction step by step. Let a behaviour distribution (BD) be described as a sequence of length 2K + 1 of the form [ϕ 0 , ϕ 1 , . . . , ϕ 2K ] where each ϕ i is a LTL formula (which does not evaluate to false) that is asserted in region R i . For any location s in A, and a region R we define a function that associates a set of possible behaviours. As seen in section 4.1.1, assume that we have computed F (s) for all locations s. Let F (S) = {F (s) | s ∈ S}. Let B(F (S)) represent the boolean closure of F (S) (we require only conjunctions and disjunctions of elements from F (S)). We define Beh : B(F (S)) × R → 2 BD . Intuitively, Beh(F (s), R i ) provides all the possible behaviours in all the regions of R, while asserting F (s) at any point in R i . Thus,
where α is a number that depends on the number of locations and the transitions of A and the maximal constant K. Now we describe the construction of Beh(F (s), R i ). If s is the lowest in the partial order, then F (s) has the form ϕ 1 Wϕ 2 or ϕ 1 U ns ϕ 2 , where ϕ 1 , ϕ 2 are both disjunctions of conjunctions over Φ(Σ, X). Each conjunct has the form ψ ∧ x ∈ I where ψ ∈ Φ(Σ) and I ∈ R.
Let s be a lowest location in the partial order. F (s) then has the form
where P i and Q j are propositional formulae in Φ(Σ) and C i and E j are clock constraints. Without loss of generality, we assume that clock constraints are of the form x ∈ R y , where R y ∈ R, and that no two C i and no two E j are the same. We now construct Beh(F (s), R) for F (s). 1. Beh(F (s), R i ) = ∅ if and only if there are no constraints x ∈ R i in F (s). This is because F (s) does not allow any behaviour within R i and Beh(F (s), R i ) asserts the behaviour when the clock valuation lies in R i .
Consider an
For each such E j , a behaviour BD is added to the set Beh(F (s), R i ) as follows.
the behaviours from the point where T j was called, we substitute T j with the LTL formula asserted at region R g in Beh(F (T j ), R g ). This is done by replacing T j with Y g . For all w < g, X w = X w . For all w > g, X w = X w ∧ Y w . Let the set of BDs obtained thus be called Seq g . 4. Now consider the case when the next point is taken a region > R g . In this case, we consider all the possible regions from R g+1 onwards. For every b ∈ {g + 1, . . . , 2K} we do the following operation: we first take the cross product of [X 0 , . . . , X g-1 , Q j ∧ O(T j ), X g+1 , . . . , X 2K ] and Beh(F
We define an operation stitch(b) on this pair which gives us a sequence [X 0 , . . . , X 2K ]. For all w < g, X w = X w . For w = g, X w = Q j . For all b > w > g, X w = X w ∧ ns ⊥. This implies the next point from where the assertion
This combines the assertions of both the behaviours from the next point onwards. Let the set of BDs we get in this case be Seq ≥g . 5. The final operation is to substitute [X 0 , . . . , X g-1 , Q j ∧O(T j ), X g+1 , . . . , X 2K ] with BDs from any of Seq g , Seq g+1 , . . . , Seq 2K . 6. Note that a similar technique will work while eliminating U j from BD = [X 0 , . . . , X g-1 , ns P j ∧ O(U j ), X g+1 , . . . , X 2K ]. Given BD=[X 0 , . . . , X g-1 , P i ∧O(U i ) U ns Q j ∧O(T j ), X g+1 , . . . , X 2K ], we need to eliminate both the U i and T j . The formulae says either Q j ∧O(T j ) is true at the present point or, P i ∧ O(U i ) true until some point in the future within the region R g , when Q j ∧ O(T j ) becomes true. Thus, we can substitute BD with two sequences BD 1 =[X 0 , . . . , X g-1 , Q j ∧O(T j ), X g+1 , . . . , X 2K ], and BD 2 =[X 0 , . . . , X g-1 , P i ∧O(U i ) UQ j ∧O(T j ), X g+1 , . . . , X 2K ]. We can eliminate T j from BD 1 as shown before. Consider BD 2 which guarantees that the next point from which the assertion
and that U i is called for the last time within R g . Such a BD 2 has to be combined with Beh(F (U i ), R g ). T j can be called from any point either within region R g or succeeding regions. Consider the case where T j is called from within the region R g . First let us take a cross product of BD with Beh(F (U i ), R g ) × Beh(F (T j ), R g ). This gives a triplet of sequences of the form [X 0 , . . . , X g-1 , Q j ∧ O(T j ), X g+1 , . . . , X 2K ], [Y U,0 , . . . , Y U,2K ], [Y T,0 , . . . , Y T,2K ]. We now show to combine the behaviours and get a sequence [X 0 , . . . , X 2K ].
For every w < g, X w = X w . For w = g, X g is obtaining by replacing U i with Y U,g and T j with Y T,g in the gth entry of BD 2 . For all w > g, X w = X w ∧ Y U,g ∧ Y T,g . Let the set of these BDs be denoted Seq g . Now consider the case where T j was called from any region R b >R g . Take a cross product of BD with Beh(F
]. The one difference in combining this triplet as compared to the last one is that we have to assert that from the last point in R g , the next point only occurs in the region R b . Thus all the regions between R g and R b should be conjuncted with ns ⊥. We get a sequence [X 0 , . . . , X 2K ] after combining, such that For all w < g, X w = X w . For w = g, X g = (
. , X 2K ] and can be replaced by 2 BDs
Appendix
A 1-TPTL for Reg [l,u) atom
To encode an accepting run going through a sequence of merges capturing Reg_[l,u) atom at a point e, we assert ϕ_chk1 ∨ ϕ_chk2 at e, assuming l ≠ 0. If l = 0, we assert ϕ_chk3. Recall that m is the number of states in the minimal DFA accepting atom.
between indices between the first and the last symbol in the time region where the counting constraint is checked.
B.2.1 Construction of Simple Extension
Consider a temporal definition T = ns [a ↔ xUM I,#b=k%n y], built from Σ ∪ W . Let ⊕ denote addition modulo n + 1.
1. Construction of a (Σ ∪ W, B)-simple extension. We introduce a fresh set of propositions
and construct a family of simple extensions ρ = (σ , τ ) from ρ = (σ, τ ) as follows:
C3: σ i has exactly one symbol from B for all 1 ≤ i ≤ |dom(ρ)|.
2. Formula specifying the above behaviour. The variables in B help in counting the number of b's in ρ. C1, C2 and C3 are written in MTL as follows:
Proof. 1. Construct a simple projection ρ as shown in B.2.1.
2. Now checking whether at point i in ρ, x U I,#b=k%n y is true, is equivalent to checking that at point i in ρ there exist a point j in the future where y is true and for all the points between j and i, x is true and the difference between the index values of the symbols from B at i and j is k%n.
) where j = k + i%n. Note that φ 1 UReg I,re φ 2 ≡ trueUReg I,re φ 2 , where re is a regular expression obtained by conjuncting φ 1 to all formulae ψ occurring in the top level subformulae of re. For example, if we had aUReg (0,1),(Reg [START_REF] Alur | The benefits of relaxing punctuality[END_REF][START_REF] Baziramwabo | Modular temporal logic[END_REF] [Reg (2,3) (b+c) * ]) d, then we obtain trueUReg (0,1),(a∧Reg [START_REF] Alur | The benefits of relaxing punctuality[END_REF][START_REF] Baziramwabo | Modular temporal logic[END_REF] [Reg (2,3) (b+c) * ]) d. When evaluated at a point i, the conjunction ensures that φ 1 holds good at all the points between i and j, where τ j -τ i ∈ I. To reduce trueUReg I,re φ 2 to a Reg I formula, we need the following lemma.
The formula
Lemma 14. Given any regular expression R, there exist finitely many regular expressions R^1_1, R^1_2, …, R^n_1, R^n_2 such that L(R) = ⋃_{i≤n} L(R^i_1 · R^i_2). That is, for any string σ ∈ L(R) and for any decomposition of σ as σ_1.σ_2, there exists some i ≤ n such that σ_1 ∈ L(R^i_1) and σ_2 ∈ L(R^i_2).
Proof. Let A be the minimal DFA for R and let the number of states in A be n. The set of strings that lead from the initial state q_1 to some state q_i is definable by a regular expression R^i_1. Likewise, the set of strings that lead from q_i to some final state of A is definable by some regular expression R^i_2. Given that there are n states in the DFA A, we have L(R) = ⋃_{i≤n} L(R^i_1 · R^i_2).
Consider any string σ ∈ L(A) and any arbitrary decomposition of σ as σ_1.σ_2. If we run the word σ_1 over A, we reach some state q_i; thus σ_1 ∈ L(R^i_1). If we read σ_2 from q_i, it must lead to one of the final states (since σ ∈ L(R)); thus σ_2 ∈ L(R^i_2).
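The argument can be checked concretely with a few lines of Python (a toy illustration, names mine): running the prefix of an accepted word yields the intermediate state q_i witnessing the decomposition σ1 ∈ L(R^i_1), σ2 ∈ L(R^i_2).

def run(dfa, state, word):
    for letter in word:
        state = dfa['delta'][(state, letter)]
    return state

def decomposition_states(dfa, word):
    """For each split point of an accepted word, the intermediate state witnessing R^i_1 . R^i_2."""
    assert run(dfa, dfa['init'], word) in dfa['final']
    out = []
    for cut in range(len(word) + 1):
        q = run(dfa, dfa['init'], word[:cut])           # sigma_1 leads to q
        assert run(dfa, q, word[cut:]) in dfa['final']  # sigma_2 is accepted from q
        out.append((word[:cut], q, word[cut:]))
    return out

dfa = {'init': 'q1', 'final': {'q2'},                   # toy DFA for the language a b*
       'delta': {('q1', 'a'): 'q2', ('q2', 'b'): 'q2'}}
for prefix, q, suffix in decomposition_states(dfa, 'abb'):
    print(prefix, q, suffix)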
If s occurs in ∆(s), replace it with Os.
Replace each s occurring in each Φ i (↓ s) with Os . Let N (s) = N 1 ∨ N 2 , where N 1 , N 2 are normal forms. Intuitively, the states appearing on the right side of each transition are those which are taken up in the next step. The normal form explicitely does this, and takes us a step closer to 1--TPTL. Continuing with the example, we obtain
Start with the state s n which is the lowest in the partial order. Let N (s n ) = (Os n ∧ ϕ 1 ) ∨ ϕ 2 , where ϕ 1 , ϕ 2 ∈ Φ(Σ, X). Solving N (s n ), one obtains the solution F (s n ) as ϕ 1 Wϕ 2 if s n is an accepting location, and as ϕ 1 U ns ϕ 2 if s n is non-accepting.
In the running example, we obtain
The 1-TPTL formula equivalent to L(A) is then given by F (s 0 ).
D.1 Correctness of Construction
The above algorithm is correct; that is, the 1-TPTL formula F (s 0 ) indeed captures the language accepted by the po-1-clock ATA.
For the proof of correctness, we define a 1-clock ATA with a TPTL look ahead. That is, δ : S × Σ → Φ(S ∪ X ∪ χ(Σ ∪ {x})), where χ(Σ ∪ {x}) is a TPTL formula over alphabet Σ and clock variable x. We allow open TPTL formulae for look ahead; that is, one which is not of the form x.ϕ. All the freeze quantifications x. lie within ϕ. The extension now allows to take a transition (s, ν) → [κ ∧ ψ(x)], where ψ(x) is a TPTL formula, if and only if the suffix of the input word with value of x being ν satisfies ψ(x). We induct on the level of the partial order on the states.
Base Case: Let the level of the partial order be zero. Consider 1-clock ATA having only one location s 0 . Let the transition function be δ(s 0 , a) = B a (ψ a (x), X, s 0 ) for every a ∈ Σ. By our construction, we reduce
specifies that the clock constraints X 1 are satisfied and the suffix satisfies the formulae ψ 1 (x) on reading an a. Thus for this δ(s 0 , a), we have Os 0 ∧ X 1 ∧ ψ 1 (x) ∧ a as a corresponding disjunct in ∆ which specifies the same constraints on the word. Thus the solution to the above will be satisfied at a point with some x = ν if and only if there is an accepting run from s 0 to the final configuration with x = ν.
If the s 0 is a final location, the solution to this is, ϕ
If it is non-final, then it would be U instead of W. Note that this implies that whenever s 0 is invoked with value of x being ν, the above formulae would be true with x = ν thus getting an equivalent 1 -TPTL formulae.
Assume that for automata with n − 1 levels in the partial order, we can construct 1-TPTL formulae equivalent to the input automaton as per the construction. Consider any input automaton with n levels. Consider all the locations at the lowest level (that is, the locations which can call only themselves), s_0, …, s_k. Apply the same construction. As explained above, the formula constructed while eliminating a location will be true at a point if and only if there is an accepting run starting from the corresponding location with the same clock value. Let the formula obtained for any such s_i be ϕ_i. The occurrence of an s_i in any ∆(s_{i<n}) can be substituted with ϕ_i as a look-ahead. This gives us an (n − 1)-level 1-clock ATA with TPTL look-ahead. By induction, we obtain that every po-1-clock ATA can be reduced to a 1-TPTL formula.
F.2 Non-punctual 1-TPTL is NPR
In this section, we show that non-punctuality does not provide any benefits in terms of complexity of satisfiability for TPTL as in the case of MITL. We show that satisfiability checking of non-punctual TPTL is itself non-primitive recursive. This highlights the importance of our oversampling reductions from RegMTL and RegMITL to MTL and MITL respectively, giving RegMITL an elementary complexity. It is easier to reduce RegMITL to 1-variable, non-punctual, TPTL without using oversampling, but this gives a non-primitive recursive bound on complexity.
Non-punctual TPTL with 1 Variable (1 -OpTPTL)
We study a subclass of 1 -TPTL called open 1 -TPTL and denoted as 1 -OpTPTL. The restrictions are mainly on the form of the intervals used in comparing the clock x as follows:
Whenever the single clock x lies in the scope of even number of negations, x is compared only with open intervals, and Whenever the single clock x lies in the scope of an odd number of negations, x is compared to a closed interval. Note that this is a stricter restriction than non-punctuality as it can assert a property only within an open timed regions.
F.2.1 Satisfiability Checking for 1 -OpTPTL
In this section we will investigate the benefits of relaxing punctuality in TPTL by exploring the hardness of satisfiability checking for 1 -OpTPTL over timed words.
Theorem 17. Satisfiability Checking of 1 -OpTPTL[♦, O] is decidable with non primitive recursive lower bound over finite timed words and it is undecidable over infinite timed words.
Proof. We encode the runs of k counter incremental error channel machine using 1 -OpTPTL formulae with ♦, O modalities. We will encode a particular computation of any CM using timed words. The main idea is to construct an 1 -OpTPTL[♦, O] formula ϕ ICM for any given k-incremental counter machine ICM such that it is satisfied by only those timed words that encode the halting computation of ICM. Moreover, for every halting computation C of ICM at least one timed word ρ C satisfies ϕ ICM such that ρ C encodes C.
We encode each computation of some k-incremental counter machine ICM = (P, C) where P = {p 1 , . . . , p n } and C = {c 1 , . . . , c k } using timed words over the alphabet Σ ICM = i∈{1,...,k} (S ∪ F ∪ {a j , b j }) where S = {s p |p ∈ 1, . . . , n} and F = {f p |p ∈ 1, . . . , n} as follows: A i th configuration, (p, c 1 , . . . , c k ) is encoded in the time region [i, i + 1) with sequence :
The concatenation of these time segments of a timed word encodes the whole computation. Thus the untimed projection of our language will be of the form:
To construct a formula ϕ_ICM, the main challenge is to write down some finite specifications which propagate the behaviour from the time segment [i, i+1) to the time segment [i+1, i+2), such that the latter encodes the (i+1)-th configuration of ICM (in accordance with the program counter value in the i-th configuration). The usual idea is to copy all the a's from one configuration to another using punctuality. This is not possible in a non-punctual logic. Thus we try to preserve the number (or copy a time point) using the following idea:
Given any non last (a j , t)(b j , t ) before F(for some counter c j ) , of a timed word encoding a computation. We assert that the last symbol in (t, t + 1) is a j and the symbol in (t , t + 1) is b j . We can easily assert that the untimed sequence of the timed word is of the form
The above two conditions imply that there is at least one a j within time(t 1 + 1, t 2 + 1). Thus all the non last a j b j is copied to the segment encoding next configuration. Now appending one a j b j ,two a j b j 's or no a j b j 's depends on whether the instruction was copy, increment or decrement operation.
ϕ ICM is obtained as a conjunction of several formulae. Let S, F be a shorthand for . This could be expressed in the formula below
Initial Configuration: There is no occurrence of a j b j within [0, 1]. The program counter value is 1.
)) Copying S, F: Every (S, u), (F, v) has a next occurrence (S, u ), (F, v ) in future such that u -u ∈ (k, k + 1) and v -v ∈ (k -1, k). Note that this condition along with ϕ 1 and ϕ 2 makes sure that S and F occur only within the intervals of the form [i, i + 1) where i is the configuration number.
Beyond p n =HALT, there are no instructions
At any point of time, exactly one event takes place. Events have distinct time stamps.
Eventually we reach the halting configuration p_n, c_1, …, c_k: ϕ_6 = ♦s_n. Every non-last (a_j, t)(b_j, t′) occurring in the interval (i, i + 1) should be copied into the interval (i + 1, i + 2). We specify this condition by stating that from every non-last a_j (before A_{j+1} or f_p) the last symbol within (0, 1) is a_j; similarly, from every non-last b_j (before A_{j+1} or f_p) the last symbol within (k − 1, k) is b_j. Thus (a_j, t)(b_j, t′) will have a (b_j, t′ + 1 − ε) with ε ∈ (0, t′ − t). Thus all the non-last a_j b_j will incur a b_j in the next configuration. ϕ_2 makes sure that there is an a_j between two b_j's. Thus this condition, along with ϕ_1, makes sure that the non-last a_j b_j sequence is conserved. Note that there can be some a_j b_j which are arbitrarily inserted. These insertion errors model the incremental error of the machine. Thus if we consider a mapping where (a_j, t_ins)(b_j, t′_ins) is mapped to (a_j, t)(b_j, t′) such that t_ins ∈ (t + 1, t′ + 1), this is an injective function. Just for the sake of simplicity we assume that a_{k+1} = false.
We define a short macro Copy C\W : Copies the content of all the intervals encoding counter values except counters in W . Just for the sake of simplicity we denote
Using this macro we define the increment,decrement and jump operation.
1. p g : If C j = 0 goto p h , else goto p d . δ 1 specifies the next configuration when the check for zero succeeds. δ 2 specifies the else condition.
2. p g : Inc(C j ) goto p h . The increment is modelled by appending exactly one a j b j in the next interval just after the last copied a j b j ϕ g,incj 8
=
] specifies the increment of the counter j when the value of j is zero. The formula
specifies the increment of counter j when j value is non zero by appending exactly one pair of a j b j after the last copied a j b j in the next interval. 3. p g : Dec(C j ) goto p h . Let second -last(a j ) = a j ∧ O(O(last(a j ))). Decrement is modelled by avoiding copy of last a j b j in the next interval. The formula ψ dec 0 = ns [{s g ∧ (¬a j ) Uf g )} → {(¬S) U{s h ∧ ((¬a j ) U(F)}] specifies that the counter remains unchanged if decrement is applied to the j when it is zero. The formula ψ dec 1 = ns [{s g ∧ ((¬F) U(a j ))} → (¬F) Ux.{second -last(a j ) ∧ ♦(T -x ∈ (0, 1) ∧ (a j ∧ OO(A j+1 ∧ T -x ∈ (1, 2))))}] decrements the counter j, if the present value of j is non zero. It does that by disallowing copy of last a j b j of the present interval to the next.
The formula ϕ_ICM = ⋀_{i∈{1,…,7}} ϕ_i ∧ ⋀_{p∈P} ϕ^p_8.
2) To prove the undecidability, we encode the k-counter machine without error. Let the formula be ϕ_CM. The encoding is the same as above. The only difference is that, while copying the non-last a in ϕ_M, we allowed insertion errors, i.e., arbitrarily many extra a's and b's were allowed in between, apart from the copied ones, in the next configuration. To encode a counter machine without error we need to rule out these insertion errors. The rest of the formulae are the same. The following formula avoids error and copies all the non-last a's and b's without any extra a and b inserted in between.
F.2.1.1 Correctness Argument
Note that increment errors occurred only while copying the non last ab sequence in (1). The similar argument for mapping a j with a unique a j in the next configuration can be applied in past and thus using ϕ 9 mapping we can say that non last a j , b j in the previous configuration can be mapped to a copied a j , b j in the next configuration with an injective mapping. This gives as an existence of bijection between the set of non-last a k , b k in the previous configuration and the set of copied a k , b k by ϕ 7 . Thus "there are no insertion errors" is specified with ϕ 9 . | 99,026 | [
"937922",
"1003424",
"1003425"
] | [
"54366",
"54366",
"328231"
] |
01188039 | en | [
"shs"
] | 2024/03/04 23:41:48 | 2010 | https://hal.univ-reunion.fr/hal-01188039/file/eLex%20JMB%20JS.pdf | Jean Simon
email: jean.simon@reunion.iufm.fr
Equipe Grrapeli
François-Marie Blondel
email: francois-marie.blondel@inrp.fr
Use of a CSCW platform by three different categories of people: trace analysis according to Activity Theory
Keywords: trace analysis, CSCW, Activity theory, teachers training
In this paper, we analyze the traces of activity left on a CSCW platform. To do so, we gather the traces in higher level shared folders (hlsf). Based on Activity Theory, the hlsfs make it possible to distinguish between the groups and the goals they pursue. The first group we study consists of preservice teachers who use the platform to pool and share teaching resources. The second consists of students who use the platform to prepare an examination, and the third consists of researchers who use the platform to work together. We show that, depending on the groups and their goals, the observed activities are not the same.
Introduction
We study here the activity of three different categories of users on two CSCW platforms based on BSCW [START_REF] Bentley | Basic Support for Cooperative Work on the World Wide Web[END_REF]. The first category consists of preservice teachers (PE2s, for "professeurs des écoles 2è année") who are learning how to teach; they have already passed the competitive examination to become primary school teachers. The second consists of students (PE1s, for "professeurs des écoles 1è année") who are preparing for this examination. The platform used in their case is a BSCW platform managed by the teacher training school of La Réunion (IUFM, for Institut universitaire de formation des maîtres). It is important to note that we study their work when they are alone on the platform, with no trainer associated with their work; we have shown in (Simon, 2009a) and (Simon, 2009b) that this makes a difference in terms of results. The last category consists of a community of students and researchers. The platform used in this case is a BSCW platform managed by the ENS (Ecole Normale Supérieure) of Cachan, which is why we call this category "ENS". These three categories had very different lifespans at the time we collected the traces of their activity (July 2009): the first two had access to the platform only during one academic year (2008-2009), while they were in training at the IUFM, whereas the students and researchers have had access to the ENS platform since September 2004.
The reasons for using a groupware platform differ from one category to another. For the PE2s, the platform serves to pool and share resources for teaching. For the PE1s, the objective is rather to share everything that can help them succeed in the examination. At the ENS of Cachan, the students work on the platform at the request of their trainers, because most of them are not always present on campus. For the researchers, the platform serves to preserve and exchange key documents and events and to manage a large number of collective actions, including projects that involve other remote teams in France or abroad. Note that while the students and researchers of the ENS and the PE2s of the IUFM may find it very beneficial to work together, this is not the case for the PE1s, since they will be in competition at the time of the examination.
To study this activity, we analyze the traces which these users have left on the platform using Activity Theory [START_REF] Engeström | Learning by expanding: An Activity-Theoretical Approach to Developmental Research[END_REF], because these three types of activity are characterized both by the groups which are at their origin and by the goals they pursue. In the next section, we briefly present the adopted methodology and, in section three, the results obtained. In the conclusion, we come back to these results.
Methodology
The principles and the reasons of the methodology that we employ here can be found in [START_REF] Simon | Dossiers partagés par les stagiaires avec ou sans formateur à l'IUFM de La Réunion : Analyses des traces[END_REF] and (Simon 2009a). Briefly, we study the traces that were left on platforms by combining them in units that make sense.
For each category, we have the totality of the traces which its users have left on the platform. These traces are like photography, taken at a given time, of the activities taking place on it. At the time of the analysis, there were 166893 of them left by the researchers and the students on the platform of the ENS of Cachan and 775750 on the platform of the IUFM of La Réunion. But for the latter, the traces do not concern only the categories of users PE1 and PE2 but also secondary school teachers, trainers,... In most researches, trace analysis consists simply in counting the traces. The problem is that, by doing this way, a lot of information is lost. These traces reflect various activities and this is the reason why it is necessary to be able to gather them according to these activities. In particular, we must be able to define who has left them to pursue what purpose. We find, in the background of this approach, the Activity Theory.
In order to be able to recover these two parameters, the group and the goal, we, thus, gather the traces in hlsf (higher level shared folder) (Simon, 2009,a). A hlsf is a folder created and shared by a group of users in order to achieve their goals. A hlsf, thus, includes the traces corresponding to different objects (sub-folders, documents, URLs, ...), various actions on these objects (creation, reading, modification,...) and the group of members associated with it.
The use of the hlsf as unit of analysis makes it possible to study the traces left on the platform according to these groups and their objectives. In particular, this has allowed us to study the activity of PE2s during three years, distinguishing whether they were alone (Simon, 2009a) or with a trainer (Simon, 2009b).
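As an illustration of this gathering of traces into hlsfs, the following Python sketch groups raw trace records by their top-level shared folder; the field names and the path convention are assumptions made for the example, not the actual BSCW log format.

from collections import defaultdict

def group_by_hlsf(traces):
    """traces: iterable of dicts with keys 'user', 'action', 'path', 'time'."""
    hlsfs = defaultdict(lambda: {'members': set(), 'actions': defaultdict(int)})
    for t in traces:
        hlsf = t['path'].split('/')[0]            # first path segment = higher level shared folder
        hlsfs[hlsf]['members'].add(t['user'])
        hlsfs[hlsf]['actions'][t['action']] += 1
    return hlsfs

traces = [
    {'user': 'pe2_01', 'action': 'create', 'path': 'maths/seq1.doc', 'time': '2008-10-02'},
    {'user': 'pe2_02', 'action': 'read',   'path': 'maths/seq1.doc', 'time': '2008-10-03'},
    {'user': 'pe2_02', 'action': 'create', 'path': 'french/dictee.doc', 'time': '2008-11-05'},
]
for name, info in group_by_hlsf(traces).items():
    print(name, sorted(info['members']), dict(info['actions']))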
In what follows, we analyze the hlsfs shared by each of the three categories: PE2s, PE1s and researchers. The PE2s created the largest number of hlsfs, but we will see later that these hlsfs were not the most active; moreover, compared to the previous years (Simon, 2009a), this number is going down.
Results
Number of hlsfs
The PE1s have the lowest number of hlsfs, both in absolute terms (22) and relative to the number of PE1s (0.13). This is understandable when we know, as we have noted, that the PE1s will be in competition. This will also appear in most of the following tables.
Regarding the ENS category, one may wonder why they have not created more hlsfs, since their activity takes place over 5 years whereas the activity of the PEs took place over only one year. The answer lies, perhaps, in the fact that each hlsf is associated with an activity, and a larger number of years does not necessarily imply a larger number of activities, since the same activity can run over several years.

User involvement in several hlsfs

Table 2 indicates the number of hlsfs in which the users take part. It can be noted that, in all categories, a majority of them take part in one to ten hlsfs.
We note what has been reported previously: it is among the PE1s that we find the greatest number of non-participants. Nearly 1 PE1 out of 4 does not belong to any hlsf.
In contrast, only 1% of the ENS members are not involved in any hlsf, and only a minority of them participate in more than 10.
It is among the PE2s that the greatest number of users taking part in more than 10 hlsfs is found. This raises the question of whether all these hlsfs are justified or whether some of them could have been merged.

Roles

Table 3 indicates how the users have distributed the roles among themselves. We distinguish five different roles: the leaders, who have created at least one hlsf; the moderators, who have created at least one subfolder in these hlsfs; the producers, who have deposited at least one document in these hlsfs; the readers, who have read at least one document in these hlsfs; and, finally, the inactives who, as the name suggests, did not take part in the life of the hlsfs.
For each category, the percentages are computed relative to the number of members of the category who take part in at least one hlsf, and not relative to the total number of members of the category.
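For instance, starting from the grouped traces, the five roles could be derived roughly as follows (the action labels are again hypothetical and simply mirror the definitions given above):

```python
def roles_of_user(user_actions):
    """Derive roles from the set of actions a user performed in the hlsfs
    he or she belongs to; `user_actions` is a set of hypothetical action labels."""
    roles = set()
    if "create_hlsf" in user_actions:
        roles.add("leader")
    if "create_subfolder" in user_actions:
        roles.add("moderator")
    if "create_doc" in user_actions:
        roles.add("producer")
    if "read_doc" in user_actions:
        roles.add("reader")
    return roles or {"inactive"}

print(roles_of_user({"read_doc"}))                # {'reader'}
print(roles_of_user(set()))                       # {'inactive'}
print(roles_of_user({"create_doc", "read_doc"}))  # {'producer', 'reader'}
```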
We can see that according to the categories to which they belong, the users have distributed the roles differently.
Organization (leader and moderator)
We see that there is no real leadership among the PE2s: one PE2 out of four has created a folder, whereas the ratio is only one out of eight for the PE1s and one out of sixteen for the researchers. This notion of leadership must be taken with caution, because launching a hlsf does not necessarily make someone its leader. This is the case, for instance, at the ENS, where one researcher has created 52 hlsfs out of 102, most of them at the request of his colleagues.
The tendency remains the same for the moderators, but with some nuances. It is among the PE2s that we find the greatest percentage of moderators. For the PE1s, the number of moderators is equal to the number of leaders; furthermore, a closer analysis shows that the same persons who created the hlsfs are the ones who created the sub-folders.
At the ENS, the number of moderators is much bigger than the number of leaders. This suggests a greater division of labour in this category. Another explanation is that, once the general framework is decided through the creation of the hlsf, more people create subfolders as the projects evolve.
Production
It can be noted that the proportion of producers relative to the number of members is lowest among the PE1s. That can be explained, again, by the fact that they are in competition.
This percentage is nearly 38% among the PE2s and 29% among the students and researchers of the ENS. For the latter, this number can appear relatively low: one might have thought that, over the 5 years, all members would have taken part, more or less, in the production of documents. This could be explained by the fact that the students are not always required to produce and, in the same way, some researchers produce little, often because these means of communication and exchange do not belong to their professional practices.
We will come back to production in the last point of this section, because the number of producers per hlsf is what best characterizes these hlsfs.
Use/reading
Concerning use, that is, the readings, we note that almost all the students and researchers were interested in what was on the platform (98% have read at least one document), but only 8 PEs out of 10 (PE1 or PE2) did so.
Partial conclusion
Concerning the PE2s, the figures seem to indicate that there is no real leadership or people in charge of the organization. We can also wonder whether there was any a priori organization of those hlsfs.
The same phenomenon of strong imbalance between production and use, already reported in [START_REF] Simon | Teaching resource pooling and sharing by the primary school teachers trainees , ICOOL 2007[END_REF] and (Simon, 2009a), is confirmed here. If we assume that PE2s are supposed to share documents to ease their tasks while they are in charge of a class, we find that 44% of them do not really respect the rule, because they take without giving (82% - 38%).
However, it is among the PE1s that this "lurker" phenomenon is the strongest: only 14% of them deposit documents whereas 82% read. The fact that 64% of them take without giving seems consistent with the fact that they are in competition.
Finally, for the students and researchers of the ENS, there are more moderators than leaders. This suggests a greater division of labour than in the previous categories: the management of collective projects implies a more constrained distribution of roles within the professional frameworks. For them too, there is a gap of 68% between the proportion of producers and the proportion of readers. But given the objectives they assign to the platform, this gap is less troublesome, because the activity does not aim to ease each other's tasks but rather to disseminate information.
Activity
We have seen in the previous tables the number of participants and the roles they took on. We now want to go further and see to what extent this participation is important. In Table 4, we observe the activity within the hlsfs shared by each category. To measure this activity, we evaluated the average number of members in the different types of hlsfs, the average number of documents produced and the average number of readings.
Thus, we see that while the average number of members per hlsf is not very different from one community to another, the productivity of the members is significantly higher among the students and researchers (44 documents produced per hlsf) than among the PE2s (4 documents produced per hlsf), and this remains true even if we reduce the work of the researchers to one academic year (44 / 5 = 8.8).
If we consider, moreover, the average productivity per member, it appears that the students and researchers of the ENS are the most productive. However, as noted in Table 3, not all members are producers, and we analyze this more precisely in the next point.
Concerning the readings, by contrast, we see that, if we take the lifespan of the hlsfs into account, the PE1s read the most, with 3 readings per member, while the PE2s read, on average, only 1 document and the students and researchers 2 (11 / 5). Furthermore, we also find that members do not read all the documents, whatever the community. Thus, there could have been 72 readings per hlsf for the PE2s (18 x 4) and there were only 18; for the students and researchers there could have been 549 and there were only 137. It is among the PE1s that the number of actual readings, 50, is closest to the number of possible readings, 90 (6 x 15).
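These "possible readings" are simply the product of the average number of members and the average number of documents per hlsf (Table 4); the short computation below restates them (with the rounded averages of Table 4, the ENS product comes out slightly below the 549 quoted above, which presumably uses the unrounded values):

```python
# (average members per hlsf, average documents per hlsf, actual readings per hlsf), from Table 4
averages = {
    "PE2": (18, 4, 18),
    "PE1": (15, 6, 50),
    "ENS": (12, 44, 137),
}
for category, (members, docs, readings) in averages.items():
    possible = members * docs
    print(f"{category}: {readings} readings out of {possible} possible ({readings / possible:.0%})")
```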
Production
The number of producers in a hlsf is one of the criteria which make it possible to define the type of activity within this hlsf [START_REF] Gerard | Analyse des réseaux sociaux associées aux dossiers partagés par des professeurs des écoles stagiaires[END_REF]. On the one hand, the more producers there are, the more exchanges there are, and there may be cooperation or collaboration within those hlsfs. On the other hand, we can assume that hlsfs where we find only one producer are rather intended for the dissemination of information.
When we consider the hlsfs according to the number of producers, we obtain Table 5. We note that the largest number of producers is found in the hlsfs shared by the PE2s: almost 53% of their hlsfs have 2 or more producers. This figure confirms those of previous years (Simon, 2009a) and reflects the fact that the PE2s want to pool and share resources for teaching their classes. On the other hand, as we have seen, it is in the hlsfs shared by PE2s that the number of documents is lowest, so we can deduce that the PE2s are the least productive.
In the hlsfs shared by PE1s, as in those shared by students and researchers, there is generally one single producer. We can thus assume that we are dealing less with an exchange of information between members than with a dissemination of information from one member to the others. However, the reason is very different from one category to another. Whereas, in the case of the researchers, it is the diffusion of information, telling the others what each one does, which is at the origin of the creation of the hlsfs, in the case of the PE1s the small number of producers is explained by the fact that they are competing with each other, as we have already seen in the previous points.
To conclude this point, we can say that:
- the PE2s are, at most, in cooperation rather than collaboration in the sense of [START_REF] Dillenbourg | he evolution of research on collaborative learning[END_REF];
- for the two other categories, the platform is primarily used to disseminate information, but not for the same reasons.

3.6. "Overorganization"

It can be noted that, naturally, the more documents there are, the more subfolders there are, but we also find the phenomenon of "overorganization" reported in several of our previous articles. We call "overorganization" the fact that there are, on average, very few documents per folder. We showed that in 2006-2007 the hlsfs shared by PE2s working between peers had an average of 3 documents per folder; this remains true over the next three years and also for the hlsfs that PE2s have shared with trainers (Simon, 2009b). We note here that the community of the PE1s and the community of students and researchers do not depart from this rule, since we obtain an average of almost 3 documents per folder for the PE1s and close to 4 documents per folder for the researchers.
We put quotes around "overorganization" because we can wonder whether this phenomenon comes from a lack of organization of the data rather than from an overorganization properly speaking, as reported by [START_REF] Reffay | Echanger pour apprendre en ligne[END_REF]
Conclusion
Even if we did not refer to it explicitly, all of our work has Activity Theory and the different poles of Engeström's triangle in the background [START_REF] Engeström | Learning by expanding: An Activity-Theoretical Approach to Developmental Research[END_REF]. We thus find the concepts of subject, of goal (pooling and sharing resources for teaching, preparing the contest, using the platform to diffuse one's work), of community (PE2s, PE1s, students and researchers), of tool (BSCW), but also of division of labour and, to a lesser extent, of rules.
To be able to make this analysis, we could not simply count the traces left on the platform: those traces had to be gathered into hlsfs. The hlsfs make it possible to analyze more precisely what happens on a platform and to distinguish between the different activities which take place there and the groups which are at the origin of these activities. This enabled us to show that, depending on the groups and the goals they pursue, the activities are not the same.
For the PE2s, the objective is to pool and share resources for teaching their classes. This explains why there are more producers than in the other hlsfs. Nevertheless, we find that the activity in their case seems less intensive and more diffuse than in the other communities:
- more hlsfs;
- more organizers (leaders, moderators);
- fewer documents and readings per hlsf.
This phenomenon is further reinforced by the fact that BSCW is not the only tool they use to pool documents. In a survey carried out in July 2009 (Gérard, 2010), they indicate that they also use email and USB keys to share resources. There is also a great difference between the number of producers and the number of readers. In the survey cited above, almost all PE2s wish to continue pooling after their training. We may wonder about the viability of such pooling when 44% of the people do not contribute.
For the PE1s, the use of a CSCW platform may seem contradictory in the sense that the contest puts them in competition with each other. This can explain:
- why few of them take part in a hlsf,
- why very few of them produce,
- why the readers read so much.
A possible strategy, when one wants to pass a contest, is to take as much information as possible and to give as little as possible. The results found here point in this direction. This, by contrast, raises the question of the motivations of the producers. Indeed, they seem to work against their own interests, because they deliver information to people who do not give anything back. This unequal exchange should hurt them a priori but, in fact, this is not the case. Of the 165 PE1s who took the contest, only 25 succeeded (15%), while of the 18 PE1 producers shown in Table 3 who took the contest, 6 were successful (33%). Concerning this last point, it is however necessary to avoid drawing a cause-and-effect link: it is not because these 6 PE1s were producers on the platform that they had a better chance of succeeding at the contest; more probably, it is because these 6 PE1s were more motivated that they produced on the platform AND succeeded at the contest.
For the ENS category of students and researchers, the tool, BSCW, seems to be less a tool for collaborative or even cooperative work in the strict sense of [START_REF] Dillenbourg | he evolution of research on collaborative learning[END_REF] than a tool for disseminating information to colleagues. We cannot really speak of exchange when there is only one producer per hlsf. Another figure pointing in this direction is the very low number of notes. A note on BSCW allows, among other things, commenting on a document. There are fewer than 300 of them, all types of notes combined, in the space they share. Concerning the division of labour, the work of the researchers is more organized than in the other categories: more producers than moderators and more moderators than leaders. But for this category it is clear that the analysis can go deeper. The next one will have to split this category into two subcategories: the hlsfs shared by students and trainers, and the hlsfs shared by the researchers for their projects.
The analysis of the traces is thus improved by the fact that we gather them into hlsfs. It can still be refined by a multimodal analysis: social network analysis of the groups associated with the hlsfs, analysis of the titles of the folders, ... (Simon, 2009b). But nevertheless, if
Table 1: number of hlsfs

|                          | PE2 (2008-2009) | PE1 (2008-2009) | ENS (2004-2009) |
|--------------------------|-----------------|-----------------|-----------------|
| number of hlsfs          | 72              | 22              | 102             |
| number of users          | 175             | 165             | 325             |
| number of hlsfs per user | 0.41            | 0.13            | 0.31            |

Table 1 indicates the number of hlsfs per category of users, the number of users, and the number of hlsfs relative to the number of users.
Table 2: participation in one or more hlsfs

| number of hlsfs | PE2 (2008-2009) | %       | PE1 (2008-2009) | %       | ENS (2004-2009) | %       |
|-----------------|-----------------|---------|-----------------|---------|-----------------|---------|
| 0               | 12              | 6.86%   | 39              | 23.64%  | 3               | 0.92%   |
| >=1 and <10     | 121             | 69.14%  | 126             | 76.36%  | 311             | 95.69%  |
| >=10            | 42              | 24.00%  | 0               | 0.00%   | 11              | 3.38%   |
| TOTAL           | 175             | 100.00% | 165             | 100.00% | 325             | 100.00% |
Table 3: roles

| Roles     | Number of PE2 | % of PE2 members | Number of PE1 | % of PE1 members | Number of ENS | % of ENS members |
|-----------|---------------|------------------|---------------|------------------|---------------|------------------|
| leader    | 41            | 25%              | 17            | 13%              | 22            | 7%               |
| moderator | 49            | 30%              | 17            | 13%              | 58            | 18%              |
| producer  | 62            | 38%              | 18            | 14%              | 95            | 29%              |
| reader    | 133           | 82%              | 106           | 82%              | 316           | 98%              |
| inactive  | 30            | 18%              | 23            | 18%              | 6             | 2%               |
Table 4: activity of the hlsfs

|                                        | PE2 (2008-2009) | PE1 (2008-2009) | ENS (2004-2009) |
|----------------------------------------|-----------------|-----------------|-----------------|
| Number of hlsfs                        | 72              | 22              | 102             |
| Number of users                        | 175             | 165             | 325             |
| Average number of members per hlsf     | 18              | 15              | 12              |
| Average number of documents per hlsf   | 4               | 6               | 44              |
| Average number of documents per member | 0.24            | 0.44            | 3.55            |
| Average number of readings per hlsf    | 18              | 50              | 137             |
| Average number of readings per member  | 1               | 3               | 11              |
Table 5: number of hlsfs according to the number of producers who work in them

|                      | PE2 (2008-2009) nb hlsf | %       | PE1 (2008-2009) nb hlsf | %       | ENS (2004-2009) nb hlsf | %       |
|----------------------|-------------------------|---------|-------------------------|---------|-------------------------|---------|
| 0 producer           | 7                       | 9.72%   | 2                       | 9.09%   | 5                       | 4.90%   |
| 1 producer           | 27                      | 37.50%  | 14                      | 63.64%  | 62                      | 60.78%  |
| 2 producers and more | 38                      | 52.78%  | 6                       | 27.27%  | 35                      | 34.31%  |
| total                | 72                      | 100.00% | 22                      | 100.00% | 102                     | 100.00% |
As the hlsf is also a folder we integrate it in the average (B subfolder +1 hlsf). | 23,452 | [
"14819"
] | [
"232"
] |
01187215 | en | [
"info"
] | 2024/03/04 23:41:48 | 2012 | https://hal.univ-reunion.fr/hal-01187215/file/CSCW%20data%20mining%20Simon%20Ralambondriany.pdf | RALAMBONDRAINY Simon Jean
email: jean.simon@univ-reunion.fr
Henri Lim
Use of a CSCW platform by trainers and trainees Trace analysis: multimodal analysis vs data mining approach
The purpose of this paper is to show that data mining tools can help refine our understanding of what happens on a collaborative work platform. To this end, we compare a multimodal analysis of traces with an analysis of the same traces performed with data mining tools. The second analysis confirms the results of the first on some points, but it also gives more information and reveals some "dysfunctions" in the use of the platform. It therefore seems worthwhile to use such tools to support a formative evaluation of training devices.
Introduction
The Reunion Island teacher training school (IUFM) trains future teachers. Since 2005, trainers and trainees have used a platform for computer supported collaborative work (CSCW) as part of the training. Trainees and trainers create and share folders there, in which they deposit and retrieve various resources meant to help them teach their classes. The objectives for the trainers are: to deposit documents and to serve as a "collective memory", to improve lesson plans proposed by the trainees, to facilitate the preparation of practice analysis workshops, to pool and share within the framework of the dissertation, to help trainees online and at a distance during the training period when they are in charge of a class, and to validate the C2i2e certificate, which confirms that the trainee is able to use ICT in education. In this paper we compare two analyses of the 77 folders shared by 15 trainers and 277 trainees in 2006-2007.
Multimodal analysis of the folders
In 2009 (Simon, 2009), we proposed a first classification of those folders, described by 14 numeric variables: number of members, of trainees, of trainers, total number of producers, of trainer producers, of trainee producers, of trainer readers, of trainee readers, total number of documents, of documents produced by the trainers, of documents produced by the trainees, total number of readings, of readings made by the trainers, of readings made by the trainees. This was done through successive analyses of the data collected: Step 1, volume of exchanges (statistical analysis); Step 2, number of producers (social network analysis); Step 3, type of producer, trainer or trainee (social network analysis); Step 4, analysis of the titles of the folders (text analysis). This multimodal analysis produced 6 categories:
Category 1, 7 folders, corresponds to a set of test or error folders, because they do not contain any documents. Category 2, 21 folders, corresponds to documents put at the disposal of the group by one trainee producer. They contain lesson plans, sequences or dissertations put online and subject to the approval of the trainer. Category 3, 14 folders (one producer), and Category 4, 13 folders (many producers), are documents put at the disposal of the trainees, most often by the trainer: the titles of the documents refer to the discipline and/or the level but also to the group of trainees at the IUFM. Category 5, 20 folders, is the set of folders that must allow trainees to obtain the C2i2e. As mentioned, the C2i2e certifies that the student is able to use ICT in the classroom. Normally, all these folders should show the same activity: more or less the same average number of documents, of readings, and so on. Category 6, 2 folders, stands out by its intense activity. It corresponds to the folders that have been set up to accompany students during their training course and are intended to answer questions "just in time" and "just enough". It is observed that, from category 1 to category 6, the activity in the folders is increasing. We wanted to compare this approach to a more formal approach that uses data-mining methods.
Analysis of the folders by data-mining methods
In a first step, we carried out a quantitative analysis of the folders described by the numerical variables seen above, using data-mining methods (Correspondence Analysis, Ward and k-means algorithms, with the software SPAD). This first study revealed the salient values for each variable. Some had already been identified by the multimodal analysis, for example one trainer producer per folder, but others had not, in particular the size of the groups associated with the folders. After that, in order to carry out a qualitative analysis of the folders, we defined for each variable six modalities (intervals of values) based on the salient values discovered previously. Then we again applied a data-mining approach to the data recoded into those modalities. In this way we obtain 7 classes:
Classes 1 and 7 cover categories 1 and 6 of the multimodal analysis:
- Class 1, "empty folders", 7 folders, is characterized by a null value on all variables except members.
- Class 7, "accompaniment during the training course", 2 folders, is characterized by the highest modality on all variables.
Classes 2, 3, 4, 5 and 6 cover categories 2, 3, 4 and 5, but there is no one-to-one correspondence between classes and categories:
- Class 2, "individualized accompaniment", 11 folders, is characterized by the fact that there are only two members per folder, one trainer and one trainee, and one single producer, the trainee. These folders correspond to a piece of work requested by the trainer from the trainee or to a trainee's request for assistance to the trainer.
- Class 3, "weak cooperation", 12 folders, is disparate. As the folders mostly have several producers, they do not fall within the "dissemination" classes below, but if we talk about cooperation it should be mentioned that this is weak cooperation. It is characterized only by a homogeneous and relatively small number of documents deposited in the folders and, to a lesser extent, a small number of members. It contains some folders of category 5.
- Class 4, "dissemination to one group", 22 folders, includes on average one producer. Here the folders are used to disseminate information to the trainees of the group. In fact, as the average number of readings is relatively small, 18, we may question the effectiveness of this dissemination.
- Class 5, "dissemination to several groups", 4 folders, contains folders that are characterized by a high number of trainees and concern several groups of them. For 3 folders, there is only one producer. They are used to disseminate information.
- Class 6, "strong cooperation", 19 folders, consists mainly of ICT folders, but it does not cover the entire category 5. These folders are used to validate the trainees' C2i2e, which implies a certain number of exchanges and in particular a certain homogeneity in the number of readings by the trainers, in the number of trainee producers and in the number of documents that they deposit. This is a class where there is real production and reading and therefore strong cooperation, although definitely less than in Class 7.
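As a rough sketch of this two-step procedure (with scikit-learn standing in for SPAD, synthetic data in place of the real folder descriptions, and arbitrary recoding thresholds, since the salient values are not listed here):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

# One row per folder, 14 numeric variables (members, producers, documents, readings, ...);
# placeholder data standing in for the 77 real folders.
rng = np.random.default_rng(0)
X = rng.poisson(lam=5, size=(77, 14)).astype(float)

# Qualitative recoding: map each variable onto six ordered modalities.
# The bin edges below are arbitrary; in the study they come from the salient values.
edges = np.array([1, 2, 5, 10, 20])
X_coded = np.digitize(X, edges)

# Ward clustering suggests a partition, then k-means consolidates it.
ward = AgglomerativeClustering(n_clusters=7, linkage="ward").fit(X_coded)
centers = np.array([X_coded[ward.labels_ == k].mean(axis=0) for k in range(7)])
classes = KMeans(n_clusters=7, init=centers, n_init=1, random_state=0).fit_predict(X_coded)
print(np.bincount(classes))   # number of folders per class
```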
This shows that the analysis performed with data-mining methods provides a relevant vision in terms of training and allows distinguishing the different approaches adopted by trainers on the platform and the various uses they make of it. It has highlighted interesting patterns of behavior, such as class 2 and class 5, which were absent from the categorization obtained by the multimodal analysis. For class 2, we see that a CSCW platform can compete with the email usually used in this case. For class 5, the interest lies in its opposition to class 4 (dissemination to several groups vs. to one group). Dissemination to one group corresponds to a trainer giving documents to his trainees, documents that he has used or will use in his face-to-face course. Dissemination to several groups is a little different: it corresponds to documents deposited by a trainer for the trainees for possible use. In this case the folders are used like databanks.
The use of the data-mining tools has also shown that some folders that should have been in "strong cooperation", class 6, are in "weak cooperation", class 3, and thus do not satisfy the expected contract. This is the case for some folders used to validate the C2i2e.
Conclusion
From our point of view, one essential contribution of data mining tools is to "objectify" our observation. While the multimodal analysis is more of a top-down approach, the data-mining approach is more of a bottom-up approach. Romero & Ventura speak of a "discovery driven" approach, "in the sense that the hypothesis is automatically extracted from the data" (Romero & Ventura, 2007). The software proposes classes and the researcher must determine the relevance of these classes in his field. In doing so, he questions the discrepancy between what should be and what really is. Thus, data mining can be used as a tool for evaluating the work done on the platform. It can lead to improvements of the device and thus of the results. It can also serve as feedback to trainers to improve their practice and to administrators to improve the system (Romero & Ventura, 2007).
Bibliography
Romero, C., Ventura, S. (2007). Educational Data Mining: a Survey from 1995 to 2005. Expert Systems with Applications, 33(1), 135-146. Elsevier.
Simon, J. (2009). Three years of use of a CSCW platform by the preservice teachers and the trainers of the Reunion Island teacher training school. ICALT 09, Proceedings of the 2009 Ninth IEEE International Conference on Advanced Learning Technologies, pp. 637-641, Riga, 2009.
"13953",
"969196"
] | [
"54305",
"54305"
] |
01483482 | en | [
"math"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01483482/file/optimalrates.pdf | Scott Armstrong
email: scotta@cims.nyu.edu
Jessica Lin
email: jessica@math.wisc.edu
OPTIMAL QUANTITATIVE ESTIMATES IN STOCHASTIC HOMOGENIZATION FOR ELLIPTIC EQUATIONS IN NONDIVERGENCE FORM
Keywords: stochastic homogenization, correctors, error estimate. Mathematics Subject Classification: 35B27, 35B45, 35J15.
.
1. Introduction 1.1. Motivation and informal summary of results. We identify the optimal error estimates for the stochastic homogenization of solutions u ε solving:
(1.1) $\qquad -\operatorname{tr}\!\left( A\!\left(\tfrac{x}{\varepsilon}\right) D^2 u^{\varepsilon} \right) = 0 \ \text{ in } U, \qquad u^{\varepsilon}(x) = g(x) \ \text{ on } \partial U.$
Here U is a smooth bounded subset of R d with d ≥ 2, D 2 v is the Hessian of a function v and tr(M ) denotes the trace of a symmetric matrix M ∈ S d . The coefficient matrix A(⋅) is assumed to be a stationary random field, with given law P, and valued in the subset of symmetric matrices with eigenvalues belonging to the interval [λ, Λ] for given ellipticity constants 0 < λ ≤ Λ. The solutions u ε are understood in the viscosity sense [START_REF] Crandall | User's guide to viscosity solutions of second order partial differential equations[END_REF] although in most of the paper the equations can be interpreted classically. We assume that the probability measure P has a product-type structure and in particular possesses a finite range of dependence (see Section 1.3 for the precise statement). According to the general qualitative theory of stochastic homogenization developed in [START_REF] Papanicolaou | Diffusions with random coefficients[END_REF][START_REF] Yurinskiȋ | Averaging of second-order nondivergent equations with random coefficients[END_REF] for nondivergence form elliptic equations (see also the later work [START_REF] Caffarelli | Homogenization of fully nonlinear, uniformly elliptic and parabolic partial differential equations in stationary ergodic media[END_REF]), the solutions u ε of (1.1) converge uniformly as ε → 0, P-almost surely, to that of the homogenized problem (1.2) tr(AD 2 u) = 0 in U, u(x) = g(x) on ∂U, for some deterministic, uniformly elliptic matrix A. Our interest in this paper is to study the rate of convergence of u ε to u.
Error estimates quantifying the speed homogenization of u ε → u have been obtained in [START_REF] Yurinskiȋ | Averaging of second-order nondivergent equations with random coefficients[END_REF][START_REF] Yurinskiȋ | On the error of averaging of multidimensional diffusions[END_REF][START_REF] Caffarelli | Rates of convergence for the homogenization of fully nonlinear uniformly elliptic pde in random media[END_REF][START_REF] Armstrong | Quantitative stochastic homogenization of elliptic equations in nondivergence form[END_REF]. The most recent paper [START_REF] Armstrong | Quantitative stochastic homogenization of elliptic equations in nondivergence form[END_REF] was the first to give a general result stating that the typical size of the error is at most algebraic, that is, O(ε α ) for some positive exponent α. The earlier work [START_REF] Yurinskiȋ | On the error of averaging of multidimensional diffusions[END_REF] gave an algebraic error estimate in dimensions d > 4. The main purpose of this paper is to reveal explicitly the optimal exponent.
Our main quantitative estimates concern the size of certain stationary solutions called the approximate correctors. These are defined, for a fixed symmetric matrix $M \in \mathbb{S}^d$ and $\varepsilon > 0$, as the unique solution $\varphi^{\varepsilon} \in C(\mathbb{R}^d) \cap L^{\infty}(\mathbb{R}^d)$ of the equation

(1.3) $\qquad \varepsilon^2 \varphi^{\varepsilon} - \operatorname{tr}\!\big( A(x)\big(M + D^2\varphi^{\varepsilon}\big)\big) = 0 \ \text{ in } \mathbb{R}^d.$

Our main result states roughly that, for every $x \in \mathbb{R}^d$, $\varepsilon \in (0, \tfrac12]$ and $t > 0$,

(1.4) $\qquad \mathbb{P}\left[ \left| \varepsilon^2 \varphi^{\varepsilon}(x) - \operatorname{tr}\big(\overline{A}M\big) \right| \ge t\,\mathcal{E}(\varepsilon) \right] \lesssim \exp\!\left(-t^{1/2}\right),$

where the typical size $\mathcal{E}(\varepsilon)$ of the error depends only on the dimension $d$ in the following way:

(1.5) $\qquad \mathcal{E}(\varepsilon) := \begin{cases} \varepsilon\,|\log\varepsilon| & \text{in } d = 2, \\ \varepsilon^{3/2} & \text{in } d = 3, \\ \varepsilon^{2}\,|\log\varepsilon|^{1/2} & \text{in } d = 4, \\ \varepsilon^{2} & \text{in } d > 4. \end{cases}$
Note that the rescaling $\tilde\varphi^{\varepsilon}(x) := \varepsilon^2 \varphi^{\varepsilon}\!\left(\tfrac{x}{\varepsilon}\right)$ allows us to write (1.3), in rescaled form, as
$$\tilde\varphi^{\varepsilon} - \operatorname{tr}\!\left( A\!\left(\tfrac{x}{\varepsilon}\right)\big(M + D^2\tilde\varphi^{\varepsilon}\big)\right) = 0 \ \text{ in } \mathbb{R}^d.$$
This is a well-posed problem (it has a unique bounded solution) on $\mathbb{R}^d$ which homogenizes to the equation
$$\tilde\varphi - \operatorname{tr}\!\left( \overline{A}\big(M + D^2\tilde\varphi\big)\right) = 0 \ \text{ in } \mathbb{R}^d.$$
The solution of the latter is obviously the constant function $\tilde\varphi \equiv \operatorname{tr}\big(\overline{A}M\big)$, and so the limit
(1.6) $\qquad \varepsilon^2 \varphi^{\varepsilon}(x) = \tilde\varphi^{\varepsilon}(\varepsilon x) \to \operatorname{tr}\big(\overline{A}M\big)$
is a qualitative homogenization statement. Therefore, the estimate (1.4) is a quantitative homogenization result for this particular problem which asserts that the speed of homogenization is $O(\mathcal{E}(\varepsilon))$. Moreover, it is well-known that estimating the speed of homogenization for the Dirichlet problem is essentially equivalent to obtaining estimates on the approximate correctors (see [START_REF] Evans | Periodic homogenisation of certain fully nonlinear partial differential equations[END_REF][START_REF] Avellaneda | Compactness methods in the theory of homogenization. II. Equations in nondivergence form[END_REF][START_REF] Caffarelli | Rates of convergence for the homogenization of fully nonlinear uniformly elliptic pde in random media[END_REF][START_REF] Armstrong | Quantitative stochastic homogenization of elliptic equations in nondivergence form[END_REF]). Indeed, the estimate (1.4) can be transferred without any loss of exponent to an estimate on the speed of homogenization of the Dirichlet problem. One can see this from the standard two-scale ansatz $u^{\varepsilon}(x) \approx u(x) + \varepsilon^2 \varphi^{\varepsilon}\!\left(\tfrac{x}{\varepsilon}\right)$, which is easy to formalize and quantify in the linear setting since the homogenized solution $u$ is completely smooth. We remark that since (1.4) is an estimate at a single point $x$, an estimate in $L^{\infty}$ for the Dirichlet problem will necessarily have an additional logarithmic factor $|\log\varepsilon|^{q}$ for some $q(d) < \infty$. Since the argument is completely deterministic and essentially the same as in the case of periodic coefficients, we do not give the details here and instead focus on the proof of (1.4).
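As a quick check of the rescaling just used (a routine computation which the text does not spell out), note that the chain rule gives

```latex
\[
D^2\tilde\varphi^{\varepsilon}(x)
   = \varepsilon^{2}\cdot\varepsilon^{-2}\,(D^2\varphi^{\varepsilon})\!\left(\tfrac{x}{\varepsilon}\right)
   = (D^2\varphi^{\varepsilon})\!\left(\tfrac{x}{\varepsilon}\right),
\]
\[
\tilde\varphi^{\varepsilon}(x)
   - \operatorname{tr}\!\left( A\!\left(\tfrac{x}{\varepsilon}\right)\!\left(M + D^2\tilde\varphi^{\varepsilon}(x)\right)\right)
   = \varepsilon^{2}\varphi^{\varepsilon}\!\left(\tfrac{x}{\varepsilon}\right)
   - \operatorname{tr}\!\left( A\!\left(\tfrac{x}{\varepsilon}\right)\!\left(M + (D^2\varphi^{\varepsilon})\!\left(\tfrac{x}{\varepsilon}\right)\right)\right)
   = 0,
\]
```

the last equality being (1.3) evaluated at the point $x/\varepsilon$.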
The estimate (1.4) can also be expressed in terms of an estimate on the subquadratic growth of the correctors $\varphi$, which are the solutions, for given $M \in \mathbb{S}^d$, of the problem

(1.7) $\qquad \begin{cases} -\operatorname{tr}\!\big( A(x)\big(M + D^2\varphi\big)\big) = -\operatorname{tr}\big(\overline{A}M\big) & \text{in } \mathbb{R}^d, \\ \displaystyle\limsup_{R\to\infty}\, R^{-2} \operatorname{osc}_{B_R} \varphi = 0. \end{cases}$

Recall that while $D^2\varphi$ exists as a stationary random field, $\varphi$ itself may not be stationary. The estimate (1.4) implies that, for every $x \in \mathbb{R}^d$,
$$\mathbb{P}\left[ \left|\varphi(x) - \varphi(0)\right| \ge t\,|x|^2\,\mathcal{E}\!\left(|x|^{-1}\right) \right] \lesssim \exp\!\left(-t^{1/2}\right).$$
Notice that in dimensions $d > 4$, this implies that the typical size of $|\varphi(x) - \varphi(0)|$ stays bounded as $|x| \to \infty$, suggesting that $\varphi$ is a locally bounded, stationary random field. In Section 7, we prove that this is so: the correctors are locally bounded and stationary in dimensions $d > 4$.
The above estimates are optimal in the size of the scaling, that is, the function $\mathcal{E}(\varepsilon)$ cannot be improved in any dimension. This can be observed by considering the simple operator $-a(x)\Delta$ where $a(x)$ is a scalar field with a random checkerboard structure. Fix a smooth (deterministic) function $f \in C_c^{\infty}(\mathbb{R}^d)$ and consider the problem

(1.8) $\qquad -a\!\left(\tfrac{x}{\varepsilon}\right)\Delta u^{\varepsilon} = f(x) \ \text{ in } \mathbb{R}^d.$

We expect this to homogenize to a problem of the form
$$-\overline{a}\,\Delta u = f(x) \ \text{ in } \mathbb{R}^d.$$
In dimension d = 2 (or in higher dimensions, if we wish) we can also consider the Dirichlet problem with zero boundary conditions in a ball much larger than the support of f . We can then move the randomness to the right side of the equation to have -∆u ε = a -1 x ε f (x) in R d and then write a formula for the value of the solution u ε at the origin as a convolution of the random right side against the Green's kernel for the Laplacian operator. The size of the fluctuations of this convolution is easy to determine, since it is essentially a sum (actually a convolution) of i.i.d. random variables, and it turns out to be precisely of order E(ε) (with a prefactor constant depending on the variance of a(⋅) itself). For instance, we show roughly that var [u ε (0)] ≃ E(ε) 2 . This computation also serves as a motivation for our proof of (1.4), although handling the general case of a random diffusion matrix is of course much more difficult than that of (1.8), in which the randomness is scalar and can be split from the operator. 1.2. Method of proof and comparison to previous works. The arguments in this paper are inspired by the methods introduced in the divergence form setting by Gloria and Otto [START_REF] Gloria | An optimal variance estimate in stochastic homogenization of discrete elliptic equations[END_REF][START_REF] Gloria | An optimal error estimate in stochastic homogenization of discrete elliptic equations[END_REF] and Gloria, Neukamm and Otto [START_REF] Gloria | Quantification of ergodicity in stochastic homogenization: optimal bounds via spectral gap on Glauber dynamics[END_REF] (see also Mourrat [21]). The authors combined certain concentration inequalities and analytic arguments to prove optimal quantitative estimates in stochastic homogenization for linear elliptic equations of the form -∇ ⋅ (A(x)∇u) = 0.
The concentration inequalities provided a convenient mechanism for transferring quantitative ergodic information from the coefficient field to the solutions themselves, an idea which goes back to an unpublished paper of Naddaf and Spencer [START_REF] Naddaf | Estimates on the variance of some homogenization problems[END_REF]. Most of these works rely on some version of the Efron-Stein inequality [START_REF] Efron | The jackknife estimate of variance[END_REF] or the logarithmic Sobolev inequality to control the fluctuations of the solution by estimates on the spatial derivatives of the Green's functions and the solution.
A variant of these concentration inequalities plays an analogous role in this paper (see Proposition 2.2). There are then two main analytic ingredients we need to conclude: first, an estimate on the decay of the Green's function for the heterogenous operator (note that, in contrast to the divergence form case, there is no useful deterministic bound on the decay of the Green's function); and (ii) a higher-order regularity theory asserting that, with high P-probability, solutions of our random equation are more regular than the deterministic regularity theory would predict. We prove each of these estimates by using the sub-optimal (but algebraic) quantitative homogenization result of [START_REF] Armstrong | Quantitative stochastic homogenization of elliptic equations in nondivergence form[END_REF]: we show that, since solutions are close to those of the homogenized equation on large scales, we may "borrow" the estimates from the constant-coefficient equation. This is an idea that was introduced in the context of stochastic homogenization for divergence form equations by the first author and Smart [START_REF] Armstrong | Quantitative stochastic homogenization of convex integral functionals[END_REF] (see also [START_REF] Gloria | A regularity theory for random elliptic operators[END_REF][START_REF] Armstrong | Lipschitz regularity for elliptic equations with random coefficients[END_REF]) and goes back to work of Avellaneda and Lin [START_REF] Avellaneda | Compactness methods in the theory of homogenization[END_REF][START_REF] Avellaneda | Compactness methods in the theory of homogenization. II. Equations in nondivergence form[END_REF] in the case of periodic coefficients.
We remark that, while the scaling of the error E(ε) is optimal, the estimate (1.4) is almost certainly sub-optimal in terms of stochastic integrability. This seems to be one limitation of an approach relying on (nonlinear) concentration inequalities, which so far yield only estimates with exponential moments [START_REF] Gloria | A regularity theory for random elliptic operators[END_REF] rather than Gaussian moments [START_REF] Armstrong | Mesoscopic higher regularity and subadditivity in elliptic homogenization[END_REF][START_REF] Armstrong | The additive structure of elliptic homogenization[END_REF][START_REF] Gloria | The corrector in stochastic homogenization: Near-optimal rates with optimal stochastic integrability[END_REF]. Recently, new approaches based on renormalization-type arguments (rather than nonlinear concentration inequalities) have been introduced in the divergence form setting [START_REF] Armstrong | Mesoscopic higher regularity and subadditivity in elliptic homogenization[END_REF][START_REF] Armstrong | The additive structure of elliptic homogenization[END_REF][START_REF] Gloria | The corrector in stochastic homogenization: Near-optimal rates with optimal stochastic integrability[END_REF]. It was shown in [START_REF] Armstrong | The additive structure of elliptic homogenization[END_REF] that this approach yields estimates at the critical scale which are also optimal in stochastic integrability. It would be very interesting to see whether such arguments could be developed in the nondivergence form case. (1.10) ∃σ ∈ (0, 1] such that sup
${}_{x,y\in\mathbb{R}^d,\ x\neq y}\ \dfrac{\left|A(x)-A(y)\right|}{\left|x-y\right|^{\sigma}} < \infty.$
We define the set
(1.11) $\qquad \Omega := \left\{ A \,:\, A \text{ satisfies (1.9) and (1.10)} \right\}.$
Notice that the assumption (1.10) is a qualitative one. We purposefully do not specify any quantitative information regarding the size of the supremum in (1.10), because none of our estimates depend on this value. We make this assumption in order to ensure that a comparison principle holds for our equations.
We next introduce some σ-algebras on Ω. For every Borel subset $U \subseteq \mathbb{R}^d$ we define $\mathcal{F}(U)$ to be the σ-algebra on Ω generated by the behavior of $A$ in $U$, that is,
(1.12) $\qquad \mathcal{F}(U) := \text{σ-algebra on } \Omega \text{ generated by the family of random variables } \{ A \mapsto A(x) \,:\, x \in U \}.$
We define $\mathcal{F} := \mathcal{F}(\mathbb{R}^d)$.
Translation action on (Ω, F).
The translation group action on R d naturally pushes forward to Ω. We denote this action by {T y } y∈R d , with T y ∶ Ω → Ω given by (T y A)(x) ∶= A(x + y). The map T y is clearly F-measurable, and is extended to F by setting, for each E ∈ F, T y E ∶= {T y A ∶ A ∈ E} . We also extend the translation action to F-measurable random elements X ∶ Ω → S on Ω, with S an arbitrary set, by defining (T y X)(F ) ∶= X(T y F ).
We say that a random field f ∶ Z d × Ω → S is Z d -stationary provided that f (y+z, A) = f (y, T z A) for every y, z ∈ Z d and A ∈ Ω. Note that an F-measurable random element X ∶ Ω → S may be viewed as a Z d -stationary random field via the identification with X(z, A) ∶= X(T z A).
(P1) P is $\mathbb{Z}^d$-stationary: for every $z \in \mathbb{Z}^d$ and $E \in \mathcal{F}$, $\ \mathbb{P}[E] = \mathbb{P}[T_z E]$.
(P2) P has a range of dependence at most : that is, for all Borel subsets U, V of R d such that dist(U, V ) ≥ , F(U ) and F(V ) are P-independent.
Some of our main results rely on concentration inequalities (stated in Section 2.2) which require a stronger independence assumption than finite range of dependence (P2) which was used in [START_REF] Armstrong | Quantitative stochastic homogenization of elliptic equations in nondivergence form[END_REF]. Namely, we require that P is the pushforward of another probability measure which has a product space structure. We consider a probability space (Ω 0 , F 0 , P 0 ) and denote
(1.13) $\qquad (\Omega_*, \mathcal{F}_*, \mathbb{P}_*) := \big( \Omega_0^{\mathbb{Z}^d},\, \mathcal{F}_0^{\mathbb{Z}^d},\, \mathbb{P}_0^{\mathbb{Z}^d} \big).$
We regard an element ω ∈ Ω * as a map ω ∶ Z d → Ω 0 . For each Γ ⊆ Z d , we denote by F * (Γ) the σ-algebra generated by the family {ω ↦ ω(z) ∶ z ∈ Γ} of maps from Ω * to Ω 0 . We denote the expectation with respect to P * by E * . Abusing notation slightly, we also denote the natural
Z d -translation action on Ω * by T z , that is, T z ∶ Ω * → Ω * is defined by (T z ω)(y) ∶= ω(y + z).
We assume that there exists an (F * , F)-measurable map π ∶ Ω * → Ω which satisfies the following:
(1.14) $\qquad \mathbb{P}[E] = \mathbb{P}_*\big[\pi^{-1}(E)\big] \quad \text{for every } E \in \mathcal{F},$
(1.15) π ○ T z = T z ○ π for every z ∈ Z d ,
with the translation operator interpreted on each side in the obvious way, and
(1.16) for every Borel subset $U \subseteq \mathbb{R}^d$ and $E \in \mathcal{F}(U)$,
$\qquad \pi^{-1}(E) \in \mathcal{F}_*\big( \big\{ z \in \mathbb{Z}^d \,:\, \operatorname{dist}(z, U) \le 2 \big\} \big).$
We summarize the above conditions as:
(P3) There exists a probability space (Ω * , F * , P * ) of the form (1.13) and a map
$\pi : \Omega_* \to \Omega,$
which is (F, F * )-measurable and satisfies (1.14), (1.15) and (1.16).
Note that, in view of the product structure, the conditions (1.14) and (1.16) imply (P2) and (1.14) and (1.15) imply (P1). Thus (P1) and (P2) are superseded by (P3).
1.4. Statement of main result. We next present the main result concerning quantitative estimates of the approximate correctors.
Theorem 1.1. Assume that P is a probability measure on $(\Omega, \mathcal{F})$ satisfying (P3). Let $\mathcal{E}(\varepsilon)$ be defined by (1.5). Then there exist δ(d, λ, Λ) > 0 and C(d, λ, Λ, ) ≥ 1 such that, for every $\varepsilon \in (0, \tfrac12]$,

(1.17) $\qquad \mathbb{E}\left[ \exp\!\left( \left( \frac{1}{\mathcal{E}(\varepsilon)} \sup_{x \in B_{\sqrt d}(0)} \left| \varepsilon^2 \varphi^{\varepsilon}(x) - \operatorname{tr}\big(\overline{A}M\big) \right| \right)^{\!\frac12 + \delta} \right) \right] \le C.$
The proof of Theorem 1.1 is completed in Section 6.
1.5. Outline of the paper. The rest of the paper is organized as follows. In Section 2, we introduce the approximate correctors and the modified Green's functions and give some preliminary results which are needed for our main arguments. In Section 3, we establish a $C^{1,1}$ regularity theory down to the unit scale for solutions. Section 4 contains estimates on the modified Green's functions, which roughly state that these functions have the same rate of decay at infinity as the Green's function for the homogenized equation (i.e., the Laplacian).
In this section, we also mention estimates on the invariant measure associated to the linear operator in (1.1). In Section 5, we use results from the previous sections to measure the "sensitivity" of solutions of the approximate corrector equation with respect to the coefficients. Finally, in Section 6, we obtain the optimal rates of decay for the approximate corrector, proving our main result, and, in Section 7, demonstrate the existence of stationary correctors in dimensions d > 4. In the appendix, we give a proof of the concentration inequality we use in our analysis, which is a stretched exponential version of the Efron-Stein inequality.
Preliminaries
2.1. Approximate correctors and modified Green's functions. For each given $M \in \mathbb{S}^d$ and $\varepsilon > 0$, the approximate corrector equation is
$$\varepsilon^2 \varphi^{\varepsilon} - \operatorname{tr}\!\big( A(x)\big(M + D^2\varphi^{\varepsilon}\big)\big) = 0 \ \text{ in } \mathbb{R}^d.$$
The existence of a unique solution $\varphi^{\varepsilon}$ belonging to $C(\mathbb{R}^d) \cap L^{\infty}(\mathbb{R}^d)$ is obtained from the usual Perron method and the comparison principle. We also introduce, for each $\varepsilon \in (0,1]$ and $y \in \mathbb{R}^d$, the "modified Green's function" $G_{\varepsilon}(\cdot, y; A) = G_{\varepsilon}(\cdot, y)$, which is the unique solution of the equation

(2.1) $\qquad \varepsilon^2 G_{\varepsilon} - \operatorname{tr}\!\big( A(x)\, D^2 G_{\varepsilon} \big) = \chi_{B(y)} \ \text{ in } \mathbb{R}^d.$

Compared to the usual Green's function, we have smeared out the singularity and added the zeroth-order "massive" term.
To see that (2.1) is well-posed, we build the solution $G_{\varepsilon}$ by compactly supported approximations. We first solve the equation in the ball $B_R$ (for R > ) with zero Dirichlet boundary data to obtain a function $G_{\varepsilon,R}(\cdot, y)$. By the maximum principle, $G_{\varepsilon,R}(\cdot, y)$ is increasing in $R$ and we may therefore define its limit as $R \to \infty$ to be $G_{\varepsilon}(\cdot, y)$. We show in the following lemma that it is bounded and decays at infinity, and from this it is immediate that it satisfies (2.1) in $\mathbb{R}^d$. The lemma is a simple deterministic estimate which is useful only in the regime $|x - y| \gtrsim \varepsilon^{-1}|\log\varepsilon|$ and follows from the fact (as demonstrated by a simple test function) that the interaction between the terms on the left side of (2.1) gives the equation a characteristic length scale of $\varepsilon^{-1}$.

Lemma 2.1. There exist $a(d, \lambda, \Lambda) > 0$ and $C(d, \lambda, \Lambda) > 1$ such that, for every $A \in \Omega$, $x, y \in \mathbb{R}^d$ and $\varepsilon \in (0,1]$,

(2.2) $\qquad G_{\varepsilon}(x, y) \le C \varepsilon^{-2} \exp\!\big( -\varepsilon a |x - y| \big).$
Proof. Without loss of generality, let $y = 0$. Let $\phi(x) := C \exp(-\varepsilon a |x|)$ for $C, a > 0$ to be selected below. Compute, for every $x \neq 0$,
$$D^2\phi(x) = \phi(x)\left[ -\varepsilon a\,\frac{1}{|x|}\left( I - \frac{x \otimes x}{|x|^2} \right) + \varepsilon^2 a^2\,\frac{x \otimes x}{|x|^2} \right].$$
Thus for any $A \in \Omega$,
$$-\operatorname{tr}\!\big( A(x)\, D^2\phi(x) \big) \ge \phi(x)\left[ \frac{1}{|x|}\,\lambda \varepsilon a (d-1) - \Lambda \varepsilon^2 a^2 \right] \ \text{ in } \mathbb{R}^d \setminus \{0\}.$$
Set $a := \frac{1}{\sqrt{2\Lambda}}$. Then
(2.3) $\qquad \varepsilon^2\phi - \operatorname{tr}\!\big( A(x)\, D^2\phi(x) \big) \ge \varepsilon^2\phi(x)\big(1 - \Lambda a^2\big) + \frac{\phi(x)}{|x|}\,\lambda \varepsilon a (d-1) \ge 0 \ \text{ in } \mathbb{R}^d \setminus \{0\}.$
Take C := exp(a ) so that φ(x) ≥ 1 for |x| ≤ . Define $\tilde\phi := \varepsilon^{-2}\min\{1, \phi\}$. Then $\tilde\phi$ satisfies the inequality
$$\varepsilon^2\tilde\phi(x) - \operatorname{tr}\!\big( A(x)\, D^2\tilde\phi(x) \big) \ge \chi_{B} \ \text{ in } \mathbb{R}^d.$$
As $\tilde\phi > 0$ on $\partial B_R$, the comparison principle yields that $\tilde\phi \ge G_{\varepsilon,R}(\cdot, 0)$ for every $R > 1$. Letting $R \to \infty$ yields the lemma.
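Purely as a numerical illustration of the approximate corrector equation introduced at the start of this subsection (and not part of the arguments of this paper), one can discretize it on a periodic box with an artificial scalar checkerboard coefficient; the grid size, the value of ε and the coefficient law below are arbitrary choices.

```python
import numpy as np

# Finite-difference sketch of  eps^2 * phi - a(x) * (tr(M) + Laplacian(phi)) = 0
# with M = identity in d = 2, on a periodic grid of unit cells (illustrative only).
rng = np.random.default_rng(3)
N, eps, m = 128, 0.3, 2.0                        # grid size, zeroth-order parameter, tr(M)
a = rng.uniform(1.0, 2.0, size=(N, N))           # i.i.d. scalar coefficient per unit cell

phi = np.zeros((N, N))
for _ in range(4000):                            # Jacobi sweeps (the iteration is a contraction)
    nbrs = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
            np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi = (a * m + a * nbrs) / (eps ** 2 + 4.0 * a)

print("eps^2 * phi at a point :", eps ** 2 * phi[0, 0])
print("eps^2 * mean(phi)      :", eps ** 2 * phi.mean())   # both approximate tr(A_bar M)
```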
2.2. Spectral gap inequalities. In this subsection, we state the probabilistic tool we use to obtain the quantitative estimates for the modified correctors. The result here is applied precisely once in the paper, in Section 6, and relies on the stronger independence condition (P3). It is a variation of the Efron-Stein ("spectral gap") inequality; a proof is given in Appendix A. Proposition 2.2. Fix β ∈ (0, 2). Let X be a random variable on (Ω * , F * , P * ) and set
$X_z' := \mathbb{E}_*\big[ X \,\big|\, \mathcal{F}_*\big(\mathbb{Z}^d \setminus \{z\}\big) \big] \qquad \text{and} \qquad \mathbb{V}_*[X] := \sum_{z \in \mathbb{Z}^d} \big( X - X_z' \big)^2.$
Then there exists $C(\beta) \ge 1$ such that

(2.4) $\qquad \mathbb{E}_*\Big[ \exp\!\big( \big| X - \mathbb{E}_*[X] \big|^{\beta} \big) \Big] \le C \left( \mathbb{E}_*\Big[ \exp\!\Big( \big( C\,\mathbb{V}_*[X] \big)^{\frac{\beta}{2-\beta}} \Big) \Big] \right)^{\frac{2-\beta}{\beta}}.$
The conditional expectation X ′ z can be identified by resampling the random environment near the point z (this is explained in depth in Section 5). Therefore, the quantity (X -X ′ z ) measures changes in X with respect to changes in the environment near z. Following [START_REF] Gloria | Quantification of ergodicity in stochastic homogenization: optimal bounds via spectral gap on Glauber dynamics[END_REF], we refer to (X -X ′ z ) as the vertical derivative of X at the point z.
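As a toy illustration of these vertical derivatives (not taken from the paper), take X to be the average of N i.i.d. uniform variables: the conditional expectations X'_z are then explicit, and one can compare the fluctuations of X with V_*[X].

```python
import numpy as np

rng = np.random.default_rng(1)
N, samples = 64, 20000
omega = rng.uniform(size=(samples, N))

X = omega.mean(axis=1)
X_prime = (omega.sum(axis=1, keepdims=True) - omega + 0.5) / N   # resample site z by its mean
V = ((X[:, None] - X_prime) ** 2).sum(axis=1)                    # sum of squared vertical derivatives

print("var(X)         :", X.var())
print("mean of V_*[X] :", V.mean())   # Efron-Stein: var(X) <= E[V_*[X]] (equality for averages)
```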
2.3. Suboptimal error estimate. We recall the main result of [START_REF] Armstrong | Quantitative stochastic homogenization of elliptic equations in nondivergence form[END_REF], which is the basis for much of the analysis in this paper. We reformulate their result slightly, to put it in a form which is convenient for our use here.
Proposition 2.3. Fix $\sigma \in (0,1]$ and $s \in (0,d)$. Let P satisfy (P1) and (P2). There exists an exponent α(σ, s, d, λ, Λ, ) > 0 and a nonnegative random variable $\mathcal{X}$ on $(\Omega, \mathcal{F})$, satisfying
(2.5) $\qquad \mathbb{E}\big[ \exp(\mathcal{X}) \big] \le C(\sigma, s, d, \Lambda, \lambda, ) < \infty$
and such that the following holds: for every $R \ge 1$, $f \in C^{0,\sigma}(B_R)$, $g \in C^{0,\sigma}(\partial B_R)$ and solutions $u, v \in C(B_R)$ of the Dirichlet problems
(2.6) $\qquad -\operatorname{tr}\!\big( A(x)\, D^2 u \big) = f(x) \ \text{ in } B_R, \qquad u = g \ \text{ on } \partial B_R,$
and
(2.7) $\qquad -\operatorname{tr}\!\big( \overline{A}\, D^2 v \big) = f(x) \ \text{ in } B_R, \qquad v = g \ \text{ on } \partial B_R,$
we have, for a constant C(σ, s, d, Λ, λ, ) ≥ 1, the estimate
$$R^{-2} \sup_{B_R} |u(x) - v(x)| \le C R^{-\alpha}\big( 1 + \mathcal{X} R^{-s} \big)\, \Gamma_{R,\sigma}(f, g),$$
where
$$\Gamma_{R,\sigma}(f, g) := \sup_{B_R} |f| + R^{\sigma}\, [f]_{C^{0,\sigma}(B_R)} + R^{-2} \operatorname{osc}_{\partial B_R} g + R^{-2+\sigma}\, [g]_{C^{0,\sigma}(\partial B_R)}.$$
We note that, in view of the comparison principle, Proposition 2.3 also gives one-sided estimates for subsolutions and supersolutions.
Uniform C 1,1 Estimates
In this section, we present a large-scale (R ≫ 1) regularity theory for solutions of the equation
(3.1) $\qquad -\operatorname{tr}\!\big( A(x)\, D^2 u \big) = f(x) \ \text{ in } B_R.$
Recall that according to the Krylov-Safonov Hölder regularity theory [START_REF] Caffarelli | Fully nonlinear elliptic equations[END_REF], there exists an exponent $\sigma(d, \lambda, \Lambda) \in (0,1)$ such that $u$ belongs to $C^{0,\sigma}(B_{R/2})$ with the estimate

(3.2) $\qquad R^{\sigma-2}\,[u]_{C^{0,\sigma}(B_{R/2})} \le C\left( R^{-2} \operatorname{osc}_{B_R} u + \left( \fint_{B_R} |f(x)|^d\,dx \right)^{1/d} \right).$

This is the best possible estimate, independent of the size of $R$, for solutions of general equations of the form (3.1), even if the coefficients are smooth. What we show in this section is that, due to statistical effects, solutions of our random equation are typically much more regular, at least on scales larger than , the length scale of the correlations of the coefficient field. Indeed, we will show that solutions, with high probability, are essentially $C^{1,1}$ on scales larger than the unit scale.
The arguments here are motivated by a similar C 0,1 regularity theory for equations in divergence form developed in [START_REF] Armstrong | Quantitative stochastic homogenization of convex integral functionals[END_REF] and should be seen as a nondivergence analogue of those estimates. In fact, the arguments here are almost identical to those of [START_REF] Armstrong | Quantitative stochastic homogenization of convex integral functionals[END_REF]. They can also be seen as generalizations to the random setting of the results of Avellaneda and Lin for periodic coefficients, who proved uniform C 0,1 estimates for divergence form equations [START_REF] Avellaneda | Compactness methods in the theory of homogenization[END_REF] and then C 1,1 estimates for the nondivergence case [START_REF] Avellaneda | Compactness methods in the theory of homogenization. II. Equations in nondivergence form[END_REF]. Note that it is natural to expect estimates in nondivergence form to be one derivative better than those in divergence form (e.g., the Schauder estimates). Note that the "C 1,1 estimate" proved in [START_REF] Gloria | A regularity theory for random elliptic operators[END_REF] has a different statement which involves correctors; the statement we prove here would be simply false for divergence form equations.
The rough idea, similar to the proof of the classical Schauder estimates, is that, due to homogenization, large-scale solutions of (3.1) can be well-approximated by those of the homogenized equation. Since the latter are harmonic, up to a change of variables, they possess good estimates. If the homogenization is fast enough (for this we need the results of [START_REF] Armstrong | Quantitative stochastic homogenization of elliptic equations in nondivergence form[END_REF], namely Proposition 2.3), then the better regularity of the homogenized equation is inherited by the heterogeneous equation. This is a quantitative version of the idea introduced in the context of periodic homogenization by Avellaneda and Lin [START_REF] Avellaneda | Compactness methods in the theory of homogenization[END_REF][START_REF] Avellaneda | Compactness methods in the theory of homogenization. II. Equations in nondivergence form[END_REF].
Throughout this section, we let $\mathcal{Q}$ be the set of polynomials of degree at most two and let $\mathcal{L}$ denote the set of affine functions. For $\sigma \in (0,1]$ and $U \subseteq \mathbb{R}^d$, we denote the usual Hölder seminorm by $[\,\cdot\,]_{C^{0,\sigma}(U)}$.

Theorem 3.1 ($C^{1,1}$ regularity). Assume (P1) and (P2). Fix $s \in (0,d)$ and $\sigma \in (0,1]$. There exists an $\mathcal{F}$-measurable random variable $\mathcal{X}$ and a constant C(s, σ, d, λ, Λ, ) ≥ 1 satisfying
(3.3) $\qquad \mathbb{E}\big[ \exp\big( \mathcal{X}^s \big) \big] \le C < \infty$
such that the following holds: for every $M \in \mathbb{S}^d$, $R \ge 2\mathcal{X}$, $u \in C(B_R)$ and $f \in C^{0,\sigma}(B_R)$ satisfying
$$-\operatorname{tr}\!\big( A(x)\big(M + D^2 u\big) \big) = f(x) \ \text{ in } B_R$$
and every $r \in \big[ \mathcal{X}, \tfrac12 R \big]$, we have the estimate

(3.4) $\qquad \frac{1}{r^2} \inf_{l \in \mathcal{L}} \sup_{B_r} |u - l| \le C\left( \big| f(0) + \operatorname{tr}\big(\overline{A}M\big) \big| + R^{\sigma}\, [f]_{C^{0,\sigma}(B_R)} + \frac{1}{R^2} \inf_{l \in \mathcal{L}} \sup_{B_R} |u - l| \right).$

It is well-known that any $L^{\infty}$ function which can be well-approximated on all scales by smooth functions must be smooth. The proof of Theorem 3.1 is based on a similar idea: any function $u$ which can be well-approximated on all scales above a fixed scale $\mathcal{X}$, which is of the same order as the microscopic scale, by functions $w$ enjoying an improvement of quadratic approximation property must itself have this property. This is formalized in the next proposition, the statement and proof of which are similar to those of [4, Lemma 5.1].

Proposition 3.2. For each $r > 0$ and $\theta \in (0, \tfrac12)$, let $\mathcal{A}(r, \theta)$ denote the subset of $L^{\infty}(B_r)$ consisting of $w$ which satisfy
$$\frac{1}{(\theta r)^2} \inf_{q \in \mathcal{Q}} \sup_{x \in B_{\theta r}} |w(x) - q(x)| \le \frac12 \cdot \frac{1}{r^2} \inf_{q \in \mathcal{Q}} \sup_{x \in B_r} |w(x) - q(x)|.$$
Assume that $\alpha, \gamma, K, L > 0$, $1 \le 4r_0 \le R$ and $u \in L^{\infty}(B_R)$ are such that, for every $r \in [r_0, R/2]$, there exists $v \in \mathcal{A}(r, \theta)$ such that
(3.5) $\qquad \frac{1}{r^2} \sup_{x \in B_r} |u(x) - v(x)| \le r^{-\alpha}\left( K + \frac{1}{r^2} \inf_{l \in \mathcal{L}} \sup_{x \in B_{2r}} |u(x) - l(x)| \right) + L r^{\gamma}.$
Then there exist $\beta(\theta) \in (0,1]$ and $C(\alpha, \theta, \gamma) \ge 1$ such that, for every $s \in [r_0, R/2]$,
(3.6) $\qquad \frac{1}{s^2} \inf_{l \in \mathcal{L}} \sup_{x \in B_s} |u(x) - l(x)| \le C\left( K + L R^{\gamma} + \frac{1}{R^2} \inf_{l \in \mathcal{L}} \sup_{x \in B_R} |u(x) - l(x)| \right)$
and
(3.7) $\qquad \frac{1}{s^2} \inf_{q \in \mathcal{Q}} \sup_{x \in B_s} |u(x) - q(x)| \le C \left( \frac{s}{R} \right)^{\!\beta} \left( L R^{\gamma} + \frac{1}{R^2} \inf_{q \in \mathcal{Q}} \sup_{x \in B_R} |u(x) - q(x)| \right) + C s^{-\alpha} \left( K + L R^{\gamma} + \frac{1}{R^2} \inf_{l \in \mathcal{L}} \sup_{x \in B_R} |u(x) - l(x)| \right).$
Proof. Throughout the proof, we let $C$ denote a positive constant which depends only on $(\alpha, \theta, \gamma)$ and may vary in each occurrence. We may suppose without loss of generality that $\alpha \le 1$ and that $\gamma \le c$, so that $\theta^{\gamma} \ge \tfrac23$.

Step 1. We set up the argument. We keep track of the two quantities
$$G(r) := \frac{1}{r^2} \inf_{q \in \mathcal{Q}} \sup_{x \in B_r} |u(x) - q(x)| \qquad \text{and} \qquad H(r) := \frac{1}{r^2} \inf_{l \in \mathcal{L}} \sup_{x \in B_r} |u(x) - l(x)|.$$
By the hypotheses of the proposition and the triangle inequality, we obtain that, for every $r \in [r_0, R/2]$,
(3.8) $\qquad G(\theta r) \le \frac12 G(r) + C r^{-\alpha}\big( K + H(2r) \big) + L r^{\gamma}.$
Denote $s_0 := R$ and $s_j := \theta^{j-1} R/4$, and let $m \in \mathbb{N}$ be such that
$$s_m^{-\alpha} \le \frac14 \le s_{m+1}^{-\alpha}.$$
Denote $G_j := G(s_j)$ and $H_j := H(s_j)$. Noting that $\theta \le \tfrac12$, from (3.8) we get, for every $j \in \{1, \ldots, m-1\}$,
(3.9) $\qquad G_{j+1} \le \frac12 G_j + C s_j^{-\alpha}\big( K + H_{j-1} \big) + L s_j^{\gamma}.$
For each $j \in \{0, \ldots, m-1\}$, we may select $q_j \in \mathcal{Q}$ such that $G_j = \frac{1}{s_j^2} \sup_{x \in B_{s_j}} |u(x) - q_j(x)|$. We denote the Hessian matrix of $q_j$ by $Q_j$. The triangle inequality implies that
(3.10) $\qquad G_j \le H_j \le G_j + \frac{1}{s_j^2} \sup_{x \in B_{s_j}} \left| \tfrac12\, x \cdot Q_j x \right| = G_j + \tfrac12 |Q_j|,$
and
$$\frac{1}{s_j^2} \sup_{x \in B_{s_{j+1}}} |q_{j+1}(x) - q_j(x)| \le G_j + \theta^2 G_{j+1}.$$
The latter implies
$$|Q_{j+1} - Q_j| \le \frac{2}{s_{j+1}^2} \sup_{x \in B_{s_{j+1}}} |q_{j+1}(x) - q_j(x)| \le \frac{2}{\theta^2} G_j + 2 G_{j+1}.$$
In particular,
(3.11) $\qquad |Q_{j+1}| \le |Q_j| + C\big( G_j + G_{j+1} \big).$
Similarly, the triangle inequality also gives
(3.12) $\qquad |Q_j| = \frac{2}{s_j^2} \inf_{l \in \mathcal{L}} \sup_{x \in B_{s_j}} |q_j(x) - l(x)| \le 2 G_j + 2 H_j \le 4 H_j.$
Thus, combining (3.11) and (3.12) yields
(3.13) $\qquad |Q_j| \le |Q_0| + C \sum_{i=0}^{j} G_i \le C\left( H_0 + \sum_{i=0}^{j} G_i \right).$
Next, combining (3.9), (3.10) and (3.13), we obtain, for every $j \in \{0, \ldots, m-1\}$,
(3.14) $\qquad G_{j+1} \le \frac12 G_j + C s_j^{-\alpha}\left( K + H_0 + \sum_{i=0}^{j} G_i \right) + L s_j^{\gamma}.$
The rest of the argument involves first iterating (3.14) to obtain bounds on $G_j$, which yield bounds on $Q_j$ by (3.13), and finally on $H_j$ by (3.10).
Step 2. We show that, for every j ∈ {1, . . . , m},
(3.15) G j ≤ 2 -j G 0 + Cs -α j (K + H 0 ) + CL s γ j + R γ s -α j .
We argue by induction. Fix A, B ≥ 1 (which are selected below) and suppose that k ∈ {0, . . . , m -1} is such that, for every j ∈ {0, . . . , k},
G j ≤ 2 -j G 0 + As -α j (K + H 0 ) + L As γ j + BR γ s -α j .
Using (3.14) and this induction hypothesis (and that G 0 ≤ H 0 ), we get
G k+1 ≤ 1 2 G k + Cs -α k K + H 0 + k j=0 G j + Ls γ k ≤ 1 2 2 -k G 0 + As -α k (K + H 0 ) + L As γ k + BR γ s -α k + Ls γ k + Cs -α k K + H 0 + k j=0 2 -j G 0 + As -α j (K + H 0 ) + L As γ j + BR γ s -α j ≤ 2 -(k+1) G 0 + s -α k+1 (K + H 0 ) 1 2 A + CAs -α k + C + Ls γ k+1 1 2 θ -γ A + C + LR γ s -α k+1 1 2 B + CA + CBs -α k .
Now suppose in addition that k ≤ n with n such that Cs -α n ≤ 1 4 . Then using this and θ γ ≥ 2 3 , we may select A large enough so that
1 2 A + CAs -α k + C ≤ 3 4 A + C ≤ A and 1 2 θ -γ A + C ≤ A,
and then select B large enough so that
1 2 B + CA + CBs -α k ≤ B.
We obtain
G k+1 ≤ 2 -(k+1) G 0 + As -α k+1 (K + H 0 ) + ALs γ k+1 + BLR γ s -α k+1 .
By induction, we now obtain (3.15) for every j ≤ n ∧ m. In addition, for every j ∈ {n + 1, . . . , m}, we have that 1 ≤ s_j / s_m ≤ C. This yields (3.15) for every j ∈ {0, . . . , m}.
Step 3. The bound on H j and conclusion. By (3.13), (3.15), we have
Q j ≤ C H 0 + j i=0 G i ≤ C H 0 + j i=0 2 -i G 0 + Cs -α i (K + H 0 ) + CL(s γ i + R γ s -α i ≤ C H 0 + G 0 + Cs -α j (K + H 0 ) + CLR γ 1 + s -α j ≤ CH 0 + CKs -α j + CLR γ .
Here we also used that s -α j ≤ s -α m ≤ C. Using the previous inequality, (3.10) and (3.15), we conclude that
H j ≤ G j + 1 2 Q j ≤ CH 0 + CKs -α j + CLR γ .
This is (3.6). Note that (3.15) already implies (3.7) for β ∶= (log 2) log θ .
Next, we recall that solutions of the homogenized equation (which are essentially harmonic functions) satisfy the "improvement of quadratic approximation" property.
Lemma 3.3. Let r > 0. Assume that A satisfies (1.9) and let w ∈ C(B r ) be a solution of
(3.16) -tr AD 2 w = 0 in B r .
There exists θ(d, λ, Λ) ∈ (0, 1 2 ] such that, for every r > 0,
1 (θr) 2 inf q∈Q sup x∈B θr w(x) -q(x) ≤ 1 2 1 r 2 inf q∈Q sup x∈Br w(x) -q(x) .
Proof. Since A is constant, the solutions of (3.16) are harmonic (up to a linear change of coordinates). Thus the result of this lemma is classical.
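Since the proof is only indicated, here is one standard route to the improvement property, added for the reader's convenience (the constants are not optimized). After the linear change of variables x ↦ A^{-1/2} x, with A the constant matrix in (3.16), solutions become harmonic and balls are distorted only by factors depending on (λ, Λ); so assume w is harmonic in B_r.

% Added sketch, assuming w is harmonic in B_r.
% Step (a): for any quadratic \tilde q, the function w - \tilde q has constant Laplacian c := -\Delta\tilde q,
% and the mean value property applied to the harmonic function w - \tilde q - (c/2d)|x|^2 at the origin
% gives |c| \le C_d r^{-2} \sup_{B_r}|w - \tilde q|. Interior estimates for harmonic functions then yield
\[
  \sup_{B_{r/2}} |D^3 w|
  \;=\; \sup_{B_{r/2}} \Big| D^3\Big( w - \tilde q - \tfrac{c}{2d}|x|^2 \Big) \Big|
  \;\le\; C_d\, r^{-3} \sup_{B_r} \Big| w - \tilde q - \tfrac{c}{2d}|x|^2 \Big|
  \;\le\; C_d\, r^{-3} \sup_{B_r} |w - \tilde q| .
\]
% Step (b): taking q to be the second-order Taylor polynomial of w at the origin, Taylor's theorem gives
\[
  \frac{1}{(\theta r)^2} \sup_{B_{\theta r}} |w - q|
  \;\le\; C_d\, \theta\, r \sup_{B_{r/2}} |D^3 w|
  \;\le\; C_d\, \theta\, \frac{1}{r^2} \inf_{\tilde q \in \mathcal Q} \sup_{B_r} |w - \tilde q| ,
\]
% and it remains to choose \theta \le 1/(2 C_d); undoing the change of variables makes \theta depend on (d,\lambda,\Lambda).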
Equipped with the above lemmas, we now give the proof of Theorem 3.1.
Proof of Theorem 3.1. Fix s ∈ (0, d). We denote by C and c positive constants depending only on (s, σ, d, λ, Λ, ) which may vary in each occurrence. We proceed with the proof of (3.4). Let Y be the random variable in the statement of Proposition 2.3, with α the exponent there. Define X ∶= Y 1 s and observe that X satisfies (3.3). We take σ to be the smallest of the following: the exponent in (3.2) and half the exponent α in Proposition 2.3.
We may suppose without loss of generality that tr(AM) = f(0) = 0.
Step 1. We check that u satisfies the hypothesis of Proposition 3.2 with r 0 = CX . Fix r ∈ [CX , R 2]. We take v, w ∈ C(B 3r 4 ) to be the solutions of the problems
-tr AD 2 v = f (x) in B 3r 4 , v = u on ∂B 3r 4 , -tr AD 2 w = 0 in B 3r 4 , w = u on ∂B 3r 4 .
By the Alexandrov-Bakelman-Pucci estimate [START_REF] Caffarelli | Fully nonlinear elliptic equations[END_REF], we have
(3.17) 1 r 2 sup B r 2 v -w ≤ C ⨏ Br f (x) d dx 1 d ≤ Cr σ [f ] C 0,σ (Br) .
By the Krylov-Safonov Hölder estimate (3.2),
(3.18) r σ-2 [u] C 0,σ (B 3r 4 ) ≤ C 1 r 2 osc Br u + ⨏ Br f (x) d dx 1 d ≤ C 1 r 2 osc Br u + r σ [f ] C 0,σ (Br) .
By the error estimate (Proposition 2.3), we have
1 r 2 sup B r 2 u -v ≤ Cr -α (1 + Yr -s ) r σ [f ] C 0,σ (Br) + 1 r 2 osc ∂B 3r 4 u + r σ-2 [u] C 0,σ (B 3r 4 ) .
Using the assumption r s ≥ X s = Y and (3.18), this gives
(3.19) 1 r 2 sup B r 2 u -v ≤ Cr -α r σ [f ] C 0,σ (Br) + 1 r 2 osc Br u .
Using (3.17) and (3.19), the triangle inequality, and the definition of σ, we get
1 r 2 sup B r 2 u -w ≤ Cr -α 1 r 2 osc Br u + Cr σ [f ] C 0,σ (B R ) .
By Lemma 3.3, w ∈ A(r, θ) for some θ ≥ c.
Step 2. We apply Proposition 3.2 to obtain (3.4). The proposition gives, for every r ≥ r_0 = CX,
(1/r^2) inf_{l∈L} sup_{x∈B_r} |u(x) - l(x)| ≤ C ( R^σ [f]_{C^{0,σ}(B_R)} + (1/R^2) inf_{l∈L} sup_{x∈B_R} |u(x) - l(x)| ),
which is (3.4).
It is convenient to restate the estimate (3.4) in terms of "coarsened" seminorms. Recall that, for φ ∈ C ∞ (B 1 ),
Dφ(x 0 ) ≃ 1 h osc B h (x 0 ) φ(x), D 2 φ(x 0 ) ≃ 1 h 2 inf p∈R d osc B h (x 0 ) (φ(x) -p ⋅ x) , for 0 < h ≪ 1.
This motivates the following definitions: for α ∈ (0, 1], h ≥ 0, U ⊆ R d and x 0 ∈ U , we define the pointwise, coarsened h-scale C^{0,α}_h(U) and C^{1,α}_h(U) seminorms at x 0 by
[φ] C 0,α h (x 0 ,U ) ∶= sup r>h 1 r α osc Br(x 0 )∩U φ,
and
[φ] C 1,α h (x 0 ,U ) ∶= sup r>h 1 r 1+α inf l∈L osc Br(x 0 )∩U (φ(x) -l(x)) .
This allows us to write (3.4) in the form
(3.20) [u] C 1,1 1 (0,B R 2 ) ≤ CX 2 f (0) + tr(AM ) + R σ [f ] C 0,σ (B R ) + 1 R 2 inf l∈L sup x∈B R u -l .
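As a quick illustration of how the coarsened seminorms just defined behave (an added example, not used in the arguments): they ignore roughness below the scale h, which is why a bound such as (3.20) is meaningful even though u itself need not be twice differentiable.

% Added example: take phi(x) = |x| with base point x_0 = 0, U = B_R, and 0 < h < R.
% For every affine l(x) = a + p.x one checks osc_{B_r}(phi - l) >= r, with equality when p = 0, so
\[
  [\,\phi\,]_{C^{1,1}_h(0,B_R)}
  \;=\; \sup_{r>h} \frac{1}{r^{2}} \inf_{l\in\mathcal L} \operatorname{osc}_{B_r\cap B_R} (\phi - l)
  \;=\; \frac{1}{h},
\]
% (radii r > R contribute at most R/r^2 < 1/h and do not affect the supremum). The coarsened seminorm
% is therefore finite, even though |x| is not C^{1,1} (not even C^1) in any neighborhood of the origin.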
As a simple corollary to Theorem 3.1, we also have C 0,1 1 bounds on u: Corollary 3.4. Assume the hypotheses and notation of Theorem 3.1. Then,
(3.21) [u] C 0,1 1 (0,B R 2 ) ≤ X R f (0) + tr AM + R 1+σ [f ] C 0,σ (B R ) + 1 R osc B R u .
The proof follows from a simple interpolation inequality, which controls the seminorm
[⋅] C 0,1 h (B R ) in terms of [⋅] C 1,1 h (B R
) and the oscillation in B R : Lemma 3.5. For any R > 0 and φ ∈ C(B R ), we have
(3.22) [φ] C 0,1 h (0,B R ) ≤ 14 [φ] C 1,1 h (0,B R ) 1 2 osc B R φ 1 2
.
Proof. We must show that, for every s ∈ [h, R],
(3.23) 1 s osc Bs φ ≤ 14 [φ] C 1,1 h (0,B R ) 1 2 osc B R φ 1 2 . Set K ∶= [φ] C 1,1 h (0,B R ) -1 2 osc B R φ 1 2
and observe that, for every s ∈ [K, R], we have
(3.24) 1 s osc Bs φ ≤ K -1 osc B R φ = [φ] C 1,1 h (0,B R ) 1 2 osc B R φ 1 2
.
Thus we need only check (3.23) for s ∈ [h, K].
We next claim that, for every s ∈ [h, R),
(3.25) 2 s osc B s 2 φ ≤ 3s [φ] C 1,1 h (0,B R ) + 1 s osc Bs φ. Fix s and select p ∈ R d such that 1 s 2 osc Bs (φ(y) -p ⋅ y) ≤ [φ] C 1,1 h (0,B R ) . Then p = 1 2s osc Bs (-p ⋅ y) ≤ 1 2s osc Bs (φ(y) -p ⋅ y) + 1 2s osc Bs φ.
Together these yield 2 s osc
B s 2 φ ≤ 2 s osc B s 2 (φ(y) -p ⋅ y) + 2 s osc B s 2 (-p ⋅ y) = 2 s osc B s 2 (φ(y) -p ⋅ y) + 2 p ≤ 3 s osc Bs (φ(y) -p ⋅ y) + 1 s osc Bs φ ≤ 3s [φ] C 1,1 h (0,B R ) + 1 s osc Bs φ.
This is (3.25). We now iterate (3.25) to obtain the conclusion for s ∈ [h, K]. By induction, we see that for each j
∈ N with R j ∶= 2 -j K ≥ h, R -1 j osc B R j φ ≤ K -1 osc B K φ + 3 j-1 i=0 R i [φ] C 1,1 h (0,B R ) ≤ K -1 osc B K φ + 6K [φ] C 1,1 h (0,B R ) . Using (3.24), we deduce that for each j ∈ N with R j ∶= 2 -j K ≥ h R -1 j osc B R j φ ≤ 7 [φ] C 1,1 h (0,B R ) 1 2 osc B R φ 1 2 . For general s ∈ [h, R) we may find j ∈ N such that R j+1 ≤ s < R j to get s -1 osc Bs φ ≤ R -1 j+1 osc B R j φ ≤ 2R -1 j osc B R j φ ≤ 14 [φ] C 1,1 h (0,B R ) 1 2 osc B R φ 1 2
.
Equipped with this lemma, we now present the simple proof of Corollary 3.4:
Proof of Corollary 3.4. By interpolation, we also obtain (3.21). This follows from (3.4) and Lemma 3.5 as follows:
[u] C 0,1 h (0,B R ) ≤ C [u] 1 2 C 1,1 h (0,B R ) osc B R u 1 2 ≤ CX K 0 + f (0) + R σ [f ] C 0,σ (B R ) + R -2 osc B R u 1 2 osc B R u 1 2 ≤ CX K 0 R + R f (0) + R 1+σ [f ] C 0,σ (B R ) + R -1 osc B R u ,
where we used (3.22) in the first line, (3.4) to get the second line and Young's inequality in the last line. Redefining X to absorb the constant C, we obtain (3.21).
Green's function estimates
We will now use a similar argument to the proof of Theorem 3.1 to obtain estimates on the modified Green's functions G ε (⋅, 0) which are given by the solutions of:
(4.1) ε 2 G ε -tr A(x)D 2 G ε = χ B in R d . Proposition 4.1. Fix s ∈ (0, d). There exist a(d, λ, Λ) > 0, δ(d, λ, Λ) > 0 and an F-measurable random variable X ∶ Ω → [1, ∞) satisfying (4.2) E [exp (X s )] ≤ C(s, d, λ, Λ, ) < ∞ such that, for every ε ∈ (0, 1] and x ∈ R d , (4.3) G ε (x, 0) ≤ X d-1-δ ξ ε (x)
where ξ ε (x) is defined by:
(4.4) ξ ε (x) ∶= exp (-aε x ) ⋅ ⎧ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎩ log 2 + 1 ε(1 + x ) , in d = 2, (1 + x ) 2-d , in d > 2,
and
(4.5) osc B 1 (x) G ε (⋅, 0) ≤ (T x X )X d-1-δ (1 + x ) 1-d exp (-aε x ) .
We emphasize that (4.3) is a random estimate, and the proof relies on the homogenization process. In contrast to the situation for divergence form equations, there is no deterministic estimate for the decay of the Green's functions. Consider that for a general A ∈ Ω, the Green's function G(⋅, 0; A)
solves -tr AD 2 G(⋅, 0; A) = δ 0 in R d .
The solution may behave, for |x| ≫ 1, like a multiple of
K_γ(x) := |x|^{-γ} if γ > 0, log|x| if γ = 0, -|x|^{-γ} if γ < 0,
for any exponent γ in the range (d-1)/Λ - 1 ≤ γ ≤ Λ(d-1) - 1.
In particular, if Λ is large, then γ may be negative and so G(⋅, 0; A) may be bounded near the origin. To see that this range for γ is sharp, it suffices to consider, respectively, the diffusion matrices
A_1(x) = Λ (x ⊗ x)/|x|^2 + ( Id - (x ⊗ x)/|x|^2 ) and A_2(x) = (x ⊗ x)/|x|^2 + Λ ( Id - (x ⊗ x)/|x|^2 ).
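To see where the two endpoint exponents come from (an added verification, not needed later), one can test the radial profiles K_γ against A_1 and A_2, using that x/|x| is an eigenvector of both matrices.

% Added check: for gamma > 0 set u(x) = |x|^{-gamma}, r = |x|, \hat x = x/|x|. Then
%   D^2 u(x) = u''(r)\,\hat x\otimes\hat x + \frac{u'(r)}{r}\,(\mathrm{Id}-\hat x\otimes\hat x),
% with u''(r) = \gamma(\gamma+1) r^{-\gamma-2} and u'(r)/r = -\gamma r^{-\gamma-2}. Hence
\[
  -\operatorname{tr}\!\big(A_1(x) D^2 u\big) = -\,\gamma\, r^{-\gamma-2}\big(\Lambda(\gamma+1) - (d-1)\big),
  \qquad
  -\operatorname{tr}\!\big(A_2(x) D^2 u\big) = -\,\gamma\, r^{-\gamma-2}\big((\gamma+1) - \Lambda(d-1)\big),
\]
% which vanish precisely for \gamma = (d-1)/\Lambda - 1 and \gamma = \Lambda(d-1) - 1, respectively.
% The cases \gamma = 0 and \gamma < 0 are checked in the same way with \log|x| and -|x|^{-\gamma}.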
Note that A 1 and A 2 can be altered slightly to be smooth at x = 0 without changing the decay of G at infinity. Before we discuss the proof of Proposition 4.1, we mention an interesting application to the invariant measure associated with (4.1). Recall that the invariant measure is defined to be the solution m ε of the equation in doubledivergence form:
ε 2 (m ε -1) -div (D(A(x)m ε )) = 0 in R d .
By (4.3), we have that for every y ∈ R d ,
B (y) m ε (x) dx ≤ R d G ε (x, 0) dx ≤ X d-1-δ .
In particular, we deduce that, for some δ > 0,
P B (y) m ε (x) dx > t ≤ C exp -t d d-1 +δ .
This gives a very strong bound on the location of particles undergoing a diffusion in the random environment.
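To spell out how the displayed tail bound follows from the integral bound above (an added computation, with constants absorbed into X): one simply applies Chebyshev's inequality to exp(X^s).

% Added computation. Suppose \int_{B_\ell(y)} m_\eps(x)\,dx \le \mathcal X^{\,d-1-\delta} with
% \mathbb E[\exp(\mathcal X^{s})] \le C. Then, for every t \ge 1,
\[
  \mathbb P\Big[ \int_{B_\ell(y)} m_\eps(x)\,dx > t \Big]
  \;\le\; \mathbb P\big[ \mathcal X > t^{\frac{1}{d-1-\delta}} \big]
  \;\le\; \exp\big( - t^{\frac{s}{d-1-\delta}} \big)\, \mathbb E\big[ \exp(\mathcal X^{s}) \big]
  \;\le\; C \exp\big( - t^{\frac{s}{d-1-\delta}} \big).
\]
% Taking s < d close enough to d that s/(d-1-\delta) \ge d/(d-1) + \delta' for some \delta' > 0
% gives the bound in the form displayed above (after relabeling \delta).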
We now return to the proof of Proposition 4.1. Without loss of generality, we may change variables and assume that the effective operator A = I. The proof of (4.5) is based on the idea of using homogenization to compare the Green's function for the heterogeneous operator to that of the homogenized operator. The algebraic error estimates for homogenization in Proposition 2.3 are just enough information to show that, with overwhelming probability, the ratio of Green's functions must be bounded at infinity. This is demonstrated by comparing the modified Green's function G ε (⋅, 0) to a family of carefully constructed test functions.
The test functions {ϕ R } R≥C will possess the following three properties:
(4.6) inf A∈Ω -tr A(x)D 2 ϕ R ≥ χ B in B R , (4.7) -∆ϕ R (x) ≳ x -d in R d ∖ B R 2 , (4.8) ϕ R (x) ≲ R d-1-δ (1 + x 2-d ) in R d ∖ B R .
As we will show, these properties imply, for large enough R (which will be random and depend on the value of X from many different applications of
Proposition 2.3), that G ε (⋅, 0) ≤ ϕ R in R d .
The properties of the barrier function ϕ R inside and outside of B R will be used to compare with G ε (⋅, 0) in different ways. If G ε (⋅, 0) ≤ ϕ R fails to hold then, since they both decay at infinity, G ε (⋅, 0) - ϕ R must achieve a positive global maximum somewhere in R d . Since ϕ R is a supersolution of (4.6), this point must lie in R d ∖ B R . As ϕ R is a supersolution of the homogenized equation outside B R 2 , this event is very unlikely for R ≫ 1, by Proposition 2.3. Note that there is a trade-off in our selection of the parameter R: if R is relatively large, then ϕ R is larger and hence the conclusion G ε (⋅, 0) ≤ ϕ R is weaker, however the probability that the conclusion fails is also much smaller.
Since the Green's function for the Laplacian has different qualitative behavior in dimensions d = 2 and d > 2, we split the proof of Proposition 4.1 into these two cases, which are handled in the following subsections.
4.1. Proof of Proposition 4.1: Dimensions three and larger. Lemma 4.2. Let s ∈ (0, d). Then there exist constants C, c, γ, β > 0, depending only on (s, d, λ, Λ, ℓ), and a family of continuous functions {ϕ_R}_{R≥C} satisfying the following: (i) for every R ≥ C and x ∈ R^d,
(4.9) ϕ_R(x) ≤ C R^{d-2+γ} (1 + |x|)^{2-d},
(ii) there exists a smooth function ψ R such that
(4.10) ⎧ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩ -∆ψ R ≥ c x -2-β ψ R in R d ∖ B R 2 , ϕ R ≤ ψ R in R d ∖ B R 2 ,
and (iii) for each R ≥ C and A ∈ Ω, we have
(4.11) -tr A(x)D 2 ϕ R ≥ χ B in B R .
Proof. Throughout, we fix s ∈ (0, d) and let C and c denote positive constants which depend only on (s, d, λ, Λ, ) and may vary in each occurrence. We define ϕ R . For each R ≥ 4 , we set
ϕ R (x) ∶= ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ m R - h γ 2 + x 2 γ 2 , 0 ≤ x ≤ R, k R x 2-d exp - 1 β x -β , x > R,
where we define the following constants:
2β ∶= α(s, d, λ, Λ, ) > 0 is the exponent from Proposition 2.3 with σ = 1, γ ∶= max 1 2 , 1 - λ 2Λ h ∶= 2 λ (2 ) 2-γ k R ∶= h d -2 -2 β R -β -1 R d-2+γ exp 1 β 2 β R -β m R ∶= h γ 2 + R 2 γ 2 + k R R 2-d exp - 1 β R -β .
Notice that the choice of m R makes ϕ R continuous. We next perform some calculations to guarantee that this choice of ϕ R satisfies the above claims.
Step 1. We check that for every R ≥ 4 and x ∈ R d , (4.9) holds. Note that β = 1 2 α ≥ c and thus, for every R ≥ 4 , (4.12)
c ≤ exp - 1 β R -β ≤ 1.
For such R, we also have that, since d ≥ 3, (d - 2 - 2^β R^{-β}) ≥ c. Moreover, since R ≥ 4ℓ ≥ 4, this implies that (2/R)^β ≤ 1 - c.
Using also that h ≤ C, we deduce that for every R ≥ 4 ,
(4.13) k R ≤ CR d-2+γ and m R ≤ CR γ .
For x > R, (4.9) is immediate from the definition of ϕ R , (4.12) and (4.13).
For x ≤ R, we first note that ϕ R is a decreasing function in the radial direction and therefore sup
R d ϕ R = ϕ R (0) ≤ m R .
We then use (4.13) to get, for every
x ≤ R, ϕ R (x) ≤ m R ≤ CR γ ≤ CR d-2+γ (1 + x ) 2-d .
This gives (4.9).
Step 2. We check that ϕ R satisfies
(4.14) ϕ R (x) ≤ ψ R (x) ∶= k R x 2-d exp - 1 β x -β in R d ∖ B R 2 .
Since this holds with equality for x ≥ R, we just need to check it in the annulus {R 2 ≤ x < R}. For this it suffices to show that in this annulus, ψ Rϕ R is decreasing in the radial direction. Since both ψ R and ϕ R are decreasing radial functions, we simply need to check that
(4.15) Dϕ R (x) < Dψ R (x) for every x ∈ B R ∖ B R 2 .
We compute, for
R 2 ≤ x ≤ R, since γ ≤ 1, Dϕ R (x) = h 2 + x 2 γ 2 -1 x ≤ h x γ-1
and
Dψ R (x) = x -1 d -2 -x -β ψ R (x) = k R d -2 -x -β x 1-d exp - 1 β x -β ≥ k R d -2 -2 β R -β x 1-d exp - 1 β 2 β R -β .
It is now evident that the choice of k R ensures that (4.15) holds. This completes the proof of (4.14).
Step 3. We check that ψ R satisfies
(4.16) -∆ψ R (x) ≥ c x -2-β ψ R (x) in x ≥ C.
By a direct computation, we find that, for x ≠ 0,
-∆ψ_R(x) = |x|^{-2-β} ( d - 2 + β - |x|^{-β} ) ψ_R(x).
This yields (4.16). For future reference, we also note that for every x > 1, (4.17)
x -2 osc
B x 2 (x) ψ R + sup y∈B x 2 (x) y -1 Dψ R (y) ≤ C x -2 ψ R (x).
This follows from the computation
Dψ R (x) = x -1 ψ R (x) 2 -d + x -β .
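For completeness, the Laplacian identity at the start of this step can be organized through the logarithmic derivative (added detail; ψ_R is radial).

% Added detail: write psi_R(x) = k_R r^{2-d} e^{-r^{-\beta}/\beta} with r = |x| and g := \log\psi_R. Then
%   g'(r) = (2-d)/r + r^{-\beta-1},   g''(r) = (d-2)/r^2 - (\beta+1) r^{-\beta-2},
% and, for radial functions, \Delta\psi_R = \psi_R ( g'' + (g')^2 + (d-1) g'/r ). The r^{-2} terms cancel:
%   (d-2) + (2-d)^2 + (d-1)(2-d) = 0,
% while the r^{-\beta-2} terms sum to -(d-2+\beta), leaving
\[
  \Delta\psi_R(x) \;=\; \psi_R(x)\Big( -(d-2+\beta)\, r^{-\beta-2} + r^{-2\beta-2} \Big),
\]
% which is the identity displayed above after multiplying by -1 and factoring out r^{-2-\beta}.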
Step 4. We check that (4.11) holds. By a direct computation, we find that for every x ∈ B R ,
D 2 ϕ R (x) = -h 2 + x 2 γ 2 -1 Id + γ -2 2 + x 2 (x ⊗ x) = -h 2 + x 2 γ 2 -1 Id - x ⊗ x x 2 + 2 -(1 -γ) x 2 2 + x 2 x ⊗ x x 2 .
Making use of our choice of γ, we see that, for any A ∈ Ω and x ∈ B R ,
-tr A(x)D 2 ϕ R (x) ≥ h 2 + x 2 γ 2 -1 (d -1)λ -Λ(1 -γ)( 2 + x 2 ) -1 x 2 ≥ h 2 + x 2 γ 2 -1 ((d -1)λ -Λ(1 -γ)) .
The last expression on the right side is positive since, by the choice of γ,
(d -1)λ -Λ(1 -γ) ≥ d - 3 2 λ > λ > 0,
while for x ∈ B , we have, by the choice of h,
h 2 + x 2 γ 2 -1 ((d -1)λ -Λ(1 -γ)) ≥ h 2 2 γ 2 -1 λ > 1.
This completes the proof of (4.11).
Proof of Proposition 4.1 when d ≥ 3. As before, we fix s ∈ (0, d) and let C and c denote positive constants which depend only on (s, d, λ, Λ, ). We use the notation developed in Lemma 4.2 throughout the proof. We make one reduction before beginning the main argument. Rather than proving (4.3), it suffices to prove
(4.18) ∀x ∈ R d , G ε (x, 0) ≤ X d-1-δ (1 + x ) 2-d .
To see this, we notice that
G ε (x, 0) ≤ sup x ≤ε -1 G ε (x, 0) (1 + x ) 2-d ε d-2 exp(a) exp (-aε x ) in R d ∖ B ε -1 .
Indeed, the right hand side is larger than the left hand side on ∂B ε -1 , and hence in R d ∖ B ε -1 by the comparison principle and the fact that the right hand side is a supersolution of (2.3) for a(d, λ, Λ) > 0 (by the proof of Lemma 2.1). We then obtain (4.3) in R d ∖ B ε -1 by replacing X by CX and a by 1 2 a, using (4.18), and noting that
ε d-2 exp (-aε x ) ≤ C x 2-d exp - a 2 ε x for every x ≥ ε -1 .
We also get (4.3) in B ε -1 , with X again replaced by CX , from (4.18) and the simple inequality exp (-aε x ) ≥ c for every x ≤ ε -1 .
Step 1. We define X and check that it has the desired integrability. Let Y denote the random variable X in the statement of Proposition 2.3 in B R with s as above and σ = 1. Also denote Y x (A) ∶= Y(T x A), which controls the error in balls of radius R centered at a point x ∈ R d .
We now define (4. [START_REF] Gloria | An optimal error estimate in stochastic homogenization of discrete elliptic equations[END_REF])
X (A) ∶= sup z ∶ z ∈ Z d , Y z (A) ≥ 2 d z s .
The main point is that X has the following property by Proposition 2.3: for every z ∈ Z d with z > X , and every R > 1 8 z and g ∈ C 0,1 (∂B R (z)), every pair
u, v ∈ C(B R ) such that (4.20) -tr(A(x)D 2 u) ≤ 0 ≤ -∆v in B R (z), u ≤ g ≤ v on ∂B R (z),
must satisfy the estimate
(4.21) R -2 sup B R (z) (u(x) -v(x)) ≤ CR -α R -2 osc ∂B R (z) g + R -1 [g] C 0,1 (∂B R (z)) .
Let us check that
(4.22) E [exp (X s )] ≤ C(s, d, λ, Λ, ) < ∞.
A union bound and stationarity yield, for t ≥ 1,
P[X > t] ≤ ∑_{z∈Z^d∖B_t} P[ Y_z ≥ 2^d |z|^s ] ≤ ∑_{n∈N, 2^n≥t} ∑_{z∈B_{2^n}∖B_{2^{n-1}}} P[ Y_z ≥ 2^d |z|^s ] ≤ C ∑_{n∈N, 2^n≥t} 2^{dn} P[ Y ≥ 2^{(n-1)s+d} ] ≤ C ∑_{n∈N, 2^n≥t} 2^{dn} exp( -2^{(n-1)s+d} ) ≤ C exp(-2t^s).
It follows then that
E[exp(X s )] = s ∞ 0 t s-1 exp(t s )P[X > t] dt ≤ sC ∞ 0 t s-1 exp(-t s ) dt ≤ C.
This yields (4.22).
Step 2. We reduce the proposition to the claim that, for every R ≥ C,
(4.23) A ∈ Ω ∶ sup 0<ε≤1 sup x∈R d (G ε (x, 0; A) -ϕ R (x)) > 0 ⊆ {A ∈ Ω ∶ X (A) > R} .
If (4.23) holds, then by (4.9) we have
A ∈ Ω ∶ sup 0<ε≤1 sup x∈R d G ε (x, 0; A) -CR d-2+γ (1 + x ) 2-d > 0 ⊆ {A ∈ Ω ∶ X (A) > R} .
However this implies that, for every R ≥ C, 0 < ε ≤ 1 and
x ∈ R d , G ε (x, 0) ≤ CX d-2+γ (1 + x ) 2-d .
Setting δ ∶= 1γ ≥ c(d, λ, Λ) > 0, we obtain (4.18).
Step 3. We prove (4.23). Fix A ∈ Ω, 0 < ε ≤ 1 and R ≥ 10
√ d for which sup R d (G ε (⋅, 0) -ϕ R ) > 0.
The goal is to show that X ≥ R, at least if R ≥ C. We do this by exhibiting z > R and functions u and v satisfying (4.20), but not (4.21).
As the functions G ε (⋅, 0) and ϕ R decay at infinity (c.f. Lemma 2.1), there exists a point x 0 ∈ R d such that
G ε (x 0 , 0) -ϕ R (x 0 ) = sup R d (G ε (⋅, 0) -ϕ R ) > 0.
By the maximum principle and (4.11), it must be that x 0 ≥ R. By (4.14),
(4.24) G ε (x 0 , 0) -ψ R (x 0 ) = sup B x 0 2 (x 0 ) (G ε (⋅, 0) -ψ R ) .
We perturb ψ R by setting ψR (x) ∶= ψ R (x) + c x 0 -2-β ψ R (x 0 ) xx 0 2 which, in view of (4.16), satisfies
-∆ ψR ≥ 0 in B x 0 2 (x 0 ).
The perturbation improves (4.24) to
G ε (x 0 , 0) -ψR (x 0 ) ≥ sup ∂B x 0 2 (x 0 ) G ε (⋅, 0) -ψR + c x 0 -β ψ R (x 0 ).
If R ≥ C, then we may take z_0 ∈ Z^d to be the nearest lattice point to x_0 such that |z_0| > |x_0|, and get x_0 ∈ B_{|z_0|/4}(z_0). Since ψ_R(x) is decreasing in |x|, this implies
G ε (x 0 , 0) -ψR (x 0 ) ≥ sup ∂B z 0 4 (z 0 ) G ε (⋅, 0) -ψR + c z 0 -β ψ R (z 0 ).
In view of (4.17), this gives
G ε (x 0 , 0) -ψR (x 0 ) ≥ sup ∂B z 0 4 (z 0 ) G ε (⋅, 0) -ψR + cΓ z 0 -β , where Γ ∶= osc ∂B z 0 4 (z 0 ) ψR + z 0 ψR C 0,1 (∂B z 0 4 (z 0 )) .
We have thus found functions satisfying (4.20) but in violation of (4.21). That is, we deduce from the definition of Y z that Y z 0 ≥ c z 0 s+α-β -C and, in view of the fact that β = 1 2 α < α and z 0 > R, this implies that Y z 0 ≥ 2 d z 0 s provided R ≥ C. Hence X ≥ z 0 > R. This completes the proof of (4.23).
Dimension two.
The argument for (4.3) in two dimensions follows along similar lines as the proof when d ≥ 3, however the normalization of G ε is more tricky since it depends on ε. Lemma 4.3. Let s ∈ (0, 2). Then there exist constants C, c, γ, β > 0, depending only on (s, d, λ, Λ, ), and a family of continuous functions {ϕ R,ε } R≥C,ε≤c satisfying the following: (i) for every R ≥ C,
(4.25) ϕ R,ε (x) ≤ CR γ log 2 + 1 ε(1 + x ) exp (-aε x ) ,
(ii) there exists a smooth function ψ R,ε such that
-∆ψ R,ε ≥ c x -2-β ψ R,ε in B 2ε -1 ∖ B R 2 , ϕ R,ε ≤ ψ R,ε in B 2ε -1 ∖ B R 2 ,
and (iii) for every R ≥ C and A ∈ Ω, (4.26)
-tr A(x)D 2 ϕ R,ε ≥ χ B in B R ⋃ R 2 ∖ B ε -1 .
Proof. Throughout we assume d = 2, we fix s ∈ (0, 2) and let C and c denote positive constants which depend only on (s, λ, Λ, ) and may vary in each occurrence. We roughly follow the outline of the proof of (4.3) above in the case of d ≥ 3.
Step 1. The definition of ϕ R . For ε ∈ (0,
1 2 ], 4 ≤ R ≤ ε -1 and x ∈ R 2 , we set ϕ R,ε (x) ∶= ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ m R,ε - h γ 2 + x 2 γ 2 , 0 ≤ x ≤ R, k R 1 a exp(a) + log ε -log x exp - 1 β x -β , R < x ≤ 1 ε , b R,ε exp (-aε x ) , x > 1 ε ,
where the constants are defined as follows:
a ∶= a(λ, Λ) is the constant from Lemma 2.1, 2β ∶= α(s, λ, Λ, ) > 0 is the exponent from Proposition 2.3 with σ = 1,
γ ∶= max 1 2 , 1 - λ 2Λ , h ∶= 2 λ (2 ) 2-γ , k R ∶= 2h exp 1 β 2 β R -β R γ , m R,ε ∶= h γ 2 + R 2 γ 2 + k R 1 a exp(a) + log ε -log R exp - 1 β R -β , b R,ε ∶= 1 a k R exp 2a - 1 β ε β Observe that (4.27) k R ≤ CR γ , m R,ε ≤ CR γ (1 + log ε -log R) and b R,ε ≤ CR γ .
Step 2. We show that, for every ε ∈ (0, 1 2 ], 4 ≤ R ≤ ε -1 and x ∈ R 2 , (4.25) holds. This is relatively easy to check from the definition of ϕ R,ε , using (4.12) and (4.27). For x ∈ B R , we use ϕ R,ε ≤ m R,ε , (4.27) and exp(-aεR) ≥ c to immediately obtain (4.25). For x ∈ B ε -1 ∖ B R , the estimate is obtained from the definition of ϕ R,ε , the bound for k R in (4.27) and (4.12). For x ∈ R 2 ∖ B ε -1 , the logarithm factor on the right side of (4.25) is C and we get (4.25) from the bound for b R,ε in (4.27).
Step 3. We show that, for ε ∈ (0, c] and R ≥ C, we have
(4.28) ϕ R,ε (x) ≤ ψ R,ε (x) ∶= k R 1 a exp(a) + log ε -log x exp - 1 β x -β in 1 2 R ≤ x ≤ 2 ε .
We have equality in (4.28) for R ≤ x ≤ ε -1 by the definition of ϕ R,ε . As ϕ R,ε is radial, it therefore suffices to check that the magnitude of the radial derivative of ϕ R,ε is less than (respectively, greater than) than that of ψ R,ε in the annulus {R 2 ≤ x ≤ R} (respectively, {ε -1 ≤ x ≤ 2ε -1 }). This is ensured by the definitions of k R and b R,ε , as the following routine computation verifies:
first, in x ∈ B R ∖ B R 2 , we have Dϕ R,ε (x) = h 2 + x 2 γ 2 -1 x < h x γ-1 ,
and thus in B R ∖ B R 2 , provided R ≥ C, we have that Dψ R,ε (x) = k R x -1 -1 + x -β 1 a exp(a) + log ε -log x exp - 1 β x -β > 1 2 k R x -1 exp - 1 β 2 β R -β = hR γ x -1 ≥ h x γ-1 > Dϕ R,ε (x) . Next we consider x ∈ B 2ε -1 ∖ B ε -1 and estimate Dϕ R,ε (x) = aεb R,ε exp (-aε x ) > aεb R,ε exp (-2a) = 2εk R exp - 1 β ε β and Dψ R,ε (x) ≤ k R x -1 1 + 1 a exp(a) x -β exp - 1 β ε β 2 β ≤ 2εk R exp - 1 β ε β ,
the latter holding provided that ε ≤ c. This completes the proof of (4.28).
Step 4. We show that ψ R,ε satisfies
(4.29) -∆ψ R,ε ≥ c x -2-β ψ R,ε (x) in C ≤ x ≤ 2 ε .
By a direct computation, for every x ∈ R 2 ∖ {0}, we have
-∆ψ R,ε (x) = x -2-β β -x -β ψ R,ε (x) + k R 1 x 2 + 1 exp - 1 β x -β ≥ x -2-β β -x -β ψ R,ε (x).
From β ≥ c and the definition of ψ R,ε , we see that ψ R,ε > 0 and (βx -β ) ≥ c for every β -1 β ≤ x ≤ 2ε -1 . This yields (4.29).
For future reference, we note that, for every x ≤ 2ε -1 , (4.30) x -2 osc
B x 2 (x) ψ R,ε + sup y∈B x 2 (x) y -1 Dψ R,ε (y) ≤ C x -2 ≤ C x -2 ψ R,ε (x).
Step 5. We check (4.26) by checking that for every A ∈ Ω, (4.31)
-tr A(x)D 2 ϕ R,ε ≥ χ B in B R ∪ (R 2 ∖ B ε -1 ).
In fact, for B R , the computation is identical to the one which established (4.11), since our function ϕ R,ε here is the same as ϕ R (from the argument in the case d > 2) in B R , up to a constant. Therefore we refer to Step 4 in the proof of Lemma 4.2 for details. In R 2 ∖ B ε -1 , ϕ R,ε is a supersolution by the proof of Lemma 2.1 and by the choice of a.
We remark that in the case that R = ε -1 , the middle annulus in the definition of ϕ R,ε disappears, and we have that ϕ R,ε is a global (viscosity) solution of (4.31), that is,
(4.32) -tr A(x)D 2 ϕ R,ε ≥ χ B in R 2 if R = ε -1 .
To see why, by (4.26), we need only check that ϕ R,ε is a viscosity supersolution of (4.26) on the sphere ∂B R = ∂B ε -1 . However, the function ϕ R,ε cannot be touched from below on this sphere, since its inner radial derivative is smaller than its outer radial derivative by the computations in Step 3. Therefore we have (4.32). It follows by comparison that, for every ε ∈ (0, 1 2 ] and
x ∈ R 2 , G ε (x, 0) ≤ ϕ R,ε (x) if R = ε -1 ≤ Cε -γ log 2 + 1 ε(1 + x ) exp (-aε x ) (4.33)
Proof of Proposition 4.1 when d = 2. The proof follows very similarly to the case when d ≥ 3, using the appropriate adaptations for the new test function introduced in Lemma 4.3. As before, we fix s ∈ (0, 2) and let C and c denote positive constants which depend only on (s, λ, Λ, ℓ). We use the notation developed in Lemma 4.3 throughout the proof.
Step 1. We define the random variable X in exactly the same way as in (4. [START_REF] Gloria | An optimal error estimate in stochastic homogenization of discrete elliptic equations[END_REF], so (4.34)
X (A) ∶= sup z ∶ z ∈ Z d , Y z (A) ≥ 2 d z s ,
The argument leading to (4.22) follows exactly as before, so that
E[exp(X s )] ≤ C(s, λ, Λ, ) < ∞.
Step 2. We reduce the proposition to the claim that, for every R ≥ C,
(4.35) ⎧ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩ A ∈ Ω ∶ sup 0<ε< 1 R sup x∈R 2 (G ε (x, 0, A) -ϕ R,ε (x)) > 0 ⎫ ⎪ ⎪ ⎬ ⎪ ⎪ ⎭ ⊆ {A ∈ Ω ∶ X (A) > R} .
If (4.35) holds, then by (4.25) we have
⎧ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩ A ∶ sup 0<ε< 1 R sup x∈R 2 G ε (x, 0, A) -CR γ log 2 + 1 ε(1 + x ) exp (-aε x ) > 0 ⎫ ⎪ ⎪ ⎬ ⎪ ⎪ ⎭ ⊆ {A ∈ Ω ∶ X (A) > R} .
From this, we deduce that, for every 0 < ε < X -1 and x ∈ R 2 , (4.36)
G ε (x, 0) ≤ CX γ log 2 + 1 ε(1 + x ) exp (-aε x ) .
Moreover, if ε ∈ (0, 1 2 ] and ε ≥ X -1 , then by (4.33) we have
G ε (x, 0) ≤ Cε -γ log 2 + 1 ε(1 + x ) exp (-aε x ) ≤ CX γ log 2 + 1 ε(1 + x ) exp (-aε x ) .
Thus we have (4.36) for every ε ∈ (0, 1 2 ] and x ∈ R 2 . Taking δ ∶= 1γ ≥ c, we obtain the desired conclusion (4.3) for d = 2.
Step 3. We prove (4.35). This is almost the same as the last step in the proof of (4.23) for dimensions larger than two. Fix A ∈ Ω, 0 < ε ≤ 1 and R ≥ 2 for which sup
R d (G ε (⋅, 0) -ϕ R,ε ) > 0.
As in the case of dimensions larger than two, we select a point
x 0 ∈ R d such that G ε (x 0 , 0) -ϕ R,ε (x 0 ) = sup R d (G ε (⋅, 0) -ϕ R,ε ) > 0.
By the maximum principle and (4.26), it must be that R ≤ x 0 ≤ ε -1 . By (4.28), (4.37)
G ε (x 0 , 0) -ψ R,ε (x 0 ) = sup B x 0 2 (x 0 ) (G ε (⋅, 0) -ψ R,ε ) .
We perturb ψ R,ε by setting ψR,ε (x) ∶= ψ R,ε (x) + c x 0 -2-β ψ R,ε (x 0 ) xx 0 2 which, in view of (4.29), satisfies
-∆ ψR,ε ≥ 0 in B x 0 2 (x 0 ).
The perturbation improves (4.37) to
G_ε(x_0, 0) - ψ̃_{R,ε}(x_0) ≥ sup_{∂B_{|x_0|/2}(x_0)} ( G_ε(⋅, 0) - ψ̃_{R,ε} ) + c |x_0|^{-β} ψ_{R,ε}(x_0).
Assuming R ≥ C, we may take z 0 ∈ Z d to be the nearest lattice point to x 0 such that z 0 > x 0 and deduce that x 0 ∈ B z 0 4 as well as
G ε (x 0 , 0) -ψR,ε (x 0 ) ≥ sup ∂B z 0 4 (z 0 ) G ε (⋅, 0) -ψR,ε + c z 0 -β ψ R,ε (z 0 ).
In view of (4.30), this gives
G ε (x 0 , 0) -ψR,ε (x 0 ) ≥ sup ∂B z 0 4 (z 0 ) G ε (⋅, 0) -ψR,ε + cΓ z 0 -β , where Γ ∶= z 0 -2 osc ∂B z 0 4 (z 0 ) ψR,ε + z 0 -1 ψR,ε C 0,1 (∂B z 0 4 (z 0 )) .
We have thus found functions satisfying (4.20) but in violation of (4.21), that is, we deduce from the definition of Y z that Y z 0 ≥ c z 0 s+α-β -C. In view of the fact that β = 1 2 α < α and z 0 > R, this implies that
Y z 0 ≥ 2 d z 0 s provided R ≥ C.
By the definition of X , we obtain X ≥ z 0 > R. This completes the proof of (4.35).
Sensitivity estimates
In this section, we present an estimate which uses the Green's function bounds to control the vertical derivatives introduced in the spectral gap inequality (Proposition 2.2). Recall the notation from Proposition 2.2:
X ′ z ∶= E * X F * (Z d ∖ {z}) and V * [X] ∶= z∈Z d (X -X ′ z ) 2 .
The vertical derivative (X -X ′ z ) measures, in a precise sense, the sensitivity of X subject to changes in the environment near z. We can therefore interpret (X -X ′ z ) as a derivative of X with respect to the coefficients near z. The goal then will be to understand the vertical derivative (X -X
′ z ) when X is φ ε (x), for fixed x ∈ R d .
The main result of this section is the following proposition which computes
φ ε (x) -E * [φ ε (x) F * (Z d ∖ {z})
] in terms of the random variable introduced in Proposition 4.1. Throughout the rest of the section, we fix M ∈ S d with M = 1 and let ξ ε be defined as in (4.4).
Proposition 5.1. Fix s ∈ (0, d). There exist positive constants a(d, λ, Λ) > 0, δ(d, λ, Λ) > 0 and an
F * -measurable random variable X ∶ Ω * → [1, ∞) satisfying (5.1) E [exp(X s )] ≤ C(s, d, λ, Λ, ) < ∞ such that, for every ε ∈ 0, 1 2 , x ∈ R d and z ∈ Z d , (5.2) φ ε (x) -E * φ ε (x) F * (Z d ∖ {z}) ≤ (T z X ) d+1-δ ξ ε (x -z).
Before beginning the proof of Proposition 5.1, we first provide a heuristic explanation of the main argument. We begin with the observation that we may identify the conditional expectation E * [X F * (Z d ∖ {z})] via resampling in the following way. Let (Ω ′ * , F ′ * , P ′ * ) denote an independent copy of (Ω * , F * , P * ) and define, for each z ∈ Z d , a map
θ ′ z ∶ Ω × Ω ′ → Ω by θ ′ z (ω, ω ′ )(y) ∶= ω(y) if y ≠ z, ω ′ (z) if y = z.
It follows that, for every ω ∈ Ω,
(5.3) E * X F * (Z d ∖ {z}) (ω) = E ′ * [X(θ ′ z (ω, ⋅))]
. Therefore, we are interested in estimating differences of the form X(ω) -X(θ ′ z (ω, ω ′ )), which represent the expected change in X if we resample the environment at z. Observe that, by (1.16)
, if ω, ω ′ ∈ Ω * , z ∈ R d , and A ∶= π(ω) and A ′ ∶= π(θ ′ z (ω, ω ′ )), then (5.4) A ≡ A ′ in R d ∖ B 2 (z).
Denote by φ ε and φ ′ ε the corresponding approximate correctors with modified Green's functions G ε and G ′ ε . Let w ∶= φ εφ ′ ε . Then we have
ε 2 w -tr A ′ (x)D 2 w = tr ((A(x) -A ′ (x)) (M + D 2 φ ε ) ≤ dΛ 1 + D 2 φ ε (x) χ B 2 (z) (x),
where χ E denotes the characteristic function of a set E ⊆ R d . By comparing w to G ′ ε , we deduce that, for C(d, λ, Λ) ≥ 1,
(5.5)
φ ε (x) -φ ′ ε (x) ≤ C(1 + [φ ε ] C 1,1 (B 2 (z)) )G ′ ε (x, z).
If φ ε satisfied a C 1,1 bound, then by (3.20) and ( 4.3), we deduce that
φ ε (0) -φ ′ ε (0) ≤ C ((T z X )(ω)) 2 ((T z Y)(θ ′ z (ω, ω ′ ))) d-1-δ ξ ε (z),
for X defined as in (3.20), Y defined as in (4.3), and ξ ε (z) defined as in (4.4).
Taking expectations of both sides with respect to P ′ * , we obtain
(5.6) φ ε (0) -E * φ ε (0) F * (Z d ∖ {z}) ≤ C(T z X ) 2 (T z Y * ) d-1-δ ξ ε (z),
where
(5.7) Y * ∶= E ′ * Y(θ ′ 0 (ω, ω ′ )) d-1-δ 1 (d-1-δ) .
Jensen's inequality implies that the integrability of Y * is controlled by the integrability of Y. First, consider that, for s ≥ d -1δ, by the convexity of t ↦ exp(t r ) for r ≥ 1, we have
E [exp (Y s * )] = E exp E ′ * Y(θ ′ 0 (ω, ω ′ )) d-1-δ s (d-1-δ) ≤ E [E ′ * [exp (Y(θ ′ 0 (ω, ω ′ )) s )]] = E [exp (Y s )] .
Integrability of lower moments for s ∈ (0, d -1δ) follows by the bound
E [exp (Y s * )] = E exp Y d-1-δ * sup exp -Y d-1-δ * + Y s * ≤ E exp Y d-1-δ *
by the monotonicity of the map p ↦ x p for p ≥ 0 and x ≥ 1, which we can take without loss of generality by letting Y * = Y * + 1. We may now redefine X to be X + Y * to get one side of the desired bound (5.2). The analogous bound from below is obtained by exchanging M for -M in the equation for φ ε , or by repeating the above argument and comparing w to -G ′ ε . The main reason that this argument fails to be rigorous is technical: the quantity [φ ε ] C 1,1 (B 2 (z)) is not actually controlled by Theorem 3.1, rather we have control only over the coarsened quantity
[φ ε ] C 1,1 1 (B 2 (z))
. Most of the work in the proof of Proposition 5.1 is therefore to fix this glitch by proving that (5.5) still holds if we replace the Hölder seminorm on the right side by the appropriate coarsened seminorm. This is handled by the following lemma, which we write in a rather general form:
Lemma 5.2. Let ε ∈ (0, 1 ] and z ∈ Z d and suppose A, A ′ ∈ Ω satisfy (5.4). Also fix f, f ′ ∈ C(R d ) ∩ L ∞ (R d ) and let u, u ′ ∈ C(R d ) ∩ L ∞ (R d ) be the solutions of ⎧ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩ ε 2 u -tr A(x)D 2 u = f in R d , ε 2 u ′ -tr A ′ (x)D 2 u ′ = f ′ in R d .
Then there exists a constant C(d, λ, Λ, ) ≥ 1 such that, for every x ∈ R d and δ > 0,
(5.8) u(x) -u ′ (x) ≤ C δ + [u] C 1,1 1 (z,B 1 ε (z)) + sup y∈B (z), y ′ ∈R d f (y ′ ) -f (y) -δε 2 y ′ -z 2 G ′ ε (x, z) + y∈Z d G ε (x, y) sup B 2 (y) (f -f ′ ).
Here G ′ ε denotes the modified Green's function for A ′ . Proof. We may assume without loss of generality that z = 0. By replacing u(x) by the function
u ′′ (x) ∶= u(x) - y∈Z d G ε (x, y) sup B 2 (y) (f -f ′ ),
we may assume furthermore that f ′ ≥ f in R d . Fix ε ∈ (0, 1 2 ]. We will show that sup
x∈R d (u(x) -u ′ (x) -KG ′ ε (x, 0)) > 0 implies a contradiction for K > C 1 + [u] C 1,1 1 (z,B 1 ε (z)) + sup y∈B (z), y ′ ∈R d f (y ′ ) -f (y) -ε 2 y ′ -z 2 G ′ ε (x, z)
and C = C(d, λ, Λ, ) chosen sufficiently large.
Step 1. We find a touching point x 0 ∈ B 2 . Consider the auxiliary function ξ(x) ∶= u(x)u ′ (x) -KG ′ ε (x, 0). By (5.4) and using that f ′ ≥ f , we see that ξ satisfies
ε 2 ξ -tr A(x)D 2 ξ ≤ 0 in R d ∖ B 2 .
By the maximum principle and the hypothesis, sup
R d ξ = sup B 2 ξ > 0. Select x 0 ∈ B 2 such that (5.9) ξ(x 0 ) = sup R d ξ.
Step 2. We replace u by a quadratic approximation in B and get a new touching point. Select p ∈ R d such that (5.10) sup
x∈B u(x) -u(x 0 ) -p ⋅ (x -x 0 ) ≤ 4 2 [u] C 1,1 1 (0,B 1
ε ) . Fix ν ≥ 1 to be chosen below and define the function
ψ(x) ∶= u(x 0 ) + p ⋅ (x -x 0 ) -ν x -x 0 2 -u ′ (x) -KG ′ ε (x, 0),
The claim is that (5.11) x ↦ ψ(x) has a local maximum in B .
To verify (5.11), we check that ψ(x 0 ) > sup ∂B ψ. For y ∈ ∂B , we compute
ψ(x 0 ) = u(x 0 ) -u ′ (x 0 ) -KG ′ ε (x 0 , 0) ≥ u(y) -u ′ (y) -KG ′ ε (y, 0) (by (5.9)) ≥ u(x 0 ) + p ⋅ (y -x 0 ) -8 2 [u] C 1,1 1 (0,B 1 ε ) -u ′ (y) -KG ′ ε (y, 0) (by (5.10)) = ψ(y) + ν y -x 0 2 -8 2 [u] C 1,1 1 (0,B 1 ε ) ≥ ψ(y) + 2 ν -8 2 [u] C 1,1 1 (0,B 1 ε )
, where in the last line we used yx 0 ≥ 2 . Therefore, for every
ν > 8 [u] C 1,1
1 (0,B 1 ε ) , the claim (5.11) is satisfied.
Step 3. We show that, for every x ∈ R d , (5.12)
u(x) ≤ Cδε^{-2} + δ|x|^2 + sup_{y∈R^d} ( ε^{-2} f(y) - δ|y|^2 ).
Define w(x) := u(x) - ( δ|x|^2 + L ), for L := sup_{x∈R^d} ( ε^{-2} f(x) - δ|x|^2 ) + 2dΛ ε^{-2} δ.
Using the equation for u, we find that
ε^2 w - tr( A(x)D^2 w ) ≤ f - ε^2 ( δ|x|^2 + L ) + 2dΛδ.
Using the definition of L, we deduce that
ε 2 w -tr A(x)D 2 w ≤ 0 in R d .
Since w(x) → -∞ as x → ∞, we deduce from the maximum principle that w ≤ 0 in R d . This yields (5.12).
Step 4. We conclude by obtaining a contradiction to (5.11) for an appropriate choice of K. Observe that, in B , the function ψ satisfies
ε 2 ψ -tr A ′ (x)D 2 ψ ≤ ε 2 (u(x 0 ) + p ⋅ (x -x 0 ) -ν x -x 0 2 ) + Cν -f ′ (x) -K ≤ ε 2 u(x) + Cν -f (x) -K ≤ Cν + sup y∈R d δ + f (y) -f (x) -ε 2 y 2 -K.
Thus (5.11) violates the maximum principle provided that
K > C(ν + δ) + sup x∈B , y∈R d f (y) -f (x) -ε 2 y 2 .
This completes the proof.
We now use the previous lemma and the estimates in Sections 3 and 4 to prove the sensitivity estimates (5.2).
Proof of Proposition 5.1. Fix s ∈ (0, d). Throughout, C and c will denote positive constants depending only on (s, d, λ, Λ, ) which may vary in each occurrence, and X denotes an F * -measurable random variable on Ω * satisfying E [exp (X p )] ≤ C for every p < s, which we also allow to vary in each occurrence.
We fix z ∈ Z d and identify the conditional expectation with respect to F * (Z d ∖ {z}) via resampling, as in (5.3). By the discussion following the statement of the proposition, it suffices to prove the bound (5.6) with Y * defined by (5.7) for some random variable Y ≤ X . To that end, fix ω ∈ Ω, ω ∈ Ω ′ , and denote ω ∶= θ z (ω, ω ′ ) as well as A = π(ω) and A ′ = π(ω). Note that (5.4) holds. Also let φ ε and φ ′ ε denote the approximate correctors and G ε and G ′ ε the Green's functions.
Step 1. We use Theorem 3.
1 to estimate [φ ε ] C 1,1 1 (z,B 1 ε (z)) .
In preparation, we rewrite the equation for φ ε in terms of
w ε (x) ∶= 1 2 x ⋅ M x + φ ε (x), which satisfies -tr A(x)D 2 w ε = -ε 2 φ ε (x).
In view of the fact that the constant functions ± sup x∈R d tr(A(x)M ) are super/subsolutions, we have (5.13) sup
x∈R d ε 2 φ ε (x) ≤ sup x∈R d tr(A(x)M ) ≤ dΛ ≤ C,
and this yields (5.14)
ε 2 osc B 4 ε w ε ≤ ε 2 osc B 4 ε 1 2 x ⋅ M x + ε 2 osc B 4 ε φ ε ≤ C.
Applying the Krylov-Safonov Hölder estimate (3.2) to
w ε in B R with R = 4ε -1 yields (5.15) ε 2-β [w ε ] C 0,β (B 2 ε ) ≤ C. Letting Q(x) ∶= 1 2 x ⋅ M x, we have (5.16) ε 2 [φ ε ] C 0,β (B 2 ε ) ≤ ε 2 [w ε ] C 0,β (B 2 ε ) +ε 2 [Q] C 0,β (B 2 ε ) ≤ Cε β +C M ε β ≤ Cε β .
We now apply Theorem 3.1 (specifically (3.20)) to w ε with R = 2ε -1 to obtain
[w ε ] C 1,1 1 (z,B 1 ε (z)) ≤ (T z X ) 2 sup B 2 ε ε 2 φ + ε -β ε 2 φ ε C 0,β (B 2 ε ) + ε 2 osc B 2 ε w ε ≤ C(T z X ) 2 .
As w ε and φ ε differ by the quadratic Q, we obtain (5.17)
[φ ε ] C 1,1 1 (z,B 1 ε (z)) ≤ C (T z X ) 2 + M ≤ C(T z X ) 2 .
Step 2. We estimate G ′ ε (⋅, z) and complete the argument for (5.6). By Proposition 4.1, we have
(5.18) G ′ ε (x, z) ≤ (T z X ) d-1-δ ξ ε (x -z) for ξ ε (x -z) defined as in (4.4). Lemma 5.2 yields, for every x ∈ R d , φ ε (x) -φ ′ ε (x) ≤ C 1 + [φ ε ] C 1,1 1 (z,B 1 ε (z)) G ′ ε (x, z) + 2dΛG ε (x, z
). Inserting (5.17) and (5.18) gives (5.19)
φ ε (x) -φ ′ ε (x) ≤ C(T z X ) 2 (T z X ) d-1-δ ξ ε (x -z)
. This is (5.6).
Optimal scaling for the approximate correctors
We complete the rate computation for the approximate correctors φ ε . We think of breaking up the decay of ε 2 φ ε (0)tr(AM ) into two main contributions of error:
ε 2 φ ε (0) -tr(AM ) = ε 2 φ ε (0) -E ε 2 φ ε (0) "random error" + E ε 2 φ ε (0) -tr(AM ) "deterministic error" .
The "random error" will be controlled by the concentration inequalities established in Section 5. We will show that the "deterministic error" is controlled by the random error, and this will yield a rate for ε 2 φ ε (0) + tr(AM ).
First, we control the random error using Proposition 2.2 and the estimates from the previous three sections. Proposition 6.1. There exist δ(d, λ, Λ) > 0 and C(d, λ, Λ, ) ≥ 1 such that, for every ε ∈ (0, 1 2 ], and x ∈ R d , (6.1)
E ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ exp ⎛ ⎝ 1 E(ε) ε 2 φ ε (x) -E ε 2 φ ε (x) 1 2 +δ ⎞ ⎠ ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ ≤ C.
Proof. For readability, we prove (6.1) for x = 0. The argument for general x ∈ R d is almost the same. Define
ξ ε (x) ∶= exp(-aε x ) ⋅ ⎧ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎩ log 2 + 1 ε(1 + x ) if d = 2, (1 + x ) 2-d if d ≥ 3.
According to Proposition 5.1, for β > 0,
exp ⎛ ⎝ C V * ε 2 φ ε (0) E(ε) β ⎞ ⎠ = exp ⎛ ⎝ C ε 4 E(ε) 2 z∈Z d φ ε (0) -E * φ ε (0) F * (Z d ∖ {z}) 2 β ⎞ ⎠ ≤ exp ⎛ ⎝ C ε 4 E(ε) 2 z∈Z d (T z X ) 2d+2-2δ ξ ε (z) 2 β ⎞ ⎠ .
We claim (and prove below) that (6.2)
z∈Z d ξ ε (z) 2 ≤ Cε -4 E(ε) 2 .
Assuming (6.2) and applying Jensen's inequality for discrete sums, we have
exp ⎛ ⎝ C ε 4 E(ε) 2 z∈Z d (T z X ) 2d+2-2δ ξ ε (z) 2 β ⎞ ⎠ ≤ exp ⎛ ⎝ C ∑ z∈Z d (T z X ) 2d+2-2δ ξ(z) 2 ∑ z∈Z d ξ ε (z) 2 β ⎞ ⎠ ≤ ∑ z∈Z d ξ(z) 2 exp C (T z X ) (2d+2-2δ)β ∑ z∈Z d ξ ε (z) 2 .
Select β ∶= d (2d + 2 -3δ). Taking expectations, using stationarity, and applying Proposition 5.1, we obtain
E * ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ exp ⎛ ⎝ C V * ε 2 φ ε (0) E(ε) β ⎞ ⎠ ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ ≤ C.
Finally, an application of Proposition 2.2 gives, for γ ∶= 2β (1 + β) ∈ (0, 2),
E exp ε 2 φ ε (0) E(ε) -E ε 2 φ ε (0) E(ε) γ ≤ C.
This completes the proof of the proposition, subject to the verification of (6.2), which is a straightforward computation. In dimension d ≥ 3, we have
z∈Z d ξ ε (z) 2 ≤ C R d (1 + x ) 4-2d exp (-2aε x ) dx = Cε d-4 R d (ε + y ) 4-2d exp (-2a y ) dy ≤ Cε d-4 R d ∖Bε y 4-2d exp (-2a y ) dy + Bε ε 4-2d dy = C ⋅ 1 + ε d-4 in d ≠ 4, 1 + log ε in d = 4, = Cε -4 E(ε) 2 .
In dimension d = 2, we have
z∈Z d ξ ε (z) 2 ≤ C R d log 2 2 + 1 ε(1 + x ) exp (-2aε x ) dx ≤ Cε -2 R d log 2 2 + 1 ε + y exp (-2a y ) dy ≤ Cε -2 Bε log 2 2 + 1 ε exp (-2a y ) dy + R d ∖Bε log 2 2 + 1 y exp (-2a y ) .
We estimate the two integrals on the right as follows:
Bε log 2 2 + 1 ε exp (-2a y ) dy ≤ B ε log 2 2 + 1 ε ≤ Cε 2 log 2 2 + 1 ε and R d ∖Bε log 2 2 + 1 y exp (-2a y ) dy ≤ B 1 log 2 2 + 1 ε + C R d ∖B 1 exp (-2a y ) dy ≤ C log 2 2 + 1 ε .
Assembling the last three sets of inequalities yields in d = 2 that
z∈Z d ξ ε (z) 2 ≤ Cε -2 log 2 2 + 1 ε ≤ Cε -4 E(ε) 2 .
Next, we show that the deterministic error is controlled from above by the random error. The basic idea is to argue that if the deterministic error is larger than the typical size of the random error, then this is inconsistent with the homogenization. The argument must of course be quantitative, so it is natural that we will apply Proposition 2.3. Note that if we possessed the bound sup_{x∈R^d} |φ_ε(x) - E[φ_ε(0)]| ≲ ε^{-2} E(ε), then our proof here would be much simpler. However, this bound is too strong (we do not have, and of course cannot expect, such a uniform estimate on the fluctuations to hold), and therefore we need to cut off larger fluctuations and argue by approximation. This is done by using the Alexandrov-Bakelman-Pucci estimate and (6.1) in a straightforward way. Proposition 6.2. There exists C(d, λ, Λ, ℓ) ≥ 1 such that, for every ε ∈ (0, 1/2] and x ∈ R^d, (6.3) | E[ε^2 φ_ε(x)] - tr(AM) | ≤ C E(ε).
Proof. By symmetry, it suffices to prove the following one-sided bound: for every ε ∈ (0, 1 2 ] and x ∈ R d , (6.4) tr(AM ) -E ε 2 φ ε (x) ≥ -CE(ε).
The proof of (6.4) will be broken down into several steps.
Step 1. We show that (6.5)
E ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ exp ⎛ ⎝ 1 ε -2 E(ε) osc B √ d φ ε 1 2 +δ ⎞ ⎠ ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ ≤ C.
Let k ∈ L be the affine function satisfying
sup x∈B √ d φ ε -k(x) = inf l∈L sup x∈B √ d φ ε -l(x) .
According to (5.17), (6.6) sup
x∈B √ d φ ε -k(x) ≤ CX 2 .
Since k is affine, its slope can be estimated by its oscillation on
B √ d ∩ Z d : ∇k ≤ C osc B √ d ∩Z d k.
The previous line and (6.6) yield that ∇k ≤ C osc
B √ d ∩Z d φ ε + CX 2 .
By stationarity and (6.1), we get (6.7)
E ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ exp ⎛ ⎝ 1 ε -2 E(ε) osc B √ d ∩Z d φ ε 1 2 +δ ⎞ ⎠ ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ ≤ C Therefore, E ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ exp ⎛ ⎝ 1 ε -2 E(ε) ∇k 1 2 +δ ⎞ ⎠ ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ ≤ C
The triangle inequality, (6.6) and (6.7) imply (6.5).
Step 2. Consider the function
f ε (x) ∶= -ε 2 φ ε (x) + E ε 2 φ ε (0) + .
We claim that, for every R ≥ 1,
(6.8) E ⨏ B R f ε (x) d dx 1 d ≤ CE(ε).
Indeed, by Jensen's inequality, (6.1) and (6.5),
E ⨏ B R f ε (x) d dx 1 d ≤ ⨏ B R E f ε (x) d dx 1 d ≤ CE(ε).
Step 3. We prepare the comparison. Define fε (x) ∶= min -ε
2 φ ε (x), E -ε 2 φ ε (0) = -ε 2 φ ε (x) -f ε (x),
fix R ≥ 1 (we will send R → ∞ below) and denote by φε , the solution of
⎧ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩ -tr A(x)(M + D 2 φε ) = fε in B R , φε = dΛε -2 M on ∂B R .
Note that the boundary condition was chosen so that φ ε ≤ φε on ∂B R . Thus the Alexandrov-Bakelman-Pucci estimate and (6.8) yield (6.9
) E R -2 sup B R φ ε -φε ≤ CE(ε).
Step 4. Let φ denote the solution to (6.10)
⎧ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩ -tr(A(M + D 2 φ)) = E -ε 2 φ ε (0) in B R , φ = dΛε -2 M on ∂B R .
Notice that the right hand side and boundary condition for (6.10) are chosen to be constant. Moreover, we can solve for φ explicitly: for x ∈ B R , we have (6.11)
φ(x) = dΛ ε^{-2} |M| - ( (|x|^2 - R^2) / (2 tr A) ) ( tr(AM) - E[ε^2 φ_ε(0)] ).
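One can verify the explicit formula directly (an added check, writing \bar A for the constant homogenized matrix): the Hessian of the quadratic part is a constant multiple of the identity, and the boundary values match.

% Added check of (6.11): with c_\eps := \operatorname{tr}(\bar A M) - \mathbb E[\eps^2\phi_\eps(0)],
% the quadratic term has D^2\bar\phi(x) = -\,(c_\eps/\operatorname{tr}\bar A)\,\mathrm{Id}, so that
\[
  -\operatorname{tr}\!\big(\bar A\,(M + D^2\bar\phi)\big)
  \;=\; -\operatorname{tr}(\bar A M) + c_\eps
  \;=\; \mathbb E\big[-\eps^2\phi_\eps(0)\big],
\]
% while on \partial B_R the factor |x|^2 - R^2 vanishes, giving \bar\phi = d\Lambda\eps^{-2}|M| there,
% as required by (6.10).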
We point out that since fε ≤ E [-ε 2 φ ε (0)], we have that φε satisfies
⎧ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩ -tr(A(x)(M + D 2 φε )) ≤ E -ε 2 φ ε (0) in B R , φε = dΛε -2 M on ∂B R .
It follows then that we may apply Proposition 2.3 to the pair 1 2 x ⋅ M x + φε (x) and 1 2 x ⋅ M x + φ(x), which gives
(6.12) E R -2 sup x∈B R φε -φ ≤ CR -α .
Step 5. The conclusion. We have, by (6.9), (6.12) and (6.11),
-dΛε -2 M ≤ E [φ ε (0)] ≤ E φε (0) + CE(ε)R 2 ≤ φ(0) + CR 2-α + CE(ε)R 2 = dΛε -2 M + R 2 2 tr A tr(AM ) -ε 2 E [φ ε (0)] + CR 2-α + CE(ε)R 2 .
Rearranging, we obtain
tr(AM ) -ε 2 E [φ ε (0)] ≥ C -2dΛ M ε -2 R -2 -CR -α -CE(ε) .
Sending R → ∞ yields tr(AM ) -E ε 2 φ ε (0) ≥ -CE(ε).
Since (6.5) and stationarity implies that, for every x ∈ R d ,
E ε 2 φ ε (x) -E ε 2 φ ε (0) ≤ CE(ε),
the proof of (6.4) is complete.
The proof of Theorem 1.1 is now complete, as it follows immediately from Propositions 6.1, 6.2 and (6.5).
Existence of stationary correctors in d > 4
In this section we prove the following result concerning the existence of stationary correctors in dimensions larger than four.
Theorem 7.1. Suppose d > 4 and fix M ∈ S^d, |M| = 1. Then there exists a constant C(d, λ, Λ, ℓ) ≥ 1 and a stationary function φ belonging P-almost surely to C(R^d) ∩ L^∞(R^d), satisfying
(7.1) -tr( A(x)(M + D^2 φ) ) = -tr(AM) in R^d
and, for each x ∈ R^d and t ≥ 1, the estimate
(7.2) P[ |φ(x)| > t ] ≤ C exp( -t^{1/2} ).
To prove Theorem 7.1, we argue that, after subtracting an appropriate constant, φ_ε has an almost sure limit as ε → 0 to a stationary function φ. We introduce the functions φ̃_ε := φ_ε - ε^{-2} tr(AM). Observe that
(7.3) ε^2 φ̃_ε - tr( A(x)(M + D^2 φ̃_ε) ) = -tr(AM).
To show that φε has an almost sure limit as ε → 0, we introduce the functions
ψ ε ∶= φ ε -φ 2ε .
Then φ̃_ε - φ̃_2ε = ψ_ε - (3/(4ε^2)) tr(AM), and the goal will be to prove bounds on φ̃_ε - φ̃_2ε which are summable over the sequence ε_n := 2^{-n}. We proceed as in the previous section: we first estimate the fluctuations of ψ_ε using a sensitivity estimate and a suitable version of the Efron-Stein inequality. We then use this fluctuation estimate to obtain bounds on its expectation using a variation of the argument in the proof of Proposition 6.2.
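The role of the dyadic sequence is the usual telescoping one (added remark): once the increments are summable, the limit exists.

% Added remark: writing eps_n := 2^{-n},
\[
  \tilde\phi_{\eps_N} \;=\; \tilde\phi_{\eps_0} \;+\; \sum_{n=0}^{N-1}\big(\tilde\phi_{\eps_{n+1}} - \tilde\phi_{\eps_n}\big),
\]
% so if the increments are bounded, almost surely and locally uniformly, by C\,\eps_n^{(\frac{d-4}{2}\wedge 2)-\gamma}
% with d > 4 and \gamma small, the series converges geometrically and \tilde\phi_{\eps_N} has a limit as N \to \infty.
% This is the kind of bound that Lemmas 7.2 and 7.3 below provide, in the form of moment estimates.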
We begin with controlling the fluctuations.
Lemma 7.2. For every p ∈ [1, ∞) and γ > 0, there exists C(p, γ, d, λ, Λ, ) < ∞ such that, for every ε ∈ 0, 1 2 and x ∈ R d , (7.4)
E [ ψ ε (x) -E[ψ ε (x)] p ] 1 p ≤ Cε d-4 2 ∧2 -γ .
Proof. In view of the Efron-Stein inequality for pth moments (cf. (A.2)), it suffices to show that
(7.5) E V * [ψ ε (x)] p 2 1 p ≤ Cε d-4 2 ∧2 -γ .
We start from the observation that ψ ε satisfies the equation
(7.6) ε 2 ψ ε -tr A(x)D 2 ψ ε = 3ε 2 φ 2ε in R d .
Denote the right-hand side by h ε ∶= 3ε 2 φ 2ε .
Step 1. We outline the proof of (7.5). Fix z ∈ R d . We use the notation from the previous section, letting A ′ ∶= π(θ ′ z (ω, ω ′ )) denote a resampling of the coefficients at z. We let ψ ′ ε , φ ′ ε , G ′ ε , etc, denote the corresponding functions defined with respect to A ′ . Applying Lemma 5.2 with δ = ε 2 , in view of (7.6), we find that
ψ ε (x) -ψ ′ ε (x) ≤ C ε 2 + [ψ ε ] C 1,1 1 (z,B 1 ε (z)) + sup y∈B (z), y ′ ∈R d h ε (y ′ ) -h ε (y) -ε 4 y ′ -z 2 G ′ ε (x, z) + y∈Z d G ε (x, y) sup B 2 (y) (h ε -h ′ ε ) =∶ CK(z)ξ ε (x -z) + C y∈Z d H(y, z)ξ ε (z -y)ξ ε (x -y).
Here we have defined
K(z) ∶= ξ ε (x -z) -1 ε 2 + [ψ ε ] C 1,1 1 (z,B 1 ε (z)) + sup y∈B (z), y ′ ∈R d h ε (y ′ ) -h ε (y) -ε 4 y ′ -z 2 G ′ ε (x, z)
and
H(y, z) ∶= (ξ ε (z -y)ξ ε (x -y)) -1 G ε (x, y) sup B 2 (y) (h ε -h ′ ε ).
These are random variables on the probability space Ω × Ω ′ with respect to the probability measure P ∶= P * × P ′ * . Below we will check that, for each p ∈ [1, ∞) and γ > 0, there exists C(p, γ, d, λ, Λ, ) < ∞ such that
(7.7) Ẽ [K(z) p ] 1 p + Ẽ [H(y, z) p ] 1 p ≤ Cε 2-γ .
We first complete the proof of (7.5) assuming that (7.7) holds. In view of the discussion in Section 5, we compute:
E V * [ψ ε (x)] p 2 ≤ E ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣ ⎛ ⎜ ⎝ z∈Z d ⎛ ⎝ CK(z)ξ ε (x -z) + C y∈Z d H(y, z)ξ ε (z -y)ξ ε (x -y) ⎞ ⎠ 2 ⎞ ⎟ ⎠ p 2 ⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦ ≤ CE ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ z∈Z d [K(z)ξ ε (x -z)] 2 p 2 ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ + CE ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣ ⎛ ⎜ ⎝ z∈Z d ⎛ ⎝ y∈Z d H(y, z)ξ ε (z -y)ξ ε (x -y) ⎞ ⎠ 2 ⎞ ⎟ ⎠ p 2 ⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦
. By Jensen's inequality, (6.2) and (7.7),
E ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ z∈Z d [K(z)ξ ε (x -z)] 2 p 2 ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ ≤ z∈Z d ξ 2 ε (x -z) p 2 -1 E z∈Z d K(z) p ξ 2 ε (x -z) ≤ Cε (2-γ)p z∈Z d ξ 2 ε (x -z) p 2 ≤ Cε (2-γ)p
and by Jensen's inequality and (7.7),
E ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣ ⎛ ⎜ ⎝ z∈Z d ⎛ ⎝ y∈Z d H(y, z)ξ ε (z -y)ξ ε (x -y) ⎞ ⎠ 2 ⎞ ⎟ ⎠ p 2 ⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦ = E ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣ ⎛ ⎝ z∈Z d y,y ′ ∈Z d H(y, z)H(y ′ , z)ξ ε (z -y)ξ ε (x -y)ξ ε (z -y ′ )ξ ε (x -y ′ ) ⎞ ⎠ p 2 ⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦ ≤ ⎛ ⎝ z,y,y ′ ∈Z d ξ ε (z -y)ξ ε (x -y)ξ ε (z -y ′ )ξ ε (x -y ′ ) ⎞ ⎠ p 2 -1 × E ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ z,y,y ′ ∈Z d H(y, z) p 2 H(y ′ , z) p 2 ξ ε (z -y)ξ ε (x -y)ξ ε (z -y ′ )ξ ε (x -y ′ ) ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ = ⎛ ⎜ ⎝ z∈Z d ⎛ ⎝ y∈Z d ξ ε (z -y)ξ ε (x -y) ⎞ ⎠ 2 ⎞ ⎟ ⎠ p 2 -1 × z,y,y ′ ∈Z d E H(y, z) p 2 H(y ′ , z) p 2 ξ ε (z -y)ξ ε (x -y)ξ ε (z -y ′ )ξ ε (x -y ′ ) ≤ ⎛ ⎜ ⎝ z∈Z d ⎛ ⎝ y∈Z d ξ ε (z -y)ξ ε (x -y) ⎞ ⎠ 2 ⎞ ⎟ ⎠ p 2
Cε (2-γ)p .
In view of the inequality
(7.8) ⎛ ⎜ ⎝ z∈Z d ⎛ ⎝ y∈Z d ξ ε (z -y)ξ ε (x -y) ⎞ ⎠ 2 ⎞ ⎟ ⎠ 1 2 ≤ C 1 + ε -1 4-d 2 .
which we also will prove below, the demonstration of (7.5) is complete.
To complete the proof, it remains to prove (7.7) and (7.8).
Step 2. Proof of (7.8). We first show that, for every x, z ∈ R d , (7.9)
y∈Z d ξ ε (z -y)ξ ε (x -y) ≤ C exp(-cε x -z ) (1 + x -z ) 4-d + ε d-4 .
Recall that in dimensions d > 4, ξ ε (x) = exp(-aε x )(1+ x ) 2-d . Denote r ∶= x-z . We estimate the sum by an integral and then split the sum into five pieces:
y∈Z d ξ ε (z -y)ξ ε (x -y) ≤ C R d ξ ε (z -y)ξ ε (x -y) dy ≤ C B r 4 (x) ξ ε (z -y)ξ ε (x -y) dy + C B r 4 (z) ξ ε (z -y)ξ ε (x -y) dy + C B 2r (z)∖ B r 4 (z)∪B r 4 (x) ξ ε (z -y)ξ ε (x -y) dy + C B 1 ε (z)∖B 2r (z) ξ ε (z -y)ξ ε (x -y) dy + C R d ∖B 1 ε (z) ξ ε (z -y)ξ ε (x -y) dy.
We now estimate each of the above terms. Observe first that
B r 4 (x) ξ ε (z -y)ξ ε (x -y) dy + B r 4 (z) ξ ε (z -y)ξ ε (x -y) dy ≤ C exp(-cεr) Br(0) (1 + r) 2-d (1 + y ) 2-d dy = C exp(-cεr)(1 + r) 4-d .
Next, we estimate
B 2r (z)∖ B r 4 (z)∪B r 4 (x) ξ ε (z -y)ξ ε (x -y) dy ≤ C exp(-cεr) B 2r (z) (1 + y ) 2(2-d) dyC exp(-cεr)(1 + r) 4-d and B 1 ε (z)∖B 2r (z) ξ ε (z -y)ξ ε (x -y) dy ≤ C B 1 ε ∖B 2r exp(-cεr)(1 + y ) 4-2d dy = C exp(-cεr)(1 + r) 4-d . Finally, since d > 4, R d ∖B 1 ε (z) ξ ε (z -y)ξ ε (x -y) dy ≤ C exp(-cεr) R d ∖B 1 ε exp(-cε y )(1 + y ) 4-2d dy = C exp(-cεr)ε d-4 R d ∖B 1 exp(-2a y )(ε + y ) 4-2d dy = C exp(-cεr)ε d-4 .
Combining the above inequalities yields (7.9).
To obtain (7.8), we square (7.9) and sum it over z ∈ Z d to find that
z∈Z d ⎛ ⎝ y∈Z d ξ ε (z -y)ξ ε (x -y) ⎞ ⎠ 2 ≤ R d C exp (-cε x -z ) (1 + x -z ) 8-2d + ε 2d-8 dz = C + Cε d-8 .
Step 3. The estimate of the first term on the left side of (7.7). Notice that according to Proposition 4.1, we have that
(7.10) K(z) ≤ (T z X (θ ′ z (ω, ω ′ ))) d-1-δ ε 2 + [ψ ε ] C 1,1 1 (z,B 1 ε (z)) + sup y∈B (z), y ′ ∈R d h ε (y ′ ) -h ε (y) -ε 4 y ′ -z 2 .
We control each part individually. First, we claim that for every γ ∈ (0, 1), for every p ∈ (1, ∞), there exists C(γ, λ, Λ, d, , p) such that
(7.11) E [ψ ε ] C 1,1 1 (z,B 1 ε (z)) p 1 p ≤ Cε 2-γ .
Observe that ψ ε is a solution of (7.12)
-tr A(x)D 2 ψ ε = -ε 2 φ ε + 4ε 2 φ 2ε in R d .
Denote the right side by
f ε ∶= -ε 2 φ ε + 4ε 2 φ 2ε = -ε 2 ψ ε + 3ε 2 φ 2ε .
We show that, for every γ > 0 and p ∈ [1, ∞), there exists C(γ, p, d, λ, Λ, ) < ∞ such that, for every ε ∈ (0, 1 2 ],
(7.13) E f ε L ∞ (B 1 ε ) + ε -β [f ε ] C 0,β (B 1 ε ) p 1 p ≤ Cε 2-γ .
We first observe that (6.1), (6.3), and (6.5) imply that
E ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣ exp ⎛ ⎜ ⎝ ⎛ ⎝ 1 E(ε) sup B √ d ε 2 φ ε -tr AM ⎞ ⎠ 1 2 +δ ⎞ ⎟ ⎠ ⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦ ≤ C.
A union bound and stationarity then give, for every γ > 0 and p ∈ [1, ∞),
(7.14) E ε 2 φ ε -tr AM p L ∞ (B 4 ε ) 1 p ≤ Cε -γ E(ε).
where
C = C(γ, p, d, λ, Λ, ) < ∞. The Krylov-Safonov estimate yields E ε -β [φ ε ] C 0,β (B 2 ε ) p 1 p ≤ ε -2 E(ε).
The previous two displays and the triangle inequality yield the claim (7.13). Now (7.11) follows from Theorem 3.1, (7.13), (7.14) and the Hölder inequality. We next show that for every γ ∈ (0, 1) and for every p ∈ (1, ∞),
(
h ε (y ′ ) > c2 2n ε 2 ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ ≤ ∞ n=n(t)+1
C2 dn P sup
y ′ ∈B 1 ε h ε (y ′ ) > c2 2n ε 2 ≤ Cε (2-γ)p ∞ n=n(t)+1 2 dn 2 -2np ε -2p ≤ C ε (2-γ) t -1 p-d 2 .
Combining the above, taking p sufficiently large, integrating over t, and shrinking γ and redefining p yields (7.15). A combination of (6.7), (7.10), (7.11), (7.15) and the Hölder inequality yields the desired bound for the first term on the left side of (7.7).
Step 4. The estimate of the second term on the left side of (7.7). According to (5.19) and Proposition 4.1, for X and δ(d, λ, Λ) > 0 as in Proposition 4.1,
G ε (x, y) sup B 2 (y) (h ε -h ′ ε ) ≤ Cε 2 (T y X ) d-1-δ ξ ε (y -x)(T z X ) d+1-δ ξ 2ε (y -z)
≤ Cε 2 (T y X ) d-1-δ (T z X ) d+1-δ ξ ε (yx)ξ ε (yz).
Therefore, H(y, z) ≤ Cε 2 (T y X ) d-1-δ (T z X ) d+1-δ . Thus Hölder's inequality yields that, for every p ∈ (1, ∞), Ẽ [H(y, z) p ] ≤ Cε 2p .
This completes the proof of (7.7).
We next control the expectation of φ̃_ε - φ̃_2ε = ψ_ε(x) - (3/(4ε^2)) tr(AM).
Lemma 7.3. For every p ∈ [1, ∞) and γ > 0, there exists C(p, γ, d, λ, Λ, ℓ) < ∞ such that, for every ε ∈ (0, 1/2] and x ∈ R^d,
(7.16) | E[ψ_ε(x)] - (3/(4ε^2)) tr(AM) | ≤ C ε^{((d-4)/2 ∧ 2) - γ}.
Proof. The main step in the argument is to show that
(7.17) | E[ψ_ε(0)] - 4 E[ψ_2ε(0)] | ≤ C ε^{((d-4)/2 ∧ 2) - γ}.
Let us assume (7.17) for the moment and see how to obtain (7.16) from it. First, it follows from (7.17) that, for every ε and m, n ∈ N with m ≤ n,
(2 -m ε) 2 E [ψ 2 -m ε (0)] -(2 -n ε) 2 E [ψ 2 -n ε (0)] ≤ C(2 -m ε) d 2 ∧4
.
Thus, the sequence {(2 -n ε) 2 E [ψ 2 -n ε (0)]} n∈N is Cauchy and there exists L ∈ R with
(2 -m ε) 2 E [ψ 2 -m ε (0)] -L ≤ C(2 -m ε) d 2 ∧4
. Taking m = 0 and dividing by ε 2 , this yields
E [ψ ε (0)] - L ε 2 ≤ Cε d-4 2 ∧2 .
But in view of (6.3), we have that L = 3 4 tr AM . This completes the proof of the lemma, subject to the verification of (7.17).
We denote h ε (x) ∶= ψ ε (x) -4ψ 2ε (x) so that we may rewrite (7.17 and observe that η ε is a solution of (7.20)
-tr A(x)D 2 η ε = -ε 2 h ε in R d .
In the first step, we show that ψ ε has small oscillations in balls of radius ε -1 , and therefore so do h ε and η ε . This will allow us to show in the second step that (7.20) is in violation of the maximum principle unless the mean of h ε is close to zero.
Step 1. The oscillation bound for ψ ε . The claim is that, for every γ ∈ (0, 1) and p ∈ [1, ∞), there exists C(p, γ, λ, Λ, d, ) such that By the equation (7.12) for ψ ε , the Krylov-Safonov estimate (3.2) and the bounds (7.13), we have that (taking γ sufficiently small), The previous two lines complete the proof of (7.21).
Step 2. We prove something stronger than (7.18) by showing that
E sup x∈B 1 ε h ε (x) ≤ Cε d-4 2 ∧2
.
By (7.21), it suffices to show that E sup
x∈B 1 ε h ε (x) ≥ -Cε d-4 2 ∧2
and E inf
x∈B 1 ε h ε (x) ≤ Cε d-4
2 ∧2 .
We will give only the argument for the second inequality in the display above since the proof of the first one is similar. Define the random variable
κ ∶= ε 2 sup x∈B 1 ε ψ ε (x) -E [ψ ε (0)] .
Observe that the function
x ↦ ψ ε (x) -8κ x 2 has a local maximum at some point x 0 ∈ B 1 2ε .
The equation (7.12) for ψ ε implies that -ε 2 h ε (x 0 ) ≥ -16Λdκ ≥ -Cκ.
Thus inf
x∈B 1 2ε h ε (x) ≤ h ε (x 0 ) ≤ Cκ = C sup x∈B 1 ε ψ ε (x) -E [ψ ε (0)] .
Taking expectations and applying (7.21) yields the claim.
We now complete the proof of Theorem 7.1.
Proof of Theorem 7.1. According to (7.4), (7.21), (7.16) and a union bound, the differences φ̃_{2^{-n}} - φ̃_{2^{-(n+1)}} are, P-almost surely, locally uniformly of size at most C 2^{-n(((d-4)/2 ∧ 2) - γ)} for all sufficiently large n; since d > 4, these bounds are summable in n, and therefore φ̃_{2^{-n}} converges P-almost surely and locally uniformly to a stationary function φ.
Passing to the limit ε → 0 in (7.3) and using the stability of solutions under uniform convergence, we obtain that φ is a solution of (7.1). The estimates (7.2) are immediate from (1.17). This completes the proof of the theorem. and apply (A.1) to get
⌊2 β⌋ n=0 1 n! E X βn ≤ ⌊2 β⌋ n=0 1 n! E X 2 nβ 2 ≤ ⌊2 β⌋ n=0 1 n! E [V[X]] nβ 2 ≤ exp(E [V[X]] β 2 ).
For the other terms, we apply (A.2) (with κ in place of C) and the discrete Holder inequality to obtain, for α > 0 to be selected below,
∞ n=⌊2 β⌋+1 1 n! E X βn ≤ ∞ n=1 1 n! (κnβ) nβ 2 E (V[X]) nβ 2 ≤ ∞ n=1 1 n! κnβ α n β 2 ∞ n=1 1 n! E (αV[X]) nβ 2 2 2-β 2-β 2
.
We estimate the first factor on the right hand side by using the classical inequality (related to Stirling's approximation) which states that, for every n ∈ N, n! ≥ (2π) 1 2 n n+1 2 exp(-n).
This yields that for every α > eκβ,
∞ n=1 1 n! κnβ α n ≤ 1 √ 2π ∞ n=1 n -1 2 eκβ α n ≤ α α -eκβ .
Combining this with our previous estimate, we obtain
∞ n=⌊2 β⌋+1 1 n! E X β n ≤ α α -eκβ β 2 ∞ n=1 1 n! E (αV[X]) nβ 2 2 2-β 2-β 2
.
Observe that
∞ n=1 1 n! E (αV[X]) nβ 2 2 2-β ≤ ∞ n=1 1 n! E (αV[X]) nβ 2-β = E exp((αV[X]) β 2-β ,
and this implies that
∞ n=⌊2 β⌋+1 1 n! E X β n ≤ α α -eκβ β 2 E exp((αV[X]) β 2-β 2-β 2 .
Combining all of the previous estimates yields that
E exp X β ≤ exp E[V[X]] β 2 + CE exp (CV[X]) β 2-β 2-β 2 .
Since β ≤ 2 and we can take α = 20, the constant C is universal. This completes the proof.
1. 3 .
3 Assumptions. In this subsection, we introduce some notation and present the hypotheses. Throughout the paper, we fix ellipticity constants 0 < λ ≤ Λ and the dimension d ≥ 2. The set of d-by-d matrices is denoted by M d , the set of d-by-d symmetric matrices is denoted by S d , and Id ∈ S d is the identity matrix. If M, N ∈ S d , we write M ≤ N if every eigenvalue of N -M is nonnegative; M denotes the largest eigenvalue of M . 1.3.1. Definition of the probability space (Ω, F). We begin by giving the structural hypotheses on the equation. We consider matrices A ∈ M d which are uniformly elliptic: (1.9) λId ≤ A(⋅) ≤ ΛId and Hölder continuous in x:
71 22) Esupx∈B 1 2ε ψ ε (x)ψ ε ([x]) p Cε σ ε 2-γ ≤ Cε 2 .Here [x] denotes the nearest point of Z d to x ∈ R d . By the fluctuation estimate (7.4), stationarity and a union bound, we have, for every γ > 0 and p ∈ (1, ∞), ∩Z d ψ ε (z) -E [ψ ε (0)]
1.3.3. Assumptions on the random environment. Throughout the paper, we fix ≥ 2 √ d and a probability measure P on (Ω, F) which satisfies the following:
(P1) P has Z d -stationary statistics: that is, for every z ∈ Z d and E ∈ F,
By a union bound, we find that, for every t > 0,
$$\mathbb{P}\Big[\sup_{y'\in\mathbb{R}^d}\big(h_\varepsilon(y')-\varepsilon^{4}|y'-z|^{2}\big)>t\Big]\;\le\;\sum_{n=0}^{n(t)}\mathbb{P}\Big[\sup_{y'\in B_{2^{n}/\varepsilon}(z)}h_\varepsilon(y')>t\Big]\;+\;\sum_{n=n(t)+1}^{\infty}\mathbb{P}\Big[\sup_{y'\in B_{2^{n}/\varepsilon}(z)}h_\varepsilon(y')>c\,2^{2n}\varepsilon^{2}\Big],$$
where n(t) is the largest positive integer satisfying 2^{2n(t)} ≤ tε^{-2}. By (7.14),
$$\sum_{n=0}^{n(t)}\mathbb{P}\Big[\sup_{y'\in B_{2^{n}/\varepsilon}(z)}h_\varepsilon(y')>t\Big]\;\le\;(n(t)+1)\,\mathbb{P}\Big[\sup_{y'\in B_{2^{n(t)}/\varepsilon}(z)}h_\varepsilon(y')>t\Big]\;\le\;C(n(t)+1)2^{dn(t)}\,\mathbb{P}\Big[\sup_{y'\in B_{1/\varepsilon}}h_\varepsilon(y')>t\Big]\;\le\;C(n(t)+1)2^{dn(t)}\big(\varepsilon^{\gamma-2}t\big)^{-p}\;\le\;C\big(\log t\varepsilon^{-2}\big)\,t^{-1}\,\varepsilon^{(2-\gamma)p-d},$$
and
$$\sum_{n=n(t)+1}^{\infty}\mathbb{P}\Big[\sup_{y'\in B_{2^{n}/\varepsilon}(z)}h_\varepsilon(y')>c\,2^{2n}\varepsilon^{2}\Big]$$
(7.15)
$$\mathbb{E}\Big[\sup_{y\in B(z),\;y'\in\mathbb{R}^d}\big(h_\varepsilon(y')-h_\varepsilon(y)-\varepsilon^{4}|y'-z|^{2}\big)^{p}\Big]^{1/p}\;\le\;C\varepsilon^{2-\gamma}.$$
Acknowledgements. The second author was partially supported by NSF Grant DMS-1147523.
Appendix A. Proof of the stretched exponential spectral gap inequality
We give the proof of Proposition 2.2, the Efron-Stein-type inequality for stretched exponential moments. We first recall the classical Efron-Stein (often called the "spectral gap") inequality. Given a probability space (Ω, F, P) and a sequence F k ⊆ F of independent σ-algebras, let X denote a random variable which is measurable with respect to F ∶= σ(F 1 , . . . , F n ). The classical Efron-Stein inequality states that
where
Therefore, we see that the variance is controlled by the 2-norm of the vertical derivative
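Since the display stating the inequality is missing from this extraction, the following small Python sketch illustrates the classical Efron–Stein bound in its standard resampling form, Var[X] ≤ ½ Σ_k E[(X − X^(k))²], with X = f(Z_1, …, Z_n) and X^(k) obtained by resampling the k-th coordinate; the function f and the Monte Carlo sizes below are arbitrary choices made for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8            # number of independent coordinates
m = 200_000      # Monte Carlo sample size


def f(z):
    # an arbitrary nonlinear function of n independent inputs
    return np.max(z, axis=-1) + 0.1 * np.sum(z**2, axis=-1)


Z = rng.standard_normal((m, n))
X = f(Z)

# Efron-Stein right-hand side: (1/2) * sum_k E[(X - X^(k))^2],
# where X^(k) is recomputed after resampling coordinate k only.
rhs = 0.0
for k in range(n):
    Zk = Z.copy()
    Zk[:, k] = rng.standard_normal(m)
    rhs += 0.5 * np.mean((X - f(Zk)) ** 2)

print(f"Var[X]      ~ {X.var():.4f}")
print(f"Efron-Stein ~ {rhs:.4f}   (should be an upper bound)")
```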
If we have control of higher moments of V[X], then we can obtain estimates on the moments of X -E[X] . Indeed, a result of Boucheron, Lugosi, and Massart which can be found in [START_REF] Boucheron | Concentration inequalities using the entropy method[END_REF][START_REF] Boucheron | Concentration inequalities[END_REF] states that for every p ≥ 2,
where we may take C = 1.271. The same authors were also able to obtain similar estimates on exponential moments of X -E[X]. Their result [START_REF] Boucheron | Concentration inequalities using the entropy method[END_REF][START_REF] Boucheron | Concentration inequalities[END_REF] states that
We now give the proof of Proposition 2.2, which is obtained by writing a power series formula for the stretched exponential and then using (A.2) to estimate each term. We thank J. C. Mourrat for pointing out this simple argument and allowing us to include it here.
Proof of Proposition 2.2. We show that for every β ∈ (0, 2),
We may assume without loss of generality that E[X] = 0. Fix β ∈ (0, 2). We estimate the power series
by splitting up the sum into two pieces and estimating each of them separately as follows. First, we consider the terms in which the power of X is less than 2, | 89,942 | [
"929207"
] | [
"60",
"194207"
] |
01483492 | en | [
"math"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01483492/file/Kavian-Qiong-Zhang-wave.pdf | Otared Kavian
email: kavian@math.uvsq.fr
Qiong Zhang
email: zhangqiong@bit.edu.cn
Polynomial Stabilization of Solutions to a Class of Damped Wave Equations *
Keywords: wave equation, Kelvin-Voigt, damping, polynomial stability. MSC: primary 37B37, secondary 35B40, 93B05, 93B07, 35M10, 34L20, 35Q35, 35Q72
We consider a class of wave equations of the type ∂ tt u + Lu + B∂ t u = 0, with a self-adjoint operator L, and various types of local damping represented by B. By establishing appropriate and raher precise estimates on the resolvent of an associated operator A on the imaginary axis of C, we prove polynomial decay of the semigroup exp(-tA) generated by that operator. We point out that the rate of decay depends strongly on the concentration of eigenvalues and that of the eigenfunctions of the operator L. We give several examples of application of our abstract result, showing in particular that for a rectangle Ω := (0, L 1 ) × (0, L 2 ) the decay rate of the energy is different depending on whether the ratio L 2 1 /L 2 2 is rational, or irrational but algebraic.
Introduction
In this paper, we study the long time behavior of a class of wave equations with various types of damping (such as Kelvin-Voigt damping, viscous damping, or both). More precisely, let N ≥ 1 be an integer, and let Ω ⊂ R N be a bounded Lipschitz domain, its boundary being denoted by ∂Ω. The wave equations we study in this paper are of the following type
u tt -∆u + b 1 (x)u t -div (b 2 (x)∇u t ) = 0 in (0, ∞) × Ω, u(t, σ) = 0 on (0, ∞) × ∂Ω, u(0, x) = u 0 (x)
in Ω u t (0, x) = u 1 (x)
in Ω.
(1.1)
In the above equation ∆ is the Laplace operator on R N and we denote u t := ∂ t u and u tt := ∂ tt u, while b 1 , b 2 ∈ L ∞ (Ω) are two nonnegative functions such that at least one of the following conditions
∃ ε 1 > 0 and ∅ = Ω 1 ⊂ Ω, s.t. Ω 1 is open and b 1 ≥ ε 1 on Ω 1 (1.2) or ∃ ε 2 > 0 and ∅ = Ω 2 ⊂ Ω, s.t. Ω 2 is open and b 2 ≥ ε 2 on Ω 2 (1.3)
is satisfied. When b 1 ≡ 0 and condition (1.3) is satisfied, the wave equation described in (1.1) corresponds to a wave equation with local viscoleastic damping on Ω 1 , that is a damping of Kelvin-Voigt's type (see, e.g., S. Chen, K. Liu & Z. Liu [START_REF] Chen | Spectrum and stability for elastic systems with global or local Kelvin-Voigt damping[END_REF], K. Liu & Z. Liu [START_REF] Liu | Exponential decay of energy of vibrating strings with local viscoelasticity[END_REF], M. Renardy [START_REF] Renardy | On localized Kelvin-Voigt damping[END_REF], and references therein). The case in which b 2 ≡ 0, and (1.2) is satisfied, corresponds to a damped wave equation where the damping, or friction, is activated on the subdomain Ω 1 , and is proportional to the velocity u t (see, e.g., C. Bardos, G. Lebeau & J. Rauch [START_REF] Bardos | Sharp sufficient conditions for the observation, control and stabilization of waves from the boundary[END_REF], G. Chen, S.A. Fulling, F.J. Narcowich & S. Sun [START_REF] Chen | Exponential decay of energy of evolution equation with locally distributed damping[END_REF], and references therein). The energy function associated to the system (1.1) is
E(t) = 1 2 Ω |u t (t, x)| 2 dx + 1 2 Ω |∇u(t, x)| 2 dx. (1.4)
and it is dissipated according to the following relation:
d dt E(t) = - Ω b 1 (x)|u t (t, x)| 2 dx - Ω b 2 (x)|∇u t (t, x)| 2 dx. (1.5)
If Ω 1 = Ω, that is a damping of viscous type exists on the whole domain, it is known that the associated semigroup is exponentially stable (G. Chen & al [START_REF] Chen | Exponential decay of energy of evolution equation with locally distributed damping[END_REF]). It is also known that the Kelvin-Voigt damping is stronger than the viscous damping, in the sense that if Ω 2 = Ω, the damping for the wave equation not only induces exponential energy decay, but also restricts the spectrum of the associated semigroup generator to a sector in the left half plane, and the associated semigroup is analytic (see e.g. S. Chen & al [START_REF] Chen | Spectrum and stability for elastic systems with global or local Kelvin-Voigt damping[END_REF] and references therein). When b 2 ≡ 0 and the viscous damping is only present on a subdomain Ω 1 = Ω, it is known that geometric optics conditions guarantee the exact controllability, and the exponential stability of the system (C. Bardos & al. [5]). However, when b 1 ≡ 0, and Ω 2 = Ω, even if Ω 2 satisfies those geometric optics conditions, the Kelvin-Voigt damping model does not necessarily have an exponential decay. In fact, for the one dimensional case N = 1, S. Chen & al. [START_REF] Chen | Spectrum and stability for elastic systems with global or local Kelvin-Voigt damping[END_REF] have proved that when b 2 := 1 Ω 2 (and Ω 2 = Ω), the energy of the Kelvin-Voigt system (1.1) does not have an exponential decay. A natural question is to study the decay properties of the wave equation with local viscoelastic damping, or when viscous damping is local and geometric optics conditions are not satisfied.
Our aim is to show that, if one of the conditions (1.2) or (1.3) is satisfied then, as a matter of fact, the energy functional decreases to zero at least with a polynomial rate: more precisely, there exists a real number m > 0 and a positive constant c > 0 depending only on Ω 1 , Ω 2 , Ω and on b 1 , b 2 , such that E(t) ≤ c (1 + t) 2/m ∇u 0 2 + u 1 2 .
The positive number m depends in an intricate way on the distribution of the eigenvalues (λ k ) k≥1 of the Laplacian with Dirichlet boundary conditions on ∂Ω, and it depends also strongly on the concentration, or localization, properties of the corresponding eigenfunctions, and thus on the geometry of Ω, and those of Ω 1 , Ω 2 . More precisely, let (λ k ) k≥1 be the sequence of eigenvalues given by
-∆ϕ k,j = λ k ϕ k,j in Ω, ϕ k,j ∈ H 1 0 (Ω), Ω ϕ k,j (x)ϕ ℓ,i (x)dx = δ kℓ δ ij .
Here we make the convention that each eigenvalue has multiplicity m k ≥ 1 and that in the above relation 1 ≤ j ≤ m k and 1 ≤ i ≤ m ℓ . As usual, the eigenvalues λ k are ordered in an increasing order: 0
< λ 1 < λ 2 < • • • < λ k < λ k+1 < • • • .
We shall consider the cases where an exponent denoted by γ 1 ≥ 0 exists such that for some constant c 0 > 0 one has
∀ k ≥ 2, min λ k λ k-1 -1, 1 - λ k λ k+1 ≥ c 0 λ -γ 1 k . (1.6)
We shall need also the following assumption on the concentration properties of the eigenfunctions ϕ k,j : there exist a constant c 1 > 0 and an exponent
γ 0 ∈ R such that if ϕ := m k j=1 c j ϕ k,j with m k j=1 |c j | 2 = 1, then ∀ k ≥ 2, Ω 1 |ϕ(x)| 2 dx + Ω 2 |∇ϕ(x)| 2 dx ≥ c 1 λ -γ 0 k . (1.7)
Then we show that if m := 3 + 2γ 0 + 4γ 1 , the decay of the energy is at least of order (1 + t) -2/m . The fact that the above conditions are satisfied for certain domains Ω, and subdomains Ω 1 , Ω 2 , depends strongly on the geometry of these domains and will be investigated later in this paper, by giving examples in which these assumptions are satisfied.
Before stating our first result, which will be in an abstract setting and will be applied to several examples later in this paper, let us introduce the following notations. We consider an infinite dimensional, complex, separable Hilbert space H 0 , and a positive (in the sense of forms) self-adjoint operator (L, D(L)) acting on H 0 , which has a compact resolvent (in particular D(L) is dense in H 0 and (L, D(L)) is an unbounded operator). The dual of H 0 having been identified with H 0 , we define the spaces H 1 and H -1 by
H 1 := D(L 1/2 )
and
H -1 := (H 1 ) ′ , (1.8)
the space H 1 being endowed with the norm u → L 1/2 u . We shall denote also by •, • the duality between H -1 and H 1 , adopting the convention that if f ∈ H 0 and ϕ ∈ H 1 , then f, ϕ = (ϕ|f ).
As we mentioned earlier, we adopt the convention that the spectrum of L consists in a sequence of distinct eigenvalues (λ k ) k≥1 , with the least eigenvalue λ 1 > 0, numbered in an increasing order and
λ k → +∞ as k → ∞, each eigenvalue λ k having multiplicity m k ≥ 1.
Next we consider an operator B satisfying the following conditions:
B : H 1 -→ H -1 is bounded, B * = B, i.e. Bϕ, ψ = Bψ, ϕ for ϕ, ψ ∈ H 1 , ∀ ϕ ∈ H 1 , Bϕ, ϕ ≥ 0.
(1.9)
We will assume moreover that B satisfies the non degeneracy condition
∀k ≥ 1, 1 β k := min { Bϕ, ϕ ; ϕ ∈ N (L -λ k I), ϕ = 1} > 0, (1.10)
and we study the abstract second order equation of the form
u tt + Lu + Bu t = 0 in (0, ∞) u(t) ∈ D(L) u(0) = u 0 ∈ D(L) u t (0) = u 1 ∈ H 1 .
(1.11)
Our first result is:
Theorem 1.1. Assume that the operators (L, D(L)) and B are as above, and let the eigenvalues (λ k ) k≥1 of L satisfy
$$\lambda_* := \inf_{k\ge 1}\ \frac{\lambda_k}{\lambda_{k+1}} > 0. \tag{1.12}$$
Moreover assume that there exist a constant c 0 > 0 and two exponents γ 0 ∈ R and γ 1 ≥ 0 such that, β k being defined by (1.10), for all integers k ≥ 1 we have
$$\beta_k \le c_0\,\lambda_k^{\gamma_0}, \tag{1.13}$$
$$\frac{\lambda_{k-1}}{\lambda_k-\lambda_{k-1}} + \frac{\lambda_{k+1}}{\lambda_{k+1}-\lambda_k} \le c_0\,\lambda_k^{\gamma_1}. \tag{1.14}$$
Then, setting m := 3 + 2γ 0 + 4γ 1 , there exists a constant c * > 0 such that for all (u 0 , u 1 ) ∈ D(L) × H 1 , and for all t > 0, the energy of the solution to equation (1.11) satisfies
$$(Lu(t)\,|\,u(t)) + \|u_t(t)\|^2 \;\le\; c_*\,(1+t)^{-2/m}\,\big[(Lu_0\,|\,u_0) + \|u_1\|^2\big]. \tag{1.15}$$
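To make the role of the exponents concrete, the short Python sketch below evaluates m = 3 + 2γ0 + 4γ1 and the corresponding energy decay rate 2/m for the (γ0, γ1) pairs worked out later in the paper; it is only an illustration of the formula in Theorem 1.1, not part of the proof, and for the algebraic rectangle the small ε > 0 is dropped (so the printed 2/9 is the ε → 0 limit of 2/(9 + 4ε)).

```python
# Decay exponent of Theorem 1.1: m = 3 + 2*gamma0 + 4*gamma1,
# and the energy decays at least like (1 + t)^(-2/m).
cases = {
    "1D, Kelvin-Voigt strip (gamma0=-1, gamma1=1/2)": (-1.0, 0.5),
    "2D square, Kelvin-Voigt strip (gamma0=-1, gamma1=1)": (-1.0, 1.0),
    "2D square, viscous strip (gamma0=0, gamma1=1)": (0.0, 1.0),
    "2D algebraic rectangle, Kelvin-Voigt (gamma0=-1, gamma1=2+eps)": (-1.0, 2.0),
}
for label, (g0, g1) in cases.items():
    m = 3 + 2 * g0 + 4 * g1
    print(f"{label}: m = {m:g}, energy decay ~ (1+t)^(-{2/m:.3g})")
```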
Our approach consists in establishing, by quite elementary arguments, rather precise a priori estimates on the resolvent of the operator
u := (u 0 , u 1 ) → Au := (-u 1 , Lu 0 + Bu 1 ) (1.16)
on the imaginary axis i R of the complex plane. Indeed the proof of Theorem 1.1 is based on results characterizing the decay of the semigroup exp(-tA) in terms of bounds on the norm of the resolvent (Ai ω) -1 as |ω| → ∞ (see J. Prüss [START_REF] Prüss | On the spectrum of C 0 -semigroups[END_REF], W. Arendt & C.J.K. Batty [START_REF] Arendt | Tauberian theorems and stability of oneparameter semigroups[END_REF], Z. Liu & B.P. Rao [START_REF] Liu | Characterization of polynomial decay rate for the solution of linear evolution equation[END_REF], A. Bátkai, K.J. Engel, J. Prüss and R. Schnaubelt [START_REF] Bátkai | Polynomial stability of operator semigroups[END_REF], C.J.K. Batty & Th. Duyckaerts [START_REF] Batty | Non-uniform stability for bounded semigroups on Banach spaces[END_REF]). In this paper we use the following version of these results due to A. Borichev & Y. Tomilov [START_REF] Borichev | Optimal polynomial decay of functions and operator semigroups[END_REF]:
Theorem 1.2. Let S(t) := exp(-tA) be a C 0 -semigroup on a Hilbert space H generated by the operator A. Assume that i R is contained in ρ(A), the resolvent set of A, and denote R(λ, A) := (A -λI) -1 for λ ∈ ρ(A). Then for m > 0 fixed, one has:
$$\sup_{\omega\in\mathbb{R}}\ \frac{\|R(i\omega, A)\|}{(1+|\omega|)^{1/m}} < \infty \iff \sup_{t\ge 0}\ (1+t)^{m}\,\|S(t)A^{-1}\| < \infty. \tag{1.17}$$
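For orientation, and anticipating the proof of Theorem 1.1 given in Section 2 (this bridging display is an addition, with the exponent 1/m playing the role of the parameter m of Theorem 1.2): once the resolvent bound ‖R(iω, A)‖ ≤ C(1 + |ω|^m) is established, one obtains, for U_0 ∈ D(A),

```latex
\|S(t)U_0\|_{\mathcal H}\;\le\;\|S(t)A^{-1}\|\,\|AU_0\|_{\mathcal H}
\;\le\;\frac{C}{(1+t)^{1/m}}\,\|AU_0\|_{\mathcal H},
```

and since the energy of the solution equals ‖S(t)U_0‖²_H, this is the mechanism behind the (1 + t)^{-2/m} rate announced in (1.15).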
The remainder of this paper is organized as follows. In Section 2 we gather a few notations and preliminary results concerning equation (1.1), and we prove Theorem 1.1 by establishing a priori estimates for the resolvent of the generator of the semigroup associated to (1.1), in order to use the above Theorem 1.2. In Section 3 we apply our abstract result to various wave equations in dimension one, and in Section 4 we give a few examples of damped wave equations in higher dimensions when Ω := (0, L 1 )ו • • (0, L N ), with N ≥ 2, which show that depending on algebraic properties of the numbers L 2 i /L 2 j the decay rate of the energy may be different.
An abstract result
As stated in the introduction, in what follows H 0 is an infinite dimensional, separable, complex Hilbert space, whose scalar product and norm are denoted by (•|•) and • , on which we consider a positive, densely defined selfadjoint operator (L(D(L))) acting on H 0 . With the definition (1.8) we have H 1 ⊂ H 0 = (H 0 ) ′ ⊂ H -1 , with dense and compact embeddings, and L can be considered as a selfadjoint isomorphism (even an isometry) between the Hilbert spaces H 1 and H -1 . We denote R * := R \ {0}, N * := N \ {0}. By an abuse of notations, X being a Banach space, when there is no risk of confusion we may write f := (f 1 , f 2 ) ∈ X to mean that both functions f 1 and f 2 belong to X (rather than writing f ∈ X × X or f ∈ X 2 ). Thus we shall write also (f |g) := (f
1 |g 1 ) + (f 2 |g 2 ) for f = (f 1 , f 2 ) ∈ H 0 × H 0 and g = (g 1 , g 2 ) ∈ H 0 × H 0 . Analogously f will stand for f 1 2 + f 2 2 1/2 .
Recall that we have denoted by (λ k ) k≥1 the increasing sequence of distinct eigenvalues of L, each eigenvalue λ k having multiplicity m k ≥ 1. We shall denote by P k the orthogonal projection of H 0 on the eigenspace N (Lλ k I). We denote by
F k := j≥k+1 N (L -λ j I) , (2.1)
and
E k := (N (L -λ k I) ⊕ F k ) ⊥ = 1≤j≤k-1 N (L -λ j I) , (2.2)
(note that E 1 = {0}). We recall that E k and F k are invariant under the action of L, and obviously under that of Lλ k I. More precisely
L(E k ) = (L -λ k I)(E k ) = E k ,
and also
L (D(L) ∩ F k ) = (L -λ k I) (D(L) ∩ F k ) = F k .
Next we consider a bounded linear operator B satisfying the conditions (1.9), and we recall that one has a Cauchy-Schwarz type inequality for B, more precisely
| Bϕ, ψ | ≤ Bϕ, ϕ 1/2 Bψ, ψ 1/2 , ∀ ϕ, ψ ∈ H 1 . (2.3)
which implies in particular that if Bϕ, ϕ = 0 then we have Bϕ = 0. We shall consider the abstract damped wave equation of the form (1.11), and we introduce the Hilbert space
H := H 1 × H 0 , (2.4)
corresponding to the energy space associated to equation (1.11), whose elements will be denoted by u := (u 0 , u 1 ) and whose norm is given by
u 2 H = L 1/2 u 0 2 + u 1 2 .
In order to solve and study (1.11), we define an unbounded operator (A, D(A)) acting on H by setting
Au := (-u 1 , Lu 0 + Bu 1 ), (2.5)
for u ∈ D(A) defined to be
D(A) := {u ∈ H ; u 1 ∈ H 1 , Lu 0 + Bu 1 ∈ H 0 } . (2.6)
Since for u = (u 0 , u 1 ) ∈ D(A) we have
(Au|u) H = Bu 1 , u 1 ≥ 0,
one can easily see that the operator A is m-accretive on H, that is for any λ > 0 and any f ∈ H there exists a unique u ∈ D(A) such that λAu + u = f, and u ≤ f .
Thus D(A) is dense in H, and A is a closed operator generating a C 0semigroup acting on H, denoted by S(t) := exp(-tA) (see for instance K. Yosida [START_REF] Yosida | Functional Analysis[END_REF], chapter IX). Then, writing the system (1.11) as a Cauchy problem in H:
dU dt + AU (t) = 0 for t > 0, U (0) = U 0 := (u 0 , u 1 ) ∈ H,
we have U (t) = (U 0 (t), U 1 (t)) = exp(-tA)U 0 , and the solution of (1.11) is given by u(t) = U 0 (t), the first component of U (t).
In order to study the behavior of u(t), or rather that of U (t), as t → +∞, we are going to show that the resolvent set of A contains the imaginary axis i R of the complex plane and that, under appropriate assumptions on the operators L and B, the norm (Ai ωI) -1 has a polynomial growth as |ω| → ∞.
Lemma 2.1. The adjoint of (A, D(A)) is given by the operator (A * , D(A * )) where
D(A * ) = {v ∈ H : v 1 ∈ H 1 , -Lv 0 + Bv 1 ∈ H 0 } , (2.7)
and for v = (v 0 , v 1 ) ∈ D(A * ) we have
A * v = (v 1 , -Lv 0 + Bv 1 ) . (2.8)
Proof. Since D(A) is dense in H, the adjoint of A can be defined. Recall that v = (v 0 , v 1 ) ∈ D(A * ) means that there exists a constant c > 0 (depending on v) such that
∀ u = (u 0 , u 1 ) ∈ D(A), |(Au|v) H | ≤ c u H .
To determine the domain of A * , let v ∈ D(A * ) be given, and consider first an element u = (u 0 , 0) ∈ D(A). Thus Au = (0, Lu 0 ) ∈ H and
|(Au|v) H | = |(Lu 0 |v 1 )| ≤ c u H = c L 1/2 u 0 .
This means that the linear form u 0 → (Lu 0 |v 1 ) extends to a continuous linear form on H 1 , and this is equivalent to say that v 1 ∈ H 1 , and that for u = (u 0 , 0) ∈ H we have
(Au|v) H = Lu 0 , v 1 = (v 1 |u 0 ) H 1 .
Now take u = (0, u 1 ) ∈ D(A). Since, according to (1.9), B * = B, we have
(Au|v) H = (-u 1 |v 0 ) H 1 + Bu 1 , v 1 = -(L 1/2 u 1 |L 1/2 v 0 ) + Bv 1 , u 1 ,
and thus (Au|v) H = -Lv 0 + Bv 1 , u 1 and since v ∈ D(A * ) means that the mapping u 1 → (Au|v) extends to a continuous linear form on H 0 , we conclude that
-Lv 0 + Bv 1 ∈ H 0 , and (Au|v) H = (-Lv 0 + Bv 1 |u 1 ) = (u 1 | -Lv 0 + Bv 1 ).
From these observations it is easy to conclude that in fact
A * v = (v 1 , -Lv 0 + Bv 1 ),
and that the domain of A * is precisely given by (2.7).
We shall use the following classical results of S. Banach which characterizes operators having a closed range (see for instance K.Yosida [START_REF] Yosida | Functional Analysis[END_REF], chapter VII, section 5): Theorem 2.2. Let (A, D(A)) be a densely defined operator acting on a Hilbert space H. Then
R(A) closed ⇐⇒ R(A * ) closed ,
and either of the above properties is equivalent to either of the following equivalent equalities
R(A) = N (A * ) ⊥ ⇐⇒ R(A * ) = N (A) ⊥ . Moreover when N (A) = N (A * ) = {0}, the range R(A) is closed if and only if there exist two constants c 1 , c 2 > 0 such that ∀ u ∈ D(A) u ≤ c 1 Au , ∀ v ∈ D(A * ) v ≤ c 2 A * v . (2.9)
In the following lemma we show that 0 ∈ ρ(A), that is that the operator A defined by (2.5) has a bounded inverse: Lemma 2.3. Denote by N (A) the kernel of the operator A defined by (2.5)-(2.6), and by R(A) its range. Then we have N (A) = {0} = N (A * ) and R(A), as well as R(A * ), are closed. In particular A : D(A) -→ H is oneto-one and its inverse is continuous on H.
Proof. It is clear that N (A) = N (A * ) = {0}.
On the other hand, thanks to Banach's theorem 2.2, we have only to show that R(A) is closed. For a sequence
u n = (u 0n , u 1n ) ∈ D(A) such that f n = (f 0n , f 1n ) := Au n → f = (f 0 , f 1 ) ∈ H, we have to show that there exists u = (u 0 , u 1 ) ∈ D(A) for which f = Au. Since u 1n = -f 0n → -f 0 in H 1 , setting u 1 := -f 0 , due to the fact that B : H 1 -→ H -1 is continuous, we have that Lu 0n = f 1n -Bu 1n → f 1 -Bu 1 in H -1 ,
and therefore, denoting by u 0 ∈ H 1 the unique solution of
Lu 0 = f 1 -Bu 1 ,
we have that u = (u 0 , u 1 ) ∈ D(A) and that Au = f . This proves that the range of A, as well as that of A * , are closed. Thus we have
R(A) = N (A * ) ⊥ and R(A * ) = N (A) ⊥ . Since N (A) = N (A * ) = {0}, by property (2.9) of Theorem 2.2 there exist two constants c 1 , c 2 > 0 such that ∀ u ∈ D(A), Au ≤ c 1 u , ∀ v ∈ D(A * ), A * v ≤ c 2 v .
This means that A -1 : H -→ D(A) is continuous, and naturally the same is true of (
A * ) -1 : H -→ D(A * ).
Next we show that a certain perturbation of L, which appears in the study of the resolvent of A, is invertible. Proposition 2.4. (Main estimates). Assume that B satisfies (1.9), and that for any fixed k ≥ 1 condition (1.10) is satisfied. Let ω ∈ R, and for j ≥ 1 and ω 2 = λ j denote
$$\alpha_j(\omega) := \frac{\lambda_j}{\,|\omega^{2}-\lambda_j|\,}. \tag{2.10}$$
Then the operator L ω : H 1 -→ H -1 defined by
L ω u := Lu -i ωBu -ω 2 u. (2.11)
has a bounded inverse, and
L -1 ω H -1 →H 1 ≤ c(ω) where the constant c(ω) is given by c(ω) := β k λ k |ω| + (1 + β k λ k )(α k-1 (ω) + α k+1 (ω)) 2 (1 + |ω|) c * , (2.12)
for ω such that λ k-1 < ω 2 < λ k+1 , with c * := 16(1 + B ) 2 , and B :=
B H 1 →H -1 .
Proof. Note that according to (2.10), the constants α k-1 (ω) and α k+1 (ω) are well-defined whenever λ k-1 < ω < λ k+1 .
For ω ∈ R fixed, and any given g ∈ H -1 we have to show that there exists a unique u 0 ∈ H 1 solution of
Lu 0 -i ω Bu 0 -ω 2 u 0 = g , (2.13)
and there exists a constant c(ω) > 0 such that
L 1/2 u 0 ≤ c(ω) g H -1 . (2.14) Note that L ω is a bounded operator from H 1 into H -1 and that (L ω ) * = L -ω . First we show that N (L ω ) = {0}. If ω = 0, then we know that L 0 = L and by assumption N (L) = {0}. If ω = 0, and if u ∈ H 1 satisfies L ω u = 0, we have -ω Bu, u = Im Lu -i ωBu -ω 2 u, u = 0.
Since ω = 0, this yields Bu, u = 0 and, as remarked above after the Cauchy-Schwarz inequality (2.3), the latter implies that Bu = 0 and thus Luω 2 u = 0. If u were not equal to zero, this would imply that u ∈ D(L) and that ω 2 is an eigenvalue of L, say
ω 2 = λ k for some integer k ≥ 1, that is u ∈ N (L-λ k I)\{0}.
However we have Bu, u = 0 and this in contradiction with the assumption (1.10). Therefore u = 0 and N (L ω ) = {0}.
Next we show that R(L ω ) is closed, that is, according to property (2.9) of Banach's theorem 2.2, there exists a constant c(ω) > 0 such that (2.14) is satisfied.
To this end, ω ∈ R being fixed, we define two bounded operators B and L ω acting in H 0 by
B := L -1/2 BL -1/2
(2.15)
L ω := I -ω 2 L -1 -i ωB = I -i ωL -1/2 (B -i ωI)L -1/2 , (2.16)
and we note that
L ω = L 1/2 I -i ωL -1/2 (B -i ωI)L -1/2 L 1/2 = L 1/2 L ω L 1/2 .
Since L 1/2 is an isometry between H 1 and H 0 , and also between H 0 and H -1 , in order to see that L -1 ω is a bounded operator mapping H -1 into H 1 , with a norm estimated by a certain constant c(ω), it is sufficient to show that the operator L ω , as a mapping on H 0 , has an inverse and that c(ω) being defined in (2.12) we have
L -1 ω ≤ c(ω).
(2.17)
We observe also that B : H 0 -→ H 0 is a bounded, selfadjoint and nonnegative operator and thus, as recalled in (2.3), for any f, g ∈ H 0 we have the Cauchy-Schwarz inequality
|(Bf |g)| ≤ (Bf |f ) 1/2 (Bg|g) 1/2 .
(2.18)
Now consider f, g ∈ H 0 such that g ≤ 1 and
L ω f = f -ω 2 L -1 f -i ωBf = g. (2.19)
We split the proof into two steps, according to whether ω 2 is smaller or larger than λ 1 /2.
Step 1. Assume first that ω 2 ≤ λ 1 /2. Using the fact that
(L -1 f |f ) ≤ 1 λ 1 f 2 ,
upon multiplying (2.19) by f , and then taking the real part of the resulting equality, one sees that
f ≤ λ 1 λ 1 -ω 2 .
(2.20)
Thus if ω 2 < λ 1 one has L -1 ω ≤ λ 1 /(λ 1 -ω 2 )
, and more precisely
L -1 ω ≤ 2 if ω 2 ≤ λ 1 /2.
Step 2. Now assume that for some integer k ≥ 1 we have
λ k-1 < ω 2 < λ k+1 .
(2.21)
(If k = 1 by convention we set λ 0 := 0). Multiplying, in the sense of H 0 , equation (2.19) by f and taking the imaginary part of the result yields
(Bf |f ) ≤ |ω| -1 f . (2.22)
This first estimate is indeed not sufficient to obtain a bound on f , since the operator B may be neither strictly nor uniformly coercive. However, as we shall see in a moment, this crude estimate is a crucial ingredient to obtain our result.
We begin by decomposing f into three parts as follows: there exist a unique t ∈ C and ϕ ∈ N (Lλ k I), with ϕ = 1, such that
f = v + tϕ + z, where v ∈ E k , and z ∈ F k ∩ H 1 .
(Recall that we have E 1 = {0}; also when λ k has multiplicity m k ≥ 2, then ϕ may depend also on g, but in any case its norm in
H 1 is √ λ k ). With these notations, equation (2.19) reads v -ω 2 L -1 v + z -ω 2 L -1 z + λ k -ω 2 λ k tϕ = i ω Bf + g. (2.23)
When k = 1, by convention we have v = 0, while when k ≥ 2 we may multiply the above equation by -v, and using the fact that for v
∈ E k we have (L -1 v|v) ≥ 1 λ k-1 v 2 ,
we deduce that
ω 2 -λ k-1 λ k-1 v 2 ≤ v + |ω| |(Bf |v)|.
Using (2.18) and (2.22) we have
|(Bf |v)| ≤ (Bf |f ) 1/2 (Bv|v) 1/2 ≤ |ω| -1/2 f 1/2 • B 1/2 • v , so that since B ≤ B := B H 1 →H -1 , we get finally v ≤ α k-1 (ω) 1 + |ω| 1/2 B 1/2 f 1/2 . (2.24)
Analogously, multiplying (2.23) by z and using the fact that
(L -1 z|z) ≤ 1 λ k+1 z 2 , we get λ k+1 -ω 2 λ k+1 z 2 ≤ z + |ω| |(Bf |z)|,
and, proceeding as above, we deduce that
z ≤ α k+1 (ω) 1 + |ω| 1/2 B 1/2 f 1/2 . (2.25)
Writing (2.23) in the form
(I -ω 2 L -1 )(v + z) + λ k -ω 2 λ k t ϕ -i t ω Bϕ = g + i ω B(v + z),
we multiply this equation by ϕ and we take the imaginary part of the resulting equality to obtain
|t| |ω|(Bϕ|ϕ) ≤ 1 + |ω| • |(Bϕ|v + z)| ≤ 1 + |ω|(Bϕ|ϕ) 1/2 B 1/2 ( v + z ) . (2.26)
(Here we have used the fact that ((I -
ω 2 L -1 )(v + z)|ϕ) = 0 since v + z ∈ N (L -λ k I) ⊥ ). Now we have (Bϕ|ϕ) = BL -1/2 ϕ, L -1/2 ϕ = 1 λ k Bϕ, ϕ ≥ 1 β k λ k ,
and thus the above estimate (2.26) yields finally
|t| ≤ λ k β k |ω| + B 1/2 (β k λ k ) 1/2 ( v + z ). (2.27)
Using this together with (2.24) and (2.25), we infer that
f ≤ λ k β k |ω| + (1 + ( B β k λ k ) 1/2 )(α k-1 (ω) + α k+1 (ω))(1 + (|ω| B f ) 1/2 ).
From this, upon using Young's inequality αβ ≤ εα 2 /2+ε -1 β 2 /2 on the right hand side, with α := f 1/2 and β the terms which are factor of f 1/2 , it is not difficult to choose ε > 0 appropriately and obtain (2.17), and thus the proof of Proposition 2.4 is complete.
Remark 2.5. When B : H 0 -→ H 0 is bounded, the estimate of Proposition 2.4 can be improved, but the improvement does not seem fundamental in an abstract result such as the one we present here. Instead, for instance when one is concerned with a wave equation where B∂ t u := 1 ω ∂ t u, in specific problems one may find better estimates using the local structure of the operator B.
In the next lemma we give a better estimate when, in equation (2.13), the data g belongs to H 0 or to H 1 . Lemma 2.6. Assume that B satisfies (1.9) and (1.10), ω ∈ R * and g ∈ H 0 be given. Then, the operator L ω being given by (2.11) and with the notations of Proposition 2.4, the solution
u 0 ∈ H 1 of L ω u 0 = g satisfies u 0 ≤ 2c(ω) ω √ λ 1 + 3 ω 2 g .
(2.28) Also, for any ω ∈ R * any v ∈ H 1 we have
L -1 ω (B -i ωI) v H 1 ≤ 1 + c(ω) |ω| v H 1 .
(2.29)
Proof. When g ∈ H 0 , computing L ω u 0 , u 0 = (u 0 |g) and taking the imaginary part yields |ω| Bu 0 , u 0 ≤ g u 0 .
(2.30)
Then, using Proposition 2.4, we have
ω 2 u 0 2 = L 1/2 u 0 2 -i ω Bu 0 , u 0 -g, u 0 ≤ c(ω) 2 g 2 H -1 + 2 g u 0 .
From this, and the fact that λ 1 g 2 H -1 ≤ g 2 one easily conclude that
ω 2 u 0 2 ≤ 2c(ω) 2 λ 1 + 8 ω 2 g 2 .
In order to see that (2.29) holds, it is sufficient to observe that
L -1 ω (B -i ωI) = (i ω) -1 L -1 ω (L -L ω ) = (i ω) -1 L -1 ω L -I ,
and using once more Proposition 2.4, the proof of the Lemma is complete.
We can now prove that i R ⊂ ρ(A), the resolvent set of A.
Lemma 2.7. Assume that the operator B satisfies conditions (1.9) and (1.10). Then i R ⊂ ρ(A).
Proof. It is clear that we may assume |ω| > 0, since the case ω = 0 is already treated by Lemma 2.3.
In order to see that λ = i ω belongs to ρ(A) for any ω ∈ R * , we begin by showing that
N (A -λI) = N (A * -λI) = {0}.
Indeed if u ∈ D(A) and Aui ω u = 0, then we have u 1 = -iω u 0 and Lu 0i ω Bu 0ω 2 u 0 = 0 , and by Proposition 2.4 we know that u 0 = 0, and thus the N (A-iωI) = {0}.
In the same way, one may see that N (A * + iωI) = {0}.
Next we show that both R(A -iωI) and R(A * + iωI) are closed. Since it is clearly sufficient to prove the former property, let a sequence (u n ) n≥1 = {(u 0n , u 1n )} n≥1 in D(A) be so that
f n = (f 0n , f 1n ) := Au n -i ω u n → f = (f 0 , f 1 ) in H.
In particular we have
-u 1n -i ω u 0n = f 0n → f 0 in H 1 .
Reporting the expression of u 1n = -f 0ni ω u 0n into the second component of Au n , upon setting
g n := f 1n + Bf 0n -i ω f 0n , and g := f 1 + Bf 0 -i ω f 0 , we have clearly g n → g in H -1 and Lu 0n -i ω Bu 0n -ω 2 u 0n = g n .
(2.31) Using Proposition 2.4, we know that Li ω Bω 2 I has a bounded inverse, and thus u 0n → u 0 in H 1 , where u 0 is the unique solution of
Lu 0 -i ω Bu 0 -ω 2 u 0 = g.
It is clear that this shows that u n → u := (u 0 , u 1 ), where
u 1 = -i ω u 0 -f 0 . Thus R(A -i ω I) is closed, in fact R(A -i ω I) = H and (A -i ω I) -1 is bounded.
Proposition 2.8. Assume that the operator B satisfies conditions (1.9) and (1.10). Then there exists a constant c * > 0 such that for all ω ∈ R we have
R(i ω, A) ≤ c * c(ω), (2.32)
where c(ω) is defined in (2.12).
Proof. By Lemma 2.7 we know that the imaginary axis of the complex plane is contained in the resolvent set of the operator A. For f = (f 0 , f 1 ) ∈ H, the equation Aui ωu = f can be written as
-u 1 -i ωu 0 = f 0 ∈ H 1 , Lu 0 + Bu 1 -i ωu 1 = f 1 ∈ H 0 .
Consequently, it follows that
u 0 = L -1 ω (B -i ωI) f 0 + L -1 ω f 1 , u 1 = -f 0 -i ωu 0 .
By Proposition 2.4 and the estimate (2.29) of Lemma 2.6 we have
u 0 H 1 ≤ L -1 ω (B -i ωI) f 0 H 1 + L -1 ω f 1 H 1 ≤ 1 + c(ω) |ω| f 0 H 1 + c(ω) f 1 H -1 ≤ c * c(ω) f H ,
for some constant c * > 0 independent of ω and f . On the other hand, using (2.28) we have, again for some constant c * independent of f 1 and |ω| ≥ 1
|ω| L -1 ω f 1 ≤ c * c(ω) f 1 ,
and thus, since
u 1 = -f 0 -i ωu 0 , u 1 ≤ f 0 + |ω| L -1 ω (B -i ωI) f 0 + |ω| L -1 ω f 1 ≤ c * c(ω) f H ,
for some appropriate constant c * independent of ω and f . We are now in a position to prove our main abstract result. Proof of Theorem 1.1. Take ω ∈ R. In order to prove our claim, using Theorem 1.2, it is enough to show that the constant c(ω) which appears in (2.32) has a growth rate of at most (1 + |ω| m ), where m is given by (1.15). It is clear that it is sufficient to prove the estimate on c(ω) when ω 2 ≥ λ 1 /2. Therefore, assuming that such is the case, there is an integer
k ≥ 1 such that λ k-1 < λ k-1 + λ k 2 ≤ ω 2 ≤ λ k + λ k+1 2 < λ k+1 .
Thus, with the notations of Proposition 2.4, we have
α k-1 (ω) + α k+1 (ω) ≤ 2λ k-1 λ k -λ k-1 + 2λ k+1 λ k+1 -λ k ≤ 2 c 0 λ γ 1 k . (2.33)
Now, thanks to the assumption (1.12) we have λ k+1 ≤ λ k /λ * and, when k ≥ 2, we have also λ k-1 ≥ λ * λ k . From this we may conclude that
1 2 (1 + λ * )λ k ≤ ω 2 ≤ 1 + λ * 2λ * λ k .
Using the expression of c(ω) given by (2.12), one may find a constant c * 0 > 0, depending only λ * , c 0 and on λ 1 , c * , so that for all ω with ω 2 ≥ λ 1 /2 we have
c(ω) ≤ c * 0 |ω| 1+2γ 0 + (1 + |ω| 2(1+γ 0 ) ) |ω| 4γ 1 (1 + |ω|) .
From this, setting m := 3 + 2γ 0 + 4γ 1 , it is not difficult to see that one has c(ω) ≤ c * 1 (1 + |ω| m ), at the expense of choosing another constant c * 1 , and the proof of our Theorem is complete.
In the following sections we give a few examples of damped wave equations which can be treated according to Theorem 1.1.
Wave equations in dimension one
In this section we give a few applications of Theorem 1.1 to the case of a class of wave equations, in dimension one, that is a system corresponding to the vibrations of a string. The treatment of such a problem is easier in one dimension than in higher dimensions, due to the fact that on the one hand the multiplicity of each eigenvalue is one, the distance between consecutive eigenvalues is large, and on the other hand the eigenfunctions are explicitely known in some cases, and have appropriate asymptotic behaviour when they are not explicitely known.
More precisely, without loss of generality, we may assume that Ω = (0, π) and, with the notations of the previous section, we set H 0 := L 2 (0, π), the scalar product of f, g ∈ L 2 (0, π) = L 2 ((0, π), C) being denoted by
(f |g) := π 0 f (x) g(x) dx,
and the associated norm by • . Let a ∈ L ∞ (0, π) be a positive function such that for a certain α 0 > 0, we have a(x) ≥ α 0 a.e. in (0, π). Then, two nonnegative functions b 1 , b 2 ∈ L ∞ (0, π) being given, the system
∂ tt u -∂ x (a∂ x u) + b 1 ∂ t u -∂ x (b 2 ∂ x ∂ t u) = 0 in (0, ∞) × (0, π), u(t, 0) = u(t, π) = 0 on (0, ∞), u(0, x) = u 0 (x) in (0, π) u t (0, x) = u 1 (x) in (0, π), (3.1)
is a special case of the system (1.11). We are going to verify that under certain circumstances, we can apply Theorem 1.1 and obtain a polynomial decay for the energy associated to equation (3.1).
First we consider the operator (L, D(L)) defined by
Lu := -(a(•)u ′ ) ′ (3.2) D(L) := u ∈ H 1 0 (0, π) ; Lu ∈ L 2 (0, π) , (3.3)
which is a selfadjoint, positive operator with a compact resolvent and one has H 1 := D(L 1/2 ) = H 1 0 (0, π). We shall endow H 1 0 (0, π) with the scalar product
(u|v) H 1 0 := π 0 u ′ (x) • v ′ (x) dx,
and its associated norm u → u ′ (the resulting topology is equivalent to that resulting from the equivalent Hilbertian norm u → a 1/2 u ′ ).
For the operator B, assuming that the functions b 1 , b 2 are such that at least one of the conditions (1.2) or (1.3) is satisfied, we define
Bϕ := b 1 ϕ -(b 2 ϕ ′ ) ′ . (3.4)
It is easy to verify that the operator B is bounded and selfadjoint from H 1 0 (0, π) into H -1 (0, π) and that it satisfies conditions (1.9). Assume also that Ω 1 and Ω 2 are given by
Ω 1 = (ℓ 1 , ℓ 1 + δ 1 ), Ω 2 := (ℓ 2 , ℓ 2 + δ 2 ) (3.5)
with 0 ≤ ℓ 1 < ℓ 1 + δ 1 ≤ π and 0 ≤ ℓ 2 < ℓ 2 + δ 2 ≤ π. Then we have the following result:
Proposition 3.1. Assume that N = 1 and let the domains Ω 1 , Ω 2 be as in (3.5) with δ 2 > 0. Let the function a in (3.1) be of class C 2 ([0, π]) and, for j = 1 or j = 2, the functions b j ∈ L ∞ (0, π) be such that b j ≥ ε j ≥ 0 on Ω j , where ε j is a constant. Then, if ε 2 > 0, there exists a constant c * > 0 such that the energy of the solution of (3.1) satisfies
∂ x u(t, •) 2 + ∂ t u(t, •) 2 ≤ c * (1 + t) -2/3 ∂ x u 0 2 + u 1 2 . (3.6)
Also, if b 2 ≡ 0 and ε 1 > 0, there exists a constant c * > 0 such that
∂ x u(t, •) 2 + ∂ t u(t, •) 2 ≤ c * (1 + t) -1/2 ∂ x u 0 2 + u 1 2 . (3.7)
Proof. Consider first the case a(x) ≡ 1. Then, for all integers k ≥ 1 λ k = k 2 , and ϕ k = 2/π sin(kx).
One sees immediately that for some constant c * > 0 independent of k we have
λ k-1 λ k -λ k-1 + λ k+1 λ k+1 -λ k ≤ k ≤ c * λ 1/2 k ,
and thus, with the notations of Theorem 1.1, we can take γ 1 = 1/2. On the other hand, when ε 2 > 0, one checks easily that for some constant c independent of k we have
Bϕ k , ϕ k ≥ ε 2 ℓ 2 +δ 2 ℓ 2 |ϕ ′ k (x)| 2 dx = 2ε 2 k 2 π ℓ 2 +δ 2 ℓ 2 cos 2 (kx) dx ≥ c k 2 . (3.8)
Therefore for some other constant c * independent of k, we have β k ≤ c * λ -1 k , and thus we can take γ 0 := -1.
Finally we have m = 3 + 2γ 0 + 4γ 1 = 3 and, according to Theorem 1.2, the semigroup decays polynomially with rate 1/3, that is the decay estimate for the energy is given by (3.6).
When b 2 ≡ 0 and ε 1 > 0, then the only damping comes from the term involving b 1 and in this case
Bϕ k , ϕ k ≥ ε 1 ℓ 1 +δ 1 ℓ 1 |ϕ k (x)| 2 dx = 2ε 1 π ℓ 1 +δ 1 ℓ 1 sin 2 (kx) dx ≥ c. (3.9)
Thus β k ≤ c * , and we can take γ 0 := 0. From this we infer that m = 3 + 2γ 0 + 2γ 1 = 4, which means that (3.7) holds.
When a is not identically equal to 1, it is known that there exist two positive constants C 1 , C 2 and a sequence of real numbers (c k ) k≥1 , satisfying
k≥1 |c k | 2 < ∞, such that as k → ∞ the eigenvalues λ k and eigenfunctions ϕ k satisfy, uniformly in x, λ k = ℓ 2 k 2 + C 1 + c k , (3.10) ϕ k (x) = C 2 a(x) -1/4 sin(kξ(x)) + O(k -1 ), (3.11) ϕ ′ k (x) = C 2 a(x) -3/4 k cos(kξ(x)) + O(1), (3.12)
where
ℓ := π 0 a(y) -1/2 dy, ξ(x) := π ℓ
x 0 a(y) -1/2 dy, and
k≥1 |c k | 2 < ∞.
These formulas are obtained through the Liouville transformation, and we do not give the details of their computations, since we can refer to J. Pöschel & E. Trubowitz [START_REF] Pöschel | Pure and Applied Mathematics[END_REF], or A. Kirsch [13,Chapter 4]. Indeed in the latter reference, in Theorem 4.11, the result is stated for the Dirichlet eigenvalue problem -ϕ ′′ +qϕ = λϕ, but one may show that after an appropriate change of variable and unknown function, described in the introduction of chapter 4, on pages 121-122 of this reference, one can prove the formulas given above, which are of interest in our case. Now, according to the definition of x → ξ(x), making a change of variable in the first integral below, one has
ℓ 2 +δ 2 ℓ 2 a(x) -3/2 cos 2 (kξ(x)) dx = ℓ π ξ(ℓ 2 +δ 2 ) ξ(ℓ 2 )
a(x(ξ)) -1 cos 2 (kξ) dξ, so that on a close examination of the asymptotic expansions (3.10)-(3.12), one is convinced that the same values for the exponents γ 0 and γ 1 of Theorem 1.1 can be obtained, and the proof of the Proposition is complete.
Remark 3.2. When a ≡ 1, a great number of results exist in the literature.
In particular, assuming that b 1 ≡ 0 and b 2 := 1 Ω 2 , Z. Liu and B.P. Rao [START_REF] Liu | Characterization of polynomial decay rate for the solution of linear evolution equation[END_REF], M. Alves & al. [START_REF] Alves | The asymptotic behavior of the linear transmission problem in viscoelasticity[END_REF] have shown that the semigroup has a decay rate of (1 + t) -2 , thus the energy decays with the rate (1 + t) -4 , and that this decay rate is optimal. However the cases in which a ≡ 1, or b 1 ≥ ε 1 > 0 on Ω 0 , are not covered by these authors, while the method we present here can handle such cases, at the cost of not establishing an optimal decay in simpler cases.
Remark 3.3. As a matter of fact the same decay rate of the energy, with the same exponent number m = 3, holds for a wave equation of the form
ρ(x)∂ tt u -∂ x (a(x)∂ x u) + q(x)u + b 0 (x)∂ t u -∂ x (b 1 (x)∂ xt u) = 0.
In such a case, the operator L will be given by
Lu := -ρ(x) -1 (a(x)u ′ ) ′ + ρ(x) -1 q(x)u, (3.13)
where ρ and a belong to C 2 ([0, π]) and min(ρ(x), a(x)) ≥ ε 0 > 0, while the potential q ∈ C([0, π]) is such that the least eigenvalue λ 1 of the problem
-(a(x)ϕ ′ ) ′ + qϕ = λρ(x)ϕ, ϕ(0) = ϕ(π) = 0,
verifies λ 1 > 0 (in fact any other boundary conditions, such as Neumann, or Fourier conditions, ensuring that the first eigenvalue λ 1 > 0, can be handled, with the same decay rate for the corresponding wave equation). Indeed, such an operator L is selfadjoint in the weighted Lebesgue space L 2 (0, π, ρ(x)dx), and it is known that (see for instance A. Kirsch [START_REF] Kirsch | An Introduction to the Mathematical Theory of Inverse Problems[END_REF], as cited above) an expansion of the form (3.10)-(3.12) holds in this case for the eigenvalues and eigenfunctions of L, with the only difference that in (3.11) the function a(x) -1/4 should be replaced by a(x) -1/4 ρ(x) -1/4 , and in (3.12) the function a(x) -3/4 should be replaced by a(x) -3/4 ρ(x) However, in some cases in which the coefficients a and ρ are not smooth, it is nevertheless possible to show that the behaviour of the eigenvalues λ k and eigenfunctions ϕ k resembles those of the Laplace operator with Dirichlet boundary conditions on (0, π). Such an example may be given by coefficients having a finite number of discontinuites, such as step functions, for which explicit calculation of λ k and ϕ k is possible. For instance consider ρ(x) ≡ 1 and a ∈ L ∞ (0, π) the piecewise constant function given by a(x) := 1 (0,π/2) (x) + 4 × 1 (π/2,π) (x), where for a set A the function 1 A denotes the characteristic function of A. Then a simple, but perhaps somewhat dull, if not tedious, calculation shows that the eigenvalues and eigenfunctions solutions to
-(a(x)ϕ ′ k (x)) ′ = λ k ϕ k (x), ϕ k (0) = ϕ k (π) = 0, are given by {(λ k , ϕ k ) ; k ≥ 1} = {(µ 1,m , ϕ 1,m ) ; m ∈ N * } ∪ {(µ 2,n , ϕ 2,n ) ; n ∈ Z} ,
where the sequences (µ 1,m , ϕ 1,m ) m≥1 and (µ 2,n , ϕ 2,n ) n∈Z are defined as follows. For m ≥ 1 integer, µ 1,m := 16m 2 and (up to a multiplicative normalizing constant independent of m)
ϕ 1,m (x) = sin(4mx)1 (0,π/2) (x) + 2 × (-1) m sin(2mx) 1 (π/2,π) (x).
Also, for n ∈ Z, the sequence λ 2,n is given by
µ 2,n := 16 n + arctan( √ 2) π 2 ,
and (again up to a multiplicative normalizing constant independent of n)
ϕ 2,n (x) := sin( √ µ 2,n x) 1 (0,π/2) + (-1) n 2 √ 3 3 sin √ µ 2,n (π -x) 2 1 (π/2,π) .
Now it is clear that proceeding as in the proof of Proposition 3.1, when ε 2 > 0, one can infer that the exponent m in Theorem 1.1 can be taken as m = 3, so that the decay rate of the energy is at least (1 + t) -2/3 . Analogously when b 2 ≡ 0 and ε 1 > 0, then one can take m = 4 and the energy decays at least with the rate (1 + t) -1/2 .
Wave equations in higher dimensions
Our next example is a damped wave equation for which a decay rate of the energy can be proven using Theorem 1.1, with an explicitly computed rate of decay, namely the solution of
∂ tt u -∆u + b 1 ∂ t u -div(b 2 ∇∂ t u) = 0 in (0, ∞) × Ω, u(t, σ) = 0 on (0, ∞) × ∂Ω, u(0, x) = u 0 (x) in Ω u t (0, x) = u 1 (x) in Ω, (4.1)
where b j ∈ L ∞ (Ω) are nonnegative functions.
As we shall see below, in order to find the adequate exponents γ 0 and γ 1 which are used in Theorem 1.1, one has to carry out a precise analysis of the behaviour of the eigenvalues and eigenfunctions of the underlying operator. As far as the Laplace operator is concerned, in a few cases one can perform this analysis, but even in those cases one sees that the exponents γ 0 and γ 1 depend in a very subtle way on the domain Ω.
The following lemma takes care of the condition (1.12) in the cases we study in this section.
Lemma 4.1. Let Ω := (0, L 1 ) × • • • × (0, L N ) ⊂ R N where L j > 0 for 1 ≤ j ≤ N .
Denoting by (λ k ) k≥1 the eigenvalues of the Laplacian operator with Dirichlet boundary conditions on Ω, we have
lim k→∞ λ k+1 λ k = 1.
Proof. The eigenvalues of the operator L defined by Lu := -∆u with
D(L) := u ∈ H 1 0 (Ω) ; ∆u ∈ L 2 (Ω) ,
are given by
λ n := n 2 1 π 2 L 2 1 + • • • + n 2 N π 2 L 2 N , for n = (n 1 , . . . , n N ) ∈ (N * ) N .
As before we denote by (λ k ) k≥1 the sequence of eigenvalues obtained upon reordering this family ( λ n ) n , with the convention that each eigenvalue has multiplicity m k ≥ 1 and λ k < λ k+1 . If all the eigenvalues λ k were simple, we could use Weyl's formula, asserting that [START_REF] Weyl | Über die asymptotische verteilung der Eigenwerte[END_REF][START_REF] Weyl | Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen[END_REF]) where c * > 0 is a constant depending only on L 1 , . . . , L N . However, here we have made the convention that λ k < λ k+1 , each eigenvalue λ k having multiplicity m k ≥ 1, and thus if one has no information on m k , one cannot use Weyl's formula.
λ k ∼ c * k 2/N as k → ∞ (see W. Arendt & al. [3], H. Weyl
However in general the eigenvalues are not simple, implying that we cannot use directly Weyl's formula. Nevertheless the proof of the Lemma can be done in an elementary way: consider a sequence of integers k j → +∞ as j → +∞. Then there exists a sequence of N -tuples of integers n j ∈ (N * ) N such that
λ k j = n 2 j1 π 2 L 2 1 + • • • + n 2 jN π 2 L 2 N .
It is clear that necessarily there exists ℓ j > k j such that
λ ℓ j = (n j1 + 1) 2 π 2 L 2 1 + • • • + (n jN + 1) 2 π 2 L 2 N ,
and thus λ k j +1 ≤ λ ℓ j . Thus we have
1 ≤ λ k j +1 λ k j ≤ (n j1 + 1) 2 L 2 1 + • • • + (n jN + 1) 2 L 2 N n 2 j1 L 2 1 + • • • + n 2 jN L 2 N -1
.
Since k j → ∞, we have max {n ji ; 1 ≤ i ≤ N } → ∞, and thus lim j→∞
(n j1 + 1) 2 L 2 1 + • • • + (n jN + 1) 2 L 2 N n 2 j1 L 2 1 + • • • + n 2 jN L 2 N -1 = 1,
and the proof of the Lemma is complete.
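As a numerical aside (added here; the gap question is taken up in Lemma 4.3 and Corollary 4.4 below), the following Python sketch lists the ordered distinct values of n₁² + ξ n₂², which are the Dirichlet eigenvalues of a rectangle up to the factor π²/L₁² when ξ = L₁²/L₂², and prints the last consecutive ratio (illustrating the present lemma) together with the smallest gap (illustrating the dependence on whether ξ is rational). The truncation level nmax is an arbitrary choice.

```python
import math

def mu_values(xi, nmax=80):
    """Ordered distinct values of n1^2 + xi*n2^2, truncated to a complete range."""
    vals = sorted({n1 ** 2 + xi * n2 ** 2
                   for n1 in range(1, nmax + 1) for n2 in range(1, nmax + 1)})
    cutoff = nmax ** 2 + xi   # below this threshold no value is missed
    return [v for v in vals if v <= cutoff]

for xi, label in [(1.0, "xi = 1 (rational)"),
                  (1.5, "xi = 3/2 (rational)"),
                  (math.sqrt(2.0), "xi = sqrt(2) (irrational, algebraic)")]:
    mu = mu_values(xi)
    gaps = [b - a for a, b in zip(mu, mu[1:])]
    ratios = [b / a for a, b in zip(mu, mu[1:])]
    print(f"{label}: last consecutive ratio = {ratios[-1]:.4f}, "
          f"smallest gap = {min(gaps):.3e}")
```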
To illustrate how Theorem 1.1 can be used, first we investigate the case of dimension N = 2 with a choice of the domains Ω, Ω 1 and Ω 2 as follows
Ω := (0, π) × (0, π), Ω 1 := (ℓ 1 , ℓ 1 + δ 1 ) × (0, π), Ω 2 := (ℓ 2 , ℓ 2 + δ 2 ) × (0, π), (4.2)
where, for j = 1 and j = 2, it is assumed that 0 ≤ ℓ j < ℓ j + δ j ≤ π.
As an inspection of the proof of the following proposition shows, the exact same result holds when one of the sets Ω 1 or Ω 2 is a horizontal strip, and also for any dimension N ≥ 2 with Ω := (0, π) N while the damping subdomains Ω 1 , Ω 2 are narrow strips of the above type, parallel to one of the axis and touching the boundary of Ω. As we have mentioned before, one can also consider the case of an operator such as Lu := -∆u with boundary conditions which ensure that L is self-adjoint and its least eigenvalue is positive (for instance mixed Neumann and Dirichlet boundary conditions, or of Fourier type, also called Robin type boundary condition). However, for the sake of clarity of exposition, we present the result, and its proof, only for the case N = 2 and Dirichlet boundary conditions.
Then we can state the following: Proposition 4.2. Assume that N = 2 and the domains Ω, Ω 1 , Ω 2 are as in (4.2). For j = 1 or j = 2, let the functions b j ∈ L ∞ (Ω) be such that b j ≥ ε j ≥ 0 on Ω j , where ε j is a constant. Then, when ε 2 > 0, there exists a constant c * > 0 such that the energy of the solution of (4.1) satisfies
∇u(t, •) 2 + ∂ t u(t, •) 2 ≤ c * (1 + t) -2/5 ∇u 0 2 + u 1 2 . (4.3)
When ε 1 > 0 and b 2 ≡ 0, one has
∇u(t, •) 2 + ∂ t u(t, •) 2 ≤ c * (1 + t) -2/7 ∇u 0 2 + u 1 2 . (4.4)
Proof. Setting Lu := -∆u with
D(L) := u ∈ H 1 0 (Ω) ; ∆u ∈ L 2 (Ω) ,
the eigenvalues and eigenfunctions of the operator L are given by
λ n := n 2 1 + n 2 2 , ϕ n (x) := 2 π sin(n 1 x 1 ) sin(n 2 x 2 ), for n ∈ N * × N * . (4.5)
Rearranging these eigenvalues λ n in an increasing order, we denote them by (λ k ) k≥1 , the multiplicity of each λ k being
m k := card(J k ), where J k := n ∈ N * × N * ; n 2 1 + n 2 2 = λ k . (4.6)
To begin with the verification of the conditions of Theorem 1.1, we recall that Lemma 4.1 ensures that we have lim k→∞ λ k /λ k+1 = 1, and thus condition (1.12) is satisfied.
Observe also that each λ k being an integer, we have λ k+1λ k ≥ 1, and thus there exists a constant c * > 0 such that for all k ≥ 1 we have
λ k-1 λ k -λ k-1 + λ k+1 λ k+1 -λ k ≤ c * λ k ,
and therefore condition (1.14) is also satisfied with γ 1 = 1.
When ε 2 > 0, in order to verify condition (1.13), it is sufficient to show that there exist γ 0 ∈ R and some constant c * > 0, such that for any k ≥ 1 and any ϕ ∈ N (Lλ k I) with ϕ = 1 we have
Ω 2 |∇ϕ(x)| 2 dx ≥ c * λ -γ 0 k . (4.7)
Since the family (ϕ n ) n∈J k is a Hilbert basis of the finite dimensional space N (Lλ k I), we have
ϕ ∈ N (L -λ k I), ϕ = 1 ⇐⇒ ϕ = n∈J k c n ϕ n with n∈J k |c n | 2 = 1. (4.8)
Thus we have
Ω 2 |∇ϕ(x)| 2 dx = n∈J k |c n | 2 Ω 2 |∇ϕ n (x)| 2 dx + n,m∈J k n =m c n c m Ω 2 ∇ϕ n (x) • ∇ϕ m (x) dx. (4.9)
Now it is clear that we have
Ω 2 |∂ 1 ϕ n (x)| 2 dx = 4n 2 1 π 2 π 0 ℓ 2 +δ 2 ℓ 2 cos 2 (n 1 x 1 ) sin 2 (n 2 x 2 ) dx 1 dx 2 ,
which yields
Ω 2 |∂ 1 ϕ n (x)| 2 dx = 2n 2 1 π ℓ 2 +δ 2 ℓ 2 cos 2 (n 1 x 1 ) dx 1 .
Analogously, we have
Ω 2 |∂ 2 ϕ n (x)| 2 dx = 2n 2 2 π ℓ 2 +δ 2 ℓ 2 sin 2 (n 1 x 1 ) dx 1 ,
and thus one can find a constant c * > 0 such that for all k ≥ 1 and all n ∈ J k , we have
Ω 2 |∇ϕ n (x)| 2 dx ≥ c * δ 2 (n 2 1 + n 2 2 ) = c * δ 2 λ k . (4.10)
Regarding the second sum in (4.9), taking n, m ∈ J k and n = m, we observe that since n 2 1 + n 2 2 = m 2 1 + m 2 2 , we have necessarily n 2 = m 2 and therefore,
Ω 2 ∂ 1 ϕ n (x)∂ 1 ϕ m (x) dx = 4n 1 m 1 π 2 π 0 ℓ 2 +δ 2 ℓ 2 cos(n 1 x 1 ) cos(m 1 x 1 ) dx 1 sin(n 2 x 2 ) sin(m 2 x 2 ) dx 2 = 0.
In the same manner one may see that
Ω 2 ∂ 2 ϕ n (x)∂ 2 ϕ m (x) dx = 4n 2 m 2 π 2 π 0 ℓ 2 +δ 2 ℓ 2 sin(n 1 x 1 ) sin(m 1 x 1 ) dx 1 cos(n 2 x 2 ) cos(m 2 x 2 ) dx 2 = 0.
Finally one sees that for all n, m ∈ J k such that n = m we have
Ω 2
∇ϕ n (x) • ∇ϕ m (x) dx = 0, so that reporting this and (4.10) into (4.9) we have, for all ϕ ∈ N (Lλ k I) with ϕ = 1,
Ω 2 |∇ϕ(x)| 2 dx ≥ c * δ 1 λ k ,
which means that, when ε 2 > 0, the inequality (4.7), and thus (1.13), is satisfied with γ 0 = -1. Therefore, when ε 2 > 0 we have m := 3 + 2γ 0 + 4γ 1 = 5, and (4.3) holds.
When b 2 ≡ 0 and ε 1 > 0, proceeding as above, one checks easily that using (4.8) there exists a constant c * > 0 such that for ϕ ∈ N (Lλ k I) and ϕ = 1 we have
Ω 1 |ϕ(x)| 2 dx ≥ c * .
Thus we may take γ 0 = 0, so that m = 3 + 2γ 0 + 4γ 1 = 7, yielding (4.4), and the proof of our claim is complete.
As a matter of fact, one sees that in order to establish a decay result for other domains Ω ⊂ R N and N ≥ 2, there are two issues which should be inspected carefully: the first one is an estimate of (λ k+1λ k ) from below (comparing it with a power of λ k ) and this is related to the concentration properties of the eigenvalues as k → ∞. The second issue is to obtain an estimate of the local norm of an eigenfunction ϕ on Ω 1 , or that of ∇ϕ on Ω 2 , and this is related to the concentration properties of the eigenfunctions of the Laplacian.
Regarding the first issue we shall use the following lemma (Lemma 4.3) for the special case of two-dimensional rectangles: for n ∈ (N * ) 2 set μ n (ξ) := n 1 2 + ξ n 2 2 and δ(ξ) := inf { |μ n − μ m | ; n, m ∈ (N * ) 2 , μ n ≠ μ m }. Then we have δ(ξ) ≥ 1/q if ξ = p/q where p, q ≥ 1 are integers and mutually prime, while δ(ξ) = 0 if ξ is irrational.
Proof. If ξ = p/q for two mutually prime integers p, q ≥ 1, then we have
|µ n -µ m | = 1 q q(n 2 1 -m 2 1 ) + p(n 2 2 -m 2 2 ) ≥ 1 q , because q(n 2 1 -m 2 1 ) + p(n 2 2 -m 2
2 ) ∈ Z * , and any non zero integer has an absolute value greater or equal to 1.
If ξ / ∈ Q, then the subgroup Z + ξZ is dense in R and for any ε > 0, with ε < min(1, ξ), there exist two integers k ′ , j ′ ∈ Z such that 0 < k ′ + j ′ ξ < ε/8. One easily sees that necessarily we must have k ′ j ′ < 0, and thus, without loss of generality, we may assume that we are given two integers k, j ≥ 1 such that 0 < kjξ < ε 8 .
(This corresponds to the case k ′ > 0 and j ′ < 0; when k ′ < 0 and j ′ > 0 one can adapt the argument which follows). Choosing now n := (2k + 1, 2j -1), m := (2k -1, 2j + 1), one verifies that µ n (ξ)µ m (ξ) = 8k -8jξ ∈ (0, ε), and thus δ(ξ) ≤ ε. We conclude that as a matter of fact we have δ(ξ) = 0.
For the case of a rectangle Ω = (0, L 1 ) × (0, L 2 ) we recalled previously that the eigenvalues of the Laplace operator with Dirichlet boundray conditions are given by
n 2 1 π 2 L 2 1 + n 2 2 π 2 L 2 2 , with (n 1 , n 2 ) ∈ N * × N * .
Using the above lemma we conclude that when Ω is such a rectangle and L 2 1 /L 2 2 ∈ Q, we can take again the exponent γ 1 = 1 appearing in (1.14), yielding the same decay estimate for the energy, provided that Ω 1 and Ω 2 are strips of the form (ℓ j , ℓ j + δ j ) × (0, L 2 ) with 0 ≤ ℓ j < ℓ j + δ j ≤ L j (in which case the exponent γ 0 in (1.13) is -1 when ε 2 > 0, or 0 when b 2 ≡ 0 and ε 1 > 0). Thus we can state the following: Corollary 4.4. Assume that Ω = (0, L 1 ) × (0, L 2 ) and (L 1 /L 2 ) 2 ∈ Q. then the gap between the eigenvalues is bounded below, more precisely,
λ k+1 -λ k ≥ π 2 L 2 1 q , if L 2 1 L 2 2 = p q ,
with p and q mutually prime. Moreover, if Ω j := (ℓ j , ℓ j + δ j ) × (0, L 2 ) with 0 ≤ ℓ j < ℓ j +δ j ≤ L 1 , the results of Proposition 4.2 are valid for the solution of (4.1) on Ω.
Remark 4.5. We should point out that when L 2 1 /L 2 2 ∈ Q, we take the domains Ω j to be a strip which touches the boundary of ∂Ω, in order to give a lower bound for
Ω 2 |∇ϕ(x)| 2 dx or Ω 1 |ϕ(x)| 2 dx,
for all ϕ ∈ N (Lλ k I) with ϕ = 1. Indeed, if Ω 0 ⊂⊂ Ω is an open subset, and λ k is not a simple eigenvalue, then, as far as we know, it is an open problem to give a lower bound in terms of λ k for
Ω 0 |∇ϕ(x)| 2 dx or Ω 0 |ϕ(x)| 2 dx, for all ϕ ∈ N (L -λ k I) with ϕ = 1 (however cf. D. S. Grebenkov & B. T.
Nguyen [START_REF] Grebenkov | Geometrical structure of Laplacian eigenfunctions[END_REF], sections 6 and 7).
Actually one can easily generalize the above Lemma 4.3 so that it can be applied to the study of the gap between eigenvalues of the Laplacian on a domain Ω ∈ R N with N ≥ 3, which is a product of N intervals. The proof of the following statement is straightforward and can be omitted here (with the notations of the corollary, take (n 1 , n 2 ) = (m 1 , m 2 ) and n j = m j for 3 ≤ j ≤ N , then apply Lemma 4.3). Lemma 4.6. Let N ≥ 3 be an integer and ξ j > 0 for 2 ≤ j ≤ N . For n ∈ (N * ) N denote µ n (ξ) := n 2 1 + N j=2 ξ j n 2 j , and
δ(ξ) := inf |µ n -µ m | ; n, m ∈ (N * ) N , µ n = µ m .
Then if there exists j such that ξ j / ∈ Q we have δ(ξ) = 0, while if for all j we have ξ j = p j /q j for two mutually prime integers p j , q j ≥ 1, we have
δ(ξ) ≥ 1 q ,
where q is the least common multiple of q 2 , . . . , q N .
As a consequence, the result of Corollary 4.4 is valid in any dimension N ≥ 2. More precisely, for instance, we can state the following: Corollary 4.7. When the Kelvin-Voigt damping region Ω 2 is a strip of the form (ℓ 1 , ℓ 1 + δ 1 ) × (0, L 2 ) × • • • × (0, L N ) with 0 ≤ ℓ 1 < ℓ 1 + δ 1 ≤ L 1 and b 2 ≥ ε 2 > 0 on Ω 2 , the rate of decay of the energy for the wave equation is also (1 + t) -2/5 , provided that for all 1 ≤ i < j ≤ N the ratios (L i /L j ) 2 ∈ Q.
Next we consider the case of a domain Ω := (0, L 1 ) × (0, L 2 ) such that if ξ := L 2 1 /L 2 2 / ∈ Q. In this case the exponent γ 1 in (1.14) cannot be taken equal to 1, and a further analysis is necessary. As a matter of fact, Lemma 4.3 shows that inf k≥1 (λ k+1λ k ) = 0, and therefore the best we can hope for is to find an estimate of the type
λ k+1 -λ k ≥ c 0 λ -τ k ,
for some c 0 > 0 and τ > 0 independent of k. To this end, we recall that the degree of an algebraic number ξ is the minimal degree of all polynomials P with integer coefficients such that P (ξ) = 0. The following result of K. F. Roth [START_REF] Roth | Rational approximations to algebraic numbers[END_REF] (see Y. Bugeaud [9], chapter 2, Theorem 2.1, page 28) states how well, or rather how badly, as algebraic number of degree greater or equal to two can be approximated by rational numbers:
Theorem 4.8. (Roth's Theorem) Let ξ > 0 be an algebraic number of degree greater or equal to two. Then for any ε > 0 there exists a positive constant c(ξ, ε) > 0 such that for any rational number p/q with q ≥ 1 one has
$$\Big|\xi - \frac{p}{q}\Big| > \frac{c(\xi,\varepsilon)}{q^{2+\varepsilon}}. \tag{4.11}$$
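As an added numerical illustration of this phenomenon for the simplest algebraic number of degree two, the sketch below tabulates the minimum over p of q²·|√2 − p/q| for q up to an arbitrary bound; for a quadratic irrational this quantity stays bounded away from zero (here even the exponent 2, without any ε, suffices), in agreement with (4.11).

```python
import math

xi = math.sqrt(2.0)   # an algebraic number of degree two
# q * |q*xi - round(q*xi)| equals q^2 * |xi - p/q| for the best integer p
worst = min(q * abs(q * xi - round(q * xi)) for q in range(1, 200_001))
print(f"min over q <= 2*10^5 of q^2*|sqrt(2) - p/q| = {worst:.6f}")
```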
We use this result in order to give an estimate of λ k+1λ k from below when ξ = L 2 1 /L 2 2 is an algebraic number.
Lemma 4.9. Assume that Ω = (0, L 1 ) × (0, L 2 ) and that ξ := L 2 1 /L 2 2 is an algebraic number of degree greater or equal to two. Then for any ε > 0, there exists a constant c 0 (ξ, ε) such that
$$\lambda_{k+1} - \lambda_k \;\ge\; \min\big\{1,\ c_0(\xi,\varepsilon)\,\lambda_k^{-1-\varepsilon}\big\}. \tag{4.12}$$
Proof.
Let k ≥ 1 be fixed, and for m ∈ N * × N * let us denote µ m := m 2 1 + ξm 2 2 . There exist m, n ∈ N * × N * with m = n such that
λ k = m 2 1 π 2 L 2 1 + m 2 2 π 2 L 2 2 = π 2 L 2 1 µ m , λ k+1 = n 2 1 π 2 L 2 1 + n 2 2 π 2 L 2 2 = π 2 L 2 1 µ n .
Then we can write
λ k+1 -λ k = π 2 L 2 1 (µ n -µ m ).
From this, and the fact that for some constant c * > 0 depending only on L 1 , L 2 we have λ k+1 ≤ c * λ k (cf. Lemma 4.1), one is convinced that (4.12) holds.
To finish this paper, we state the following decay estimate when ξ := L 2 1 /L 2 2 is an algebraic number, and we point out that when ξ is a transcendental number we cannot state such a result. For j = 1 or j = 2, let the functions b j ∈ L ∞ (Ω) be such that b j ≥ ε j ≥ 0 on Ω j , where ε j is a constant; then, when ε 2 > 0, for any ε > 0 there exists a constant c * (ε) > 0 such that the energy of the solution of (4.1) satisfies the estimate (4.13) below. Proof. First note that for any ε > 0, thanks to Lemmas 4.1 and 4.9, we have for some constant c 0 (ε)
λ k-1 λ k -λ k-1 + λ k+1 λ k+1 -λ k ≤ c 0 (ε) λ 2+ε k ,
so that we can take γ 1 = 2 + ε.
On the other hand, in this case each eigenvalue λ k = (n 2 1 π 2 /L 2 1 ) + (n 2 2 π 2 /L 2 2 ) is simple and the corresponding eigenfunction is ϕ k (x) = 2 sin(n 1 πx 1 /L 1 ) sin(n 2 πx 2 /L 2 )/ L 1 L 2 .
Therefore, when ε 2 > 0, it is easily seen that
∫ Ω 2 |∇ϕ k (x)| 2 dx ≥ c * λ k , so that we can take γ 0 = -1, which implies m = 3 + 2γ 0 + 4γ 1 = 9 + 4ε, and (4.13) can be deduced. When b 2 ≡ 0 and ε 1 > 0, we notice that for some constant c * > 0 and all k ≥ 1 we have ∫ Ω 1 |ϕ k (x)| 2 dx ≥ c * , therefore we can take γ 0 = 0, hence m = 3 + 2γ 0 + 4γ 1 = 11 + 4ε, and (4.14) follows easily.
Lemma 4.3. Let ξ > 0 be a real number and for n ∈ (N * ) 2 denote μ n (ξ) := n 1 2 + ξ n 2 2 and δ(ξ) := inf { |μ n − μ m | ; n, m ∈ (N * ) 2 , μ n ≠ μ m }.
If n 2 = m 2 , we have clearly μ n − μ m = n 1 2 − m 1 2 and |n 1 2 − m 1 2 | ≥ 1, since n 1 2 − m 1 2 ∈ N * . If m 2 ≠ n 2 , using Roth's Theorem 4.8, by (4.11) we have, for any ε > 0, μ n − μ m = n 2 2 −
Proposition 4.10. Assume that N = 2 and Ω := (0, L 1 ) × (0, L 2 ) where ξ := L 2 1 /L 2 2 is an algebraic number of degree greater or equal to 2. Let Ω 1 := (a 1 , a 1 + δ 1 ) × (a 2 , a 2 + δ 2 ), Ω 2 := (b 1 , b 1 + δ 1 ) × (b 2 , b 2 + δ 2 ), and for j = 1 or j = 2 let b j ≥ ε j ≥ 0 on Ω j . Then, when ε 2 > 0, the solution of (4.1) satisfies ∇u(t, •) 2 + ∂ t u(t, •) 2 ≤ c * (1 + t) -2/(9+ε) ( ∇u 0 2 + u 1 2 ). (4.13) When ε 1 > 0 and b 2 ≡ 0, one has ∇u(t, •) 2 + ∂ t u(t, •) 2 ≤ c * (1 + t) -2/(11+ε) ( ∇u 0 2 + u 1 2 ). (4.14)
Remark 3.4. It is noteworthy to observe that the assumption a ∈ C 2 ([0, π]) of Proposition 3.1, as well as the condition ρ ∈ C 2 ([0, π]) in Remark 3.3, are needed in order to apply the general result which ensures the precise asymptotics (3.10)-(3.12). We are not aware of any result analogous to the precise expansion properties (3.10)-(3.12) in the general case where a, ρ are only in L ∞ (0, π).
Acknowledgment. This work was essentially done during the visit of the first author to the Mathematics Department of the Beijing Institute of Technology in October 2015. The first author would like to express his thanks to the Mathematics Department of the Beijing Institute of Technology for its hospitality. He would also like to thank his colleague Vincent Sécherre (Université Paris-Saclay, UVSQ, site de Versailles) for helpful discussions.
* This work was supported by the National Natural Science Foundation of China (grant No. 60974033) and Beijing Municipal Natural Science Foundation (grant No. 4132051).
01474701 | en | [ "spi" ] | 2024/03/04 23:41:48 | 2017 | https://enpc.hal.science/hal-01474701/file/D-DPC-Post-print.pdf | M H Khalili
S Brisard
M Bornert
email: michel.bornert@enpc.fr
P Aimedieu
email: patrick.aimedieu@enpc.fr
J.-M Pereira
J.-N Roux
Discrete Digital Projections Correlation: a reconstruction-free method to quantify local kinematics in granular media by X-ray tomography
Keywords: Computed tomography, Digital image correlation, Full-field measurement, Granular material, Tomographic reconstruction
We propose a new method to measure the translations and rotations of each individual grain in a granular material imaged by computerized tomography. Unlike the classic approach, which requires that both initial and current configurations be fully reconstructed, ours only requires a reconstruction of the initial configuration. In this sense, our method is reconstruction-free, since any subsequent deformed state can be analyzed without further reconstruction. One distinguishing feature of the proposed method is that it requires very few projections of the deformed sample, thus allowing for time-resolved experiments.
Introduction
The mechanical behaviour of granular materials has long been (and still is) investigated by means of macroscopic (e.g. oedometer and triaxial tests) experiments, see [START_REF] Wan | A simple constitutive model for granular soils: Modified stress-dilatancy approach[END_REF][START_REF] Sanzeni | Compression and creep of venice lagoon sands[END_REF][START_REF] Blanc | Intrinsic creep of a granular column subjected to temperature changes[END_REF][START_REF] Karimpour | Creep behavior in Virginia Beach sand[END_REF] among many others. Since the early eighties, computerized tomography [START_REF] Kak | Principles of computerized tomographic imaging[END_REF][START_REF] Hsieh | Computed tomography: principles, design, artifacts, and recent advances[END_REF] has been successfully invoked to complement these global experiments, first to track collective events (e.g. the onset of shear bands [START_REF] Desrues | Tracking strain localization in geomaterials using computerized tomography[END_REF][START_REF] Desrues | Void ratio evolution inside shear bands in triaxial sand specimens studied by computed tomography[END_REF]), then to quantify the rigid-body motion of each individual grain [START_REF] Hall | Discrete and continuum analysis of localised deformation in sand using X-ray µCT and volumetric digital image correlation[END_REF][START_REF] Andò | Grain-scale experimental investigation of localised deformation in sand: a discrete particle tracking approach[END_REF]. Together with numerical simulations based on the discrete element method (DEM) [START_REF]Discrete-element Modeling of Granular Materials[END_REF]12,[START_REF] Thornton | Quasi-static simulations of compact polydisperse particle systems[END_REF], these local measurements have the potential to deliver new insight on the complex behaviour of granular materials.
Full field measurements of 3D displacement fields are now performed almost routinely on 3D tomographic reconstructions by means of volumetric digital image correlation techniques. While the standard form of these techniques is better-suited to continua [START_REF] Bay | Digital volume correlation: Three-dimensional strain mapping using Xray tomography[END_REF][START_REF] Bornert | Mesure tridimensionnelle de champs cinématiques par imagerie volumique pour l'analyse des matériaux et des structures[END_REF][START_REF] Germaneau | Comparison between X-ray micro-computed tomography and optical scanning tomography for full 3d strain measurement by digital volume correlation[END_REF][START_REF] Lenoir | Volumetric digital image correlation applied to Xray microtomography images from triaxial compression tests on argillaceous rock[END_REF][START_REF] Roux | Threedimensional image correlation from X-ray computed tomography of solid foam[END_REF], discrete forms have later been devised with granular materials in mind (DV-DIC [START_REF] Hall | Discrete and continuum analysis of localised deformation in sand using X-ray µCT and volumetric digital image correlation[END_REF], ID-Track [START_REF] Andò | Grain-scale experimental investigation of localised deformation in sand: a discrete particle tracking approach[END_REF]). They led to significant advances in the understanding of complex phenomena, such as strain localization. In particular, it was shown that grains undergo large rotations within shear bands [START_REF] Hall | Discrete and continuum analysis of localised deformation in sand using X-ray µCT and volumetric digital image correlation[END_REF] that may have a width of several grains. The importance of the overall angularity of the grains was also highlighted [START_REF] Andò | Grain-scale experimental investigation of localised deformation in sand: a discrete particle tracking approach[END_REF][START_REF] Desrues | Strain localisation in granular media[END_REF].
Most volumetric digital image correlation techniques are based on the comparison of 3D images of the sample in its initial (undeformed) and current (deformed) states. Within the framework of computerized tomography, this means that a full tomographic scan is required in both initial and current states. This is a serious limitation for time-resolved experiments, as the total acquisition time of a full scan is of the order of the hour with laboratory facilities. While fast and ultra-fast tomography setups developed at synchrotron facilities [START_REF] Lhuissier | Ultra fast tomography: New developments for 4d studies in material science[END_REF][START_REF] Limodin | In situ investigation by X-ray tomography of the overall and local microstructural changes occurring during partial remelting of an Al-15.8 wt.% Cu alloy[END_REF][START_REF] Mader | High-throughput fullautomatic synchrotron-based tomographic microscopy[END_REF] can overcome this limitation, such a solution is sometimes unpractical.
In this paper, an alternative route is explored in order to reduce the total acquisition time of the current configuration. Instead of reducing the acquisition time of each individual radiographic projection (as in fast and ultra-fast setups, relying on very bright sources), we propose to reduce the overall number of projections itself. This idea is motivated by the fact that (assuming no breakage occurs) each grain undergoes a rigid body motion. Therefore, the current configuration is fully defined by a limited set of unknowns (3N translational degrees of freedom and 3N rotational degrees of freedom, where N is the total number of grains), which suggests that a limited number of projections should suffice to accurately identify these unknowns and fully reconstruct the local displacements.
Owing to the insufficient number of projections, the downside of this approach is of course the impossibility of carrying out a 3D reconstruction of the sample in its current state, at least with standard methods making use of projections of the current configuration only. In other words, correlations cannot be performed on the 3D reconstructions, and we propose to skip the reconstruction step and directly match the radiographic projections instead. In this sense, our method can be considered as reconstruction-free, as it does not require the reconstruction of the current configuration. It should however be noted that together with the reconstruction of the reference configuration, the kinematics of each grain thus determined can be used to reconstruct a posteriori the current configuration, if needed.
The proposed method proceeds as follows. A full scan of the sample in its initial state is first carried out. A 3D image is reconstructed and each grain is segmented. The sample then undergoes a transformation, and a limited number of projections are acquired (target projections). Applying a trial rigid body motion to each grain (whose shape, local attenuation and initial position and orientation are known from the segmentation of the initial 3D reconstruction), the resulting trial projections can be computed. The trial rigid body motions are then optimized in order to minimize the discrepancy between the trial projections and the target projections.
Since correlations are evaluated on the radiographic projections, rather than the 3D reconstruction, our method is similar in spirit to the Projection-based Digital Volume Correlation (P-DVC) recently proposed by Leclerc, Roux and Hild [START_REF] Leclerc | X-CT Digital Volume Correlation without Reconstruction[END_REF][START_REF] Leclerc | Projection savings in CT-based digital volume correlation[END_REF][START_REF] Taillandier-Thomas | Projection-based digital volume correlation: application to crack propagation[END_REF][START_REF] Taillandier-Thomas | Soft route to 4d tomography[END_REF] in a continuous setting. Like the present work, P-DVC is based on a full reconstruction of the sample in its initial state, and a few projections of the sample in its current state. Unlike the present work, the displacement is assumed continuous and is interpolated between the nodes of a superimposed mesh and there is no need for segmentation of the initial state.
The method proposed here should therefore be understood as the discrete version of the work of Leclerc, Roux and Hild [START_REF] Leclerc | Projection savings in CT-based digital volume correlation[END_REF], just like DV-DIC [START_REF] Hall | Discrete and continuum analysis of localised deformation in sand using X-ray µCT and volumetric digital image correlation[END_REF] is the discrete version of V-DIC [START_REF] Bay | Digital volume correlation: Three-dimensional strain mapping using Xray tomography[END_REF][START_REF] Bornert | Mesure tridimensionnelle de champs cinématiques par imagerie volumique pour l'analyse des matériaux et des structures[END_REF]. For this reason, we will refer to our method as D-DPC, for Discrete Digital Projections Correlation.
It is clear from the above description of D-DPC that the proposed method can be seen as an inverse problem.
The corresponding forward problem consists in finding the radiographic projections of an assembly of grains subjected to trial rigid body motions. This forward problem is addressed in Sec. 2. It is formulated so as to minimize discretization errors, while allowing for an efficient implementation. Sec. 3 is devoted to the inverse problem itself. We first define the distance function which is used to measure the discrepancy between trial and target projections. A few synthetic test cases then illustrate the performance of the method. Finally, this paper closes in Sec. 4 with an application of the proposed method to true projections of gravel submitted to simple geometric transformations. An estimate of the measurement error is provided. Our method is shown to compare well with more conventional discrete correlation techniques, while requiring a substantially smaller acquisition time.
At this point, the terminology adopted throughout this paper ought to be clarified. First, the term "pixels" (denoted p in this paper) will always refer to cells of the grid detector. Similarly, "voxels" (denoted v in this paper) will always refer to cells of the object space grid. The projection angle (rotation angle of the sample holder with respect to some reference direction) is denoted θ. Finally, upper indices between round brackets refer to grains.
In general, bounds will be omitted in sums. Expressions like Σ_p (...), Σ_v (...), Σ_θ (...) and Σ_i (...)^(i) should be understood as sums over all pixels, all voxels, all projections and all grains, respectively.
The projection model
The D-DPC method presented in this paper is formulated as an inverse problem. In view of solving this inverse problem, we first introduce a projection model which solves the following forward problem: find the radiographic projections of an assembly of objects (grains) which are subjected to trial rigid body motions. More precisely we assume that the geometry and initial position of the objects are known. Each object is subjected to an individual trial rigid body motion. In this section, we show how the projections of the resulting updated configuration can be computed by means of our projection model. In the remainder of this paper, objects which have been subjected to a rigid body motion will be called "transformed objects".
For untransformed objects, our model coincides with the approaches of most algebraic reconstruction techniques [START_REF] Kak | Principles of computerized tomographic imaging[END_REF][START_REF] Sidky | Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT[END_REF]. As such, it relies on the same assumptions. In particular, the in-plane dimensions of the pixels of the sensor are neglected. In other words, each pixel of the sensor is associated with a unique ray (no averaging over the surface of the pixel). For transformed objects, our model follows a Lagrangian approach: the rays are pulled back to the initial configuration. In the case of voxel-based geometrical descriptions of the objects, this reduces the accuracy losses.
The projection model is first developed in Sec. 2.1 for a single object subjected to a rigid body motion. It is then extended in Sec. 2.2 to an assembly of objects (e.g. granular materials), each object being subjected to an individual rigid body motion. Sec. 2.3 finally addresses some implementation issues for voxel-based representations of the grains.
Projections of a single object subjected to a rigid body motion
In the present paper, we consider a general tomography setup. The sample B is placed on a rotating stage. ∆ denotes its axis of rotation; it is oriented by the unit vector e ∆ . The origin O is placed on the rotation axis ∆, so that points can be identified in the remainder of this paper with their radius vector.
The sample is illuminated by an X-ray source (parallel or cone-beam), and the resulting projection is measured on a plane detector D. Each point p ∈ D of the detector is hit by a unique X-ray, the direction of which is given by the unit vector T (p), oriented from the source to the detector. We will assume that pixel values returned by the detector correspond to the intensity at the center of the pixel; therefore, p ∈ D will usually refer in the present paper to the center of a pixel of the detector. For parallel projections, we have T (p) = const. (see Fig. 1), while for cone-beam projections (see Fig. 2)
T(p) = (p - a) / ‖p - a‖, (1)
where a denotes the location of the X-ray point source (apex of the cone); in the case of parallel tomography, a is located at infinity.
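As an illustration only, the ray directions T(p) of Eq. (1) can be evaluated for a whole detector at once; this is a minimal sketch (the array names and the parallel-beam axis are assumptions, not taken from the authors' code):

```python
import numpy as np

def ray_directions(pixel_centers, source=None):
    """Unit ray directions T(p) for each detector pixel center p.

    pixel_centers: (n, 3) array of pixel centers p.
    source: (3,) position a of the X-ray point source (cone beam),
            or None for a parallel beam, here assumed to propagate along +z
            (Eq. (1) degenerates to a constant direction when a is at infinity).
    """
    if source is None:                      # parallel beam: T(p) = const.
        return np.tile([0.0, 0.0, 1.0], (len(pixel_centers), 1))
    d = pixel_centers - np.asarray(source)  # p - a
    return d / np.linalg.norm(d, axis=1, keepdims=True)
```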
No assumption is made regarding the geometry of the setup. In particular, it is not assumed that the detector D is parallel to the axis of rotation ∆ of the sample stage. Likewise, it is not assumed that the plane which contains the axis of rotation ∆ and the point source a is perpendicular to the detector D. Indeed, our method is formulated in an intrinsic way which does not require perfect geometries.
For the sake of simplicity, it will however be assumed in most applications presented below that the geometry of the setup is indeed "perfect": for parallel setups, the detector and the axis of rotation of the sample are parallel and T (p) is normal to the detector, while for cone-beam setups, the axis of rotation of the sample is parallel to the detector, and the plane formed by the point source and the axis of rotation is perpendicular to the detector.
In both cases of perfect geometries, it is convenient to introduce a global frame (O, e_x, e_y, e_z) defined as follows (see also Figs. 1 and 2). The unit vector e_z, normal to the detector, points in the direction of propagation of the X-rays. e_y = e_∆ is the (ascending) direction of the axis of rotation. Finally, e_x = e_y × e_z. Besides, for cone-beam setups, the origin O is placed at the intersection between the normal to the detector passing through the point source, and the axis of rotation.
A body B is placed on the sample holder. The map x ∈ B → µ(x) denotes its local linear absorption coefficient. Then, from the Beer-Lambert law, we get the following projection formula
ln[I_0/I](p) = ∫ µ(p + sT(p)) ds, (2)
where s denotes the arc-length along the ray (p, T (p)), I 0 and I are the incident and transmitted intensities. In practice, for each pixel of the sensor, the ratio [I 0 /I](p) is deduced from the grey level. This preliminary calibration corrects for the possible non uniformity of the incident intensity and of the sensitivity of the individual pixels of the sensor [START_REF] Ketcham | Acquisition, optimization and interpretation of X-ray computed tomographic imagery: applications to the geosciences[END_REF].
It might be argued that the Beer-Lambert law (2) does not fully account for the complex physical phenomena governing the formation of a radiographic projection. It is assumed that (i) phase contrast is negligible in front of attenuation contrast [START_REF] Maire | On the application of x-ray microtomography in the field of materials science[END_REF] and (ii) the X-ray source is monochromatic and beam hardening effects are disregarded [START_REF] Boas | CT artifacts: causes and reduction techniques[END_REF]. Most tomographic reconstruction techniques currently in use rely on these assumptions; experience shows that these methods perform extremely satisfactorily, even on scans acquired in a laboratory (polychromatic) setup. Our method is nothing but a constrained classical algebraic reconstruction technique. As such, the use of the Beer-Lambert law ( 2) is not more (but not less) questionable for our method than for traditional reconstruction techniques.
B is now submitted to a rigid body motion prior to projection. This motion is defined by the translation vector u ∈ R 3 , the rotation center c and the rotation tensor Ω ∈ SO(3), so that the point initially located at X is transported to x
x = Ω · (X - c) + u + c. (3)
The above equation is intrinsic in the sense that it does not require a frame of reference; as such, Ω should really be understood as a rotation tensor. In the global frame of reference (O, e x , e y , e z ) introduced above, this rotation tensor can be represented by a rotation matrix.
Observing that the absorption coefficient µ is conserved during the motion, the projection at pixel p of the transformed sample results from the combination of Eqs. ( 2) and ( 3)
ln[I_0/I](p) = ∫ µ( Ω^T · (p + sT(p) - u - c) + c ) ds. (4)
In a tomography experiment, the sample stage is rotated by an angle θ about the axis of rotation ∆, oriented by the unit vector e_∆ (see Figs. 1 and 2); R_θ denotes the corresponding rotation tensor (the rotation center is the origin).
Composing the rigid body motion of the body and the rotation of the sample stage, it is readily seen that the point initially located at X is transported to x, given by
x = R_θ · Ω · (X - c) + R_θ · (u + c), (5)
while the final expression of the projection at pixel p and angle θ of the transformed body B reads
ln[I_0/I](p, θ) = ∫ µ( Ω^T · ( R_θ^T · (p + sT(p)) - u - c ) + c ) ds. (6)
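The change of variables underlying Eq. (6) amounts to pulling a current-configuration point x = p + sT(p) back to the reference configuration of the body. A hedged sketch of this inversion of Eq. (5) is given below; it is only meant to make the composition of the two rotations explicit and does not reproduce the authors' implementation:

```python
import numpy as np

def pull_back(x, stage_rot, body_rot, u, c):
    """Reference-configuration point X corresponding to a current point x,
    inverting Eq. (5): X = Omega^T (R_theta^T x - u - c) + c.

    stage_rot: 3x3 sample-stage rotation matrix R_theta.
    body_rot:  3x3 body rotation matrix Omega.
    u, c:      translation vector and rotation center of the body.
    """
    return body_rot.T @ (stage_rot.T @ x - u - c) + c
```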
Extension of the projection model to granular materials
We now consider a granular medium. It is submitted to a mechanical loading, causing each grain to undergo a rigid body motion. Prior to loading, a first set of projections leads to a reconstruction of the 3D map x → µ(x) of the linear attenuation in the reference configuration (unloaded sample, unrotated sample stage).
The linear absorption coefficient of air is significantly smaller than that of the grains and will be neglected in the remainder of this paper; in other words, it is assumed that µ(x) = 0 outside the grains.
The reconstructed image is then segmented; in other words, the total attenuation µ is decomposed as follows
µ(x) = Σ_i µ^(i)(x), (7)
where the sum extends to all grains, and µ (i) denotes the local attenuation of the i-th grain (µ (i) (x) = 0 outside grain i).
The rigid body motion of grain i is defined by the translation vector u (i) , the rotation center c (i) and the rotation tensor Ω (i) . The projection of this grain is then retrieved from Eq. ( 6)
P^(i)(θ, p; u^(i), Ω^(i)) = ∫ µ^(i)( (Ω^(i))^T · ( R_θ^T · (p + sT(p)) - u^(i) - c^(i) ) + c^(i) ) ds. (8)
It should be noted that arbitrary choices of the rotation centers c (i) (which are licit) might induce artificially large variations of the amplitude of the translation u (i) from grain to grain, which in turn might lead to convergence issues for the inverse problem considered in Sec. 3. To avoid such issues, the center of rotation c (i) of grain (i) was placed at its center of mass. In other words, u (i) is the translation of the center of mass of grain i.
Summing Eq. (8) over all grains, we get the following expression of the projection of the deformed sample: P(θ, p; u^(1), Ω^(1), ..., u^(n), Ω^(n)) = Σ_i P^(i)(θ, p; u^(i), Ω^(i)). (9) The above expression defines our projection model, which solves the forward problem. In other words, if the translations and rotations of each grain are known, we can evaluate the projections of the deformed granular material.
For later use in the inverse problem (17) which defines the D-DPC method, the grain rotations Ω^(i) must be parameterized. We use Rodrigues' formula [START_REF] Rodrigues | Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire[END_REF][START_REF] Argyris | An excursion into large rotations[END_REF] to map the rotation vector ω = ω n (ω: angle of rotation; n: axis of rotation, ‖n‖ = 1) to the rotation tensor Ω
Ω = I + (sin ω / ω) ω̂ + ((1 - cos ω) / ω^2) ω̂^2 = exp(ω̂), (10)
where we have introduced the skew-symmetric tensor ω̂ such that ω̂ · x = ω × x for all x ∈ R^3. It should be noted that this parameterization is not differentiable at some points (namely, ω = 2π) [START_REF] Cardona | A beam finite element non-linear theory with finite rotations[END_REF][START_REF] Ibrahimbegović | Computational aspects of vector-like parametrization of threedimensional finite rotations[END_REF], which might cause issues with gradient-based optimization algorithms. In the present study, the rotations are relatively small, so that it was not required to consider this corner case.
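For reference, Eq. (10) translates directly into a few lines of code; the sketch below is a generic implementation of Rodrigues' formula (the small-angle cutoff is an assumption added for numerical safety, not part of the paper):

```python
import numpy as np

def rodrigues(omega_vec):
    """Rotation matrix from a rotation vector, Eq. (10): Omega = exp(omega_hat)."""
    omega = np.linalg.norm(omega_vec)
    if omega < 1e-12:                      # small-angle limit: identity matrix
        return np.eye(3)
    wx, wy, wz = omega_vec
    hat = np.array([[0.0, -wz,  wy],
                    [ wz, 0.0, -wx],
                    [-wy,  wx, 0.0]])      # omega_hat, so that hat @ x = omega_vec x x
    return (np.eye(3)
            + np.sin(omega) / omega * hat
            + (1.0 - np.cos(omega)) / omega**2 * hat @ hat)
```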
We then introduce the compact notation
q = [u_1^(1), u_2^(1), u_3^(1), ω_1^(1), ω_2^(1), ω_3^(1), ..., u_1^(n), u_2^(n), u_3^(n), ω_1^(n), ω_2^(n), ω_3^(n)]^T, (11)
which gathers in a unique column-vector the parameters defining the rigid body motion of each grain. The projection model defined by Eqs. ( 8) and ( 9) can then be written
P(θ, p; q) = Σ_i P^(i)(θ, p; q^(i)), (12)
where q^(i) = S^(i) · q and the 6 × 6n matrix S^(i) selects in q the rows corresponding to u^(i) and ω^(i):
S^(i) = [O_6, ..., O_6 (i-1 times), I_6, O_6, ..., O_6 (n-i times)], (13)
(O 6 : 6 × 6 null matrix; I 6 : 6 × 6 identity matrix).
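In practice the selection of Eq. (13) need not be carried out with an explicit matrix; a possible sketch (not the authors' code) simply reshapes q grain by grain:

```python
import numpy as np

def split_dofs(q):
    """Split the global vector q of Eq. (11) into per-grain (u, omega) pairs,
    i.e. apply the selection S^(i) of Eq. (13) without building the matrix."""
    q = np.asarray(q).reshape(-1, 6)       # one row of 6 dofs per grain
    return [(row[:3], row[3:]) for row in q]
```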
Implementation of the projection model
In situations of practical interest, the 3D map of the linear attenuation results from an initial 3D reconstruction of the (undeformed) granular sample. It is therefore voxelized, and we write the attenuation as the following discrete sum
µ(x) = Σ_v µ(v) χ(x - v), (14)
where v denotes the center of the current voxel and χ is the indicator function of the voxel centered at the origin. The projection formula (2) then becomes
ln[I_0/I](p) = Σ_v µ(v) ∫ χ(p + sT(p) - v) ds, (15)
where the integral in the above equation is the chord length of the voxel centered at v intersected by the ray (p, T (p)). Thus, the value of the projection is the summation of the chord lengths weighted by their attenuation coefficients µ(v). This summation is referred to as the radiological path and can be efficiently evaluated by means of Siddon's algorithm [START_REF] Siddon | Fast calculation of the exact radiological path for a three-dimensional CT array[END_REF]. The improved version proposed by Jacobs [START_REF] Jacobs | A fast algorithm to calculate the exact radiological path through a pixel or voxel space[END_REF] was implemented here.
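The exact Siddon/Jacobs traversal is somewhat lengthy, so the sketch below only approximates the radiological path of Eq. (15) by uniform sampling along the ray; it conveys the idea of accumulating attenuation along (p, T(p)), but the step size, the voxel-grid convention and the function names are assumptions rather than the implemented algorithm:

```python
import numpy as np

def radiological_path(mu, origin, direction, s_max, ds=0.25, voxel_size=1.0):
    """Approximate line integral of mu along origin + s * direction, 0 <= s <= s_max.

    mu: 3D array of voxel attenuations. The exact chord-length weighting of
    Siddon/Jacobs is replaced here by uniform sampling with step ds.
    """
    s = np.arange(0.0, s_max, ds)
    pts = origin[None, :] + s[:, None] * direction[None, :]
    idx = np.floor(pts / voxel_size).astype(int)           # voxel containing each sample
    inside = np.all((idx >= 0) & (idx < np.array(mu.shape)), axis=1)
    i, j, k = idx[inside].T
    return mu[i, j, k].sum() * ds
```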
Eq. ( 15) should be evaluated for every pixel of the detector. In Siddon's algorithm, unnecessary computations are avoided by considering only the voxels that are actually intersected by the current ray. A further gain can be made by only considering the rays that intersect the grain: to this end, bounding boxes are attached to each grain.
To close this section, we note that our projection model is based on a Lagrangian approach. In order to compute the projection of a transformed object, the inverse transform is applied to the rays, and the untransformed object is then projected along these Lagrangian rays. The alternative, Eulerian approach might seem more natural: the direct transform is applied to the object itself, which is then projected. However, this latter approach would require to discretize the (already discretized) object over a rotated grid. Our Lagrangian approach avoids the additional data losses that might possibly result from this rediscretization.
The Discrete Digital Projection Correlation method
The projection model presented in Sec. 2 allows to generate digital projections of an assembly of grains subjected to arbitrary rigid body motions. It is recalled that P(θ, p; q) denotes the resulting digital projection (θ: rotation angle of the sample holder; p: center of detector pixel; q: generalized displacements of grains). We are now in a position to provide a definition of the Discrete Digital Projection Correlation method (D-DPC), which is formulated as an inverse problem, where the above projection model is used as the forward solver.
Formulation of the method
We consider a granular sample undergoing a geometric transformation resulting from e.g. mechanical loading. As already argued in Sec. 2.2, it is assumed that a full set of projections of the initial configuration is available, allowing for a fine description of the geometry and position of each grain. We then record a few projections P(θ, p) of the sample in its deformed (current) state. For each trial generalized displacement q of the grains, we compute the corresponding set of trial projections P(θ, p; q), and evaluate the discrepancy between experimental and trial projections. Minimizing this discrepancy with respect to q then leads to an estimate of the displacements of each grain. In the present work, we selected the following objective function as a measure of the discrepancy between the trial projections P(θ, p; q) and the measured projections P(θ, p):
F(q) = Σ_θ Σ_p [ P(θ, p; q) - P(θ, p) ]^2, (16)
where the sum runs over the limited set of projection angles θ and all pixels p of the detector. Minimization of the objective function F is known to deliver the maximum likelihood estimate of q for projections corrupted with Gaussian noise.
In the more realistic case of Poisson noise, a different cost function ought to be adopted [START_REF] Shepp | Maximum likelihood reconstruction in positron emission tomography[END_REF]; this is ongoing work.
To sum up, the grain displacements are retrieved from the following optimization problem
q̂ = arg min_q F(q). (17)
Numerical optimization of the cost function is carried out with the Levenberg-Marquardt method, which is well-suited to nonlinear least-squares problems. At this stage, its performance has not been compared with other optimization techniques. It requires the partial derivatives of P with respect to the parameters q, which are estimated by finite differences. Our implementation accounts for the sparsity of the resulting Jacobian matrix [see Eq. (12)]: the partial derivatives of P^(i) with respect to q^(j) are not evaluated for j ≠ i. Furthermore, P^(i) and its derivatives are evaluated simultaneously to avoid redundant function calls. Nonlinear optimization methods are known to be sensitive to the initial guess; this point will be addressed in Sec. 3.3.
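To fix ideas, the optimization problem (17) can be wired to SciPy's Levenberg-Marquardt driver as sketched below; this is not the authors' code, and the helper project(q), which stacks the trial projections P(θ, p; q) over all angles and pixels, is assumed to exist:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(q, project, target):
    """Stacked residuals P(theta, p; q) - P(theta, p) over all angles and pixels."""
    return (project(q) - target).ravel()

def run_ddpc(project, target, q0):
    # method="lm" selects the Levenberg-Marquardt algorithm wrapped by SciPy;
    # the Jacobian is estimated by finite differences, as in the paper.
    sol = least_squares(residuals, q0, args=(project, target), method="lm")
    return sol.x
```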
Validation of the method
In this section, we present a few test-cases of the D-DPC method, where the "experimental" projections P(θ, p) [see Eq. (16)] are in fact generated numerically from reference images of grains by means of our projection model. In other words, P(θ, p) = P(θ, p; q_exact), (18) where q_exact is the generalized displacement that we impose to the grains and expect to retrieve through minimization of the cost function F [see Eq. (16)].
The simulations presented here are restricted to the case of parallel projection of two-dimensional objects in the xz plane (perfect geometry, see Fig. 1). The rigid body motions of the grains are then characterized by three scalars (two displacements and one angle of rotation): let u (i) and w (i) denote the translation of the center of mass of grain i along the x and z directions, and ω (i) its angle of rotation about its center (x (i) , z (i) ).
It should be observed that displacements along the R T θ • e z direction induce no change in the projection at angle θ. Hence, at least two projections are needed to fully resolve the displacement of a grain.
The simulations presented below are successful if q_D-DPC = q_exact (up to a specified tolerance), where q_D-DPC denotes the D-DPC estimate of the displacements. The initial guess for the optimization algorithm will always be taken as the reference state (q_init = 0). Fig. 3 shows a digital image of the grains considered here. This image corresponds to the reference (initial) configuration, q = 0. In order to ensure that we explored realistic shapes and sizes of grains, as well as gray level variations within grains, this image was extracted from the (experimental) tomographic reconstruction of a real granular material. However, it is again emphasized that the projections P(θ, p) are generated numerically from this experimental image.
Validation with two projections and small displacements
In this section, only two projections are considered, at angles θ = 22.5 • and 112.5 • (see Fig. 3). All components of the applied rigid body motion q exact are selected randomly: translations u (i) and w (i) are uniformly distributed between -1 pix and 1 pix (about one tenth of the diameter of the grains), while rotations ω (i) are uniformly distributed between -6 • and 6 • .
As a first test, we carried out the D-DPC optimization described in Sec. 3.1 on each grain individually. In other words, we optimized the following cost functions [compare with Eq. ( 16)]
F^(i)(q) = Σ_θ Σ_p [ P^(i)(θ, p; q^(i)) - P^(i)(θ, p; q^(i)_exact) ]^2, (19)
and retrieved q (i) D-DPC = q (i) exact (up to machine accuracy) for each grain i.
We then tested our method on assemblies of grains. For loose assembly of grains, the rays intersect a limited number of grains. Each pixel of the sensor therefore measures information relating to a small number of grains, and we expected the D-DPC method to deliver more accurate results in this case. In order to verify this intuition, we studied three different groups of grains (subsets of the assembly shown in Fig. 3). In the first group (labeled "loose" in what follows), the six selected grains are separated by roughly two diameters (see blue squares on Fig. 3). In the second group (labeled "dense" in what follows), the six selected grains are separated by a few pixels (see red circles on Fig. 3). Finally, the tests were also carried out on all grains shown in Fig. 3. It should be noted that in all these test cases, the grains are not in contact; more realistic configurations are tested in Sec. 4. To quantify the accuracy of our method, we measure the component-wise maximal relative error of q exact and q D-DPC . The results (averaged over five realizations of q exact ) are reported in table 1. It is observed that the minimization is successful in all three cases. In particular, contrary to what we expected, convergence of the method is not affected by the density of the sample. This is a very desirable feature for future applications to real, experimental situations. We finally note that the number of iterations of the Levenberg-Marquardt algorithm grows with the number of grains.
Validation with two to six projections and large displacements
In this section, we present tests carried out with larger displacements, defined as follows. The translations are deterministic
u^(i) = α x^(i), (20a)
w^(i) = β z^(i), (20b)
with α = 0.15 and β = 0.1. Rotations ω (i) are sampled from a uniform distribution between -30 • and 30 • . The error was measured as in Sec. 3.2.1 above, and the results are reported in table 2, where it is observed that the D-DPC method fails in this case with two projections (θ = 22.5 • , 112.5 • ). We therefore carried out two additional simulations with four (θ = 22.5 • , 67.5 • , 112.5 • , 157.5 • ) and six (θ = 22.5 • , 52.5 • , 82.5 • , 112.5 • , 142.5 • , 172.5 • ) projections. With four projections, the error was still unacceptably high, while six projections led to an excellent accuracy. It is very likely that with two and four projections, the optimization algorithm converged to a local minimum. This point is discussed in the next section.
Sensitivity to the initial guess
The D-DPC cost function F defined by Eq. ( 16) is not convex. Therefore, for the Levenberg-Marquardt method to converge, the initial guess should be close enough to the global minimum. Otherwise, the D-DPC method may return a local minimum. In true, experimental conditions, the load should be applied in small increments, and the D-DPC method should be applied at each load step, using as initial guess for the current load step the converged generalized displacement at the previous load step.
In the present section, we study the sensitivity to the initial guess empirically. We consider the D-DPC cost function F corresponding to the two-dimensional, parallel projections of one grain
F(u, w, ω) = Σ_{θ,p} [ P(θ, p; u, w, ω) - P(θ, p) ]^2 = Σ_{θ,p} [ P(θ, p; u, w, ω) - P(θ, p; 0, 0, 0) ]^2, (21)
where the two "experimental" projections P(θ, p) (θ = 22.5 • , 112.5 • ) are generated from the 2D image of the grain marked with a green cross on Fig. 3 (largest diameter: 37 pix). In the present example, the exact generalized displacement is q exact = 0, and an empirical study of this function in the neighborhood of this minimum is provided.
Our observations show that the optimization procedure is more sensitive to the initial value of the rotation ω than translations u and w. We therefore focus in what follows on a 1D cross-section of the cost function: ω → F(0, 0, ω), where the translations u and w are frozen. The resulting crosssection is plotted in Fig. 4 (left axis) for values of ω ranging from -90 • to 90 • . Clearly, the function is convex only in the neighborhood of the minimum ω = 0, and the initial guess of ω should be selected in this neighborhood. This is illustrated on Fig. 4 (right axis), where the symbols show the converged value of the rotation ω D-DPC , for the initial guess (0, 0, ω init ), where ω init takes the values -90 • , -45 • , -22.5 • , -4.5 • , 4.5 • , 22.5 • , 45 • , 90 • . The simulation is successful if the converged value of the rotation is null (up to machine accuracy). In Fig. 4, successful simulations correspond to the green squares lying on the x-axis. These results confirm that the initial guess must be close enough to the solution.
It should be noted that we deliberately considered extremely large rotations: true rotations usually observed in experimental conditions are much smaller [START_REF] Hall | Discrete and continuum analysis of localised deformation in sand using X-ray µCT and volumetric digital image correlation[END_REF][START_REF] Andò | Grain-scale experimental investigation of localised deformation in sand: a discrete particle tracking approach[END_REF]. It is observed that successful convergence is obtained for initial guesses of the rotation which are in error by about 30°. This illustrates the robustness of our method.
To close this section, we mention that the same analysis (not presented here) can be carried out on the translations. Our simulations show that convergence to the exact displacement is obtained for initial guesses of the translations which are in error by several pixels. The amplitude of the convergence domain is deemed large enough for practical applications, especially if the load is applied in small increments.
Accounting for brightness and contrast evolutions of the projections
Changes of the intensity of the X-ray source and imperfections of the grey level calibration procedure can induce variations of brightness and contrast between the two series of projections. In standard DIC or V-DIC algorithms, a local brightness and contrast correction (to be optimized) is usually applied to the 3D images. Similarly, we introduce two additional optimization parameters, namely: a (global scaling of the grey levels) and b (global shift of the grey levels). The cost function F now reads
F(a, b, q) = Σ_θ Σ_p [ P(θ, p; q) - (a P(θ, p) + b) ]^2, (22)
[compare with Eq. ( 16)].
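Note that for a fixed q the optimal pair (a, b) in Eq. (22) solves an ordinary linear least-squares problem, so it can be updated in closed form; one possible way to do so (not necessarily how it was implemented by the authors) is:

```python
import numpy as np

def best_brightness_contrast(p_trial, p_target):
    """Least-squares (a, b) minimizing sum (p_trial - (a * p_target + b))**2."""
    A = np.column_stack([p_target.ravel(), np.ones(p_target.size)])
    (a, b), *_ = np.linalg.lstsq(A, p_trial.ravel(), rcond=None)
    return a, b
```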
Implementation of the method
The standard Python implementation of the Levenberg-Marquardt method provided by the scipy.optimize package 1 was used to solve the optimization problem defined by Eq. (17). Naturally, the bottleneck of the code is the evaluation of the cost function F defined by Eq. (16); as such, it was implemented with great care. The Cython 2 [START_REF] Behnel | Cython: The best of both worlds[END_REF][START_REF] Smith | Cython -A Guide for Python Programmers[END_REF] static compiler was used to produce a native C-extension for Python of the projection operator P^(i)(θ, p; q^(i)) defined by Eq. (9). Then, observing that each term of the sum appearing in Eq. (16) can be evaluated independently, the MapReduce pattern [START_REF] Mccool | Structured Parallel Programming[END_REF] was used for the parallelization of the computation of the objective function F. This means that each available core is in charge of computing the projections of a subset of all grains.
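The Cython extension itself is not reproduced here; the following pure-Python sketch only illustrates the map/reduce split described above, with each worker projecting a subset of grains and the partial images summed afterwards (function and variable names are placeholders):

```python
from functools import reduce
from multiprocessing import Pool

def total_projection(project_one_grain, grain_dofs, processes=4):
    """Map: project each grain with its own (u, omega); Reduce: sum the images.

    project_one_grain must be a picklable, top-level function returning one
    partial projection image (e.g. a numpy array) per grain.
    """
    with Pool(processes) as pool:
        partial_images = pool.map(project_one_grain, grain_dofs)
    return reduce(lambda a, b: a + b, partial_images)
```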
Experimental validation
In the present section, the D-DPC methodology is applied to a simple experiment carried out on a simplified granular medium.
Specimen
The specimen is an assembly of 15 grains of limestone gravel from the Boulonnais quarries (mean diameter of the grains: 5 mm, density: 2.6). The grains are placed in a polypropylene syringe (diameter: 10 mm; height: 30 mm), see Fig. 5. The X-ray absorption of the container is small compared to that of gravel; it only contributes to about 10 % of the sinogram (defined here loosely as the set of radiographic projections). All scans were performed at 100 kV and 500 µA, with a frame rate of 2 images per second. Each projection is the result of averaging 40 images. Owing to the rather large effective exposure time (20 s), the signal-to-noise ratio reaches approximately 50 (see below), which allowed us to neglect noise-induced errors in the evaluation of the D-DPC method presented below. Further studies (to be reported elsewhere) indeed confirm that noise-induced errors are dominated by discretization and modelling errors.
According to the geometry of the tomography setup, the voxel size was estimated to 0.112 mm • vox -1 . Fig. 6 shows two orthogonal projections of the sample.
Three full scans (352 radiographs spanning the full 360 • ) were then performed. For scans 1 and 2, the position of the sample was unchanged (reference configuration), while for scan 3, a 3.5 mm (±0.1 µm, accuracy of moving stage) translation in the Oxz plane (see Fig. 2) was applied to the sample.
The D-DPC method is based on an algebraic projection operator. Consistency then requires that the initial configuration be reconstructed by means of the same projection operator (as was also observed by Leclerc and coauthors [START_REF] Leclerc | Projection savings in CT-based digital volume correlation[END_REF]). This of course precludes the use of efficient reconstruction techniques such as the Filtered Back-Projection. Instead, we implemented the Simultaneous Algebraic Reconstruction Technique (SART), parallelism being provided by the Portable Extensible Toolkit for Scientific Computation (PETSc) [START_REF] Balay | Efficient management of parallelism in object oriented numerical software libraries[END_REF][START_REF] Balay | PETSc users manual[END_REF]. Our implementation was then applied to a 180 × 280 pix 2 region of interest of the detector (leading to a 180 × 280 × 180 vox 3 reconstructed volume).
In order to set up a stopping criterion for the SART iterations, we first defined the relative residual error
ε = ‖b - Ax‖_2 / ‖b‖_2, (23)
where A denotes the projection operator, b denotes the sinogram, and x is the unknown reconstruction. For all three scans, the iterations were stopped when no significant reduction of the error was observed. In all cases, this led to ε ≈ 3 %, which is consistent with a stopping criterion based on the discrepancy principle [START_REF] Morozov | Methods for Solving Incorrectly Posed Problems[END_REF]. Indeed, it should be observed that the specimen was not moved between scans 1 and 2. In other words, the projections from these two scans are essentially identical, up to noise. Their difference therefore gives a reliable estimate of the signal-to-noise (SNR) ratio, which was computed as follows
SNR = ‖b_1 + b_2‖_2 / (2 ‖b_2 - b_1‖_2), (24)
where b_1, b_2 denote the sinograms from the first and second scans, respectively. We found that 1/SNR ≈ 2 %. The residual error at the end of the iterative reconstruction was therefore comparable to the amplitude of the noise, which validates our stopping criterion.
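For completeness, Eqs. (23) and (24) correspond to the following one-liners (a sketch; A_dot stands for any callable applying the projection operator A):

```python
import numpy as np

def relative_residual(A_dot, x, b):
    """Eq. (23): ||b - A x||_2 / ||b||_2, with A_dot(x) applying the projection operator."""
    return np.linalg.norm(b - A_dot(x)) / np.linalg.norm(b)

def snr(b1, b2):
    """Eq. (24): norm of the mean sinogram over the norm of the difference."""
    return 0.5 * np.linalg.norm(b1 + b2) / np.linalg.norm(b2 - b1)
```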
Segmentation of the reconstructed volume
Segmentation of the reconstructed volume (reference configuration) is a crucial step of our method, since the geometry of each grain is required for the determination of their displacements. We applied the watershed algorithm [START_REF] Beucher | The watershed transformation applied to image segmentation[END_REF][START_REF] Beucher | Use of watersheds in contour detection[END_REF] to the distance transform of the thresholded image. To avoid over-segmentation, regional minima whose depth was less than a specified threshold were suppressed by means of the Hminima transform [START_REF] Soille | Morphological Image Analysis: Principles and Applications[END_REF]. Fig. 7 shows the resulting segmented image.
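A possible implementation of this segmentation pipeline with scipy.ndimage and scikit-image is sketched below; the threshold and the H-minima depth h are user-chosen parameters, and the exact functions used by the authors are not specified in the paper:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_minima
from skimage.segmentation import watershed  # older scikit-image: skimage.morphology.watershed

def segment_grains(volume, threshold, h):
    """Label individual grains: threshold, distance transform, suppression of
    shallow minima via the H-minima transform, then watershed (one possible recipe)."""
    binary = volume > threshold
    dist = ndi.distance_transform_edt(binary)
    seeds = h_minima(-dist, h)              # keep only minima deeper than h
    markers, _ = ndi.label(seeds)
    return watershed(-dist, markers, mask=binary)
```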
Application of the D-DPC method
The segmented 3D reconstruction resulting from Scan 1 was used as a reference configuration for the generation of trial projections within the framework of the D-DPC method, which was then applied to Scans 2 and 3, successively.
Application to Scan 2 The specimen is untransformed; therefore, our method should converge to null displacements for all grains. Deviations from this expected result provide an estimate of the accuracy of the D-DPC method.
Application to Scan 3 The specimen was subjected to a 3.5 mm (31.25 vox) translation in the Oxz plane; therefore, the D-DPC method should converge to the same rigid body motion for all grains. The expected value of this rigid body motion was estimated by means of the standard Volumetric Digital Image Correlation technique (VDIC) [START_REF] Lenoir | Volumetric digital image correlation applied to Xray microtomography images from triaxial compression tests on argillaceous rock[END_REF], leading to
q_VDIC = (2.78 mm, 0.00 mm, -2.12 mm) = (24.8 vox, 0.0 vox, -18.9 vox), (25)
with ‖q_VDIC‖ = 3.50 mm.
Again, deviations from this expected result provide an estimate of the accuracy of the D-DPC method. In both cases, only 4 projections (0°, 45°, 90°, 135°) were used to estimate the rigid body motion of each grain. The results are presented in Tables 3 (Scan 2) and 4 (Scan 3). The measured translations and rotations are averaged over all grains, while the corresponding standard deviation is used as a measure of the method accuracy. For both scans, we found that the accuracy was about 0.01 mm (0.1 vox) for translations and 1° for rotations.
It is again emphasized that only 4 projections were required to achieve the reported accuracy, although the initial guess passed to the optimization algorithm provided purposedly a very poor estimate of the true displacements. Indeed, the reference configuration was used as initial guess (q init = 0). If the initial guess is chosen within 1 vox (translations) and 6 • (rotations) of the true displacements, then the accuracy achieved with only 2 projections (the bare minimum) was similar for translations, and only slightly degraded for rotations.
To close this section, we note that Leclerc and coauthors [START_REF] Leclerc | Projection savings in CT-based digital volume correlation[END_REF] report similar accuracies when using very few projections.
Conclusion
In this paper, we have proposed a new method based on X-ray microtomography to capture the movements of grains within granular materials. The most salient feature of this method, which we called D-DPC (Discrete Digital Projection Correlation), is that it does not require a 3D reconstruction of the specimen in its current (deformed) state (it does require a reconstruction of the specimen in its initial state). Our tests show that as few as two projections suffice to deliver a satisfactory estimate of the displacements. This results in a dramatic reduction of the acquisition time, therefore allowing for time-dependent phenomena (such as creep) to be studied.
The D-DPC method is formulated as an inverse problem, which is fully stated in the present paper. Both synthetic and real-life test cases confirm the value of the method, which is accurate to about 0.1 vox (translations) and 1° (rotations) in standard laboratory experimental conditions. Although more conventional correlation techniques can achieve more accurate measurements, we believe that this is largely compensated by the significant gain in acquisition time that our method offers. Besides, we are currently investigating the potential sources of errors (noise, fluctuations of the beam, ...) as well as several ways to improve the accuracy of D-DPC, such as refining the projection model, accounting for geometric imperfections of the tomography setup, ...
Fig. 1 Tomography setup for parallel (synchrotron) tomography.
Fig. 3 Image of the 30 grains considered for the validation of the D-DPC method in Sec. 3.2. The blue squares (resp. red circles) indicate the grains belonging to the "loose" (resp. "dense") set. The typical diameter of the grains is about 30 pix.
Fig. 4 Plot of the objective function F considered in Sec. 3.3 around its minimum (continuous line, left axis). F is not convex for large values of ω, and the initial guess ought to be close enough to the minimum, as illustrated by the symbols (right axis). See main text for a description of this plot.
Fig. 5 The specimen considered in Sec. 4. The photograph also shows the sample stage of the tomography setup.
4.2 X-ray microtomography and reconstruction
X-ray microtomography experiments were performed at Laboratoire Navier with an Ultratom scanner from RX Solutions combining a Hamamatsu L10801 X-ray source (230 kV, 200 W, smallest spot size: 5 µm) and a Paxscan Varian 2520V flat-panel imager (1920 × 1560 pix^2, pixel size 127 µm).
Fig. 6 Two orthogonal projections of the specimen used for validating the D-DPC method (see Sec. 4).
Fig. 7 Two orthogonal cross-sections through the segmented, reconstructed volume.
Table 1 Results of the tests described in Sec. 3.2.1. For all three assemblies of grains, the table reports the maximum component-wise relative error on the generalized displacement, as well as the number of iterations of the Levenberg-Marquardt algorithm.
                  Loose       Dense       All grains
Max. rel. err.    3 x 10^-13  2 x 10^-13  13 x 10^-13
Num. iter.        8           8           12

Table 2 Results of the tests described in Sec. 3.2.2. The table reports the number of projections, the maximum component-wise relative error on the generalized displacement and the number of iterations of the Levenberg-Marquardt algorithm (all grains).
Num. proj.   Max. rel. err.   Num. iter.
2            27               67
4            3                935
6            3 x 10^-13       42

Table 3 Application of the D-DPC method to Scan 2 (see Sec. 4.4). The components of the translation u and the rotation ω are averaged over all grains. Figures in parentheses are standard deviations over the grains.
           X            Y            Z
u [mm]     0.00 (0.04)  0.00 (0.02)  0.01 (0.03)
u [vox]    0.0 (0.3)    0.0 (0.2)    0.1 (0.3)
ω [°]      -0.2 (0.7)   -0.2 (0.9)   0.1 (0.8)

Table 4 Application of the D-DPC method to Scan 3 (see Sec. 4.4). The components of the translation u and the rotation ω are averaged over all grains. Figures in parentheses are standard deviations over the grains.
           X            Y            Z
u [mm]     2.76 (0.03)  0.01 (0.04)  -2.12 (0.04)
u [vox]    24.7 (0.3)   0.1 (0.4)    -18.9 (0.4)
ω [°]      -0.2 (1.5)   -0.8 (1.9)   0.3 (0.8)
1 https://www.scipy.org/scipylib/. Retrieved 15 December 2016.
2 http://cython.org/. Retrieved 15 December 2016.
Acknowledgements This work has benefited from a French government grant managed by ANR within the frame of the national program Investments for the Future ANR-11-LABX-022-01.
The authors would like to thank Matthieu Vandamme for fruitful discussions. | 49,950 | [
"5427",
"2804",
"172093",
"1022614",
"2284",
"17285"
] | [
"204904",
"204904",
"204904",
"204904",
"204904",
"204904"
] |
01483571 | en | [ "chim", "phys" ] | 2024/03/04 23:41:48 | 2016 | https://theses.hal.science/tel-01483571/file/RAKOTONIRINA_Andriarimina_2016LYSEN047_These.pdf |
These few lines are dedicated to all the wonderful and admirable people I have had the opportunity to meet over the last ten years of study, without whom this thesis manuscript would never have existed. This thesis would not have been the same without those I worked alongside during the last three years. Admittedly, these few lines are rather meagre, but they carry all my sincerity.
I would first like to express my deepest thanks to Anthony Wachs and Matthieu Rolland, who guided me throughout this adventure. At times the road was long, trying and steep, but reaching the destination always makes one forget the difficulties. To Anthony, the boss, for his patience and for the passion he knows how to pass on to his students. His support was unconditional, and he taught me to push back my limits. Today I am very proud to say that I do not regret having chosen to do this thesis with him. To Matthieu, for his patience, his well-considered advice and above all his help, so very precious, in bringing this thesis to completion. Thank you for introducing me to the world of industry; your advice will stay with me forever in my practice in my future career, well, as long as I do not forget some of it. Gentlemen, your supervision was, for me, what I had dreamed of having for a thesis.
I would also like to thank Niels Deen and Christoph Müller for having accepted to review my thesis work and for helping me improve my knowledge of the field. My thanks also go to the jury members Farhang Radjaï, Jean-Yves Delenne and Carole Delenne, who honoured me with their presence.
I must certainly not forget the supervision of Abdelkader Hammouti during the last year of my thesis, which was an invaluable help to me. I also thank Jean-Yves Delenne for all the discussions we were able to have during my thesis; they helped improve my understanding of the mechanics of granular media.
I also thank Véronique Henriot for welcoming me into the fluid mechanics department of IFP Energies nouvelles, as well as Thierry Bécue for allowing me to stay on after my mid-thesis defense.
I particularly thank all the members of the PeliGRIFF team, namely Kad, the new boss, for his advice, his very precious help that goes beyond the scope of work and above all his friendship (Kad has been like a big brother), Guillaume, Jean-Lou for the last large computations, Florian, Amir and Mostafa. I would like to thank Amir, my first office mate, for putting up with me, especially with all the kinds of music he could hear through my headphones, and for the many times he offered me his help, particularly beyond work. Thanks to Flo, who joined us in 2014, who certainly came up with the idea of offering me an off-road 4x4 outing as a gift. And Mostafa, our youngest, who taught us plenty of expressions that often had us doubled up with laughter. The one that struck me most is: "the end of a thesis is like a red light: you do not stop at the last moment, you prepare to stop from a good distance away".
Here, I would like to thank Rim for her good mood wherever we go, Haïfa, Ferdaous, Yoldes (like a mother) and Mafalda for all those outings and meals that helped me disconnect, from time to time, from my thesis work.
I also thank the wonderful fellow PhD students, members of the ADIFP, whom I had the chance to meet.
I also thank all the members of the applied mechanics division of IFPEN for letting me experience such a good working atmosphere. Manu, how could I forget the sessions on the banks of the Rhône, thanks for everything, chief! Arnaud, Sophie, Alice, Alex for the carpooling, Fabien, Fabrice, Martin, Ian, Michael, Vincent, Didier, Timothée, Christian, the two Philippes (Philippe, thank you for all those African dishes that reminded me so much of my country), Jean-Pierre, Francis the handsome one, Fred, Guillaume, Véro, Cyril for the sport and the scientific discussions, Malika, Myriam, and Pierre-Antoine, the new PhD student and former intern, for having shared our office. I would also like to thank Françoise for her help and her availability for all the administrative tasks.
My thanks also go to all the people I was able to meet in Madagascar for having made my studies possible. In chronological order, Ridha, for giving me the possibility to work for him so that I could save up to realize my dream. I would like to express my deep and everlasting gratitude to the Razafintsalama & Lugan family for giving me the opportunity to pursue my studies in France; without you, all of this would have remained nothing but a dream, and words will never be enough to thank you. I also thank Rina, Randza and especially Andalinda for their part in making this dream come true. I would also like to thank the friends who became somewhat like a family in a country very far from mine. They made my stay in France, especially in Toulouse, a pleasant one, namely Anicet, Mahery, Zo, Yves, Mario, Andalinda, Randza and Lalaina.
Je tiens à remercier également ma famille (mes parents, mes frères (Aina, Mamy) et ma soeur (Enoka)) qui m'a été d'un soutient infaillible et incontournable. Cette famille qui m'a donné le courage de me lever quand j'étais à terre, cette famille qui me donne et me donnera toujours cette rage de vaincre. On est parti de très très loin mais je vous ai dis que j'allais y arriver et maintenant j'y suis. Je tiens à remercier aussi mes meilleurs amis de Madagascar Mihaja, Tantely, Jacky, Nirina et Sitraka qui m'ont toujours soutenu malgré la distance et qui m'ont toujours porté dans leurs coeurs.
I will conclude by wholeheartedly thanking Betelhem for these years of happiness spent by my side and for her unconditional support throughout this thesis. Words will always fail me to express my gratitude to you.

Abstract

Non-convex granular media are involved in many industrial processes such as particle calcination/drying in rotating drums or solid catalyst particles in chemical reactors. In the case of optimizing the shape of catalysts, the experimental discrimination of new shapes based on packing density and pressure drop has proved difficult due to the limited control of the size distribution and of the loading procedure. There is therefore a strong interest in developing numerical tools to predict the dynamics of granular media made of particles of arbitrary shape and to simulate the flow of a fluid (either liquid or gas) around these particles. Non-convex particles are even more challenging than convex particles due to the potential multiplicity of contact points between two solid bodies. In this work, we implement new numerical strategies in our home-made, high-fidelity parallel numerical tools: Grains3D for the granular dynamics of solid particles and PeliGRIFF for reactive fluid/solid flows. The first part of this work consists in extending the modelling capabilities of Grains3D from convex to non-convex particles, based on the decomposition of a non-convex shape into a set of convex particles. We validate our numerical model against existing analytical solutions and experimental data on a rotating drum filled with 2D cross-shaped particles. We also use Grains3D to study the loading of semi-periodic small-size reactors with trilobic and quadralobic particles. The second part of this work consists in extending the modelling capabilities of PeliGRIFF to handle poly-lobed (and hence non-convex) particles. Our Particle Resolved Simulation (PRS) method is based on a Distributed Lagrange Multiplier / Fictitious Domain (DLM/FD) formulation combined with a Finite Volume / Staggered Grid (FV/SG) discretization scheme. Due to the lack of analytical solutions and experimental data, we assess the accuracy of our PRS method by examining the space convergence of the computed solution in assorted flow configurations, such as the flow through a periodic array of poly-lobed particles and the flow in a small-size packed bed reactor. Our simulation results are overall consistent with previous experimental work.
To my family, to Betelhem.
Keywords: Non-convex Particles, Discrete Element Method, Granular Mechanics, Direct Numerical Simulation, Rotating Drums, Fixed Beds, Porous Media, High Performance Computing

Résumé

This thesis is a numerical study of the fluid-particle flows encountered in industry. It aims at understanding the phenomena taking place in rotating drums and fixed bed reactors in the presence of non-convex particles, whose shape strongly influences the dynamics of these media. To this end, we rely on the parallel numerical platforms Grains3D for granular dynamics and PeliGRIFF for multiphase flows. In the first part of this thesis, we develop a new numerical strategy, implemented in Grains3D, to handle particles of arbitrarily non-convex shape. It consists in decomposing a non-convex shape into several arbitrary convex shapes and is called the "glued-convex" method. The model is successfully validated against theoretical and experimental results on rotating drums filled with cross-shaped particles. We also use the model to simulate the loading of fixed bed reactors, and correlation laws for the void fraction are derived from our numerical results. In this work, we also assess the parallel performance of our tools on large-scale numerical simulations of various systems of convex particles. The second part of this thesis is devoted to extending the PeliGRIFF solver to account for the presence of multi-lobed (non-convex) particles in single-phase flows. A Direct Numerical Simulation approach, based on a Distributed Lagrange Multiplier / Fictitious Domain (DLM/FD) formulation, is adopted to resolve the flow around the particles. A series of space convergence studies is carried out for various configurations and flow regimes. Finally, these tools are used to simulate flows through fixed beds of multi-lobed particles in order to study the influence of particle shape on the hydrodynamics of these beds. The results are consistent with the experimental data available in the literature.

"A body is liquid when it is divided into several small parts that move separately from one another in several different ways, and it is hard when all its parts touch one another, without acting to move away from one another." Descartes (1852)

1 Introduction

The idea of Descartes (1852) can be extended to granular media: a granular medium may be seen as a particular state of matter, usually placed between liquid and solid. It behaves like a liquid because it flows, can fill a container and can take its shape; unlike liquids, however, a non-horizontal free surface can be stable. It also behaves like a solid, since it can resist compression and, to some extent, shear (deviatoric) stress. However, a solid can resist traction, whereas a granular medium cannot (Brown and Richards). Since a granular medium is a collection of particles, it is essential to introduce the concept and definition of particle size classification, particle shape, roughness, etc. (Tab. 1.1).
The sorted categories of particles are encountered in many applications such as civil engineering, food processing, pharmaceutics, foundry, geophysics, astrophysics, oil and gas, energy, etc. Thus, each field of application has its own specific vocabulary for the classification of shape and size.
In the field of civil engineering, the cement industry appears to be one of the largest users of granular materials: cement is obtained by mixing limestone (about 80%) and clay (about 20%) at high temperature. The concrete manufacturing industry also plays a significant role in terms of granular materials usage. For instance, Lafarge, a French multinational company and world leader in the production of cement, construction aggregates and concrete, has 166 plants in the world and a capacity of 225 Mt/year. After water, granular materials are the second most used resource on Earth (Duran). One of the main issues encountered in the field of food processing is the storage and discharge of containers. Fig. 1.4 illustrates the particular problem of segregation in the discharge of containers. Generally, segregation occurs when a flowing granular medium made of various particle sizes is disturbed, leading to a rearrangement of particles; it often appears when a container is vibrated during a pouring or discharge procedure. Granular materials are also found in nature: sand on the beach, in the desert (10% of the Earth's surface), in rivers, on continental shelves and abyssal plains, on hills, etc. Various phenomena are related to the presence of sand, for instance the displacement of sand dunes in the desert, river bed erosion, submarine avalanches, etc.
Nature can put on display dreadful and devastating phenomena such as snow avalanches (Fig. 1.5) and landslides (Fig. 1.6).
Figure 1.5 -Typical powder snow avalanche.
Figure 1.6 -Landslide burying a six-lane motorway in Taiwan.
Technically, an avalanche is an amount of snow sliding down a mountainside, while a landslide is the movement of rock, shallow debris or earth down a slope. In particular, the powder snow avalanche (Fig. 1.5) is known as an extremely violent type of avalanche: the typical mass of an avalanche can easily exceed 10 Gt and its velocity can reach 300 km/h. This type of avalanche carries a large amount of snow grains in the surrounding turbulent fluid. The phenomenon is quite similar to a dust storm in arid and semi-arid regions (Fig. 1.7), where particles are suspended in the fluid. Another example is pyroclastic flows, also known as pyroclastic density currents, which come from volcanic eruptions (Fig. 1.8) and in which hot gas at about 1000 °C is mixed with rocks, with a current velocity of up to 700 km/h. The boulders moving in pyroclastic flows have a very high kinetic energy, so that they can flatten trees and destroy a whole building in their path; the hot gases are extremely lethal since they can spontaneously incinerate living organisms. Granular media are also found in the field of astrophysics. For example, the rings of Saturn (Fig. 1.9) are a massive collection of granular materials that endlessly collide while rotating around the planet. Another example is the granular material found on Mars, investigated during Mars exploration by NASA's rover Curiosity: in Fig. 1.10, the rover cuts a wheel scuff mark into a wind-formed ripple at the "Rocknest" site to examine the particle-size distribution of the material forming the ripple. All of these phenomena involving granular media remain challenging to describe, especially at very large scale, where the overall dynamics is controlled by the scale of an individual particle. Hence, scientists and engineers set up small-scale laboratory experiments in order to gain insight into the physics involved in granular dynamics. In addition, numerical simulations play an important role, as enhanced physical models implemented in modern parallel codes lead to increasingly accurate numerical models able to examine large-scale granular flows.
2 Rotating drums
Granular media flow is of utmost interest in many other industries. Rotating furnaces are widely used in the treatment of solids: drying, torrefaction, pyrolysis, calcination, impregnation, chemical treatment, etc. In all these cases, it is highly preferable that the granular medium be mixed, so as to control the residence time and to prevent the solid from staying too long near or far from the walls (or the injection points). The rotating drum is an experimental device that is widely used to study the dynamics of granular media: the reproducibility of experiments is quite satisfactory and the system is continuously fed. One of the advantages of the set-up is that experiments can be reproduced within a short period of time; hence, it offers the possibility of performing a large number of experiments over many flow regimes. The rotating drum is also often chosen to study environmental flows such as pyroclastic flows or avalanches. Granular flow regimes are known to be impacted by particle shape. For this purpose, many authors have studied the influence of particle shape on the dynamics of granular media, among others Favier, Höhner, Bernard and Lu; however, these numerical simulations were performed on a limited number of particle shapes. Many flow regimes can appear as a function of the rotation rate. Mellmann proposed mathematical models to predict the transitions between the different forms of transverse motion of a free-flowing bed material in a rotating drum. These regimes are widely referred to in the literature and are summarized in Fig. 1.11.
The Froude number Fr is usually the key factor in the characterization of the regime transitions in rotating drums. Nonetheless, this dimensionless number is often modified to account for the height of the bed of granular media, the particle-diameter-to-drum-diameter aspect ratio (d_p/D_d) or the material properties of the particles and of the drum.
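As an illustration, the sketch below evaluates the usual drum Froude number Fr = ω²R/g from the rotation speed and the drum diameter. This is only a minimal example: the function and variable names are assumptions of this sketch, and the regime boundaries themselves (which depend on filling degree, d_p/D_d and wall properties) are deliberately not hard-coded.

```python
import numpy as np

def froude_number(omega_rpm, drum_diameter):
    """Froude number Fr = omega^2 * R / g of a rotating drum.

    omega_rpm     : rotation speed in revolutions per minute
    drum_diameter : drum diameter D_d in metres
    """
    omega = 2.0 * np.pi * omega_rpm / 60.0   # angular velocity [rad/s]
    radius = 0.5 * drum_diameter
    return omega ** 2 * radius / 9.81

# Example: a 0.3 m diameter drum rotating at 30 rpm
print(froude_number(30.0, 0.3))   # ~0.15, to be compared with tabulated transition values
```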
3 Heterogeneous catalysis in fixed bed reactors
3.1 Industrial context
Catalytic reactions and reactors have numerous applications such as the production of bulk chemicals, petroleum refining, fine chemicals and pharmaceutics, biomass conversion, etc. Most catalytic refining and petrochemical reactions are operated in fixed bed reactors. In these reactors, catalyst pellets are randomly stacked in a large cylindrical vessel and the reactants, usually gas and liquid, flow through the bed to react inside the catalyst pellets. Catalyst pellets are designed to be porous so that the reacting fluid can penetrate the particle and reach the reactive phase (noble metals, metal sulphides, etc.) coated onto it. The main interest of heterogeneous catalysis is that the surface area available for reaction is very large (typically 20-200 m²/g per pellet). Catalyst particles are typically 0.2 to 5 mm in size and can be spherical, cylindrical or have more complex shapes (Fig. 1.12 and 1.13). The catalyst shape is chosen in order to optimize the reactor performance.
Performance, from the point of view of the refiner, is a compromise between catalyst lifetime and cost, reactor yield, mechanical strength and operating costs. A higher catalyst activity is generally preferable as it allows operation either at lower temperature or in more severe conditions (higher flow rate or a more difficult feedstock). A better activity can be achieved by increasing the amount of active phase, which generally results in a more expensive catalyst. In the case of mass transfer limitations, it can be interesting to increase the pellet surface-to-volume ratio: the higher external area ensures a better accessibility to the inner volume of the pellet. A higher surface-to-volume ratio can be achieved by reducing the pellet size or changing the pellet shape. The pressure drop in the fixed bed should be minimal to reduce the gas compression costs, especially on the hydrogen feed; this is achieved using large pellets and high-voidage packings. Catalyst lifetime is limited by several mechanisms: bed plugging, catalyst leaching (part of the active phase is carried out with the products), catalyst ageing (the active phase changes in time and becomes less active), catalyst coking (the formation of deposits in the particle reduces access to the active sites), etc. Changing the catalyst shape is a way to manage bed plugging. Mechanical strength depends on the pellet support material, its inner porosity and of course on the pellet shape. A low mechanical strength leads to a higher risk of pellet breakage, which results in a high pressure drop; if fines are produced during pellet breakage, they can even plug the bed. In summary, changing the shape is a convenient way to optimize catalyst performance.
The catalyst production method has an impact on its cost. Extrusion of the pellets (Fig. 1.13b) is quite cheap and allows the shape to be modified by changing the die. Therefore, an important effort is dedicated to extrudate shapes and consists in finding the most optimised ones. From a chemical engineering perspective, optimizing a catalyst shape means finding a shape that minimizes the pressure drop while maximizing the chemical conversion rate of the catalyst.
3.2 Shape and apparent catalytic activity
Inside the particle, mass transfer limitations may prevent all catalytic sites from exhibiting their full performance: if reactant diffusion is slow compared to reactant consumption, the reactant concentration at the centre may be significantly lower than at the catalyst pellet surface. Using the classical Thiele approach (Thiele), chemical engineers can estimate the loss of activity due to mass transfer limitation. The catalyst efficiency is defined as the ratio of the activity of the particle to the activity the particle would have if the concentration were uniform. For a reaction of order n, this can be written:

\eta = \frac{\int_V K_i\, C^n \,\mathrm{d}v}{\int_V K_i\, C_0^n \,\mathrm{d}v} \qquad (1.1)

The analytical solutions of this equation are based on a dimensionless number known as the Thiele modulus. It compares the consumption by the reaction and the diffusion phenomena: if it is larger than 1, the reaction is mass transfer limited.

\Phi_L = L_p \left( \frac{K_i\, C^{n-1}}{D_{eff}} \right)^{0.5} \qquad (1.2)

Φ_L denotes the Thiele modulus, L_p a characteristic particle dimension, K_i the intrinsic reaction rate constant, C the concentration of reactant, n the reaction order and D_eff the effective diffusivity.
Exact derivations of the efficiency as a function of the Thiele modulus exist for the semi-infinite plate, the sphere and the infinite cylinder (Fig. 1.14).
For example:
(i) for the plate: \eta = \tanh(\Phi_L)/\Phi_L
(ii) for the sphere: \eta = (3\Phi_L \coth 3\Phi_L - 1)/(3\Phi_L^2)
(iii) for the cylinder: \eta = I_1(2\Phi_L)/\left(\Phi_L\, I_0(2\Phi_L)\right)
where I_n(x) is the modified Bessel function of order n. For these derivations, the Thiele modulus does not depend on shape if it is rewritten using L_p = V_p/S_p, as proposed by Aris.
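To make these expressions concrete, here is a small Python sketch (an illustration only; the function names are mine) that evaluates the three efficiency curves above as a function of the Thiele modulus, using scipy for the modified Bessel functions.

```python
import numpy as np
from scipy.special import iv   # modified Bessel functions I_n

def eta_plate(phi):
    """Semi-infinite plate: tanh(phi) / phi."""
    return np.tanh(phi) / phi

def eta_sphere(phi):
    """Sphere: (3*phi*coth(3*phi) - 1) / (3*phi^2)."""
    return (3.0 * phi / np.tanh(3.0 * phi) - 1.0) / (3.0 * phi ** 2)

def eta_cylinder(phi):
    """Infinite cylinder: I1(2*phi) / (phi * I0(2*phi))."""
    return iv(1, 2.0 * phi) / (phi * iv(0, 2.0 * phi))

phi = np.array([0.1, 0.3, 1.0, 3.0, 10.0])   # Thiele modulus based on L_p = V_p / S_p
for f in (eta_plate, eta_sphere, eta_cylinder):
    print(f.__name__, np.round(f(phi), 3))
```

All three curves tend to 1 at small Φ_L and to roughly 1/Φ_L at large Φ_L, which is why the shapes differ only by a few percent at Φ_L ∼ 1.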
For a given Thiele modulus, it may appear that shape has little effect on the efficiency (Fig. 1.14). Nevertheless, for Φ_L ∼ 1, which is a frequent case, changing the shape can improve the efficiency by a few percent, which is significant for industrial purposes. In fact, shape optimisation is mostly about changing the Thiele modulus by changing the characteristic dimension of the particle L_p.
In order to improve the efficiency, it is interesting to lower the Thiele modulus, hence to decrease L_p = V_p/S_p, i.e., to increase the surface-to-volume ratio of the pellet. This leads to the development of poly-lobed extrudates.
3.3 Pressure drop and void fraction
Before industrialising a new catalyst shape, it should be known how the shape will affect the pressure drop and the amount of catalyst that can be loaded in a reactor.
A large number of correlations has been proposed in the literature, based either on empirical data or on numerical models. Among others, Cooper proposed a model based on the correlations of Midoux. The model is written in the following form:

\Delta P_{LG} = f_1(X) \cdot \Delta P_L = f_2(X) \cdot \Delta P_G \qquad (1.3)

where X = \Delta P_G / \Delta P_L and \Delta P_{LG} denotes the two-phase pressure drop per unit length. \Delta P_G and \Delta P_L denote respectively the pressure drop of the gas and of the liquid if each were assumed to flow alone. If the pressure drop of a single-phase flow is known, then the gas-liquid pressure drop can be computed with sufficient accuracy. Being able to predict the single-phase pressure drop is thus sufficient in the context of shape optimization.
The Ergun correlation for single-phase pressure drop in packed beds (Ergun and Orning, 1949; Ergun, 1952) is widely used in the chemical sector. It predicts the pressure drop through a packed bed as the sum of a viscous term (friction on the particle surface) and an inertial term (changes in direction, expansions, contractions).

\frac{\Delta P}{H} = 150\, \frac{\mu\, (1-\varepsilon)^2}{\varepsilon^3}\, \frac{u}{d_p^2} + 1.75\, \frac{\rho_f\, (1-\varepsilon)}{\varepsilon^3}\, \frac{u^2}{d_p} \qquad (1.4)

where ε, μ, u, d_p, H and ρ_f denote respectively the bed void fraction, the fluid dynamic viscosity, the superficial fluid velocity, the particle diameter, the height of the bed and the fluid density. The numerical constants (150 and 1.75) are fitted to match experimental data points and depend on the particle shape.
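A direct transcription of Eq. (1.4) is sketched below. It is an illustrative helper only (not part of Grains3D or PeliGRIFF), and the argument names are assumptions of this example.

```python
def ergun_pressure_drop(u, d_p, eps, mu, rho_f, height):
    """Pressure drop [Pa] over a packed bed of height H from Eq. (1.4).

    u     : superficial fluid velocity [m/s]      d_p   : particle diameter [m]
    eps   : bed void fraction [-]                 mu    : dynamic viscosity [Pa.s]
    rho_f : fluid density [kg/m^3]                height: bed height [m]
    """
    viscous = 150.0 * mu * (1.0 - eps) ** 2 / eps ** 3 * u / d_p ** 2
    inertial = 1.75 * rho_f * (1.0 - eps) / eps ** 3 * u ** 2 / d_p
    return (viscous + inertial) * height

# Example: air flowing at 0.5 m/s through 1 m of 3 mm particles at 40% voidage
print(ergun_pressure_drop(u=0.5, d_p=3e-3, eps=0.4, mu=1.8e-5, rho_f=1.2, height=1.0))
```

The ε³ factor in both denominators is what makes the prediction so sensitive to the void fraction, as discussed next.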
The correlation of Eq. 1.4 exhibits a very strong dependency on the void fraction. So far, there is no way to analytically predict the void fraction of a packed bed for an arbitrary (new) particle shape. Experiments are necessary and they are not so easy to perform. A first problem is that particles have random dimensions: for extrudates, the diameter is almost constant but the length can vary a lot in an uncontrolled manner, and variations of the length distribution may influence the experimental results. A second issue is that a good accuracy on the void fraction is required to be able to discriminate shapes; reaching a high accuracy requires the use of large vessels and the repetition of experiments, which is seldom performed on prototype shapes produced in small amounts. A third issue is that the bed void fraction depends on the loading procedure, and it is very likely that some procedures designed for a specific shape lead to very different results on others (for example, cylinders subjected to vibration tend to align vertically, which is of course not observed with spheres). Thus, void fraction measurements in these beds are quite time consuming.
The pressure drop correlation uses a "particle diameter", whose definition is not straightforward for non-spherical particles. Several approaches have been suggested that try to estimate an "equivalent diameter" based on shape factors (Cooper):
d_e = \frac{1}{\phi_s}\, \frac{6\, V_p}{S_p} \qquad (1.5)

where φ_s is the shape factor (surface area of the sphere of equal volume divided by the surface area of the particle), and V_p and S_p denote respectively the volume and the surface of the particle. Interestingly, this expression resembles the characteristic length recommended by Aris to compute the Thiele modulus.
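The sketch below evaluates the shape factor and Eq. (1.5) for a cylindrical extrudate; names and the example dimensions are assumptions of this illustration. Note that, with φ_s defined exactly as above, Eq. (1.5) algebraically reduces to the diameter of the equal-volume sphere, which the code makes easy to verify.

```python
import numpy as np

def sphericity(v_p, s_p):
    """Shape factor phi_s: surface of the sphere of equal volume / surface of the particle."""
    d_eq = (6.0 * v_p / np.pi) ** (1.0 / 3.0)   # equal-volume sphere diameter
    return np.pi * d_eq ** 2 / s_p

def equivalent_diameter(v_p, s_p):
    """Eq. (1.5): d_e = (1/phi_s) * 6 * V_p / S_p."""
    return 6.0 * v_p / (s_p * sphericity(v_p, s_p))

# Example: a cylindrical extrudate of diameter 1.6 mm and length 5 mm
d, length = 1.6e-3, 5.0e-3
v_p = np.pi * d ** 2 / 4.0 * length
s_p = np.pi * d * length + 2.0 * np.pi * d ** 2 / 4.0
print(sphericity(v_p, s_p), equivalent_diameter(v_p, s_p))
```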
It can be seen in Fig. 1.15 that the pressure drop is quite dependent on the particle shape and on the volume-to-surface ratio. So far, correlations to estimate the pressure drop of new particle shapes fail to be predictive enough, due to a lack of knowledge of the void fraction as well as scarce and scattered experimental data (Nemec).

3.4 Summary on catalyst shape optimization: need for predictive tools

Changing particle shape can be quite interesting to increase particle efficiency through an increase of the surface-to-volume ratio. Efficiency-wise, shape selection can be performed using simulation tools. On the other hand, the pressure drop estimation requires experiments that are time consuming and ill-adapted to screening a large number of candidate shapes. New numerical tools are therefore welcome to ease particle shape evaluation.
4 Scope of the thesis
This Ph.D. thesis is a multi-disciplinary work carried out in the framework of a collaboration between two departments at IFP Energies nouvelles. The overall objective of this work is twofold:
• to develop numerical tools:
- extension of the modelling capabilities of Grains3D (a massively parallel Discrete Element Method code for granular dynamics) to treat non-convex particles, based on a decomposition of a non-convex particle into a set of convex ones;
- extension of the modelling capabilities of PeliGRIFF (a massively parallel Direct Numerical Simulation code) to handle non-convex particles by coupling the dispersed granular phase with the flow solver using a Distributed Lagrange Multiplier / Fictitious Domain (DLM/FD) formulation;
• to use these enhanced tools to improve the physical comprehension of:
- silo discharge,
- dam breaking,
- fluidization,
- 2D- and 3D-cross particles in a rotating drum,
- the effect of catalyst shape on fixed bed void fraction,
- the effect of catalyst shape on pressure drop.
This work has been or will be presented for publication in 4 papers that are used as the backbone of this thesis manuscript, which is organized as follows:
• Chapter 2: Granular flow simulation: a literature review
• Chapter 3: Non-convex granular media modelling with Grains3D (paper 1)
• Chapter 4: Optimizing particle shape in fixed beds: simulation of void fraction with poly-lobed particles (paper 2)
• Chapter 5: Grains3D: a massively parallel 3D DEM code (paper 3)
Summary
This chapter introduces the scientific and technical contexts of granular media and their various applications, in particular rotating drums and fixed bed reactors. For the first application, particular attention is paid to the dynamics of granular media in rotating drums, with the aim of studying the impact of particle shape on it. For the second application, taking new catalyst pellet shapes into account makes it possible to increase their efficiency by increasing the surface-to-volume ratio. Thanks to numerical simulations, several particle shapes can then be tested and the pressure drops through the catalyst beds formed by the selected shapes can be computed.
The main objectives of this thesis are organized along two lines:
• development of numerical tools:
- extension of the Grains3D code (a massively parallel Discrete Element Method code for granular dynamics) to handle non-convex particle shapes, the model being based on the decomposition of the non-convex shape into several arbitrary convex shapes;
- extension of the Direct Numerical Simulation module of the PeliGRIFF code for the coupling between the dispersed phase (non-convex particles) and the Navier-Stokes solver, using a Distributed Lagrange Multiplier / Fictitious Domain (DLM/FD) formulation;
• use of the implemented models for physical studies, such as:
- silo discharge,
- collapse of a column of particles,
- fluidization,
- the dynamics of cross-shaped particles in rotating drums,
- the effect of catalyst shape on the void fraction in fixed beds,
- the effect of catalyst shape on the pressure drop through fixed beds.
This thesis work has led to four papers, submitted or to be submitted, which serve as the backbone of this manuscript:
• Chapter 2: Granular flow simulation: a literature review
2 Granular flow simulation: a literature review

1 Laboratory-scale granular flows
Numerical simulations are meaningless without experimental validation. Such validations allow scientists to gain confidence in their numerical tools; the numerical models can then be used to produce more accurate predictions. Many dry granular flow configurations can be studied at the laboratory scale, and the configurations presented in Fig. 2.1 are among the most studied ones. The shear cell configuration (Fig. 2.1a) is a classical case where an imposed strain rate, in the form of a relative motion, is applied to a collection of particles between two walls, which can be either those of coaxial cylinders (Miller; Schöllmann) or those of parallel planes (Babic; Aharonov). This type of configuration is useful to investigate the effect of a continuous shear stress on granular materials. Fig. 2.1b shows a gravity-driven flow confined between two vertical planes or in a cylinder (Nedderman; Denniston), controlled by a horizontal plane or disk with a vertical, steady and uniform motion. Since hopper discharge is a matter of interest in many fields, this configuration offers the opportunity to gain a better comprehension of phenomena involved in industrial facilities, e.g. mining. Flows on inclined planes (Hanes; Silbert), shown in Fig. 2.1c, are common for the study of geophysical phenomena such as landslides or avalanches; this experimental set-up gives a representation of how granular materials are accelerated by an inclined surface (e.g. down a hill). Fig. 2.1d exhibits a flow on a pile (Khakhar; Andreotti), where the slope is set by the flow rate, which is the unique control parameter of the system.
Hypothesis 2.1 In the following sections, the study of a granular medium is carried out under the following assumptions:
• attraction forces are neglected (e.g. electrostatic, capillary, van der Waals, etc.);
• particles are most of the time in contact, so that a packing of granular material can be considered as a porous medium.
2 Granular media modelling
Although far from being fully understood, a granular medium is simply a system of a large number of particles of various shapes, sizes and materials (Umbanhowar). The motion of the system can be described by the classical Newton's laws of motion, which are the foundation of classical mechanics. The nature of the contacts depends on the type of materials and on the geometrical properties of the particles, and defines the behaviour of the granular medium, which can be simulated by various methods.
2.1 Hard sphere approach
Elastic collision essentially governs the hard sphere model. The state of the particles after a collision is described by the conservation of (translational and angular) momentum. The collision reduces to the interpretation of the total kinetic energy, which is converted into the potential energy associated with a repulsive force between the two bodies and then converted back into kinetic energy. The following hypotheses are adopted in this approach:
• contact occurs at a single point only;
• collisions are supposed to be binary and quasi-instantaneous;
• multiple collisions are defined as a succession of binary collisions.
During a collision, the energy is conserved in the elastic deformation associated with the normal and tangential displacements of the contact point, and then dissipated in these directions.
Before a contact, for given velocities, only three coefficients are needed to evaluate the post-collisional velocities (Herrmann):
• the coefficient of normal restitution, which defines the incomplete restitution of the normal component of the relative velocity;
• the coefficient of friction, which relates the tangential force to the normal force (Coulomb's law);
• the coefficient of maximum tangential restitution, which limits the restitution of the tangential velocity of the contact point.
A minimal sketch of such a binary collision update is given below.
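The following Python sketch illustrates the hard-sphere update of the translational velocities of two colliding spheres. Only the normal restitution coefficient is used; tangential restitution and Coulomb friction (the two other coefficients above) are left out for brevity, and all names are assumptions of this example rather than the notation of any specific code.

```python
import numpy as np

def hard_sphere_collision(x1, x2, v1, v2, m1, m2, e_n):
    """Post-collision translational velocities of two rigid spheres (normal restitution only)."""
    n = (x2 - x1) / np.linalg.norm(x2 - x1)        # unit normal from particle 1 to 2
    v_n = np.dot(v1 - v2, n)                       # normal relative (approach) velocity
    if v_n <= 0.0:                                 # particles separating: no impulse exchanged
        return v1, v2
    j = (1.0 + e_n) * v_n / (1.0 / m1 + 1.0 / m2)  # scalar impulse magnitude
    return v1 - j * n / m1, v2 + j * n / m2

# Head-on impact of two equal spheres with e_n = 0.9
v1p, v2p = hard_sphere_collision(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                                 np.array([1.0, 0.0, 0.0]), np.zeros(3), 1.0, 1.0, 0.9)
print(v1p, v2p)   # [0.05 0 0] and [0.95 0 0]: momentum conserved, relative velocity reversed by e_n
```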
2.2 Soft-particle and Discrete Element Method (DEM)
"Soft-particle" is usually referred to the deformation of the particle during contact. In reality, this method allows a small overlap of particles during the contact. Whilst particles remain geometrically rigid, the deformation is considered in the formulation of force models. The duration of contact is nite and multiple contact may occur simultaneously. Discrete Element Method, sometimes called Distinct Element Method has been developed over the past 30+ years. [START_REF] Cundall | A discrete numerical model for granular assemblies[END_REF] historically designed DEM for industrial process simulations of very small systems. The numerical model dealt with granular assemblies made of discs and spherical particles.
Following this work, numerous authors in different scientific communities became interested in modelling systems of up to 1000 particles in two dimensions using idealised particles. Later on, DEM models have been improved so that complex three-dimensional geometries can be treated. As computing power increased, large-scale simulations started to show an important potential; in their study, Walther and Sbalzarini presented a large-scale computation of 122 million particles using High Performance Computing to simulate a sand avalanche.
Thanks to High Performance Computing, the realism of granular simulations has been drastically improved, and large-scale industrial applications can now be treated, such as oil and gas refining or geophysical flows. With the increase of computing power, researchers are now able to simulate multiphase flow systems, for instance particulate flows, which are systems of particles whose surrounding interstices are filled with fluid. This type of system can be simulated by coupling a Computational Fluid Dynamics code and a Discrete Element Method code (e.g. Tsuji et al.; Wachs).
Most DEM simulations are performed using spherical particles. Nevertheless, real particles have irregular or complex shapes. Spherical particles are usually used because of the ease of their characterization: the radius is all that is needed to describe a sphere. The contact detection is simple, as two spheres are in contact when |G₁G₂| − r₁ − r₂ ≤ 0, where G₁, G₂ and r₁, r₂ denote respectively the centres of gravity and the radii of particle 1 and particle 2. The contact is defined at a single point, whereas for complex particles it can involve several surfaces, lines or points. As a consequence, the mechanical behaviour of the granular material can be modified (Nouguier-Lehon; Szarf; Flemmer; Escudié).
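The sphere-sphere criterion quoted above translates into a one-line test; the sketch below is purely illustrative and the function name is an assumption of this example.

```python
import numpy as np

def spheres_in_contact(g1, g2, r1, r2):
    """Contact criterion |G1G2| - r1 - r2 <= 0 for two spheres."""
    return np.linalg.norm(np.asarray(g2) - np.asarray(g1)) - r1 - r2 <= 0.0

print(spheres_in_contact((0.0, 0.0, 0.0), (0.0, 0.0, 1.5), 1.0, 0.6))   # True (overlap of 0.1)
```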
The accuracy of computations is improved by introducing a variety of contact detection algorithms for various particle shapes.
2.3 Non-Smooth Contact Dynamics (NSCD)
The Non-Smooth Contact Dynamics method, also called Contact Dynamics, originally resulted from the mathematical formulation of non-smooth dynamics developed by Moreau (1994) and Jean (1999). It is also a Discrete Element Method (Radjai and Richefeu) dedicated to the numerical simulation of granular materials. Unlike the traditional DEM soft-sphere model, the NSCD method does not use numerical schemes to resolve the small time and length scales involved in particle-particle interactions: the effects of the small scales are incorporated in contact laws with a non-smooth formulation described at larger scales.
This method has been successfully applied to numerous problems, among others in the works of Radjai, Jing, McNamara and Azéma.
2.4 Hybrid soft and hard sphere collision
Buist et al. introduced the hybrid soft and hard sphere model. It is a novel and efficient approach to compute collisions in particulate flow systems that takes advantage of both the hard sphere and the soft sphere collision models: the hard sphere model is used for binary collisions, whereas the soft sphere model is required for multi-body contacts. The hybrid model is able to discard the numerical integration of the contact for all binary interactions; hence, it allows the use of a larger time step, which decreases the computing time.
2.5 Continuum Mechanics Methods (CMM)
The distinctive feature of this model is that it uses an Eulerian approach for the granular behaviour (Tüzün; Polderman; Jenike; Drescher; Džiugys). The set of continuum equations (continuum mechanics) can be used to describe the motion of a granular medium. In this framework, a granular medium can be described as a viscoplastic "granular fluid", a "granular gas" (Campbell) or a viscoelastic-plastic soil.
The equations of fluid mechanics are involved in this model. If the motion of the granular flow is rapid enough, predicting the system behaviour leads to the solution of a turbulent two-phase flow; in that case, the model becomes less accurate and very complex. Thus, this method is suitable for particular processes only (Barker) and its results can differ from experimental data by an order of magnitude (Džiugys).
2.6 Other approaches
Other approaches exist in the literature, such as:
• the geometrically steepest descent method, studied by Jullien;
• the quasi-static approach (Borja);
• Shinbrot's model, which combines the CMM and DEM models (Umbanhowar).
3 Contact detection
Contact resolution is very important, particularly in the investigation of the evolution of multi-body systems over time. It has a significant number of applications, such as computer graphics and computer animation, especially 3D computer games (Palmer), robotics (Gilbert) and military applications.
For a large number of objects, contact detection is a major computational obstacle. The process of contact detection is divided into two phases: the neighbour search phase and the contact resolution phase. Numerous authors have therefore investigated algorithms for particulate simulations (Iwai; Gilbert; Feng and Owen; King) in order to increase the accuracy of contact detection and decrease its computational cost.
4 Discrete Element Method
Granular dynamics is described in terms of Newton's laws of motion, the physical laws that laid the foundation of classical mechanics (Newton). They are summarized as follows:
Newton's laws of motion
• First law: every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon.
• Second law: the alteration of motion is ever proportional to the motive force impressed, and is made in the direction of the right line in which that force is impressed.
• Third law: to every action there is always opposed an equal reaction; or, the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.
Modelling difficulties arise from the consideration of particle shapes, which can range from a very simple shape, such as a sphere in 3D or a disk in 2D, to very complex shapes. In fact, a continuously increasing number of studies is dedicated to non-spherical particles. In addition, contact detection requires robust and fast algorithms in order to save computing cost. Thanks to High Performance Computing, the Discrete Element Method allows the computation of large systems relevant to industrial applications, such as oil and gas refining processes, which are the main scope of this thesis.
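To make the role of Newton's second law in DEM concrete, here is a generic second-order explicit time integration step for the translational motion of one particle. It is only an illustrative sketch (Grains3D uses its own time-integration scheme), and the function and variable names are assumptions of this example.

```python
import numpy as np

def velocity_verlet_step(x, v, force, mass, dt):
    """One explicit (velocity-Verlet) integration step of Newton's second law
    for the translational motion of a single particle."""
    a = force(x) / mass
    v_half = v + 0.5 * dt * a                          # half kick
    x_new = x + dt * v_half                            # drift
    v_new = v_half + 0.5 * dt * force(x_new) / mass    # second half kick
    return x_new, v_new

# Trivial check: free fall under gravity for 0.1 s
g = np.array([0.0, 0.0, -9.81])
x, v = np.array([0.0, 0.0, 1.0]), np.zeros(3)
for _ in range(100):
    x, v = velocity_verlet_step(x, v, lambda _x: g, 1.0, 1.0e-3)
print(x)   # z ~ 1 - 0.5 * 9.81 * 0.1**2 = 0.951
```

In a full DEM loop, force(x) would gather gravity plus all the contact forces computed from the neighbouring particles at the current configuration.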
4.1 Importance of particle shape
DEM simulations can provide both macroscopic and microscopic measurements in granular media, but the shape representation of particles is still a challenging aspect, and handling non-spherical particle shapes in DEM simulations is not straightforward. Contact detection algorithms are very rarely valid for arbitrary shapes. Many authors have designed advanced strategies to compute contacts between various types of shape, but most of these strategies are only suitable for a single specific shape, such as cubes (Fraige) or ellipsoids (Džiugys). Super-quadrics offer a first level of versatility, as many shapes can be approached by varying the coefficients of the generalised quadric equation (Cleary).
Traditionally, the particle shape is approximated by a sphere in 3D and a disc in 2D. The shape plays a significant role in Discrete Element Method simulations, since neither a sphere nor a disc approximation can always reproduce the real behaviour of granular assemblies. The major differences between real and approximated particle shapes concern the resistance to shear stress and failure, the volume of revolution, a realistic void fraction and the energy partition.
Intuitively, it is easy to figure out how the force is oriented when two circular particles collide: the normal force is directed along the line joining the two centres and no torque is generated. For non-circular particles, if the normal force is not directed towards the centre of mass, a torque is generated (see Fig. 2.2).
Brief review of particle shape in literature
Ellipse/Ellipsoid
One of the simplest representations of a non-spherical shape is an ellipsoid in 3D and an ellipse in 2D. The algebraic and parametric forms of an ellipsoid are expressed as follows:

\left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 + \left(\frac{z}{c}\right)^2 = 1 \qquad (2.1)

x = a \cos\theta \cos\vartheta, \quad y = b \cos\theta \sin\vartheta, \quad z = c \sin\theta \qquad (2.2)

where x, y, z are the coordinates in the fixed-body reference frame, a, b, c are the half-lengths of the principal axes of the shape, and θ ∈ [-π/2; π/2] and ϑ ∈ [-π; π] are the parameters of the parametric representation of the particle. (Figure adapted from Rothenburg and Bathurst; credit: Lu et al., 2015.)
The contact detection algorithm relies on the determination of the intersection points between two ellipses in 2D (Rothenburg and Bathurst) and two ellipsoids in 3D (Ouadfel). The contact resolution procedure in 3D is illustrated in Fig. 2.3.
Super-quadrics
The so-called super-quadric equation allows the representation of both convex and non-convex shapes; it was suggested by Barr and later adopted in the Discrete Element Method by Williams and Pentland. The algebraic and parametric forms are expressed as:

f(x, y, z) = \left[ \left(\frac{x}{a}\right)^{2/\varepsilon_2} + \left(\frac{y}{b}\right)^{2/\varepsilon_2} \right]^{\varepsilon_2/\varepsilon_1} + \left(\frac{z}{c}\right)^{2/\varepsilon_1} - 1 \qquad (2.3)

x = a (\sin\theta)^{\varepsilon_1} (\cos\vartheta)^{\varepsilon_2}, \quad y = b (\sin\theta)^{\varepsilon_1} (\sin\vartheta)^{\varepsilon_2}, \quad z = c (\cos\theta)^{\varepsilon_1} \qquad (2.4)

where a, b, c denote the half-lengths of the principal axes of the shape, and ε_1 and ε_2 are the parameters that control the "blockiness" of the particle: ε_1 controls the blockiness in the cross-sectional planes xOz and yOz, whereas ε_2 controls that in the cross-sectional plane xOy. When ε_1 = ε_2 = 1, the super-quadric equation (Eq. 2.3) reduces to that of the ellipsoid (Eq. 2.1). As before, θ ∈ [-π/2; π/2] and ϑ ∈ [-π; π] are the parameters of the parametric representation of both the super-quadric and the ellipsoid.
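A direct way to read Eq. (2.3) is as an inside/outside indicator; the sketch below evaluates it, with ε₁ = ε₂ = 1 recovering the ellipsoid of Eq. (2.1). It is an illustration only; the absolute values are an assumption added so that the fractional exponents behave for negative coordinates.

```python
import numpy as np

def superquadric(x, y, z, a, b, c, eps1, eps2):
    """Inside/outside function of Eq. (2.3): f < 0 inside, f = 0 on the surface, f > 0 outside."""
    term_xy = (np.abs(x / a) ** (2.0 / eps2) + np.abs(y / b) ** (2.0 / eps2)) ** (eps2 / eps1)
    return term_xy + np.abs(z / c) ** (2.0 / eps1) - 1.0

print(superquadric(1.0, 0.0, 0.0, 1.0, 2.0, 3.0, 1.0, 1.0))   # 0.0: on the ellipsoid (Eq. 2.1)
print(superquadric(0.8, 0.8, 0.8, 1.0, 1.0, 1.0, 0.2, 0.2))   # < 0: inside a cube-like particle
```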
Polygons and Polyhedrons
Since ellipsoidal and super-quadric shapes cannot represent all the particles found in nature and industry, many authors oriented their research towards polygonal and polyhedral shapes (e.g. Hart and Lee). While new shapes are designed, new corresponding contact algorithms are required: polygons in 2D and polyhedra in 3D are shapes that ellipsoids cannot represent and that super-quadrics can only approach asymptotically. The contact detection algorithm is quite straightforward in 2D, as it relies on the edges of the polygons; the computational cost scales with N_i × N_j, where N_i and N_j are the numbers of vertices of the colliding particles i and j.
In 3D, the contact resolution can be very complex. The contact detection algorithm requires some complex combinations of elements such as vertex-vertex, vertex-edge, vertex-face, edge-edge, edge-face and face-face. The computational efficiency of the contact detection algorithms for polygonal/polyhedral particles is improved by introducing the so-called "Common Plane" (CP) algorithm developed by Cundall (1988).
Definition 2.1: A common plane is a plane that, in some sense, bisects the space between the two contacting particles (Cundall, 1988). Nezami et al. demonstrated the uniqueness of the Common Plane for any couple of convex particles and the perpendicularity of the contact normal to the CP. [Figure: candidate common planes between two polygonal particles 1 and 2, namely the plane perpendicular to the segment PQ and the planes parallel to the polygon edges PP_1, PP_2, QQ_1 and QQ_2, where P_1 and P_2 are the vertices of particle 1 adjacent to P and Q_1 and Q_2 those of particle 2 adjacent to Q. Credit: Lu et al. (adapted from Nezami et al.).]
Spherosimplices
The modelling of non-spherical particles using so-called "spherosimplices" has received particular interest over the last decade (e.g. Pournin and Liebling (2005); Alonso-Marroquín and Wang (2009)). A spherosimplex-shaped particle is the combination of a skeleton (e.g. a point, a linear segment, a polygon or a polyhedron) and a disk or a sphere (see Fig. 2.6(a), non-spherical spherosimplices particles; credit: Pournin and Liebling (2005)).
Composite particles made of multiple spheres
Particles composed of multiple spheres are often called "glued spheres" (Fig. 2.7) in the literature, referring to the fact that spherical particles are glued together to build the composite shape. This method is quite popular in the DEM community (e.g. Nolan; Kruggel-Emden).
One of the advantages of this method is its ability to reproduce a given shape, with a loose approximation, by "gluing" many spherical particles together; therefore, a fast and robust contact detection algorithm for spheres can be applied to the particle. Nonetheless, a very large number of spheres has to be glued to reach a high definition of surface smoothness, which increases the computational cost of the method.
A particularity of this method is that, if required, the primary spheres can also overlap with each other. A particle built this way is governed by rigid-body motion, so that the relative positions of its components do not change during collisions. The forces and torques acting on the primary spheres are summed with respect to the centre of mass of the composite particle and are subsequently used to compute its trajectory (Favier).
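The reduction of the component forces to the composite centre of mass mentioned above amounts to a simple sum of forces and of position-cross-force torques; the sketch below illustrates it on a two-sphere dumbbell. Names and example values are assumptions of this illustration.

```python
import numpy as np

def composite_force_torque(centre_of_mass, component_positions, component_forces):
    """Sum of the forces acting on the primary spheres of a composite particle and
    of the resulting torques about the composite centre of mass."""
    total_force = np.zeros(3)
    total_torque = np.zeros(3)
    for x_i, f_i in zip(component_positions, component_forces):
        total_force += f_i
        total_torque += np.cross(x_i - centre_of_mass, f_i)
    return total_force, total_torque

# Two opposite forces at the ends of a dumbbell: zero net force, pure torque
f, t = composite_force_torque(np.zeros(3),
                              [np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])],
                              [np.array([0.0, -1.0, 0.0]), np.array([0.0, 1.0, 0.0])])
print(f, t)   # [0. 0. 0.] [0. 0. 2.]
```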
5 Strategy adopted in this work
Almost any non-convex particle can be decomposed into a set of arbitrary convex particles, yet none of the previous strategies is suitable for the goal of this study. The closest method would be the glued spheres method, were it not for its computational cost; another option would be the use of super-quadrics, but the range of parameters of their equation does not give access to the shapes targeted in this study, and the other methods do not fall within its scope. Based on these observations, we conclude that the best strategy for modelling granular media of non-convex particles, at least in the current state of the granular code Grains3D, is the decomposition of a non-convex particle into a set of arbitrary convex bodies. The model is called "glued convex" and is introduced in the next chapter.
Summary
This chapter gives a detailed literature review on the modelling of granular media made of complex-shaped particles. Various approaches are presented, together with the complexity of detecting contacts between two objects. First, the hard sphere model is presented with its advantages and drawbacks; second, the combination of the soft sphere model and the Discrete Element Method (DEM), which is commonly used in the literature; then, the Non-Smooth Contact Dynamics (NSCD) model and the hybrid soft/hard sphere model; and finally, a few models that are less used than the previous ones, such as the continuum mechanics method, the quasi-static approach and Shinbrot's model.
Contact detection is a problem in its own right because it often depends on the shape under consideration: for some particle shapes the contact can be resolved analytically, whereas other shapes require powerful algorithms. In this chapter, a few common shapes are introduced together with the associated contact resolution methods.
This literature review highlighted that the models existing in the literature are inadequate for the problems addressed in this thesis, hence the proposal of the new model named "glued convex".
Non-convex granular media modelling with Grains3D
This chapter has been submitted for publication in Powder Technology:
A. D. Rakotonirina, A. Wachs, J.-Y. Delenne, F. Radjaï. Grains3D, a flexible DEM approach for particles of arbitrary convex shape - Part III: extension to non-convex particles.
In this paper, the "glued convex" method used to model non-convex particle shapes is presented, together with validation cases. The model is then used to explore the effect of particle shape on packing porosity and on flow regimes in a rotating drum with 2D and 3D crosses.
Abstract
Large-scale simulation using the Discrete Element Method (DEM) is a matter of interest, as it allows us to improve our understanding of the flow dynamics of the granular flows involved in many industrial processes and environmental flows. In industry, it leads to an improved design and an overall optimisation of the corresponding equipment and processes. Most DEM simulations in the literature have been performed using spherical particles; very few studies have dealt with non-spherical particles, and even fewer with non-convex ones. Yet even spherical or convex bodies do not always represent the real shape of certain particles: more complex-shaped particles are found in many industrial applications, e.g., catalytic pellets in chemical reactors, and their shape markedly influences the behaviour of these systems. The aim of this study is to go one step further into the understanding of the flow dynamics of granular media made of non-convex particles. Our strategy is based on decomposing a non-convex-shaped particle into a set of convex bodies, called elementary components. The novel method is called the "glued convex" method, as an extension of the popular "glued spheres" method. At the level of the elementary components of a "glued convex" particle, we employ the same contact detection strategy, based on a Gilbert-Johnson-Keerthi algorithm and a linked-cell spatial sorting that accelerates the resolution of the contact. The new "glued convex" model is implemented as an extension of our in-house high-fidelity code Grains3D, which already supplies accurate solutions for arbitrary convex particles. The extension to non-convex particles is illustrated on the filling of catalytic reactors and on the flow dynamics in a rotating drum.
Introduction
Discrete Element Method was originally designed to handle spherical particles. The method is now able to deal with more complex particle shapes [START_REF] Cundall | Formulation of a three-dimensional distinct element model-Part I. A scheme to detect and represent contacts in a system composed of many polyhedral blocks[END_REF], [START_REF] Hart | Formulation of a three-dimensional distinct element model-Part II. Mechanical calculations for motion and interaction of a system composed of many polyhedral blocks[END_REF], [START_REF] Wachs | Grains3D, a exible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]). Thanks to its conceptual simplicity this method is widely used in granular media modelling. Its computational implementation is very straightforward for spheres but is quite di cult for complex particle shapes. Many approaches have been investigated since the late 80's, among them the works of [START_REF] Cundall | Formulation of a three-dimensional distinct element model-Part I. A scheme to detect and represent contacts in a system composed of many polyhedral blocks[END_REF] and [START_REF] Hart | Formulation of a three-dimensional distinct element model-Part II. Mechanical calculations for motion and interaction of a system composed of many polyhedral blocks[END_REF]. They studied a system composed of polyhedral blocks and used a robust and rapid technique (Common Plane technique) to detect and to categorise contacts between two polyhedral blocks. Later on, many authors worked on the extension of DEM to non-spherical particles. For example, [START_REF] Munjiza | A poly-ellipsoid particle for non-spherical discrete element method[END_REF] constructed a poly-ellipsoid particle by "gluing" ellipsoids together. One of the most famous extensions of DEM is the "glued spheres" model in which a complex shape is approximated by "gluing" spherical particles. For instance, [START_REF] Nolan | Random packing of nonspherical particles[END_REF] used this approximation to study the random close packings of cylindrical-, bean-and nail-shaped particles. They found good agreement between their simulations and experimental data. [START_REF] Song | Contact detection algorithms for DEM simulations of tablet-shaped particles[END_REF] used this approach to study the contact criteria for tablet-at surface and tablettablet contact. At rst sight, this method seems to be well adapted to any shape. Nonetheless, the higher the number of spheres is the less e cient the computation becomes as [START_REF] Song | Contact detection algorithms for DEM simulations of tablet-shaped particles[END_REF]) demonstrated. Li et al. (2004) modelled sphero-disc particles to study the ow behaviour, the arching and discharging in a hopper. Another extension of Discrete Element Method to polygonal shaped particles was suggested by [START_REF] Hart | Formulation of a three-dimensional distinct element model-Part II. Mechanical calculations for motion and interaction of a system composed of many polyhedral blocks[END_REF], [START_REF] Feng | A 2D polygon/polygon contact model: algorithmic aspects[END_REF] and polyhedral shaped particles [START_REF] Fraige | Distinct element modelling of cubic particle packing and ow[END_REF], [START_REF] Lee | A packing algorithm for threedimensional convex particles[END_REF]). 
These new features enabled research groups to address several problems in the field of geophysics ([START_REF] Hentz | Identification and validation of a discrete element model for concrete[END_REF], [START_REF] Jing | Formulation of discontinuous deformation analysis (DDA)-an implicit discrete element model for block systems[END_REF], [START_REF] Camborde | Numerical study of rock and concrete behaviour by discrete element modelling[END_REF]).
Available strategies in the literature to handle complex shapes were already reviewed in detail in [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]. We simply give here again a short overview. [START_REF] Williams | A linear complexity intersection algorithm for discrete element simulation of arbitrary geometries[END_REF] introduced the Discrete Function Representation (DFR) of a particle shape to address contact resolution. The DFR is applicable to convex geometries and to a restricted set of concave geometries. [START_REF] Williams | Superquadrics and modal dynamics for discrete elements in interactive design[END_REF] explored the critical influence of particle shape on granular dynamics and suggested super-quadric particles for geophysical applications. This method allows the design of particles with rounded edges, such as ellipsoids, blocks or tablets, by introducing a continuous function (f(x, y, z) = (x/a)^m + (y/b)^m + (z/c)^m - 1 = 0) that defines the geometry of the object. The weakness of this method lies in the handling of contact detection. In fact, the more the edge angularity increases, the more points are needed to discretise f(x, y, z). Therefore, the computational cost of contact detection increases with edge (or shape) angularity. A probability-based contact algorithm is presented in the work of [START_REF] Jin | Probability-based contact algorithm for non-spherical particles in DEM[END_REF]: contacts between non-spherical particles are translated into those between spherical particles with a probability. Alonso-Marroquín and Wang (2009) presented a method to simulate two-dimensional granular materials with sphero-polygon shaped particles. The particle shape is represented by the classical concept of a Minkowski sum ([START_REF] Bekker | An efficient algorithm to calculate the Minkowski sum of convex 3d polyhedra[END_REF]), which permits the representation of complex shapes without the need to define the object as a composite of spherical or convex particles. Hence, this approach has proven to be much better than the glued spheres method. The modelling of non-spherical particles using so-called "spherosimplices" has received particular interest over the last decade (Alonso-Marroquín and Wang (2009), Pournin and Liebling (2005)). A spherosimplex-shaped particle is a combination of a skeleton (e.g. a point, a linear segment, a polygon or a polyhedron) and a disk or a sphere. Contact resolution is a core component of DEM simulations. A proper contact resolution ensures accurate DEM computed solutions. The Gilbert-Johnson-Keerthi (GJK) algorithm ([START_REF] Bergen | A fast and robust GJK implementation for collision detection of convex objects[END_REF], [START_REF] Gilbert | A fast procedure for computing the distance between complex objects in three-dimensional space[END_REF]) is a good candidate for this particular problem and is well suited for arbitrary convex shaped particles. In this context, the algorithm was first introduced by [START_REF] Petit | Shape effect of grain in a granular flow[END_REF] and later generalized by [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF] to study the effect of non-spherical particle shape in granular flows. The GJK algorithm is an iterative approach to compute the minimal Euclidean distance between two convex objects.
The GJK algorithm reduces the problem of finding the minimal distance between two convex bodies to that of finding the minimal distance between their Minkowski difference and the origin ([START_REF] Gilbert | A fast procedure for computing the distance between complex objects in three-dimensional space[END_REF]).
Beyond the problem of contact detection, modelling difficulties related to the handling of multiple contacts between complex shaped particles also need to be addressed. In the existing literature on this problem, [START_REF] Abbaspour-Fard | Theoretical validation of a multi-sphere, discrete element model suitable for biomaterials handling simulation[END_REF] pointed out the validity of a multi-sphere model for various phenomena such as sliding, dropping and conveying, while [START_REF] Kruggel-Emden | A study on the validity of the multisphere discrete element method[END_REF] studied the macroscopic collision properties of the glued sphere model and compared them to experimental results. In their study, the total contact force of a multi-sphere particle impacting a flat wall is treated by computing the mean of the forces at each contact point. Later on, [START_REF] Höhner | Comparison of the multi-sphere and polyhedral approach to simulate non-spherical particles within the discrete element method: Influence on temporal force evolution for multiple contacts[END_REF] pointed out that this method is not accurate enough. They showed that there is a non-negligible effect of the particle shape approximation (artificial roughness created by gluing spheres) on the temporal force evolution in the normal and tangential directions.
2 Non-Convex Model
The aim of this study is to introduce a novel variant of the Discrete Element Method able to deal with non-convex particle shapes and to use it to simulate the flow dynamics of granular media. The strategy is based on decomposing a non-convex particle, called the composite, into a set of convex bodies, called elementary components. This approach, called "glued convex", is inspired by the glued spheres method introduced by [START_REF] Nolan | Random packing of nonspherical particles[END_REF]. Our glued convex method is implemented in our in-house granular solver Grains3D ([START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]). This enables us to use existing methods, models and algorithms already implemented in Grains3D such as the time integration of the equations of motion, quaternions for body rotation, linked-cell spatial sorting and the Gilbert-Johnson-Keerthi algorithm for collision detection ([START_REF] Gilbert | A fast procedure for computing the distance between complex objects in three-dimensional space[END_REF], [START_REF] Gilbert | Computing the distance between general convex objects in three-dimensional space[END_REF]). In particular, the GJK algorithm is applied to elementary components. A contact between a glued convex particle, i.e., a composite, and another glued convex particle, i.e., another composite, is detected if at least one elementary component of the former contacts one elementary component of the latter.
A two-dimensional illustration is presented in Fig. 3.1.
Figure 3.1 -2D illustration of the decomposition of a non-convex particle into a set of elementary convex components.
2.1 Equations of motion
The dynamics of a granular material made of (non-convex) particles is entirely governed by Newton's laws ([START_REF] Newton | Sir Isaac Newton's mathematical principles of natural philosophy and his system of the world[END_REF]). Assuming that the granular system is made of N particles, the complete set of equations which governs the flow dynamics is:
$$M_i \frac{dU_i}{dt} = F_i \qquad (3.1)$$
$$J_i \frac{d\omega_i}{dt} + \omega_i \wedge J_i \omega_i = M_i \qquad (3.2)$$
$$\frac{dx_i}{dt} = U_i \qquad (3.3)$$
$$\frac{d\theta_i}{dt} = \omega_i \qquad (3.4)$$
where M_i, J_i, x_i and θ_i denote the mass, moment of inertia tensor, position of the centre of mass and angular position of particle i, i ∈ [0, N - 1]. The translational velocity vector U_i and the angular velocity vector ω_i of the centre of mass are involved in the decomposition of the velocity vector as v_i = U_i + ω_i ∧ R_i, where R_i denotes the position vector with respect to the centre of mass of particle i. F_i and M_i stand for the sum of all forces and torques applied on particle i. They are defined as follows:
$$F_i = M_i\, g + \sum_{j=0,\, j \neq i}^{N-1} F_{ij} \qquad (3.5)$$
$$M_i = \sum_{j=0,\, j \neq i}^{N-1} R_j \wedge F_{ij} \qquad (3.6)$$
R j denotes a vector which points from the centre of mass of the particle i to the contact point with particle j. It is assumed that all particles are subjected to gravity and contact forces only.
2.2 Strategy
It is important to present the strategy adopted in the present work, which allows the computation of granular flows made of non-convex particles. Our strategy is based on the following general steps:
• Apply Newton's second law to a non-convex particle
• Compute the translational and angular velocities of its centre of mass
• Compute the position and angular position of its centre of mass
• Derive the positions and velocities of each convex component from those of the composite particle, taking into account their relative positions
• Due to the decomposition of a non-convex body into a set of convex particles, compute the contact forces at the level of the elementary particles

Considering two reference frames R and R′, where R is the space-fixed coordinate system, which does not depend on the particle configuration, and R′ is attached to the particle and fixed at its centre of mass, these steps are summarized in the following set of equations, carried out after the solution of the momentum equations Eq. 3.1 and Eq. 3.2:
• Setting the centre of mass r i of the convex component i according to the reference frame R.
• Evaluating the centre of mass of the non-convex object r_g and deriving the position of component i in the reference frame R′ as r′_i = r_i - r_g
$$M_i = M \cdot M_i^0 \qquad (3.7)$$
where M_i is the rotation matrix of the convex component i, M is that of the composite particle and M_i^0 is the initial rotation matrix of component i.
• Translating component i using a displacement vector d_i defined as:
$$d_i = (M \cdot r'_i) - r'_i \qquad (3.8)$$
• Computing the velocity
$$U_i = U + \omega \wedge (M \cdot r'_i) \qquad (3.9)$$
$$\omega_i = \omega \qquad (3.10)$$
where U and ω denote respectively the translational and rotational velocities of the non-convex particle.
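To make these steps concrete, a minimal Python sketch of the kinematic update of the elementary components is given below. It is only an illustration of Eqs. 3.7-3.10, not the Grains3D implementation; the array names (`r_prime`, `M0`, etc.) are hypothetical.

```python
import numpy as np

def update_components(x_g, M, U, omega, r_prime, M0):
    """Propagate the composite motion to each elementary convex component.

    x_g     : (3,) centre of mass of the composite in the space-fixed frame R
    M       : (3,3) current rotation matrix of the composite
    U, omega: (3,) translational / angular velocity of the composite
    r_prime : (n,3) initial positions r'_i of the components relative to x_g
    M0      : (n,3,3) initial rotation matrices M_i^0 of the components
    """
    centres, rotations, velocities = [], [], []
    for r_i, M0_i in zip(r_prime, M0):
        M_i = M @ M0_i                                   # Eq. 3.7: composed rotation
        d_i = M @ r_i - r_i                              # Eq. 3.8: displacement of component i
        centres.append(x_g + r_i + d_i)                  # i.e. x_g + M @ r_i
        rotations.append(M_i)
        velocities.append(U + np.cross(omega, M @ r_i))  # Eq. 3.9
    # Eq. 3.10: all components share the angular velocity omega of the composite
    return centres, rotations, velocities
```

Only the composite carries dynamic state; the elementary components are slaved to it, which is the essence of the glued convex approach.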
2.3 Mass properties
One of the challenges encountered with a non-convex particle shape is the computation of its mass properties (volume, centre of mass and components of the moment of inertia tensor). In fact, the numerical integration of the volume sums involves the use of Boolean algebra with solids, since our non-convex particles are made of arbitrary convex shaped components which can overlap each other. This requires either relying on an appropriate library such as the Computational Geometry Algorithms Library ([START_REF] Doe | Computational Geometry Algorithms Library[END_REF]) or implementing an algorithm which provides an accurate approximation of the various volume sums corresponding to the particle mass properties. The latter option is used since it offers a good compromise between accuracy and low complexity. Inspired by Monte-Carlo algorithms and the work of Alonso-Marroquín and Wang (2009), we carry out a numerical integration based on a pixelated particle. For the volume approximation, it consists in:
• defining a box which circumscribes the shape (Fig. 3.2),
• uniformly discretising the box in the three directions,
• finding whether the points X_i, the centres of the cells, are inside or outside the shape,
• summing up the volumes of all the cells that are found inside the shape to get the approximated volume.

The centre of mass is then defined as follows:
$$X_g = \frac{1}{V} \sum_{i=0}^{N} X_i\, v_i \qquad (3.11)$$
where X_g denotes the position vector of the centre of mass, V is the approximated volume of the non-convex object, X_i is the centre of cell i and v_i its volume. Using the same approximation method, it is easy to compute the components of the moment of inertia tensor, which can be expressed as follows:
$$J_{k,l} = \sum_{i=0}^{N} f_{kl}(X_i)\, v_i \quad \text{for } k, l = 1, 2, 3 \qquad (3.12)$$
Fig. 3.3 clearly shows the grid convergence of the algorithm applied to the calculation of the volume of a sphere, a cylinder and two overlapping cylinders. The approximated volume and the relative error with respect to the exact volume are plotted as a function of the number of discretisation points per direction. The error decreases in a relatively monotone way as the number of discretisation points per direction increases.
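As an illustration, the pixelation procedure sketched above can be written in a few lines of Python. The particle geometry is hidden behind a hypothetical `inside(points)` predicate (for a glued convex particle this would test the union of the elementary convex components), and a uniform density `rho` is assumed; this is a sketch of the idea, not the Grains3D code.

```python
import numpy as np

def mass_properties(inside, box_min, box_max, n=200, rho=1.0):
    """Approximate volume, centre of mass and inertia tensor by pixelation.

    inside : callable mapping an (m, 3) array of points to a boolean mask
             (True if the point lies inside the non-convex shape)
    n      : number of cells per direction (the study uses at least 500)
    rho    : assumed uniform density of the particle
    """
    box_min, box_max = np.asarray(box_min, float), np.asarray(box_max, float)
    h = (box_max - box_min) / n                   # cell size in each direction
    v_cell = float(np.prod(h))                    # volume of one cell
    axes = [box_min[d] + (np.arange(n) + 0.5) * h[d] for d in range(3)]
    Y, Z = np.meshgrid(axes[1], axes[2], indexing="ij")
    count, s1, s2, outer = 0, np.zeros(3), 0.0, np.zeros((3, 3))
    for x in axes[0]:                             # one x-slice at a time (memory-friendly)
        pts = np.stack([np.full(Y.shape, x), Y, Z], axis=-1).reshape(-1, 3)
        pts = pts[inside(pts)]
        count += len(pts)
        s1 += pts.sum(axis=0)
        s2 += (pts ** 2).sum()
        outer += pts.T @ pts
    V = count * v_cell                            # approximated volume
    Xg = s1 / count                               # Eq. 3.11 with equal cell volumes
    # second moments shifted to the centre of mass, then Eq. 3.12 with
    # f_kl(X) = rho * (|X - Xg|^2 delta_kl - (X - Xg)_k (X - Xg)_l)
    s2_c = s2 - count * float(Xg @ Xg)
    outer_c = outer - count * np.outer(Xg, Xg)
    J = rho * v_cell * (np.eye(3) * s2_c - outer_c)
    return V, Xg, J

# example: a "2D cross" made of two orthogonal cylinders of radius R and length L
R, L = 0.5, 2.0
def cross_2d(X):
    in_x = (np.abs(X[:, 0]) <= L / 2) & (X[:, 1] ** 2 + X[:, 2] ** 2 <= R ** 2)
    in_y = (np.abs(X[:, 1]) <= L / 2) & (X[:, 0] ** 2 + X[:, 2] ** 2 <= R ** 2)
    return in_x | in_y

V, Xg, J = mass_properties(cross_2d, [-1.0, -1.0, -0.5], [1.0, 1.0, 0.5], n=150)
```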
In this study, all the DEM simulations of glued convex shaped particles are performed with at least 500 grid points per direction to ensure a correct approximation of the mass properties.

2.4 Time integration

[START_REF] Džiugys | An approach to simulate the motion of spherical and non-spherical fuel particles in combustion chambers[END_REF] carried out a full survey of the most popular integration schemes used in DEM simulations. This survey revealed that a scheme at least second-order accurate in time is required to properly predict the time evolution of the granular system. Our study uses the same DEM code as the one used by [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]. Hence, the time integration is performed with a second-order leap-frog Verlet scheme ([START_REF] Langston | Continuous potential discrete particle simulations of stress and velocity fields in hoppers: transition from fluid to granular flow[END_REF], [START_REF] Jean | Frictional contact in collections of rigid or deformable bodies: numerical simulation of geomaterial motions[END_REF]):

$$U\!\left(t + \frac{\Delta t}{2}\right) = U\!\left(t - \frac{\Delta t}{2}\right) + \frac{F(t)}{M}\, \Delta t, \qquad x(t + \Delta t) = x(t) + U\!\left(t + \frac{\Delta t}{2}\right) \Delta t \qquad (3.13)$$
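A minimal sketch of the translational part of this leap-frog update (Eq. 3.13) is given below; the rotational update, which in Grains3D also involves quaternions and Eq. 3.2, is omitted here.

```python
def leapfrog_step(x, u_half, force, mass, dt):
    """One leap-frog (kick-drift) update, Eq. 3.13.

    x      : position at time t
    u_half : velocity at time t - dt/2
    force  : total force evaluated at time t
    """
    u_half_new = u_half + force / mass * dt   # velocity at t + dt/2
    x_new = x + u_half_new * dt               # position at t + dt
    return x_new, u_half_new
```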
2.5 GJK-based contact detection

[START_REF] Gilbert | A fast procedure for computing the distance between complex objects in three-dimensional space[END_REF] introduced the Gilbert-Johnson-Keerthi algorithm to compute the distance between two convex polyhedra. In 1990, the algorithm was improved by [START_REF] Gilbert | Computing the distance between general convex objects in three-dimensional space[END_REF] to deal with general convex objects. Since each elementary component of a non-convex particle is a convex object, the Gilbert-Johnson-Keerthi algorithm can be applied to each elementary component to detect a potential collision with any other elementary component of a neighbouring non-convex particle. For further details on the use of the Gilbert-Johnson-Keerthi algorithm for arbitrary convex shaped particles, the interested reader is referred to the work of [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]. By means of linked-cell spatial sorting ([START_REF] Grest | Vectorized link cell Fortran code for molecular dynamics simulations for a large number of particles[END_REF]) for proximity detection, our GJK-based collision detection strategy can be summarized as follows:
• Use linked cells to find pairs of particles (P_i, P_j) that potentially interact,
• For each pair that potentially interacts, apply the Gilbert-Johnson-Keerthi distance algorithm to compute the minimal distance between all pairs (E_k, E_l) of elementary components, where E_k is an elementary component of particle P_i and E_l is an elementary component of particle P_j. The computing time of contact detection between two non-convex particles scales as N_i × N_j, where N_i and N_j are the numbers of elementary components of particle P_i and particle P_j, respectively.
• The pairs (E_k, E_l) in contact contribute to the total contact force and torque ([START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]).

As pointed out in [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF], the GJK algorithm applied right away to convex shapes is helpful to tell whether two convex shapes touch or not (if they do touch, the minimal distance between them is 0) but does not supply information on the contact features such as the contact point, the overlap distance and the unit normal vector at the point of contact. To access this information, we suggested in [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF] a 3-step procedure. This 3-step procedure for contact resolution is illustrated in Fig. 3.4 and summarized below:
• Apply a homothety H to the pair of convex elementary components (E_k = A, E_l = B) to slightly shrink them (by a thickness r_A and r_B respectively), such that they do not overlap ([START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]),
• Compute the minimal distance between the two shrunk objects A and B,
• Based on the information provided by the GJK algorithm, reconstruct the contact features as:
$$\delta = d(H_A(A), H_B(B)) - r_A - r_B, \quad \text{with } d(H_A(A), H_B(B)) = \|C_{H_A(A)} - C_{H_B(B)}\| \qquad (3.14)$$
$$C = \frac{C_A + C_B}{2} \qquad (3.15)$$
$$n_C = \frac{C_B - C_A}{\|C_B - C_A\|} \qquad (3.16)$$
where δ is the overlap distance, C the contact point and n C the unit normal vector.
Figure 3.4 - Contact handling scenario between non-convex particles.
Contact is assumed to occur if δ ≤ 0. For more details about our contact detection and resolution method, the interested reader is referred to [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF].
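Putting the broad phase and the narrow phase together, the two-level detection loop can be sketched as follows. The linked-cell neighbour search and a `gjk_distance(A, B)` routine returning the minimal distance and the closest points of two convex bodies are assumed to exist; the reconstruction of the contact features follows Eqs. 3.14-3.16 with the sign convention adopted above, so this is an illustrative sketch rather than the actual Grains3D code.

```python
import numpy as np

def composite_contacts(P_i, P_j, gjk_distance):
    """List the contact features between two glued convex (composite) particles.

    P_i, P_j     : objects exposing .components, each component carrying a .crust
                   thickness (r_A, r_B) and its shrunk geometry .shrunk
    gjk_distance : callable(A, B) -> (d, C_A, C_B), minimal distance between two
                   convex bodies and the associated closest points
    """
    contacts = []
    for E_k in P_i.components:                  # cost scales as N_i x N_j
        for E_l in P_j.components:
            d, C_A, C_B = gjk_distance(E_k.shrunk, E_l.shrunk)
            delta = d - E_k.crust - E_l.crust   # Eq. 3.14; contact if delta <= 0
            if delta <= 0.0:
                C = 0.5 * (C_A + C_B)                               # Eq. 3.15
                n_C = (C_B - C_A) / np.linalg.norm(C_B - C_A)       # Eq. 3.16
                contacts.append((delta, C, n_C))
    return contacts
```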
2.6 Contact force and torque
The total contact force between two non-convex composite particles is calculated as a mean contact force over all their contact points. In other words, the total contact force is the sum of all forces resulting from contacts between two elementary components of the two non-convex composite particles divided by the total number of contact points. The same applies to the total torque, for which we pay particular attention to using the right lever arms (computed with respect to the centre of mass of the non-convex composite particle, not the centre of mass of the elementary component). [START_REF] Džiugys | An approach to simulate the motion of spherical and non-spherical fuel particles in combustion chambers[END_REF] reviewed the most popular contact force models in the literature. In this work, we follow [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF] and employ a simple contact force model in which the total collision force F_ij between two particles i and j acting on the contact surface is:
$$F_{ij} = F_{ij,el} + F_{ij,dn} + F_{ij,t} \qquad (3.17)$$
The three components contributing to the total force have the following meaning and expression:
• The normal Hookean elastic restoring force reads:
$$F_{ij,el} = k_n\, \delta_{ij}\, n_c \qquad (3.18)$$
where k_n is a spring stiffness constant. In theory, k_n can be related to material properties and contact geometry, but in DEM simulations it is essentially a numerical parameter that controls the amount of overlap between particles. δ_ij denotes the overlap distance between particles i and j and n_c the unit normal vector at the contact point.
• The normal dissipative (viscous-like) force reads:
$$F_{ij,dn} = -2\gamma_n\, m_{ij}\, U_{rn}, \quad \text{where } m_{ij} = \frac{M_i M_j}{M_i + M_j} \qquad (3.19)$$
where γ_n is the normal dissipation coefficient and m_ij the reduced mass of particles i and j. U_rn denotes the normal relative velocity between the two particles.
• The tangential friction force reads as follows:
$$F_{ij,t} = -\min\{\mu_c |F_{el}|, |F_{dt}|\}\, t_c \qquad (3.20)$$
$$F_{dt} = -2\gamma_t\, m_{ij}\, U_{rt} \qquad (3.21)$$
where F_dt denotes the dissipative frictional contribution, γ_t the dissipative tangential friction coefficient, U_rt the tangential relative velocity between the two particles and t_c the unit tangential vector at the contact surface.
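A compact sketch of this force model is given below. It assumes the contact features (overlap δ, unit normal n_c and relative velocity at the contact point) are already available and uses parameter names mirroring Eqs. 3.17-3.21; the handling of the sign of δ is a simplification.

```python
import numpy as np

def contact_force(delta, n_c, u_rel, k_n, gamma_n, gamma_t, mu_c, m_i, m_j):
    """Hookean spring + normal dashpot + Coulomb-capped tangential friction."""
    m_ij = m_i * m_j / (m_i + m_j)                 # reduced mass
    u_rn = np.dot(u_rel, n_c) * n_c                # normal relative velocity
    u_rt = u_rel - u_rn                            # tangential relative velocity
    f_el = k_n * abs(delta) * n_c                  # Eq. 3.18 (|delta| so the sign convention does not matter)
    f_dn = -2.0 * gamma_n * m_ij * u_rn            # Eq. 3.19
    f_dt = -2.0 * gamma_t * m_ij * u_rt            # Eq. 3.21
    # Eq. 3.20: friction bounded by the Coulomb limit mu_c * |f_el|
    if np.linalg.norm(u_rt) > 1e-12:
        t_c = u_rt / np.linalg.norm(u_rt)
        f_t = -min(mu_c * np.linalg.norm(f_el), np.linalg.norm(f_dt)) * t_c
    else:
        f_t = np.zeros(3)
    return f_el + f_dn + f_t                       # Eq. 3.17
```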
2.7 DEM parameters for convex particles
Let us consider a sphere-sphere normal collision at zero gravity and a relative colliding velocity v 0 . Assuming the two spheres have the same radius R, the equation of time evolution of the penetration depth δ during the collision reads as follows:
$$\frac{d^2\delta}{dt^2} + 2\gamma_n \frac{d\delta}{dt} + \omega_0^2\, \delta = 0, \qquad \delta(t = 0) = 0, \qquad \frac{d\delta}{dt}(t = 0) = v_0 \qquad (3.22)$$
The starting time of contact is assumed to be t = 0.
Here ω_0^2 = 2k_n/M, where M denotes the mass of each particle. Hence,
$$\delta(t) = \frac{v_0}{\sqrt{\omega_0^2 - \gamma_n^2}}\, e^{-\gamma_n t} \sin\!\left(\sqrt{\omega_0^2 - \gamma_n^2}\; t\right) \qquad (3.23)$$
Eq. 3.23 leads to the contact duration:
$$T_c = \frac{\pi}{\sqrt{\omega_0^2 - \gamma_n^2}} \qquad (3.24)$$
According to [START_REF] Ristow | Dynamics of granular materials in a rotating drum[END_REF], for DEM simulations, the time step needs to be less than T c /10 to properly integrate each contact.
The time of maximum overlap is:
$$T_{max} = \frac{1}{\sqrt{\omega_0^2 - \gamma_n^2}} \arctan\!\left(\frac{\sqrt{\omega_0^2 - \gamma_n^2}}{\gamma_n}\right) \qquad (3.25)$$
which gives the maximum penetration depth δ max = δ(t = T max ).
The coefficient of restitution e_n is defined as the ratio of the post-collisional to the pre-collisional velocity:
$$e_n = \frac{\dfrac{d\delta}{dt}(t = T_c)}{v_0} = e^{-\gamma_n T_c} = e^{-\frac{\gamma_n \pi}{\sqrt{\omega_0^2 - \gamma_n^2}}} \qquad (3.26)$$
If e_n is given together with k_n, the damping coefficient γ_n can be deduced from Eq. 3.26:
$$\gamma_n = -\frac{\omega_0 \ln e_n}{\sqrt{\pi^2 + (\ln e_n)^2}} \qquad (3.27)$$
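These closed-form relations are what make the parameter choice practical: given k_n, a target e_n, a particle mass and a reference impact velocity v_0, the damping coefficient, the contact duration and the maximum overlap follow directly. The sketch below evaluates them; the particle mass used in the example (about 3.5 × 10^-5 kg, e.g. a 1.5 mm-radius sphere with an assumed density of roughly 2500 kg m^-3) is our own assumption, but with it the sketch reproduces values of T_C and δ_max of the same kind as those reported later in Tab. 3.4.

```python
import math

def contact_estimates(k_n, e_n, mass, v0):
    """Estimate damping, contact duration and maximum overlap for a binary impact."""
    omega0 = math.sqrt(2.0 * k_n / mass)                       # omega_0^2 = 2 k_n / M
    gamma_n = -omega0 * math.log(e_n) / math.sqrt(math.pi**2 + math.log(e_n)**2)  # Eq. 3.27
    w = math.sqrt(omega0**2 - gamma_n**2)
    T_c = math.pi / w                                          # Eq. 3.24
    T_max = math.atan(w / gamma_n) / w                         # Eq. 3.25
    delta_max = v0 / w * math.exp(-gamma_n * T_max) * math.sin(w * T_max)  # Eq. 3.23
    return gamma_n, T_c, delta_max

# e.g. k_n = 1e5 N/m, e_n = 0.73, assumed particle mass 3.5e-5 kg, v0 = 1 m/s
gamma_n, T_c, delta_max = contact_estimates(1e5, 0.73, mass=3.5e-5, v0=1.0)
dt = T_c / 20.0   # time step well below the T_c/10 rule recalled above
```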
2.8 Particle-wall and particle-particle interactions
Since particles have a non-convex shape, contacts can occur at several points. According to [START_REF] Kruggel-Emden | A study on the validity of the multisphere discrete element method[END_REF], the forces and torques acting on a composite particle and involved in the resolution of Eqs. 3.1-3.4 can be computed as follows:
$$F_i = \sum_{j=1}^{M} \sum_{l=1}^{N} \frac{F_{ijl}}{a_{ijl}} \qquad (3.28)$$
$$M_i = \sum_{j=1}^{M} \sum_{l=1}^{N} \frac{R_i \wedge F_{i,t}}{a_{ijl}} \qquad (3.29)$$
where F_ijl denotes the force created between objects i and j at contact point l and a_ijl refers to the number of contact points during the interaction. [START_REF] Höhner | Comparison of the multi-sphere and polyhedral approach to simulate non-spherical particles within the discrete element method: Influence on temporal force evolution for multiple contacts[END_REF] suggested computing the forces incrementally since the number of contacts can vary during a collision. Their formulation is expressed as follows:
$$F_i^n = F_{i,el}^n + F_{i,dn}^n = F_{i-1,el}^n + \frac{k_n}{N_i} \sum_{j=1}^{N_i} (\delta_{i,j} - \delta_{i-1,j}) + F_{i-1,dn}^n + \frac{\gamma_n}{N_i} \sum_{j=1}^{N_i} (\dot{\delta}_{i,j} - \dot{\delta}_{i-1,j}) \qquad (3.30)$$
where the elastic and viscous normal contact forces are incrementally computed by calculating and dividing only the incremental force elements by the number of contact points at the iteration step i. This is done to ensure that a multiple contact can be represented as a single contact.
As emphasized in [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF], setting the contact force model parameters for non-spherical particles in order to guarantee an accurate and proper resolution of contacts is not an easy task. Here the potential occurrence of multiple contacts between two non-convex particles renders this task even more complicated. Using the previous simple analytical model for a gravityless contact between two particles, we consider two variants below.
The first variant involves summing up the contact forces, i.e., considering a system made of parallel springs and dampers. Starting from Eq. 3.22, the case of multiple contacts can be treated by assuming that the Hookean elastic force and the normal dissipative force are defined as in Eq. 3.18 and Eq. 3.19, respectively, for each contact between two elementary components. Therefore, for N contacts Eq. 3.22 becomes:
$$\frac{d^2\delta}{dt^2} + N\gamma_n \frac{d\delta}{dt} + \frac{N k_n}{M}\, \delta = 0, \qquad \delta(t = 0) = 0, \qquad \frac{d\delta}{dt}(t = 0) = v_0 \qquad (3.31)$$
where t = 0 is assumed to be the initial time of contact. Eq. 3.31 can be written as follows to have the same form as Eq. 3.22:
$$\frac{d^2\delta}{dt^2} + 2\tilde{\gamma}_n \frac{d\delta}{dt} + \tilde{\omega}_0^2\, \delta = 0, \qquad \delta(t = 0) = 0, \qquad \frac{d\delta}{dt}(t = 0) = v_0 \qquad (3.32)$$
where
$$\tilde{\gamma}_n = \frac{N \gamma_n}{2}; \qquad \tilde{\omega}_0^2 = \frac{N k_n}{M} \qquad (3.33)$$
And the expression of the contact time becomes:
$$T_c = \frac{\pi}{\tilde{\omega}_0} = \pi \sqrt{\frac{M}{N k_n}} \qquad (3.34)$$

Eq. 3.34 shows that not only the stiffness coefficient influences the contact time but also the number of contacts between elementary components. Actually, the higher the number of contacts between elementary components, the shorter the contact time. This is a very undesirable property.
Solving Eq. 3.31 leads to the definition of the damping coefficient γ_n as a function of the number of contacts N and the coefficient of restitution e_n as follows:
$$\gamma_n = -\frac{2}{N}\, \frac{\tilde{\omega}_0 \ln e_n}{\sqrt{\pi^2 + (\ln e_n)^2}} \qquad (3.35)$$
In Fig. 3.5, we illustrate how the number of contact points modifies the damping coefficient for a given coefficient of restitution. In fact, since forces from all contacts are added up during the interaction, Eq. 3.35 corrects the excessive damping of the system. Inspired by the works of [START_REF] Kruggel-Emden | A study on the validity of the multisphere discrete element method[END_REF] and [START_REF] Höhner | Comparison of the multi-sphere and polyhedral approach to simulate non-spherical particles within the discrete element method: Influence on temporal force evolution for multiple contacts[END_REF], the second variant to solve the multiple contact problem involves assuming that the problem can be treated as a single contact one. In fact, we compute the elastic and dissipative normal contact forces as the sum of the forces from all contact points divided by the number of contacts occurring at each time step ∆t. The effect of compressing/elongating multiple springs and moving multiple dampers is thus modified in such a way that it corresponds to a single contact dynamics.
Eq. 3.28 hence takes the following form:
$$F_n = F_{n,el} + F_{n,dn} = \frac{k_n}{N} \sum_{i=1}^{N} \delta_i + \frac{\gamma_n M}{N} \sum_{i=1}^{N} \dot{\delta}_i \qquad (3.36)$$
where N denotes the number of contact points at the current time step. Compared to the work of [START_REF] Höhner | Comparison of the multi-sphere and polyhedral approach to simulate non-spherical particles within the discrete element method: In uence on temporal force evolution for multiple contacts[END_REF], the force implemented in Grains3D is not evaluated incrementally.
From now on, the formulation of Eq. 3.31 is referred to as "model A" and the formulation of Eq. 3.36 is referred to as "model B".
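The difference between the two variants reduces to how the per-contact forces are aggregated into the total force acting on the composite; a schematic illustration (not the Grains3D code) is:

```python
def total_normal_force_model_A(contact_forces):
    """Model A: parallel springs/dampers, per-contact forces are simply summed
    (Eq. 3.31); gamma_n must then be rescaled with the number of contacts N (Eq. 3.35)."""
    return sum(contact_forces)

def total_normal_force_model_B(contact_forces):
    """Model B: the multiple contact is treated as an equivalent single contact,
    i.e. the mean force over the N current contact points (Eq. 3.36)."""
    return sum(contact_forces) / len(contact_forces)
```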
3 Validation
3.1 Methodology
The methodology used to validate our glued convex method is rather elementary but sufficient and well adapted. It involves running simulations with a convex shape treated as a single standard body in Grains3D and then running simulations of exactly the same flow configuration with the convex shape artificially decomposed into a set of smaller convex shapes. There is almost an infinity of possibilities. The most intuitive ones include decomposing a cube into 8 smaller cubes or decomposing a cylinder into a number of thinner cylinders. For the sake of conciseness, we have selected a single test case that also admits an analytical solution: the normal impact of a cylinder on a flat wall.
3.2 Normal cylinder-wall impact
This test case is inspired by the works of Kodam et al. (2010b) and [START_REF] Park | Modeling the dynamics of fabric in a rotationg horizontal drum[END_REF]. It involves a cylinder impacting a flat wall in the direction normal to the wall and in a gravityless space (Fig. 3.6). The contact is also assumed frictionless. It is conceptually simple and very convenient for an accuracy assessment as it admits an analytical solution. Our goal is to compare the solutions computed with Grains3D for three representations of a cylinder to the analytical solution. These three representations are:
1. a true cylinder,
2. a composite cylinder obtained by artificially slicing the true cylinder into thinner cylinders and gluing them together,
3. a glued-sphere representation of the cylinder.
The initial conditions of the test case are characterized by:
• the initial angular position θ of the cylinder with respect to the horizontal plane,
• the initial translational velocity U = (0, 0, V^-_{z,g}),
• and the initial angular velocity ω = (0, 0, 0).
In other words, the pre-impact translational and angular velocity magnitudes are set to V^-_{z,g} and 0, respectively. From [START_REF] Park | Modeling the dynamics of fabric in a rotationg horizontal drum[END_REF], the post-impact angular velocity can be written as follows:
$$\omega_y^+ = \frac{M V_{z,g}^- (1 + \varepsilon)\, r \cos(\alpha + \theta)}{I_{yy} + M r^2 \cos^2(\alpha + \theta)} \qquad (3.37)$$
where M is the mass of the particle, ε = -V^+_{z,g}/V^-_{z,g} is the coefficient of restitution, V^-_{z,g} denotes the pre-impact velocity, α denotes the angle between the face of the cylinder and the line joining the contact point and the centre of mass, θ is the pre-impact angular position of the cylinder, I_yy is the moment of inertia about the y axis and r = √(R² + L²/4) is the distance between the impact point and the centre of mass, R being the radius of the cylinder and L its length (see Fig. 3.6). Similarly, the post-impact translational velocity reads as follows (Kodam et al. (2010b), [START_REF] Park | Modeling the dynamics of fabric in a rotationg horizontal drum[END_REF]):
$$V_{z,g}^+ = \omega_y^+\, r \cos(\alpha + \theta) - \varepsilon V_{z,g}^- \qquad (3.38)$$
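For reference, the analytical solution Eqs. 3.37-3.38 used below to assess the computed results can be evaluated with a small helper (angles in radians; the function signature is ours, not taken from the cited works):

```python
import math

def post_impact(theta, alpha, M, I_yy, R, L, eps, v_minus):
    """Analytical post-impact angular and translational velocities, Eqs. 3.37-3.38."""
    r = math.sqrt(R**2 + 0.25 * L**2)      # distance impact point - centre of mass
    c = math.cos(alpha + theta)
    omega_plus = M * v_minus * (1.0 + eps) * r * c / (I_yy + M * r**2 * c**2)  # Eq. 3.37
    v_plus = omega_plus * r * c - eps * v_minus                                # Eq. 3.38
    return omega_plus, v_plus
```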
Values of the physical parameters are listed in Tab. 3.1. As in Kodam et al. (2010b), we set V^-_{z,g} = 1 m/s and vary θ, the pre-impact angular position of the cylinder.
Figure 3.6 - (a) Sketch of the cylinder-wall impact (credit: Kodam et al. (2010b), [START_REF] Park | Modeling the dynamics of fabric in a rotationg horizontal drum[END_REF]); (b) a cylinder decomposed into thinner cylinders; (c) illustration of the cylinder-wall impact at 90°.

Table 3.1 - Experimental (Kodam et al. (2010b)) and numerical parameters for the normal impact of a cylinder on a flat wall (a: about a central diameter, b: about the central axis).
We plot in Fig. 3.7 the computed post-impact translational and angular velocities as a function of the pre-impact angular position θ, for the true cylinder and the glued cylinder (regardless of model A or model B). The agreement between these two simulations is extremely satisfactory. It reveals that the glued convex method is properly implemented in our code. We also compare these two quasi-identical computed solutions to the analytical solution Eqs. 3.37-3.38. The agreement of the two computed solutions with the analytical solution is also deemed to be very good, with the largest discrepancy observed on the post-impact angular velocity at low pre-impact angular positions (Fig. 3.7b). We now investigate more deeply the differences between the two formulations used to compute the total force acting on a composite particle, the so-called model A and model B. We select a particular pre-impact angular position θ = 90° and plot in Fig. 3.8 the time evolution of the normal contact force exerted on the cylinder over the time of contact. As expected from the formulation of model A, in which the total force is the sum of the forces exerted at each contact point, model A predicts an increasing total normal force as the number of elementary cylinders N increases (note that N is also the number of contact points for θ = 90°), although the magnitude of the force per elementary cylinder, i.e., per contact point, decreases. Overall, the adjustment of γ_n through Eq. 3.35 to get the expected restitution coefficient e_n guarantees that the solution is correct, as shown in Fig. 3.7, but the main drawback of model A, as predicted by Eq. 3.34 and supported by the results of Fig. 3.8a, is the decrease of the contact duration T_c with N. Consequently, the time step magnitude would have to be adjusted to the number of contact points in order to properly integrate a contact. This is a very undesirable property. Conversely, model B, which assumes that the total force exerted on the particle is the mean force over all contact points, provides a normal force magnitude, a contact duration as well as a maximum penetration depth independent of N, as shown in Fig. 3.8b.
Finally, we examine in the case θ = 90° the effect of N on the accuracy of the computed solution. For both model A and model B and N ≤ 30, Fig. 3.9a reveals that the error on the computed post-impact translational velocity is less than 0.5%. Model B performs remarkably better than model A, with an error quasi-independent of N and of the order of 0.05%. The error on the computed post-impact angular velocity, plotted as a function of N in Fig. 3.9b, is even more interesting. The analytical solution Eq. 3.37 predicts that the post-impact angular velocity is ω⁺_y = 0. The true cylinder simulation predicts an artificial non-zero post-impact angular velocity. This is due to the assumption, violated here, that the contact is always a point, while geometrically in this case it is a line. However, the GJK algorithm supplies a point, which randomly lies somewhere along that contact line and whose position is primarily determined by rounding errors. This somehow flawed contact point creates an erroneous torque that makes the particle spin after contact. Interestingly, the composite cylinder simulation predicts a post-impact angular velocity ω⁺_y that tends to 0, the correct value, as N increases. This is simply a beneficial side effect of the distribution of the N contact points along the contact line. Torques from the individual contact points almost cancel out with each other and the total torque exerted on the particle tends to 0 as N increases. Once again, model B performs better than model A, although it is not entirely clear why. It might simply be due to rounding errors being divided by N in model B.
Overall, the glued convex approach has been very satisfactorily validated in this cylinder-wall impact test case. Model B seems to perform better and is also conceptually more sensible, as contact feature estimates (and in particular the duration of contact) from a single contact point configuration are still valid. To complete the validation of the model, and as a side question, we run simulations with a glued sphere representation of the cylinder and evaluate how well the glued sphere approach performs in a simple impact test case. We consider two composite particles made of 9 and 54 spheres, respectively, as also considered by Kodam et al. (2010b) and illustrated in Fig. 3.10. Values of the physical parameters are listed in Tab. 3.2. For the mass properties, one can select those of a true cylinder or those of the glued-sphere representation. Kodam et al. (2010b) employed a mix of true cylinder (mass) and glued sphere (moment of inertia tensor) properties, although it is rather unclear what the motivation for such a choice is. Fig. 3.11a, Fig. 3.11b, Fig. 3.11c and Fig. 3.11d show the computed solutions with 9 and 54 glued spheres. Regardless of the set of mass property parameters (true cylinder, glued spheres or a mix as in Kodam et al. (2010b)), the computed solution is qualitatively the same and does not match at all the analytical solution. For 54 glued spheres, the computed solution starts to pick up the right qualitative form but is still quantitatively markedly off. As the number of glued spheres used to represent the cylinder increases, it is however predictable that the computed solution will tend to the analytical solution. It is interesting to observe that for two particular pre-impact angular position values, 0° and 90°, the glued sphere representation captures the right post-impact velocities. These two angles correspond to two particular contact configurations in which the shape of the cylinder, and specifically the artificial roundedness of the edges created by gluing spheres, does not play any role. In fact, at 0° and 90°, the actual contact zone geometry is a surface and a line, respectively. The homogeneous distribution of the glued spheres over the cylinder volume ensures the proper computation of the normal contact force and of the associated torque (that is 0). For all other pre-impact angular positions, which lead to a single contact point, the error on the post-impact velocities is very significant, unless the number of glued spheres is large (probably of the order of O(10²-10³)), as a result of the artificial rounded edges of the glued-sphere representation of the cylinder. In general, this simple test case reveals that the glued sphere representation of a complex shape, although intuitively attractive, might provide computed solutions of very weak accuracy and should hence be used with great care, if not avoided altogether.
4 Results
4.1 Packing porosity
The void fraction, or porosity, of a (static) packing of granular material is simply the ratio of the volume of empty space to the total volume of the system. Compacity is the complement of porosity and represents the ratio of the total volume of particles to the total volume of the system. The compacity of packings of convex particles can be estimated by computing the Voronoï diagram of the system ([START_REF] Luchnikov | Voronoi-delaunay analysis of voids in systems of nonspherical particles[END_REF]), whereas for non-convex particles the use of this method is impeded by their concavity. Consequently, another method has to be used to characterise the compacity of random packings of non-convex particles. Here we use the same method as the one used to calculate the mass properties of a non-convex particle, i.e., we define a box embedding the packing of particles, pixelate that space with a fine Cartesian constant-grid-size structured mesh and approximate the volume integral of the space actually occupied by particles by summing all the cells of the fine Cartesian mesh whose centre lies inside a particle. The method is fully parallelised as the total number of cells in this fine Cartesian mesh is very often of the order of O(10⁸-10⁹) to guarantee a sufficient level of accuracy.
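A compact sketch of this porosity measurement is given below. It mirrors the mass-property pixelation, with a hypothetical `inside_any(points, particles)` test that flags grid points lying inside at least one particle; note that the result also depends on the choice of the measurement box, which is assumed here to bound the bed exactly.

```python
import numpy as np

def packing_porosity(inside_any, particles, box_min, box_max, n=400):
    """Porosity = 1 - (pixelated particle volume) / (measurement box volume)."""
    box_min, box_max = np.asarray(box_min, float), np.asarray(box_max, float)
    h = (box_max - box_min) / n
    axes = [box_min[d] + (np.arange(n) + 0.5) * h[d] for d in range(3)]
    Y, Z = np.meshgrid(axes[1], axes[2], indexing="ij")
    occupied = 0
    for x in axes[0]:                              # slice by slice to limit memory
        pts = np.stack([np.full(Y.shape, x), Y, Z], axis=-1).reshape(-1, 3)
        occupied += int(np.count_nonzero(inside_any(pts, particles)))
    return 1.0 - occupied / float(n ** 3)
```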
Packings are created by inserting particles at the top of the domain. Particles settle downwards under gravity and collide with neighbouring particles and/or the bottom wall. The filling process is deemed to be complete when all particles have reached a pseudo-stationary state characterized by a negligible total kinetic energy of the system. We consider the two following configurations:
1. a system without lateral solid wall effects, designed as a box with bi-periodic boundary conditions on the lateral (vertical) boundaries, i.e., in the horizontal directions. 1000 particles are inserted in the simulation in the following way: (i) a particle position is randomly selected in a thin parallelepiped at the top of the domain at each time t_n, (ii) a random angular position is assigned to the particle, (iii) insertion is attempted. If successful, the particle is inserted, otherwise a new random position together with a new random angular position is selected and insertion is attempted again at the next time t_{n+1}. This insertion procedure results in a moderately dense shower of particles stemming from the parallelepipedic insertion window.
2. a system with strong lateral wall effects, designed as a cylindrical reactor with a circular cross-section. We select the same configuration as in our previous work [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]. In [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF], we examined the effect of convexity on packing porosity. Now we extend this case study to non-convexity. 250 particles are randomly inserted at the top of the domain at a rate of 1 particle per second until the simulation is stopped at 260 s. Lateral wall effects are deemed to be strong as the ratio of the reactor diameter to the particle equivalent diameter is ≈ 50/8 = 6.25, an admittedly small value.
(a) "2D cross" shape (b) "3D cross" shape In both con gurations, we consider the 4 convex shapes already examined in [START_REF] Wachs | Grains3D, a exible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF] in addition to two new non-convex cross-like shapes illustrated in F . 3.12. All shapes have the same volume. The two meaningful physical parameters of the contact force model are set to e n = 0.73 and µ c = 0.55.
Packings of the different shapes in the wall-free bi-periodic domain are presented in Fig. 3.13. The corresponding porosities, computed by our approximate numerical integration based on pixelating the space occupied by the packing of particles, are shown in Tab. 3.3. Although tetrahedra already exhibit a slightly higher porosity, there is a remarkable jump in porosity between the 4 convex shapes and the 2 non-convex cross-like shapes. In fact, ε for 3D crosses is twice as large as for spheres of the same volume. With strong wall effects, the effect of shape on the porosity ε is even more pronounced, as illustrated by Fig. 3.14. Porosity varies linearly with the height of the bed, and visually the variation of bed height as a function of shape speaks for itself. The bed height for 3D crosses (blue particles in Fig. 3.14(f)) is literally 5 times larger than that for spheres, cylinders and cubes, translating into a 5 times larger porosity. It is also 4 times larger than that for tetrahedra as well as 2 times larger than that for 2D crosses. For 3D crosses, it is quite remarkable in Fig. 3.14(f) that ε is close to 1 near the reactor wall, in a crown of width approximately half the length of the cross beams, whereas all other shapes, even 2D crosses, are able to fill that region much better. Obviously, we have selected these 2 non-convex shapes on purpose, as they exhibit a low sphericity and promote some sort of entanglement in the packing. They are hence good candidates for high porosity packings and other unusual, intricate effects in granular dynamics, as we shall see in the next section. The analysis of the packing micro-structure could easily be extended, e.g., by looking at the radial porosity profile, but this goes beyond the scope of the present paper. Our goal here is primarily to evaluate quantitatively the packing porosity for such shapes and to shed some light on how strong the effect of shape can be, even in a very simple configuration.
4.2 Rotating drum
Following the works of [START_REF] Yang | Microdynamic analysis of particle flow in a horizontal rotating drum[END_REF], [START_REF] King | Collision Detection for Ellipsoids and Other Quadrics[END_REF] and [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF], we investigate the flow dynamics of a granular medium in a rotating drum. We select the same flow configuration as in our previous work [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF] and our goal is to extend the results previously obtained for convex particles to non-convex particles. As in [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF], the drum has a radius of R_drum = 50 mm and a depth of 24 mm (Fig. 3.15). A periodic boundary condition is applied along the drum axis to avoid end wall effects. The drum is loaded with mono-dispersed non-convex particles such that the region occupied by particles in the drum (regardless of porosity) corresponds to 35% of the drum volume, i.e., the pack initially has a height equal to ≈ 0.76R_drum. For the non-convex shapes, we use the same 2D and 3D crosses as in Section 4.1. The new simulation results for the 2D and 3D crosses complement the existing set of results we obtained for convex particles (i.e., spheres, cylinders, cubes and regular tetrahedra) in [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]. Once again, all shapes have the same volume, which corresponds here to a sphere with a radius of 1.5 mm. Values of all simulation parameters are listed in Tab. 3.4.
Parameter                        Value
k_n (N m^-1)                     1 × 10^5
e_n                              0.73
µ_c                              0.55
µ_t (s^-1)                       1 × 10^5
δ_max (m), δ_max/R_e             1.1403 × 10^-5, 0.007602
T_C (s)                          4.172 × 10^-5
∆t (s)                           2 × 10^-6
Table 3.4 - Contact force model parameters, estimates of the contact features at v_0 = 1 m s^-1 and time step magnitude used in the rotating drum simulations.
As shown in Section 4.1 for the non-convex cross-like shapes and in [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF] for tetrahedra, the total number of particles for each shape needs to be adjusted such that the initial bed height is always ≈ 0.76R_drum (35% of the drum volume), due to the large variations in porosity between shapes. While the drum was loaded with 3000 spheres, cylinders and cubes, only 2600 regular tetrahedra were used in [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]. Here, we fill the drum with 1500 2D crosses and 1250 3D crosses. Fig. 3.16 shows the internal flow structure of a system filled with 3D crosses for Ω ∈ [5; 250] rpm. A first glance already indicates the strong influence of the particle shape on the flow dynamics, compared to spheres and even to convex particles. Similarly to the filling process in Section 4.1, the significant differences observed all result from the ability of 3D crosses to entangle. As for other convex shapes, we observe a transition from an avalanching regime to a cataracting regime, then to a pseudo-cataracting and eventually to a centrifuging regime as the rotation rate increases. Note that for low rotation rates, the rolling regime observed for spheres is replaced by an avalanching regime (the same was observed for convex shapes in [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]). Let us now describe qualitatively the features of each flow regime. At Ω = 5 rpm, the flow regime is representative of episodic avalanches governed by the pseudo-chaotic evolution of the highly entangled micro-structure of the pack of particles. Particle rotation is strongly impeded both close to the drum wall and at the free surface. As for other shapes, particles close to the drum wall experience a rigid body motion, while the major difference occurs at the free surface. Particle entanglements delay the onset of avalanching up to very high free surface angles, sometimes close to 90°. Then the pack eventually breaks and big clusters of particles detach and fall down from the top right to the bottom left of the free surface. Big cluster detachment from the rest of the pack of particles at the top right resembles to some extent the fracturing of a homogeneous solid material or of a cohesive granular medium. Fracturing starts at the location in the pack that shows a weakness characterized by a lower level of entanglement, i.e., a lower level of cohesion. We call this regime episodic avalanching as the frequency of occurrence of avalanches is less regular, and hence tougher to define, than for convex shapes, as supported by Fig. 3.22. As the rotation rate increases to Ω = 20 rpm, the big clusters of particles at the free surface disappear to give way to a thick layer of particles flowing down the free surface from the top right to the bottom left. At Ω = 80 rpm, particles gain enough kinetic energy to start freeing themselves from the pack. The flow dynamics is still strongly governed by particle entanglements but the pack of particles is not as dense anymore and consequently the strength, or cohesion, of the pack of entangled particles is weaker. This corresponds to a transition from avalanching to cataracting, although particles at the free surface do not yet have a free-flying ballistic motion.
At Ω = 125 rpm, the kinetic energy of particles at the top right of the free surface is large enough for them to almost free themselves from the pack and fly freely. This flow dynamics is a typical sign of a cataracting regime ([START_REF] Mellmann | The transverse motion of solids in rotating cylinders-forms of motion and transition behavior[END_REF]). We would like to make a short digression on the determination of the onset of the cataracting regime. As a comparison, we observe this ballistic trajectory of spherical particles in the range 150 rpm ≤ Ω ≤ 200 rpm, which suggests that the cataracting regime starts at about 150 rpm for spheres. From Fig. 3.16, we might define the onset of the cataracting regime at Ω = 125 rpm, which would hence indicate that 3D crosses exhibit a cataracting regime at lower rotation rates than spheres. At Ω = 125 rpm the Froude number, defined as Ω²R_drum/g, is Fr ≃ 0.87. Looking more closely at Fig. 3.16(d), the notion of free flight is tougher to define. Although the overall flow pattern does look like a cataracting regime, particles that detach from the top right still seem to be linked, in a very weak way, to neighbouring particles during their pseudo free flight. For spheres (see [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF], Fig. 5(e)), it is very visible that particles flying from the top right to the bottom left of the free surface do not touch any other neighbouring particles. In other words, the transition from avalanching to cataracting is not necessarily easy to determine for 3D crosses.

From Ω = 150 rpm, the cataracting regime starts to disappear and is progressively replaced by a pseudo-cataracting (or pseudo-centrifuging) regime. Ω is not high enough to already observe a fully centrifuging regime but not low enough for the cataracting regime to persist. The thin layer empty of particles at the top of the drum is a signature that the fully centrifuging regime has not yet been attained. At Ω = 150 rpm, the Froude number is Fr ≃ 1.25. Finally, from Ω = 200 rpm, the fully centrifuging regime manifests itself, corresponding to Fr ≃ 2.25. For spheres, we determined in [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF] that the transition to the centrifuging regime occurs at Ω ≃ 220 rpm, i.e., for Fr ≃ 2.7. This would suggest that the transition from cataracting to centrifuging occurs at lower rotation rates for 3D crosses than for spheres. As already noticed for spheres or any other shape, the centrifuging regime is characterized by a continuous layer of particles attached to the drum wall and rotating with the drum as a rigid body. A particular and rather fascinating feature of 3D crosses is the form of the free surface of the pack of particles undergoing a rigid body motion. While for spheres this layer has a constant thickness, it is rather irregular for 3D crosses. Actually, the entangled 3D crosses create an imprint over the early transients of the drum rotation. In other words, the free surface is determined by a competition between the strength, or cohesion, of the entangled pack of particles and the centrifugal force that pushes particles towards the drum wall. The free surface very rapidly adopts its final form (after a few drum rotations only) and then remains forever frozen in a rigid body rotation, as shown in Fig. 3.16(f).
To illustrate how much the 3D cross-like shape hinders the rotation of a particle compared to a sphere, even without entanglements with neighbouring particles, we perform a simulation of a single particle in the drum rotating at Ω = 150 rpm (Fig. 3.17). The resistance to rolling motion of the sphere is very low and, accordingly, the critical angle at which the sphere starts to roll down the drum wall is very low too. The 3D cross reaches much higher on the top right and its overall motion is far more chaotic. The ratio of translational to angular kinetic energy is much higher for a 3D cross than for a sphere. It would be interesting to extract this ratio in the multi-particle rotating drum simulations to shed some more light on the differences in the energy conversion mechanism between spheres, convex and non-convex shapes. This is an on-going work in our group and will be the topic of a future paper. We illustrate in Fig. 3.18 the avalanching nature of the flow dynamics at low rotation rates Ω = 5 rpm and Ω = 20 rpm. In particular at Ω = 5 rpm, we can neatly see in Fig. 3.16a(c) that the shallow layer of slumping particles at the free surface fractures in the middle into two big clusters. Another important comment concerns the determination of the dynamic angle of repose of the free surface in this avalanching regime. In fact, not only is the free surface anything but flat, but the flow is highly intermittent (episodic) and the dynamic angle of repose varies over time with a large amplitude. In Fig. 3.16a(c), it is noticeable that the free surface is close to vertical.

"2D cross" shape
The overall picture for 2D crosses is qualitatively similar to that for 3D crosses. Since 2D crosses have a higher sphericity than 3D crosses and a lower tendency to entangle, the original features observed for 3D crosses are also observed, but are less marked, for 2D crosses. We notice the same transitions from avalanching to cataracting, then to pseudo-cataracting and eventually to centrifuging as the drum rotation rate increases, but these transitions occur at slightly different critical rotation rates. The different flow regimes for 2D crosses are shown in Fig. 3.19. In general, the pack of 2D crosses is less cohesive than the pack of 3D crosses, in the sense that the strength of the entangled network of particles is weaker. This difference manifests itself very visibly in Fig. 3.20, where we illustrate the transient flow dynamics in the drum. The dynamic angle of repose of 2D crosses, although pretty high compared to convex shapes, is lower than that of 3D crosses. It also seems that the free surface, although not very flat, is significantly flatter than that of 3D crosses. Finally, Fig. 3.22 suggests that avalanches are more regular and that the avalanching regime can be classified as periodic avalanching, in contrast to the episodic avalanching of 3D crosses. At Ω = 5 rpm a single avalanching frequency can be more clearly defined for 2D crosses than for 3D crosses, although this is not totally obvious. Finally, the cataracting, pseudo-cataracting and centrifuging regimes of 2D crosses are very similar to those of 3D crosses. We plot in Fig. 3.21 the time-averaged coordination number as a function of the rotation rate for all shapes. In general, 2D and 3D crosses exhibit a higher coordination number than the other shapes regardless of the rotation rate, as a result of their highly entangled micro-structure. However, up to Ω = 150 rpm, the trend is very similar to that of convex particles and there is no major signature of non-convexity in the variation of the coordination number with Ω. The plots for the 2 non-convex shapes are simply shifted to higher values of the coordination number. The only signature of non-convexity pertains to the transition to the cataracting/pseudo-cataracting regime and then to the centrifuging regime. However, we ran additional simulations for tetrahedra and noticed the same trend as for 2D/3D crosses. Hence, this suggests that this signature is actually not relevant to non-convexity only, but more generally to non-sphericity. This emphasises again that the transitions to cataracting/pseudo-cataracting and to centrifuging are not easy to define. The increase of the coordination number above Ω = 150 rpm might however indicate the onset of the transition to centrifuging. At high Ω ≥ 200 rpm, the absence of a neat plateau (as visible as for spheres) does not allow us to determine from this plot only when the fully centrifuging regime really starts. We plot in Fig. 3.22 the mean translational particle velocity as a function of time for different rotation rates. The interesting features of the 2D and 3D cross flow dynamics, already described above, occur at low rotation rates Ω = 5 rpm and Ω = 20 rpm. From Ω = 42 rpm, the mean translational particle velocity of the 2 non-convex shapes is very similar to that of any of the 3 non-spherical convex shapes. Ω = 5 rpm reveals that 3D crosses undergo more chaotic, in the sense of larger amplitude and more episodic, avalanches than 2D crosses and convex shapes.
The peaks of mean translational particle velocity represent rapid avalanches of particles triggered by a very high dynamic angle of repose (up to ∼ 90°). The most remarkable manifestation of the resistance of the highly entangled pack of 3D crosses to slumping, or to flowing from the top right to the bottom left of the drum, occurs at Ω = 20 rpm. While 2D crosses and cubes both exhibit a moderate avalanching dynamics, 3D crosses still undergo large amplitude and well defined avalanches characterized by large amplitude fluctuations of the mean translational particle velocity with time. At high rotation rates, the mean translational particle velocity progressively tends to a constant value over time. For instance, at Ω = 200 rpm, the mean translational particle velocity does not vary with time anymore. This represents a much more reliable signature of the onset of the fully centrifuging regime than what we could extract from the coordination number analysis.
Conclusion
We suggested an extension of our DEM from convex to non-convex shapes. By analogy with the glued spheres model, the novel method is called the glued convex method, as convex particles are "glued" together to create any non-convex shape. Our novel method for non-convex shapes relies on the same tools we used for convex shapes in [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]. In fact, contact detection between two non-convex bodies relies on contact detection between all the pairs of elementary convex components that compose each composite non-convex body. This reduces the complexity of the problem of contact detection between non-convex bodies to the problem of contact detection between convex bodies, a problem for which we already suggested a reliable and accurate solution method in Wachs et al. (2012) using a Gilbert-Johnson-Keerthi algorithm. The novel method is extremely versatile, as virtually any non-convex shape can be considered. We illustrated the new simulation capabilities of our in-house code Grains3D in two flow configurations: (i) the filling of a reactor and (ii) the flow dynamics in a rotating drum. The simulation results we presented for non-convex 2D and 3D crosses are unprecedented in the literature.
We suggested a simple but robust solution to the problem of multiple contact points that enables us to keep using analytical estimates of contact features, and in particular of the contact duration. This significantly facilitates the estimation of the time-step magnitude in DEM simulations of non-convex bodies. We considered a normal cylinder-wall impact test case to illustrate the validation of our implementation. Along the way, we confirmed, as other works in the literature already showed (Kodam et al. (2010b)), that the accuracy of the glued sphere method to model particles of arbitrary shape is highly questionable, as it rounds sharp angles and introduces an artificial roughness. Conversely, our glued convex approach preserves angularity since a non-convex composite particle is decomposed into a set of elementary convex shapes, which are by essence sharp. A side effect of composite particles is their intrinsic ability to better handle contact configurations in which the contact zone cannot be modelled as a point, but rather as a line or a surface. In fact, composite particles naturally introduce multiple contact points corresponding to the contact points of their elementary components. Although decomposing an already convex particle into a set of smaller elementary convex particles is not the most promising path from a computational viewpoint, this property can still be exploited to improve the stability of static heaps of particles and somehow circumvent the conceptual inability of our Gilbert-Johnson-Keerthi-based contact detection strategy to provide a line of contact, a surface of contact or multiple contact points from two simple convex bodies that overlap.
Although our new DEM for non-convex bodies opens up unprecedented numerical modelling perspectives, the computing cost is still prohibitive. In fact, the computing cost of contact detection between two non-convex bodies scales as N × M, where N denotes the number of elementary components of the first particle and M that of the second particle. Another computational drawback of the current implementation is that potential contacts are assessed with the sphere circumscribed to the non-convex particles and, if these overlap, with the spheres circumscribed to the convex elementary components (see Fig. 3.23). If the non-convex or elementary convex bodies are elongated, our method is not optimised and many contacts that actually do not exist are considered at the detection step. This undesirably slows down computations. An alternative solution would be to use oriented bounding boxes, with however no guarantee that the overall computing time would be lower, as an oriented bounding box overlap test is more time consuming than a two-sphere overlap test. In Chapter 5, we elaborate on the parallel implementation of the method. Although this could be a valuable way to speed up computations, we also show that scalability is satisfactory only above a minimum number of particles, as otherwise the MPI communication overhead is too high. We believe that contact detection between two non-convex bodies should be sped up at the serial level. Potential applications of Grains3D were already quite broad, and the new glued convex model broadens its range of applicability even further. Only two examples of application were considered in this study, which adequately illustrated the visible effect of particle non-convexity on the flow dynamics. The results from the rotating drum shown in Figs. 3.21 and 3.22 emphasise how the flow dynamics differs between convex and non-convex particles. Our analysis could easily be extended to gain more insight into regime transitions and the overall flow dynamics. A first step in that direction would be to analyse the PDF (Probability Density Function) of the time-averaged particle translational and angular velocities and to seek in these plots any signatures of non-convexity. This is an ongoing work in our research group.
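The two-level detection discussed above (circumscribed-sphere pre-checks followed by pairwise narrow-phase tests between elementary convex components) can be sketched as follows. This is a minimal illustration under assumed data structures; `gjkOverlap` is only a compilable placeholder for the real GJK-based narrow phase used in Grains3D.

```cpp
#include <array>
#include <vector>

struct Convex {
    std::array<double, 3> center;   // center of the circumscribed sphere
    double radius;                  // radius of the circumscribed sphere
    // ... the actual convex geometry (polyhedron, cylinder, ...) goes here
};

struct Composite {
    std::array<double, 3> center;   // circumscribed sphere of the whole particle
    double radius;
    std::vector<Convex> components; // the N (or M) elementary convex shapes
};

inline bool spheresOverlap(const std::array<double, 3>& ca, double ra,
                           const std::array<double, 3>& cb, double rb) {
    double d2 = 0.0;
    for (int k = 0; k < 3; ++k) d2 += (ca[k] - cb[k]) * (ca[k] - cb[k]);
    return d2 <= (ra + rb) * (ra + rb);
}

// Placeholder for the GJK-based narrow phase: here it simply falls back to the
// bounding-sphere test so that this sketch compiles on its own.
bool gjkOverlap(const Convex& a, const Convex& b) {
    return spheresOverlap(a.center, a.radius, b.center, b.radius);
}

// Contact detection between two composite particles: O(N x M) pairwise tests,
// each preceded by a cheap bounding-sphere rejection test.
bool compositesInContact(const Composite& A, const Composite& B) {
    if (!spheresOverlap(A.center, A.radius, B.center, B.radius)) return false;
    for (const auto& a : A.components)
        for (const auto& b : B.components)
            if (spheresOverlap(a.center, a.radius, b.center, b.radius)
                && gjkOverlap(a, b))
                return true;   // in practice, all contact points are collected
    return false;
}
```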
Summary
In this chapter, the details of the glued convex model are presented, together with the overall strategy behind this approach. It relies on the fact that a non-convex particle can be decomposed into several arbitrary convex elementary shapes, and it can thus be regarded as an extension of the well-known "glued sphere" approach. In this model, contact detection is performed at the level of the elementary particles using the Gilbert-Johnson-Keerthi algorithm. Contact detection relies on the linked-cell algorithm to speed up the search for potential collisions. Particular attention is paid to interactions involving several contact points.
The model is validated on a few test cases, for instance by comparing the evolution of the contact force during the interaction with a wall of a single cylinder and of a cylinder built from glued cylinders.
The approach is then used to show the impact of shape on the void fraction of beds made of particles of different shapes. It also allowed us to highlight how the dynamics of granular media in a rotating drum changes with particle shape. These studies illustrated that shape not only influences the dynamics but also gives rise to new flow regimes depending on particle angularity (from spherical particles, through arbitrary convex shapes such as cubes and tetrahedra, to cross-shaped particles).
Overview: This paper presents the use of the glued convex method to simulate the packing of poly-lobed particles. The simulations are carried out in bi-periodic domains, with the aim of representing a large (industrial) fixed bed, and in small cylindrical containers that have the exact dimensions of a pilot unit at IFPEN. The work was performed in collaboration with two interns whom I co-supervised.
Introduction
Numerous chemical reactions are industrially performed using heterogeneous catalysts. Catalyst pellets can be shaped as spheres, extruded shapes (extrudates) or molded shapes ([START_REF] Moyse | Raschig ring hds catalysts reduce pressure drop[END_REF], [START_REF] Cooper | Hydroprocessing conditions affect catalyst shape selection[END_REF], [START_REF] Afandizadeh | Design of packed bed reactors: guides to catalyst shape, size, and loading selection[END_REF], [START_REF] Mohammadzadeh | Catalyst shape as a design parameter-optimum shape for methane-steam reforming catalyst[END_REF]). Thanks to the use of extrusion machines, extrudates are cheaper to produce in large quantities. They can have various shapes: cylinders, trilobes and, more recently, quadrilobes. Molded shapes include holes to improve internal transport. The best catalyst shape is a compromise between catalyst cost, catalyst efficiency, pressure drop, attrition and bed plugging ([START_REF] Moyse | Raschig ring hds catalysts reduce pressure drop[END_REF], [START_REF] Cooper | Hydroprocessing conditions affect catalyst shape selection[END_REF], [START_REF] Afandizadeh | Design of packed bed reactors: guides to catalyst shape, size, and loading selection[END_REF], [START_REF] Mohammadzadeh | Catalyst shape as a design parameter-optimum shape for methane-steam reforming catalyst[END_REF]). It is thus application-dependent. The challenge in designing a better shape is to be able to predict the gains based on knowledge of the shape alone.
Catalyst efficiency is a measure of internal mass transfer limitation. It is defined as the actual reaction rate (in mol/m³/s) divided by the reaction rate that would be achieved if the concentration inside the pellet were homogeneous and equal to that at the surface. If the reaction is fast enough, reactants may be consumed faster than they diffuse, so that their concentration is lower at the pellet centre than at its boundary. The active (expensive) phase located at the pellet centre is then not used as efficiently as that at its surface. The engineering pathways to improve efficiency are: (i) improving effective diffusion in the pellet by changing the pore size distribution and (ii) changing the shape, including the size and the introduction of holes, to reduce the volume-to-external-surface ratio. For a given shape, the catalyst efficiency can be numerically predicted by solving the diffusion equation in the grains assuming kinetic schemes ([START_REF] Mariani | Evaluating the effectiveness factor from a 1d approximation fitted at high thiele modulus: Spanning commercial pellet shapes with linear kinetics[END_REF]). With a little less accuracy, it can be reasonably predicted for any particle shape without holes using the generalized Thiele modulus as proposed by [START_REF] Aris | On shape factors for irregular particles -I: The steady state problem[END_REF], which can be written for a first-order reaction:
$$\Phi = \frac{V_p}{S_p}\,\sqrt{\frac{k}{D_{eff}}} \qquad (4.1)$$
$$\eta = \frac{1}{\Phi}\,\frac{I_1(2\Phi)}{I_0(2\Phi)} \qquad (4.2)$$
where V_p, S_p, k and D_eff denote the particle volume, the particle surface, the intrinsic kinetic constant and the effective diffusion coefficient, respectively, and I_n is the modified Bessel function of the first kind of order n. Reducing the particle diameter results in an improvement of the catalyst efficiency thanks to a lower V_p/S_p, unfortunately at the cost of a higher pressure drop. But it is still an efficient way to improve efficiency. The gas-liquid pressure drop in trickle bed reactors has been the subject of many publications. Its estimation always relies at some point on single-phase predictions, so that for our purpose optimizing the trickle-bed pressure drop amounts to optimizing the single-phase pressure drop (see for example [START_REF] Attou | Modelling of the hydrodynamics of the cocurrent gas-liquid trickle flow through a trickle-bed reactor[END_REF]). Pressure drop predictions are usually performed using correlations of the Ergun form ([START_REF] Ergun | Fluid flow through packed columns[END_REF]):
$$\frac{\Delta P}{H} = \alpha\,\frac{\mu\,(1-\varepsilon)^2\,u}{\varepsilon^3\,d_p^2} + \beta\,\frac{\rho\,(1-\varepsilon)\,u^2}{\varepsilon^3\,d_p} \qquad (4.3)$$
In the formulation of Eq. (4.3), the pressure drop is the combination of a frictional viscous term proportional to the velocity and a term quadratic in the velocity accounting for flow direction and cross-section changes ([START_REF] Larachi | X-ray micro-tomography and pore network modeling of single-phase fixed-bed reactors[END_REF]). [START_REF] Ergun | Fluid flow through packed columns[END_REF] proposed the constants α = 150 and β = 1.75 to describe the pressure drop for spheres, cylinders and crushed particles. The diameter for non-spherical particles is the equivalent diameter defined as:
$$d_e = \frac{6\,V_p}{S_p} \qquad (4.4)$$
Earlier, [START_REF] Carman | Fluid flow through granular beds[END_REF] proposed α = 180 and β = 0 for Stokes flows (Re ∼ 0) in packed beds of spheres, which is more accurate than Ergun's coefficients in these conditions. For non-spherical particles, [START_REF] Nemec | Flow through packed bed reactors: 1. single-phase flow[END_REF] extended the correlation by introducing the sphericity:
$$\Psi = \left(\frac{36\,\pi\,V_p^2}{S_p^3}\right)^{1/3} \qquad (4.5)$$
$$\frac{\Delta P}{H} = \frac{150}{\Psi^a}\,\frac{\mu\,(1-\varepsilon)^2\,u}{\varepsilon^3\,d_e^2} + \frac{1.75}{\Psi^b}\,\frac{\rho\,(1-\varepsilon)\,u^2}{\varepsilon^3\,d_e} \qquad (4.6)$$
The coefficients a and b have been subject to some modifications by a few authors, among others [START_REF] Nemec | Flow through packed bed reactors: 1. single-phase flow[END_REF] and [START_REF] Dorai | Fully resolved simulations of the flow through a packed bed of cylinders: Effect of size distribution[END_REF]. Other formulations have been proposed that take various shapes into account. Nevertheless, there is so far no universal method to precisely predict the Ergun equation coefficients based on particle shape alone.
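To make the use of Eqs. (4.1)-(4.2) and (4.4)-(4.6) concrete, the following minimal C++ sketch (requiring C++17 for the special math functions) evaluates the effectiveness factor and the Ergun-type pressure drop for a given particle geometry. It is not part of Grains3D; the exponents a and b, the kinetic constant, the diffusion coefficient and the flow conditions are placeholder values chosen only for illustration.

```cpp
#include <cmath>
#include <iostream>

constexpr double kPi = 3.14159265358979323846;

// Effectiveness factor from the generalized Thiele modulus, Eqs. (4.1)-(4.2).
double effectiveness(double Vp, double Sp, double k, double Deff) {
    const double phi = (Vp / Sp) * std::sqrt(k / Deff);                  // Eq. (4.1)
    return std::cyl_bessel_i(1, 2.0 * phi)
         / (phi * std::cyl_bessel_i(0, 2.0 * phi));                      // Eq. (4.2)
}

// Ergun-type pressure drop per unit height with sphericity correction, Eq. (4.6).
double pressureDropPerHeight(double Vp, double Sp, double eps, double u,
                             double mu, double rho, double a, double b) {
    const double de  = 6.0 * Vp / Sp;                                    // Eq. (4.4)
    const double psi = std::cbrt(36.0 * kPi * Vp * Vp / (Sp * Sp * Sp)); // Eq. (4.5)
    const double viscous  = 150.0 / std::pow(psi, a)
                          * mu * (1.0 - eps) * (1.0 - eps) * u
                          / (eps * eps * eps * de * de);
    const double inertial = 1.75 / std::pow(psi, b)
                          * rho * (1.0 - eps) * u * u
                          / (eps * eps * eps * de);
    return viscous + inertial;                                           // Pa/m
}

int main() {
    // Illustrative cylinder, d = 1.6 mm, L = 4 mm; all inputs are placeholders.
    const double d = 1.6e-3, L = 4.0e-3;
    const double Vp = kPi * d * d / 4.0 * L;
    const double Sp = kPi * d * L + kPi * d * d / 2.0;
    std::cout << "effectiveness factor = "
              << effectiveness(Vp, Sp, 1.0, 1.0e-9) << "\n";
    std::cout << "pressure drop [Pa/m] = "
              << pressureDropPerHeight(Vp, Sp, 0.40, 0.01, 1.0e-3, 1000.0, 1.0, 1.0)
              << "\n";
}
```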
As can be noticed in Eq. (4.6), the pressure drop depends very strongly on the void fraction, which until recently has been measured experimentally. Due to the manufacturing process, the extrudates have random lengths. Therefore, the length distribution may differ from one experiment to another, especially for particles produced on different extrusion dies. Automated sorting can be performed to narrow down the length distribution, but this is not sufficient to prevent differences from experiment to experiment. Therefore, the comparison of the void fraction (and of the pressure drop) is always based on measurements with different length distributions. As the differences between the most efficient shapes are small, it is difficult to decouple shape and length effects when measuring the packed bed void fraction. In addition, the void fraction is highly dependent on the loading procedure, leading to some discrepancies between operators. Repetition effects are barely quantified and are usually neglected, although we have no information on their magnitude compared to the differences between shapes.
To summarize, it is not yet possible to predict the void fraction (and the pressure drop) accurately enough to rank innovative catalyst shapes without experiments. New numerical tools are required to optimise the particle shape "in silico". In this chapter, we present the use of DEM to estimate the void fraction for any trilobe and quadrilobe shape, as well as an analysis of the trends in the void fraction dependency.
Methods
2.1 DEM with non-convex particles

Several numerical methods to produce packings of spheres have been published. Thanks to its flexibility, the Discrete Element Method (DEM) can be extended to more complex shapes and is therefore the method presented here. This method ([START_REF] Cundall | A discrete numerical model for granular assemblies[END_REF], [START_REF] Cundall | Formulation of a three-dimensional distinct element model-Part I. A scheme to detect and represent contacts in a system composed of many polyhedral blocks[END_REF], [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]) is a Lagrangian particle tracking method which computes the particle velocities, trajectories and orientations. A key feature of any DEM tool is its ability to detect collisions, determine the contact point(s) and compute the resulting contact forces. This is done for example using the Gilbert-Johnson-Keerthi algorithm ([START_REF] Gilbert | A fast procedure for computing the distance between complex objects in three-dimensional space[END_REF], [START_REF] Gilbert | Computing the distance between general convex objects in threedimensional space[END_REF]). Recent developments of DEM allow the use of non-spherical particles, either through the glued spheres model, which is a loose approximation of a complex shape ([START_REF] Nolan | Random packing of nonspherical particles[END_REF]), or through an accurate description of arbitrary convex particles ([START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]). A recent development by our group (Chapter 3) allows the simulation of non-convex particles composed of a collection of convex particles. This method, called "glued convex", is an extension of the glued spheres method of [START_REF] Nolan | Random packing of nonspherical particles[END_REF]. It allows the use of the existing methods, models and algorithms already implemented in Grains3D ([START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]), such as the equations of motion, time integration, collision resolution and in particular the Gilbert-Johnson-Keerthi algorithm for collision detection. Detailed information about the extension to non-convex shapes and about the DEM features can be found in [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF] and in Chapter 3.
Simulation principle
Fixed beds of non-convex particles are computed using Grains3D. An insertion window is defined at the top of the domain (Fig. 4.1). It can be a box-like window or a flat surface, or a single point. The particles are inserted in the simulation in the following sequence:

• for the next particle to be inserted, the code randomly draws its position and orientation,
• the particle is inserted as soon as there is enough space,
• the particles are subjected to gravity and leave the vicinity of the insertion zone.

A larger insertion zone results in more particles being inserted simultaneously. During their free fall, the particles experience inelastic collisions with the walls and with other particles. The total kinetic energy of the system decreases exponentially with time. The simulations are deemed complete when the maximum of the particle velocities is below 10^-5 m/s. The output of a simulation is a file containing the final position, velocity and orientation of each particle. The domain geometry can be either constrained by rigid walls or periodic in the horizontal directions (bi-periodic).
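The structure of this insertion-settling procedure can be sketched as follows. It is a toy stand-in, not the Grains3D implementation: the window size, the DEM step (left as a comment) and the data layout are assumptions made only to illustrate the staged insertion and the 10^-5 m/s stopping criterion.

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Particle {
    double x[3];   // position
    double q[4];   // orientation quaternion
    double v[3];   // translational velocity
};

double maxVelocity(const std::vector<Particle>& particles) {
    double vmax = 0.0;
    for (const auto& p : particles) {
        const double v = std::sqrt(p.v[0]*p.v[0] + p.v[1]*p.v[1] + p.v[2]*p.v[2]);
        if (v > vmax) vmax = v;
    }
    return vmax;
}

int main() {
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::vector<Particle> particles;
    const int    nTotal = 1000;     // number of particles to insert
    const double vStop  = 1.0e-5;   // settling criterion [m/s]

    int inserted = 0;
    while (inserted < nTotal || maxVelocity(particles) > vStop) {
        if (inserted < nTotal /* && enough free space below the window */) {
            Particle p{};
            // random position inside a 4 mm x 4 mm insertion window at the top
            p.x[0] = 4.0e-3 * uni(gen);
            p.x[1] = 4.0e-3 * uni(gen);
            p.x[2] = 0.1;
            // random orientation drawn as a normalized quaternion
            double n2 = 0.0;
            for (int k = 0; k < 4; ++k) { p.q[k] = uni(gen) - 0.5; n2 += p.q[k]*p.q[k]; }
            const double n = std::sqrt(n2);
            for (int k = 0; k < 4; ++k) p.q[k] /= n;
            particles.push_back(p);
            ++inserted;
        }
        // demStep(particles, dt);   // gravity, collision detection, contact forces
    }
    return 0;
}
```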
Void fraction analysis
The average void fraction (porosity) is computed by two methods: (i) performing a 3D discretization of space and counting the number of cells occupied by particles; provided the grid cells are sufficiently small, this method is very accurate but computationally expensive (Chapter 3); (ii) sorting all the particles according to their vertical position z and plotting that vertical position (vertical axis) against the particle ranking (horizontal axis), as illustrated in Fig. 4.2. For a random packing, the plot is a straight line whose slope is related to the void fraction as follows: the volume occupied by the particles scales with the number of particles times the volume of a particle, while the volume of the container scales with the container cross-section times the average vertical distance between consecutive particles in the ranking (the slope s). Thus, the void fraction ε reads:
$$\varepsilon = 1 - \frac{N\,V_p}{S\,\Delta z} = 1 - \frac{V_p}{S}\,\frac{1}{s} \qquad (4.7)$$
where N, Δz, V_p, S and s denote the total number of particles, the height of the cropped bed, the particle volume, the container cross-sectional area and the slope of the ranking plot, respectively. Incidentally, a non-linear trend in the ranking plot brings information about the structure: steps indicate a "structured packing", while a changing slope indicates a change in the average void fraction. This method neglects the volume of the particles located near the ends of the control volume and is as accurate as the discretization method when the control volume is large enough. As a last remark, a correct estimation of the void fraction has to be performed after discarding a few layers at the top and bottom of the packing ([START_REF] Dorai | Packing fixed bed reactors with cylinders: influence of particle length distribution[END_REF]), thus avoiding end effects (flat bottom influence at the bottom and free surface at the top).
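The ranking method of Eq. (4.7) reduces to a sort and a least-squares fit. The following self-contained sketch illustrates it on synthetic data; the particle volume, cross-section and height used in the example are assumed values, not taken from the simulations of this chapter.

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <random>
#include <vector>

// Void fraction from the ranking method, Eq. (4.7): sort particles by their
// vertical position z, fit z against the particle rank by least squares and
// convert the slope s into a void fraction. S is the container cross-section.
// In practice, a few layers at the top and bottom are discarded beforehand.
double voidFractionFromRanking(std::vector<double> z, double Vp, double S) {
    std::sort(z.begin(), z.end());
    const std::size_t n = z.size();
    double sI = 0.0, sZ = 0.0, sIZ = 0.0, sII = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double x = static_cast<double>(i);
        sI += x; sZ += z[i]; sIZ += x * z[i]; sII += x * x;
    }
    const double s = (n * sIZ - sI * sZ) / (n * sII - sI * sI);   // slope
    return 1.0 - Vp / (S * s);                                    // Eq. (4.7)
}

int main() {
    // Synthetic example: 1000 particles of volume Vp uniformly distributed in a
    // box of cross-section S and height H, so that epsilon ~ 1 - N*Vp/(S*H).
    std::mt19937 gen(1);
    std::uniform_real_distribution<double> uni(0.0, 0.1);   // H = 0.1 m
    std::vector<double> z(1000);
    for (auto& zi : z) zi = uni(gen);
    const double Vp = 6.0e-9, S = 18e-3 * 18e-3;             // assumed values
    std::cout << voidFractionFromRanking(z, Vp, S) << "\n";  // expected ~0.81
}
```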
Cases description
A first set of simulations is performed using bi-periodic boundary conditions. This simulates a semi-infinite container and models the packing in a large reactor. The container size is set to 18 mm after checking that this parameter has no effect on the void fraction. Another set of simulations is run in small cylindrical reactors with solid walls. The vessel diameters are 14 mm, 16 mm and 19 mm.
Simulations are performed on the following shapes (Fig. 4.3): cylinders (CYL), trilobes (TL) and quadrilobes (QL). The cross-sectional diameter of trilobes and quadrilobes is defined as that of the circumscribed cylinder (Fig. 4.3d). For identical diameter and length, TL and QL occupy respectively 69% and 74% of the volume of the cylinder. The particle diameter varies in the range [1.0, 2.5] mm and its length is set to 3 mm, 4 mm or 5 mm. In each simulation at least 1000 particles are inserted to fill either a bi-periodic domain or a cylindrical vessel (Fig. 4.4). The parameters of all numerical simulations are listed in Tab. 4.1.
Parameter                     Value
k_n (N m^-1)                  1 × 10^5
e_n                           0.7
μ_c                           0.55
μ_t (s^-1)                    1 × 10^5
δ_max (m), δ_max/R_e          1.5 × 10^-5, 0.005
T_C (s)                       2.01 × 10^-5
Δt (s)                        1 × 10^-6
Table 4.1 - Contact force model parameters and estimates of the contact features at v_0 = 2 m s^-1 for static packings.
Sensitivity analysis
As mentioned earlier, the particles are inserted in the simulation with a random position and orientation. Afterwards, the simulations and measurements are deterministic and accurate.
Every packed bed has a different void fraction. As we are interested in comparing the effects of shape on the void fraction, we must be able to quantify which part of the difference between two simulations is due to the shape and which part to the random insertion at the top of the domain.
Repeating the packing
Several loadings with the same set of 1000 particles are repeated for the three shapes (Tab. 4.2). As the particle shape and dimensions differ from one case to another, the average void fractions should not be compared for the moment; the reader should instead focus on the void fraction standard deviation (σ < 0.0053), which reads:
$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\varepsilon_i - \mu\right)^2}, \quad \text{where } \mu = \frac{1}{N}\sum_{i=1}^{N}\varepsilon_i \qquad (4.8)$$
where N and ε_i stand for the total number of simulations and the void fraction of simulation i, respectively.
As the number of repetitions is not large, this estimation of repeatability can be improved by aggregating the data and removing the average of each sub-set (shape effect). The standard deviation of the whole ensemble (18 elements) is indeed lower: σ_1 = 0.0042. At this point, it is worth recalling that once the particles are inserted in the simulation, the solver is deterministic and exact: σ_1 is a measure of the effect of the random initial conditions.
Effect of insertion window size
We estimated the effects of insertion window size for various geometric configurations of the container (cylindrical / bi-periodic and its size), and of the particle shape and size (Tab. 4.3).
In this work, we only use a planar 2D square insertion window and an insertion point (see Fig. 4.5 for reference).
According to an analysis of variance (ANOVA), the void fraction difference is statistically non-zero. A larger window results in a higher void fraction. We propose the following mechanism: a larger window results in more particles inserted simultaneously, leaving less time for a particle at the top of the stack to reach the most stable position before the arrival of the subsequent ones. The standard deviation of the void fraction difference is 0.0049.
Choosing the proper insertion geometry is a matter of compromise, for several reasons. First, none of the methods is more realistic than another: in the laboratories, reactor loading is not standardized and is often manual. A change in particle size while keeping the same insertion window size results in a change in the number of particles that are inserted simultaneously, which yields more or less compact beds. An obvious geometrical constraint is that the insertion window must be smaller than the reactor: smaller reactors need smaller insertion windows, which leads to denser beds. This is similar to the reduction of the funnel diameter during an experimental loading. Last, a small insertion window requires a long loading time, whereas a larger one permits a fast loading. In order to decrease the computing time, the simulations are performed with medium-size planar square insertion windows (4 mm and 6 mm wide) that fit in all geometries. This choice will overestimate the void fraction compared to a point insertion and underestimate the void fraction for large particles. If we assume that this insertion effect can be modelled by a Gaussian random variable (of zero mean), then its standard deviation σ_2 must be equal to 1/√2 of the standard deviation of the "void fraction difference" (see Appendix for details): σ_2 = 0.0049/√2 = 0.00346. σ_2 measures the unknown bias on the simulation induced by the choice of the insertion window size.
Overall uncertainty
An overall uncertainty on a single void fraction simulation result can now be estimated from σ_1 (random initial conditions) and σ_2 (bias induced by the insertion window size). As both uncertainties are independent, classical measurement statistics give an estimate of the overall standard deviation: σ = √(σ_1² + σ_2²) = 0.0054. An estimation of the overall uncertainty on a single measurement is I = 2σ = 0.011 (see Appendix for details). According to this analysis, there is a 95% probability that, given the output ε of a single simulation, the average void fraction of a large number of simulations falls in the interval ε ± 0.011 (with ε = 0.42, this gives the interval [0.409, 0.431]). In other words, this corresponds to a relative uncertainty on the void fraction of less than 2.5%.
Results
Bi-periodic container
The average void fractions for the various shapes, lengths and diameters simulated in a bi-periodic container are presented in Fig. 4.6. This case corresponds to large containers similar to industrial reactors. The void fraction is linearly correlated with the particle aspect ratio (L_p/d_p). Bulkier, rounder particles are easier to pack, whereas cylindrical particles present a lower void fraction and a lower dependence on the aspect ratio than poly-lobed shapes. Surprisingly, the void fractions of trilobes and quadrilobes cannot be distinguished. In Fig. 4.6, the slope for the poly-lobed particles is much larger than that of the cylindrical ones. We suggest that during the packing the lobes hinder rotation and result in a quick damping of the vibrations induced by impacts. This results in less compact beds for poly-lobed particles.
Extrapolating the trends to a near-spherical shape (L_p/d_p = 1) leads to void fractions of 0.32 (CYL) and 0.36 (TL/QL), values close to those of dense packings of spheres.
Cylindrical container
Cylindrical particles
The void fraction of a packed bed of cylindrical particles in a cylindrical reactor is in line with the experimental measurements of [START_REF] Leva | Pressure drop through packed tubes. 3. prediction of voids in packed tubes[END_REF] (our values are in the range d_p/D < 0.3). It increases with the particle aspect ratio and seems to decrease with increasing reactor diameter D. However, in the studied range, the effect is barely larger than the repeatability. Following [START_REF] Leva | Pressure drop through packed tubes. 3. prediction of voids in packed tubes[END_REF], whose results suggest a proportional relationship to the inverse of the vessel diameter, we propose the correlation in Eq. (4.11). It describes the whole data set with a maximum absolute error of 0.014 and a standard deviation of 0.006, which is about half of the uncertainty (Fig. 4.7). The correlation is written as follows:

CYL: ε = 0.

In our limited diameter range, a simplified correlation (Eq. 4.12) that does not take the reactor diameter into account predicts the void fraction with good accuracy (standard deviation of 0.0077, i.e. a relative standard deviation of about 2%). It reads:
CYL: ε = 0.327 + 0.033 L_p/d_p (4.12)
valid for 10 < D [mm] < 19, 1 < L_p/d_p < 5, 3 < L_p [mm] < 4
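As an illustration, for cylinders with L_p = 3 mm and d_p = 1.5 mm (L_p/d_p = 2), Eq. (4.12) gives ε ≈ 0.327 + 0.033 × 2 ≈ 0.39.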
Poly-lobed particles
The following linear correlation (Eq. 4.13) predicts the void fraction with a lower accuracy (equal to the uncertainty).

5.1 Effect of domain size in bi-periodic directions?
Most of the bi-periodic simulations have been performed with a domain with a transverse size of 18 mm. Four simulations have been repeated using smaller domains (8 mm and 10 mm) with CYL and QL of aspect ratio 3. The void fraction in the smaller domains is within the repeatability of that in the large domain with a transverse size of 18 mm. We have so far no indication of an effect of the bi-periodic domain size in the range 8 mm to 18 mm. It seems that performing simulations in the chosen domains does not impose any particular microstructure in the bed with a wavelength correlated to the transverse domain size. The simulation results indicate that even a transverse size of 8 mm is large enough to represent an infinitely large domain in the transverse direction.
Remark on the effect of container size
For all three particle shapes (CYL, TL and QL), the void fraction is higher in small reactors than in semi-infinite vessels, as expected. When the reactor diameter increases, none of the correlations for cylindrical reactors so far converges to the correlation proposed for infinite vessels. This was however expected, as our cylindrical reactors are quite small compared to the particle length. In fact, the minimum L_p/D in our simulations is 3/18 = 0.167, which suggests that wall effects are strong in these small reactors. To get asymptotically vanishing wall effects in a reactor, L_p/D probably needs to be at least as small as 0.05. More simulations at large reactor diameters, and probably non-linear relationships, would be necessary to propose a unified correlation.
Conclusion
DEM has been used to prepare packed beds of poly-lobed particles. Although the simulations are deterministic, random input parameters (location and orientation of the particles) as well as simulation parameters (insertion window) lead to an overall uncertainty that has been estimated at 0.011. A subsequent analysis of the void fraction and of its dependence on the particle shape and reactor size showed that TL and QL present statistically identical void fractions. The effects of random insertion, i.e. of the filling procedure, in packed beds mask the shape-induced effect for optimised particles. We suggested linear correlations to predict the void fraction for cylinders, trilobes and quadrilobes in semi-infinite and small cylindrical reactors, which showed a reasonably satisfactory level of reliability. More simulations, and probably non-linear regressions, are necessary to unify these correlations.
Ranking TL and QL and their chemical efficiencies is not possible based on the void fraction alone. A precise knowledge of the relationship between shape and pressure drop is necessary to conclude. An ongoing work is to perform a similar study on poly-disperse beds. Another ongoing work is to use Direct Numerical Simulation to evaluate the pressure drop in beds of poly-lobed particles, which is an extension of the work presented in [START_REF] Dorai | Fully resolved simulations of the flow through a packed bed of cylinders: Effect of size distribution[END_REF]. The next step will be the use of DNS in reactive flows, as demonstrated in [START_REF] Dorai | Multi-scale simulation of reactive flow through a fixed bed of catalyst particles[END_REF], probably more with the aim of assessing randomness-induced uncertainty than of predicting the fixed bed performance.
Appendix
In this work, simulations and measurements are deterministic and accurate. The resulting void fraction is nevertheless different each time a simulation is performed with the same particles inserted with different (random) orientations and positions: void fraction values appear as a random variable. Our interest is to compare the effects of shape on the void fraction. Thus we want to quantify how much of the difference between two simulations with different shapes is due to the shape and how much to the random insertion effects.
By definition, the uncertainty is the value I such that 95% of the random values of the void fraction fall within ±I of the average. With a Gaussian probability law, this definition is equivalent to I = 1.96σ, which is classically simplified to I = 2σ. In mathematical terms, 95% of the area under the Gaussian probability curve lies within ±I of the average. In our study, the effect of particle position and orientation is estimated by repeating simulations and estimating the standard deviation.
The standard deviation of the sum or difference of two independent Gaussian random variables X_1 and X_2 is given by σ_{X_1−X_2} = σ_{X_1+X_2} = √(σ_1² + σ_2²), yielding σ_{X_1−X_2} = σ_{X_1+X_2} = σ_X √2 when X_1 and X_2 follow the same probability law with standard deviation σ_X. The effect of the insertion window size is estimated using the difference between two simulations, hence the introduction of a √2 in the calculations.
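As a quick numerical cross-check of the figures quoted in the uncertainty analysis above (σ_1 = 0.0042 and σ_2 = 0.0049/√2), the following short sketch recombines them into the overall standard deviation and the 95% uncertainty interval; it is purely illustrative.

```cpp
#include <cmath>
#include <iostream>

int main() {
    const double sigma1 = 0.0042;                    // random initial conditions
    const double sigma2 = 0.0049 / std::sqrt(2.0);   // insertion window bias, ~0.00346
    const double sigma  = std::sqrt(sigma1 * sigma1 + sigma2 * sigma2);
    const double I      = 2.0 * sigma;               // 95% uncertainty, I = 2*sigma
    std::cout << "sigma = " << sigma << ", I = " << I << "\n";       // ~0.0054 and ~0.011
    const double eps = 0.42;                         // output of a single simulation
    std::cout << "interval: [" << eps - I << ", " << eps + I << "]\n"; // ~[0.409, 0.431]
}
```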
Summary
In this chapter, the glued convex model is used to optimise particle shapes encountered in the refining industry. The focus is on highlighting the differences in void fraction in fixed-bed reactors as a function of the particle shape (here cylinders, trilobes and quadrilobes) and of the particle insertion mode, and finally on quantifying the random character of the reactor filling procedure. Grains3D provides an algorithm that acts as a particle insertion window for the system under study. Particles are created with a random orientation and fall into the reactor one by one if the characteristic size of the window is of the same order as that of the particles, or by pluviation if it is a few orders of magnitude larger.
The void fraction in a bed is computed using a spatial discretization of the system under study. This method then relies on the parallel capabilities of the Grains3D code to handle large systems and to speed up the simulations.
This chapter thus showed that the computed void fractions are statistically identical for multi-lobed particles and differ from those of cylindrical particles under the same conditions. Based on these observations, linear correlations were established to predict the void fraction in fixed-bed reactors.

This chapter has been submitted for publication in Powder Technology:
Grains3D, a flexible DEM approach for particles of arbitrary convex shape - Part II: parallel implementation and scalable performances
A. D. Rakotonirina, A. Wachs. Grains3D, a flexible DEM approach for particles of arbitrary convex shape - Part II: parallel implementation and scalable performances.
In this paper, we present the parallelisation strategy that enables us to handle large numbers of particles. We also present simulations of silo discharge, dam breaking and fluidization.
Abstract

In Wachs et al. (2012) we suggested an original Discrete Element Method that offers the capability to consider non-spherical particles of arbitrary convex shape. We elaborated on the foundations of our numerical method and validated it on assorted test cases. However, the implementation was serial and impeded the examination of large systems. Here we extend our method to parallel computing using a classical domain decomposition approach and inter-domain MPI communication. The code is implemented in C++ for multi-CPU architectures. Although object-oriented C++ offers high-level programming concepts that enhance the versatility required to treat multi-shape and multi-size granular systems, particular care has to be devoted to memory management on multi-core architectures to achieve a reasonable computing efficiency. The parallel performance of our code Grains3D is assessed on various granular flow configurations comprising both spherical and angular particles. We show that our parallel granular solver is able to compute systems with up to a few hundred million particles. This opens up new perspectives in the study of granular material dynamics.
Introduction
Discrete Element Method (DEM) based simulations are a very powerful tool to simulate the flow of granular media. The foundations of the method were introduced by [START_REF] Cundall | A discrete numerical model for granular assemblies[END_REF] in the late seventies. Originally developed for contacts between spherical particles, the method was later extended to polyhedra by [START_REF] Cundall | Formulation of a three-dimensional distinct element model-Part I. A scheme to detect and represent contacts in a system composed of many polyhedral blocks[END_REF]. The conceptual simplicity combined with a high degree of efficiency has rendered DEM very popular. However, there are essentially still two bottlenecks in DEM simulations: (i) the non-sphericity of most real-life particles and (ii) the generally large number of particles involved even in a small system.
In [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF] we addressed issue (i), i.e., the non-sphericity of particles, by reviewing the various existing techniques to detect collisions between two non-spherical particles and by suggesting our own collision detection strategy that enables one to consider any convex shape and any size. Issue (ii) can be tackled in two different and complementary ways. The former involves improving the computational speed of classical serial implementations of DEM. This can be achieved by higher quality programming and smarter algorithms, but there is admittedly a limit in that direction, even with the most advanced implementations. The latter involves dividing the workload between different computing units and hence using distributed computing. Nowadays, there are two competing technologies for DEM distributed computing: CPU ([START_REF] Walther | Large-scale parallel discrete element simulations of granular flow[END_REF], [START_REF] Iglberger | Massively parallel rigid body dynamics simulations[END_REF], [START_REF] Wachs | PeliGRIFF, a parallel DEM-DLM/FD direct numerical simulation tool for 3D particulate flows[END_REF]) vs GPU ([START_REF] Radeke | Large-scale mixer simulations using massively parallel GPU architectures[END_REF], [START_REF] Govender | Collision detection of convex polyhedra on the NVIDIA GPU architecture for the discrete element method[END_REF]). Both technologies have assets and drawbacks. While GPU is parallel in essence (multi-threaded), fast on-chip memory is limited in size and global memory access is very slow, which can result in a weak performance of the code (Govender et al. (2015)). Besides, the built-in parallelism of GPU is not (yet) designed for multi-GPU computations, which limits the overall performance to that of a single GPU, in particular in terms of system size, i.e., number of particles. Conversely, CPU-based DEM codes, generally implemented with a domain decomposition technique, exhibit no limit on the number of communicating CPUs (cores) and hence no limit on the number of particles, provided the scalability is maintained at a reasonable level. Communication between cores is generally achieved using the Message Passing Interface (MPI) ([START_REF] Gropp | Using MPI (2Nd Ed.): Portable Parallel Programming with the Message-passing Interface[END_REF]). While simulations with up to a few tens to hundreds of thousands of particles are attainable with GPU-based implementations ([START_REF] Radeke | Large-scale mixer simulations using massively parallel GPU architectures[END_REF], [START_REF] Govender | Collision detection of convex polyhedra on the NVIDIA GPU architecture for the discrete element method[END_REF]), simulations with up to a few billion particles can be envisioned with CPU-based implementations, provided computational practitioners have access to large supercomputers with many thousands of cores ([START_REF] Walther | Large-scale parallel discrete element simulations of granular flow[END_REF], [START_REF] Iglberger | Massively parallel rigid body dynamics simulations[END_REF], [START_REF] Wachs | PeliGRIFF, a parallel DEM-DLM/FD direct numerical simulation tool for 3D particulate flows[END_REF]). The forthcoming new GPU technology is likely to offer parallel computing capabilities similar to those of CPU by improving inter-GPU communications, but at the time we write this article this enhanced GPU technology is not yet available.
The primary motivation for developing a parallel implementation of a serial code is either (i) to lower the computing time for a given system size by using more cores or (ii) to increase the size of the simulated system for a given computing time. In general, it is rather hard to define what a "rationally acceptable computing time" is. Talking about the number of particles that one can simulate on a single-core computer in a number of minutes/hours/days is meaningless without also mentioning the time step magnitude and the simulated physical time. In other words, the only rational measure of performance is the wall clock time per time step and per particle. Ironically, a highly efficient serial implementation might not scale well in parallel as the communication overhead will be significant, and conversely a time-consuming (and/or badly programmed) serial implementation might scale much better. Obviously, this statement is not an incentive to write poor serial implementations or slow collision detection algorithms in order to obtain a good scalability at a later stage, but simply underlines the fact that systems made of non-spherical particles have a chance to scale better than systems comprising spheres, as the collision detection step is a local (in the sense that it is performed on each core without any communication) time-consuming operation.
Our goal in this paper is to elaborate on a simple domain-decomposition-based parallel extension of our granular code Grains3D and to assess its computing performance on systems of up to a few hundred million particles. In Section 2, we quickly recall the features of our numerical model as already explained in [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]. We then present our parallel strategy in Section 3. In Section 4 we measure the computing performance of our parallel implementation in various granular flow configurations (particle shape, particle load per core, weak scalability). Finally, we discuss the parallel computing performances exhibited by Grains3D in Section 5 and highlight the remaining intrinsic limitations of Grains3D and how to relax them.
Numerical model
The motion of the granular material is determined by applying Newton's second law to each particle i ∈ {0, ..., N−1}, where N is the total number of particles. The rigid body motion assumption leads to the decomposition of the velocity vector v as v = U + ω ∧ R, where U, ω and R denote the translational velocity vector of the center of mass, the angular velocity vector and the position vector with respect to the center of mass, respectively. The complete set of equations to be considered is the following:
$$M_i\,\frac{dU_i}{dt} = F_i \qquad (5.1)$$
$$J_i\,\frac{d\omega_i}{dt} + \omega_i \wedge J_i\,\omega_i = M_i \qquad (5.2)$$
$$\frac{dx_i}{dt} = U_i \qquad (5.3)$$
$$\frac{d\theta_i}{dt} = \omega_i \qquad (5.4)$$
where M_i, J_i, x_i and θ_i stand for the mass, inertia tensor, center of mass position and angular position of particle i. F_i and M_i are the sums of all forces and torques applied on particle i, respectively, and can be further decomposed, in purely granular dynamics (i.e., without accounting for any external forcing such as hydrodynamic or electrostatic forces), into a torque-free gravity contribution and a contact force contribution as:
$$F_i = M_i\,g + \sum_{j=0,\,j\neq i}^{N-1} F_{ij} \qquad (5.5)$$
$$M_i = \sum_{j=0,\,j\neq i}^{N-1} R_j \wedge F_{ij} \qquad (5.6)$$
where F_ij is the force due to the collision with particle j and R_j is the vector pointing from the center of mass of particle i to the contact point with particle j. In our model, F_ij comprises a normal Hookean elastic restoring force, a normal dissipative force and a tangential friction force.
The set of equations Eqs. (5.1)-(5.4) is integrated in time using a second-order leap-frog Verlet scheme. Rotations of particles are computed using quaternions, for computational efficiency as well as to avoid any gimbal lock configurations. The collision detection algorithm is a classical two-step process. Potential collisions are first detected via a linked-cell list and then actual collisions are determined using a GJK algorithm. Our GJK-based collision detection strategy enables us to consider any convex shape and size. For more detail, we refer the reader to Grains3D-Part I [START_REF] Wachs | Grains3D, a flexible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF] and the references therein.
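As an illustration of the time integration described above, the following minimal sketch advances the translational part of Eqs. (5.1) and (5.3) with a leap-frog (kick-drift) update. It is a simplified stand-in, not the actual Grains3D implementation: the force evaluation is reduced to gravity plus a user-supplied contact contribution, and the rotational update of Eqs. (5.2) and (5.4) with quaternions follows the same pattern.

```cpp
#include <array>
#include <functional>
#include <vector>

struct Body {
    double m;                        // mass
    std::array<double, 3> x, U;      // position, translational velocity
};

// One leap-frog step for the translational equations (5.1) and (5.3):
// velocities are staggered by half a time step with respect to positions.
void leapFrogStep(std::vector<Body>& bodies, double dt,
                  const std::function<std::array<double, 3>(const Body&)>& contactForce) {
    const std::array<double, 3> g = {0.0, 0.0, -9.81};
    for (auto& b : bodies) {
        const auto Fc = contactForce(b);               // contact contribution
        for (int d = 0; d < 3; ++d) {
            const double F = b.m * g[d] + Fc[d];       // Eq. (5.5): gravity + contacts
            b.U[d] += F / b.m * dt;                    // kick: Eq. (5.1)
            b.x[d] += b.U[d] * dt;                     // drift: Eq. (5.3)
        }
    }
}
```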
Domain decomposition
Our parallel strategy is classical and is based on a domain decomposition technique. We consider below only the case of a domain decomposition that is constant in time, assuming that we know how to guarantee a reasonable load balancing of the number of particles between subdomains over the whole simulation. The extension to dynamic load balancing in granular flows with large particle volume fraction heterogeneities will be briefly discussed in Section 5 as an extension of this work. We employ a cartesian domain decomposition. Each process hosts a single subdomain and we hence define a cartesian MPI communicator using the MPI_Cart_create command. It is then very convenient to identify the neighbouring subdomains of each subdomain as well as to implement multi-periodic boundary conditions. On each subdomain, we construct a cartesian linked-cell list with an additional layer of cells at the boundary with the neighbouring subdomains to serve as an overlapping zone. This overlapping zone hosts clone particles used to compute collisions with particles located on a neighbouring subdomain (process). As a consequence, cells in a linked-cell list are tagged based on their location in the subdomain: 0 = interior, 1 = buffer and 2 = clone, as illustrated in Fig. 5.1. At each time step, clone particles are either created, deleted or updated. All particles are tagged based on the cell they belong to. Hence they consistently change status as they move within the subdomain. Corresponding operations are performed on neighbouring subdomains when a particle changes status. For instance, if a particle moves from an interior cell (tag = 0) to a buffer cell (tag = 1), a clone particle (tag = 2) is automatically created on the neighbouring subdomain.
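The cartesian communicator set-up described above can be sketched as follows. This is a minimal, self-contained illustration and not the Grains3D code; the decomposition dimensionality and the periodicity flags are placeholders.

```cpp
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int nprocs, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // 3D cartesian decomposition; MPI_Dims_create picks a balanced grid.
    int dims[3]    = {0, 0, 0};
    int periods[3] = {1, 1, 0};   // example: bi-periodic in x and y
    MPI_Dims_create(nprocs, 3, dims);

    MPI_Comm cartComm;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, /*reorder=*/1, &cartComm);

    // Coordinates of this subdomain and its 6 face neighbours.
    int coords[3];
    MPI_Cart_coords(cartComm, rank, 3, coords);
    int west, east, south, north, bottom, top;
    MPI_Cart_shift(cartComm, 0, 1, &west, &east);
    MPI_Cart_shift(cartComm, 1, 1, &south, &north);
    MPI_Cart_shift(cartComm, 2, 1, &bottom, &top);
    // The edge and corner neighbours (26 in total) can be obtained with
    // MPI_Cart_rank from the shifted coordinates.

    MPI_Comm_free(&cartComm);
    MPI_Finalize();
    return 0;
}
```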
The serial code is implemented in C++, which equips us with the versatility required to handle multiple particle shapes and sizes, based on the inheritance mechanism, virtual classes and dynamic typing. Each particle is an instance of a C++ class and all active particles on a subdomain, including particles in the buffer and clone zones, are stored in a primary list. Two additional separate lists, for buffer and clone particles respectively, are also created. As a consequence, when information about buffer particles needs to be sent to a neighbouring subdomain, we first loop over the list of buffer particles, extract the relevant information and copy it to a buffer memory container (a standard 1D array, i.e., a standard vector, of doubles or integers). Each subdomain keeps a list of reference particles corresponding to all the types of particles in the simulation. These reference particles store generic data such as mass, moment of inertia tensor and geometric features, such that MPI messages contain velocity and position information only and their size is reduced to the minimum.
Assorted communication strategies between processes (subdomains) can be designed, ranging from the simplest to the most advanced (to the best of our knowledge for a cartesian MPI decomposition). We list below the different strategies we implemented and tested, ranked in order of growing complexity:
• the AllGatherGlobal strategy: all processes send information from their buffer particles to all other processes, regardless of their location in the MPI cartesian grid, using an MPI_Allgather command.
A huge amount of useless information is sent, received and treated by each process. It is however a good starting point and performs well up to 8 (maybe 16) processes maximum.
• the AllGatherLocal strategy: all processes send information from their buffer particles to all their neighbouring processes. The amount of useless information is reduced, but it is still far from optimal. This is achieved by creating a local communicator for each process, including itself and its neighbours, and performing the MPI_Allgather command on this local communicator. This strategy performs reasonably well up to 16 (maybe 32) processes, but beyond that the scalability markedly deteriorates.
• the AllGatherLocal strategy with non-blocking sending: the next level of sophistication consists in replacing the MPI_Allgather command performed on the local communicator by a first stage of non-blocking sending of messages with the MPI_Isend command, combined with a classical blocking receiving stage with the MPI_Recv command. Incoming messages are first checked with the MPI_Probe command and their size is detected with the MPI_Get_count command, such that the receiving buffer is properly allocated for each received message ([START_REF] Iglberger | Massively parallel rigid body dynamics simulations[END_REF], [START_REF] Wachs | PeliGRIFF, a parallel DEM-DLM/FD direct numerical simulation tool for 3D particulate flows[END_REF]). Using non-blocking sending speeds up communications, as the MPI scheduler can initiate the receiving operations even if the sending operations are not completed, but still a large amount of useless information is sent, received and treated.
• the adopted optimal strategy, called SendRecv_Local_Geoloc: not only are cells (and hence the particles belonging to these cells) tagged in terms of their status (0 = interior, 1 = buffer and 2 = clone, see Fig. 5.1), but cells in the buffer zone are also tagged in terms of their location with respect to the neighbouring subdomains using a second tag, named GEOLOC for geographic location, that takes 26 values whose meaning is rather obvious on a 3D cartesian grid (see Fig. 5.). Depending on the particle's GEOLOC tag, information from a buffer particle is copied to one or more buffer vectors to be sent to neighbouring subdomains. There are essentially three situations, as illustrated below:
- a buffer particle with a main GEOLOC tag: for instance, a particle tagged SOUTH is sent to the SOUTH neighbouring subdomain only (Fig. 5.3),
- a buffer particle with an edge GEOLOC tag: for instance, a particle tagged SOUTH_EAST is sent to the SOUTH, EAST and SOUTH_EAST neighbouring subdomains only (Fig. 5.4),
- a buffer particle with a corner GEOLOC tag: for instance, a particle tagged SOUTH_WEST_TOP is sent to the SOUTH, WEST, TOP, WEST_TOP, SOUTH_WEST, SOUTH_TOP and SOUTH_WEST_TOP neighbouring subdomains only.
Similarly to the AllGatherLocal strategy, exchange of information between neighbouring subdomains is performed by a combination of non-blocking sending operations using MPI_Isend and blocking receiving operations using MPI_Recv.
The buffer vectors sent and received by processes are of the C double type. A buffer vector contains, for each particle, the following data: particle identity number, particle reference type, MPI rank of the sending process, velocity, position and orientation, for a total of 29 numbers. The particle identity number, particle reference type and MPI rank of the sending process are integer numbers and are cast into doubles such that all features can be concatenated into a single vector of doubles. Hence each process sends to and receives from another neighbouring process a single message containing a vector of doubles with the MPI_DOUBLE data type (instead of sending and receiving separately, in two different messages, a vector of doubles with the MPI_DOUBLE data type and a vector of integers with the MPI_INT data type). Each message size is then 29 times the size of a double times the number of buffer particles with the appropriate GEOLOC tag. Due to the considerable latency involved in any MPI message, efficient parallel performance involves keeping the number of messages as low as possible. This explains why we cast integers to doubles as a way to avoid heterogeneous data types and/or twice as many messages (the computing cost of the cast operation from integer to double when sending and back from double to integer when receiving is much smaller than that associated with sending and receiving two messages instead of one). Another option, which we have not tried, is to convert all data types to raw bytes and send a single vector of raw bytes using the MPI_BYTE data type. Each neighbouring process is then responsible for converting the received raw byte messages back to their original data types. This strategy has been successfully implemented in [START_REF] Iglberger | Massively parallel rigid body dynamics simulations[END_REF] and [START_REF] Wachs | PeliGRIFF, a parallel DEM-DLM/FD direct numerical simulation tool for 3D particulate flows[END_REF].
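The message packing and exchange described above can be sketched as follows. The function names and the partial 29-double layout are simplified assumptions for illustration and do not correspond to the actual Grains3D interfaces; only the MPI calls (MPI_Isend, MPI_Probe, MPI_Get_count, MPI_Recv) are the ones named in the text.

```cpp
#include <mpi.h>
#include <vector>

// Pack one buffer particle: id, reference type and sender rank are cast to
// double, followed by kinematic data (the remaining slots of the 29-double
// layout used in Grains3D are omitted in this sketch).
void packParticle(std::vector<double>& msg, int id, int refType, int rank,
                  const double vel[6], const double pos[3], const double quat[4]) {
    msg.push_back(static_cast<double>(id));
    msg.push_back(static_cast<double>(refType));
    msg.push_back(static_cast<double>(rank));
    msg.insert(msg.end(), vel,  vel  + 6);   // translational + angular velocity
    msg.insert(msg.end(), pos,  pos  + 3);
    msg.insert(msg.end(), quat, quat + 4);
}

// Non-blocking send to one neighbour, blocking probe/receive from another.
void exchange(MPI_Comm comm, int sendTo, int recvFrom,
              const std::vector<double>& sendBuf, std::vector<double>& recvBuf) {
    MPI_Request req;
    MPI_Isend(sendBuf.data(), static_cast<int>(sendBuf.size()), MPI_DOUBLE,
              sendTo, /*tag=*/0, comm, &req);

    MPI_Status status;
    MPI_Probe(recvFrom, 0, comm, &status);
    int count = 0;
    MPI_Get_count(&status, MPI_DOUBLE, &count);   // message size unknown a priori
    recvBuf.resize(count);
    MPI_Recv(recvBuf.data(), count, MPI_DOUBLE, recvFrom, 0, comm, MPI_STATUS_IGNORE);

    MPI_Wait(&req, MPI_STATUS_IGNORE);
}
```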
At each time step, the full solving algorithm on each subdomain reads as follows:
1. for all particles with status 0 or 1: initialize force to gravity and torque to 0
2. for all particles:
(i) detect collisions
(ii) compute contact forces & torques

3. for all particles with status 0 or 1:
(i) solve Newton's laws: Eq. (5.1) for the translational velocity and Eq. (5.2) for the angular velocity
(ii) update the position (Eq. (5.3)) and the orientation (Eq. (5.4))
4. search for particles whose status changed from 0 to 1 and add them to the list of buffer particles

5. MPI step using the SendRecv_Local_Geoloc strategy (in the general 3D case of 26 neighbouring subdomains):
(i) copy the features of the buffer particles into the different buffer vectors of doubles depending on their GEOLOC tag,
(ii) perform non-blocking sends of each of the 26 buffer vectors of doubles to the corresponding neighbouring subdomains,
(iii) for j = 0 to 25 (i.e., for each of the 26 neighbouring subdomains):
(I) perform a blocking receive of the vector of doubles sent by neighbouring subdomain j,
(II) treat the received vector of doubles containing particle information:
• create or update clone particles
• delete clone particles that moved out of the subdomain

6. for all particles: based on their new position, update the status and GEOLOC tags and the corresponding lists of buffer and clone particles
Computing performance
In this section, we assess the computational performance of our parallel DEM code Grains3D on assorted flow configurations in which the load balancing in terms of number of particles per subdomain (process) is approximately constant over the whole simulation. All the test cases considered hereafter are fully three-dimensional. In all computations, each core hosts a single subdomain and a single process. Hence, the terms "per core", "per subdomain" and "per process" are equivalent. Computations are performed on a supercomputer with 16 cores per node. Our primary goal is to compute larger systems for a given computing time. We therefore assess the computational performance of Grains3D in terms of weak scaling. We compute the parallel scalability factor S(n) with the following expression:
$$S(n) = \frac{T(1, N)}{T(n, N \times n)} \qquad (5.7)$$
where T (1, N ) denotes the computing time for a problem with N particles computed on a single core or a single full node and T (n, N × n) denotes the computing time for a similar problem with N × n particles computed on n cores or nodes.
Assessing memory management on multi-core node architecture
Discharge flow in silos
The first test case is the discharge of particles from a silo. Before performing weak scaling tests, we validate our DEM solver versus experimental data. For that purpose, we select the work of González-Montellano et al. (2011) as a reference because of its conceptual simplicity. Their study consists in comparing their own DEM simulation results to experimental data of spherical glass beads of 13.8 mm diameter discharging from a silo. The silo has a 0.5 m height (H) and 0.25 m sides (L) (Fig. 5.5a). The bottom has a truncated pyramid shape with a square hopper opening of 57 mm sides whose walls make an angle θ = 62.5° with respect to the horizontal plane. In our simulations, we extend the bottom of the silo to collect all particles flowing through the opening of the hopper (see Fig. 5.5b). Obviously, this does not affect the discharge dynamics and rate.
As in [START_REF] González-Montellano | Validation and experimental calibration of 3d discrete element models for the simulation of the discharge flow in silos[END_REF], we fill the silo with 14000 spherical particles by performing a first granular simulation with the opening of the hopper sealed by a plate. In this preliminary simulation, we insert all particles together as a structured array in order to reduce the computing time (see Fig. 5.6 at t = 0). To this end, we extend the height of the silo such that all particles fit into the silo before they start to settle. The initial particle positions at the insertion time are actually slightly perturbed with a low-amplitude random noise in order to avoid any artificial microstructural effect. Particles then settle under gravity and collide until the system reaches a pseudo steady state corresponding to a negligible total kinetic energy (see Fig. 5.6 at t = T_fill). As observed in González-Montellano et al. (2011), the 14000 spherical particles fill the silo up to H_m ≃ 0.86 H. After the filling of the silo, the plate that blocks the particles is removed by imposing a fast frictionless translational displacement to start the discharge. Simulations are run until all particles have exited the silo (see Fig. 5.6 at t = T_dis).
As in Wachs et al. (2012), our contact model is a linear damped spring with tangential Coulomb friction for both particle-particle and particle-wall contacts. The magnitude of the parameters involved in the silo discharge simulations is given in Tab. 5.1. In Wachs et al. (2012), we elaborated on the fact that the spring stiffness k_n in our contact model can be linearly related to the Young modulus E of the material. Since the contact duration is inversely proportional to k_n, a high E leads to a short contact duration, and hence a correspondingly small time step ∆t. For glass beads, the Young modulus E is approximately 50 GPa. It leads to a time step magnitude of the order of ∆t ∼ 10^-7 s, which would require computing an unnecessarily large number of time steps to simulate the whole discharge of the silo. In fact, as explained in Wachs et al. (2012), the stiffness coefficient k_n is generally not set in accordance with Hooke's law and Hertzian theory, but rather in a way to control the maximum overlap between particles as they collide. The meaningful parameters from a physical viewpoint are the coefficient of restitution e_n and the Coulomb friction coefficient µ_c. A smaller k_n enables us to use much larger time steps without affecting the whole dynamics of the system. This is rather customary in DEM simulations of non-cohesive materials. For more detail about how to determine k_n, the reader is referred to Wachs et al. (2012), Cleary and Sawley (2002), Cleary (2004; 2010) and the references therein. In Tab. 5.2, the meaningful physical parameters e_n and µ_c are set to exactly the same values as those selected by González-Montellano et al. (2011).
Figure 5.6 - Snapshots of the silo simulation at t = 0, 0 < t < T_fill, t = T_fill, T_fill < t < T_dis and t = T_dis.
Using an estimate of the maximum collisional velocity of v_col = 4.5 m/s, the selected value of k_n leads to a maximum overlap distance of 3% of the sphere radius. Please note that this estimate is highly conservative as v_col = 4.5 m/s is the free-fall velocity of particles as they collide with the bottom wall of the collecting bin underneath the hopper opening. In fact, the collecting bin height is ≈ 1 m, hence we get √(2 × 9.81 × 1) ≈ √20 ≈ 4.5 m/s. In the dense discharging granular material above the hopper opening, the actual collisional velocity is much lower. As a result, the maximum overlap between colliding particles in this part of the granular flow is less than 0.1% of the sphere radius, a value commonly deemed to be a very satisfactory (and almost over-conservative) approximation of rigid bodies in DEM simulations.
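The overlap estimate quoted above can be recovered with a back-of-the-envelope energy balance on the undamped linear spring, δ_max ≈ v_col √(m/k_n). The short program below performs this check; the glass density used (2500 kg/m³) is an assumption of ours and is not given in the text, and neglecting the dashpot makes the estimate slightly conservative (it returns δ_max/R ≈ 3.8%, to be compared with the ≈ 3% quoted above).

#include <cmath>
#include <cstdio>

int main()
{
  const double pi   = 3.141592653589793;
  const double rho  = 2500.0;    // assumed glass density (kg/m^3), not given in the text
  const double d    = 13.8e-3;   // particle diameter (m)
  const double kn   = 1.0e6;     // particle-wall spring stiffness (N/m)
  const double vcol = 4.5;       // conservative collisional velocity (m/s)

  const double m        = rho * pi * d * d * d / 6.0;    // particle mass
  const double deltaMax = vcol * std::sqrt(m / kn);      // undamped overlap estimate
  std::printf("m = %.3e kg, delta_max = %.3e m, delta_max/R = %.1f %%\n",
              m, deltaMax, 100.0 * deltaMax / (0.5 * d));
  return 0;
}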
Table 5.1 - Magnitude of the parameters involved in the silo discharge simulations.
Particle-Wall: k_n (N m^-1) = 1 × 10^6; e_n, µ_n (s^-1) = 0.62, 3.63 × 10^3; µ_c = 0.3; k_ms = 1 × 10^-5; δ_max (m), δ_max/R = 2.25 × 10^-4, 0.033; T_C (s) = 1.85 × 10^-4
Particle-Particle: k_n (N m^-1) = 7.2 × 10^5; e_n, µ_n (s^-1) = 0.75, 1.87 × 10^3; µ_c = 0.3; k_ms = 1 × 10^-5; δ_max (m), δ_max/R = 1.92 × 10^-4, 0.028; T_C (s) = 1.55 × 10^-4
∆t (s) = 1 × 10^-5

We report in Tab. 5.2 the values of the discharge time experimentally measured by González-Montellano et al. (2011) together with our simulation result. González-Montellano et al. (2011) carried out the same experiment three times but it seems that the observed deviation of the discharge time with respect to the mean value is very limited (of the order of 0.2%). In other words, the initial microstructure of the particles in the silo before removing the hopper gate is essentially similar and does not markedly affect the discharge process. Based on this observation, we perform a single discharge simulation. Our model shows (even surprisingly) good agreement with the discharge time measured in the experiments of González-Montellano et al. (2011). Snapshots of the discharge process also exhibit a highly satisfactory agreement between our simulations and the experiments of González-Montellano et al. (2011), as presented in Fig. 5.7. Although our goal in this work is not to carry out an extensive analysis of the discharge, this validation is computationally cheap and helps to gain confidence in the computed results. We are now in a sensible position to perform weak scaling tests and assess the scalability properties of our parallel DEM solver.

Figure 5.7 - Experiments of González-Montellano et al. (2011) and our simulation results with Grains3D: snapshots of the discharge dynamics at different times.
Parallel scalability
On the single-core architectures of the 90s, each core had its own levels of cache and its own random-access memory (RAM). The limitation of parallel implementations was hence essentially the communication overhead. This overhead depends on the MPI strategy (size of messages, synchronous/asynchronous communication, blocking/non-blocking communication, etc.). Since the early 2000s, the new emerging architecture relies on multi-core processors. In a supercomputer, these multi-core processors are bundled in computing nodes, i.e., a computing node hosts multiple processors that each host multiple cores. Cores share levels of cache on the processor they belong to and processors share RAM on the computing node they belong to. The aftermath is a more complex and competitive access to memory by all the cores of a computing node. Hence, parallel implementations running on modern supercomputers can be limited as much by the communication overhead as by the intra-processor and intra-node memory management and access. Our parallel DEM solver Grains3D is programmed in C++. C++ equips programmers with a formidable level of flexibility to handle multi-shape and multi-size granular flows with well-known object-oriented mechanisms such as inheritance, virtual classes and dynamic typing. Another enjoyable feature of C++ is constructors and destructors, which enable one to create and delete instances of objects with a high level of control on memory allocation (provided the constructor and destructor are properly programmed). However, dynamic memory allocation/deallocation, even with absolutely no memory leak, can literally kill the parallel performance of a numerical code. This has nothing to do with inter- and intra-node MPI communications between cores but rather with the management of and competitive access to memory. The first parallel version of Grains3D exhibited dramatically poor scalability properties. It took us a while to realize that the limitation was coming from an excessive use of constructors/destructors while our MPI strategy was performing quite well from the start. A complete refactoring of the code, using dynamic memory allocation/deallocation only when absolutely necessary together with partial over-allocation, strikingly improved the scalability properties. It is hard to get here into the details of our C++ implementation but Grains3D is now programmed with the following guidelines: (i) use object-oriented programming concepts at a very high level of design only, (ii) use standard old-fashioned C/F77-like containers whenever possible and (iii) slightly over-allocate memory and reduce dynamic memory management to the absolute minimum. There is still room for improvement in our implementation but we are now in a position to present acceptable parallel properties. It is this new version of the code with enhanced memory management whose scalability properties we assess in the rest of the paper. In this section, we design two slightly different multi-silo discharge configurations in order to discriminate the computing overhead related to (i) memory competitive access and management and pure MPI communication latency from (ii) actual MPI communications and treatment of received information.
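As a caricature of guidelines (ii) and (iii) (this is not Grains3D code; the class and function names are invented for illustration), the fragment below contrasts a slightly over-allocated, reusable contact container with the anti-pattern of rebuilding a list of heap-allocated objects at every time step, which is precisely the kind of dynamic memory traffic that degrades performance on multi-core nodes.

#include <vector>

struct Contact { int i, j; double overlap; };

// Guideline-compliant container: allocated once (slightly over-allocated) and
// reused at every time step, so no dynamic allocation occurs in the time loop.
class ContactList
{
public:
  explicit ContactList(std::size_t expectedMax) { m_contacts.reserve(2 * expectedMax); }
  void beginTimeStep() { m_contacts.clear(); }   // keeps the allocated capacity
  void add(int i, int j, double overlap) { m_contacts.push_back(Contact{i, j, overlap}); }
  const std::vector<Contact>& contacts() const { return m_contacts; }
private:
  std::vector<Contact> m_contacts;
};

// Anti-pattern: a fresh list of heap-allocated objects at every time step. Even
// without any leak, the allocator traffic and the competitive access to memory
// on a multi-core node severely degrade performance.
std::vector<Contact*>* buildContactsEveryStep(std::size_t n)
{
  std::vector<Contact*>* list = new std::vector<Contact*>();
  for (std::size_t k = 0; k < n; ++k) list->push_back(new Contact{0, 0, 0.0});
  return list; // the caller must delete every element and the list itself
}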
The first flow configuration consists in discharging particles from several silos using the previous configuration (Fig. 5.5). The multiple-silos case is designed in a way that a silo is handled by a single core without any actual communication with neighbouring subdomains (Fig. 5.8). In fact, silos are located far enough from each other to avoid the creation and destruction of clone particles. This flow configuration is hence illustrative of case (i): memory competitive access and management and MPI communication latency. In fact, the code runs in MPI but messages are empty. The overhead coming from MPI is hence essentially related to the latency of the MPI scheduler to send and receive messages. We adopt a two-dimensional domain decomposition (N_cores,x × N_cores,y × 1 = N_cores) to guarantee exact load balancing between the cores. We evaluate the scalability of our code by gradually increasing the size of our system. To this end, we perform discharge simulations of 2,000 cubic particles and 2,000, 14,000, and 100,000 spherical particles per silo, starting from one silo up to 256 silos. Varying the load of particles per core changes the amount of memory allocated, managed and accessed by the code on each core. This enables us to discriminate further between memory management and MPI latency so that the effects of these two factors are not mixed up. In fact, MPI latency is independent of the particle load as the number of messages sent and received scales with the number of cores. The total number of particles N_T in the system is a multiple of that in a single-core system and is defined as follows:
N_T = N_p,1 × N_cores    (5.8)
where N_p,1 and N_cores are respectively the number of particles on a single-core system and the number of cores. The largest system comprises 100,000 × 256 = 25,600,000 spherical particles. As the granular medium is dense in most of the domain, the largest part of the computing time (more than 85%) is spent in computing interactions between particles, i.e., contact detection and contact forces. For the weak scaling tests, we run all discharge simulations over 300,000 time steps. Reference times of single-core jobs are listed in Tab. 5.3. A first interesting comment about Tab. 5.3 is that the computing time per particle and per time step is not constant and slightly increases with the size of the system. Even when running in serial mode, memory access is apparently not optimal as containers of larger size (e.g. a larger list of particles) seem to slow down the computation. Some additional efforts in refactoring the serial implementation of the code are required but this is beyond the scope of the present paper.
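For concreteness, the snippet below sketches how a fixed N_cores,x × N_cores,y × 1 Cartesian decomposition such as the one used above can be set up with MPI's built-in Cartesian topology routines. It is illustrative only: the global domain size is hypothetical and Grains3D may build its subdomains differently.

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);

  int nRanks = 0;
  MPI_Comm_size(MPI_COMM_WORLD, &nRanks);

  // Let MPI pick N_cores,x x N_cores,y while the third dimension is kept to 1
  int dims[3] = {0, 0, 1};
  MPI_Dims_create(nRanks, 3, dims);
  int periods[3] = {0, 0, 0};
  MPI_Comm cart;
  MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &cart);

  int rank = 0, coords[3] = {0, 0, 0};
  MPI_Comm_rank(cart, &rank);
  MPI_Cart_coords(cart, rank, 3, coords);

  // Hypothetical global domain size; each rank deduces its own subdomain extent
  const double Lx = 4.0, Ly = 4.0;
  const double hx = Lx / dims[0], hy = Ly / dims[1];
  std::printf("rank %d owns [%g,%g] x [%g,%g]\n", rank,
              coords[0] * hx, (coords[0] + 1) * hx,
              coords[1] * hy, (coords[1] + 1) * hy);

  MPI_Comm_free(&cart);
  MPI_Finalize();
  return 0;
}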
The second flow configuration is very similar except that now all silos are merged together into a big silo. The whole domain is thus shared among the cores and actual communications (in the sense of communications with non-empty messages) between subdomains are exchanged (see Fig. 5.9 and 5.10). For this purpose, we performed discharge simulations of 10,000 cubic particles and 2,000, 14,000 and 100,000 spherical particles. As for the first flow configuration, a two-dimensional domain decomposition is chosen such that each subdomain has approximately the same number of particles as if the silos were independent. This hence guarantees again an almost perfect load balancing between the cores.

Table 5.3 - Silo discharge for different systems: reference times of a serial job over 300,000 time steps of 10^-5 s.

Fig. 5.11 illustrates the scalability of our code for these two flow configurations. At first sight, results are very similar without (separate silos) and with (silos merged into a big silo) actual communications. We plot in Fig. 5.11a the parallel performance of Grains3D on the first test case, i.e., without any overlap between separate silos and with empty MPI messages. This figure indicates that for low numbers of particles per core, the limiting factor is clearly MPI latency while for high numbers of particles per core, the serial computations per core prevail and the MPI latency becomes negligible. Hence, the loss of performance is primarily related to a yet non-optimal memory access and management on multi-core architectures. However, for a high enough number of particles per core, e.g. 100,000 spheres, the scaling factor S(n = N_cores) is independent of n up to n = 256 cores and is around 0.85. As the contact detection of convex bodies is more time-consuming than that of spheres, S(n) for cubes is higher than S(n) for spheres for the same number of particles per core. Hence, we expect that for more than 100,000 particles per core, the observed scaling factor of 0.85 for spheres is actually a lower bound and that the scaling factor for non-spherical particles should be higher. We plot in Fig. 5.11b the parallel performance of Grains3D on the second test case, i.e., a big silo split into subdomains and non-empty MPI messages. The 2,000-spheres-per-core case is special as on each subdomain there are almost as many particles in the actual subdomain, i.e., interior and buffer zones, as in the clone layer, leading to a high global communication overhead (size of messages and treatment of the information received). This is getting worse and worse as the number of cores increases (see the blue line in Fig. 5.11b). The general outcome is in line with the first test case with empty messages: for a large enough number of particles per core, the scaling factor S(n) is satisfactory (it is actually 0.78 for 100,000 spheres and is likely to be higher for 100,000 non-spherical particles). This is again emphasized in Fig. 5.12 where we compare the communication overhead to the serial computational task for a sphere and a polyhedron. The difference shown there is primarily due to the contact detection, which requires a GJK algorithm for non-spherical particles while it is analytical (and hence faster) for spheres (see Wachs et al. (2012) for more detail). Interestingly, for 100,000 spheres, the scaling factor S(n) drops from 0.85 with empty messages to 0.78 with non-empty messages and treatment of the received information. Therefore, the actual overall parallel overhead is around 7% and the rest of the loss of performance, i.e., the remaining 15%, is predominantly due to non-optimal memory access and management on multi-core chips. These first two test cases are extremely informative.
They show that for a dense granular flow with a minimum load of 100,000 particles per core, we can expect a good overall parallel performance with a scaling factor S(n) ≳ 0.75 on up to 512 to 1024 cores. Systems with a low particle load per core, i.e., of the order of a few thousands, show an unsatisfactory, although not dramatically poor, parallel performance that exhibits the obvious tendency to degrade with the number of cores n. Overall, the MPI strategy presented in Section 3 is deemed to perform very well while additional efforts in serial programming are required to improve memory access and management. Systems with up to 100,000,000 particles can be computed with a scaling factor of at least 0.75, which is deemed to be very satisfactory for engineering and fundamental physics purposes.
Granular slumping
Dam break collapse
Granular column collapse is a very classical flow configuration to understand the fundamental dynamics of granular media (Ritter (1892), Balmforth and Kerswell (2005), Ancey et al., Lajeunesse et al. (2005), Lube et al. (2005)). The "dam break" configuration in a rectangular channel has been extensively studied by many authors, experimentally (Lajeunesse et al. (2005), Lube et al. (2005), Balmforth and Kerswell (2005)), analytically (Balmforth and Kerswell (2005)) and numerically (Girolami et al. (2012)), among others. The experimental set-up is cheap and experiments are easy to conduct. The overall picture of granular column collapse has been described in many papers and books (and in particular in the aforementioned papers) but a fully comprehensive understanding is still lacking. To summarize, the macroscopic features of the collapse, i.e., the final height H_∞/H and the run-out distance (L_∞ - L)/L = X_f/L, scale with the initial aspect ratio a = H/L of the column, where H and L denote the initial height and initial length of the column, respectively, and H_∞ and L_∞ denote the final height and final length of the column, respectively. It has been established and verified by many authors that H_∞/H and (L_∞ - L)/L are essentially functions of a and vary as H_∞/H ≃ λ_1 a^α and (L_∞ - L)/L ≃ λ_2 a^β, with α ≈ 1 for a ≲ 0.7 and α ≈ 1/3 for a ≳ 0.7, and β ≈ 1 for a ≲ 3 and β ≈ 2/3 for a ≳ 3, although Balmforth and Kerswell found slightly different exponents (Balmforth and Kerswell (2005)). Anyhow, the constants λ_1 and λ_2 are largely undetermined. In the inertia-dominated regime a ≳ 3, Lube et al. (2005) suggested that λ_2 = 1.9. Although the qualitative description of granular column collapse in a rectangular channel is acknowledged by all contributors to the field, significant quantitative discrepancies can be found in terms of experimentally measured run-out distances between e.g. Lube et al. (2005) and Balmforth and Kerswell (2005). It is admitted that the problem is primarily governed by the initial aspect ratio a but the various existing studies also suggest that λ_1 and λ_2 might not be true constants but functions of the transverse dimension of the channel (narrow or wide slots), the type of material and the shape of the particles, although this functional dependence might be weak. In any case, the scaling analysis is assumed to be valid, which implies that the general behaviour and hence H_∞/H and (L_∞ - L)/L are independent of the dimensional system size.
In Girolami et al. (2012), we used Grains3D to carry out an extensive analysis of dam-break granular collapses in a rectangular channel and satisfactorily reproduced the experimental data of Lajeunesse et al. (2005). Here our objective is twofold: (i) show that the scaling analysis is indeed valid by computing systems of increasing size but constant a, and that the computed run-out distance is within the reported experimental range of values, and (ii) use the largest system as a reference point for weak scaling parallel tests.
Numerical simulation
Simulations are performed based on a well-known experimental set-up: a box with a lifting gate (see Fig. 5.13). The simulation procedure consists in filling the parallelepipedic reservoir of length L and width W up to a height H with granular media. Particles are inserted at the top of the reservoir. They settle by gravity and collide until the system reaches a pseudo steady state corresponding to a negligible total kinetic energy. Then, the gate is lifted over a time scale much smaller than that of the collapsing media, in the sense that it does not affect the dynamics of the whole system. The moving gate is also chosen to be frictionless to avoid particles located close to the gate being artificially lifted in the air. The lateral boundaries of our system are subjected to periodic conditions to mimic an infinite granular medium in the direction transverse to the flow. Particles are assumed to have a mono-sized icosahedral shape that mimics quartz-sand grains. Icosahedral particles have an equivalent diameter d_p (diameter of the sphere of same volume as the icosahedron) of 3 mm. The magnitude of the parameters involved in the granular collapse simulations is given in Tab. 5.4. We take the free-fall settling velocity of the highest heap of particles (Size 5, H = 0.905 m) as an estimate of the maximum collisional velocity. We hence get v_col = √(2 × 9.81 × 0.905) ≈ 4.2 m/s. The theoretical maximum overlap is of the order of at most 5% of the particle equivalent radius, as shown in Tab. 5.4. In practice, the average overlap and maximum overlap in all simulations are of the order of 0.1% and 1%, respectively.

Figure 5.13 - Granular column collapse: sketch of the computational set-up (reservoir of length L, width W and height H filled with the granular media; X, Y and Z axes).

Table 5.4 - Magnitude of the parameters involved in the granular collapse simulations.
Particle-Wall: k_n (N m^-1) = 1 × 10^5; e_n, µ_n (s^-1) = 0.75, 6.86 × 10^3; µ_c,PB, µ_c,PG = 0.5, 0; k_ms = 0; δ_max (m), δ_max/R = 7.16 × 10^-5, 0.048; T_C (s) = 5.91 × 10^-5
Particle-Particle: k_n (N m^-1) = 1 × 10^5; e_n, µ_n (s^-1) = 0.75, 6.86 × 10^3; µ_c = 0.5; k_ms = 0; δ_max (m), δ_max/R = 4.87 × 10^-5, 0.0325; T_C (s) = 4.19 × 10^-5
∆t (s) = 2.5 × 10^-6

We fix a to roughly 7.3 and select five systems of increasing dimensional size. The way we proceed is as follows: we set W = 0.5 m and select L = L_1 = 0.025 m as the length of the smallest system. We fill the reservoir with N_1 = 98,000 mono-disperse icosahedral particles and the resulting height is H = H_1 = 0.187 m. The 4 other systems have the following features: for i ∈ {2, ..., 5}, L_i = i L_1, N_i = i² N_1, H_i ≈ i H_1. The simulation of the filling process results in the following actual height and aspect ratio of the reservoir of particles for the different systems:
• Size 1: 98000 particles (H = 0.187m, L = 0.025m, a = 7.475)
• Size 2: 392000 particles (H = 0.365m, L = 0.05m, a = 7.305)
• Size 3: 882000 particles (H = 0.547m, L = 0.075m, a = 7.296)
• Size 4: 1568000 particles (H = 0.731m, L = 0.1m, a = 7.31)
• Size 5: 2450000 particles (H = 0.905m, L = 0.125m, a = 7.238)
The resulting aspect ratio a is 7.3 ± 2.3%. The observed limited deviation of 2.3% is the aftermath of systems of slightly different compaction. In fact, the initial height is a result of the filling simulation and cannot be set a priori. It is only known after all particles have settled in the reservoir and the system exhibits a negligible total kinetic energy. It has been noticed that once the free-fall phase of all particles is complete, the system relaxes and densifies extremely slowly over a time scale of a few seconds at least. Slow microstructural re-arrangements lead to a progressively more compact granular medium in the reservoir. Actually, starting from a loose packing, the compaction of the system can be very slow, even with successive vertical taps (Knight et al. (1995)). In terms of computational cost, this situation may lead to an extremely long simulation time since the typical time step is of the order of a microsecond. We assume that these slight variations of the initial aspect ratio a, and correspondingly of the initial volume fraction and microstructure of the granular media, have a very low impact on the whole granular collapse. In the worst case, they will result in similarly slight variations of the final height and the final run-out distance.
Measuring the run-out distance in an unbiased way is not straightforward as, once the collapse is complete, the front of the deposit of particles is diffuse (detached particles are spread out). In order to determine the total length of the final deposit, we employ the following procedure:
• we consider the bottom layer of particles, whose thickness is roughly a particle equivalent diameter d_p,
• we translate along the bottom wall a box-like control volume that spans the whole transverse dimension of the flow domain (V_b = d_p × W × d_p) from the origin of the X-axis and compute the solid volume fraction as a function of X as follows:
φ(X) = (Σ_{i=1}^{N} V_i) / V_b ,    (5.9)
where V_i = π d_p³/6 is the particle volume and N is the number of particles whose center of mass belongs to V_b.
• the total length of the final deposit L_∞ is determined once the condition φ(L_∞) ≤ 0.1 is satisfied (a minimal sketch of this procedure is given below). Note that changing the critical value of the average solid volume fraction in the control volume V_b from 0.1 to 0.05 or 0.025 does not significantly change the estimation of L_∞.
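Under the assumptions stated above (bottom layer of thickness d_p, control volume V_b = d_p × W × d_p translated along X, threshold φ = 0.1), a minimal implementation of this measurement could look as follows; the function name and data layout are ours and purely illustrative.

#include <cmath>
#include <vector>

struct Particle { double x, y, z; };  // center of mass coordinates

// Returns the total length L_inf of the final deposit: the control volume
// V_b = d_p x W x d_p is translated along X from the origin and the first
// position at which phi drops below phiCrit is taken as L_inf.
double runOutLength(const std::vector<Particle>& particles,
                    double dp, double W, double Xmax, double phiCrit = 0.1)
{
  const double pi = 3.141592653589793;
  const int nBins = static_cast<int>(std::ceil(Xmax / dp));
  std::vector<int> count(nBins, 0);

  // Bottom layer only: particles whose center lies within one diameter of the wall
  for (std::size_t p = 0; p < particles.size(); ++p)
    if (particles[p].z < dp && particles[p].x >= 0.0 && particles[p].x < Xmax)
      ++count[static_cast<int>(particles[p].x / dp)];

  const double Vp = pi * dp * dp * dp / 6.0;  // particle volume
  const double Vb = dp * W * dp;              // control volume
  for (int b = 0; b < nBins; ++b)
    if (count[b] * Vp / Vb <= phiCrit) return b * dp;  // front of the deposit reached
  return Xmax;
}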
Fig. 5.14 and Fig. 5.15 illustrate the dynamics of the granular collapse and the time evolution of the free surface in a 2D X-Z cut plane and in 3D, respectively, for the Size 4 case. As observed by Lube et al. (2005), the early transient of the collapse corresponds to a free-fall regime (Fig. 5.14 (a)-(c)) until the flow transitions to a phase over which the advancing front of the collapsing granular media reaches a quasi-constant velocity (Fig. 5.14 (d)-(f)); finally, the flow is friction-dominated and slows down to rest. Interestingly, over the second phase, the front of the collapsing granular media shows a rather chaotic dynamics. Although the front advances at a quasi-constant velocity, the singularity that the front represents leads to a high level of particle agitation, with many particles being ejected/detached from the mass to ballistically free-fly until they settle back on the deposit. As experimentally observed by many authors, our computed results confirm that the overall dynamics, and in particular the final height, run-out distance and cross-sectional profile of the deposit, are independent of the size of the system and solely controlled by the initial aspect ratio a. We present in Fig. 5.16 a view from the top of the final deposit together with the scaled total length of the deposit L_∞/L obtained with the criterion φ ≤ 0.1 (red line) for all systems. The variation of the run-out distance (L_∞ - L)/L is quantitatively plotted in Fig. 5.17. It is pretty obvious that (L_∞ - L)/L is quasi-constant as a function of the size of the system. The limited variations obtained are primarily a result of the slight variations of a for the different sizes in the computations. Finally, the final scaled cross-sectional profiles of the deposit for all system sizes nicely collapse on a unique master plot, as shown in Fig. 5.18, emphasizing once again the dependence on a and not on the dimensional size of the system. Let us complete this subsection by shortly discussing the value of the obtained run-out distance. Lajeunesse et al. (2005) and Lube et al. (2005) agree on the scaling exponents while Balmforth and Kerswell (2005) suggest slightly different values. Please note that all these works are experimental. For inertia-dominated regimes a ≳ 3, Lube et al. (2005) even determine that the value of the constant λ_2 is around 1.9 and independent of the granular material properties and shape. Using their correlation (L_∞ - L)/L ≃ 1.9 a^(2/3), we get for a ≈ 7.3, (L_∞ - L)/L = 1.9 × 7.3^(2/3) ≃ 7.15, a value significantly less than our numerical prediction of ≈ 10.5. In their experiments, Lube et al. have lateral walls while we have periodic boundary conditions, i.e., no frictional resistance from any lateral walls. This difference in the flow configuration qualitatively justifies that our run-out distance is larger (less frictional resistance leads to a larger spread-out of the granular media) but is probably not sufficient to quantitatively explain the discrepancy. Although Lube et al. have lateral walls, their channel looks rather wide, so the additional frictional flow resistance is likely to be limited.
In Balmforth and Kerswell (2005), the authors claim that λ_2 is a function of the granular material properties and shape, based on their own experimental results. Figure 11 in Balmforth and Kerswell (2005) suggests that for a = 7.3, the run-out distance roughly spans the range [7 : 13] for wide channels, with the largest value found for fine glass. Fine glass grains seem to look moderately angular (see Figure 3 in Balmforth and Kerswell (2005)) and could presumably be well represented by icosahedra. Our computed run-out distance hence falls almost in the middle of the range of values reported in Balmforth and Kerswell (2005). Overall, our numerical prediction is in good agreement with the assorted experimental values reported in the literature. But additional simulations are required to further determine the right scaling and the potential dependence of that scaling on the granular material properties and shape.
Parallel scalability
We use the Size 5 granular column collapse flow configuration to perform weak scaling tests and further assess the parallel scalability of Grains3D. From Section 4.1, we learnt that a good parallel performance requires a minimum of ≈ 100,000 particles per core. Therefore our reference case on a single core approximately corresponds to the Size 5 case of Section 4.2 but 24 times narrower. The system on a single core comprises N_p,1 = 101850 icosahedra and its width is W_1 = 0.021875 m. For parallel computing, we increase the system width and the number of particles accordingly. We adopt a 1D domain decomposition in the Y direction such that each core hosts approximately 101850 particles. Hence, an N_cores-core computation corresponds to a system with N_T = N_p,1 × N_cores particles and of width W = N_cores × W_1, as detailed in Tab. 5.5. The weak scaling tests are performed over the first 20,000 time steps of the collapse.
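For instance, with these settings a 512-core run corresponds to a system of width W = 512 × 0.021875 ≈ 11.2 m containing N_T = 512 × 101850 ≈ 52 million icosahedra, and a 1024-core run to N_T ≈ 104 million icosahedra; these are the orders of magnitude behind the extrapolation discussed next.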
Fig. 5.19 shows the overall scalability of our code Grains3D. The code exhibits a very satisfactory performance for a particle load per core of ≈ 100,000 regular polyhedra. The scaling factor S(n = N_cores) is ≈ 0.93 on 512 cores for a system with a quasi-perfect load balancing. The plot seems to indicate a very slight degradation of the performance above 256 cores but the general trend suggests that S(n) should still be ≳ 0.9 on 1024 cores for a system comprising more than 100,000,000 regular polyhedra.

4.3 Coupling with a fluid in an Euler/Lagrange framework, application to fluidized beds
The final test case is a fluidized bed, i.e., a flow configuration in which the particle dynamics is not only driven by collisions but also by hydrodynamic interactions with the surrounding fluid. The model implemented here is of the two-way Euler/Lagrange or DEM-CFD type (Anderson and Jackson (1967), Kawaguchi et al. (1998), Tsuji et al. (2008), Pepiot and Desjardins (2012)). The principle of the formulation is to write fluid porosity-averaged conservation equations with an additional source term representing the reaction of the particles on the fluid, and to add a hydrodynamic force to the translational Newton's equation for the particles representing the action of the fluid on the particles. In our weak scaling tests below, we evaluate the parallel scalability of the solid solver only.
Formulation
The formulation of the set of governing equations dates back to Anderson and Jackson (1967) in the late 60s and was recently clarified in Capecelatro and Desjardins (2013). In essence, for the fluid part, the mass conservation equation and the momentum conservation equation are averaged by the local fluid porosity. In most formulations, the set of governing equations is integrated in control volumes larger than the particle diameter, although recent advances in this field have shown that it is possible to use a projection kernel disconnected from the grid size (Pepiot and Desjardins (2012), Capecelatro and Desjardins (2013)). Particle trajectories with collisions and hydrodynamic forces are tracked individually and computed by our granular dynamics code Grains3D. The two-way Euler/Lagrange formulation has been detailed many times in the past literature (see Kawaguchi et al. (1998), Tsuji et al. (2008), Xu and Yu (1997) among many others) and we shortly summarize the main features of our own two-way Euler/Lagrange numerical model.
The fluid is assumed to be Newtonian and incompressible. The set of governing equations for the fluid-solid coupled problem reads as follows:
• Fluid equations
We solve the following fluid porosity-averaged mass and momentum conservation equations:
∂ε/∂t + ∇·(εu) = 0    (5.10)

ρ_f [∂(εu)/∂t + ∇·(εuu)] = -∇p - F_fp + 2µ ∇·(εD)    (5.11)
where ρ_f, µ, ε and D stand for the fluid density, the fluid viscosity, the fluid porosity (also referred to as the fluid volume fraction) and the rate-of-strain tensor, respectively. The pressure gradient term only contains the hydrodynamic pressure and F_fp represents the fluid-particle hydrodynamic interaction force.
• Particles equations
We solve Eq. 5.1 and Eq. 5.2 with additional hydrodynamic interaction contributions F_i and M_i, respectively. The translational and angular momentum conservation equations of particle i hence read as follows:
M_i dU_i/dt = M_i (1 - ρ_f/ρ_p) g + Σ_{j=0, j≠i}^{N-1} F_ij + F_fp,i    (5.12)
J_i dω_i/dt + ω_i ∧ J_i ω_i = Σ_{j=0, j≠i}^{N-1} R_j ∧ F_ij + M_fp,i    (5.13)
where ρ_p, F_fp,i and M_fp,i stand for the particle density, the fluid-particle hydrodynamic interaction force exerted on particle i and the fluid-particle hydrodynamic interaction torque exerted on particle i, respectively.
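To make the role of the different contributions explicit, the fragment below sketches an explicit update of the translational equation (5.12) for a single particle, once the contact forces and the drag force have been evaluated. It is a schematic illustration of the coupling only; the time integration scheme actually used in Grains3D may differ.

#include <array>
#include <vector>

typedef std::array<double, 3> Vec3;

// Schematic explicit update of the translational equation (5.12) for one particle:
// buoyancy-reduced gravity + sum of the contact forces + fluid-particle drag force.
void advanceTranslation(Vec3& U, Vec3& X, double M, double rhoF, double rhoP,
                        const std::vector<Vec3>& contactForces,
                        const Vec3& dragForce, double dt)
{
  const double g[3] = {0.0, 0.0, -9.81};
  for (int k = 0; k < 3; ++k)
  {
    double F = M * (1.0 - rhoF / rhoP) * g[k] + dragForce[k];
    for (std::size_t c = 0; c < contactForces.size(); ++c) F += contactForces[c][k];
    U[k] += dt * F / M;   // explicit velocity update
    X[k] += dt * U[k];    // position update (semi-implicit Euler)
  }
}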
The fluid-particle hydrodynamic interaction force F_fp,i exerted on particle i (and similarly the torque) derives from the momentum exchange at the particle surface:
F_fp,i = ∫_{∂P_i} τ · n dS    (5.14)
where τ denotes the point-wise fluid stress tensor and n is the normal vector to the particle surface ∂P_i. In the two-way Euler-Lagrange framework, point-wise variables are not resolved. A closure law is hence needed to compute the fluid-solid interaction at the position of each particle (Kawaguchi et al. (1998), Tsuji et al. (2008), Pepiot and Desjardins (2012)). Following previous contributions to the literature, we assume that the dominant contribution to the hydrodynamic interaction is the drag and that the hydrodynamic torque is small enough to be neglected, i.e., we set M_fp,i = 0. In our fluidized bed simulations, particles are spherical and we select the drag correlation proposed by Beetstra et al. (2007a;b), which reads as follows:
F_fp,i = F_d,i = 3π d µ (u - U_i) g(ε, Re_p,i)    (5.15)

g(ε, Re_p) = 10(1 - ε)/ε² + ε² (1 + 1.5 √(1 - ε)) + [0.413 Re_p / (24 ε²)] × [ε⁻¹ + 3ε(1 - ε) + 8.4 Re_p^(-0.343)] / [1 + 10^(3(1-ε)) Re_p^(-0.5-2(1-ε))]

Re_p,i = ρ_f d_p ε |u - U_i| / µ    (5.16)
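A direct transcription of the drag closure (5.15)-(5.16) into a small C++ function is given below. It follows the formula exactly as written above; whether this matches the precise form coded in PeliGRIFF is not guaranteed, and the expression should be checked against Beetstra et al. (2007a;b) before any production use (in particular, Re_p = 0 must be handled separately since it makes the Re_p^(-0.343) term singular).

#include <cmath>

// Dimensionless drag function g(eps, Re_p) of Eq. (5.15), transcribed from the
// correlation written above; Re_p = 0 is singular and must be handled separately
// in a real solver.
double dragFunction(double eps, double Rep)
{
  const double phi = 1.0 - eps;   // solid volume fraction
  const double term1 = 10.0 * phi / (eps * eps);
  const double term2 = eps * eps * (1.0 + 1.5 * std::sqrt(phi));
  const double num = 1.0 / eps + 3.0 * eps * phi + 8.4 * std::pow(Rep, -0.343);
  const double den = 1.0 + std::pow(10.0, 3.0 * phi) * std::pow(Rep, -0.5 - 2.0 * phi);
  return term1 + term2 + 0.413 * Rep / (24.0 * eps * eps) * num / den;
}

// One component of the drag force of Eq. (5.15): F_d = 3*pi*d*mu*(u - U_i)*g(eps, Re_p)
double dragForceComponent(double d, double mu, double relVel, double eps, double Rep)
{
  const double pi = 3.141592653589793;
  return 3.0 * pi * d * mu * relVel * dragFunction(eps, Rep);
}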
To compute the reaction term -F_fp of the particles on the fluid flow, we need to use a projection operator from the Lagrangian description of the particle motion to the Eulerian description of the fluid flow. Here we use the simple embedded-cube projection kernel introduced by Bernard (2014). The fluid equations are discretized with a classical second-order-in-space Finite Volume/Staggered Grid discretization scheme and the solution algorithm is of the first-order operator-splitting type. The two-way Euler/Lagrange model used here is implemented in the PeliGRIFF platform, to which Grains3D is plugged to compute particle trajectories; see Wachs (2009) and Wachs et al. (2007-2016), among others. For more detail about the formulation of the model and its implementation, the interested reader is referred to Bernard (2014), Esteghamatian et al. (2017) and Bernard et al. The set of governing equations above can easily be made dimensionless by introducing the following scales: L_c for length, V_c for velocity, L_c/V_c for time, ρ_f V_c² for pressure and ρ_f V_c² L_c² for forces. In a dimensionless form, the governing equations contain the following dimensionless numbers: the Reynolds number Re_c = ρ_f V_c L_c / µ, the density ratio ρ_r = ρ_p / ρ_f and the Froude number Fr = g L_c / V_c².
Simulation set-up and parameters
We consider the fluidization of mono-disperse solid spherical particles in a simple box-like reactor. We use the uniform inlet velocity U_in as the characteristic velocity V_c and the spherical particle diameter d_p as the characteristic length L_c. The Reynolds and Froude numbers hence read as follows:
Re_in = ρ_f U_in d_p / µ    (5.17)

Fr_in = g d_p / U_in²    (5.18)
Results hereafter are presented in a dimensionless form and dimensionless variables are written with a • symbol. Particle positions are initialized as a cubic array arrangement with a solid volume fraction of π/6. The computational domain is shown in Fig. 5.20. The inlet boundary condition corresponds to an imposed velocity u = (0, 0, 1) and the outlet boundary condition corresponds to a standard free-flow condition with an imposed reference pressure. Lateral (vertical) boundaries are periodic.
Figure 5.20 -Fluidized bed computational domain.
The principle of our weak scaling tests is similar to the one adopted in the scaling tests of the previous sections, except that here the reference case is a full node that comprises 16 cores. The domain is evenly decomposed and distributed in the horizontal x-y plane to guarantee an optimal load balancing over the whole simulation, i.e., we adopt a N_cores,x × N_cores,y × 1 = N_cores domain decomposition. The reference case on a full 16-core node has the following dimensionless size: L_x = 200, L_y = 80 and L_z = 1500 and initially hosts 200 × 80 × 300 = 4,800,000 spheres. With a 4 × 4 × 1 domain decomposition, each subdomain has the dimensionless size 50 × 20 × 1500 and initially hosts N_p,1 = 50 × 20 × 300 = 300,000 spheres. The total number of particles in a system is N_T = 300,000 × N_cores = 4,800,000 × N_nodes. The initial height H_0 of the bed is 300, such that we also have L_z/H_0 = 5. The additional physical and numerical dimensionless parameters of our simulations are listed in Tab. 5.6. We use the same contact parameters for particle-bottom wall and particle-particle collisions.
Another important dimensionless parameter is the ratio of the inlet velocity U_in to the minimum fluidization velocity of the system U_mf. Here we select U_in/U_mf = 3 to run our weak scaling tests. To avoid a strong overshoot of the bed over the early transients, we first set U_in/U_mf = 2 for t ∈ [0 : 1785] and then U_in/U_mf = 3 for t > 1785. The weak scaling tests are performed by increasing the length L_x of the system together with the number of particles, as shown in Tab. 5.7, with L_y and L_z kept unchanged. As L_x increases, the domain horizontal cross-section looks more and more like a narrow rectangle and the bed behaves like a pseudo-3D/quasi-2D bed, as transverse secondary instabilities in the y direction are artificially strongly damped by the narrow periodic length L_y while transverse secondary instabilities in the x direction are free to develop. This configuration is purposely selected to facilitate the visualisation of the bubble dynamics inside the bed. As expected, the flow field does not vary much in the y direction (see Fig. 5.21). Note that this does not affect our weak scaling tests since, with 4 subdomains in the y direction and bi-periodic boundary conditions, each subdomain has exactly 8 neighbors, regardless of whether the cross-section is a narrow rectangle or a square. In other words, the cartesian domain decomposition is fully 2D. The evaluation of the scaling factor is carried out over 20,000 time steps with U_in/U_mf = 3, i.e., for t > 1785.

Table 5.7 - System size in the fluidized bed weak scaling tests. Each node hosts 16 cores, i.e., N_cores = 16 × N_nodes, and each core initially hosts N_p,1 = 300,000 spheres, thus N_T = 300,000 × N_cores = 4,800,000 × N_nodes.

Fig. 5.21 illustrates the early transients for U_in/U_mf = 2 of the simulation with 19,200,000 particles, over which the primary streamwise (in the z direction) instability develops, as well documented in the literature. Then a secondary transverse (horizontal, in the x direction) instability triggers, grows and leads to the creation of a first big bubble that eventually bursts. Fig. 5.21 also shows the time evolution of the fluid porosity in an x-z cut plane located at L_y/2 over the transition from U_in/U_mf = 2 to U_in/U_mf = 3. For t > 1785, the system progressively transitions to its bubbling regime. The level of intermittency decreases with time until the system reaches a pseudo-stationary bubbling regime. Fig. 5.22 shows a 3D snapshot of the flow field (velocity contours in an x-z cut plane located at L_y and 3D contours of ε = 0.75) at t = 2142. The presented results are qualitatively in line with the expected behaviour of a fluidized bed in the selected flow regime (Pepiot and Desjardins (2012)).
Fig. 5.23 shows the parallel scalability of our granular solver Grains3D in our fluidized bed parallel simulations. The overall parallel efficiency of our granular solver is very satisfactory. The scaling factor S(n = N_nodes) is 0.91 for the largest system investigated, i.e., for 230,400,000 particles on 48 nodes/768 cores. This very high scalability for such a high number of particles derives from less frequent collisions between particles than in a dense granular medium. Although collisions are constantly happening in the system, the presence of the fluid and the overall observed dynamics lead to particles often advancing over a few solid time steps without colliding with another particle. We would like to emphasize that, in such a fluidized bed simulation, most of the computing time is spent in computing particle trajectories with collisions, i.e., in the granular solver. This has been shown as well in a companion paper (Bernard et al.). So overall, measuring the parallel efficiency of the granular solver only in such systems still supplies a rather reliable indication of how the whole fluid-solid solver scales. Although Fig. 5.23 shows that the scaling factor seems to slightly degrade with an increasing number of nodes, the trend reveals that simulations with 1 billion particles on a few thousands of cores can be performed with a reasonably satisfactory scalability. This is indeed very encouraging.
Discussion and Perspectives
We have suggested a simple parallel implementation of our granular solver Grains3D based on a fixed cartesian domain decomposition and MPI communications between subdomains. The MPI strategy with tailored messages, non-blocking sendings and type conversion has proven to be particularly efficient when the flow configuration does not require any particular dynamic load balancing of the number of particles per core. In the three flow configurations investigated in this work, the parallel performance of the code is deemed to be more than acceptable, and even satisfactory to very satisfactory. For systems with more than 100,000 particles per core, the scaling factor S(n) is consistently larger than 0.75. In case particles are non-spherical, S(n) is actually larger than 0.9 for computations on up to a few hundreds of cores.
We have also shown that the parallel performance is not only limited by the parallel overhead in terms of messages sent by and received from cores, combined with copying the required information into buffers before sending and treating the information received, but also by the competitive access to and proper management of random-access memory on a multi-core architecture. The aftermath of this known limitation is the requirement to enhance even the serial parts of the code. This reprogramming task might be tedious but should be very beneficial in the long run as new architectures are likely to have more and more cores per processor and more and more processors per node. Although Grains3D went through this refactoring process, there is still room for further improvement.
In its current state, Grains3D offers unprecedented computing capabilities. Systems with up to 100,000,000 non-spherical particles can be simulated on a few hundreds of cores.
Besides, the trend shown by the scaling factor as a function of the number of cores or nodes suggests that the milestone of a billion particles is attainable with a decent parallel performance, without fluid or with fluid in the framework of a two-way Euler/Lagrange coupling method. This will create incentives to examine flow configurations that were beyond reach before and strengthen the position of numerical simulation associated with high performance computing as an indispensable tool to extend our comprehension of granular flow dynamics.
The next research directions that we will explore in the short term on the purely computational side to further enhance the computing capabilities of Grains3D are the following ones:
• the development of a dynamic load balancing algorithm to supply a good parallel performance in flow configurations with high particle volume fraction heterogeneities and significant particle volume fraction time variations. We will proceed in two steps. First, we will implement an algorithm that dynamically balances the load of particles per core in one direction only (a minimal sketch of this idea is given after this list) and make sure this algorithm exhibits a good parallel performance. Second, we will extend this algorithm to dynamic load balancing in 3 directions. Conceptually, dynamic load balancing is not particularly complex but a parallel implementation that scales well is the true challenge,
• the intra-processor and intra-node limitation due to competitive access to memory and/or MPI latency may be partly corrected by moving to a hybrid OpenMP/MPI parallelisation instead of an all-MPI one, such as the one suggested by Berger et al. (2015),
• as the number of cores reaches a few thousands, the MPI latency as well as the number of messages sent and received might start to become a serious limitation, although we have not explored this range of numbers of cores yet. In case this should happen, our simple though so far very efficient MPI strategy might need to be upgraded too, with at least improvements in the scheduling of messages or other techniques,
• finally, although the ability to compute granular flows with non-spherical convex shapes opens up fascinating perspectives to address many open questions in the dynamics of real-life granular systems, this does not cover all possible particle shapes. In fact, many non-spherical particles are also non-convex. There is hence a strong incentive to devise a contact detection algorithm that can address granular media made of non-convex particles. We will examine this issue in Grains3D-Part III: extension to non-convex particles.
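As announced in the first item of the list, a minimal sketch of the 1D dynamic load balancing idea is given below (purely illustrative, not an existing Grains3D feature): given the number of particles currently counted in thin slices along one direction, the subdomain boundaries are placed so that each core receives approximately the same number of particles.

#include <vector>

// Given the number of particles currently counted in thin slices of width
// sliceWidth along one direction, return the positions of the nCores-1 interior
// subdomain boundaries such that each core receives roughly the same load.
std::vector<double> balancedBoundaries1D(const std::vector<int>& particlesPerSlice,
                                         double sliceWidth, int nCores)
{
  long total = 0;
  for (std::size_t s = 0; s < particlesPerSlice.size(); ++s) total += particlesPerSlice[s];

  std::vector<double> boundaries;
  const double target = static_cast<double>(total) / nCores;
  double accumulated = 0.0;
  int nextCut = 1;
  for (std::size_t s = 0; s < particlesPerSlice.size() && nextCut < nCores; ++s)
  {
    accumulated += particlesPerSlice[s];
    if (accumulated >= nextCut * target)
    {
      boundaries.push_back((s + 1) * sliceWidth); // cut at the end of this slice
      ++nextCut;
    }
  }
  return boundaries;
}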
Acknowledgements
This work was granted access to the HPC resources of CINES under the allocations 2013-c20132b6699 and 2014-c20142b6699 made by GENCI. The authors would like to thank Dr. Manuel Bernard who developed the two-way Euler/Lagrange numerical model for the coupled fluid/particle numerical simulations.
Résumé
This chapter is dedicated to the parallel aspects of the Grains3D code. It describes a new strategy for the parallel implementation of Grains3D. The work is carried out by adopting a classical domain decomposition approach, MPI (Message Passing Interface) communications between subdomains and an implementation relying on a "geolocation" system for the particles: particles are geolocated when they lie in the neighbourhood of the interfaces between subdomains, so as to optimize the messages exchanged between the processes. The issue of memory management is also addressed. The implementation has been tested on several granular flow configurations such as silo discharges, collapses of particle columns and fluidized beds. These tests have shown that Grains3D is able to simulate systems containing a few hundred million particles, thus paving the way for numerical simulations of systems of a billion particles.
Particle-Resolved Simulation
For decades, most catalytic refining and petrochemical reactions have been processed in fixed bed reactors. In the downstream oil and gas industries, these reactors represent the majority of reactor plants. Particles (catalyst pellets) are randomly stacked in these reactors and reactants, such as liquid or gas, flow through these packed beds in the upward or downward direction. It is known that these particles are made of a porous medium in which the pores hold noble active materials. Usually, the particle length is of the order of a few millimetres.
Traditionally, the chemical industry is always looking for optimised and economical processes, for example long-lasting efficient catalysts. It is a matter of interest to investigate at the fundamental level the physics that governs these industrial plants. Based on these investigations, engineers can then improve the performance of the reactors both in terms of chemical reactions and in terms of mechanical behaviour.
Until the late 90s, all the studies aimed at optimising processes in fixed beds were carried out experimentally and analytically. Later on, the numerical approach was gradually introduced in the community. Among others, Kuipers and van Swaaij (1998) presented the future of numerical simulation in chemical engineering and Logtenberg and Dixon (1998) explored the use of numerical simulation to study heat transfer in fixed bed reactors. They used a finite element commercial code to solve the 3D Navier-Stokes equations. The simulation consisted of an arrangement of 8 spherical particles only. Afterwards, Dixon and Nijemeisland (2001) presented Computational Fluid Dynamics as a design tool for fixed bed reactors, still limited to low tube-to-particle diameter ratios (N ∈ [2; 4]). Romkes et al. (2003) extended this limitation to channel-to-particle diameter ratios of 1 < N < 5 and compared their numerical results to experimental data. They found that their tool could predict the particle-to-fluid heat transfer with an average relative discrepancy of 15% with respect to the experimental data. Gunjal et al. (2005) studied the fluid flow through an arrangement of spherical particles to understand the interstitial heat and mass transfer. Particles were arranged periodically following simple cubical, 1D rhombohedral, 3D rhombohedral and face-centered cubical geometries. In this framework of finding the best particle arrangement that can represent a whole industrial bed, Freund et al. (2003) presented their work applied to a structured simple cubic packing and a random packing. In addition, they highlighted the advantages of modelling approaches such as deriving reliable correlations from "numerical experiments".
In the literature, the use of Discrete Particle modelling is becoming more and more widespread due to its conceptual simplicity. This method combined with Computational Fluid Dynamics has proven to be an efficient and powerful tool for the study of the physics behind numerous industrial processes (among others van Buijtenen et al., Deen et al., Sutkar et al., Rahmani and Wachs (2014), Dorai et al. (2015)). With the growth of computing capabilities, many research groups adopted a multi-scale strategy (Deen et al. (2004), van der Hoef et al. (2008)) which targets the up-scaling of local information (at the particle scale), known as the micro-scale, to the intermediate scale, known as the meso-scale (usually the laboratory scale), and later on to the macro-scale (industrial plant). Fig. 6.1 illustrates the aforementioned strategy, where the micro-scale is usually resolved with Direct Numerical Simulation (DNS), also called Particle-Resolved Simulation (PRS). The power of this method relies on the fact that the momentum, heat or mass transfer is fully resolved with almost no assumption (see for example the works of Deen et al. and Wachs et al. (2015)). The PRS solutions serve as a benchmark to create correlations that will be implemented in the meso-scale model (Beetstra et al. (2007a;b), Esteghamatian et al. (2017)) and the macro-scale one.
Finally, coming back to fixed bed numerical simulation, the DEM approach combined with the PRS method enables researchers to simulate everything from the filling of reactors with particles and the flow through the packed bed to the chemical reactions and heat transfers between the bed and the fluid. At the PRS level, various methods have been developed during the last two decades.

Figure 6.1 - Illustration of the up-scaling procedure. Micro: DNS approach, Meso: Euler-Lagrange approach, Macro: Euler-Euler approach.
B
One of the earliest method is the body-conformal mesh or boundary tted methods (among others see the works of [START_REF] Johnson | Simulation of multiple spheres falling in a liquid-lled tube[END_REF] or [START_REF] Wan | Fictitious boundary and moving mesh methods for the numerical simulation of rigid particulate ows[END_REF]). The boundary tted methods have the advantage of capturing the details of the ow dynamics around rigid bodies. Indeed this method is very powerful to capture momentum, heat and mass boundary layers around immersed objects but su fers from a weak on computational performance since a re-meshing process is needed at each time step. This method is often combined with various numerical schemes that have been suggested in the literature, the most famous ones are the Arbitrary-Lagrangian-Eulerian (ALE) formulation and the Deforming-Spatial-Domain/Stabilized Space-Time (DSD/SST) which is generally used in combination with a Finite Element discretization.
Arbitrary-Lagrangian-Eulerian (ALE)
The ALE is an hybrid method which combines the Lagrangian description of the grid cell where there is a "small" motion and its Eulerian description where it is almost impossible for the mesh to track the motion. In this method the boundary nodes are treated as Lagrangian and the intermediate node velocities are interpolated between the boundary node velocities. This method was initially established by [START_REF] Christie | Finite element methods for second order di ferential equations with signi cant rst derivatives[END_REF], [START_REF] Belytschko | Quasi-eulerian nite element formulation for uid-structure interaction[END_REF], [START_REF] Liu | Arbitrary lagrangian-eulerian petrovgalerkin nite elements for nonlinear continua[END_REF] for Finite Element formulation. The works of Feng et al. (1994a;b), [START_REF] Hu | Direct simulation of ows of solid-liquid mixtures[END_REF], [START_REF] Hu | Direct numerical simulations of uid-solid systems using the arbitrary Lagrangian-Eulerian technique[END_REF] are among others the rst studies to apply the method to particulate ow (Newtonian and non-Newtonian) problems. On unstructured grids (F . 6.2) they have the advantage of capturing precisely the uid-solid interface. It is well known that despite the exceptional accuracy of this method, simulations of dense particulate systems are computationally expensive. This limits their use to study a small system of particulate ows. In particular, the re-meshing step in the simulation algorithm scales poorly on parallel computers. The DSD/SST was rst introduced by [START_REF] Tezduyar | A new strategy for nite element computations involving moving boundaries and interfaces-the deforming-spatial-domain/space-time procedure: I. The concept and the preliminary numerical tests[END_REF] for problems related to deforming spatial domain in a Finite Element framework. In this formulation, the problem is written in its variational form over the associated space-time domain. This implies that the deformation of the spatial domain is taken into account. The space-time mesh is generated over the spacetime domain of the problem, within each time step, the interface nodes move with the interface Figure 6.2 -Arbitrary-Lagrangian-Eulerian. Credits: [START_REF] Johnson | Simulation of multiple spheres falling in a liquid-lled tube[END_REF].
(F . 6.3). Hence, during a time step, the interface nodes move with the interface. After each time step, a new distribution of mesh covers the new spatial domain when there is a motion. Details and extension of the DSD/SST can be found in [START_REF] Johnson | Simulation of multiple spheres falling in a liquid-lled tube[END_REF]1997;1999).
F
Fixed mesh methods have a non negligible advantage since they scale well on large supercomputers but with the price of low accuracy at the uid-solid interface as a local reconstruction is required.
Lattice-Boltzmann Method (LBM)
The LBM [START_REF] Ladd | Sedimentation of homogeneous suspensions of non-Brownian spheres[END_REF], [START_REF] Ladd | Lattice-Boltzmann simulations of particle-uid suspensions[END_REF], [START_REF] Feng | The immersed boundary-lattice Boltzmann method for solving uid-particles interaction problems[END_REF], [START_REF] Derksen | Direct numerical simulations of dense suspensions: wave instabilities in liquid-uidized beds[END_REF], [START_REF] Van Der Hoef | Lattice-Boltzmann simulations of low-Reynolds-number ow past mono-and bidisperse arrays of spheres: results for the permeability and drag force[END_REF]), Hill et al. (2001b;a), [START_REF] Third | Comparison between nite volume and latticeboltzmann method simulations of gas-uidised beds: bed expansion and particle-uid interaction force[END_REF]) has proven to be successful for the numerical simulation of particle-laden ows. This method is a relatively new technique for complex uid systems. Unlike the traditional CFD methods, LBM consists in modelling the uid with ctive particles undergoing consecutive propagation and collision processes over a discrete lattice mesh (F . 6.4). In this method, the uid variables are considered as distribution functions. A "Bounce Back" method [START_REF] Ladd | Numerical simulations of particulate suspensions via a discretized Boltzmann equation. Part 2. Numerical results[END_REF], [START_REF] Ladd | Lattice-Boltzmann simulations of particle-uid suspensions[END_REF]) is often used to account for rigid particles. By means of LBM, [START_REF] Hölzer | Lattice boltzmann simulations to determine drag, lift and torque acting on non-spherical particles[END_REF] determined correlations for the forces acting on a non-spherical particles. [START_REF] Günther | Lattice boltzmann simulations of anisotropic particles at liquid interfaces[END_REF] used the LBM to simulate anisotropic ellipsoidal particles to mimic the shape of clay particles. Using LBM, [START_REF] Janoschek | Accurate lubrication corrections for spherical and nonspherical particles in discretized uid simulations[END_REF] investigated the lubrication corrections on the non-normal direction on spheroids. Hill et al. (2001a;b) also demonstrated the ability of the method to computed uid ows through porous media made of assemblies of spherical particles.
Immersed Boundary Method (IBM)
Figure 6.5 -Illustration of the IBM on a disk. The Lagrangian points are distributed on the boundary. Credits: [START_REF] Vanella | Adaptive mesh re nement for immersed boundary methods[END_REF].
IBM was primarily introduced by [START_REF] Peskin | Numerical analysis of blood ow in the heart[END_REF][START_REF] Peskin | The immersed boundary method[END_REF] for biological uid ow simulations in which the method handles very thin interfaces. In IBM the uid ow is solved on the Eulerian grid and the immersed body boundary is represented with Lagrangian points at its surface. Then approximations of the Delta distribution by smoother functions allow the interpolation between the two grids (F . 6.5). Later on the method was extended to suspension ow problems [START_REF] Peskin | Numerical analysis of blood ow in the heart[END_REF][START_REF] Peskin | The immersed boundary method[END_REF], [START_REF] Kim | Immersed boundary method for ow around an arbitrarily moving body[END_REF], [START_REF] Uhlmann | An immersed boundary method with direct forcing for the simulation of particulate ows[END_REF]). [START_REF] Zastawny | Derivation of drag and lift force and torque coe cients for non-spherical particles in ows[END_REF] utilised IBM to propose correlations for drag force, lift force and torques for four di ferent type of non-spherical particles (F . 6.6).
Distributed Lagrange Multiplier / Fictitious Domain (DLM/FD)
The Distributed Lagrange Multiplier / Fictitious Domain (DLM/FD) (F . 6.7) method initially introduced by [START_REF] Glowinski | A distributed Lagrange multiplier/ ctitious domain method for particulate ows[END_REF][START_REF] Mellmann | The transverse motion of solids in rotating cylinders-forms of motion and transition behavior[END_REF].
Unlike the IBM, the DLM/FD formulation treats the particle boundary and volume as an object under solid body motion [START_REF] Patankar | A new formulation of the distributed Lagrange multiplier/ ctitious domain method for particulate ows[END_REF], [START_REF] Yu | Viscoelastic mobility problem of a system of particles[END_REF], [START_REF] Yu | A direct-forcing ctitious domain method for particulate ows[END_REF], 2012). [START_REF] Wachs | A DEM-DLM/FD method for direct numerical simulation of particulate ows: Sedimentation of polygonal isometric particles in a Newtonian uid with collisions[END_REF][START_REF] Wachs | PeliGRIFF, a parallel DEM-DLM/FD direct numerical simulation tool for 3D particulate ows[END_REF]). In fact, Lagrangian points are distributed not only on the boundary but in the volume occupied by the particle too.
Figure 6.7 -Illustration of the DLM/FD method on a disk. The Lagrangian points are distributed all other the rigid body. Credits: [START_REF] Wachs | Accuracy of nite volume/staggered grid distributed lagrange multiplier/ ctitious domain simulations of particulate ows[END_REF]. [START_REF] Segers | Immersed boundary method applied to single phase ow past crossing cylinders[END_REF] used IBM to study the uid-structure interaction of single phase ow past crossing cylinders. [START_REF] Tavassoli | Direct numerical simulation of uid-particle heat transfer in xed random arrays of non-spherical particles[END_REF] carried out DNS of the heat transfer in xed random arrays of spherocylinders in order to characterize the uid-solid heat transfer coe cient.
So far, the most complex particle shapes in particulate ow in the literature using xed mesh methods can be seen in the works of [START_REF] Rahmani | Free falling and rising of spherical and angular particles[END_REF] and [START_REF] Wachs | Accuracy of nite volume/staggered grid distributed lagrange multiplier/ ctitious domain simulations of particulate ows[END_REF]. In fact, [START_REF] Rahmani | Free falling and rising of spherical and angular particles[END_REF] showed the in uence of particle shape in the path instabilities of free raising or settling of angular particles (F . 6.8), whereas [START_REF] Wachs | Accuracy of nite volume/staggered grid distributed lagrange multiplier/ ctitious domain simulations of particulate ows[END_REF] highlighted the accuracy of the DLM/FD method on spherical and angular particles from low to high solid volume fractions and from Stokes to moderate Reynolds regimes. In [START_REF] Dorai | Fully resolved simulations of the ow through a packed bed of cylinders: E fect of size distribution[END_REF], PRS of packed beds of cylinders are preformed using the DLM/FD formulation. They showed the accuracy of the method on the computed pressure drop.
A M R (AMR)
The AMR method was st introduced by [START_REF] Berger | Adaptive mesh re nement for hyperbolic partial di ferential equations[END_REF] and [START_REF] Berger | Local adaptive mesh re nement for shock hydrodynamics[END_REF]. Their original work consists in creating a ne Cartesian grid which is embedded into a coarser grid. Pursuing the concept of [START_REF] Berger | Adaptive mesh re nement for hyperbolic partial di ferential equations[END_REF], [START_REF] Almgren | A conservative adaptive projection method for the variable density incompressible navier-stokes equations[END_REF] extended the method to solve the variable density incompressible Navier-Stokes equations which was later on extended to two-phase ow ( uid-uid) problems [START_REF] Sussman | An adaptive level set approach for incompressible two-phase ows[END_REF]).
The method suggests that local grid re nement should be performed when needed depending on the ow conditions at the interface and the far eld mesh remains coarse. The major advantages of the AMR lie on the fact that the subcategories of methods of xed mesh can be incorporate in it. That is to say that IBM and AMR [START_REF] Roma | An adaptive version of the immersed boundary method[END_REF], [START_REF] Vanella | Adaptive mesh re nement for immersed boundary methods[END_REF] or DLM/FD and AMR [START_REF] Van Loon | A combined ctitious domain/adaptive meshing method for uid-structure interaction in heart valves[END_REF], [START_REF] Kanarska | Mesoscale simulations of particulate ows with parallel distributed lagrange multiplier technique[END_REF]) can be combined so that when a mesh re nement is needed in the vicinity of the interface, additional Lagrangian points are added, hence locally improving the accuracy of the computed solutions (F . 6.9). However, one of the challenges of AMR is to make it scale well on supercomputers [START_REF] Kanarska | Mesoscale simulations of particulate ows with parallel distributed lagrange multiplier technique[END_REF]).
(a) Computational domain and flow around a sphere (IBM). Credits: [START_REF] Vanella | Adaptive mesh re nement for immersed boundary methods[END_REF].
(b) Flow through a cubic array of spheres (DLM/FD). Credits: [START_REF] Kanarska | Mesoscale simulations of particulate ows with parallel distributed lagrange multiplier technique[END_REF].
C
According to the purpose of the second part of this thesis, i.e. the modelling of uid ow through packed beds of particles, the well suited numerical technique is the boundary tted method. Indeed, it o fers the best accuracy among the three aforementioned methods especially for particles of complex shape. After this method would come the adaptive mesh renement combined either with the DLM/FD formulation or the IB formulation. Then the DLM/FD or the IB method would be the last one to achieve the goal of this study. Having said that, the method we plan to develop must also be applicable to freely-moving particles. This hence disquali es the use of a boundary tted method, due to the aforementioned low computing performance related to constant re-meshing needs. For the same reason, a combined AMR-DLM/FD approach would require the AMR part to be dynamic. As far as we know, this has never been done yet in the literature. Finally, our granular solver Grains3D is already fully coupled with an Eulerian Navier-Stokes solver by means of a DLM/FD method combined with a Finite Volume Staggered Grid scheme and a second order interpolation operator to impose the rigid body motion constraint at the particle boundary based on Finite
Element cubic quadratic basis functions. This constitutes the PRS model of the PeliGRIFF platform. It has proven to supply computed solutions of satisfactory accuracy for spherical and non-spherical convex bodies. It is hence sensible to build up on the existing tools and to extend the DLM/FD method in PeliGRIFF to non-convex particle shapes. As seen in the previous part of this thesis, Grains3D possesses now the capability to handle non-convex particles enhances and justi es the use of PeliGRIFF to resolve the intricate ow dynamics through xed bed reactors made of non-convex particles.
R
Ce chapitre propose une revue de la littérature sur la modélisation des écoulements uideparticules en utilisant la méthode de résolution directe. En e fet, depuis quelques décennies ces écoulements sont souvent modélisés avec des particules sphériques. Grâce à l'avènement du calcul haute performance, les chercheurs proposent des modèles avec des particules non sphériques. Pour celà, plusieurs méthodes sont rentrées dans la communauté telles que les méthodes à maillage adaptatif qui suivent les déplacements des particules de façon lagrangienne parmis lesquelles la célèbre "Arbitrary Lagrangian Eulerian" (ALE) ou encore la"Deforming-Spatial-Domain/Stabilized Space-Time" (DSD/SST). Les méthodes à maillage xe sont aussi très courant grâce au fait qu'elles éliminent le remaillage des particules engendrant ainsi un gain précieux du temps de calcul; parmis lequelles on trouve la méthode "Immersed Boundary" (IBM) ou la méthode "Distributed Lagrange Multiplier/Fictitious Domain (DLM-FD)" ou la méthode "Lattice Boltzmann". Les deux dernières décennies ont vu naître la méthode "Adaptive Mesh Re nement".
Cette revue de littérature a conduit à conclure que la méthode "Distributed Lagrange Multiplier/Fictitious Domain (DLM-FD)" est compatible aux problémes qui font l'objet de cette thèse car elle est déjà existante sur la plateforme PeliGRIFF -Grains3D moyennant une certaine adaptation tout en béné ciant ainsi de nouvelle extension du code Grains3D . A part of this chapter has been written as a rst draft of manuscript that I intend to submit with my co-authors for publication in Chemical Engineering Science. The provisional title of the manuscript is:
N -P GRIFF
Particle Resolved Simulation of packed beds of trilobal/quadrilobal particles using a Finite Volume/Staggered Grid Distributed Lagrange Multiplier/Fictitious Domain formulation.
In this chapter we present the implementation of the numerical method to compute the ow around poly-lobed particles. Then we assess the space convergence studies of the computed solutions on assorted ow con gurations and ow regimes. Finally we compute the pressure drop through packed beds of poly-lobed particles to study the e fect of shape on the hydrodynamics.
A
I regularly shaped particles are ubiquitous in many di ferent real-life systems. For instance, in the downstream oil and gas industries, trilobal and quadralobal shaped particles are used in many chemical reactors for process purposes. Unfortunately, most of corresponding numerical simulations are carried out using idealized spherical particles, spheroids, cubes, or tetrahedron. Very often, the weakness relies on the modelling of the collisional behaviour either to create the packed bed of particles for ows through a xed bed or to compute particle/particle collisions for freely-moving particles in a uidized bed. In Chapter 3, we suggested a numerical technique implemented in our granular dynamics code Grains3D [START_REF] Wachs | Grains3D, a exible DEM approach for particles of arbitrary convex shape-Part I: Numerical model and validations[END_REF]) to treat the collisional behaviour of particles of (almost) arbitrary shape even non-convex one. In [START_REF] Rahmani | Free falling and rising of spherical and angular particles[END_REF] have shown the successful implementation of our Distributed Lagrange Multiplier/Fictitious Domain method with a Finite Volume/Staggered Grid discretization scheme for polyhedral particles in the fully parallel numerical platform PeliGRIFF [START_REF] Wachs | PeliGRIFF, a parallel DEM-DLM/FD direct numerical simulation tool for 3D particulate ows[END_REF]) for multiphase ow simulations. [START_REF] Wachs | Accuracy of nite volume/staggered grid distributed lagrange multiplier/ ctitious domain simulations of particulate ows[END_REF] have shown that the method supplies solutions of satisfactory accuracy which puts us in a favourable position to suggest a similar Discrete Element Method -Particle-Resolved Simulation (DEM-PRS) approach. The aim of this study is to go one step further and to extend our numerical method to non-convex particles. Trilobal and quadralobal particles are chosen to illustrate the novel capabilities of Peli-GIRFF. We keep our 2 nd order interpolation operator for velocity reconstruction at the particle boundary and the solutions are computed without any hydrodynamic radius calibration. First, we assess the space convergence and overall accuracy of the computed solutions. Then, we show the shape e fects on pressure drop through packed beds of trilobes and quadralobes with an uncertainty quanti cation for the e fects of random packings encountered in these applications.
I
Flow through porous media made of assemblies of xed particles is encountered in nature and in many industrial plants especially in the process industry, especially in the process industry. Many studies (analytical or numerical) were carried out for applications as e.g. chemical reactors, biomass converters and catalytic exhaust pipes. From numerical point of view, Direct Numerical Simulation tools referred to as Particle Resolved Simulation (PRS) appear to be a good candidate to solve the intricate momentum transfer between solid and uid phases. In fact, numerous methods exist in the literature to simulate the ow dynamics around immersed objects.
This work is entirely focused on the use of a DLM/FD xed mesh method combined with a FV/SG dicretisation scheme and its application to complex particle geometry. The improvement of the DLM/FD method suggested in [START_REF] Wachs | Accuracy of nite volume/staggered grid distributed lagrange multiplier/ ctitious domain simulations of particulate ows[END_REF] does not require the use of any hydrodynamic radius calibration and is hence well suited to particles of arbitrary shape. In fact, the goal of this work is two-fold: (i) to examine again the accuracy of the method when it is extended to non-convex particle shapes. For that purpose, space convergence of computed solutions is assessed on assorted ow con gurations and ow regimes, i.e. the ow through innite arrays of trilobal/quadralobic particles of various orientation and volume fraction. Then the accuracy of the method is investigated in ows through a packed bed made of the same type of particles in Stokes regime and nite Reynolds number regime; (ii) to use the method to predict the pressure drop through packed bed reactors in order to discriminate new shapes of catalyst particle. For decades processes in the chemical industries always relied on analytical and experimental works which often exhibit excessive costs. Therefore, it is of great interest to develop numerical tools to estimate the packing voidage and the pressure drop through the bed so that discrimination of new particle shapes can be carried out before prototyping and building expensive pilot units. Indeed the aforementioned PRS method combined with a Discrete Element Method granular solver can give an insight in local variables of the ow at the particle level and can be used as a tool to provide guidelines for up-scaling procedures to laboratory scale pilot plant and latter on to industrial pilot.
2 A DLM/FD PRS - [START_REF] Glowinski | A distributed Lagrange multiplier/ ctitious domain method for particulate ows[END_REF][START_REF] Mellmann | The transverse motion of solids in rotating cylinders-forms of motion and transition behavior[END_REF] were the rst to introduce the concept of DLM/FD to the community of particulate ow modelling. It was originally combined with a Finite Element Method and latter on extend to Finite Volume method [START_REF] Wachs | Accuracy of nite volume/staggered grid distributed lagrange multiplier/ ctitious domain simulations of particulate ows[END_REF]). The principle of the DLM/FD formulation consists in enforcing the rigid body motion on the particle domain within an Eulerian xed-grid as a constraint. Solid objects are de ned by using Lagrangian points over both their volume and their surface. In many works, authors point out the use of hydrodynamic radius calibration in order to correct the computed drag force on a xed spherical particle in creeping ow regime and low solid volume fraction φ. Then the calibration is used for any ow regime and any solid volume fraction. [START_REF] Wachs | Accuracy of nite volume/staggered grid distributed lagrange multiplier/ ctitious domain simulations of particulate ows[END_REF] pointed out that the hydrodynamic radius calibration becomes questionable when it comes to deal with non-spherical shapes. Despite the work of [START_REF] Breugem | A second-order accurate immersed boundary method for fully resolved simulations of particle-laden ows[END_REF] that suggested a presumably optimal hydrodynamic radius for cubic particles, it is totally unclear how to determine a hydrodynamically calibrated radius for any non-spherical particle. It seems to us that for non-convex particle the concept of hydrodynamic radius calibration is almost meaningless. Therefore, correct and accurate methods without resorting to using any sort of geometric calibration should be selected to model the uid-solid interaction. For instance, in [START_REF] Deen | Direct numerical simulation of ow and heat transfer in dense uid-particle systems[END_REF] and [START_REF] Wachs | Accuracy of nite volume/staggered grid distributed lagrange multiplier/ ctitious domain simulations of particulate ows[END_REF], the authors presented satisfactory method, respectively IBM and DLM/FD, without any such geometric calibration. The assets of the enhanced IB and DLM/FD methods suggested in these two works rely only on an accurate velocity reconstruction at the particle boundary and a distribution of Collocation Points (CP) on the solid domain compatible with the formulation of the problem and the discretization scheme adopted.
For spherical particles, it is well known [START_REF] Uhlmann | An immersed boundary method with direct forcing for the simulation of particulate ows[END_REF], [START_REF] Feng | Robust treatment of no-slip boundary condition and velocity updating for the lattice-Boltzmann simulation of particulate ows[END_REF], [START_REF] Wachs | Accuracy of nite volume/staggered grid distributed lagrange multiplier/ ctitious domain simulations of particulate ows[END_REF]) in the community that the best way to distribute these CP on the particle surface is to perform a dynamic simulation of a system in which CPs are considered as charged particles. Then, the nal state of the system corresponds to homogeneously distributed charged particles with minimum repulsion energy. Despite the accuracy of this method, the time scale of this type of simulation is very large and the method cannot be extended to nonspherical shape. There is hence a technical issue in distributing CPs as homogeneously as possible on the surface of a non-spherical particle while keeping the remarkable geometric features (edges, corners) of the shape at the discrete level. This is not an easy task and we will present a construction method for trilobes and quadralobes in this work. Recently, some studies other than those of our group extended the use of PRS methods to non-spherical particles but mostly for generalized ellipsoids/rounded particles [START_REF] Zastawny | Derivation of drag and lift force and torque coe cients for non-spherical particles in ows[END_REF], [START_REF] Tavassoli | Direct numerical simulation of uid-particle heat transfer in xed random arrays of non-spherical particles[END_REF]).
N
In line with the work of [START_REF] Wachs | A DEM-DLM/FD method for direct numerical simulation of particulate ows: Sedimentation of polygonal isometric particles in a Newtonian uid with collisions[END_REF][START_REF] Wachs | PeliGRIFF, a parallel DEM-DLM/FD direct numerical simulation tool for 3D particulate ows[END_REF]), Wachs et al. (2015)), our numerical is based on the classical Distributed Lagrange Multiplier/Fictitious Domain method in which the Lagrange multiplier is implicitly computed to enforce the rigid body motion, combined with a FV/SG scheme and a L2-projection algorithm for the solution of the Navier-Stokes equations. The method is coupled with a granular solver to solve the particle-particle collisions. Hence, the granular solver is employed to ll the reactor and create the pack of particles while the ow solver computes the uid ow through the packed bed of particles. In the rest of this chapter, we shortly remind the reader the formulation of both numerical methods and elaborate on their extension to poly-lobed particles. The collisional model for non-convex particles is based on the decomposition of the composite particle into a set of convex particles. The reader is referred to Chapter 3 for more details of the collision model for non-convex rigid bodies.
Introduction to PeliGRIFF
PeliGRIFF (Parallel E cient Library for GRains in Fluid Flow) (Wachs et al. (2007(Wachs et al. ( -2016))) is an object oriented code implemented in C++ for multi-core architecture. The discrete phase is handled by Grains3D . The open source library PELICANS is the kernel of PeliGRIFF which is used of PDE solvers. PeliGRIFF relies on di ferent libraries for linear algebra such as PETSc (Portable, Extensible Toolkit for Scienti c Computation), BLAS (Basic Linear Algebra Subprograms), LAPACK (Linear Algebra PACKage) and HYPRE BoomerAMG for preconditioners. The code can simulate uid-solid, using a DLM/FD approach, and uid-uid, using a Level-Set approach, two-phase ows. In addition, new extensions of PeliGRIFF enable simulations of heat and mass transfers between phases.
Governing equations for the uid ow solver
We shortly recall the general DLM/FD formulation for freely-moving particles.Then we present the rst-order operator splitting solution algorithm in the particular case of xed particles. Finally, we elaborate on our CP construction strategy for the case of poly-lobed particles.
Let Ω de nes a domain of R d , d ∈< 2, 3 >, ∂Ω its boundary. Then let be N P the number of rigid bodies P i (t) (i ∈ [1, N P ])that Ω is lled with. For the sack of simplicity, N P is considered to be equal to 1. Dirichlet boundary conditions are set on ∂Ω for the uid velocity eld. In the rest of the chapter, the "star" symbol denotes any dimensional quantity.
Dimensionless variables are de ned using the set of the following variables: L * c for length, U * c for velocity, T c = L * c /U * c for the convective time scale, ρ * f U * 2 c for pressure and ρ * f U * 2 c /L c for rigid-body motion Lagrange multiplier, ρ * f denotes the uid density. The combined conservation equations that govern both the uid and solid motion is written as follows:
1. Combined momentum equations
∂u ∂t + u • ∇u = -∇p + 1 Re c ∇ 2 u -λ over Ω (7.1) (ρ r -1)V P dU dt -Fr g * g * - j (F c ) j - P (t)
λdx = 0, over P (t) (7.2)
I P dω dt + ω × I P • ω + j (F c ) j × R j + P (t)
(λ × r) • dx = 0, over P (t), The following dimensionless numbers are introduced in the above equations:
Reynolds number Re c = ρ * f U * c L * c η * , ( 7
Time discretization scheme
The set of conservation equations is solved by a rst-order operator splitting algorithm. Diffusion and advection terms are treated by a Crank-Nicholson and a Adams-Bashford scheme. Further details on the method and algorithm can be found in [START_REF] Wachs | Accuracy of nite volume/staggered grid distributed lagrange multiplier/ ctitious domain simulations of particulate ows[END_REF] and [START_REF] Dorai | Fully resolved simulations of the ow through a packed bed of cylinders: E fect of size distribution[END_REF]. In this study, since particles are xed, our operator-splitting algorithm is comprises in two stages, written in a dimensionless form as follows:
1. A classical L2-projection scheme for the solution of the Navier-Stokes problem: nd u n+1/2 and p n+1 such that
ũ -u n ∆t - 1 2Re c ∇ 2 u n+1/2 = -∇p n+1 + 1 2Re c ∇ 2 u n , - 1 2 3u n • ∇u n -u n-1 • ∇u n-1 -γλ n ,
(7.9) (7.11) 2. A ctitious domain problem: nd u n+1 and λ n+1 such that u n+1u n+1/2 ∆t + λ n+1 = γλ n , (7.12)
∇ 2 ψ = 1 ∆t ∇ • ũ , ∂ψ ∂n = 0 on ∂Ω, (7.10) u n+1/2 = ũ -∆t∇ψ, p n+1 = p n + ψ - ∆t 2Re c ∇ 2 ψ.
u n+1 = 0 in P (t). (7.13)
where u, p, λ, ψ and ∆t denote the dimensionless uid velocity, uid pressure, DLM/FD Lagrange Multiplier to relax the constraint in E . 7.13, pseudo-pressure eld and time step respectively. The term γ ∈ [0 : 1] is a constant that sets the level of explicit direct forcing in the velocity prediction step. It has been shown that γ = 1 signi cantly improves the coupling between sub-problems (1) and ( 2) and allows the use of larger time steps ∆t. In practice, all computations are performed with γ = 1 (see [START_REF] Wachs | Accuracy of nite volume/staggered grid distributed lagrange multiplier/ ctitious domain simulations of particulate ows[END_REF] for more details).
Colocation Points on non-convex particles
As explained in [START_REF] Wachs | A DEM-DLM/FD method for direct numerical simulation of particulate ows: Sedimentation of polygonal isometric particles in a Newtonian uid with collisions[END_REF][START_REF] Wachs | PeliGRIFF, a parallel DEM-DLM/FD direct numerical simulation tool for 3D particulate ows[END_REF], the set of CP comprises a set of interior points distributed in the solid volume using staggered uid velocity nodes and a subset of boundary points distributed as uniformly as possible on the solid surface. An illustration on a 2D circular cylinder is shown in F . 7.2. In [START_REF] Wachs | Accuracy of nite volume/staggered grid distributed lagrange multiplier/ ctitious domain simulations of particulate ows[END_REF], two types of interpolation operator are considered (F . 7.2): a classical multi-linear operator [START_REF] Hö | Navier-Stokes simulation with constraint forces: Finite-di ference method for particle-laden ows and complex geometries[END_REF]) and a quadratic operator that uses the basis functions of a cubic Q2 nite element (9-point in 2D and 27-point in 3D stencil). The di culty arises when the particle shape is not isotropic. Far from being understood, the repartition of CP on non-convex body is not straightforward. Even the simple case of spherical particle is still subjected to discussion in the literature. Therefore, a particular care is dedicated to equally distribute the CP in the best manner possible.
As the construction of a non-convex shape is based on decomposing it into convex shapes, the construction of the CP is performed as follows:
• trilobal particle is made of three cylinders with a triangular prism which lls the central gap forming the connection between the three cylinders (F . 7.3a). The same procedure is applied to quadralobal particle but instead of a triangular prism, a rectangular parallelepiped is used (F . 7.3b). • the sets of interior CPs of all the components of the "composite" are merged by ensuring that they are neither overlapping. • the set of boundary points is distributed as follows:
-for the cylinders, the CPs are distributed in slices along the cylinder revolution axis with a constant distance. Let z be the revolution axis of the cylinder and k ∈ R On each face, the points are located at the nodes of a constant spacing ratio l a lattice made of squares for the rectangular faces and equilateral triangles for the triangular faces (F . 7.4).
-the boundary CPs are merged in the following manner: (i) the BP located in another convex component is discarded; (ii) the CPs on the edges of the components are kept to perfectly describe their shapes; (iii) the boundary CPs on the top and bottom disks of all the cylinders are kept except those on their edges which are on the cross-section of the polyhedron; (iv) the last empty area on the top and the bottom of the shape is lled by the CPs of the polyhedron and we ensure that the CPs are not too close. The resulting geometries are illustrated in F . 7.5
A
Methodology
A space convergence study is now presented in the aim of knowing the accuracy as a function of the mesh size and minimizing the computing resources. Literature on the accuracy of solutions computed with a DLM/FD methods exists for spheres [START_REF] Kanarska | Mesoscale simulations of particulate ows with parallel distributed lagrange multiplier technique[END_REF]) and cylinders [START_REF] Dorai | Multi-scale simulation of reactive ow through a xed bed of catalyst particles[END_REF]2015)). Intuitively, the space resolution for poly-lobed particles is expected to be even more demanding. Due to the lack of analytical solution, the space convergence study is based on the approach proposed by [START_REF] Richardson | The approximate arithmetical solution by nite di ferences of physical problems involving di ferential equations, with an application to the stresses in a masonry dam[END_REF] which consists in (i) estimating the reference solution by extrapolating the numerical solutions to zero mesh size and (ii) evaluating the accuracy of the computed solutions against the reference. The extrapolation reads:
Λ = Λ(h) + Kh β + O(h β+1 ), with h = N -1 p (7.14)
where N p , K and β denote respectively the number of CP on the circumscribed cylinder diameter d * , pre-factor of the relative error and convergence rate. From the equation E . 7.14, Λ ref = Λ(0) gives the exact extrapolated solution. Hence, the convergence is evaluated in terms of relative error e of the physical quantity Λ as follows:
e(Λ) = |Λ -Λ ref | |Λ ref | (7.15)
Here, the new shapes of particle are subjected to assorted ow regimes and ow con gurations such as: (i) ows through an in nite structured array of poly-lobed particles at low Reynolds number (Re c = 0.01) , (ii) ows through a packed bed of poly-lobed particles at low and moderate Reynolds numbers (Re c = 0.01 and Re c = 16).
Flow past a single poly-lobed particle in a tri-periodic domain
The rst attempt to assess the accuracy of the presented extension of the DLM/FD formulation has been inspired by the work of [START_REF] Zick | Stokes ow through periodic arrays of spheres[END_REF]. The test consists in computing the friction coe cient for a single particle in a tri-periodic domain, in other words the ow through an in nite of particles. The friction coe cient is computed as the pressure drop based on the diameter of the equivalent sphere of same volume. The relationship between the mean velocity u * , the imposed pressure drop ∆p * and the friction coe cient K for an innite structured simple array of poly-lobed particles, modelled as a single particle centered in a tri-periodic domain, reads:
∆p * l * s = 9 2 η * a * 2 φKu * (7.16)
where l * s , a * , η * , φ denote respectively the streamwise domain length, equivalent sphere radius, uid viscosity and the solid volume fraction de ned as φ = 1 -ε in which ε stands for the void fraction. Unlike spherical particles, there is an in nite way to orientate an elongated poly-lobed particle due to the anisotropy of its shape. In addition to the Reynolds number Re and the solid volume fraction φ, the Euler angles (ϕ, θ, ψ) and the aspect ratio a r should be included in the study.
In general, the packed particles are arranged in a random way which enables the uid to ow in a random interstitial pore shape and size. Depending on the latter the preferential streamwise directions are established, whereas in this test case the streamwise direction is only imposed by the geometry of the periodic domain. For the sake of simplicity and to cover a large range of all the parameters, two aspect ratios are chosen (a r = 1 and a r = 5), φ is varying from loose to dense packing and three sets of Euler angles are considered for the particle orientation (relative to the streamwise direction):
• parallel to the particle axis (ϕ = 0 • , θ = 0 • , ψ = 0 • ) denoted with the symbol " " (F . 7.6a), • perpendicular to the particle axis (ϕ = 90 • , θ = 0 • , ψ = 0 • ) denoted with the symbol "⊥" (F . 7.6b), • a rotation of 20 • about all the axis (ϕ = 20 • , θ = 20 • , ψ = 20 • ) denoted with the symbol "20" for moderate φ (F . 7.6c). In the following, for Stokes regimes, a di fusive time scale is de ned as T d = ρ f d * 2 /η * and the Reynolds number Re c is de ned by using the following terms: L * c = d * for the characteristic length scale and U * c = u * in for the characteristic velocity. d * and u * in stand for the diameter of the circumscribed cylinder and the inlet uid velocity respectively. Hence, the Reynolds number reads: First, we assess the accuracy of our method with particles of an aspect ratio a r = 1. The computed solutions obtained from various φ are plotted on F . 7.8. Simulations are carried out for Re c = 0.01 and ∆t/T d = 10 -2 . The trilobal particle seems to have nicer convergence than the quadralobic one. As expected, for high φ, the error e is much higher for the same N p i.e. a higher resolution is needed for dense particulate systems.
Re c = ρ * f u * in d * η * (7.17
Since the space convergence study on a r = 1 shows satisfying results, we now move on to the space convergence for a r = 5. Again, the study is still performed from loose to dense packing. Due to the domain size and shape restriction, the particles are only oriented perpendicularly and collinearly to the ow direction (the "⊥" and the " " con gurations). For the perpendicular con guration, a very high φ means that the uid is not owing any more because the orthographic projection of the particle is equal to that of the domain which is somehow "blocking" the uid to ow properly. For the parallel con guration, the particle orthographic projection is the area with the lobes which enable the uid to ow in the concavity of the particle even with high solid volume fraction. F . 7.7 plots the space convergence of the computed solutions. It can be observed that for φ = 0.59 (dense packing) the solutions exhibits higher relative error compared to the other systems. The previous numerical study can be summarized as follows:
• the most challenging cases are (i) when the particle axis is perpendicular to the main ow direction due to the lobe induced recirculation; (ii) when the particle axis is parallel to the main ow direction due to its at orthogonal face which creates singularities all over the edge, • all the simulations exhibit an average convergence rate of N -1.3 p which is in line with the work of [START_REF] Wachs | Accuracy of nite volume/staggered grid distributed lagrange multiplier/ ctitious domain simulations of particulate ows[END_REF] for spherical and polyhedral particles (see also the work of D'Avino and Hulsen (2010)),
• as expected, compared to low φ, simulations at high solid volume fraction require a higher N p for the same accuracy. For comparison purposes, we performed the simulations of ow past three di ferent shapes (TL, QL, CYL) to have a rst glance on the dependency of their friction coe cient on the solid volume fraction. The studied cases consist in comparing 3 elongated particles of the same volume and the same particle length L * p . It worth to remind that at the same volume a cylinder does not circumscribe neither the trilobe nor the quadralobe. In terms of particle orientations, two con gurations are selected: particle axis perpendicular and parallel to the streamwise direction. N p = 45 for a relative error e 5% and Re c = 0.01. T . 7.1 summarises the di ferences between the three shapes. F . 7.11 depicts the results of our comparison. In F . 7.11a, it can be observed that when the particle axis is perpendicular to the ow the friction coe cient K(φ) is similar for φ 0.34. The di ference is only noticed at high φ. In fact, in this con guration, the projected crosssectional areas (F . 7.9) of the particles are di ferent in the following order:
S ⊥ T L > S ⊥ CY L > S ⊥
QL . This leads to the following classi cation of friction coe cients: F . 7.11b, illustrates the computed friction coe cient K(φ) corresponding to a ow parallel to the particle axis. The orthogonal cross-sectional areas of the particles are classi ed in the following order: S CY L < S T L < S QL (F . 7.10). Therefore, the resulting friction coe cients are classi ed as follows:
K ⊥ T L > K ⊥ CY L > K ⊥ QL . (
K CY L < K T L < K QL .
The results presented in F . 7.11 show that the computed solutions depend drastically on the particle orientation, hence the projected cross-sectional area. At low concentration, there is no clear distinction between the shapes.
Based on these results, we would like to examine the accuracy of the solution computed by our numerical model in the case of the ow through a packed bed. In fact, a packed bed is representative of the real operating conditions regarding volume fraction, the position and orientation of particles.
Flow past a small packed bed of poly-lobed particles
For decades, predicting the ow through a packed bed of particles has been an interesting and challenging subject in the chemical engineering community. One of the major challenges is the e fect of particle shape in these systems, where the bed porosity and pressure drop are very important for industrial operations. The second step of the space convergence study is performed on a small size bed of packed particles (a r = 2). This case is more representative of xed beds than the periodic array of a single particle as particles present random orientations forming a porous medium in which the uid ow is more complex. Rules have been suggested in previous related works regarding the number of CP needed to discretize a particle and guarantee a computed solution of satisfactory accuracy. However the straightforward extrapolation from ow in beds of cylinders [START_REF] Dorai | Multi-scale simulation of reactive ow through a xed bed of catalyst particles[END_REF]2015)) and polyhedra [START_REF] Wachs | Accuracy of nite volume/staggered grid distributed lagrange multiplier/ ctitious domain simulations of particulate ows[END_REF]) to beds of poly-lobed particles is rather questionable and might not be accurate enough. In fact, the lobes create additional complexity.
In this case, 40 trilobal particles (a r = 2) are stacked in a bi-periodic domain of 5×5×10 (Lx × Ly × Lz) (F . 7.12a). The corresponding solid volume fraction is φ ≃ 0.55. The particles are located at z = 3 away from the inlet and the outlet of the system. The boundary conditions are set as follows:
• periodic boundary conditions in the horizontal direction Results are obtained in this con guration are promising: from N p = 16 the error is less than 2% in Stokes regime, whereas N p = 65 is required for the same accuracy for Re = 16. In [START_REF] Dorai | Multi-scale simulation of reactive ow through a xed bed of catalyst particles[END_REF], the authors pointed out that computing a uid ow through a packed bed cylinders requires 50% ner mesh than that of spherical particles. Compared to a cylinder [START_REF] Dorai | Multi-scale simulation of reactive ow through a xed bed of catalyst particles[END_REF]), a trilobal and quadralobic need 50% ner mesh to achieve the equivalent accuracy.
P -
Results presented in the previous section deemed to be satisfactory enough to perform numerical simulations to predict the pressure drop through packed beds of poly-lobed particles. The objective of this section is to compute this pressure drop with trilobal and quadralobal particles based on PRS, investigate shape e fects, assess uncertainty quanti cation and derive predictive correlations based on the Ergun's formulation [START_REF] Ergun | Fluid ow through packed columns[END_REF]).
A quick review of single phase pressure drop in xed beds
The theory of [START_REF] Kozeny | Über kapillare Leitung des Wassers im Boden:(Aufstieg, Versickerung und Anwendung auf die Bewässerung[END_REF] describes a porous media as a collection of small channels in which a uid is owing in laminar regime. It reads:
∆p * H * = 72 η * (1 -ε) 2 u * in ε 3 d * s 2 (7.18)
From a physical view point, this formula proposes that the equivalent channel diameter is proportional to the sphere diameter d * s regardless of the local structure through:
εd * s (1 -ε) (7.19)
To account for tortuosity that are present in a porous media, [START_REF] Blake | The resistance of packing to uid ow[END_REF] corrected the coe cient 72 to 150 which led to the Blake-Kozeny equation for ε < 0.5 and Re c < 10:
∆p * H * = 150 η * (1 -ε) 2 u * in ε 3 d * s 2
(7.20) [START_REF] Carman | Fluid ow through granular beds[END_REF] proposed = 180 as a correction in Stokes ow regimes (Re c ∼ 0) in packed beds of spheres, which is more accurate than Blake's coe cient in these conditions. The equation reads:
∆p * H * = 180 η * (1 -ε) 2 u * in ε 3 d * s 2 (7.21)
For high Reynolds number regimes, [START_REF] Burke | Gas ow through packed columns1[END_REF] considered that the pressure drop through a packed bed can be computed as an inertia term. They proposed the following equation for Re c > 1000:
∆p * H * = 1.75 ρ * f (1 -ε)u * in 2 ε 3 d * s (7.22)
which is known as the Burke-Plummer equation. In this formulation the characteristic size of the channel is the same as the one proposed by Kozeny.
Combining the previous theories, [START_REF] Ergun | Fluid ow through randomly packed columns and uidized beds[END_REF] mentioned that the pressure drop through a packed bed is directly function of ε and the constants α and β which depend on the ow regime and proposed the following correlation [START_REF] Ergun | Fluid ow through packed columns[END_REF]): The formulation in F . 7.13 has been proved to be accurate and is widely used in the chemical engineering industry. The pressure drop in E . 7.23 is the combination of a frictional viscous term proportional to the velocity and a quadratic term on the uid velocity that takes into account the ow direction and change in cross-sections [START_REF] Larachi | X-ray micro-tomography and pore network modeling of single-phase xed-bed reactors[END_REF]). [START_REF] Ergun | Fluid ow through packed columns[END_REF] proposed the constants α = 150 and β = 1.75 to describe the pressure drop through packed beds of spheres, cylinders and crushed particles. For packed beds of complex shapes, a de nition of an universal correlation appears to be an endeavour. Many studies reveal a noticeable variation on these coe cients. For instance, [START_REF] Macdonald | A generalized blake-kozeny equation for multisized spherical particles[END_REF] suggested that α = 180 and β = 1.8 as universal constants which are over estimate the Ergun's coe cients by more than 16%. It appears that any new experimental data yields a new proposition a set of coe cients. The explanations of the di ferences between these works are still a subject of discussion between many authors. Among others, it was measured on cylinders by [START_REF] Macdonald | A generalized blake-kozeny equation for multisized spherical particles[END_REF] that the value of β is dependent on the particle roughness: β = 1.8 corresponds to smooth particles, whereas β = 4 corresponds to the roughest particles.
∆p * H * = α η * (1 -ε) 2 u * in ε 3 d * s 2 + β ρ * f (1 -ε)u * in 2 ε 3 d * s ( 7
Later on, many authors improved the correlation, among others [START_REF] Nemec | Flow through packed bed reactors: 1. single-phase ow[END_REF], to account for shape e fects. The coe cients α and β are then modi ed to include the shape e fects. An equivalent particle diameter for non-spherical particles has to be introduced and reads: [START_REF] Nemec | Flow through packed bed reactors: 1. single-phase ow[END_REF]. [START_REF] Nemec | Flow through packed bed reactors: 1. single-phase ow[END_REF] summarise the works of [START_REF] Pahl | Über die Kennzeichnung diskret disperser Systeme und die systematische Variation der Einflußgrößen zur Ermittlung eines allgemeingültigeren Widerstandsgesetzes der Porenströmung[END_REF], [START_REF] Reichelt | Zur berechnung des druckverlustes einphasig durchströmter kugel-und zylinderschüttungen[END_REF], England and Gunn (1970) in which α varies between 180 -280 and β between 1.9 -4.6 for cylindrical particles of aspect ratio ranging from 0.37 to 5.77. They measured the pressure drop for a large number of inlet velocities for one trilobal and one quadralobal shape. Each experiment was repeated twice (on a repacked bed), yielding only 4 additional data points (T . 7.2). It is interesting to note that the proposed correlation would systematically under or over predict the experimental data points. In summary, the available data for poly-lobed particles is scarce (only 2 points for TL and QL), with a large scatter.
d * p = 6V * p A * p ( 7
For non-spherical particles, [START_REF] Nemec | Flow through packed bed reactors: 1. single-phase ow[END_REF] extended the correlation by introducing the sphericity Ψ: As matter of fact, the suggested correlation agrees fairly well with the numerical results of [START_REF] Dorai | Fully resolved simulations of the ow through a packed bed of cylinders: E fect of size distribution[END_REF] for which the present study is a continuation. The work agrees well with the experimental data of [START_REF] Nemec | Flow through packed bed reactors: 1. single-phase ow[END_REF] for cylindrical particles. Other formulations have been proposed that take into account various shapes. Nevertheless, there is so far no universal method to precisely predict the Ergun's equation coe cients based only on particle shape.
Ψ = 36πV * p 2 A * p 3 1 3 (7.25) ∆p * H * = α(Ψ) η * (1 -ε) 2 u * in ε 3 d * p 2 + β(Ψ) ρ * f (1 -ε)u * in 2 ε 3 d * p (7.26) α(Ψ) = 150 Ψ a (
From the experimental point of view, the data on poly-lobed particles is scarce and quite dispersed. The reason of this scattering is still a matter of discussion (see for example [START_REF] Nemec | Flow through packed bed reactors: 1. single-phase ow[END_REF]). Recently, PRS on cylinder [START_REF] Dorai | Fully resolved simulations of the ow through a packed bed of cylinders: E fect of size distribution[END_REF]) opened up new perspectives in "in silico" determination of the pressure drop for any particle shape, for example TL and QL. The goal of this part of this study is to suggest a correlation based on the Ergun formulation and propose coe cients α and β for trilobal and quadralobic particles.
Method
As it can be seen in Chapter 3 that Grains3D is used as a porous media maker. The single phase uid owing through the packed bed is computed with PeliGRIFF using PRS. All the simulations are performed in a Lx * = Ly * = 8 mm wide bi-periodic container using a circumscribed diameter d * = 1.6 mm. Packed beds consist of 210 to 320 particles. The investigation is carried out with packings of TL and QL with a range of aspect ratio de ne as a r = 1.5, 2, 2.5, 3, 4. Boundary conditions are similar to the previous convergence study on packed beds. The particles are stacked at 2.5d * away from the top and 2.5d * from the bottom of the domain.
The local void fraction < ε > z is the volumetric average of void fraction on a layer of the bed of thickness Dz (F . 7.14b). The average void fraction < ε > is the average of all < ε > z inside the control volume. < ε > z is used to plot axial pro les of void fraction and obviously depends on the value of Dz. It is computed by discretizing the volume occupied by the particles. The pressure < p > z is the average pressure on a plane located at height z. The averaging procedures are written as follows:
< ε > z = i δ(x, y, z)v i,z i v i,z (7.29) where δ(x, y, z) = 1, if X(x, y, z) ∈ Ω p 0, otherwise (7.30) < p > z = i p i (x, y, z) i v i,z (7.31)
ε, p i , v i,z denote respectively the uid volume fraction, the pressure and the control volume of the system at the coordinate z.
The pressure drop is the di ference between the pressure at planes located at z = 3 from the top of the packed bed and z = 2 from its bottom [START_REF] Dorai | Packing xed bed reactors with cylinders: in uence of particle length distribution[END_REF][START_REF] Bernard | Multi-scale approach for particulate flows[END_REF]2015)). In other words, a layer of 3 particle diameters thick is discarded at the top of the bed, whereas a layer of 2 particle diameters thick is discarded at the bottom. The void fraction and the pressure di ference are computed on the same control volume Lx × Ly × Dz.
F . 7.14a plots examples of values obtained from PRS of packed beds in this study. It can be seen that the pressure and the void fraction are correlated. All the PRS are performed at Re c ranging from 0.1 to 16 with the objective of capturing the onset of inertia regime (Re c = 0.1, 0.2, 0.3, 0.4, 1, 16). All systems are re-packed randomly several times to have di ferent micro-structures (2 to 10 times). In particular, for both shapes, the systems of particles of aspect ratio a r = 2 are repeated 10 times to quantify the e fects of random packing both on the void fraction and the pressure drop simulated for Re c = 1. After the extraction of the pressure drop, the coe cients α and β of the Ergun's correlation are tted according to numerical results. The tting of β is performed only in the case of inertial ow regimes.
We observed that some pressure pro les are not fully linear arising the question of how the choice of cutting planes a fect the output. A sensitivity analysis for a limited number of 7 beds based on independently changing the positions of the bottom and top cutting planes between 0.5d * and 5d * , yields an uncertainty of 3.1% on the value of α. This trend is judged to be low enough to use always the same cutting planes positions.
F . 7.15 depicts a typical results of PRS for a packed bed of trilobes at Re c = 0.1 and a r = 2.
Results
Uncertainty quanti cation of the packings All packed beds are loaded randomly. For this reason, it is matter of importance to quantify the e fect of repetition (re-packing) on packed bed reactors. In this section, we investigated the random packings of 10 packed beds of TL and QL of a r = 2. F . 7.16b plots the pressure drop through successive simulations of the coupled problem (granular packing + PRS) as a function of void fraction < ε >. It can be observed that void fraction may vary signi cantly among simulations. In this data set, the packed beds of TL have a lower void fraction < ε > compared to those of QL and induce a higher pressure drop. T . 7.3 shows that despite the low standard uncertainty I = 2σ ∈ [2, 4]% on < ε >, the pressure drop exhibits an overall uncertainty of I = 2σ ∈ [12, 24]%. The uncertainty on pressure drop partly results from the scattering on void fraction. The uncertainty of α resulting from the hydrodynamic simulations is corrected by the void fraction and is lower than I = 2σ = 12%. This behaviour is observed for both shapes. This uncertainty, that results only from random e fects during packing, is quite high. An ANOVA analysis on the data indicate that due to the large scatter, α values are statistically identical for TL and QL.
For the sake of comparison, a glance at the local pressure and velocity fields is presented in Fig. 7.17 for 4 horizontal cross-sections (z = 4, 8, 12, 16) of packed beds 8 (top) and 9 (bottom) made of QL of aspect ratio a_r = 2. It can be seen that, despite the fact that the systems are similar in terms of particle number and domain size, the re-packing induces a noticeable difference in the local pressure and velocity magnitude. Since a zero pressure outlet is set at the top of the bed, the differences lie in the vicinity of the bed inlet. It can be noticed that at z = 4 bed 8 has a lower velocity magnitude than bed 9, which translates into a higher pressure in bed 9 compared to bed 8. As the cross-section moves upward, the fluid velocity magnitude and the pressure become more homogeneous for both packed beds. This is a pure effect of the local micro-structure. The set of repetitions of a packed bed of QL is considered representative for investigating the effects of random packing on the flow dynamics. Fig. 7.16a illustrates the differences in pressure profile for each simulation of the set. It reveals that, despite the fact that the system is the same in terms of number of particles and domain size, the random insertion leads to different micro-structures. This is visually confirmed in Fig. 7.17, which depicts the local structures of two packed beds (cases 8 and 9).
Values of the coefficients α and β
The values obtained for the Blake-Kozeny-Carman constant α and the Burke-Plummer constant β are presented in this section. Before presenting them, it is important to note that they are in the range of experimental values, and that the uncertainty induced on α by the random packings is evaluated to be 12%.
Results for the coefficient α are presented in Fig. 7.18a and 7.18b, complemented with the data set from the work of Nemec and Levec (2005). The simulations in this work indicate that α = 200 does not depend on particle length and does not vary between TL and QL.
The values of the fitted coefficient β are plotted in Fig. 7.19a and 7.19b as a function of the aspect ratio a_r and the sphericity Ψ. The evaluation of β is performed at a finite Reynolds number of Re_c = 16, where the quadratic term accounts for approximately 30% of the total pressure drop. As explained earlier, the uncertainties on the pressure drop and on α are quite high (resp. ∼20% and ∼12%). This induces an uncertainty on the evaluation of β which reaches approximately 30%. Performing simulations at higher Re_c is still not possible due to computing resource limitations induced by the complexity of the particle shapes. It is worth noting that at Re_c = 16, the computation of the pressure drop through packed beds of particles of a_r = 4 needs more than 3 × 10^8 grid cells and requires more than 512 cores to solve the problem. The fitted values of β are in the range 2.8-4.6, which is very consistent with previous works.
Discussion of the values of α and β
A closer look at the fitted α may indicate two trends depending on the aspect ratio a_r. When plotted as a function of a_r, two domains are identified. For a_r ≤ 2 (Ψ ≥ 0.68), TL and QL exhibit values of α which lie within the scatter of the experimental data and follow the usual trend (increasing with a_r, decreasing with Ψ). For a_r > 2, the values of α seem to lose their dependency on the particle shape; this is most visible when they are plotted as a function of the sphericity.
An explanation for this behaviour may be the size of the bi-periodic domain. Intuitively, if the ratio of the domain length to the particle length (L_x*/L_p*) decreases, the results may suffer from periodicity effects. Simulations in this study were all performed with the same domain width; therefore, when the particle length increases, the ratio of the domain length to the particle length decreases. In order to rule out this risk, the following verification has been performed. Using TL of a_r = 1.5, various domain sizes of 6, 8 and 10 mm were simulated. They all give the same void fraction and pressure drop. Void fractions of particles with an aspect ratio a_r = 4 are in the range 0.52-0.53 and in line with the data reported by Nemec and Levec (2005) (Tab. 7.2) for similar aspect ratios. The void fraction measured by DEM is slightly higher but within the stochastic uncertainty. Last, the same DEM-PRS simulations were run while increasing the domain width from 8 to 12 mm for two cases: TL of a_r = 2.5 and CYL of a_r = 2.89 (L_p* = 4.62 mm). In both cases, it was found that there are no significant differences in either the void fraction (less than 0.5% variation) or the pressure drop (less than 5% difference). As the cylinder results follow a very regular trend even for a_r = 2.89, it is concluded that the domain size has no effect for a_r < 3. Nevertheless, the results for β indicate that the results at a_r = 4 can be seen as "different". For the present time, it is safer to consider that only the results presented for a_r < 3 are representative of experimental data.
The verification of the results leaves the door open for an effect of a too small domain size in the simulations with the longest particles. The packing dynamics of long particles may be impacted by the size of the bi-periodic domain through very short mechanical interaction chains. To be more specific, in special conditions a particle A can mechanically interact with another particle B on one side and with B's periodic clone on the other side: this corresponds to a B-A-B interaction chain. This type of interaction does not exist when loading large reactors and may lead to some special packing structures with an effect on the pressure drop and possibly on the void fraction. Longer interaction chains with 3 or more particles may also be considered but are much more likely to occur experimentally. These very short interaction chains are more likely to occur with long and horizontal particles.
Assuming a loss of representativeness for high particle aspect ratios, a few interesting facts emerge. As the numerical methods used for the simulation of the pressure drop do not depend on particle length, the loss of representativeness of the high aspect ratio pressure drop must originate from the packing structure. As already discussed, the simulated void fraction is slightly higher than the experimental data, which logically yields a lower pressure drop. However, the void fraction correction used to compute α should have corrected for this bias and produced a higher α. The high a_r packings are different in ways that are worth understanding, as they could potentially lead to innovative packing methods that lower the pressure drop at constant void fraction. Is it possible to identify these specific features?
Using the numerical data resulting from the DEM simulations, it is quite straightforward to compute the angle of each particle with the horizontal plane xy and its average over all particles. As can be seen in Fig. 7.20a, the average angle to the horizontal is a function of the particle shape and aspect ratio. For TL and QL, it decreases with a_r until an asymptote is reached for a_r ≥ 2.5. This threshold value corresponds to a presumable transition that might impact the pressure drop. For cylindrical particles, the average angle to the horizontal decreases more slowly and does not reach a plateau with the available data. Is the plateau a physical feature or the result of the limited domain size? Tortuosity is the other standard porous media descriptor, although it is barely used in chemical engineering as it is very difficult to measure. This quantity can, however, be computed numerically. One of the methods studied in Duda et al. (2011) suggests expressing the tortuosity T as the ratio of the volumetric integral of the fluid flow velocity magnitude to the volumetric integral of the velocity component in the macroscopic flow direction. It reads:

$$T = \frac{\int_V |\boldsymbol{v}(X)|\, d^3X}{\int_V v_z(X)\, d^3X} = \frac{\langle v \rangle}{\langle v_z \rangle} \qquad (7.32)$$

where the subscript z stands for the macroscopic flow direction. Carman (1937) already had the idea of computing < v > / < v_z > as a representation of the hydraulic tortuosity, but the attempts were always restricted to simple models such as groups of parallel channels, which do not represent complex heterogeneous porous media. The tortuosity for TL and QL is presented in Fig. 7.20b: it increases with the aspect ratio. More work is required to see how this parameter evolves for cylindrical particles; it is not presented in this chapter.
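Both micro-structure descriptors used here are straightforward post-processing of the DEM output (particle axis orientations) and of the PRS velocity field (Eq. 7.32). A minimal sketch is given below; the variable names, the unit-axis-vector input and the uniform-grid assumption are ours.

```python
import numpy as np

def mean_angle_to_horizontal(axes):
    """Average particle inclination (degrees) with respect to the xy plane.

    axes : (N, 3) array of unit vectors along each particle main axis.
    """
    axes = np.asarray(axes, dtype=float)
    # |a_z| = sin(theta), theta being the angle to the horizontal plane
    theta = np.degrees(np.arcsin(np.clip(np.abs(axes[:, 2]), 0.0, 1.0)))
    return theta.mean()

def tortuosity(u, v, w, fluid_mask):
    """Hydraulic tortuosity of Eq. 7.32 on a uniform grid.

    u, v, w    : 3D arrays of the velocity components (z = macroscopic flow).
    fluid_mask : 3D bool array, True in the fluid (in a fixed bed the velocity
                 is zero inside the particles, so the restriction is optional).
    On a uniform grid the cell volume cancels between numerator and
    denominator, so the volume integrals reduce to plain sums.
    """
    speed = np.sqrt(u ** 2 + v ** 2 + w ** 2)
    return speed[fluid_mask].sum() / w[fluid_mask].sum()
```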
Conclusion and perspectives
The DLM/FD method has been adapted to compute single-phase flow in packed beds of poly-lobed particles without the need for radius calibration. A suitable set of CP locations has been proposed for a trilobal and a quadralobal particle. A perspective is to optimize the distribution of the CPs so as to reduce their number without losing accuracy. Another research question is to develop an automated approach to mesh any new complex particle. The numerical platform Grains3D-PeliGRIFF has then been used to simulate, for the first time, the pressure drop in packed beds of trilobes and quadralobes. Results have been interpreted using the Ergun formalism and agree well with the available literature for low aspect ratios. It is concluded that trilobes and quadralobes have the same pressure drop behaviour. For high aspect ratios, simulation results are not in line with the scarce experimental data available. There is no reason for the validity of the computed solutions to change with particle length, so we think that the surprising behaviour is either physical or results from packing structures that are somehow not physical, although they have a correct void fraction and no specific features. It could be that the loss of representativeness originates from a too small simulation volume that, for some unknown reason, impacts the granular dynamics. An open and important question for "ab silico" simulations of fixed beds is to identify a signature of "un-physical" packings. We suggest investigating the particle orientation, the tortuosity, or any other numerically accessible piece of information. A first step toward this would be to replicate our simulations using larger domains and track how the pressure drop coefficients evolve. This effort will be limited by computing power.
Résumé
The first part of this chapter is devoted to the coupling of the granular solver with the Navier-Stokes solver for the particle types seen in the previous chapters. Here the Distributed Lagrange Multipliers / Fictitious Domain method is extended to poly-lobed particle shapes. Indeed, the method is robust provided the collocation points are properly distributed on the surface and inside the particle. In the absence of an analytical solution, a space convergence study of the computed solutions is proposed, first on an isolated particle in a tri-periodic configuration, then on a collection of a few tens of particles. It was concluded that non-convex shapes need more collocation points than convex particles to obtain the same accuracy on the computed solutions.
The second part of this chapter focuses on the application of the method to problems encountered in fixed bed reactors, namely the effects of particle shape on the pressure drop in this kind of reactor. The comparison is made between three types of particles: cylinder, trilobe and quadralobe. The Grains3D-PeliGRIFF platform has been used to simulate, for the first time, the pressure drop through beds of poly-lobed particles. The results show a trend that is statistically identical for the poly-lobed particles. This trend in pressure drop is different from that of the particles usually found in the literature.
Conclusion and perspectives

Particulate flow modelling has significantly progressed during this decade and has benefited from the growth of computing power. This opens up new opportunities to investigate the effect of particle shape in fluid-particle systems. In fact, most existing models in the literature are only designed for spheres, but in many applications particle shapes are often complex. In this thesis, a modelling approach for complex particle shapes has been suggested. For this endeavour, a numerical multiphase flow platform (Grains3D-PeliGRIFF) dedicated to particulate flow simulations of arbitrary convex particles is extended to deal with non-convex particles.
The first part of this work is dedicated to the extension of the Discrete Element Method granular solver Grains3D to handle non-convex particle shapes. To this end, the strategy is based on the decomposition of a non-convex particle into a set of convex bodies. This idea comes from the so-called "glued spheres" model widely used in the literature. The concept appears to be simple and efficient since almost any complex shape can be decomposed into a few or many arbitrary convex particles rather than spherical ones. Hence, the name "glued convex" has been given to the new model. Due to the complexity of the shape, the volume and the components of the moment of inertia tensor are computed by discretising the shape instead of using boolean operations on the presumably overlapping elements of the "composite". Owing to the number of elementary particles, the "composite" particle may be subjected to a multi-contact problem. In order to overcome this issue, two models are tested for the resulting contact force. The first model consists in summing up all the forces, while the second model is based on averaging all the forces (see the sketch below). For both methods, the resulting contact force is computed at each time step because the number of contact points may vary during an interaction. Results have been shown to be accurate when compared to analytical results, but with the first model the time step decreases as an inverse function of the number of contact points. The second model keeps a contact duration of the same order of magnitude as for convex particles. Based on this observation, the second model is selected and implemented in the granular solver Grains3D. Using our new glued convex model, a study of the dynamics of a granular medium made of non-convex particles in a rotating drum is carried out to quantify the effects of the non-convexity. Results obtained for two cross-like non-convex particles overall show that the avalanching regime is promoted at low rotation rates and that the cataracting regime is not really easy to define. These major differences result from the high entanglement of the particles, which provides a sort of cohesion to the granular medium. The second application of the implemented model is the study of the packings resulting from the filling of reactors encountered in the refining industry. For this purpose, poly-lobed particles are modelled as a composite of cylinders and a polygonal prism which replicates with high fidelity the shape of catalyst particles developed at IFPEN. Due to slight micro-structural variations in packed beds, the void fraction always differs from one bed to another. Packing repeatability is assessed and correlations are established for cylindrical, trilobal and quadralobal particles in cylindrical vessels and in semi-infinite domains to mimic large-scale reactors. The obtained results show a clear change of void fraction between cylindrical and poly-lobed packed beds. Finally, the parallel performance of Grains3D was assessed on various granular flow configurations comprising both spherical and angular particles. To this end, large-scale simulations of silo discharges of spherical and angular particles, dam breaks of icosahedra and fluidized beds of spherical particles were performed. All simulations showed a scalability of more than 0.75 for systems of more than 100,000 particles per core. The scalability can reach up to 0.9 for systems of non-spherical (convex) particles. In its current state, Grains3D offers unprecedented computing capabilities.
Systems with up to 100,000,000 non-spherical particles can be simulated on a few hundred cores.
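The sketch below schematically illustrates the two multi-contact strategies mentioned above, i.e. summing versus averaging the elementary contact forces acting on a glued convex composite. It is a schematic reconstruction, not the actual Grains3D implementation; the contact data structure and the treatment of the torque are our assumptions.

```python
import numpy as np

def resultant_contact(contacts, centre, mode="average"):
    """Resultant force and torque on a 'glued convex' composite.

    contacts : list of (force, point) pairs, one per elementary contact,
               force and point being 3-vectors in the lab frame.
    centre   : position of the composite centre of mass.
    mode     : "sum"     -> add all elementary contact forces,
               "average" -> divide the sum by the current number of contact
                            points (the option retained in Grains3D, which
                            keeps the contact duration comparable to that of
                            a single convex particle).
    The resultant is rebuilt at every time step because the number of active
    contact points may change during an interaction.
    """
    if not contacts:
        return np.zeros(3), np.zeros(3)
    centre = np.asarray(centre, dtype=float)
    force, torque = np.zeros(3), np.zeros(3)
    for f, p in contacts:
        f = np.asarray(f, dtype=float)
        p = np.asarray(p, dtype=float)
        force += f
        torque += np.cross(p - centre, f)
    if mode == "average":
        force /= len(contacts)
        torque /= len(contacts)
    return force, torque
```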
The main goal of the second part of this thesis is to determine the effects of particle shape on the pressure drop through packed beds of trilobes and quadralobes. The first step was the extension of the capability of the fluid flow solver to handle poly-lobed particles. The granular flow solver is coupled to the micro-scale (Direct Numerical Simulation) module of PeliGRIFF, in which a Distributed Lagrange Multiplier / Fictitious Domain formulation combined with a Finite Volume Staggered Grid scheme is already implemented. The extension relies on the integration of the new geometries in the formulation, i.e. designing a new construction method to homogeneously distribute the collocation points in the rigid bodies and on their surface. A space convergence study was carried out in assorted flow configurations and flow regimes, such as the steady flow through a periodic array of particles and the steady flow through a packed bed of particles, to assess the accuracy of the computed solutions. Based on the convergence study, we found that, for the same accuracy, the number of collocation points for poly-lobed particles should be higher than for more standard particles such as spheres or cylinders. In fact, 50% more points are required to describe the cross-sectional surface of the poly-lobed particles than that of a cylindrical particle. From the previous study, the pressure drop through packed beds of poly-lobed particles has been reliably investigated. We performed around 200 particle-resolved simulations of the flow through a packed bed of trilobes or quadralobes. Based on these simulation results, we suggested a modified Ergun correlation. The proposed correction of the Ergun correlation is based on fitting the Blake-Kozeny-Carman (α) and the Burke-Plummer (β) constants by introducing parameters that depend on the sphericity and the particle equivalent diameter. Results have been interpreted using the Ergun formalism and agree well with the available literature for low aspect ratios. We observed that TL and QL have globally the same pressure drop behaviour. For high aspect ratios, simulation results are not in line with the scarce experimental data available. The simulated void fraction is slightly higher than the experimental data, which logically yields a lower pressure drop.
Perspectives
The new extension of the multiphase flow platform Grains3D-PeliGRIFF has been successfully deployed. However, there is vast room for improvements, both on the physical modelling side and on the computational side. Since the contact resolution scales with N_i × N_j, where N_i and N_j denote the numbers of elementary particles of the composites i and j respectively, there is an interest in implementing a convex hull or a bounding box algorithm to accelerate the contact detection. Later on, the model can be extended to take into account cohesive interactions. In addition, a dynamic load balancing would enhance the computing capabilities of Grains3D in flow configurations with high particle volume fraction heterogeneities.
On the pure parallel computing side, the milestone of a billion convex particles appears attainable, as suggested by the trend of the scaling factor of the code.
In PRS, although the accuracy of the DLM/FD formulation is satisfactory, the method does not strictly satisfy the velocity divergence-free property. In fact, our operator-splitting algorithm solves the following sequence of sub-problems at each time step: (i) a Navier-Stokes sub-problem and (ii) a DLM/FD sub-problem. The latter enforces the rigid body motion constraint but not the velocity divergence-free constraint. Therefore, more sophisticated operator-splitting techniques, e.g. a second-order Strang symmetrized algorithm, or more strongly coupled solution algorithms, might further improve the accuracy of the computed solutions. In the original version of the DLM/FD formulation, an Uzawa conjugate gradient algorithm is used to solve the saddle-point sub-problem. In order to reduce the computational cost, a fast projection scheme, a variant of the Direct Forcing Immersed Boundary Method, can be implemented. Since most of the computing time is spent in the DLM/FD sub-problem in dense systems, accelerating the solution of this sub-problem while keeping the same level of accuracy is highly desirable. In addition, an Adaptive Mesh Refinement strategy would be a great improvement for fixed bed simulations. Not only would the AMR strategy decrease the total number of grid cells, it would also increase the accuracy of the computed solutions where needed. A DLM/FD module to model heat and/or mass transfer with infinite diffusivity in the particle core is already available in PeliGRIFF and can be used for simulations of flows with trilobes/quadralobes. The extension to intra-particle diffusion would require the implementation of a Sharp Interface method to properly capture the gradient discontinuity at the particle/fluid interface. This work is currently carried out by another PhD student of the PeliGRIFF group. Numerical simulations with mass transfer would need a realistic and manageable kinetic scheme (in the sense of "not too many" equations and chemical species) and probably adapted numerical schemes to treat the different time scales involved in these chemical reactions.
Further uncertainty quantification of random packing would provide a better fitting of the Ergun coefficients for the pressure drop in a packed bed of trilobes or quadralobes. This may lead to the introduction of another parameter in the correlation, such as the particle aspect ratio. Further simulations of packed beds of particles with high aspect ratio in larger domains would provide a better understanding of the low pressure drop that we measured in some of our simulations. The new "glued convex" model can be integrated in a multi-scale framework for granular flow modelling or particulate flow modelling for various industrial problems (among others geoscience, food industry, pharmaceutical industry, upstream oil & gas industry, etc.). For instance, correlations for drag, heat flux and mass transfer for any particle shape can be derived from PRS and later integrated in a meso-scale model (of the DEM-CFD type for instance) for fluidised bed simulations.
With all these features, the numerical platform Grains3D-PeliGRIFF can serve as a very accurate tool for the virtual optimisation of processes in the chemical industry. For chemical conversion in fixed bed reactors, numerical simulations are feasible from the loading of the reactors to the hydrodynamics of the flow through the bed, coupled with heat and mass transfer. This would equip chemical engineers with a predictive tool for the chemical efficiency of catalysts.

Résumé

Fluid-particle flows have made important progress during this decade thanks to the advent of the high performance computing era. This opens the way to many opportunities for investigating the effects of particle shape in these systems. Indeed, numerous models existing in the literature rely on spherical particles, which is not always the case in many applications. In this thesis, the modelling of systems containing particles of complex shape is addressed using the numerical platform Grains3D-PeliGRIFF dedicated to multiphase flows. This thesis work consists in extending the capability of these tools to take non-convex particles into account.
The first part of this thesis is dedicated to the extension of the granular solver (Discrete Element Method) to handle non-convex particles. It is based on the decomposition of a non-convex particle into arbitrarily convex elementary particles. This method can be considered as an extension of the "glued sphere" model, well known in the literature. The concept appears simple and efficient since almost any arbitrarily non-convex shape can be decomposed into several arbitrarily convex shapes, hence the name "glued convex" given to the new model. Because of the complexity of the shapes, the moment of inertia is computed by a spatial discretisation of the "composite", while allowing the elementary particles to overlap. This decomposition also implies several contact points in the dynamics of the composite, to which particular attention has been paid. The model thus made it possible, for the first time, to study the dynamics of granular media in a rotating drum in order to show the effect of the concavity of cross-shaped particles. Indeed, these granular media show that the avalanching regime appears at very low rotation rates and that the transition between the cascading and centrifuging regimes is not easy to define. The second application of the model consists in simulating the filling of fixed bed reactors with poly-lobed particles encountered in the refining industry, in order to quantify the effect of catalyst shapes on the void fraction in these reactors. Finally, the parallel performance of Grains3D is demonstrated on a few granular flow configurations. These tests showed that systems of more than 100,000,000 non-spherical particles can be simulated on a few hundred cores and that numerical simulations of systems reaching one billion spherical particles can now be envisaged.
The second part of this work is devoted to the coupling between the new non-convex particle model implemented in the Grains3D solver and the Navier-Stokes solver PeliGRIFF, using the Direct Numerical Simulation module of the latter. This direct resolution relies on the Distributed Lagrange Multipliers / Fictitious Domain method. It consists in imposing a rigid body condition inside the particle and on its surface by enforcing the equality of the fluid and solid velocities with Lagrange multipliers. In the case of a fixed bed, this velocity is zero inside the particle and on its surface. In the absence of analytical solutions, a spatial convergence study of the computed solutions is carried out for an isolated particle, then for a fixed bed of a few tens of poly-lobed particles. After comparison with cylindrical particles, this study led to the conclusion that 50% more collocation points are required to describe the cross-sectional surface of poly-lobed particles. Building on this study, a campaign of numerical simulations was carried out in order to quantify the effect of particle shapes on the pressure drop through a fixed bed using the Ergun formalism. The results show that trilobes and quadralobes have statistically the same effect on the pressure drop.
Keywords: Non-convex particles, Granular Media Mechanics, Direct Numerical Simulation, Rotating Drums, Fixed Beds, Porous Media, High Performance Computing.
Table 3
3
.2 -Experimental and numerical parameters for the normal impact of a cylinder modelled with glued spheres on a flat wall.
Delenne. Optimizing particle shape in xed beds: simulation of void fraction with lobed particles.
-
C 1 I
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 2 M . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 2.1 DEM with non-convex particles . . . . . . . . . . . . . . . . . . . . . . . . . 57 2.2 Simulation principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 2.3 Void fraction analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 2.4 Cases description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 3 S . . . . . . . . . . 60 3.1 Repeating the packing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 3.2 E fect of insertion window size . . . . . . . . . . . . . . . . . . . . . . . . . . 61 3.3 Overall uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 4 R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 4.1 Bi-periodic container . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 4.2 Cylindrical container . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 5 D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 5.1 E fect of domain size in bi-periodic directions? . . . . . . . . . . . . . . . . . . 66 5.2 Remark on the e fect of container size . . . . . . . . . . . . . . . . . . . . . . 66 6 C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 This chapter has been submitted for publication in Chemical Engineering Science: M. Rolland, A. D. Rakotonirina, A. Devousassoux, J.L. Barrios Goicetty, A. Wachs, J.-Y.
Table 4.3 - Effect of insertion window size on void fraction. The insertion window is a 2D square with side length between 0 mm and 10 mm.
For QL particles, the void fraction in a cylindrical reactor is described by the following linear correlation (Fig. 4.8):

QL: ε = 0.33 + 0.0328 L_p/d_p + 0.212 L_p/D    (4.13)

valid for 10 < D [mm] < 19, 1.2 < L_p/d_p < 3.33 and 3 < L_p [mm] < 4.

The results for TL particles are presented in Fig. 4.9. The following linear correlation describes the data with an accuracy equal to the uncertainty:

TL: ε = 0.345 + 0.0289 L_p/d_p + 0.15 L_p/D    (4.14)

valid for 10 < D [mm] < 19, 1.2 < L_p/d_p < 3.3 and 3 < L_p [mm] < 4.

A simplified correlation based only on the aspect ratio predicts the void fraction almost as well, with a relative standard deviation of 2.5%. It reads:

TL: ε = 0.366 + 0.035 L_p/d_p    (4.15)

A unified correlation predicting the void fraction for TL and QL particles regardless of the shape has the same accuracy as that of the TL correlation. It is defined as follows:

QL & TL: ε = 0.329 + 0.0289 L_p/d_p + 0.15 L_p/D    (4.16)

valid for 10 < D [mm] < 19, 1.2 < L_p/d_p < 3.33, 2 < L_p [mm] < 4 and 1.2 < d_p [mm] < 2.48.

Figure 4.8 - Void fraction for packed beds of quadralobal particles in a cylindrical reactor for various reactor diameters, particle lengths and aspect ratios: correlation vs. simulations. Dashed lines are parity ±I.
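A quick numerical check of these correlations can be coded as below; the function name and the particle dimensions are illustrative choices within the stated validity ranges, not values taken from the simulations.

    def void_fraction(shape, L_p, d_p, D):
        """Void fraction correlations (Eqs. 4.13-4.16) for lobed particles.

        shape -- "TL" (trilobe), "QL" (quadralobe) or "unified"
        L_p   -- particle length (mm); d_p -- particle diameter (mm); D -- reactor diameter (mm)
        """
        aspect, wall = L_p / d_p, L_p / D
        if shape == "TL":
            return 0.345 + 0.0289 * aspect + 0.15 * wall      # Eq. 4.14
        if shape == "QL":
            return 0.33 + 0.0328 * aspect + 0.212 * wall      # Eq. 4.13
        return 0.329 + 0.0289 * aspect + 0.15 * wall          # Eq. 4.16 (TL and QL)

    # Illustrative case: 3.5 mm long, 1.6 mm diameter particles in a 16 mm reactor.
    for shape in ("TL", "QL", "unified"):
        print(shape, round(void_fraction(shape, 3.5, 1.6, 16.0), 3))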
Table 5.1 - Contact force model parameters, estimate of contact features at v_col = 4.5 m/s and time step magnitude used in the silo discharge simulation.

Table 5.2 - Comparison between experimental data of González-Montellano et al. (2011) and our simulation results with Grains3D for the discharge time of the silo.

    Repetition                Experiments (s)   Grains3D (s)
    1                         29.32             29.36
    2                         29.28
    3                         29.2
    Mean discharge time (s)   29.27             29.36
Table 5.4 - Contact force model parameters, estimate of contact features at v_col = 4.2 m/s and time step magnitude used in the dam break simulations.

Table 5.5 - System size in granular dam break weak scaling tests.
Table 5.6 - Fluid and particle physical and numerical dimensionless parameters.

    Parameter   Value
    Fluid
    ρ_r         2083.333
    Re_in       79.333
    Fr_in       6.927 × 10^-3
    Δt_f        0.0119
    Particle
    e_n         0.9
    μ_c         0.1
    k_ms        0
    δ_max       0.025
    Δt_p        0.00595
[Chapter outlines: Arbitrary-Lagrangian-Eulerian (ALE); Deforming-Spatial-Domain/Stabilized Space-Time (DSD/SST); Lattice-Boltzmann Method (LBM); Immersed Boundary Method (IBM); Distributed Lagrange Multiplier / Fictitious Domain (DLM/FD); Introduction to PeliGRIFF; Governing equations for the fluid flow solver; Time discretization scheme; Colocation points on non-convex particles; Methodology; Flow past a single poly-lobed particle in a tri-periodic domain; Flow past a small packed bed of poly-lobed particles; A quick review of single phase pressure drop in fixed beds; Method; Results.]
Table 7.1 - Configuration of the studied cases.

(7.24), where V*_p and A*_p denote respectively the volume and area of the particle.

Table 7.2 - Fitted Ergun constants for poly-lobed particles.

    Shape        a_r    Ψ      ε      α     β
    Trilobe      4.33   0.63   0.466  295   4.71
    Trilobe      4.33   0.63   0.511  263   4.99
    Quadralobe   3.85   0.593  0.471  292   3.93
    Quadralobe   3.85   0.593  0.502  294   4.19

Table 7.3 - Repetition of random packing with identical particles. Δp*/H* [Pa]
| 358,322 | [
"781447"
] | [
"300006"
] |
01483591 | en | [
"spi"
] | 2024/03/04 23:41:48 | 2011 | https://hal.science/hal-01483591/file/doc00026588.pdf | Neila Bhouri
email: neila.bhouri@inrets.fr
Inrets / Gretia
Jari Kauppila
email: jari.kauppila@oecd.org
Managing Highways for Better Reliability -Assessing Reliability Benefits of Ramp Metering
Reliability of travel time is increasingly becoming an important part of transport policies around the world. However, a recent review of policies in OECD countries shows that despite its importance, only a few countries monitor reliability or explicitly incorporate reliability into transport policy making.
The role of the government may be crucial in delivering optimal levels of reliability.
A number of policy options are available to improve reliability.
Active management of the network through ramp metering is recognized as an efficient way to control motorway traffic and field tests of ramp control strategies show benefits on average travel time.
Far less is said on reliability benefits of ramp metering. There are only few studies that specifically monitor improvements in travel time variability. In this paper we present findings on a case study of applying ramp metering on a French motorway A6W near Paris.
We apply a number of indicators for travel time variability before and after introducing ramp metering.
In order to take into account reliability in policy impact evaluation, cost-benefit assessment provides consistent framework to assess the monetised benefits. We therefore also calculate monetary value of reliability benefits of ramp metering and finally discuss policy implications of our results.
We suggest that failing to unbundle time saving benefits of a project between average travel time and the variability in travel time is likely to lead to sub-optimal policy solutions.
We also argue that managing existing capacity better can be a cost-effective way to improve both average travel time and the variability in travel time.
INTRODUCTION
Traffic congestion imposes costs on the economy and generates multiple impacts on urban regions and their inhabitants through increased travel times. Congestion affects not only average travel speed but also travel time reliability. As traffic volumes increase and the road network approaches full capacity, the vehicle flow becomes unstable and much more vulnerable to incidents such as accidents, vehicle breakdowns, road works or bad weather. This in turn increases variability in travel times.
There is much evidence that the variability of travel times may matter more than the average travel speed: users of the network can plan their travel accordingly if a road is constantly congested, while unpredictable travel conditions impose the greatest frustration. Unreliable and extremely variable travel times impose the greatest challenge on road users (1). This type of congestion-related unreliability has serious consequences as users of the transport network, trying to avoid delays, need to allow more time than otherwise needed by adding a "safety margin" or "buffer" above that of average travel time. Companies or logistic managers, in turn, try to adapt their operations and build in buffer stocks of goods. Hence, in contemplating a journey, users consider not just the expected average travel time but also its variability in order to avoid delays or, worse, snowballing effects affecting other activities in the logistic chain.
Adding extra time or keeping additional stocks "just in case" is not costless. Leaving earlier to ensure arriving on time is time wasted from other, potentially more productive, activities. Keeping additional stocks of goods can be a very costly way to ensure on-time delivery of goods. Not surprisingly, a recent study by the Joint Transport Research Centre of the OECD and the International Transport Forum suggests that the costs of unreliability may rival those of congestion (2).
A number of countries are looking at ways of improving the reliability of travel time while reliability has become increasingly important part of national transport policies. An improvement in the reliability and predictability of travel times can rapidly reduce the cost associated with excessive congestion levels.
The role of the government may be crucial in delivering optimal levels of reliability.
However, a recent review of policies in OECD countries shows that despite its importance, only a few countries monitor reliability or explicitly incorporate reliability into transport policy making (2). Network and service reliability is not systematically incorporated in the transport planning process and thus is not reflected adequately in decision making.
A number of ways to monitor reliability are available, but there are also several shortcomings. Existing indicators, if applied, tend to aggregate across users, monitor system performance rather than the user perspective, show annual averages hiding shorter-term variations, and provide a partial view, normally that of network managers rather than the reliability perceived by the end-users (3).
A wide range of policy instruments are also available to improve reliability of transport and they can be distilled into four main options (2):
Increasing capacity of infrastructure either by supplying extra capacity or improving quality of existing one.
Better management of existing capacity.
Charging directly for reliability.
Providing information to users mitigating the adverse effects of poor reliability.
In this paper, we focus on better management of existing capacity. Management options can be further divided into two categories: pro-active and active. Pro-active management of infrastructure mainly includes identification of network vulnerability to recurrent and non-recurrent unreliability. Dynamic processes, in turn, focus on active management of network to intensify oversight of network use or react once a network incident arises; such management systems include traffic control, accident clearing teams and rerouting strategies.
It is acknowledged that many of the policy options mentioned above are already in use as congestion mitigation policies. However, while remedial actions against congestion can also improve reliability, it is also important to separate the impacts as these two are not the same as will be demonstrated later on.
In order to ensure optimal strategies, policy makers face a number of challenges:
To identify prevailing reliability levels by monitoring the existing variability in travel times.
To assess the improvement in the variability of travel times after a policy intervention to ensure that the most cost-effective solutions are adapted first to improve reliability.
To present these results in a way that is easy to communicate both for the decision makers and the users.
Until recently, improving travel time reliability has not usually been included in the assessment of management strategies. A recent study (4) has introduced measures for the variability in travel time when comparing the effectiveness of alternative ramp metering strategies (ALINEA and a coordinated method) at the A6W motorway in France. In addition to average travel time, the study included measures of standard deviation, coefficient of variation, buffer time and planning time.
In this paper we build on this analysis and apply a number of other indicators for travel time variability that have been advocated in a range of studies. In order to take into account reliability in policy impact evaluation, cost-benefit assessment provides consistent framework to assess the monetised benefits of different projects. We therefore also calculate monetary value of the reliability benefits of ramp metering and finally discuss policy implications of our results.
ACTIVE MANAGEMENT THROUGH RAMP METERING
Ramp metering is a specific active management measure which employs traffic lights at the freeway on-ramps to control the traffic flow entering the motorway mainstream. It consists of limiting, regulating and timing the entrance of vehicles from one or more ramps onto the main line. As with many other highway policy strategies, ramp metering was originally designed to mitigate congestion impacts. It is recognized as the most direct and efficient way to control and upgrade motorway traffic, and a number of field tests of ramp control strategies in different countries are available showing benefits on average travel time (5).
Ramp metering is applied either at the local or the system level. A local control strategy is directly influenced by the main-line traffic conditions in the immediate vicinity of the ramp during the metering period. System control mode is a form of ramp metering in which real-time information on total freeway conditions is used to control the entrance ramp system (6).
The most efficient local ramp control strategy is ALINEA. It has been tested in many countries and has proved its superiority when compared with other local strategies (7). ALINEA is based on a rigorous feedback philosophy (8) (9) (10). Since the main aim of ramp metering is to maintain the capacity flow downstream of the merging area, the control strategy for each controllable on-ramp should be based on downstream measurements. Therefore, ALINEA, which was developed by the application of classical feedback theory, takes the following form:

r(k) = r(k-1) + K_R [ô - o_out(k)]

where r(k) and r(k-1) are the ramp volumes at discrete time periods k and k-1 respectively, o_out(k) is the measured downstream occupancy at discrete time k, ô is a pre-set desired occupancy value (typically set equal to the critical occupancy which separates fluid traffic from congested traffic) and K_R is a regulation parameter.

Ramp metering has been introduced mainly for reducing congestion and improving safety, and the evaluation of impacts usually focuses on congestion-related indicators, such as average travel time, duration of recurrent congestion, mean speed, fuel consumption and emissions (5) (7). Far less is said on the reliability benefits of ramp metering; there are only a few studies that specifically monitor improvements in travel time variability. A recent study (4) compares different on-ramp strategies at the A6W motorway near Paris and shows that ramp metering improves the variability of travel time more than average travel time. Therefore, monitoring travel time variability, in addition to average travel time, and measuring network users' experiences are vital in making a robust assessment of the policy options to improve the user experience of travel on any road network.
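As an illustration of the ALINEA feedback law above, the following Python sketch performs one metering-rate update per period; the regulation parameter, desired occupancy and clipping bounds are illustrative assumptions, not values from the A6W field trial.

    def alinea_step(r_prev, o_out, o_desired=0.25, K_R=70.0,
                    r_min=200.0, r_max=1800.0):
        """One ALINEA update: r(k) = r(k-1) + K_R * (o_desired - o_out(k)).

        r_prev    -- metering rate applied during the previous period (veh/h)
        o_out     -- measured downstream occupancy during the previous period (0-1)
        o_desired -- pre-set desired (critical) occupancy
        K_R       -- regulation parameter
        The result is clipped to a physically feasible metering range.
        """
        r = r_prev + K_R * (o_desired - o_out)
        return max(r_min, min(r_max, r))

    # Example: occupancy above the critical value lowers the admitted ramp flow.
    rate = 900.0
    for occupancy in [0.20, 0.27, 0.31, 0.28, 0.24]:
        rate = alinea_step(rate, occupancy)
        print(round(rate, 1))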
HOW TO MEASURE RELIABILITY
When monitoring reliability, it is important to distinguish between network operator perspective and user perspective. For the network operator, the focus is on network quality (what is provided and planned) while for the user, the focus is on how the variability of travel time is experienced.
Several definitions for travel time reliability exist and many different relevant indicators have been proposed. Here we use the same breakdown as presented in previous studies and divide these measures into four categories (11) (12):
1. Statistical range methods.
2. Buffer time methods.
3. Tardy trip measures.
4. Probabilistic measures.

Standard deviation (STD) and the coefficient of variation (COV) show the spread of the variability in travel time. They can be considered as cost-effective measures to monitor travel time variation and reliability, especially when variability is not affected by a limited number of delays and when the travel time distribution is not much skewed (2). Standard deviation is defined as

STD = sqrt( (1/(N-1)) Σ_{i=1..N} (TT_i - M)^2 )    (2)

while the coefficient of variation is written as

COV = STD / M    (3)

where M denotes the mean travel time, TT_i the i-th travel time observation and N the number of travel time observations. A further consideration to use the standard deviation as a reliability indicator derives from recent studies that recommend defining travel time reliability as the standard deviation of travel time when incorporating reliability into cost-benefit assessment (13) (14). As a result, standard deviation is used to measure reliability in the few countries where guidelines for cost-benefit assessment include reliability (15) (16) (17).

Both standard deviation and coefficient of variation indicate the spread of travel time around some expected value while implicitly assuming travel times to be symmetrically (normally) distributed. However, a symmetrical distribution probably only exists in the trivial case of free-flow conditions. Therefore, studies have proposed metrics for the skew (λ_skew) and width (λ_var) of the travel time distribution (12). The wider or more skewed the travel time distribution, the less reliable travel times are. In general, a larger λ_skew indicates a higher probability of extreme travel times (in relation to the median), while large values of λ_var in turn indicate that the width of the travel time distribution is large relative to its median value. Previous studies have found that different highway stretches can have very different values for the width and skewness of the travel time, and propose another indicator (UI_r) that combines these two and removes the location specificity of the measure (12). The skewness and width indicators are defined as

λ_skew = (TT90 - TT50) / (TT50 - TT10)
λ_var = (TT90 - TT10) / TT50
UI_r = (λ_var × λ_skew) / L_r

where L_r is the route length and TT_X is the X-th percentile travel time.

Other indicators, especially the Buffer Index (BI), appear to relate particularly well to the way in which travellers make their decisions (18). Buffer time (BT) is defined as the extra time a user has to add to the average travel time so as to arrive on time 95% of the time. It is computed as the difference between the 95th percentile travel time (TT95) and the mean travel time (M). The Buffer Index is then defined as the ratio between the buffer time and the average travel time:

BI = (TT95 - M) / M    (4)

The Buffer Index is useful in users' assessments of how much extra time has to be allowed for uncertainty in travel conditions. It hence answers simple questions such as "How much time do I need to allow?" or "When should I leave?". For example, if the average travel time equals 20 minutes and the Buffer Index is 40%, the buffer time equals 20 × 0.40 = 8 minutes. Therefore, to ensure on-time arrival with 95% certainty, the traveller should allow 28 minutes for the normal trip of 20 minutes.

Planning Time (PT) is another concept used often. It gives the total time needed to plan for an on-time arrival 95% of the time as compared to free-flow travel time. The Planning Time Index (PTI) is computed as the 95th percentile travel time (TT95) divided by the free-flow travel time (TT_ff):

PTI = TT95 / TT_ff    (5)

For example, if PTI = 1.60 and TT_ff = 15 minutes, a traveller should plan 24 minutes in total to ensure on-time arrival with 95% certainty. Because these indicators use the 95th percentile of the travel time distribution as a reference in their definitions, they take extreme travel time delays into account more explicitly.

The Misery Index (MI) calculates the relative distance between the mean travel time of the 20% most unlucky travellers and the mean travel time of all travellers. It is defined as

MI = (M80+ - M) / M    (6)

where M80+ is the mean travel time of the observations above the 80th percentile travel time (TT80).

Probabilistic indicators (Pr) calculate the probability that travel times occur within a specified interval of time. Probabilistic measures are parameterized in the sense that they use a threshold travel time or a predefined time window to differentiate between reliable and unreliable travel times. Probabilistic measures are useful to present policy goals, such as the Dutch target for reliability, according to which "at least 95% of all travel time should not deviate more than 10 minutes from the median travel time" (12). This can be presented by the following equation, which calculates the probability that travel times do not deviate more than λ minutes from the median travel time:

Pr(TT ≤ TT50 + λ)    (7)

The parameter λ can be given any value; for example, λ = 10 minutes for routes less than 50 km in the Netherlands.
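To make the indicator definitions above concrete, the following Python sketch computes them from a list of observed travel times; the linear-interpolation percentile convention, the function names and the synthetic sample values are assumptions for illustration only.

    import statistics as st

    def percentile(xs, p):
        """Linear-interpolation percentile (p in [0, 100]) of a list of numbers."""
        xs = sorted(xs)
        k = (len(xs) - 1) * p / 100.0
        lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
        return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

    def reliability_indicators(tt, tt_freeflow, threshold_min=10.0):
        """Travel times tt in minutes -> dictionary of the indicators defined above."""
        m = st.mean(tt)
        std = st.stdev(tt)                                      # Eq. (2)
        tt10, tt50 = percentile(tt, 10), percentile(tt, 50)
        tt80, tt90, tt95 = percentile(tt, 80), percentile(tt, 90), percentile(tt, 95)
        unlucky = [x for x in tt if x > tt80]                   # the 20% "most unlucky" trips
        return {
            "STD": std,
            "COV": std / m,                                     # Eq. (3)
            "lambda_skew": (tt90 - tt50) / (tt50 - tt10),
            "lambda_var": (tt90 - tt10) / tt50,
            "BT": tt95 - m,                                     # buffer time
            "BI": (tt95 - m) / m,                               # Eq. (4)
            "PT": tt95,                                         # planning time
            "PTI": tt95 / tt_freeflow,                          # Eq. (5)
            "MI": (st.mean(unlucky) - m) / m,                   # Eq. (6)
            # share of trips exceeding TT50 + threshold (complement of Eq. 7)
            "Pr_delay": sum(x > tt50 + threshold_min for x in tt) / len(tt),
        }

    # Illustrative sample of morning-peak travel times (minutes).
    sample = [17, 19, 22, 25, 26, 28, 31, 35, 38, 44, 50]
    for name, value in reliability_indicators(sample, tt_freeflow=17.0).items():
        print(f"{name}: {value:.2f}")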
APPLICATION TO THE FRENCH A6W MOTORWAY
Test Site and Data
Within the framework of the European project EURAMP, traffic impact assessments of coordinated and isolated ramp metering strategies were carried out at a French test site (7).
The motorway section A6W of the French field test comprises 5 on-ramps which are fully equipped with signal lights and traffic flow, occupancy rate and speed measurement stations roughly every 500 meters (Figure 1). The total motorway length is around 20 kilometers. The controlled ramps include two measurement stations each; the upstream one is used to detect surface intersection blocking and the downstream one is used by the on-ramp metering strategy. The main authority in charge of traffic management is the Direction Interdépartementale de l'Ile de France (DIRIF). The DIRIF motorway network covers 600 kilometres (motorways A1 to A13). The level of congestion on the "Ile de France" network (including the Paris ring road) represents 80% of the total congestion on the whole of the French motorway network. The test site is considered the most critical area of the A6W motorway towards Paris. Morning and evening peak congestion is observed over several hours and several kilometers.

The predicted travel time of the main motorway section (20 km) during the morning period (5h-12h) is computed based on data collected every 6 minutes. In order to point out in a comprehensive way the impact of ramp metering for the decision makers and for the users, the travel time calculation is based on the application of the "floating car" algorithm.
The travel time estimation is based on real measured data, in particular the speed measurements.
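A minimal sketch of such a trajectory-based ("floating car") travel time computation is given below; the segment layout, the 6-minute speed grid, the 5 km/h floor for failed detectors and the example speeds are illustrative assumptions rather than the actual A6W measurement configuration.

    def floating_car_travel_time(depart_idx, seg_len_km, speeds_kmh, dt_min=6.0):
        """Follow a virtual vehicle entering at time slot depart_idx.

        seg_len_km  -- length of each detector segment (list, km)
        speeds_kmh  -- speeds_kmh[t][s]: mean speed on segment s during slot t (km/h)
        Returns the trip travel time in minutes, using for every segment the speed
        prevailing at the moment the virtual vehicle actually reaches it.
        """
        t_min = 0.0
        for s, length in enumerate(seg_len_km):
            slot = min(depart_idx + int(t_min // dt_min), len(speeds_kmh) - 1)
            v = max(speeds_kmh[slot][s], 5.0)   # guard against zero/failed detectors
            t_min += 60.0 * length / v
        return t_min

    # Example: 4 segments of 5 km, speeds dropping as the peak builds up.
    speeds = [
        [90, 85, 80, 88],   # 06:00-06:06
        [70, 60, 55, 75],   # 06:06-06:12
        [50, 40, 35, 60],   # 06:12-06:18
        [45, 35, 30, 55],   # 06:18-06:24
    ]
    print(round(floating_car_travel_time(0, [5, 5, 5, 5], speeds), 1), "minutes")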
Data collected was screened in order to discard days when there were major detector failures. Secondly, all days with atypical traffic patterns (essentially weekends and holidays) were discarded. Thirdly, in order to preserve comparability, all days including significant incidents or accidents (according to the incident files provided by the police) were also left out. This screening procedure eventually delivered data for 11 days for the No control and 10 days for ALINEA.
Findings
Reliability Indicators
Table 1 shows the variability in travel time according to different measures on the A6W motorway when active management by ramp metering is not in use (No control) and when ramp metering is used (ALINEA). Applying ramp metering strategy at the five accesses to the A6W motorway improves reliability by 26-52% depending on the indicator used.
Results are consistent in the direction of change with, however, variation in the size of the impact.
The wider the travel time distribution, the less reliable travel times are. As shown in Table 1, overall the spread or variation (STD or COV) of the travel time distribution becomes smaller (and more reliable) when using the ramp metering.
Generally, during congestion, unreliability is predominantly proportional to λ_var, while during congestion onset and dissolve it is predominantly proportional to λ_skew. Our analysis suggests that ramp metering has nearly the same impact on both indicators, i.e., ramp metering improves reliability both at the onset and dissolve of congestion as well as during congestion itself.
The Misery Index (MI) indicates that 20% of the most unlucky travellers experienced a travel time 76% worse than the average travel time when ramp metering was not in use. The index was reduced to 53% when ALINEA was applied.
Probability index (Pr) shows that without active management 28% of users experience more than 10 minutes of delay as compared with the median travel time. Again, ramp metering reduced this to only 18% of users. For the policy maker, the variation in findings presented above can be problematic.
The choice of the "right" measure will remain a subject of debate. Hence, without further analysis, we cannot draw any deeper conclusions on the impact of ramp metering on travel time variability, other than that it seems to reduce variability in general.
The results are also difficult to communicate to decision makers or users of the network. While the operator view on reliability is still important, measures like these are likely not to relate particularly well to the way in which travellers make their decisions. A traveller is more accustomed to making decisions based on time (minutes) rather than in terms of percentages.
In the following, we therefore present results on the average Travel Time (TT), Buffer Time (BT) and Planning Time (PT) in minutes. Table 2 shows that a user who plans to arrive on time to his destination during the long morning peak period on A6W with 95% certainty, has to take into account the mean travel time of 25 minutes and add another 25 minutes as a "buffer" to ensure on-time arrival (when ramp metering is not in use). Hence the actual travel time during the morning peak is doubled due to uncertainty and variability in travel time.
On the contrary, when introducing active management through ramp metering (ALINEA), user planning time is reduced by more than 14 minutes for the trip. The total time needed for the trip declines from 50 to 36 minutes. The main improvement from the user perspective comes indeed from reduced variability in travel time (buffer time reduced by 11 minutes) while the mean travel time only improves by 3 minutes.
Monetary Value of Time Savings Benefit
Indices such as the Probability Index seem rather practical ways to present reliability from the network management point of view. The Probability Index allows for setting targets for reliability against the median travel time, such as is done in the Netherlands. However, while this type of targets may be useful in benchmarking desired performance standards, they are often arbitrarily set. The cost of achieving such levels may unintentionally exceed the benefits derived.
Without (monetised) quantitative criteria, the impact of a policy measure on travel time variability will remain a matter of debate. Especially for policy maker, the challenge is to identify policy options that deliver an improvement in reliability for the lowest cost. In order to take into account reliability in policy impact evaluation, cost-benefit assessment provides consistent framework to assess the monetised benefits of different projects.
At present, reliability is generally not taken into account when evaluating a project.
However, recent findings have provided valuable information on how to value and measure unreliability of travel time, and a number of studies are underway to estimate the value of reliability based on stated and revealed preference research (13) (2) (14). Although these methods are still under development, more practical approaches have already been proposed and used for incorporating reliability into project evaluation (14).
The standard deviation of the travel time distribution can be applied in cost-benefit assessment with relatively little difficulty (2). In the few cases where reliability is formally incorporated into project appraisal, the country guidelines indeed suggest that travel time variability is measured by the standard deviation of travel time (15) (16) (17).
Most available country guidelines refer to the use of the so-called reliability ratio (RR) for valuing reliability. This ratio is defined as the ratio of the value of one minute of standard deviation (i.e. the value of reliability) to the value of one minute of average travel time. These ratios are mainly derived from international case studies, and more specifically from a workshop of international experts convened by AVV, the transport research centre of the Dutch Ministry of Transport. At this meeting, some consensus regarding reasonable reliability ratios for passenger transport was reached: 0.8 for cars and 1.4 for public transport (20) (21).
While we acknowledge that the value of reliability (and the value of time) is user-, location- and time-specific, we use the approach presented above as the best-practice estimation of the reliability benefits of ramp metering on the A6W motorway.
To simplify, the money value of the time savings benefit (TSB) arising from a project is traditionally written as (18)

TSB = VOT × ΔT

where the average number of minutes of time savings (ΔT) is multiplied by the value of time (VOT), typically differing by user group. The current practice in incorporating reliability into cost-benefit analysis suggests that the money value of time savings benefits is then split into the pure journey time improvement and the improvement in the standard deviation of travel time (Δσ). The above equation then becomes

TSB = VOT × ΔT + RR × VOT × Δσ

where RR is given the value 0.8 for passenger transport by car, based on the available case studies valuing reliability in relation to average travel time. Using the equation above and our results in Table 1, we can calculate the total monetary value of the time savings benefit on the A6W motorway. Although our calculation is crude, it illustrates how incorporating reliability into project assessment may change the overall results of any project assessment. According to our results, the largest monetary benefit when applying ramp metering on the A6W motorway does not come from the improvement in reducing congestion (the monetary value of pure journey time equals 3.1 times VOT) but rather from improved reliability, where the monetary value of the improvement in the reliability of travel time equals 3.3 times VOT.
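The split of the time savings benefit into a journey-time part and a reliability part can be reproduced with the short sketch below; the value of time of 10 EUR per hour is an illustrative assumption, while the 3.1-minute mean-time gain, the reduction in STD from 706 s to 463 s and the reliability ratio of 0.8 follow the figures quoted in this section.

    def time_savings_benefit(dt_mean_min, dstd_min, vot_per_min, rr=0.8):
        """Monetary benefit per trip, split into journey-time and reliability parts.

        dt_mean_min -- reduction in mean travel time (minutes)
        dstd_min    -- reduction in the standard deviation of travel time (minutes)
        vot_per_min -- value of time (currency units per minute)
        rr          -- reliability ratio (value of 1 min of STD / value of 1 min of time)
        """
        journey = vot_per_min * dt_mean_min
        reliability = rr * vot_per_min * dstd_min
        return journey, reliability, journey + reliability

    # A6W morning peak: mean travel time down 3.1 min, STD down by (706 - 463) s.
    vot = 10.0 / 60.0                       # assumed 10 EUR/h expressed per minute
    j, r, total = time_savings_benefit(3.1, (706 - 463) / 60.0, vot)
    print(f"journey-time benefit {j:.2f}, reliability benefit {r:.2f}, total {total:.2f} EUR/trip")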
For the assessment of different strategies to improve highway operations results are important. Ignoring reliability from the project appraisal would lead us to underestimate benefits of ramp metering by half in this case. Hence, traditional assessment on the benefits of ramp metering would underestimate greatly the impact of intervention. Incorporating reliability into cost-benefit assessment more than doubles the time savings benefits.
Summary of Findings
Although the above analysis is based on a specific motorway stretch near Paris, France, there are some general lessons to be learnt. First, different existing reliability measures will give results of different magnitude. This is likely to cause confusion amongst policy makers if enough attention is not given to the properties of each measure.
Implications of different measures on policy are, however, beyond the scope of our paper.
Secondly, travel time variability accounts for an important part of the user experience.
Buffer time or Buffer Index seem quite useful in measuring user experience and more importantly in communicating these results both for decision makers and users of the network.
Thirdly, failing to unbundle time saving benefits into the improvement in average travel time and the improvement in the variability of travel time is likely to lead to sub-optimal policy solutions. Our case study clearly shows that benefits derived from congestion management are likely to be higher than traditionally estimated (in our case, benefits on the A6W motorway are more than doubled). When policy makers are choosing between different policy options, failing to account for these benefits might lead to a situation where less optimal solutions are adopted before more cost-effective ones.
CONCLUSIONS AND POLICY IMPLICATIONS
Reliability of travel time is increasingly becoming an important part of transport policies around the world. At the same time, monitoring, measuring and assessing reliability benefits have not been sufficiently taken into account in the national transport policies.
Recalling the key challenges that policy makers face when trying to ensure optimal strategies for improving reliability, we can draw conclusions on policy implications of our results.
First, monitoring variability of travel time in addition to average travel time is obviously important. In the case of A6W motorway near Paris, the buffer needed for the trip equals the average travel time during the morning peak. For the user of the network this means that one needs to double the actual travel time in order to ensure on-time arrival.
Looking at the average travel time alone would obviously leave an important part of the picture hidden. Hence, identifying prevailing reliability levels by monitoring existing variability in travel times plays a major role in understanding how the network performs and, more importantly, how users experience the trip.
Presenting and communicating results in terms of buffer time or planning time seem intuitively understandable. Introducing the planning time concept is very useful both for the user and network manager. It is, after all, the total time spent for the completion of the journey that matters. Reducing the time needed (both the actual travel time and the time needed to ensure on-time arrival) for the trip as a whole is an effective way to present benefits of policy interventions and argue for benefits.
In this paper we argue, through a case study on the French A6W motorway, that reliability can be measured and the related monetary benefits of the policy intervention can be assessed. Incorporating reliability into policy assessment is likely to change priorities of projects and increase benefits, especially in congested situations as shown by the example.
Reliability should be incorporated into planning and assessment of transport policy strategies. As shown with the example, failing to unbundle the impact of the improvement in variability of travel time leads to an underestimation of the benefits derived from the policy.
In our case study, traditional assessment would underestimate significantly the benefits.
Although we acknowledge that while a number of promising techniques are emerging to better incorporate reliability into cost-benefit analysis, the more pragmatic approach presented here is likely to be useful when applied at least as additional information to the traditional cost-benefit analysis.
Finally, many reliability policies are already in use as congestion mitigation policies.
It seems that strategies for improving travel times are useful also in reducing unreliability.
However, we also argue that impacts should be assessed separately for both.
Managing existing capacity better can be a cost-effective tool to improve both average travel time and the variability in travel time. Our results suggest that costs of unreliability indeed rival those of congestion at least at the A6W motorway during the morning peak hours. Reliability should therefore be given the same policy prominence as congestion has been traditionally given.
FIGURE 1 A6W test site.

Figure 2 shows the difference between congestion and reliability by time of day. As the morning peak starts at around 6 am, travel time increases sharply from around 17 minutes to over 35 minutes by 6:42 am. It remains at this level until 9 am and then slowly starts to decline until, at around 10 am, it has reached nearly the pre-peak level. At the congestion onset, the unreliability of travel time also increases rapidly and the buffer time grows from 4 minutes at around 6 am to 14 minutes by 6:42 am. However, contrary to travel time, the buffer time continues to increase (although slowly) all the way until 10 am, finally reaching nearly 22 minutes. This may be explained by the fact that during peak congestion travel is consistently slow, whereas as congestion dissolves travellers are faced with more variable speeds, which affects the travel time distribution, including extreme observations at the tail end of the distribution.

FIGURE 2 Congestion and reliability.
TABLE 1 Results for Travel Time Variability by Different Statistical Indicators

    Category            Acronym                   No control (%)   ALINEA (%)   Gain (%)
    Statistical range   STD (a)                   706              463          34
                        COV                       46               35           25
    Skewness            λ_skew                    137              96           30
                        λ_var                     270              199          26
                        UI_r (/km)                7                3            52
    Buffer Index        BI                        98               62           37
                        PT                        377              270          28
    Tardy Trip          MI                        76               53           31
    Probabilistic       Pr(TT > TT50 + 10 min)    28               18           35

    (a) STD in seconds. Gains in % may differ from the actual numbers due to rounding errors.
TABLE 2 Travel Time, Buffer Time and Planning Time

                 TT (min)   Gain (min)   Gain (%)   BT (min)   Gain (min)   Gain (%)   PT (min)   Gain (min)   Gain (%)
    No-Control   25.4                               25.0                               50.4
    ALINEA       22.3       -3.1         -12.2      13.8       -11.2        44.8       36.2       -14.2        28.2
ACKNOWLEDGEMENT
This paper is partly based on research carried out by INRETS and the Joint Transport Research Centre (JTRC) of the OECD and the International Transport Forum. The views presented here do not, however, necessarily reflect the views of these organisations. | 33,962 | [
"1278852"
] | [
"222120",
"487612"
] |
01483809 | en | [
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01483809/file/978-3-642-35764-0_10_Chapter.pdf | Chun Hu
Wei Jiang
Bruce Mcmillin
Privacy-Preserving Power Usage Control in the Smart Grid
Keywords: Smart grid, power usage control, privacy preservation
Introduction
The smart grid provides utilities and consumers with intelligent and efficient ways to manage electric power usage. To achieve this, the grid needs to collect a variety of data related to energy distribution and usage. This expanded data collection raises many privacy concerns, especially with regard to energy consumers. For example, specific appliances can be identified through their electricity usage signatures from data collected by automated meters (at a frequency much higher than the traditional monthly meter readings) [START_REF] Quinn | Smart Metering and Privacy: Existing Law and Competing Policies[END_REF]. Indeed, research has shown that the analysis of aggregate household energy consumption data over fifteen-minute intervals can determine the usage patterns of most major home appliances [START_REF] Drenker | Nonintrusive monitoring of electric loads[END_REF][START_REF] Quinn | Privacy and the New Energy Infrastructure[END_REF]. This increases the likelihood of discovering potentially sensitive information about consumer behavior and so-called activities of daily life (ADL) [START_REF]Smart Grid Interoperability Panel, Guidelines for Smart Grid Security (Introduction[END_REF].
Since ADL data is generally personal or private, it should be protected from access by unauthorized entities. For example, a malicious entity could analyze the usage patterns of household appliances in energy usage data, and determine when the victim is not home. The malicious entity could then plan and initiate actions without being easily exposed.
A common strategy to prevent power outages is to dynamically adjust the power consumed by households and businesses during peak demand periods. In this case, a utility may determine a threshold for each neighborhood it services. When the total power usage by a neighborhood exceeds the threshold, some households in the neighborhood are required to reduce their energy consumption based on contractual agreements with the utility.
Implementing threshold-based power usage control (TPUC) requires a utility to collect and analyze power usage data from every household in the participating neighborhoods. Consumers are generally provided with incentives such as reduced rates to encourage participation. In return, the consumers must agree to reduce their power consumption when necessary. For example, the household that consumes the most power in a neighborhood may be required to reduce its consumption to bring the total power usage of the neighborhood under the threshold.
Privacy concerns regarding the fine-granular power usage data that is required to be collected and stored by utilities is the primary obstacle to implementing TPUC in the smart grid. To address these concerns, it is important to design sophisticated TPUC protocols that preserve the privacy of both consumers and utilities. This paper describes two distributed, privacy-preserving protocols that enable utilities to efficiently manage power distribution while satisfying the privacy constraints.
Problem Statement
Let A 1 , . . . , A n be n participating consumers or users from a neighborhood. Furthermore, let f TPUC be a privacy-preserving TPUC protocol given by:
f TPUC ({a 1 , . . . , a n } , t) → ({δ 1 , . . . , δ n } , ⊥)
where a 1 , . . . , a n are the average power consumptions during a fixed time interval by consumers A 1 , . . . , A n , respectively; and t is a threshold determined by the utility for the neighborhood. The protocol returns δ i to consumer A i and nothing to the utility. The δ 1 , . . . , δ n values are the required power consumption adjustments for the consumers such that t ≥ n i=1 (a iδ i ). When t ≥ n i=1 a i , every δ i is equal to zero, i.e., no power usage adjustments are required. Note that not all the consumers are required to make adjustments at a given time. In general, the specific adjustments that are made depend on the strategy agreed upon by the consumers and the utility.
This paper considers two common power adjustment strategies:
Maximum Power Usage: When the average total energy consumption by a neighborhood over a fixed time interval or round (denoted by a = n i=1 a i ) exceeds a predefined threshold t, then the consumer who has used the most power during previous round is asked to reduce his or her power consumption. After the next round, if the new a that is computed is still greater than t, then the newly-found maximum energy consumer is asked to reduce his or her usage. This process is repeated until t ≥ a. Note that the a value is computed at the end of each round. During each round, the consumer who has used the most power can reduce his or her consumption without much discomfort by shutting down one or more household appliances (e.g., washer and dryer) or by adjusting the thermostat temperature setting a few degrees.
Individual Power Usage: If the average total energy consumption a is over the threshold t, then the consumption of every consumer in the neighborhood is reduced based on his or her last usage a i . The least amount of energy reduction δ i for each user A i is determined by the following equation:
δ_i = (a_i / a)(a - t),   where a = Σ_{i=1}^{n} a_i    (1)

where δ i is a lower bound on the amount of power usage that the user A i should cut, and a is the average total power usage during the last time interval. After the adjustments, the average total power usage falls below t. Thus, under this strategy, the protocol has only one round of execution.
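For reference, the following sketch evaluates the adjustments of Equation (1) in the clear, i.e., without any of the privacy protection this paper is about; the neighborhood usage values and threshold are illustrative.

    def individual_adjustments(usages, threshold):
        """Return the reductions δ_i of Eq. (1); all zeros if the total is under t."""
        total = sum(usages)
        if total <= threshold:
            return [0.0] * len(usages)
        return [a_i / total * (total - threshold) for a_i in usages]

    # Illustrative neighborhood of five consumers (kW averaged over the interval).
    usages = [3.2, 1.1, 4.7, 2.0, 2.5]
    deltas = individual_adjustments(usages, threshold=11.0)
    print([round(d, 3) for d in deltas])
    print("total after adjustment:", round(sum(a - d for a, d in zip(usages, deltas)), 3))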
Since the collection of fine-granular power usage data by a utility can compromise personal privacy, it is important to prevent the disclosure of such data. Therefore, an f TPUC protocol should satisfy two privacy-preserving requirements:
Consumer Privacy: The average power usage data a i of a consumer A i should not be disclosed to any other consumer in the neighborhood or to the utility during the execution of an f TPUC protocol.
Utility Privacy: The threshold t should not be disclosed to the consumers of a neighborhood during the execution of an f TPUC protocol.
The utility privacy requirement must be met because an entity who knows the t values for a number of neighborhoods serviced by a utility could infer the operational capacity and the energy supply distribution of the utility. The public disclosure of this information can cause the utility to lose its competitive advantage. We adopt security definitions from the domain of secure multiparty computation [START_REF] Yao | Protocols for secure computations[END_REF][START_REF] Yao | How to generate and exchange secrets[END_REF] to develop the rigorous privacy-preserving TPUC protocols described in this paper. A naive -albeit secure -way to implement an f TPUC protocol is to use a trusted third party (TTP). As shown in Figure 1, each consumer A i sends his or her a i value to a TTP while the utility sends its t value to the TTP. Having received these values, the TTP compares t with a = n i=1 . If t < a, the TTP computes each δ i value and sends it to consumer A i .
This TTP-based f TPUC protocol easily meets the privacy-preserving requirement. However, such a TTP rarely exists in practice. Therefore, it is necessary to develop f TPUC protocols that do not use a TTP while achieving a similar degree of privacy protection provided by a TTP protocol.
Related Work
This section briefly reviews the related work in the field. In particular, it discusses privacy issues in the smart grid, and presents key security definitions from the domain of secure multiparty computation.
Privacy issues in the smart grid are highlighted in [START_REF]Smart Grid Interoperability Panel, Guidelines for Smart Grid Security (Introduction[END_REF]. Our work primarily focuses on one of these issues, namely, protecting the release of fine-granular energy usage data in a smart grid environment. Quinn [START_REF] Quinn | Smart Metering and Privacy: Existing Law and Competing Policies[END_REF] has observed that power consumption data collected at relatively long intervals (e.g., every fifteen or thirty minutes) can be used to identify the use of most major household appliances. Indeed, data collected at fifteen-minute intervals can be used to identify major home appliances with accuracy rates of more than 90 percent [START_REF] Quinn | Privacy and the New Energy Infrastructure[END_REF]. Furthermore, the successful identification rate is near perfect for large two-state household appliances such as dryers, refrigerators, air conditioners, water heaters and well pumps [START_REF] Drenker | Nonintrusive monitoring of electric loads[END_REF]. Lisovich, et al. [START_REF] Lisovich | Inferring personal information from demand-response systems[END_REF] describe the various types of information that can be inferred from fine-granular energy usage data.
In this paper, privacy is closely related to the amount of information disclosed during the execution of a protocol. Information disclosure can be defined in several ways. We adopt the definitions from the domain of secure computation, which were first introduced by Yao [START_REF] Yao | Protocols for secure computations[END_REF][START_REF] Yao | How to generate and exchange secrets[END_REF]. The definitions were subsequently extended to multiparty computation by Goldreich, et al. [START_REF] Goldreich | How to play any mental game[END_REF].
We assume that the protocol participants are "semi-honest." A semi-honest participant follows the rules of a protocol using the correct inputs. However, the participant is free to later use what he or she sees during the execution of the protocol to compromise privacy (or security). Interested readers are referred to [START_REF] Goldreich | The Foundations of Cryptography, Volume II: Basic Applications[END_REF] for detailed definitions and models.
The following definition formalizes the notion of a privacy-preserving protocol with semi-honest participants.
Definition. Let T i be the input of party i, Π_i(π) be party i's execution image of the protocol π, and s be the result computed from π. The protocol π is secure if Π_i(π) can be simulated from ⟨T i , s⟩ such that the distribution of the simulated image is computationally indistinguishable from Π_i(π).
Informally, a protocol is privacy-preserving if the information exchanged during its execution does not leak any knowledge regarding the private inputs of any participants.
Privacy-Preserving Protocols
We specify two privacy-preserving TPUC protocols: f 1 TPUC and f 2 TPUC for the maximum power usage strategy and the individual power usage strategy, respectively. We adopt the same notation as before: A 1 , . . . , A n denote n utility consumers in a participating neighborhood, and a 1 , . . . , a n denote the average power usage during a fixed time interval set by utility C. Additionally, a = Σ_{i=1}^{n} a_i and a m ∈ {a 1 , . . . , a n } denotes the maximum individual energy usage of consumer A m ∈ {A 1 , . . . , A n }. Without loss of generality, we assume that a m is unique and a 1 , . . . , a n are integer values. Since a 1 , . . . , a n can be fractional values in the real world, the values have to be scaled up to the nearest integers before the protocols can be used. After the results are returned by the protocols, they are adjusted by the appropriate scaling factors to obtain the final values.
The privacy-preserving requirements (consumer privacy and utility privacy) described above are difficult to achieve without using a trusted third party. Consequently, we relax the privacy-preserving requirements slightly in defining the protocols. In particular, the two privacy-preserving requirements are specified as follows:
Maximum Power Usage: Only a and a m can be disclosed to A 1 , . . . , A n .
Individual Power Usage: Only a can be disclosed to A 1 , . . . , A n .
Note that these relaxed requirements permit the design of efficient protocols.
The f 1 TPUC and f 2 TPUC protocols require several primitive protocols as subroutines. These primitive protocols are defined as follows:
Secure Sum(a 1 , . . . , a n ) → a This protocol has n (at least three) participants. Each participant A i has an a i value, which is a protocol input. At the end of the protocol, a is known only to A 1 .
Secure Max(a 1 , . . . , a n ) → a m This protocol has n participants. Each participant A i has an a i value, which is a protocol input. At the end of the protocol, a m is known to every participant, but a i is only known to A i.
Secure Compare(a, t) → 1 if a > t and 0 otherwise This protocol has two participants. At the end of the protocol, both participants know if a > t.
Secure Divide((x 1 , y 1 ), (x 2 , y 2 )) → (x 1 + x 2 )/(y 1 + y 2 ) This protocol has two participants. Participants 1 and 2 submit the private inputs (x 1 , y 1 ) and (x 2 , y 2 ), respectively. At the end of the protocol, both participants know (x 1 + x 2 )/(y 1 + y 2 ).
All these primitive protocols have privacy-preserving properties because the private input values are never disclosed to other participants.
Implementation
The Secure Sum protocol can be implemented in several ways. In this paper, we adopt a randomization approach, which yields the protocol specified in Figure 2:

1. A 1 randomly selects r ∈ {0, N - 1}, computes s 1 = a 1 + r mod N and sends s 1 to A 2 .
2. Each A i (1 < i < n) receives s i-1 , computes s i = s i-1 + a i mod N and sends s i to A i+1 .
3. A n receives s n-1 , computes s n = s n-1 + a n mod N and sends s n to A 1 .

Note that N is a very large integer. Because r is randomly chosen, s 1 is also a random value from the perspective of A 2 . Therefore, A 2 is not able to discover a 1 from s 1 . Following the same reasoning, a 1 , . . . , a n are never disclosed to the other consumers during the computation process. Because A 1 is the only participant who knows r, only A 1 can derive a correctly.
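The ring of Figure 2 can be simulated locally as in the sketch below; the modulus and the usage values are illustrative, and the intermediate sums s_i are exactly the values the other parties would observe, each of which looks uniformly random to them.

    import random

    def secure_sum(usages, N=2**64):
        """Simulate the Figure 2 ring protocol; only A_1 learns the total."""
        r = random.randrange(N)                 # A_1's secret blinding value
        s = (usages[0] + r) % N                 # step 1: A_1 -> A_2
        for a_i in usages[1:]:                  # steps 2-3: each A_i adds its value
            s = (s + a_i) % N
        return (s - r) % N                      # A_1 removes r to recover the total

    usages = [312, 95, 478, 204, 251]           # scaled-up average usages
    assert secure_sum(usages) == sum(usages)
    print(secure_sum(usages))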
The remaining three primitive protocols are straightforward to implement. The Secure Max protocol is implemented using the steps given in [START_REF] Xiong | Topk queries across multiple private databases[END_REF]. The Secure Compare protocol is implemented using the generic solution given in [2]. The Secure Divide protocol is implemented using the methods outlined in [1,[START_REF] Blanton | Empirical Evaluation of Secure Two-Party Computation Models[END_REF].
f 1 TPUC Protocol
The f 1 TPUC protocol is readily implemented using the primitive protocols. Figure 3 presents the main steps in the protocol.
Since A 1 has the value a, the Secure Compare protocol in Step 2 can only be executed between consumer A 1 and the utility. However, any consumer can become A 1 ; this is accomplished via a leader election process among the consumers that determines who becomes A 1 . Alternatively, A 1 can be chosen at random before each execution of the protocol.
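Figure 3 itself is not reproduced in this text, but the maximum-power-usage loop it implements can be sketched as follows; the secure primitives are replaced by plain in-the-clear stand-ins, and the rule that the selected consumer sheds 20% of its load is an illustrative assumption.

    def f1_tpuc_round(usages, threshold, shed_fraction=0.2):
        """One round of the maximum power usage strategy (insecure stand-in).

        In the real protocol the total, the comparison with t and the maximum are
        obtained through Secure_Sum, Secure_Compare and Secure_Max, so that no
        individual a_i is revealed; here they are computed in the clear.
        """
        total = sum(usages)                     # Secure_Sum
        if total <= threshold:                  # Secure_Compare with the utility
            return usages, False
        m = max(range(len(usages)), key=lambda i: usages[i])   # Secure_Max
        usages = list(usages)
        usages[m] -= shed_fraction * usages[m]  # A_m reduces its consumption
        return usages, True

    usages, t = [3.2, 1.1, 4.7, 2.0, 2.5], 11.0
    changed = True
    while changed:
        usages, changed = f1_tpuc_round(usages, t)
    print([round(u, 2) for u in usages], "total:", round(sum(usages), 2))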
f 2 TPUC Protocol
In the f 2 TPUC protocol, A 1 is also responsible for the Secure Sum and Secure Compare operations. An additive homomorphic probabilistic public key encryption (HEnc) system is used as a building block in the protocol. The private key is only known to the utility and the public key is known to all the participating consumers.
Let E pk and D pr be the encryption and decryption functions in an HEnc system with public key pk and private key pr. Without pr, it is not possible to discover x from E pk (x) in polynomial time. (Note that, when the context is clear, the subscripts pk and pr in E pk and D pr are omitted.) The HEnc system has the following properties:
The encryption function is additive homomorphic, i.e., E pk (x 1 ) × E pk (x 2 ) = E pk (x 1 + x 2 ).
Given a constant c and E pk (x), E pk (x) c = E pk (c • x).
The encryption function has semantic security as defined in [START_REF] Goldwasser | The knowledge complexity of interactive proof systems[END_REF], i.e., a set of ciphertexts does not provide additional information about the plaintext to an unauthorized party, and E pk (x) ≠ E pk (x) with very high probability.
The domain and the range of the encryption system are suitable.
Any HEnc system is applicable, but in this paper, we adopt Paillier's public key homomorphic encryption system [START_REF] Paillier | Public key cryptosystems based on composite degree residuosity classes[END_REF] due to its efficiency. Informally, the public key in the system is (g, N), where N is obtained by multiplying two large prime numbers and g ∈ Z*_{N^2} is chosen randomly. To implement the f 2 TPUC protocol, recall Equation (2):

δ i = (a i / a)(a - t) = a i - (a i · t)/a    (2)

According to this equation, each consumer A i needs to calculate (a i · t)/a jointly with the utility C in such a way that a i is not disclosed to C and t is not disclosed to A i . We adopt the Secure Divide primitive and an HEnc system to solve this problem.
Also, we assume that E(t) is initially broadcast by the utility. Figure 4 presents the main steps in the f 2 TPUC protocol:

1. A 1 obtains a ← Secure Sum(a 1 , . . . , a n )
2. A 1 and utility C jointly perform the Secure Compare protocol. If Secure Compare(a, t) = 1, then
   (a) A 1 randomly selects r from {0, N - 1}
       - Set y 1 = N - r and y 2 = a + r mod N
       - Send y 1 to A 2 , . . . , A n and y 2 to C
   (b) Each A i (2 ≤ i ≤ n) randomly selects r i from {0, N - 1}
       - Compute E(t)^{a i} to get E(a i · t)
       - Set x 1i = N - r i and s i = E(a i · t) × E(r i ) = E(a i · t + r i )
       - Send s i to C
   (c) Utility C sets x 2i = D(s i ) for 2 ≤ i ≤ n
   (d) Each A i (2 ≤ i ≤ n), with input (x 1i , y 1 ), and the utility C, with input (x 2i , y 2 ), jointly execute the Secure Divide protocol; A i obtains κ i = (x 1i + x 2i )/(y 1 + y 2 ) and sets δ i = a i - κ i

A 1 is the designated consumer in the participating neighborhood, who is responsible for computing a and distributing N - r to the other consumers and a + r mod N to the utility. Note that the value of a computed in Step 1 should not include the value a 1 (this is easily achieved via a small modification to the Secure Sum protocol) and A 1 does not adjust his or her energy consumption. This prevents the disclosure of t to A 1 : for instance, if A 1 obtained a δ 1 , it could derive t based on Equation (2). To ensure fairness, A 1 can be randomly selected from among the participating consumers before each execution of the protocol.
The purpose of Step 2(a) is to hide the value a from the utility and the other consumers. Since r is chosen randomly, y_1 and y_2 are randomly distributed in {0, N - 1}. As a result, the other consumers A_2, ..., A_n cannot discover a from y_1; similarly, the utility cannot discover a from y_2.

The goal of Step 2(b) is to hide a_i from the utility and t from A_i. Since the encryption scheme is semantically secure, the consumers cannot learn anything about t from E(t) without the private key. In addition, because r_i is chosen randomly, the x_2i value computed in Step 2(c) does not reveal any information regarding a_i.

The operations performed in Steps 2(b) and 2(c) rely on the additive homomorphic property of the encryption function E. Since x_1i + x_2i = a_i · t and y_1 + y_2 = a (both modulo N), Secure Divide yields κ_i = (a_i · t) / a. Therefore, the protocol correctly returns δ_i for each A_i, except for A_1.
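Continuing the toy Paillier sketch above (reusing keygen, encrypt, decrypt, pk and sk), the masking performed in Steps 2(a)-2(c) can be traced numerically. The concrete values of a_i, t and a below are arbitrary assumptions chosen for the example, and Secure Sum, Secure Compare and Secure Divide are treated as black boxes.

N = pk[0]                                 # public modulus from the sketch above
a_i, t, a = 9, 120, 150                   # assumed consumer load, utility target and aggregate load

# Step 2(a): A_1 splits a into shares y_1 (sent to the consumers) and y_2 (sent to the utility)
r = random.randrange(0, N)
y1, y2 = N - r, (a + r) % N
assert (y1 + y2) % N == a

# Step 2(b): A_i blinds a_i * t under the utility's broadcast ciphertext E(t)
E_t = encrypt(pk, t)
r_i = random.randrange(0, N)
x1i = N - r_i
s_i = pow(E_t, a_i, N * N) * encrypt(pk, r_i) % (N * N)   # E(t)^a_i * E(r_i) = E(a_i*t + r_i)

# Step 2(c): the utility decrypts s_i with its private key to obtain its share x_2i
x2i = decrypt(sk, s_i)
assert (x1i + x2i) % N == a_i * t

# Secure Divide((x1i, y1), (x2i, y2)) would now return kappa_i = (a_i * t) / a,
# from which A_i computes delta_i = a_i - kappa_i.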
Protocol Efficiency and Privacy
This section analyzes the complexity and privacy properties of the protocols.
Protocol Complexity
Since the Secure Sum protocol only performs additions and each participant contributes only one input, the protocol is very efficient. The complexity of the Secure Compare protocol depends on the number of bits needed to represent the larger of a and t; once this number of bits is fixed, the complexity of Secure Compare is constant. The main operation in the Secure Max protocol is the comparison of two numbers, so the protocol itself is very efficient. In the case of a neighborhood with 1,000 consumers, if the communication delay is negligible, the running time of the f1TPUC protocol is just a few seconds.

According to [1], the computational cost of the Secure Divide protocol is bounded by O(log l), where l is the number of bits used to represent the larger of a_i · t and a. Because l = 20 is generally sufficient in our problem domain, the computational cost of Secure Divide is constant and very small. If the number of consumers in the neighborhood is small and the utility can execute the Secure Divide protocol with each consumer concurrently, then the f2TPUC protocol can also be completed in a few seconds. Based on this analysis, it is reasonable for the utility to set up a fifteen- or thirty-minute interval between executions of the protocols.
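To make the cost argument for Secure Sum concrete, the sketch below shows a common ring-based construction in which the initiator adds a random mask, every participant adds exactly one value, and the mask is removed at the end. It is simulated here in a single loop for brevity (in the real protocol the running value is passed from consumer to consumer), and the exact variant used in Figure 2 may differ in its details.

import random

def ring_secure_sum(private_inputs, modulus):
    # The initiator's random mask hides every running partial sum from the other parties
    mask = random.randrange(modulus)
    running = mask
    for a_j in private_inputs:             # each participant adds exactly one input (additions only)
        running = (running + a_j) % modulus
    return (running - mask) % modulus      # the initiator removes the mask to recover the sum

assert ring_secure_sum([12, 7, 30, 1], 10**6) == 50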
Protocol Privacy
With regard to the f1TPUC protocol, a is disclosed to A_1 and a_m is disclosed to all the participating consumers. Since a is aggregated information, the disclosure of a can hardly cause any privacy violations. Although a_m is disclosed, no one can link a_m to a particular consumer. Thus, the disclosure risk of the f1TPUC protocol is not significant.
The f2TPUC protocol only discloses a to A_1, so it is more privacy-preserving than the f1TPUC protocol. However, because the Secure Divide protocol has to be executed between every consumer and the utility, it is less efficient than f1TPUC. Therefore, depending on whether efficiency is more important than privacy, one protocol is more or less applicable than the other in a real-world situation.
Conclusions
Intelligent power usage control in the smart grid requires utilities to collect fine-grained energy usage data from individual households. Since this data can be used to infer information about the daily activities of energy consumers, it is important that utility companies and their consumers employ privacy-preserving protocols that facilitate intelligent power usage control while protecting sensitive data about individual consumers.
The two privacy-preserving protocols described in this paper are based on energy consumption adjustment strategies that are commonly employed by utilities. Although the protocols are not as privacy-preserving as the ideal model that engages a trusted third party, they are efficient and limit the amount of information disclosed during their execution. Our future research will focus on refining these protocols and will develop privacy-preserving protocols for other types of energy usage control.
Figure 1. TTP-based fTPUC protocol.
Figure 2. Secure Sum protocol.
Figure 3. f1TPUC protocol.
Figure 4. f2TPUC protocol.
Acknowledgements
The research efforts of the first two authors were supported by the Office of Naval Research under Award No. N000141110256 and by the NSF under Grant No. CNS 1011984. The effort of the third author was supported in part by the Future Renewable Electric Energy Distribution Management Center, an NSF Engineering Research Center, under Grant No. EEC 0812121; and by the Missouri S&T Intelligent Systems Center. | 23,386 | [
"761922"
] | [
"487654",
"487654",
"487654"
] |
01483864 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01483864/file/978-3-642-28827-2_11_Chapter.pdf | Motheo Lechesa
Lisa Seymour
email: lisa.seymour@uct.ac.za
Joachim Schuler
email: joachim.schuler@hs-pforzheim.de
ERP Software as Service (SaaS): Factors Affecting Adoption in South Africa
Keywords: ERP adoption, SaaS, South Africa, cloud computing
Within the cloud computing hype, ERP SaaS is receiving more focus from ERP vendors, with market leader SAP announcing SAP Business ByDesign, its new ERP SaaS solution. SaaS is a new approach to delivering software and has had proven success with CRM systems such as Salesforce.com. The appeal of SaaS is driven by, amongst other things, a lower Total Cost of Ownership and faster implementation periods. However, the rate at which ERP SaaS is being adopted is low in comparison to other SaaS applications such as CRM or Human Resource systems, hence the need to establish the reasons for this low adoption. Consequently, the purpose of this research was to determine barriers that affect the adoption of ERP SaaS in South Africa. Using interviews and qualitative data analysis, this study developed a model that explains the factors affecting the adoption of ERP SaaS. Network limitations, customisation, security and cost concerns were raised as dominant factors. The research concludes by suggesting that the adoption of ERP SaaS should increase over time as the technology matures.
Introduction
Enterprise Resource Planning (ERP) Systems have been adopted by many organisations as a way to improve efficiency and to achieve strategic goals set by management [START_REF] Helo | Expectation and reality in ERP implementation: consultant and solution provider perspective[END_REF]. The major benefit associated with ERP systems is their ability to integrate organisational data and processes to achieve improved efficiency and productivity levels [START_REF] Verduyn | Drive Business performance with ERP[END_REF]. Despite their usefulness, ERP system implementations are associated with high implementation costs driven by the cost of hardware, software and consultancy as well as high maintenance costs [START_REF] Helo | Expectation and reality in ERP implementation: consultant and solution provider perspective[END_REF], [START_REF] Verduyn | Drive Business performance with ERP[END_REF].
As organisations continue to seek ways to reduce costs and utilise available technology to achieve desired objectives, alternative ways of implementing ERP systems have had to be explored [START_REF] Hofmann | ERP is Dead, Long Live ERP[END_REF]. In particular, the Software as a Service (SaaS) model has emerged as a real alternative to implementing in-house ERP systems. ERP SaaS has been implemented successfully in Europe, some parts of North America and in Asia Pacific Countries [START_REF] Guo | A survey of Software as a Service Delivery Paradigm[END_REF].
The major benefit of ERP SaaS is low implementation costs and a flexible pricing model that does not require a major capital outlay. However, in comparison to other SaaS applications, the rate at which ERP SaaS is being adopted is low [START_REF] Benlian | Drivers of SaaS-Adoption-An empirical study of different application types[END_REF]. The reason for slow adoption of ERP SaaS varies from issues around security and customisation to integration and other concerns [START_REF] Burrel | New engagement approach in Europe by Fujitsu Services[END_REF].
The purpose of this paper is to determine factors that affect the adoption of ERP SaaS in South Africa. This paper should be useful to potential clients of ERP SaaS, to assess the factors that they should consider in deciding whether to adopt ERP SaaS or not. For Vendors, this paper shall provide an understanding of the factors that their clients consider in making decisions about whether to adopt ERP SaaS or not. This work also contributes to the research domain by providing issues central to ERP SaaS adoption.
Literature Review
Software as a Service
Software as a Service (SaaS) is explained as a business model that allows the vendor to manage software and deliver it as a service over the internet [START_REF] Xin | Software-as-a-Service Model: Elaborating Client-Side Adoption Factors[END_REF]. SaaS first came into the scene in the late 1990s when discussions about turning software into a service emerged and Salesforce.com launched CRM SaaS [START_REF] St | Software-as-a-Service (SaaS). Put the Focus on the KM/Knowledge Services Core Function[END_REF]. The SaaS architecture is multi-tenant based with SaaS application vendors owning and maintaining it [START_REF] Kaplan | SaaS: Friend or Foe?[END_REF], [START_REF] Xin | Software-as-a-Service Model: Elaborating Client-Side Adoption Factors[END_REF]. Contrary to the ASP model, SaaS provides a much better value creation through resource sharing, standardisation of processes and centralised data.
The National Institute of Standards and Technology (NIST) and Cloud Security Alliance (CSA) have described cloud computing as "a model for enabling convenient, on demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service vendor interaction" [START_REF]ISACA: Cloud Computing: Business Benefits with Security, governance and assurance perspectives[END_REF]. The cloud model incorporates Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) [START_REF]ISACA: Cloud Computing: Business Benefits with Security, governance and assurance perspectives[END_REF], [START_REF] Vaquero | The Challenge of Service Level Scalability for the Cloud[END_REF].
As technology involved in cloud computing matures, SaaS continues to attract interest across a broad spectrum of stakeholders and is gaining momentum across a spectrum of applications. The vendor and the user alike benefit from the adoption of SaaS [START_REF] Hai | SaaS and Integration best practices[END_REF]. On the part of the vendor, SaaS reduces the maintenance and upgrading costs of the system [START_REF] Liao | An anatomy to SaaS Business Mode Based on Internet[END_REF]. In addition, SaaS vendors gain competitive edge over other software vendors since they provide faster upgrades and patches [START_REF] Liao | An anatomy to SaaS Business Mode Based on Internet[END_REF].
On the part of the Clients, a SaaS application allows a software implementation project to be started on a pay-as-you go basis, scaling on business needs. This saves in upfront costs because payments for software and hardware are paid over a period of time [START_REF] Dubey | Delivering Software as a Service[END_REF]. So, SaaS helps to focus on the core areas of the business without too much concern on information technology issues [START_REF] St | Software-as-a-Service (SaaS). Put the Focus on the KM/Knowledge Services Core Function[END_REF]. SaaS implementations are also shorter compared to installed applications [START_REF] Liao | An anatomy to SaaS Business Mode Based on Internet[END_REF].
ERP SaaS
ERP SaaS means delivering an ERP system "as a service" rather than, as in the past, implementing it "on premise" as a product bought by the client. The most important differences between ERP SaaS and installed in-house ERP applications are that ERP SaaS is accessed through the internet and that the application and data are under the control of the service provider, while installed applications are offered as a product and accessed and controlled from the customer's location. Moreover, payment for the software services takes the form of subscriptions, paid for example per user on a monthly basis [START_REF] Dubey | Delivering Software as a Service[END_REF].
Although ERP SaaS has been implemented in Europe and in the United States of America (USA), the rate at which it is being adopted is lower in comparison to other software applications such as CRM and Human Capital Management (HCM) [START_REF] Lucas | The State of Enterprise Software Adoption in Europe[END_REF]. The reason for this slow adoption is that most organisations are not yet ready to trust the SaaS model with business-critical or core applications.
An Aberdeen Group survey [START_REF]AberdeenGroup: Trends and Observations[END_REF] carried out on over 1200 companies operating mainly in Europe found ERP SaaS deployments to be less prevalent compared to other SaaS deployments. Although ERP is lagging behind other applications in terms of SaaS based applications there seems to be a general consensus that ERP SaaS is gaining momentum [START_REF] Montgomery | Magic Quadrant for ERP for Product-centric midmarket companies[END_REF]. To increase the adoption rate of ERP SaaS, issues impeding its adoption need be addressed.
Barriers to adoption of ERP SaaS
In general the issues that negatively impact on the adoption of ERP SaaS model are similar to concerns raised for adopting cloud computing. These issues include security and privacy, support, interoperability and compliance as well as loss of control over data and other computing resources [START_REF] Kim | Cloud Computing: Today and Tomorrow[END_REF].
Customisation. Although SaaS is implemented in a quicker and easier manner than "on-premise", in most cases, it does so at the expense of configuration and customisation, thereby losing flexibility [START_REF] Sahoo | IT Innovations: Evaluate, Strategize, and Invest[END_REF], [START_REF] Xin | Software-as-a-Service Model: Elaborating Client-Side Adoption Factors[END_REF]. A balance needs to be maintained when it comes to customisation and configuration of SaaS products for both the vendor and the client. Flexibility is necessary as an ERP SaaS system or any system for that matter may require some kind of configuration or customisation in order to address or cater for the unique aspects of the adopting organisation [START_REF] Guo | A survey of Software as a Service Delivery Paradigm[END_REF]. Hence, the SaaS architecture should support configurability because it is a crucial requirement for clients in order to differentiate their business from competition [START_REF] Nitu | Configurability in SaaS (Software as a Service) Applications[END_REF]. Otherwise, the SaaS model may experience a low adoption rate [START_REF] Nitu | Configurability in SaaS (Software as a Service) Applications[END_REF]. Hence there is a challenge for vendors to offer ERP SaaS solutions capable of delivering customisable source code for more complex ERP systems [START_REF] Guo | A survey of Software as a Service Delivery Paradigm[END_REF].
Security and Privacy. Security is always an issue when it comes to any software implementation [START_REF]ISACA: Cloud Computing: Business Benefits with Security, governance and assurance perspectives[END_REF]. Concerns over security have been raised as a concern with the SaaS model with many potential clients reticent to trust third parties with their data, especially sensitive corporate data [START_REF] Armbrust | Above the clouds: A Berkeley View of Cloud Computing[END_REF]. Yet, [START_REF] Armbrust | Above the clouds: A Berkeley View of Cloud Computing[END_REF] argue that there is no reason why services over the cloud can't be as secure as those provided by in-house IT.
Cost. The issue of ERP SaaS cost is twofold: the once-off implementation cost and the annual subscription [START_REF] Godse | An approach to selecting Software as a Service (SaaS) Product[END_REF]. The implementation cost of ERP SaaS may include initial consulting and configuration costs, while ongoing subscription costs include hardware, software and support personnel [START_REF] Godse | An approach to selecting Software as a Service (SaaS) Product[END_REF]. The most appealing SaaS feature is the lower total cost of ownership (TCO), with the majority of articles referring to SaaS benefits mentioning the lower TCO. Burrel [START_REF] Burrel | New engagement approach in Europe by Fujitsu Services[END_REF] mentioned that "SaaS has been touted to major organisations as the new cure-all that solves the CIO's cost enigma at a stroke by significantly reducing implementation costs, outsourcing developments, slashing consultancy costs and devolving infrastructure maintenance and servicing".
Regulation. The legal framework in the form of legislation and standards within which organisations operate has become very stringent [START_REF] Jaekel | Software as a Service (SaaS) with Sample Applications[END_REF]. In this respect legal issues about data protection, confidentiality, copyright, audit and controls should be considered by both the potential user and vendors alike [START_REF] Yang | Where are we at with cloud computing?[END_REF]. In particular the enterprise clients need to ensure that the technology they adopt satisfies the legal requirements in terms of the data it provides [START_REF] Kim | Cloud Computing: Today and Tomorrow[END_REF]. An example is legal requirements in many countries prohibiting SaaS vendors from keeping customer data and copyright material outside the national boundaries within which those clients reside, such as the USA Patriot Act [START_REF] Armbrust | Above the clouds: A Berkeley View of Cloud Computing[END_REF].
Network Limitations. Linked with network limitations is the concern about system availability. Organisations often require 100% availability of their systems, especially of systems such as ERP that are considered critical [START_REF] Kim | Cloud Computing: Today and Tomorrow[END_REF]. To ensure that the ERP system itself and the network provide 100% availability, or very close to it, it is vitally important that service level agreements (SLAs) are entered into between vendors and clients [START_REF] Kim | Cloud Computing: Today and Tomorrow[END_REF].
Application and Organisation specific issues. Drivers of SaaS vary depending on the characteristics of the application that is considered for SaaS outsourcing. Since ERP systems possess strategic significance for organisations and are not highly standardised, they rank among the applications with the lowest SaaS adoption rates [START_REF] Benlian | Drivers of SaaS-Adoption-An empirical study of different application types[END_REF]. Furthermore, there are some organisations for which, because of their nature, SaaS is not well suited [START_REF] Mangiuc | Software: From product to service the evolution of a model[END_REF]. For instance, organisations which base their survival on maintaining secret or confidential data are more likely not to adapt SaaS. Therefore the speed at which ERP SaaS is adopted may differ depending on the nature of the organisation or industry.
Integration. The integration of SaaS applications with other in-house applications or with other SaaS vendors is still a big challenge, to the extent that the cost of integration can be 30-45% of the overall SaaS implementation [START_REF] Hai | SaaS and Integration best practices[END_REF]. Another aspect of the integration issue relates to the fact that there are no interoperability standards within the cloud computing arena, which creates a possible lock-in scenario for clients [START_REF] Krishnan | TCS and Cloud Computing[END_REF] where they are not able to use integrated SaaS applications provided and supported by distinct vendors, as the two solutions would not be interoperable [START_REF] Guo | A survey of Software as a Service Delivery Paradigm[END_REF]. This emphasises the importance for vendors to design and develop SaaS integration by first reviewing business processes, integration design and implementation, data migration, testing, production and monitoring, and then identifying areas where integration may be a concern with the aim of addressing deficiencies [START_REF] Hai | SaaS and Integration best practices[END_REF], [START_REF] Sahoo | IT Innovations: Evaluate, Strategize, and Invest[END_REF]. Integration-as-a-service solutions are beginning to find ways of simplifying integration issues on a cloud-to-cloud platform, while SaaS vendors are addressing in-house integration issues by pre-building integration within the SaaS solution with the aim of reducing complexity and cost [START_REF] Hai | SaaS and Integration best practices[END_REF].
Summary of barriers.
The reviewed literature suggests that ERP SaaS is an attractive solution for organisations that have insufficient resources to consider on-premise ERP adoption. Yet there is limited research into SaaS ERP and adoption has been slow. A review of the literature identified that the major barriers to adoption of SaaS applications include concerns around security and privacy; regulatory compliance; limited customisation; network limitations; cost clarity; integration concerns; and application and organisation specific issues. However, due to a lack of empirical research it is not known whether these adoption concerns are valid for SaaS ERP. There also seems to be a lack of frameworks or theory supporting these concerns. Hence the need for more understanding, and for this research.
Research Framework
The purpose of this research was to study the factors that affect adoption. The Technology-Environment-Organisation (TOE) framework was considered as the most appropriate theoretical model for the purpose of this research. The framework is an institutional theory that has been widely used for the adoption of complex innovations such as e.g. e-businesses [START_REF] Heart | Explaining adoption of remote hosting: A case Study[END_REF]. It focuses on three main characteristics associated with technology innovation namely technology, organisation and environment. The use of TOE is suitable for technology innovations that are associated with uncertainties regarding the current and future status of that technology innovation [START_REF] Heart | Empirically Testing a Model for The Intention of Firms to Use Remote Application hosting[END_REF].
The Context: SaaS in South Africa
ERP adoption is arguably more of a challenge for organisations in developing countries such as South Africa, given the high cost of capital and the shortage of IT skills. Hence the SaaS model does appear to address many adoption barriers. While a number of articles have been written in South Africa about cloud computing and ERP SaaS, there is limited literature from credible sources about it. Articles on cloud computing are intermittently posted on web sites; in particular, ITweb has published a number of them. A survey carried out in 2009 by Fujitsu Technology Solutions revealed that organisations in South Africa are aware of cloud computing [START_REF] Nthoiwa | Jumping into the cloud[END_REF]. The survey also revealed that in South Africa the value of cloud computing is not clear, and where the value was clear it was hampered by the required networking infrastructure [START_REF] Nthoiwa | Jumping into the cloud[END_REF].
Research Method
The research followed an interpretive philosophy [START_REF] Klein | A Set of Principles for Conducting and Evaluating Interpretive Field Studies in Information Systems[END_REF] to gain a deep understanding of factors affecting ERP SaaS adoption in South Africa. Qualitative primary interview data was collected from individuals with the required knowledge and experience. Purposive sampling was used to select participants from five South African organisations. All participants had experience of a minimum of 10 years in the Information, Communication and Technology (ICT) industry. Ethical clearance for the research was obtained from the university, participants signed to indicate interview consent and their anonymity was assured. Table 1 summarises the participant's details, their experience, type of industry and other relevant information. Semi-structured interviews with open-ended questions were chosen to probe for answers because of their flexibility and ability to obtain rich data from interviews [START_REF] Frankel | Study Design in Qualitative Research -1: Developing Questions and Assessing Resource Needs[END_REF]. Face to face interviews were carried out and tape-recorded. In some instances, follow up questions through telephone were carried out to seek clarity.
The Thomas general inductive approach [START_REF] Thomas | A general inductive approach for qualitative data analysis[END_REF] of thematic data analysis employed allows "research findings to emerge from the frequent, dominant or significant themes inherent in raw data, without the restraint imposed by structured methodologies" [START_REF] Thomas | A general inductive approach for qualitative data analysis[END_REF]. To increase dependability of analysis [START_REF] Anfara | Qualitative Analysis on Stage: Making the Research Process More Public[END_REF] audit trails for three iterations of coding were kept and transcribed notes were sent back to respondents for verification.
Findings and Implications
Based on the findings of the research, a theoretical model (based on the TOE framework) was developed to explain the relationships between established categories and concepts (Figure 1). The underlying construct of the model is that all three factors affects one another, where the decision to adopt technology innovation is concerned. Hence, the outer link has been made to indicate that any of the technology, organisation and environment factors may affect one another.
For each of the three pillars of the model, three key themes emerged from the data analysis, namely, Business Benefits, Organisation Readiness and System Trust. These three main issues within the TOE aspects influence the decision whether to adopt ERP SaaS or not and are now discussed, supported by quotes from the participants and literature evidence.
Business Benefits
Under the technological characteristics of the adoption of ERP SaaS, the perceived business benefits that the clients associate with ERP SaaS have an impact on the overall decision whether to adopt ERP SaaS or not. In regard to business benefits, three main issues emerged from the study.
Cost. The main benefits associated with the adoption of the SaaS model are: low TCO, the subscription pay-as-you go model and ease of implementation [START_REF] Dubey | Delivering Software as a Service[END_REF]. On the other hand ERP systems are associated with high costs of implementation and maintenance [START_REF] Helo | Expectation and reality in ERP implementation: consultant and solution provider perspective[END_REF]. The expectation therefore is that the adoption of the ERP implemented on the SaaS model shall neutralise the huge upfront costs associated with ERP systems, and shall also lower the overall cost of ERP systems.
In agreement with findings by Burrel [START_REF] Burrel | New engagement approach in Europe by Fujitsu Services[END_REF] participants perceived that savings in the required upfront implementation costs are seen as a benefit mostly associated with small businesses. Smaller businesses do not have sufficient resources in terms of the technical manpower and funding to manage installed software and therefore could benefit from SaaS without the need to acquire the personnel with requisite skills. "I personally think it could help smaller companies maybe more" (Participant A). Participant D mentioned that ERP-SaaS offered by one prominent ERP software vendor in South Africa targets smaller operations that want to get some elements of ERP running.
The investment in ERP SaaS needs to be justified like any other investment through a clear decision making tool. The clients are of the opinion that they do not have a tool or a yardstick that can be used as a basis for decision making. This uncertainty makes it difficult for the decision makers such as the participants to determine whether ERP SaaS is beneficial in terms of return on investment. Costs and benefits involved in SaaS have been noted as hard to determine [START_REF]Measuring the return on investment of web application acceleration managed services[END_REF]. Most importantly, the research participants were very critical of the cost reduction benefits of SaaS, especially for certain organisations with their need for rapid customisation. Yet, the fact that in-house ERP systems are seen as expensive by the participants means that there is an opportunity for ERP SaaS to make headway in any type of organisation regardless of size. That said, the issue of SaaS cost reduction can't be looked at in isolation because of the participant's view that ERP systems demand flexibility that in turn impacts the cost.
Customisation. According to Guo [START_REF] Guo | A survey of Software as a Service Delivery Paradigm[END_REF], the perceived lack of customisation of ERP SaaS negatively impacts on the adoption of ERP SaaS. This is not only in terms of possible cost increases but also in terms of the flexibility to meet business requirements. ERP systems are not static as rapid customisations are often needed to meet business requirements [START_REF] Helo | Expectation and reality in ERP implementation: consultant and solution provider perspective[END_REF]. In contrast, ERP SaaS is perceived as rigid. The high level of customisation in a rapidly changing environment impacts negatively on the cost reduction benefit, which in turn negatively affects the perceived benefit of ERP SaaS technology.
Yet, SaaS applications are purported to be loosely coupled with configurable components. Clients are able to customise their applications through on-screen clicks without code modifications [START_REF] Hai | SaaS and Integration best practices[END_REF]. This implies that to meet unique requirements of the customers, there may be a need to increase the configurable limit as far as possible towards the client's unique requirements [START_REF] Guo | A survey of Software as a Service Delivery Paradigm[END_REF].
The view that the respondents hold is that, SaaS is more appropriate for standardised applications that requires little or no customisation. The issue of customisation came out as very important because it is seen as a way to differentiate from the competitors; hence, it was rated highly by the participants.
"Now, the economies of scale and scope, provided the scope is the same, are fantastic in terms of reducing cost. Right? But those services can't give you…., there's no differentiating between businesses. You know, it doesn't give you any competitive edge" (Participant B).
Another issue raised about customisation is that which relates to the presence of organisations in different regions or parts of the world. One participant was concerned that the customisation requirements for different regions could complicate ERP SaaS. On analysis, the issue of customisation is seen as a very important inhibiting factor of adopting ERP SaaS. The clients perceive it as being an issue that makes the SaaS model expensive, especially for ERP systems that need to be agile to facilitate a faster response to the changing environment, as well as for facilitating unique processes that gives a company a competitive edge over other competing firms. However, none of the respondents seemed to be aware of the configurability of the SaaS applications, which to some extent addresses the issue of customisation.
Application Specificity. The main issue about application specificity is the fact that ERP is considered too important to run on a SaaS model. The general feeling amongst the research participants is that, although some parts of ERP could be run as ERP SaaS, the model is more appropriate for applications such as Business Intelligence (BI) and Customer Relationship Management (CRM).
Since ERP is seen as a core application, it is not considered appropriate for the SaaS model, especially in relation to modules that are considered core. "If you look at ERP systems, you can't just leave it out in the Internet, in a cloud somewhere. It needs to be very secure" (Participant B).
On addressing the question about which applications are more suitable for the SaaS Model, Participant D was of the opinion that the CRM is more suitable for
SaaS. "That is very different to the complexity of an integrated ERP environment…the peripheral functionality that needs quick scaling, quick flexibility, you could probably go and say, let's get that out of the cloud" (Participant D).
Two of the organisations that took part in the study are already using cloud computing, for Spam and Virus filtering and the other one for e-mail. This supports the view that at least for now, the SaaS model is seen as appropriate for standardised and peripheral applications.
The important role ERP system plays in organisations today makes it difficult to hand control over to third parties. One can conclude that the extent to which an application is perceived as being core to the business operations has a negative impact on the adoption of SaaS, and hence ERP SaaS.
Organisation Readiness
The extent to which the business is ready to adopt technological innovation covers organisational factors that influence the adoption or non-adoption decision. Two main issues that were found to affect the organisation's readiness to adopt ERP SaaS are competitive advantage and organisation specificity.
Competitive Advantage. The competitive advantage here refers to the actual enterprise functions or the processes, and not the type of application itself. The participants were of the opinion that organisation readiness is influenced by the perceived importance of a process/function. If the process is seen as being something that is so important to the organisation in that it provides a competitive edge over other competitors, then such a process/function may negatively influence the decision to adopt. On the other hand processes/functionalise that are not considered core can be readily accepted for adoption of SaaS. Thus ERP modules that are considered less important or not mission critical to the organisation could be easily adopted by such organisations.
"Services that are differentiating to your business, probably software as a service may not work, or maybe it will work but you won't get the economies of scope because it's your specific processes that you don't want to share with other customers or other competitors" (Participant B).
Core processes and functionalities are currently considered appropriate only for running in-house because of the reluctance to share differentiating functionalities with competing firms. It is not so much about the issue of trust, as is the case with application specificity, but rather about the possibility of losing the unique identity which is so crucial for businesses to compete successfully.

Organisation Specificity. In addition, the participants' view of the size or nature of the organisation influences the adoption of ERP SaaS. The analysis revealed that some of the participants felt that the type of organisation they are in may not be suitable for ERP SaaS adoption. On the other hand, Participant C asserted that the vendors are probably not looking into ERP SaaS for large organisations.
With regard to the nature of the organisation, there are two main issues raised concerning customisation and confidentiality [START_REF] Heart | Empirically Testing a Model for The Intention of Firms to Use Remote Application hosting[END_REF]. As indicated above, organisation with rapid customisation requirements are seen as not being appropriate for ERP SaaS adoption. On the other hand the literature points to certain types of organisations such as those with requirements for confidentiality not readily accepting the adoption of ERP SaaS [START_REF] Mangiuc | Software: From product to service the evolution of a model[END_REF]. This was confirmed by participants:
"How are you going to ensure confidentiality of client's information? We deal with very sensitive information about client's information you know. Medical condition and research issues. How are you just going to leave that in the cloud somewhere?" (Participant A).
Adoption of ERP SaaS amongst big enterprises is impeded by the view that ERP SaaS is appropriate for small operations rather than large corporate. The issue about the need for rapid customisation for enterprises that requires agility and flexibility is also seen as being inappropriate for those organisations. In addition, the nature of organisation with high requirements for confidentiality may negatively impact the decision to adopt ERP SaaS.
System Trust
System Trust in the context of this paper is the extent to which the organisation expects to benefit from the adoption of ERP SaaS within acceptable levels of security and guarantees in terms of structural assurances and third party guarantees. The structural assurance refers to internet reliability and bandwidth issues. Third party guarantees refer mainly to the assurances that the risk of adopting ERP SaaS shall be mitigated and managed by the vendor. There are three main issues that emerged under the system trust theme: network limitations, security and knowledge.
Network limitations
The participants felt that the reliability and cost of bandwidth negatively affect the adoption of ERP SaaS in South Africa. The issue of bandwidth applies not only to the South African environment but to Africa as a whole. SaaS requires a stable and reliable internet connection to access web-based services [START_REF] Kaplan | SaaS: Friend or Foe?[END_REF]. Since there are still challenges around broadband access and cost in South Africa and Africa [START_REF] Bernabé | Broadband Quality Score III -A global study of broadband quality[END_REF], [START_REF] Lopez | Foreclosing competition through access charges and price discrimination[END_REF], the adoption of ERP SaaS is likely to be slower than in developed countries in America and Europe.
Security and confidentiality. Security and confidentiality strongly influences the trust that the organisation is willing to place on the system [START_REF]ISACA: Cloud Computing: Business Benefits with Security, governance and assurance perspectives[END_REF]. Security was raised as a concern by almost all participants. "Where's the security? How are you going to guarantee security?" (Participant B).
The participants suggested that ERP SaaS vendors have a vested interest in ensuring the security and confidentiality of client's information. But as Participant A mentioned: "What happens when the vendor goes out of business?"
Moreover, since none of the participant organisations uses ERP SaaS, the concern about security seems to have been caused by uncertainty about the way in which ERP SaaS operates. This lack of understanding or knowledge about aspects of SaaS could impact the level of security trust that users are willing to place in ERP SaaS. In addition, since the vendors have a vested interest in security, as it impacts their integrity and reputation, they would probably have stronger security controls in place than many clients [START_REF] St | Software-as-a-Service (SaaS). Put the Focus on the KM/Knowledge Services Core Function[END_REF].
While maintenance and upgrades were seen by some participants as a benefit of adopting ERP SaaS, others are of the view that they are problematic for the organisation in question. Xin & Levina [START_REF] Xin | Software-as-a-Service Model: Elaborating Client-Side Adoption Factors[END_REF] have noted that vendors drive future developments of the ERP SaaS system, which leaves users with little control over their upgrades and therefore heavily reliant on the vendors. This was confirmed by participants: "You lose the ownership and lose the control that you have over the system and you are more manipulated or forced into the direction by the system as when you actually hosting it yourself" (Participant E).
Information and Knowledge. Although the participants seemed to understand what SaaS is and how it works, there is an apparent lack of information about the specifics of ERP SaaS. Information about ERP SaaS may also impact on the organisation readiness, since decision making becomes difficult if there is not enough knowledge about the issue at hand.
In addition, there are still "grey areas" in terms of ERP SaaS functionality such as how security works. Participant C summarised it nicely: "The issue with culture, obviously, needs to be addressed on two levels because there needs to be a clear understanding by everybody in the business, if we were to propose this change, on exactly what Software as a Service entails in the modern business environment today, because I doubt that there is a clear understanding of how it works."
Summary of Findings
Table 2 below highlights the factors that affect the adoption of ERP SaaS as well as their impact on the decision to adopt. The environment factors emerged as key in impacting the decision to adopt ERP SaaS in South Africa. Network limitations, security and confidentiality, and the extent of information available to organisations about ERP SaaS affect the level of trust that clients can place in the system itself, and this level of trust in turn affects the decision whether or not to adopt ERP SaaS. Second to the environment issues are the technology factors concerning cost, customisation and application specificity. The current cost of ERP implementation and maintenance positively affects the adoption of ERP SaaS. Yet big organisations do not perceive the SaaS model as being appropriate, especially with respect to their customisation requirements for delivering differentiating products or services. ERP systems are seen as core applications that are not appropriate for SaaS delivery. On the other hand, the peripheral modules of ERP seem to be favoured by clients for adoption as SaaS.
Surprisingly, the participants did not mention integration as a factor affecting the adoption of ERP SaaS. This is despite integration being one of the most complex and expensive components of a SaaS implementation, especially in the enterprise space [START_REF] Hai | SaaS and Integration best practices[END_REF], and despite several studies raising integration as one of the factors where SaaS needs to improve [START_REF] Guo | A survey of Software as a Service Delivery Paradigm[END_REF].
The SaaS model is an emerging business model that completely changes the way software is being delivered to participants [START_REF] Hofmann | ERP is Dead, Long Live ERP[END_REF]. Where ERP SaaS is adopted as opposed to in-house ERP the technology support landscape changes tremendously. This is not restricted to the ICT departments and includes the whole organisation, in terms of culture and required structural readjustments [START_REF] Hofmann | ERP is Dead, Long Live ERP[END_REF]. This implies that ERP SaaS potential clients should consider the likely impact of ERP SaaS adoption on the organisational strategy, Information Technology strategy and governance processes.
The issue of change management and culture was raised by one participant in this research. Another issue that has not been addressed much by empirical research literature is the potential gains in terms of environmental impact of SaaS adoption.
Conclusion
The objective of this research was to determine factors that affect the adoption of ERP SaaS in South Africa. Using the TOE framework as a theoretical lens, factors affecting the adoption of ERP SaaS were determined from qualitative data collection and analysis.
In particular, factors relating to the environment emerged as key in deciding on the adoption or non-adoption of ERP SaaS in South Africa, with issues around network limitations and security concerns strongly impacting potential adoption. However, as internet technology improves and the cost of bandwidth begins to drop in Africa and South Africa, the network issues should become less of a concern.
In regard to issues around the technology, the main factor inhibiting the adoption of ERP SaaS is a concern around customisation and possible costs associated with it. The SaaS model is perceived to be rigid and not allowing for the flexibility required for systems such as ERP.
Seemingly the evolution has begun, with clients starting to adopt peripheral applications through cloud computing. As one participant put it, a transition will occur. "So that's why I'm saying, very excited, think that it's an opportunity and a possibility, but it's going to have to be planned properly and the migration is probably going to be over time. So, what I see here is that there's a transition that will take place from where we are now, to eventually procuring all the services in the cloud."
In general there is a lack of empirical research about Software as a Service (SaaS) in South Africa; this paper has made a first attempt. Research on the same subject focusing on a specific industry or type of organisation could also be useful. For instance, such research could focus on the factors affecting the adoption of ERP SaaS in Small and Medium Enterprises in South Africa. The results of such research could be more consistent and concrete, without generalising across participants operating in different industries.
Fig. 1. ERP SaaS adoption model.
Table 1. Research Participants

Interviewee Position                     ICT Experience in IT (Years)   Industry Sector              Size (# employees)   Code
IT Director                              10                             Health - Clinical Research   750                  Participant A
CIO                                      15                             Mining                       50,000               Participant B
CIO                                      20                             Energy - Petroleum           3,500                Participant C
Business Development Manager (Retail)    26                             Retail                       49,000               Participant D
Deputy Director, IT                      25                             Education - University       900                  Participant E
Table 2. Summary of Findings for ERP SaaS Adoption
(TOE dimension / factor / impact on the decision to adopt - positive or negative)

Technology (main theme Business Benefit: perceived business benefits influence the decision to adopt)
- Cost: negative for large companies due to customisation; positive for small companies; negative for ERP core applications
- Customisation: negative for rapidly changing enterprises
- Application Specificity: negative for ERP core applications

Organisation (main theme Organisation Readiness: the organisation's preparedness to adopt ERP)
- Competitive Advantage: negative for processes seen as offering competitive advantages
- Organisation Specificity: negative for large enterprises

Environment (main theme System Trust: the level of trust influences the decision to adopt)
- Network Limitation: negative for immature networks
- Information and Knowledge: negative for system trust
- Security: negative and indifferent
- Regulation: indifferent (not included in the theoretical model)

| 43,016 | [
"1003468",
"1003469"
] | [
"303907",
"303907",
"487694"
] |
01483866 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01483866/file/978-3-642-28827-2_13_Chapter.pdf | Christian Leyh
Susanne Strahringer
email: susanne.strahringer@tu-dresden.deaxel.winkelmann@ercis.uni-muenster.de
Axel Winkelmann
Towards Diversity in ERP Education -The Example of an ERP Curriculum
Keywords: ERP systems, curriculum, course descriptions, teaching, diversity, problem-oriented learning, education
The need to provide ERP knowledge by teaching the concepts of ERP systems in university courses and, above all, the possibilities of using these systems themselves in courses are frequently discussed in the literature. There are many ERP systems with different technologies and philosophies available on the market. Here, universities face the challenge of choosing the "right" number of ERP systems, deciding how to include them in the curriculum, and determining to what extent and depth each of the systems should be taught. Within this paper, as a curriculum example, we describe the ERP curriculum at the Dresden University of Technology / Technische Universität Dresden, its different ERP courses, and how the ERP systems are provided and taught.
Introduction
Today, standardized enterprise resource planning (ERP) systems are being used in a majority of enterprises. For example, more than 92 percent of all German industrial enterprises use ERP systems [START_REF] Business | Konradin ERP-Studie 2009: Einsatz von ERP-Lösungen in der Industrie. Konradin Mediengruppe[END_REF]. Due to this strong demand, there are many ERP systems with different technologies and philosophies available on the market [START_REF] Winkelmann | Dynamic Reconfiguration of ERP Systems -Design of Information Systems and Information Models -Post-Doctoral Thesis[END_REF]. Therefore, the ERP market is strongly fragmented, especially when focusing on systems targeting small and medium-sized enterprises (SMEs) [START_REF] Winkelmann | Experiences while selecting, adapting and implementing ERP systems in SMEs: a case study[END_REF]. The growing multitude of software manufacturers and systems is making it more and more difficult for enterprises that use or want to use ERP systems to find the "right" software and then to hire the appropriate specialists for the selected system. Also, for future investment decisions concerning the adoption, upgrade, or alteration of ERP systems, it is important to possess the appropriate specialized knowledge and skills in the enterprise [START_REF] Winkelmann | Dynamic Reconfiguration of ERP Systems -Design of Information Systems and Information Models -Post-Doctoral Thesis[END_REF], [START_REF] Winkelmann | Teaching medium sized ERP systems -a problem-based learning approach[END_REF]. This is essential since errors during the selection, implementation, or maintenance of ERP systems can cause financial disadvantages or disasters, leading to insolvencies of the affected enterprises (e.g., [START_REF] Barker | ERP Implementation Failure: a case study[END_REF], [START_REF] Hsu | Avoiding ERP Pitfalls[END_REF]). In order to prevent this from happening, it is necessary to strive for a sound education on ERP systems. This places the responsibility of transferring the specialized knowledge to their students and graduates on universities, in particular on university courses in the field of information systems [START_REF] Venkatesh | One-Size-Does-Not-Fit-All: Teaching MBA students different ERP implementation strategies[END_REF].
Because of the increasing importance of ERP systems and their educational value, many universities use or want to apply ERP systems in university courses [START_REF] Seethamraju | Enterprise systems software in business school curriculum -Evaluation of design and delivery[END_REF] to teach and demonstrate different concepts and processes [START_REF] Magal | Essentials of Business Processes and Information Systems[END_REF]. To support these courses, some ERP manufacturers co-operate closely with universities and offer their systems and resources for academic teaching [START_REF] Leyh | Exploring the diversity of ERP systems -An empirical insight into system usage in academia[END_REF]. One of the goals of using ERP systems in courses is to prepare students for their career by giving them at least an introduction to ERP systems. A further goal, promoted by ERP manufacturers themselves (especially by making their systems available for university courses), is for students to learn about the products as early as possible since they, later as graduates, will work with these systems or will hold enterprise positions that influence ERP investment decisions. Therefore, it is necessary for universities to offer the appropriate systems, processes, and suitable courses for their students [START_REF] Brehm | Using FERP Systems to introduce web service-based ERP Systems in higher education[END_REF], [START_REF] Fedorowicz | Twelve tips for successfully integrating enterprise systems across the curriculum[END_REF], [START_REF] Winkelmann | Teaching ERP systems: A multi-perspective view on the ERP system market[END_REF].
The need for providing this knowledge through university courses and, above all, the possibilities of using these actual systems in courses are frequently discussed in literature (e.g., [START_REF] Fedorowicz | Twelve tips for successfully integrating enterprise systems across the curriculum[END_REF], [START_REF] Antonucci | Enterprise systems education: Where are we? Where are we going?[END_REF], [START_REF] Boyle | Skill requirements of ERP graduates[END_REF], [START_REF] Hawking | Second wave ERP education[END_REF], [START_REF] Peslak | A twelve-step, multiple course approach to teaching enterprise resource planning[END_REF], [START_REF] Stewart | Collaborative ERP curriculum developing using industry process models[END_REF]). These discussions clearly point out that ERP systems are or should be an important component of university curricula in information system-related subjects and courses. However, this is not a trivial task, as Noguera and Watson [START_REF] Noguera | Effectiveness of using an enterprise system to teach processcentered concepts in business education[END_REF] discuss. Because there is no standardized approach, the choice of systems and their number, as well as the structure and number of ERP courses, differ from university to university [START_REF] Seethamraju | Enterprise systems software in business school curriculum -Evaluation of design and delivery[END_REF]. For example, for teaching the respective systems, the lecturer has to be familiar with the system's concepts and its practical usage. Thus, the choice of one or more ERP system for a course strongly depends on the knowledge and experience of the lecturers. Additionally, the variety of ERP systems used in courses is limited by the manufacturers' willingness to provide their systems. This results in a situation in which only a small variety of systems and software manufacturers are represented at universities in spite of the heterogeneous ERP market.
In particular, the software manufacturer SAP is represented in numerous universities through its University Alliance program. With more than 400 partner universities participating in this program, SAP is probably the most widely used system in study courses worldwide [START_REF] Hawking | Second wave ERP education[END_REF], [START_REF] Pellerin | Proposing a new framework and an innovative approach to teaching reengineering and ERP implementation concepts[END_REF]. Smaller systems are rarely used in teaching; yet, a more diversified integration of ERP systems into education is advisable, especially from the viewpoint of SMEs [START_REF] Winkelmann | Teaching ERP systems: A multi-perspective view on the ERP system market[END_REF], [START_REF] Leyh | From teaching large-scale ERP systems to additionally teaching medium-sized systems[END_REF]. In addition, by including more than one ERP system in the curriculum, the students gain a broader overview of different ERP systems, the ERP market itself, and different ERP concepts and architectures. Furthermore, by teaching a diverse set of ERP systems, the students' awareness of functional approaches, process support, and interface ergonomics will increase. Additionally, by including ERP systems for smaller companies, the differences between SMEs and large-scale companies [START_REF] Welsh | A small business is not a little big business[END_REF] will be illustrated to students because they are reflected in the appropriate design of the respective systems [START_REF] Winkelmann | Experiences while selecting, adapting and implementing ERP systems in SMEs: a case study[END_REF]. All these are reasons to strive for more diversity in ERP education. However, at this point it is difficult to decide how many systems should be part of the curriculum and to what extent they should be taught. Of course, ERP systems and their concepts can also be described theoretically without direct system access. However, the learning experience and understanding are much better promoted through the use of real systems [START_REF] Watson | Using ERP systems in education[END_REF]. Yet, choosing the "right" number of ERP systems is difficult since too many systems can lead to student confusion or misunderstandings. Also, in-depth insights in selected systems are mandatory to ensure a deeper understanding of the concepts and constructs. Hence, it is not advisable to provide deep insights into too many ERP systems. Again, this would lead to student confusion or misunderstandings. Here, the universities face the challenge of how to include a couple of ERP systems in the curriculum and to what extent / how deep each of the systems should be taught.
The Dresden University of Technology / Technische Universität Dresden (TUD), especially the Chair of Information Systems (IS in Manufacturing and Commerce), has gained much experience with including a diverse set of ERP systems in its curriculum for students in Information Systems. Within this paper, we will describe the different courses and how the ERP systems are provided and taught. Therefore, the paper is structured as follows. Following this introduction, we provide the general design of the curriculum. The different IS programs are briefly described and a state of the art of teaching ERP systems at German universities is given. Afterwards, the respective ERP courses (lectures, projects, and seminars) are described in detail before an evaluation of the courses is given. Finally, we address limitations and summarize the overall curriculum design and major aspects.
Background and General Design of the Curriculum
Teaching ERP Systems at German Universities
For gaining an insight into the current ERP teaching at German universities, we conducted a survey among 92 university chairs [START_REF] Leyh | Exploring the diversity of ERP systems -An empirical insight into system usage in academia[END_REF]. Among those 92 respondents, 59 are teaching ERP topics. Our investigation resulted in a large variety of teaching methods, which are used to familiarize students with ERP knowledge and skills. The question on the employed teaching methods was mostly answered with "Lectures". Eighty-five percent of all the participants who are involved in ERP topics use ERP at least in their lectures. Practical exercises and case studies were mentioned by 36 and 29 participants (cp. Table 1). Therefore, lectures and practical exercises can be seen as the typical methods employed, whereas the other methods mentioned allow for a deeper learning experience. For example, case studies help students to not only understand enhanced ERP system functionality but also to strengthen their individual soft skills like problem-solving or teamwork. However, despite this variety of teaching methods, out of these 59 participants only 38 (64%) are using ERP systems practically (e.g., in computer lab exercises, projects, independent teaching formats, etc.). A majority of the participants who are teaching ERP systems practically are using SAP (35 out of 38; 92%) [START_REF] Leyh | Exploring the diversity of ERP systems -An empirical insight into system usage in academia[END_REF]. Other ERP systems used are Microsoft Dynamics NAV and AX (39%), Semiramis (10%), and ProAlpha (10%). Mostly, more than one ERP system is used. Thus, many participants who use ERP systems in teaching employ different systems. This fact supports the demand mentioned in our introduction [START_REF] Leyh | Exploring the diversity of ERP systems -An empirical insight into system usage in academia[END_REF].
Aim of the ERP Curriculum at the Technische Universität Dresden
The role of knowledge as a strategic resource is well understood in the business world. However, the question of how to teach it and make the best use of it still remains insufficiently answered [START_REF] Neumann | Functional Concept for a Web-Based Knowledge Impact and IC Reporting Portal[END_REF]. For universities, there is always a fine line between academic "truth" (in terms of general, but often too broad, domain knowledge and theories) and practical skills (which may be outdated within a few months) [START_REF] Winkelmann | Teaching medium sized ERP systems -a problem-based learning approach[END_REF]. Therefore, our ERP curriculum is based on the idea of providing bachelor students, and later on master students, with theoretical knowledge about ERP systems and their concepts as well as practical abilities in using these systems.
In general, people pass through several competence stages when they acquire knowledge. For example, in the Dreyfus model of skill acquisition, five stages of knowledge acquisition from novice to expert are distinguished [START_REF] Dreyfus | A five-stage model of the mental activities involved in directed skill acquisition[END_REF]. Researchers agree that reflective practice is necessary in order to go through different stages of learning. It involves considering personal experiences in applying gained knowledge to practice while being coached by professional tutors [START_REF] Schoen | The Reflective Practitioner: How professionals think in action[END_REF]. According to the stages of maturity on the competence ladder [START_REF] North | The Benefits of Knowledge Management -Results of the German Award 'Knowledge Manager 2002[END_REF], students have to go through different levels of competence acquisition in order to achieve sufficient knowledge and hence competency within their focused field of study. The competence ladder of North and Hornung [START_REF] North | The Benefits of Knowledge Management -Results of the German Award 'Knowledge Manager 2002[END_REF] ranges from incoherent symbols and data without meaning, via information, to knowledge that emerges in combination with certain experiences and in a specific context. Furthermore, the actual application of knowledge as know-how and its critical reflection, in terms of gaining competency, allows for the final goal of individually applying different methods, instruments, and experiences in a unique and hence competitive way (cp. Figure 1). This complements Kolb's experiential learning theory model that outlines four approaches towards grasping experience, namely abstract conceptualization, reflective observation, active experimentation, and concrete experience [START_REF] Kolb | Experiential learning: Experience as the source of learning and development[END_REF]. Competence Level 1 consists of theoretical lectures on IS development and the usage of managerial IS, especially ERP systems. The aim of Competence Level 2 is the application of the theoretical background based on SAP ERP 6.0 exercises, business processes, and customizing as well as on an SAP simulation game. Competence Level 3 seeks to transfer knowledge of various systems in depth (e.g., implementation of Microsoft Dynamics) and breadth (e.g., overview case studies on various medium-sized ERP systems).
Thus, in order to allow for a broad education combined with several deep insights, we decided on setting up an ERP curriculum with lectures, seminars, and projects in the Bachelor IS program and in the Master IS program at TUD.
The curriculum consists of several lectures with a focus on ERP topics in the third and fourth semester, which are mandatory parts of the bachelor program. In combination with the lecture of the third semester, a mandatory hands-on exercise (SAP Exercise) is provided, as well as an optional project seminar (MS Dynamics Project) in the second half of the third semester. Following the lectures of the fourth semester, another hands-on exercise (SAP Customizing) is given in semester five. Within the master program, two practical courses are included in the curriculum; both take place in the first semester of the master program. The first course is a project seminar (ERP Systems in Commerce) that lasts the whole semester. The second course is an SAP-based simulation game (ERP Simulation Game), which takes place in the second half of the first semester.
Before the different courses and lectures are described in detail, a short overview of the mentioned bachelor and master programs is given in the following sections.
Bachelor IS Program at the Technische Universität Dresden
The Bachelor program in Information Systems at Dresden University of Technology / Technische Universität Dresden (TUD) is a six-semester program comprising 180 ECTS that lead to a "Bachelor of Science" degree. It is one of four bachelor programs the TUD faculty of business & economics offers. The IS program combines classes in Information Systems, Informatics, Mathematics, Business Administration, and Economics. Each area has to be attended and offers different types of classes such as lectures, exercises, and seminars. In addition, there is a module specifically intended for acquiring soft skills through a mentoring program and two projects. In the fourth and fifth semester, students have to select two specialization modules (minor) from offerings in business and economics. However, there are electives within many of the other mandatory modules as well, where students choose from a well-defined catalog of offerings. The program ends with the writing of a research-oriented bachelor thesis.
The ERP curriculum is integrated into two of the four information systems modules (IS track) that build on an introductory IS class in semester 1, which is an integral part of the four bachelor programs the faculty offers. The logic of the IS track is to reverse the order of a system lifecycle by starting with the intended outcome of information systems and then proceeding backwards step-by-step to what is required in order to achieve this outcome. Thus, there are three modules reversing the lifecycle logic that structure the IS track as follows:
• Module 1 in semester 2: Adding value through information systems (6 credits, 2 lectures, 1 project)
• Module 2 in semester 3: Using information systems (9 credits, 1 lecture, 2 hands-on exercises, 1 project)
• Module 3 in semesters 4 & 5: Providing information systems (12 credits, 2 lectures, 1 hands-on exercise, 2 projects)
and one additional module with IS electives, that is
• Module 4 in semesters 5 & 6: Selected topics in information systems (6 credits, 2 courses from a catalog of offerings).
The ERP curriculum is mainly included in modules 2 and 3 and encompasses one of the hands-on exercises (SAP Exercise) and the project (MS Dynamics Project) in module 2 and parts of the lectures and one of the exercises (SAP Customizing) in module 3.
Master IS Program at the Technische Universität Dresden
The Master IS program is one of five master programs the TUD Faculty of Business & Economics offers. It provides students with the ability to identify and frame problems related to using and providing information systems and information technology in private and public companies and organizations, to analyze these problems by applying scientific methods, and to work out feasible solutions. The integration of Management and Informatics in the program allows students to identify, illustrate, and consider interdisciplinary coherences.
During their studies, students have to attend courses in Research Methodology and Additional Qualifications, an internship, and a research seminar. Students specialize in different fields by choosing six specializations: two from the field of information systems, two from informatics, and two from any specialization the school offers (including IS). Thus, students can either go for a broader range of areas by choosing only two IS specializations or concentrate heavily on information systems by choosing four IS specializations. Each specialization consists of two modules with 6 and 9 credits. The IS specializations are:
• Application Systems in Business and Administration
• Business Intelligence
• Information Management
• Systems Development
The ERP curriculum is mainly included in the first specialization which is the one chosen by most of the IS students. Within this, "ERP Simulation Game" and "ERP Systems in Commerce" take place.
The study program ends with the writing of the research-oriented master thesis.
Description of the ERP Curriculum's Elements
In this section we will describe the respective ERP courses. We will focus on the practical parts of the ERP curriculum -the hands-on exercises, the project seminars, and the ERP simulation game. Additionally, we will refer each course to the competence ladder and its different competence levels (cp. Figure 1). We will not describe the lectures, since within the lectures "common" ERP topics (e.g., packaged vs. home-grown applications, ERP architecture, ERP selection and implementation, license models, etc.) are taught.
SAP Exercise -Competence Level 2 (Bachelor IS Program)
SAP Exercise is a hands-on exercise. It takes place within the third semester of the Bachelor IS program and is a mandatory course.
For this exercise, we chose SAP as one of the big players because our university was already a member of the SAP University Alliance Program. The access to the system (SAP ECC 6.0) was easy to establish. Also, detailed instruction materials and click-by-click process descriptions needed for the course were provided through SAP's University Competence Centers (UCC). Furthermore, the lecturers were already familiar with this kind of course and with the corresponding system.
Since the students had learned some basic theoretical knowledge about ERP systems within the lectures, SAP Exercise gave them a first practical hands-on understanding of one ERP system. The scenario for SAP Exercise is part of the provided materials and was therefore predetermined by the click-by-click instructions provided by the UCC. In order to gain access to the detailed instructions, one has to be a member of the SAP University Alliance Program. Table 2 shows a short compendium of the scenario.
The three parts (production, controlling, and retail) of this course were performed in three separate sessions, which took place in the computer lab of the university. For every session a time slot of three hours was scheduled. This was enough time to let the students perform the processes at their own pace. Before the beginning of each session a lecturer gave a short overview of the process which should be performed during the session. Additionally, one or two lecturers (depending on the number of students) stayed in the lab during the whole session to provide helpful advice or solve problems if the students had done a wrong click, forgot to enter some data, etc. Furthermore, before the three sessions took place a "navigation-session" was offered to the students during which they could learn how to work with an SAP system. This was optional, but the majority of the students who did not have any experience with SAP participated. For example, students who had worked with SAP during internships did not attend this "pre-session". A more detailed description of the course and the approach of setting up this course can be found in [START_REF] Leyh | From teaching large-scale ERP systems to additionally teaching medium-sized systems[END_REF].
MS Dynamics Project -Competence Level 3 (Bachelor IS Program)
MS Dynamics Project is an optional project seminar in the third semester of the Bachelor IS program. Students who attended SAP Exercise can take on this project seminar to deepen the ERP knowledge they have gained. For this seminar, we chose Microsoft as a manufacturer of ERP systems for small and medium-sized enterprises. Even though Microsoft also has a university program -the so-called Microsoft Business Solutions Academic Alliance (MBSAA) -detailed instruction materials are not provided. Instead, access to the ERP systems of Microsoft (Microsoft Dynamics NAV and Microsoft Dynamics AX) as well as to other Microsoft Business Solutions is provided without fees. Therefore, in this course the participating students had to evaluate the functionalities of Microsoft Dynamics NAV 5.0 (winter semester 2009 / 2010). When repeating this course in winter semester 2010 / 2011, we changed the system to Microsoft Dynamics AX 2009. From a learner perspective, there is no need to change the system within this course. However, from a lecturer's perspective this allows for gaining more experience with different systems (see [START_REF] Leyh | From teaching large-scale ERP systems to additionally teaching medium-sized systems[END_REF] for a more detailed argumentation of why we chose to change the system).
For MS Dynamics Project, the chosen scenario contains a generic retail process that has to be examined by the students. Additionally, a generic production process that contains the assembly of a product consisting of individual parts was added to the scenario. Table 3 gives an overview of the scenario. The complete scenario for MS Dynamics Project in English and German can be requested from the first author.
Generic production process:
• Create a stock of materials
• Create bill of materials
• Generate a production order
• Assembly of the individual parts
• Assembly of the whole product
Generic retail process:
• Create master data for customers and vendors
• Enter a framework contract (1,000 motorcycle helmets for 299 Euro each)
• Normal purchase price 349 Euro each
• Order 150 motorcycle helmets for next month for 299 Euro each
• Supplier sends a delivery notification
• Supplier delivers 150 motorcycle helmets that have to be checked and stored
• A customer asks for an offer for 10 motorcycle helmets
• The customer orders 8 motorcycle helmets relating to the initial offer
• Take order amount from the warehouse and ship to the customer
By providing the scenario, the students should be able to identify the processes that have to be performed within the ERP system and therefore to define the necessary work packages. This enables them to organize their teams for themselves. Furthermore, additional literature is helpful to compensate possible gaps in the students' knowledge (e.g., for retail literature, [START_REF] Becker | Retail information systems based on SAP products[END_REF], [START_REF] Mason | Retailing[END_REF], [START_REF] Sternquist | International Retailing[END_REF]). During a kick-off meeting we described the organizational basics and general conditions of the course as well as the idea of the scenario and the tasks that had to be fulfilled.
During the course, participants worked independently in small groups on the given processes with their ERP system. The students were given two months (until the beginning of the fourth semester) to evaluate the ERP system and to write the required documentation. There was no training for the students by the ERP manufacturers. Once they had access to the system, the students independently performed the initial skill adaptation training. The mentoring of the lecturers was only required for individual group meetings, during which the teams could ask questions concerning technical aspects or problems with regard to the content of the scenario. Students had to evaluate the ERP systems based on the requirements of the scenario. They had to enter all necessary data in order to properly document the functionality later and had to reproduce the processes based on the functionalities of their ERP systems. If some aspects of the scenario were not supported by the system because of missing functions, students mentioned this in their written documentation. Access to the systems was granted during the kick-off meeting. There, the students had to install the system on their own laptops (Microsoft Dynamics NAV 5.0). For Microsoft Dynamics AX 2009, we provided access to the system through the computer labs with an installation on faculty servers. Afterwards, the students had to organize themselves to fulfill the tasks. A more detailed description of the course and the approach of setting up this course can be found in [START_REF] Leyh | From teaching large-scale ERP systems to additionally teaching medium-sized systems[END_REF].
SAP Customizing -Competence Level 2 (Bachelor IS Program)
SAP Customizing is a hands-on exercise, too. It takes place during the second half of the fifth semester of the Bachelor IS program and is a mandatory course.
For this exercise, we chose SAP, too, since all of the students in this semester as well as the lecturers were familiar with this system.
Since the students had learned some basic theoretical knowledge about tailoring ERP systems within the lectures of the fourth semester, this course gives them a hands-on understanding of how to configure / customize a specific ERP system. The scenario for SAP Customizing is, again, part of the materials provided by the UCC and was therefore predetermined, because we referred mainly to the provided click-by-click instructions.
In contrast to SAP Exercise, this course also consists of some theoretical parts. In general, three different components are part of this course. The scenario deals with a smaller company that is inherited from a large multi-national enterprise. Therefore, the first part of the course is to understand the different organizational structures of the two companies. Additionally, the students have to identify the different organizational units of the SAP system that refer to the companies' structures. Afterwards, the students have to perform the configuration of the companies within the SAP system practically by using a click-by-click instruction. The third part of this course sums up both previous parts. Here, the students get another (shorter) scenario similar to the previous scenario. Again, they have to identify the necessary organizational units and perform the configuration within the SAP system. However, for this part, we do not provide any detailed instruction materials. The students have to "use" the knowledge learned during the other parts and have to work on their own. The parts of this course were performed in three separate sessions (3 hours each), which took place in the computer lab of the university. During the first session, a strong interaction between the students and the lecturers is necessary to ensure that the students understand the scenario correctly and that they identify the "right" organizational units. Students complete parts two and three at their own pace. Additionally, one or two lecturers (depending on the number of students) stayed in the lab during the whole session to provide helpful advice or solve problems if the students had done a wrong click, forgotten to enter some data, etc.
ERP Simulation Game -Competence Level 2 (Master IS Program)
This course takes place in the first semester of the master program. As the course title indicates, this course is based on a simulation game, originally designed by academics of the HEC Montreal. Contrary to other simulation games, this game includes SAP as ERP system to handle the business activities and processes [START_REF] Seethamraju | Enhancing Student Learning of Enterprise Integration and Business Process Orientation through an ERP Business Simulation Game[END_REF], [START_REF] Leger | Using a Simulation Game Approach to Teach Enterprise Resource Planning Concepts[END_REF].
Using a continuous-time simulation, students have to run their business with a real-life ERP system (SAP ECC 6.0). The students are divided into teams of 3 to 4 members. They have to operate a firm in a make-to-stock manufacturing supply chain (manufacturing game) and must interact with suppliers and customers by sending and receiving orders, delivering their products, and completing the whole cash-to-cash cycle [35]. Additionally, for German universities, a "distribution game" was developed that incorporates reduced functionality of the make-to-stock manufacturing game. We use this "light" version of the game for introductory purposes. The students have to play this game before playing the extended version.
A simulation program was developed to automate some of the processes to reduce the game's complexity and the effort required of the students. For example, the sales process, parts of the production process, and parts of the procurement process are automated. Automation is especially applied to steps where no business decisions are involved. Using standard and customized reports in SAP, students must analyze their transactional data to make business decisions and ensure the profitability of their operations [START_REF] Leger | Using a Simulation Game Approach to Teach Enterprise Resource Planning Concepts[END_REF], [35].
To get access to the game, the university has to be a member of the UCC program of SAP. Then again, detailed instruction materials as well as training for the lecturers are provided.
In winter semester 2010 / 2011 we included this game in the curriculum for the first time. The students had to play the game three times. For every session a time slot of three hours was scheduled. Within the first session, the distribution game was played to allow the students to "get in touch" with the game. Afterwards, the students continued to play the manufacturing game with reduced complexity and functionality within the second session, before playing the manufacturing game with full functionality in the third session. Before the games started, a lecture was given as introduction to the respective game and to its processes in order to ensure an overview for the students. After the introduction one or two lecturers (depending on the number of students) stayed in the lab during the whole session to provide helpful advice or solve problems.
A more detailed description of the course and the approach of setting up this course can be found in [START_REF] Seethamraju | Enhancing Student Learning of Enterprise Integration and Business Process Orientation through an ERP Business Simulation Game[END_REF], [START_REF] Leger | Using a Simulation Game Approach to Teach Enterprise Resource Planning Concepts[END_REF], [35].
ERP Systems in Commerce -Competence Level 3 (Master IS Program)
This course is based on a project seminar described by Winkelmann and Matzner [START_REF] Winkelmann | Teaching medium sized ERP systems -a problem-based learning approach[END_REF] and its enhancement in Winkelmann and Leyh [START_REF] Winkelmann | Teaching ERP systems: A multi-perspective view on the ERP system market[END_REF]. It takes place in the first semester of the master program. Similar to MS Dynamics Project, a problem-oriented, learnercentered approach [START_REF] Stewart | Collaborative ERP curriculum developing using industry process models[END_REF], [START_REF] Saulnier | From teaching to learning: Learner-centered teaching and assessment in information systems education[END_REF] is used. With case studies, the students train themselves independently in small groups to use different ERP systems and present their findings and experiences through live demos of the respective system, for example. The seminar participants can increase their knowledge through investigating different ERP systems. We have enhanced the original concept [START_REF] Winkelmann | Teaching medium sized ERP systems -a problem-based learning approach[END_REF] and simultaneously applied their model to three different universities [START_REF] Winkelmann | Teaching ERP systems: A multi-perspective view on the ERP system market[END_REF].
We chose a scenario of existing processes from enterprises that served as a starting point for the students' evaluation. At the end of the semester, after the analysis of the respective systems, students had to present their results. We divided the students into groups of 2-6 each. Every group had to fully explore one ERP system and take a look at the other systems in order to derive questions for the final presentations. Each team had to present its ERP system in a similar, structured way, but the detailed focus and design of the presentation was incumbent upon each team.
This project has been part of the curriculum for three years. It is taking place each winter semester at the universities of Dresden, Koblenz-Landau, and Muenster. During a kick-off meeting, at the beginning of the seminar, we described the organizational basics and general conditions of the seminar as well as the idea of the scenario and the tasks that had to be fulfilled. In analogy to MS Dynamics Project, during the seminar, participants worked independently in small groups on the given processes with their respective ERP system. At the end of the semester, they provided a written evaluation of "their" ERP system. Additionally, a presentation of the systems had to be done. Presentations in a two-day block meeting were considered practical, since the ERP systems and their functionality were presented in a condensed way during a short period of time and allowed immediate comparison of the different systems. Therefore, each team had 60 minutes for the presentation and an additional 30 minutes for a discussion and questions. For the presentations, the participation of all students was mandatory. This guaranteed that learning outcomes did not remain limited to one system but were extended to the other ERP systems.
Again, there was no training for the students by the ERP manufacturers. The students independently performed initial skill adaptation training after they got access to the systems. Contact with the manufacturers was only necessary if a technical problem evolved and prevented further processing of the scenario (e.g., missing access rights, an expired license). The mentoring of the lecturers was only required for individual group meetings. Since we offered the e-mail addresses of the teams from the other universities to each ERP team, they were able to solve most technical and economic questions among themselves. However, we also had to send questions to the ERP vendors. During the different semesters, we changed the set of ERP systems used. Therefore, Figure 2 shows the different systems per winter semester (WS) with the access possibilities. The ERP manufacturers were asked for remote access to their systems and most of the manufacturers provided this access. The Microsoft Dynamics and SAGE systems were made available locally on computers at the universities or were installed on the students' notebooks. The appropriate licenses and full versions of the ERP systems were released free of charge for the period of the seminar.
The scenario chosen contained generic and specific retail processes that were examined by the students. Additionally, a generic production process that contained the assembly of a product consisting of individual parts was added to the scenario. Table 4 gives a general overview of the processes and tasks that make up the scenario.
Generic retail process:
• Enter a framework contract (1,000 PCs / 299 Euro each)
• Normal purchase price 349 Euro each
• Order 150 PCs for next month for 299 Euro each
• Supplier sends a delivery notification
• Supplier delivers 150 PCs that must be checked / stored
• A customer asks for an offer for 10 PCs
• The customer orders 8 PCs relating to the initial offer
• Take order amount from the warehouse and ship to the customer
Specific retail process:
• Check for basic price conditions, transaction-based conditions, and subsequent price conditions for purchase and sales (conditions such as basic bonuses, market share increase bonuses, listing bonuses, allowance adjustment bonuses, etc., are given in the scenario)
• Check if the system is capable of conditions depending on specifics such as regions, customer loyalty, etc.
• Check for calculation possibilities in purchase (different gross and net costs, etc.)
• Evaluate warehouse structures in terms of organization, areas, and attributes such as restrictions in weight or article characters (explosives, chemicals, etc.)
• Check whether it is possible to split sales offers into orders
• Check whether it is possible to deliver to different stores with different prices but send all bills once a month
Generic production process:
• Create a stock of materials
• Create bill of materials
• Generate a production order
• Assembly of the individual parts
• Assembly of the whole product
A more detailed description of the course and the approach of setting up this course can be found in [START_REF] Winkelmann | Teaching ERP systems: A multi-perspective view on the ERP system market[END_REF].
Course Evaluation
Students' perspective
Since the evaluation of seminars and courses is of high importance for the improvement of teaching concepts [START_REF] Seethamraju | Enterprise systems software in business school curriculum -Evaluation of design and delivery[END_REF], the same questionnaires were handed out to the students in all practical courses. We also conducted an evaluation within the lectures, but those results are not part of this chapter; we focus only on the evaluation of the practical ERP courses at competence levels 2 and 3.
The questionnaires were filled out anonymously. The questionnaire served to identify possible weaknesses and opportunities for improvement with respect to the courses' realizations, the scenarios, and the support from the lecturers, as well as the adequateness of the respective ERP systems. Also, the positive aspects that should be repeated in the next cycles could be emphasized. The questionnaire consisted of 21 questions based on scale evaluations (grades 1-5), yes/no, and free text answers. Some selected evaluation results are shown in Table 5. The presented results are from the evaluation of the courses in winter semester 2010 / 2011. Additionally, feedback discussions were conducted separately with some of the teams to gather further suggestions from the students. This was especially done at the end of MS Dynamics Project and ERP Systems in Commerce. Instead of a detailed empirical evaluation, which would not be statistically relevant because of the small seminar sizes, our goal is to report on students' and lecturers' experiences in order to make this knowledge available to other universities. As seen in Table 5, the students emphasize an increase in interest in ERP issues; only the hands-on exercises do not lead to such an increase. The level of difficulty is seen as quite reasonable for all courses within Competence Level 2 (cp. Figure 2). For the two courses of Competence Level 3, the students perceive the level of difficulty as somewhat too high. In contrast, the effort for these courses in comparison to other courses is seen as quite reasonable; only the effort for ERP Systems in Commerce is seen as too high.
Especially MS Dynamics Project and ERP Systems in Commerce turned out to be very popular among students. However, for ERP Systems in Commerce (since this course was not mandatory), we always feared attracting too few students, which would imply skipping some of the systems. This would have meant disappointing some of the ERP manufacturers who had invested time in advance of our seminar.
Lecturer's perspective
The inclusion of the different lectures and hands-on exercises, together with the addition of "self-learning" ERP courses (MS Dynamics Project and ERP Systems in Commerce), was a good opportunity to enhance the information systems curriculum at the TUD. For the students, this enabled a deeper insight not only into specific ERP systems for large-scale enterprises but also into systems for small and medium-sized enterprises.
A further benefit of MS Dynamics Project lies in the documentation of the respective ERP system produced by the students. Thereby, the lecturers obtained click-by-click instructions that can be used for further course enhancements or additional hands-on exercises. If the teams perform well in MS Dynamics Project, these documentations can be used without considerable effort for adjustment of the materials. For ERP Systems in Commerce, the expansion of the seminar to three universities was a good opportunity for the lecturers to foster the exchange with colleagues in the same research area. They could explore and discuss in which ways the students of the other universities were educated in the field of information systems. Additionally, this expansion also created a competitive pressure among the lecturers because every lecturer wanted his or her teams to perform very well. This increased the motivation for a good and high-quality mentoring of the teams at every university. Therefore, the expansion of the seminar to more than one university was regarded as a good idea among the professors and lecturers of the respective universities. Also during the different courses, the lecturers (professors and assistants) gained a valuable insight into ERP systems (some of them previously not known). Therefore, the courses also offer a chance to increase the individual ERP horizon.
Conclusion and Limitations
The idea of the integration of different ERP lectures and courses in the IS curriculum at TUD was to enable an insight into different ERP concepts and systems for the students. With this, a broad overview of ERP topics, functionalities, and architectural approaches is given as well as a deep practical insight into selected ERP systems. Therefore, students become familiar with using systems. Although we regard this type of course combination aiming at more diversity in ERP education as very successful, there are some limitations too. First, we are only able to handle some ERP systems and are not able to fully cover the market. However, we do not consider this a severe disadvantage. Furthermore, not all ERP systems on the market are suitable for such an ERP course. For example, older systems are often very complicated in their installation procedure. Also, ERP systems for large companies may not be very suitable for MS Dynamics Project and for the ERP Systems in Commerce as they may be too complex for unsupervised student exercises. We tried to keep the workload at the same level for all IS students especially for the groups in the "self-learning" courses. However, some groups may have to invest less work due to better ERP documentation, better usability, or more help from Internet forums.
In conclusion, for both students and lecturers/tutors, the integration of different ERP systems in the curriculum offers a good opportunity to gain a deeper insight into ERP systems and extend their knowledge about a variety of ERP systems, sharpening awareness of system differences.
Future steps are repeating the different courses in the respective semesters and varying the ERP system used in ERP Systems in Commerce in each cycle. Additionally, we are going to include the distribution game of ERP Simulation Game in the curriculum of IS students in the Bachelor Program as well as in the business studies curriculum.
Figure 1. Stages of maturity on the competence ladder with regard to the ERP curriculum (cp. [28], [29])
Figure 2. Overview of the ERP systems per semester
Table 1. Teaching methods (multiple answers allowed, n=59)
Teaching method | Absolute frequency | Relative frequency (n=59)
Lectures | 50 | 85%
Practical exercises | 36 | 61%
Case studies | 29 | 49%
Projects | 23 | 39%
Seminars | 20 | 34%
Assignment paper | 14 | 24%
Simulation games | 4 | 7%
Other teaching methods | 4 | 7%
Table 2. Scenario for SAP Exercise (compendium)
Generic production process: Create a stock of materials; Create bill of materials; Create routings; Generate a production order; Assembly of the individual parts; Assembly of the whole product
Controlling Case Study: Create Cost Centers; Plan the Number of Employees; Plan Primary Cost Inputs; Plan Internal Activity Inputs; Automatic Price Calculation; Create Work Centre; Integrate Work Centre in Routing; Perform New Product Cost Estimate
Generic retail process: Create master data for customers and vendors; Enter a framework contract; Create Sales Order; Create Production Order; Create Purchase Order; Outbound Delivery with Order Reference; Create Transfer Order for Delivery Note
Table 3. Scenario for MS Dynamics Project (compendium)
Table 4. Scenario for ERP Systems in Commerce (compendium)
Table 5. Results of the course evaluations of winter semester 2010 / 2011. Average grade per course (1=very high, 5=very low).
Columns: SAP Exercise, Level 2 (n=30) / MS Dynamics Project, Level 3 (n=12) / SAP Customizing, Level 2 (n=26) / ERP Simulation Game, Level 2 (n=10) / ERP Systems in Commerce, Level 3 (n=12)
Knowledge before course: 3.90 / 2.65 / 2.70 / 2.67
Interest in ERP issues before course: 2.47 / 2.17 / 2.46 / 2.50 / 2.50
Interest in ERP issues after course: 2.47 / 1.83 / 2.62 / 2.40 / 2.17
Motivation for thoughts and opinion building: 3.37 / 2.17 / 3.24 / 2.10 / 2.58
Increase of ERP knowledge in general: 2.43 / 3.17 / 2.35 / 2.20 / 1.92
Increase of knowledge regarding the respective ERP system: 2.63 / 2.33 / 2.35 / 2.20 / 2.08
Increase of knowledge in comparison to other seminars: 2.92 / 2.67 / 2.72 / 1.90 / 2.45
Usefulness of the scenario: 2.50 / 2.00 / 2.24 / 2.10 / 2.18
Adequateness of the respective ERP system: 2.27 / 1.83 / 2.09 / 1.78 / 2.00
Level of difficulty (2=much too high, 0=reasonable, -2=much too low): -0.37 / 0.17 / -0.08 / -0.10 / 0.42
Effort needed (2=much too high, 0=reasonable, -2=much too low): 0.13 / 0.00 / 0.00 / 0.00 / 0.58
Effort needed in comparison to other courses (2=much too high, 0=reasonable, -2=much too low): -0.50 / -0.50 / -0.08 / -0.40 / 0.67
01483867 | en | [ "shs", "info" ] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01483867/file/978-3-642-28827-2_14_Chapter.pdf | Ling Li
Effects of Enterprise Technology on Supply Chain Collaboration
Keywords: enterprise information technology, supply chain collaboration, supply chain performance
Introduction
Enterprise information technology integrates business functional areas and links the suppliers and customers of the entire supply chain. Today, e-solutions are a must-have weapon for a supply chain seeking to improve collaboration and compete in the global market. Equipped with integrated information technology, many manufacturers have adopted collaborative strategies for production planning, demand forecasting, and inventory replenishment to provide end users what they want, how they want it, and when they want it.
This study investigates the effects of enterprise technology on supply chain collaboration and performance. Structural equation modeling is employed to test the multi-phased conceptual model shown in Figure 1. Enterprise technology assimilation is indicated by two factors: enterprise technology use for exploitation (F1) and enterprise technology use for exploration (F2). Based on the theory of organizational learning [START_REF] March | Exploration and exploitation in organizational learning[END_REF] [2], we define enterprise technology assimilation for exploitation as the use of technology for the execution of routine supply chain processes. Similarly, enterprise technology assimilation for exploration is defined as the implementation of unstructured and strategic supply chain activities. Planning collaboration (F3) and forecasting and replenishment coordination (F4) are considered as supply chain collaboration measures. Collaboration and coordination in planning is defined as jointly planning key supply chain activities [START_REF] Vics | CPFR Guidelines. Voluntary Inter-industry Commerce Standards[END_REF] [START_REF] Danes | Managing business processes across supply networks: the role of coordination mechanisms[END_REF], while operational collaboration and coordination are defined as information sharing to achieve efficient task execution [START_REF] Zhou | Supply chain practice and information sharing[END_REF]. Operational benefits (F5) are defined as first-order benefits that arise directly from effective supply chain collaboration. Benefits for market performance (F6), in turn, arise through better operational performance supported by supply chain collaboration [START_REF] Li | Assessing Intermediate Infrastructural Manufacturing Decisions that Affect a Firm's Market Performance[END_REF].
Adaptive process concept toward enterprise information technology assimilation
The exploitation of enterprise information technology in supply chain collaboration involves using enterprise technology to facilitate routine business practices, such as order receiving, order tracking, establishment of new accounts, maintenance of existing accounts, invoicing, and material transactions. These activities refine existing business patterns, with benefits occurring over an immediate to short time period [START_REF] Tokman | Exploration, exploitation and satisfaction in supply chain portfolio strategy[END_REF]. With enterprise technology, users are able to improve operational efficiency through measures such as increasing standardization or tightening process control. Furthermore, the exploitation approach tends to result in operational benefits such as lead time reduction and improved inventory accuracy [START_REF] Zhou | Supply chain practice and information sharing[END_REF]. Firms oriented to exploitation use enterprise information technology for information sharing, channel collaboration, and integrated forecasting and inventory replenishment. For example, Cisco outsources more than 50% of its production capacity. Using enterprise information technology, it processes orders online effectively, which enhances its ability to respond rapidly to demand changes in the supply chain [START_REF] Zhou | Supply chain practice and information sharing[END_REF].
The exploration of enterprise technology, on the other hand, diffuses beyond the organization and involves uncovering new methods to solve long-term supply chain collaboration problems. Exploration is characterized by terms such as search, innovation, and discovery, with benefits occurring over a longer time horizon and beyond the organization [1] [8]. Unlike the exploitation approach, which places emphasis on efficiency, consistency, and process control, the exploration approach involves risk taking and experimentation. Firms oriented toward exploration of enterprise information technology develop new business models and strategies that enable them to expand into new markets and develop new products [START_REF] Debenham | Exploitation and exploration in market competition[END_REF]. For example, relying on enterprise technology to share business information with vendors and customers, Dell Computer has gained market share by building customized computers using the Internet as an order fulfillment vehicle. Dell assembles computers but outsources most of the parts and components it needs for production. Outsourcing has made collaborative planning, forecasting, and replenishment a vital vehicle for implementing a mass customization strategy in the supply chain.
Enterprise technology and supply chain collaboration
A supply chain is as strong as its weakest link. The notion here focuses on strong and effective collaboration. The fundamental point that distinguishes supply chain management from traditional materials management is how the collaboration of trading partners is managed. Thus, collaboration is one of the most discussed issues in today's global supply chain management.
In recent years, retailers have initiated collaborative agreements with their supply chain partners to establish on-going planning, forecasting, and replenishment process.
This initiative is called collaborative planning, forecasting, and replenishment (CPFR). The Association for Operations Management defines CPFR as follows:
"Collaboration process whereby supply chain trading partners can jointly plan key supply chain activities from production and delivery of raw materials to production and delivery of final products to end customers" -The Association for Operations Management 1 .
The enabler of CPFR is information technology. The earlier versions of CPFR relied on Electronic Data Interchange (EDI), bar coding, and vendor-managed inventory (VMI). The more current version of CPFR takes advantage of enterprise information technology. For example, Wal-Mart has engaged in CPFR with about 600 trading partners [START_REF] Cutler | CPFR: Time for the breakthrough[END_REF]. The use of enterprise technology has permitted strong supply chain coordination for production planning, demand forecasting, order fulfillment, and customer relationship management. Published studies have consistently supported the association between enterprise technology use and organizational coordination [3] [5].
Supply chain collaboration has been referred to as the driving force of effective supply chain management [START_REF] Barratt | Understanding the meaning of collaboration in the supply chain[END_REF] [START_REF] Li | Ensuring Supply Chain Quality Performance through Applying SCOR Model[END_REF]. The objective of supply chain collaboration is to improve demand forecasting and inventory management, with the right product delivered at the right time to the right location, with reduced inventories, avoidance of stock-outs, and improved customer service. The value of supply chain collaboration lies in the broad exchange of planning, forecasting, and inventory information to improve information accuracy when both the buyer and the seller collaborate through joint knowledge of sales, promotions, and relevant supply and demand information.
Supply chain collaboration becomes a core competence in a global market. There are eye-opening collaborative results in forecasting and inventory management.
Nabisco and Wegmans, for example, noted over a 50% increase in category sales. Wal-Mart and Sara Lee reported a 14% reduction in store-level inventory with a 32% increase in sales. Nevertheless, integrating disconnected planning and forecasting activities in the entire supply chain is still a challenge. It has been reported that supply chain collaboration has proved difficult to implement; it is difficult to understand when and with whom to collaborate; it has relied too much on information technology and there is a lack of trust between trading partners [START_REF] Barratt | Understanding the meaning of collaboration in the supply chain[END_REF].
Given the literature and anecdotal evidence, we may conclude that supply chain collaboration has great potential in supply chain management, but further investigation is needed to understand its practical value. As such, we hypothesize the following:
Hypothesis 1: The higher the level of enterprise information technology use for exploitation, the greater the perceived level of collaborative planning in the supply chain.
Hypothesis 2: The higher the level of enterprise information technology use for exploitation, the greater the perceived level of collaborative forecasting and replenishment in the supply chain.
Hypothesis 3: The higher the level of enterprise information technology use for exploration, the greater the perceived level of collaborative planning in the supply chain.
Hypothesis 4: The higher the level of enterprise information technology use for exploration, the greater the perceived level of collaborative forecasting and replenishment in the supply chain.
The VICS Working Group conceptualized a sequential collaborative process [START_REF]Collaborative Planning, Forecasting, and Replenishment: How to Create a Supply Chain Advantage[END_REF]. The process has nine steps, which are divided into three phases. The first is the planning phase, which consists of steps 1 and 2 and creates the collaborative front-end agreement and the joint business plan. The second is the forecasting phase, including steps 3-8, and the last is the replenishment phase (step 9). The second and third phases execute supply chain orders that are derived from the joint business plan determined in the first phase [START_REF] Danes | Managing business processes across supply networks: the role of coordination mechanisms[END_REF].
The importance of collaborative planning has been well documented. For example, in the spring of 2001, Sears and Michelin (a French company) began discussions on collaborative planning. Later that year, their joint plan detailed a collaborative forecasting and replenishment agreement. As a result of the collaboration, the combined Michelin and Sears inventory levels were reduced by 25 percent [START_REF] Steermann | A practical look at CPFR: the Sears -Michelin experience[END_REF]. This supports the following hypothesis.
Hypothesis 5: The higher the level of collaborative planning, the better the execution of collaborative forecasting and replenishment.
The relationship between supply chain collaboration and performance
Research consistently supports the idea that collaboration in the supply chain improves a firm's operational performance and market competitiveness [START_REF] Li | Assessing Intermediate Infrastructural Manufacturing Decisions that Affect a Firm's Market Performance[END_REF]. Companies that are able to establish collaborative relationships with their supply chain partners will have a significant competitive edge over their competitors. Industry practice provides numerous examples. The Mayo audio-video franchise store in Shanghai applied enterprise technology to support its collaborative planning, forecasting, and replenishment activities and achieved better operational performance, such as cost reduction, and better market performance, such as market share growth [START_REF] Wang | CPFR and Its Application in Shanghai Maya[END_REF]. Dell Computer implements a "direct model" that builds customized computers based on customer orders. It collaborates with many of its suppliers and applies Internet-based enterprise technology. The exploitation of enterprise technology enables Dell to implement a JIT-based production system, while the exploration of enterprise technology enables Dell to develop an innovative business model that opens up new markets. This leads us to the next two hypotheses.
Hypothesis 6: Collaborative planning, forecasting and replenishment in supply chain will directly benefit a firm's operations performance. Hypothesis 7: Better operations performance will contribute to supply chain market performance.
Research Methodology
Data and Constructs
The research instrument was based upon the existing literature and pre-tested by a group of practicing managers in China, who had enterprise information technology implementation experience and supply chain collaboration knowledge. The instrument was then revised according to the suggestions from the managers. The revised questionnaire was sent in 2006 to a group of 1000 manufacturing firms. Our effective sample size for this analysis is 177. Six constructs based on Figure 1 are used to test the hypotheses. Among them, two constructs are used for enterprise information technology assimilation: enterprise information technology use for exploitation (EIT) and enterprise information technology use for exploration (ERT). Based on March's discussion of organizational learning theory [START_REF] March | Exploration and exploitation in organizational learning[END_REF] and published studies on enterprise technology use [START_REF] Akbulut | The role of ERP tools in supply chain information sharing, cooperation, and cost optimization[END_REF], we define enterprise technology use for exploitation as the use of EIT for production scheduling, material requirements, and the implementation of structured inter-firm processes such as order processing and order shipment facilitation. These items are measured on a seven-point Likert scale, ranging from not important (1) to absolutely critical (7).
Given the wide variation in definitions and usage of the concept in the literature, the collaborative activities suggested in this study are just one of many ways that can be applied to capture the overall thrust of supply chain collaboration through technology implementation. In this study, supply chain collaboration is measured by two constructs; one deals with collaborative planning (CP) and the other with collaborative forecasting and replenishment (FR). We structure the collaborative activities into two constructs because one is at the planning level and the other at the operational level [START_REF] Danes | Managing business processes across supply networks: the role of coordination mechanisms[END_REF]. A number of authors suggest that collaborative planning processes such as joint decision-making and planning precede operational collaboration such as demand forecasting and inventory replenishment [START_REF] Vics | CPFR Guidelines. Voluntary Inter-industry Commerce Standards[END_REF]. The collaboration constructs are also assessed on a seven-point Likert scale, ranging from significantly lower (1) to significantly higher (7) as compared to their previous supply chain activities.
The operational performance construct (OP) is based on the published operations management literature [START_REF] Zhou | Supply chain practice and information sharing[END_REF] [START_REF] Akbulut | The role of ERP tools in supply chain information sharing, cooperation, and cost optimization[END_REF]. Inventory represents the material flow in the supply chain and is the physical item that suppliers send to their customers. The focus is placed on inventory accuracy, safety stock reduction, delivery lead time, and order fulfillment lead time [START_REF] Akbulut | The role of ERP tools in supply chain information sharing, cooperation, and cost optimization[END_REF]. Operations performance items are measured from "not improved (1)" to "significantly improved (7)".
The market performance construct has empirical support. The most commonly cited financial performance indicators are market share growth, economic growth opportunity, and customer retention [START_REF] Steermann | A practical look at CPFR: the Sears -Michelin experience[END_REF]. The performance items are measured on a 7-point Likert scale, ranging from significantly lower (1) to significantly higher (7) as compared to the firm's pre-implementation performance.
Structural equation modeling is employed to test the hypothesized relations among the six constructs. Structural equation modeling estimates multiple relationships between independent and dependent variables simultaneously, accommodating interrelated dependence relationships in one comprehensive model.
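As a rough illustration of what such a comprehensive model looks like in practice, the sketch below specifies the hypothesized paths of Figure 1 in lavaan-style syntax and estimates them with the Python package semopy. The package choice, the item names (eit1 ... mp3), the mapping of items to constructs and the data file are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch (assumptions: semopy package, lavaan-style syntax,
# hypothetical item names and data file). Not the authors' actual script.
import pandas as pd
import semopy

model_desc = """
# measurement model: latent constructs measured by survey items
EIT =~ eit1 + eit2 + eit3
ERT =~ ert1 + ert2 + ert3
CP  =~ cp1 + cp2 + cp3
FR  =~ fr1 + fr2 + fr3
OP  =~ op1 + op2 + op3
MP  =~ mp1 + mp2 + mp3

# structural model corresponding to hypotheses H1-H7
CP ~ EIT + ERT        # H1, H3
FR ~ EIT + ERT + CP   # H2, H4, H5
OP ~ FR               # H6
MP ~ OP               # H7
"""

data = pd.read_csv("survey_items.csv")   # hypothetical 177-firm item data
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())           # path estimates, standard errors, p-values
print(semopy.calc_stats(model))  # chi-square/df, GFI, AGFI, CFI, RMSEA, ...
```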
Construct measure and reliability
Our conceptual model involves relationships among six constructs. In this section, we provide evidence that the measurement of these constructs has been effective in terms of reliability and validity. All of the survey items that were used for measurement of the constructs are listed in Table 1. Empirical support for effective measurement is provided by Cronbach's alpha. Enterprise technology for exploitation was measured using three items; the reliability of the scale is 0.81 (Table 1). Enterprise technology for exploration was also measured using three items; the reliability is 0.817 (Table 1). The reliabilities for collaborative planning and collaborative forecasting and replenishment are 0.756 and 0.868, respectively. Finally, the reliabilities for operational performance and market performance are 0.805 and 0.804, respectively.
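For reference, Cronbach's alpha for a k-item scale is alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale). A small self-contained sketch is given below; the item scores are hypothetical and only illustrate the computation, they are not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) array of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example with three hypothetical 7-point Likert items (an EIT-like scale)
scores = np.array([[5, 6, 5],
                   [3, 3, 4],
                   [7, 6, 6],
                   [4, 5, 4],
                   [6, 6, 7]])
print(round(cronbach_alpha(scores), 3))
```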
Results
Structural model test result
The results of the structural model test evaluating overall model fit are shown in Fig. 2. Additionally, chi-square/df is 1.06, GFI is .916, AGFI is .890, CFI is .993, and RMSEA < .018; all meet the acceptable thresholds. The standardized path coefficients are significant at p < 0.01 (Table 1). Combining the findings of the fit indices obtained from the measurement model and the structural model, we can see that the sample data support our conceptual model. The following section presents the outcomes of the hypotheses associated with the structural model.
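The reported indices can be read against commonly cited rules of thumb (e.g. chi-square/df <= 3, GFI and CFI >= 0.90, AGFI >= 0.80, RMSEA <= 0.08). The short check below uses these conventional cut-offs, which vary across the SEM literature and are not the authors' own criteria.

```python
# Rule-of-thumb screening of the reported fit indices (thresholds are
# conventional values from the SEM literature, not taken from this paper).
reported = {"chi2/df": 1.06, "GFI": 0.916, "AGFI": 0.890, "CFI": 0.993, "RMSEA": 0.018}
thresholds = {
    "chi2/df": lambda v: v <= 3.0,
    "GFI":     lambda v: v >= 0.90,
    "AGFI":    lambda v: v >= 0.80,
    "CFI":     lambda v: v >= 0.90,
    "RMSEA":   lambda v: v <= 0.08,
}
for index, value in reported.items():
    verdict = "ok" if thresholds[index](value) else "check"
    print(f"{index:8s} {value:6.3f} -> {verdict}")
```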
Findings related to hypotheses:
We further investigated the findings related to specific hypotheses and individual paths of the model. The set of four hypotheses relating enterprise technology to supply chain collaboration is examined first.
Hypothesis 1 is not significant. Hypothesis 2 is supported at p<0.10 (γ 1 =0.157). Hypothesis 3 is supported at p<0.01 (γ 2 =0.462). Hypothesis 4 is supported at p<0.01 (γ 3 =0.257).
This set of findings reveals some valuable insights on how enterprise technologies facilitate supply chain collaboration. The results suggest that applying enterprise technology for exploitation directly affects operational collaboration such as demand forecasting and inventory replenishment. However, it does not have a significant impact on collaborative planning. Furthermore, applying enterprise technology for exploration, which focuses on identifying trends in sales and operations management and leveraging the firm's expertise to create new markets and products, has a direct positive effect on both collaborative planning and collaborative forecasting and replenishment. The results from this study underscore the complexity of the construct of enterprise technology exploitation and indicate that exploration may have an overarching impact on supply chain collaboration. These findings suggest that enterprise technology use creates a unique and specific value for collaborations within the supply chain.
Next, we look at the hypothesis that relates the collaborative planning construct to collaborative forecasting and replenishment.
Hypothesis 5 is supported at p<0.01 (β 1 =0.3901).
The finding provides support for the sequential process of collaborative planning and collaborative operational activities. A possible explanation is that sharing information through enterprise technology and making collaborative plans are not enough to improve operations performance. In order to achieve better inventory and lead time performance, supply chain managers have to be able to get involved with the complexity of collaborative planning with multiple echelons in a supply chain and implement the plan through demand forecast and inventory management. This finding is consistent with the result obtained by Disney et al. [START_REF] Disney | Assessing the impact of e-business on supply chain dynamics[END_REF].
Finally, we examined the hypotheses that relate supply chain collaboration to operations and market performance.
Hypothesis 6 is supported at p<0.01 (β 2 =0.2842). Hypothesis 7 is supported at p<0.01 (β 3 =0.6189). The findings suggest that collaborative forecasting and replenishment will significantly benefit operational performance. Better operations performance is found to have a significant impact on firm's market performance.
In summary, six of seven hypotheses have been supported by the results of the statistical analysis using data from 177 Chinese firms. Examining the results, some tentative conclusions can be made. First, enterprise information technology implementation significantly affects collaborative planning, forecasting, and inventory replenishment in a supply chain. Second, supply chain collaboration benefits firm's operational performance. Finally, market competitiveness is influenced by operations performance.
Conclusions
The study considers how collaborative activities mediate the association between enterprise information technology assimilation and market performance in the supply chain. We draw upon empirical research from 177 companies to illustrate which collaborative activities enable a supply chain to achieve better operational and market performance, given its particular enterprise information technology implementation circumstances. We have provided three major contributions in this study: (i) uncovered the importance of leveraging enterprise information technology use through supply chain collaboration; (ii) identified the relationship between enterprise ownership, enterprise technology use and supply chain collaboration; and (iii) illustrated the association between collaborative activities, operational benefits, and supply chain market performance. The result of the study indicates that, assisted by advanced information technology, successful collaboration among trading partners does affect firms' operational and market performance if effective communication in the process of supply chain coordination is fostered.
There are a number of avenues along which this research can be extended. For example, further research on supply chain collaboration may include risk assessment of collaboration, the optimal point of product differentiation in a supply chain, the selection of trading partners, and the effects of vertical collaboration, horizontal collaboration, and spatial collaboration on performance.
Fig. 1 Research Model
Fig. 2 Covariance Structure Model
Table 1: Scales and Constructs (columns: standardized coefficient, t-value, Cronbach's alpha)
The Association for Operations Management was formerly known as the American Production and Inventory Control Society (APICS).
"13008"
] | [
"10117"
] |
01483869 | en | [shs, info] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01483869/file/978-3-642-28827-2_15_Chapter.pdf | Dmitrij Slepniov
Brian Vejrum Waehrens
Ebbe Gubi
Changing Foundations for Global Business Systems
Keywords: Business system, global operations capabilities, operations network configuration, case studies
Introduction
The world is changing fast. To accommodate this change, companies are under increasing pressure to develop new, adequate structures for their operations systems. Facing intense competition, companies all over the world are seeking to achieve a higher degree of efficiency and effectiveness by constantly reconfiguring their value networks and subsequently relocating discrete value-added activities to the most appropriate destinations. This process may be confined to crossing geographic borders and occur on an "intrafirm" basis (i.e. offshoring). Increasingly, however, in many industries (e.g., textile, footwear, IT services) it has also been accompanied by vertical disintegration of activities (i.e. outsourcing to external suppliers) [START_REF] Aron | Getting Offshoring Right[END_REF], [START_REF] Kotabe | Global Sourcing Strategy and Sustainable Competitive Advantage[END_REF].
It goes without saying that the idea of the global dispersion of work is not new. The existing industrial networks scholarship (e.g. [START_REF] Ferdows | Made in the World: The Global Spread of Production[END_REF], [START_REF] Shi | Emergence of Global Manufacturing Virtual Networks and Establishment of New Manufacturing Infrastructure for Faster Innovation and Firm Growth[END_REF]) provides a point of departure for understanding how global operations units are configured on a global basis and consist of diverse and interdependent affiliates (linked both through ownership and non-equity relationships), which are engaged in an exchange of goods, services and information. [START_REF] Dicken | Global shift: Mapping the Changing Contours of the World Economy[END_REF] points to the essential dynamism and organizational temporality of such global operations agglomerates. With the spread of offshoring and the fragmentation of operations, the networks' temporality and frequent reconfiguration trends are likely to continue. We argue that the constant "process of becoming" [START_REF] Slepniov | Offshore Outsourcing of Production -an Exploratory Study of Process and Effects in Danish Companies[END_REF] in global operations networks poses a serious challenge for global business system management and the development of adequate systems tools and solutions. Therefore, this paper investigates how changing operations configurations affect the business systems of companies and how companies can establish fitness for continually evolving operations configurations.
The empirical part of the paper is based on a case study of a large Danish industrial equipment company. The offshoring process has affected most of its value chain activities and is no longer confined to simple or non-core activities. This process has pushed the company towards the development of more elaborate operations strategy structures and infrastructures. The case testifies to how Danish companies have advanced far in relocating and reconfiguring most parts of their value chains. However, this has been achieved at the great initial cost of intense coordination efforts. Drawing on the experiences of the case study, the key argument of this paper is that companies need to understand the logic, factors and determinants of their global business systems and how they change over time. Such an understanding will enhance companies' ability to build and continuously upgrade the organizational capabilities, support systems and knowledge supporting the integration between the lead firm and the increasingly dispersed operations network.
The following section introduces the theoretical background of the study. We then proceed with the methods and the case study used in the paper. Next, the analysis and discussion are presented, before we conclude with key lessons and implications for future research.
Theoretical Background
Business Systems
The global business system is a rather vague concept, which has been applied at multiple levels of analysis. At one end, there is the economic concept of the national business system [START_REF] Whitley | Divergent Capitalisms: The Social Structuring and Change of Business Systems[END_REF], dealing with the national institutions and conditions for conducting various business processes. At the company level, the business system has been used to describe the organisation, mode and scope of operations [START_REF] Normann | From Value Chain to Value Constellation: Designing Interactive Strategy[END_REF]. And at the operational level, the business system is often discussed as the specific tools supporting and/or governing operations and operations development [START_REF] Davenport | Putting the Enterprise into the Enterprise System[END_REF]. One key dimension of the latter perspective is that the business system imposes its own logic on the company, and the company often fails to reconcile the technical standards embedded in the system with its specific business needs [START_REF] Davenport | Putting the Enterprise into the Enterprise System[END_REF].
In this paper, the business system is discussed as a combination of these perspectives, drawing on the idea that the business system builds on a set of structural and infrastructural means which enable the company to create, deliver and appropriate value. This includes not only internal resources and capabilities, but also the company's ability to gain access to external resources and capabilities. The paper works from the thesis that it is from an understanding of all the above conditions for business that we should draw our knowledge about how to configure the business system appropriately.
Organizing Principles of Global Business Systems
The business system approach [START_REF] Whitley | Divergent Capitalisms: The Social Structuring and Change of Business Systems[END_REF] focuses on the effect of factors in the institutional environment on organizations. The basic dimensions of coping with these effects are: 1) coordination, i.e. the mode and extent of organisational integration through common routines, systems and management standards; and 2) control, i.e. the way the activities and resources are controlled within the organization. These two dimensions are also important for determining the degree of centralisation and decentralisation. Prior research (e.g. [START_REF] Narasimhan | Organization, Communication and Co-ordination of International Sourcing[END_REF]) differentiates between three major types of organisational structures: centralized, decentralized and hybrid. Centralized structures are characterized by tight coordination and control mechanisms, with decision-making authority concentrated at the top of the organisation. In decentralized structures, on the other hand, decision-making authority is pushed down to the business unit level, making such a structure particularly suitable for organisations with markedly different or even unique business units. In practice, however, most companies have adopted a hybrid approach that combines attributes of both the centralized and the decentralized form with the intention of overcoming centralization-decentralization trade-offs.
In the strategic management literature, the dimensions of control and coordination are used to define four basic business configurations with distinctive governance forms: the multinational, the global, the international, and the transnational [START_REF] Bartlett | Managing Across Borders: The Transnational Solution[END_REF]. The transnational mode helps companies to achieve simultaneously the global efficiency of the global mode, the national responsiveness of the multinational mode and the ability to exploit knowledge emphasised in the international mode [START_REF] Bartlett | Managing Across Borders: The Transnational Solution[END_REF]. The transnational mode recognises the importance of decentralisation and responsiveness to cultural differences and thus retains "national" in its name. On the other hand, the transnational mentality also emphasises linking and coordinating between globally dispersed operations, as indicated by the prefix "trans". These four configurations have in turn formed the outset for addressing ERP architectures [START_REF] Clemmons | Control and Coordination in Global ERP Configuration[END_REF] dealing with the basic dilemma of balancing the local responsiveness and global efficiency of the system.
Among other theories providing insights into the foundations for organizing global business systems is the resource-based view (RBV). This view adds an important dimension to the discussion. The fundamental principle of the RBV is that the basis for a firm's competitive advantage lies primarily in the application of the bundle of valuable resources at the firm's disposal [START_REF] Barney | Firm Resources and Sustained Competitive Advantage[END_REF]. These advantages are dependent on organisational trajectories, which build intrinsic organisational capabilities. The transformation of a short-run competitive advantage into a sustained competitive advantage requires that these resources are heterogeneous in nature and not perfectly transferable; in other words, they develop proprietary properties and are embedded in a specific set of context variables. Within the multinational company these resources are highly dispersed as local entities specialise. According to the RBV, the activities that enable an organisation to outperform the competition should be nurtured and defended. However, the multinational company may have incentives to inject more discipline and centralized control into its dispersed operations if the costs of responsiveness significantly outweigh the benefits.
Offshoring Trends and Challenges of Continuous Reconfiguration
With the growth of offshoring, the move of competitive resources from an intra-organisational base to inter-organisational network settings is also gaining pace. In other words, resource bases are being stretched across locations or even organisations. Practice shows that some more mature and experienced companies are better equipped than others for dynamic, fragmented and, to a large degree, external set-ups of their operations. However, even these more mature companies cannot avoid the challenges and costs of dealing with such complexity. Within a loosely coupled global inter-organisational network, the situation is exacerbated even further in cases of non-standard products, products with integral product architectures, and products whose output is time-sensitive [START_REF] Baldwin | The Power of Modularity[END_REF].
However, these challenges and costs differ depending on the robustness and transferability [START_REF] Grant | Adapting Manufacturing Processes for International Transfer[END_REF] of the tasks in question. An operations process is robust if its sensitivity to external factors (e.g. managerial practices, infrastructure, and government requirements) is low. Transferability here refers to how easily the process can be captured, decontextualised, transmitted and assimilated. High robustness and high transferability may be highly desirable for implementing an offshoring decision. However, referring back to the arguments of the RBV, high robustness and high transferability of all processes may reduce the uniqueness of the business and undermine the sustainability of competitive advantages, which in turn may push the company to choose a more integrated organizational mode.
It can be argued that few manufacturing processes possess sufficient robustness and transferability to allow for perfect mobility or a standardised organizational infrastructure, which is also supported by [START_REF] Clemmons | Control and Coordination in Global ERP Configuration[END_REF] in their discussion of global ERP configuration. This raises the question of configuring business system solutions to key contingencies. Addressing this effectively means that not only the hardware of the support system has to be changed; the process also involves building organizational capability for global operations through systems, processes and product adaptations, and preparing the organization mentally. However, how this can be achieved remains a key unresolved question, and therefore this paper explores the following research question: how can companies effectively coordinate and control globally dispersed tasks which are embedded in differentiated and constantly changing organizational contexts?
Methodology and Data
The primary data set for this study is derived from a case study of a Danish industrial equipment firm. The case was followed intensively by the authors in 2009-2011. We interviewed the COO and supply chain managers about the processes, means, and strategies supporting their international operations.
The case study strategy, one of several strategies of qualitative enquiry, has been chosen for this investigation for several reasons. First, case studies can describe, enlighten and explain real-life phenomena that are too complex for other approaches requiring tightly structured designs or pre-specified data sets [START_REF] Voss | Case Research in Operations Management[END_REF], [START_REF] Yin | Case Study Research -Design and Methods[END_REF]. Second, the case study strategy is well-equipped instrumentally for furthering understanding of particular issues or concepts which have not been deeply investigated so far ( [START_REF] Eisenhardt | Building Theories from Case Study Research[END_REF], [START_REF] Yin | Case Study Research -Design and Methods[END_REF]). Third, the choice of the case study strategy is based on the fit between case and operations management (OM) [START_REF] Voss | Case Research in Operations Management[END_REF], which is acknowledged but underexplored in the literature.
Despite having many advantages, case study research also has several pitfalls and poses significant challenges (e.g. [START_REF] Meredith | Building Operations Management Theory through Case and Field Research[END_REF]). First, there is the problem of the observer's perceptual and cognitive limitations. Second, a high probability of overlooking some key events also constitutes a threat to the quality of case study research. Third, case studies are exposed to the challenge of generalizability. Fourth, the accuracy of some inferences can be undermined by the reliance on intuition and the subjective interpretation of an investigator. To address these challenges, we followed practical guidelines and steps discussed in the qualitative methodology literature (e.g. [START_REF] Voss | Case Research in Operations Management[END_REF], [START_REF] Yin | Case Study Research -Design and Methods[END_REF]). The current research relied on extensive use of triangulation. Multiple sources of evidence (semi-structured interviews, documents and on-site observations) as well as triangulation of multiple data points within each source of evidence (e.g. multiple respondents at the top and middle management levels) were used. These data, combined with secondary material (annual reports, media material, presentation material to customers and stakeholders), were used to build the database for the case.
Case Study: Distributed Operations at a Danish Industrial Equipment Firm
The case company is a Danish equipment manufacturer holding a market leader position. With production in twelve countries and a global sales presence, it was working from a strong international base. The company had been acquiring approximately one production company every year since 2000, and with these new subsidiaries it also inherited a number of business systems, processes and product configurations. By 2011, it had incorporated more than 80 companies, spanning all time zones, 90 languages and more than 100 product families. These developments signaled a change of mindset from an early ideology of original in-house development, tight control and greenfield investments. Some of the newly acquired firms still controlled their own business agenda, while others were fully integrated under a corporate business system. The pace of acquisition had quickened recently, in step with the restructuring of the market for the company's main product, characterized by increased concentration and by firms moving from component to system suppliers, adding more competencies. When referring to the business approach, one of the company's executives defined it as a "centrally driven global approach with a local presence". Such an approach inevitably resulted in a highly complex business system characterized by:

Sales and operations location diversity: some products were produced in one factory and sold worldwide, while other products were produced in the region where they were sold

Components supply base diversity: many components for local assembly were produced in one or a few factories; some components were also shared across product families

Multiple product/solution configurations: sales responded to local needs and standards, resulting in many potential product/solution configurations

Multiple approaches to operations: the network encompassed all operations approaches from make-to-stock to engineered-to-order

Diverse and dynamic operations network: the global operations network was still emerging with the addition of new facilities, many of which had their own operating conditions
The Danish HQ had the strategic vision of establishing tighter control of foreign subsidiaries with regard to the global capacity footprint, R&D and process ownership. However, each business unit had its own budget and a certain latitude to select projects and allocate resources and responsibilities. Consequently, coordination efforts were organized in a corporate management function with a key focus on embedding a corporate culture and developing group standards and policies. But the entrepreneurial spirit of the individual subsidiaries remained and was seen as a key driver of development, and all KPIs remained related to local operations performance, resulting in what could be termed a loosely coupled global supply chain.
The company was structured around a fundamental process perspective in which the interaction between Production, Product Development and the Technology Centers played a special role. With the Technology Centers being responsible for technology development and the establishment of production lines, a certain degree of coordination was necessary to serve their two customers, namely Production and Product Development. Although the main Production hub was still based in Denmark, parts of Production had already been widely offshored and broad autonomy had been granted to regional hubs. With Product Development also moving out of Denmark, it made sense that the Technology Centers followed their internal customers in their global expansion. Consequently, local hubs were opened in Hungary and China and a new hub was planned in Mexico/USA. Although there was a shared agenda at a higher level in relation to operations in different market segments, cooperation between foreign units was largely limited to brief collaboration on assignments and the sharing of patents.
The economic downturn hit the company with a delay in 2009. The management group had just reported that the company seemed to be largely unaffected by the global crisis when a drastic drop in turnover occurred. Afterwards, it was revealed that, due to the largely decentralized reporting structure, it took more than six months to stop component production from the time the company stopped invoicing its external customers. This experience taught the company a valuable lesson, namely that the loosely coupled operations network could not react swiftly to major changes in the market. To respond to this challenge, a strategic decision was made to initiate the global integration of Demand and Supply.
To implement this decision, the company introduced a number of new technologies and processes, which challenged the decentralized approach to the global network of facilities fulfilling demand and consolidated demand planning. An overlaying federal structure was introduced to the global network, consisting of a number of business system tools:
A new process for Integrated Demand and Supply Planning
New roles and changed responsibilities across the supply chain
New SAP modules to support the process and decision-making
A product segmentation according to the level of demand predictability and supply chain impact (a small illustrative sketch of this segmentation idea follows below)
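The last of these tools, segmenting products by demand predictability and supply chain impact, can be pictured as a simple two-by-two classification. The sketch below is purely illustrative: the levels, segment labels and policies are assumptions made for the example and are not details disclosed by the case company.

```python
# Illustrative sketch of a demand-predictability / supply-chain-impact
# segmentation; labels and policies are hypothetical, not the company's.
def segment(predictability: str, impact: str) -> str:
    """predictability and impact are each 'high' or 'low'."""
    matrix = {
        ("high", "high"): "plan centrally, make-to-stock",
        ("high", "low"):  "automate local replenishment",
        ("low",  "high"): "joint planning with key customers, hold safety stock",
        ("low",  "low"):  "make- or engineer-to-order",
    }
    return matrix[(predictability, impact)]

print(segment("low", "high"))
```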
For further coordination of strategic roles and responsibilities in the global business system these measures were introduced:
Supply Chain focus and KPIs
ONE PLAN: transparent and visible to all
Global decision-making with local execution
The R&D function was also in need of better coordination. The company had over 1000 R&D staff globally, indicating that even highly complex tasks are increasingly dispersed. In the coming 5-7 years, this dispersion of activities was expected to grow further. To illustrate, the Asian hub was planned to have the same number of engineers as Denmark. This rapid growth could also be illustrated by the more than doubling of staff in China in just a year, to more than 100 engineers. Though R&D manpower in China was growing fast, the Chinese unit had not yet launched any product range on its own, solely supporting central development activities. It was expected, however, that future responsibilities for developing products would be decentralized to a larger extent. One key driver of this was that China had a special status as a "second home market" with a Managing Director reporting directly to the global board. Meanwhile, efforts were also being made to develop the US market, as its potential had traditionally not been realized to the full extent. To illustrate, although the company introduced some product ranges there over 50 years ago, it could claim less than a 10% share of the market.
It is expected that over time, despite the introduction of the measures outlined above, each regional "Network" (Technology Center/Production/R&D) will grow increasingly independent and specialized, replicating best practices but developing its own capabilities, compatible with local culture and markets. The global organization will be nurtured through a positive iterative process by gradually increasing the level of complexity of the tasks performed overseas. The parallel activities at the different hubs of the company will continue until the outposts reach critical mass or until they are mature enough to absorb key competencies from headquarters or other hubs of the network.
Discussion and Implications
Changing Operations Configurations
As a point of departure for discussing the case company and its fitness for global operations, there is a need to highlight how the operations configurations of the company have been changing over time and how the overall business system has been affected as a result. The long period of acquisitions and offshoring moves resulted in the creation of a complex, loosely coupled network of differentiated partners and affiliates working with a variety of business systems, processes and product configurations. The belief that responsiveness to local conditions should be answered by the development of local solutions led to a number of different standards for operations and a lack of ability to compare and organize a coordinated effort across sites.
The situation can hardly be seen as unique or specific to this case. All companies are bound to their historical legacy. The long string of strategic and operational decisions introduces a certain dependency into firms' development trajectories, making it difficult, if not impossible, to design such a system from a clean slate. The case study in this paper also illustrates how the business system evolves over time and how any system development initiative needs to take the changing operational realities of the fragmented system into account. This in turn means that developing solutions and capabilities related to managing the evolving global business system poses a serious challenge for multinational companies.
Factors influencing operations in the case are added incrementally as new facilities are established or acquisitions are made, new markets are opened, new technologies are added, and new suppliers seek integration. These incidences mean that the system is in constant motion and that mechanisms of coordination and control are constantly challenged by diverging standards. While the operations of sites and companies may have a clear agenda and set of stakeholders, the network of operations is not tended to; it is no-one's business. This means that the network may indeed share a common vision, but that its common focus is disintegrated by design, as each entity develops through a series of incremental moves and decisions.
In many companies, this evolution and the increased complexity it causes call for a reengineering of the overall business system and its supporting tools. Like any other engineered system, the business system is designed to nurture certain capabilities, and the system is likely to be good at doing certain things, but does so at the expense of others. Ultimately, this property of the system leads to trade-offs, which have to be dealt with. The issue may, however, be solved by focusing on the possible complementarities of the system elements rather than their conflicting characteristics. We know from the field of operations management that certain complementary effects can be gained from capabilities which are often seen as conflicting [START_REF] Hallgren | A Hybrid Model of Competitive Capabilities[END_REF]. This approach has won widespread recognition as a key organizing principle for a modern business world and the transnational mode of operations [START_REF] Bartlett | Managing Across Borders: The Transnational Solution[END_REF]. However, it is also recognized that governance based on these principles is difficult to operationalize in practice.
Developing Adequate Global Business System Solutions
Under these conditions, an increasing number of manufacturers, like the case company, are significantly reshaping their global operations configurations, including radical increases in the scale and scope of their commitments to offshore operations. Very often such a reconfiguration is made on the basis of expected short-term capacity and cost implications. Meanwhile, the equally important aspects of how to realise global operations and sustain competitive positioning in the longer term are given lower priority. As the case shows, the company struggled to utilise its global operations potential and was faced with unintended risks, as it was shortly after the global economic downturn in 2009. Circumventing these negative effects requires a conscious build-up of organisational capabilities in support of global operations, which we refer to as the build-up of fitness for global operations.
When discussing such a fragmented system configuration comprising both internal and outsourced operations, the overall system's performance should be emphasized. As systems theory suggests, a system is not just the sum of its individual parts. If an operations configuration and the relationships between the units in it are not optimal, the company risks negative synergy. The case clearly demonstrates this situation, because increased performance in one factory in the network does not necessarily equal improved performance in the overall supply chain.
To tackle this, the company tried to find an optimum balance between centralization and decentralization. On the one hand, to compensate for slow response and the increased distance among its operations, the company worked on introducing a more formalized way of working. On the other hand, the company nurtured plans for upgrading its regional hubs and maintaining a high degree of responsiveness. In some instances, it compensated for the lack of direct control over the physical flow of goods by standardization and in some cases by handing over responsibility to suppliers. This standardization can also be observed in companies which, in spite of their overall preference for direct ownership, still face the increasing distance between their HQ and subsidiaries or try to establish or utilize economies of scale in their value chains. There is also evidence in the case to support the proposition that standardization increased the company's ability to change faster and maintain continuous improvements on a global scale.
We can conclude from this that the ownership ties that exist within the vertically integrated multinational company do not necessarily preclude the entire range of discretionary behaviors that are possible among interacting organizations that are geographically dispersed. Paradoxically, despite predominantly ownership-based relationships in the case, control was limited not only because some of the subsidiaries happen to be very physically distant and resource-rich, but also because they controlled critical linkages with key actors, such as suppliers and customers. Direct control originating from vertical integration was present in the case company, but it was limited due to its co-existence with local autonomy, inherited and diverse systems, and work cultures, which were also recognized as necessary for maintaining responsiveness to various local market demands.
In terms of explaining the particular offshoring trajectory in the case, the company-specific task interdependency and the related ties between partners may be useful. For understanding why the case company experienced correlating offshoring trends across all major functions (i.e. Production, Product Development and the Technology Centers), the particular relational density of a given set of activities is key. Relational density is made up of the rate at which industries change in terms of products, processes and organizations, and may be explained as the need for thickness of the relational infrastructure.
It is evident that the case company has developed a high level of fitness for global operations as it advanced quite far with its global operations capability. However, the case also shows that, figuratively speaking, the company has been building the bridge while walking on it. Responding to upcoming challenges, it pushed standardization efforts, built up an integration mechanism and initiated relations building and resource pooling to build economies of scale and scope.
The case clearly demonstrated that continuous dynamics and change became inherent characteristics of the operations configurations. In this context, old-fashioned, efficiency-alone-oriented global business system solutions become irrelevant. Therefore, the company faced the challenge of developing a solution which enabled it to achieve the optimum balance between local responsiveness and global efficiency. The efforts that the company instigated led to an increased systematisation of the business system and an increased awareness of processes at its various levels, namely the corporate management level (challenging the decentralisation approach, e.g. through global Demand and Supply synchronization) and the individual site level (having enough autonomy to ensure local responsiveness). The case company carefully studied its opportunities for outsourcing parts of the operations network or otherwise extending the reach of its operations management beyond the organizational boundary as a means to focus on product development, assembly and distribution.
This systemic approach the company was developing emphasized not only short-term operational efficiency, but increasingly also longer-term strategic effectiveness. Some of the key determinants of the system included:
Overall system performance focus
Limitations of direct ownership control and coordination
Relational fitness (relational density)
Availability of a sourcing market driving cost opportunities and the pooling of resources
Weak or strong ties between value chain actors
Types of cross-functional interdependence necessary to accomplish tasks
Strategic reconciliation between Supply and Demand

Institutional support was also available for establishing global operations on a site-by-site basis within the organizational context, as well as being facilitated by developments in the external context. However, the case also stresses how the global business system is affected beyond just the stage of establishing individual sites or contracting with an external service or manufacturing provider. It rather emerges, and as an effect of this emergent process there seems to be a clear trajectory to the internationalization of the operations system, which over time gradually shifts its center of gravity to offshore destinations and absorbs new roles and responsibilities in the process. Mature offshoring decisions are characterized by their move beyond piecemeal-type decisions. They rather initiate an organizational process which accounts for systems effects and is not just about getting something produced in a specific location, but rather about orchestrating a network of interlinked activities, which raises multiple new demands on management capabilities and management systems.
Conclusion
The business system evolves over time rather than being designed from a clean slate. This in turn means that developing solutions and capabilities related to managing the evolving global business system poses a serious challenge for multinational companies. Factors influencing operations are added incrementally and mean that the system is in constant motion and that mechanisms of coordination and control are constantly challenged by diverging standards. While the operations of sites and companies have a clear agenda and set of stakeholders, the network of operations is not tended to; it is no-one's business.
The purpose of the paper has been to investigate how companies can effectively coordinate and control globally dispersed tasks which are embedded in differentiated and constantly changing organizational contexts, thereby establishing fitness for global operations. The findings of the investigation show that traditional manufacturers are significantly reshaping their global operations configurations, including a radical increase in offshore production. On the basis of the existing literature and the case-based example, the study identifies key determinants of a system aimed at striking a balance between the seemingly irreconcilable goals of global efficiency and local responsiveness. Moreover, it proposes how the design of operations configurations can be improved through the development of a distinct systemic approach to control and coordination.
This paper adds to the existing literature by unfolding the aspects of organisational capability required for improving the integration of a globally dispersed business system and the successful development of global operations. The awareness of inherent organisational capabilities often only emerges in situations where the company fails to establish the required level of quality, to gain sufficient advantages from its global scope of operations or to reproduce proprietary practices at a new location. As this study demonstrated, due to the integration needs and the interdependencies between globally dispersed tasks, this challenge is persistent and reveals itself even in more experienced companies.
"1003474",
"1001975"
] | [
"300821",
"300821",
"487698"
] |
01483871 | en | [shs, info] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01483871/file/978-3-642-28827-2_3_Chapter.pdf | Per Svejvig
Charles Møller
email: charles@production.aau.dk
A Workshop about the Future of Enterprise Information Systems
Keywords: Future Workshop, Enterprise Information Systems, LEGO SERIOUS PLAY
Enterprise Information Systems (EIS) can be classified into three generations, starting with the application-centric, moving on to the data-centric and then to contemporary thinking, which can be described as process-centric. The overall theme of CONFENIS 2011 was to re-conceptualize EIS. One way of re-conceptualizing is to start with a blank sheet and "think out of the box". This topic was addressed in a workshop at CONFENIS 2011 which focused on the future of EIS. The workshop brought together a large number of experts from across the world, divided into seven groups, who discussed the topic using LEGO SERIOUS PLAY to facilitate and stimulate the discussions. The seven groups came up with seven challenges for the future of EIS, and we propose that the next generation of EIS should be conceptualized as human-centric.
Introduction
The Fifth International Conference on Research and Practical Issues in Enterprise Information Systems (CONFENIS 2011) was held in October 2011 in Aalborg, Denmark. CONFENIS is now an established conference with representation from all over the globe. More than 80 experts from around the world gathered in Aalborg to exchange knowledge and discuss EIS. This year's overall theme was Re-conceptualizing Enterprise Information Systems (EIS).
One way to re-conceptualize is to start with a blank sheet and "think out of the box", that is, to try to think differently or from a new perspective. The organizing committee for CONFENIS 2011 decided to hold a workshop about "the Future of EIS" to contribute to the overall theme of re-conceptualizing EIS. This workshop explored the combined knowledge of the participants about the challenges and potential features of future EIS, using the LEGO SERIOUS PLAY method [START_REF] Hansen | Changing the way we learn: towards agile learning and co-operation[END_REF].
The purpose of this chapter is to present the process and the results from the workshop.
The LEGO SERIOUS PLAY method was used to facilitate and stimulate the discussions at the workshop [START_REF] Mabogunje | SWING-Simulation, Workshops, Interactive eNvironments and Gaming: An Integrated Approach to Improve Learning, Design, and Strategic Decision Making[END_REF]. The participants were divided into seven groups and each group was challenged to come up with their view on the future of EIS.
The seven groups each identified their number one challenge for the next five years: (1) Security, (2) Transparency of control, (3) User simplicity, (4) Rights management, (5) Standards, [START_REF] Pink | Doing Visual Ethnography. Images, Media and Representation in Research[END_REF] IT and business working in a cooperative environment, and [START_REF] Bazeley | Qualitative Data Analysis with NVivo[END_REF] Human business systems. The way these challenges were approached will be explained in this chapter with relation to the group work.
The chapter is organized in the following way. The next section presents the methodology, using LEGO SERIOUS PLAY to stimulate and facilitate the discussion. The seven major challenges are then reported on in the following section. This is followed by detailed presentations of five cases representing the results from five out of the seven groups. Finally we present our view of the next generation of EIS and the paper concludes with implications and suggestions for further research.
Methodology
Brief about the workshop process
The workshop took about two hours and consisted of several steps managed by a workshop facilitator. The seven groups were formed ad hoc. First, the facilitator introduced the workshop question "How can we conceptualize the Enterprise Information System of the future?" The facilitator explained that this is not an easy question with an easy answer and that there could therefore be many different answers based on different viewpoints. The viewpoints can be expressed metaphorically as different slices of a potato symbolizing the EIS, where the potato is big and fluffy. The slices make up different images based on the workshop participants' theoretical and practical understanding of future EIS. The facilitator then explained how our hands and fingers can stimulate our cognitive thinking by building models and prototypes. This approach was taken further by playing (not to be confused with gaming), using LEGO SERIOUS PLAY (LSP) to facilitate and stimulate the play [see also [START_REF] Bürgi | Images of Strategy[END_REF][START_REF] Gauntlett | Creative and visual methods for exploring identities[END_REF]]. Working with complex problems and concepts in a playful way can produce a high degree of creativity.
Secondly, the facilitator moved the audience through different exercises in order for them to learn the language of LSP by building small LEGO models.
Fig. 1. LEGO SERIOUS PLAY workshop session
Thirdly, each workshop participant was asked to build the future EIS of 2016 (five years from now) as an individual exercise. After the building process, each model was presented to the others in the group.
Fourthly, the final step was to build a shared model in the group by using the individual models, although adding parts to the model and removing redundant parts was still allowed. This shared model represented the team's final and conclusive work. At the end of the workshop, this model was formally presented to the other groups and the main future challenge was identified in each group. The groups were required to name unique challenges (There was no overlap between groups).
Data collection
One of the authors of this paper had the role of carrying out participant observation [START_REF] Myers | Qualitative Research in Business & Management[END_REF] and of capturing the workshop by video recording [START_REF] Pink | Doing Visual Ethnography. Images, Media and Representation in Research[END_REF]. Several student assistants supported the process by taking pictures from the workshop and video recording the final models prepared by the seven groups and providing formal feedback to the audience. This ensured that the activities that were simultaneously being carried out were sufficiently documented.
Data analysis
The videos were transcribed and coded in NVivo [START_REF] Bazeley | Qualitative Data Analysis with NVivo[END_REF]. The transcription process covered both visual elements (e.g. annotation of pictures) and verbal elements. Some of the videos were very difficult to transcribe due to the noisy environment and the many different English accents. Pictures were selected to represent the shared models. Videos from two of the groups were missing, so the detailed presentations of the group work cover five groups. Videos, pictures, transcriptions etc. were used to theorize about the major challenges of future EIS and to come up with proposals for the next generation of EIS.
The 7 Major Challenges of future EIS
The shared conception of the next generation EIS is remarkably uniform. All groups began by assuming that three of the main challenges of EIS today already would have been solved in the future:
Firstly, EIS is considered to be ubiquitous. This was on the agenda at this year's CONFENIS conference, and we can see in people's minds that the idea of EIS being available everywhere, easily accessible from the grid, is considered a certainty. It is interesting that this is also seen as a potential driver for greener and more sustainable IT solutions in the future.
Secondly, EIS is considered to be extremely flexible. This is despite one of the prominent challenges of the existing EIS being that they are infamous for being inflexible and thus hindering the business innovation of enterprises locked into systems logic. The LEGO brick itself is considered the ultimate model for IT solutions or services, packed into units with well-defined interfaces making the future of EIS potentially extremely versatile.
Thirdly, EIS is considered to be relevant. There are many new technologies today and also new approaches that could leave the EIS as outdated legacy systems. Not surprisingly for EIS researchers, the EIS is viewed as being likely to be present in both the back-office and the front office in terms of playing a new role for business organizations.
This is also the starting point for the first challenge of the next generation of EIS, as characterized by seven inter-related challenges.
IT and business working in a cooperative environment
Today a lot of attention is being given to the gap between IT and business organizations. In the future, the participants in EIS, IT and business organizations will work together to solve business problems, with EIS as an underlying technology to support intelligent decision-making and solutions.
Human business systems
Today we can define EIS mainly as a technology for supporting management control systems. In the next generation of EIS, we will see much more emphasis on EIS as a foundation for human-centered business systems. This implies access to EIS by means that are independent of time and space.
User simplicity
EIS today is characterized by often being difficult to use for the ordinary and occasional business user. In the future, EIS will have to simplify their user interfaces, and the logic behind their functionality must be simplified wherever the user is involved. This challenge also includes mobile access from gadgets such as iPads and other future smart devices.
Transparency of control
The most important way that the next generation of EIS can be made simple is to make the controls transparent. Lack of visibility is one of the major drivers of complexity and creating transparency of control is one way of making it apparent to the users what is cause and what is effect in the business.
Rights management
In order to support the new networked business architecture, the future EIS must manage rights in a different way. When we are dealing with smart devices, we cannot be sure that a single known person is in front of the computer, and the person requiring information may be outside the organization, e.g. located at a supplier's site. This creates a tremendous challenge regarding the management of rights.
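To make the challenge concrete, a rights decision in such a setting can no longer depend on user identity alone; it also has to weigh contextual attributes such as the requesting organization and the trustworthiness of the device. The sketch below is a hypothetical illustration of such a context-aware check; the attribute names, roles and rules are assumptions made for the example, not an existing EIS security model.

```python
# Hypothetical context-aware access check; attribute names and rules are
# illustrative assumptions, not a real product's authorization model.
from dataclasses import dataclass

@dataclass
class Request:
    user_role: str        # e.g. "planner" or "supplier_contact"
    organization: str     # "internal" or a partner id such as "supplier_42"
    device_trusted: bool  # managed device vs. unknown smart device
    resource: str         # e.g. "demand_forecast"

def is_allowed(req: Request) -> bool:
    # Internal planners may see planning data, but only from trusted devices.
    if req.organization == "internal" and req.user_role == "planner":
        return req.device_trusted
    # External partners only see the slice of data shared with them.
    if req.organization.startswith("supplier_") and req.resource == "demand_forecast":
        return True
    return False

print(is_allowed(Request("supplier_contact", "supplier_42", False, "demand_forecast")))
```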
Security
Various security issues will have to be managed and solved in order to open up enterprise data for access outside the enterprise. Today, a new security update can be considered safe until the bad guys crack the code; then a new security update is needed. This is not good enough in an enterprise setting, so new approaches to security will be required in future EIS.
Standards
Finally, in order to advance the maturity of EIS, overall standards are needed. These are needed not only as technological standards but also as process standards. They also point towards achieving flexibility and as such are a prerequisite for flexibility and versatility.
These seven interrelated and to some extent cascading challenges were considered the most important by the participants in the workshop. They can be seen as the major findings from this study. Together they span an opportunity space for future EIS research.
Five LEGO models representing different views on the future of EIS
This section reports on the immediate understanding and interpretation of future EIS in five out of the seven groups.
Group 1
Fig. 2, as shown below, captures the conceptualization of the future EIS by group 1. The model is annotated with concepts explained by the group.
Today we completed our future version of an EIS system in five minutes [assembled however from the individually produced future EIS systems in the group]. Now I would like to present the [future EIS] system. The name of the system is Elephant. An elephant is a symbol of luck and longevity, so our system will survive from generation to generation.
There are five major strengths of our system: 1) Our system is a bug free system. We can see that the bugs (pointing at the LEGO model) are kept out of the system by the fences and the hem. There is another hem if they break into the first fence, but they cannot get into the second one. We have very tight controls.
2) The second feature of our system is versatility. This is a very flexible system in a comprehensive / complicated world. There are many different kinds of features (again pointing to elements in the LEGO model) and it is effective to update and change them.
3) The third strength is in monitoring threats. We have a monitoring system with which they can watch the outside world and the system will be able to react to threats. 4) The fourth feature is that we will "break the old habits". If you think of existing EIS systems, they are like blocks or circles, but we have different shapes, we have blocks, we have circles, we have triangles, we have elephants -so we can break the old habits! 5) Finally the last one is maybe the most important. We can integrate a lot of subsystems so that the world can work happily together.
This is our version of our future EIS system (applause) -Thank you!
[Fig. 2 annotations: the second fence; the elephant as the symbol of the system.]

The group decided that security is the biggest challenge for future EIS, and this is clearly represented in their LEGO model.
Group 2
Group 2 came up with the following model in Fig. 3 describing the future EIS. The group describes the model as follows:
What we have here is our conceptualization of the EIS...this is characterized by:
1) Transparency 2) Process factory ability 3) Pervasive computing anywhere and any time 4) Directions to be able to drive our processes and the processes [delivered by] a process provider. There is a process to drive an organization towards the goal in an efficient way. Adjusting the processes day by day and ensuring that the goals of the organizations can be driven in the right direction 5) Flexible sensors to sense what is happening in the outside
environment and then to deal with pressure from competitors, the market, and from regulators 6) We also have some interaction with mobile devices, so we are very flexible and can see them as a mechanism or as a bridge to an external provider
We furthermore have controller busses (in the middle of the picture) [to take care of controlling processes]. We have this management information both for the inside and also for the outside, to retain a high degree of transparency. The system also has the horizontal support of processes for manufacturing, sales, finance, and other uses. The system will be very powerful in the future.
Group 2 selects transparency of control as the main challenge, illustrated by the inside / outside management information system.
Group 3
The group explains the future EIS artifact in this way: First of all, everything is built around the user: 'humans first' is the motto and it is all placed in a green environment. So we look into the future and in five years' time it is important to consider the green elements within it. Then what we will have is the interconnectivity of the different elements, built on a very simple platform. That green one is the simple platform, but everything is built on stable and flexible pillars using the LEGO building blocks. They are not stuck together in a rigorous or inflexible way, but rather in a flexible way... That is the reason why we say that our artifact is reusable and stable.
The element of the uncertainty [expressed by the lion]… Yes you need to speak about the uncertain. You never know what will happen -security management is also important.
The biggest challenge, according to this group, is user simplicity.
Next generation enterprise information system
We have now presented the findings from the CONFENIS workshop on the future of EIS. The workshop methodology has been presented, as have the conceptual models made by the groups. The challenges of future EIS have been presented as an aggregate conceptual model of future EIS. EIS have evolved through different generations and the challenges presented here could point towards the next generation of EIS.
It is possible to classify the three first generations of EIS in this way:
• First generation EIS can be considered to be application-centric in the sense that the applications contain the data, and the business rules are not necessarily integrated. • Second generation EIS can be considered to be data-centric and driven by the integrated databases enabled by the DBMS technology. • Third generation EIS can be considered to be process-centric and driven by the BPM architecture, supported by the integrated systems.
What characterizes the next generation EIS is of course yet to be seen, but we can speculate on the challenges based on the three previous generations of EIS. The evolution of the EIS has been driven by the business challenges and by the resulting organizational challenges. In the discussions in the groups the volatile nature of today's business environment was taken as a premise. Following this premise, the organization of enterprises does not follow the hierarchical logic of the past, but is characterized by being network oriented and spanning across organizational boundaries. But mainly the evolution has been driven by the enabling technologies. The role of technology in EIS is an interesting topic to pursue. E.g. this year there was a track at the conference focusing on the impact of cloud computing on EIS. EIS technologies are of course influenced by the general trends in information technology, but in order to become game changers in EIS other factors are required.
Based on the experience of the workshop we can conclude that the next generation EIS will be human-centric. Human-centric EIS are characterized by being: 1) ubiquitous; 2) flexible; and 3) relevant to the business. The seven challenges that were the main findings support the thinking of human-centric and cascading challenges: 1) Standards; 2) Security; 3) Transparency of control; 4) User simplicity; 5) Rights management; 6) IT and business working in a cooperative environment; and 7) Human business systems.
Conclusion
This chapter has sought to illustrate a possible future of EIS. This was addressed in a workshop at CONFENIS 2011 consisting of a large number of experts from across the world, divided into seven groups, who discussed the topic using LEGO SERIOUS PLAY to facilitate and stimulate the discussions. The seven groups came up with seven challenges for the future of EIS, which were overall conceptualized as the human-centric EIS.
There are conceptual and methodological implications from this study. First, if we consider other ways of modeling the future through scenario building approaches like the "Shell Energy Scenario 2050" [START_REF]Shell Energy Scenarios to[END_REF] or "Supply Chain 2020" [START_REF]SC2020 Baseline Scenarios[END_REF], the participants generally share a fairly optimistic view of the future business environment and a positive view of the role of technology in the future. This is perhaps not surprising since most of the researchers are working with various technologies. However this attitude could also have been produced by "group effects" in an atmosphere of enthusiasm for EIS where participants might have a tendency to express accepted views [START_REF] Bryman | Social Research Methods[END_REF]. The workshop should therefore be seen as a creative and thoughtful inspiration for a continued discussion about the future EIS. Further studies involving the EIS community (vendors, consultants, users, researchers etc.) might bring this discussion to a much more refined level.
Second, the research methods used for the workshop are an interesting topic in themselves. The methods combine LEGO SERIOUS PLAY for visualization and cognitive thinking [START_REF] Gauntlett | Creative and visual methods for exploring identities[END_REF] with video documentation [START_REF] Pink | Doing Visual Ethnography. Images, Media and Representation in Research[END_REF][START_REF] Pink | Walking with video[END_REF]. However we encountered several practical and methodological issues (e.g. noisy environment and lack of structured documentation from groups), which have hampered this study. We do nevertheless see the approach as promising for future research and practice, as a tool for multimodal imagery which brings together verbal/narrative, visual/imagistic, and kinesthetic/haptic modes [START_REF] Bürgi | Images of Strategy[END_REF] documented by video.
Despite the implications and limitations, this study has nevertheless provided some evidence of trends that may help researchers in selecting topics in the future and help practitioners make sense of the next generation of EIS.
Fig. 2. LEGO model from group 1
Fig. 3. LEGO model from group 2
Fig. 4. LEGO model from group 3
Fig. 5. LEGO model from group 4
Fig. 6. LEGO model from group 5
The biggest challenge for group 4 was rights management, which is related to security and is well expressed in the model. The conceptualization from group 4 is shown in Fig. 5. The group presents their model as follows: This is a very ecological system. The lawn is where we have all hardware in the computer department, and it is connected to the rest of the world. The connection is with the user and the ecosystem, and it is ready to be used as the user wishes. There are different things (pointing at the model) that can connect to something. And then we have two animals representing all obstacles and new challenges; the elephant is here but it might be attacked by the lion. Here is the problem solver with an intelligent problem solving system that is responsible for solving new challenges which may arise when using the system. Also, in this environment, small pieces of standardized applications can connect to the infrastructure and speak the same language. All the legacy stuff is hidden in the basement where no one can touch it. Here are two users [pink platform]; both are localized and they can work from home. They recognize their system as being very intuitive, but [they] are also able to travel around the universe using different gadgets to access the data. Vital information is hidden somewhere here (pointing...) so that security issues can be solved locally and not be revealed [brown knight], and the users will have access to their data. And what did I forget? ...Oh yes, this is the grid Internet that is very standardized, and you easily can connect and disconnect.
Group 5 emphasizes standards as their key challenge. Group 5 conceptualizes the future EIS as shown in Fig. 6. The group describes their model as follows: First of all, things must be interconnected. We can make a clear distinction between the two elements [LEGO plates]: the system itself with its base [green LEGO plate], and what the system does [grey LEGO plate] in terms of extending the capability of human beings...alone and in connection with others, so we see the system as being in the sky or the clouds. ...We have five challenges: 1) One is people versus technology, so we can make it adaptable so that people actually want to use it. 2) Then we have standards; they have flexibility and we can make them applicable to our needs in terms of their use. 3) We have security. 4) A means of safe storing of data, ensuring security and privacy, so data is not lost. 5) A mechanism to capture user requirements in a rapid way without it taking five years to develop. That is a real challenge.
Table 1. Four generations of EIS

EIS generation:            Application-centric (MRP) | Data-centric (ERP) | Process-centric (BPM) | Human-centric (?)
Business challenge:        Efficacy | Efficiency | Effectiveness | Resilience
Organizational challenge:  Support of departments | Support of enterprises | Support of supply chains | Support of business networks
Technology enablers:       Databases | DBMS, client-server architecture | Internet, SOA | Semantic networks, Social Media, Cloud computing
Integrates…:               Applications | Data | Processes | Humans
Timeline:                  Around 80'ies | Around 90'ies | Around 00'ies | Around 10'ies
"990471",
"1003475"
] | [
"19908",
"300821"
] |
01483872 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01483872/file/978-3-642-28827-2_5_Chapter.pdf | David L Olson
email: dolson3@unl.edu
Björn Johansson
email: bjorn.johansson@ics.lu.se
Rogério Atem De Carvalho
A Combined Method for Evaluating Criteria when Selecting ERP Systems
Keywords: ERP selection process, multiple criteria selection, decision criteria
There are many benefits offered by integrated enterprise computer systems. There are a growing number of options available to obtain such management information system support. A major problem when selecting Enterprise Information Systems, in special ERP systems, is how to deal with the great diversity of options as well as the number of criteria used to evaluate each alternative. There is an implicit tradeoff between cost and system functionality. Total cost of ownership (TCO) is in itself very difficult to calculate accurately, and needs to be considered in light of other criteria. Published criteria for ERP selection decisions in a variety of contexts are reviewed. We also present a method which integrates a multicriteria rating strategy based on the Simple MultiAttribute Rating Theory (SMART) with the meta-method Prepare-Identify-Rate-Compare-Select (PIRCS) framework for driving the selection process. The method is demonstrated with a general ERP selection decision, but is meant as a framework that can be applied with whatever criteria decision makers deem important in the context of their specific decision.
Introduction
Organizations can benefit a great deal from integrated enterprise systems, obtaining increased data accuracy through single-source databases, more efficient operations through business process reengineering, and reduced information technology payroll. The number of options is increasing, beyond top-of-the-line vendor systems such as SAP and Oracle, through more moderately priced vendors such as Microsoft and Lawson [START_REF] Olson | Enterprise Information Systems: Contemporary Trends and Issues[END_REF], to application service providers offering rental of enterprise computing. However, there is risk involved, especially for small businesses [START_REF] Poba-Nzaou | Adoption and Risk of ERP Systems in Manufacturing SMEs: A Positivist Case Study[END_REF], [START_REF] Kirytopoulos | Project Termination Analysis in SMEs: Making the Right Call[END_REF]. In specific countries, such as China [START_REF] Wei | An AHP-Based Approach to ERP System Selection[END_REF], Brazil [START_REF] De Carvalho | Free/Open Source Enterprise Resources Planning[END_REF], and elsewhere [START_REF] Baki | Determining the ERP Package-Selecting Criteria: The Case of Turkish Manufacturing Companies[END_REF], there are additional local forms of ERP. One option that has become viable in the past decade is the open source alternative providing free software from various business models [START_REF] Johansson | Management of Requirements in ERP Development: A Comparison Between Proprietary and Open Source ERP[END_REF]. There are many enterprise system options available. When selecting an ERP option there is a general tradeoff between functionality and cost, although total cost of ownership (TCO) is a complex matter that defies accurate calculation [START_REF] Kabassi | A Knowledge-Based Software Life-Cycle Framework for the Incorporation of Multicriteria Analysis in Intelligent User Interfaces[END_REF]. This paper reviews criteria that have been published in the literature with respect to selection of enterprise resource planning (ERP) system. It also demonstrates how the meta-method PIRCS (prepare, identify, rate, compare, and select) [START_REF] De Carvalho | Issues on Evaluating Free/Open Source ERP Systems[END_REF] can be implemented through the simple multiattribute rating theory (SMART) [START_REF] Edwards | Social Utilities[END_REF].
ERP Selection Criteria
Many papers have dealt with selection among alternative means of obtaining ERP systems. Baki and Çaki [START_REF] Baki | Determining the ERP Package-Selecting Criteria: The Case of Turkish Manufacturing Companies[END_REF] reviewed criteria considered by prior studies in manufacturing firms, and conducted a survey of 55 Turkish manufacturing companies concerning the importance of these criteria, adding references, consultancy, implementation time, and software methodology to the criteria used by the prior studies. Baki and Çaki used a 1-5 Likert scale, the mean of which is reported in the last column. A rating of 1 indicated lowest possible importance and a rating of 5 indicated highest possible importance (see Table 1). Baki and Çaki analyzed their data for differences between organizations that adopted MRP or MRP-II systems versus those who had not. They found no statistically significant difference between these two groups. Their inference was that prior exposure was not an important factor. The results show that all factors had some positive importance (as 3 would indicate neutrality), but Table 1 indicates that external fit (such as supply chain linkage) and software factors tend to be rated higher than organizational factors such as fit, service and support, and cost. Open source ERP software is attractive for all small organizations. Three studies were found giving criteria for this domain. Criteria considered varied when selecting open source ERP as can be seen in Table 2. Benroider and Koch [START_REF] Benroider | ERP Selection Process in Midsize and Large Organizations[END_REF] sampled 138 small or medium sized organizations in Austria who had selected an ERP system about the criteria they used in their decisions. Small or medium sized was defined on the basis of the number of employees, based upon European Community standards. Large vendors were considered by almost all subjects, but a bit over 47 percent considered smaller ERP vendors. SAP was selected by nearly 70 percent of the samples, and small vendors by a little over 23.3 percent. There was a bias for larger organizations to select SAP. Delphi analysis was used to identify criteria deemed important. Benroider and Koch only reported criteria that had a strong relationship to organization size (focusing on those rated very important to SMEs). SMEs emphasized software adaptability and flexibility and shorter implementation time more than large organizations. Both SMEs and large organizations rated good support and process improvement as very important.
Other studies have looked at specific ERP selection contexts. Table 3 demonstrates further diversity of criteria proposed for consideration in the specific context of outsourcing (or ASP provider selection):
Modeling ERP Selection
There are a number of selection models presented in the literature. Conjoint analysis is used in marketing to determine the relative importance of product characteristics to potential clients. Keil et al. [START_REF] Keil | Relative Importance of Evaluation Criteria for Enterprise Systems: A Conjoint Study[END_REF] applied conjoint analysis to ERP selection, using software characteristics and implementation attributes. That study received 126 completed responses of 7 software package profiles from MIS managers of large organizations (see Table 4). The study modeled manager likelihood of recommending system acquisition using multiple regression, with a model adjusted R2 of 0.506. The results indicate predominance of software factors over implementation factors. Only ease of customization was significant among the implementation factors. In this study, cost was found significant, but not as significant as software reliability and functionality. The subject firms were large. That set of ERP users can be expected to focus on getting the ERP system working. While cost is important, its unpredictability would naturally be subsidiary to the necessity of obtaining required information system support. Ease of customization was significant, which indicates consideration of long-term life cycle cost. Ease of use was significant to a lower degree, while vendor reputation and ease of implementation were not significant. These last three factors relate to the impact of the system on the organization. MIS managers in the Keil et al. study placed less emphasis on these factors.
A second type of model using criteria is for decision maker selection. Among these models are analytic hierarchy process (AHP) [START_REF] Wei | An AHP-Based Approach to ERP System Selection[END_REF], [START_REF] Saaty | A Scaling Method for Priorities in Hierarchical Structures[END_REF] and multiattribute utility theory, to include simple multiattribute rating theory (SMART) [START_REF] Edwards | Social Utilities[END_REF]. AHP has been the most widely used method in evaluating various aspects of ERP. Ahn and Choi [START_REF] Ahn | ERP system selection using a simulation-based AHP approach: A case of Korean homeshopping company[END_REF] did so in a group context in South Korea, Salmeron and Lopez to evaluate ERP maintenance [START_REF] Salmeron | A multicriteria approach for risks assessment in ERP maintenance[END_REF], Kahraman et al. [START_REF] Kahraman | Selection Among ERP Outsourcing Alternatives Using a Fuzzy Multi-Criteria Decision Making Methodology[END_REF] to consider ERP outsourcing, and Onut and Efendigil to ERP selection in Turkey [START_REF] Onut | A theoretical model design for ERP software selection process under the constraints of cost and quality: A fuzzy approach[END_REF]. The related analytic network process [START_REF] Saaty | The Analytic Network Process: Decision Making with Dependence and Feedback[END_REF] was used by Ayağ and Özdemir [START_REF] Ayağ | An intelligent approach to ERP software selection through fuzzy ANP[END_REF] and Kirytopoulos et al. [START_REF] Kirytopooulos | Project termination analysis in SMEs: Making the right call[END_REF], allowing for feedback relationships. Olson and Wu [START_REF] Olson | Multiple criteria analysis for evaluation of information system risk[END_REF] applied SMART along with data envelopment analysis to consider information system risk. One model for ERP selection used the criteria in Table 5 to compare alternative ERP vendors. That study [START_REF] Wei | An AHP-Based Approach to ERP System Selection[END_REF] provided a thorough analysis of criteria starting with fundamental objectives for both system software factors and vendor factors, adding evaluation items at a third level, and identifying constraints reflecting means. The methodology was presented in a group decision making context. The hierarchy consisted of: factors, attributes, evaluation items and means. Another AHP model was applied to selecting an ERP system specific to clothing industry suppliers [START_REF] Ünal | Selection of ERP Suppliers Using AHP Tools in the Clothing Industry[END_REF]. Criteria were selected based upon discussion with three such suppliers, as well as literature reviews. Criteria were: Cost Functionality Implementation approach Support Organizational credibility Experience Flexibility Customer focus Future strategy.
Cost benefit analysis was conducted for the first criterion, while AHP was used to generate a synthesis value for the other eight criteria. The ratio of synthesis value to normalized costs was used to rank alternatives. ANP was applied to benchmarking and selecting ERP systems [START_REF] Perçin | Using the ANP Approach in Selecting and Benchmarking ERP Systems[END_REF], applying the approach to an actual selection decision (see Table 6). The PIRCS framework was proposed in [START_REF] De Carvalho | Issues on Evaluating Free/Open Source ERP Systems[END_REF]. PIRCS can be understood as a meta-method, given that it is composed of a series of procedures that should be adapted for specific purposes, according to the adopter's software evaluation culture and specific needs.
PIRCS is completely compatible with the simple multiattribute rating theory (SMART) model [START_REF] Edwards | Social Utilities[END_REF]. Olson [START_REF] Olson | Evaluation of ERP outsourcing[END_REF] presented a SMART analysis of an ERP selection decision considering the seven criteria found in Table 3, taken from [START_REF] Ekanayaka | Evaluating Application Service Providers[END_REF]. It is clear that there are many ways to approach incorporation of multiple criteria in ERP selection models. We have tried to demonstrate the importance of context with respect to the selection of these criteria. In the next section we present a model that demonstrates the importance of context, such as the size of the organization.
Demonstration Model for Small Business ERP Selection
It is possible to include many criteria, but it has been argued that a limited number of independent and equally scaled criteria will include the bulk of the relative importance [START_REF] Olson | Decision Aids for Selection Problems[END_REF].
The PIRCS framework for evaluation of ERP alternatives consists of the following steps:
• Prepare: define requirements, establish positioning strategy, identify attributes and constraints on the decision, and measures of attributes to be considered.
• Identify: use searches to identify alternative ERP options and their characteristics.
• Rate: establish the utility (value) of each attribute on each alternative.
• Compare: apply multicriteria methods, such as AHP or SMART.
• Select: consider the comparison analysis from the prior step and make the decision.

The focus of this paper is to demonstrate the use of SMART as a means to implement the Compare step.
We assume the context of a small business considering the criteria and options given in Table 8. The criteria would be identified in the Prepare step, and the options in the Identify step. The rating entries would be established in the third step, Rate. Criteria used here include cost, time, and robustness [START_REF] De Carvalho | Issues on Evaluating Free/Open Source ERP Systems[END_REF], as well as the most significant criteria identified for SMEs [START_REF] Benroider | ERP Selection Process in Midsize and Large Organizations[END_REF]. Ratings are scaled on a 0-1 range, with 0 indicating worst possible performance, and 1 indicating best possible performance. The assignment of these values should be done in the context of the organization, reflecting values of organizational decision makers. Table 8 shows the value matrix from the PIRCS Rate step, which is input to the SMART analysis. The entries in Table 8 are of course demonstrative. Cost values reflect best estimates of total life cycle costs. While OSS without support would be free with respect to software acquisition, there would be costs of implementing as well as training users. The Select step should be done judgmentally, by the organization's decision maker.
The SMART analysis should be viewed in terms of decision support (not letting the model make the final decision). However, the PIRCS framework and SMART analysis will provide decision makers with a systematic means to consider important factors and provide greater confidence in the decision. Here the Select output would multiply the ratings in Table 8 by the weights in Table 9, yielding the relative scores shown in Table 10. The implication is that the relatively moderate ratings over all attributes for the OSS with support fees option led to total value greater than that of the large vendor (which did very well on robustness and support, but very poorly on the other three criteria). The mid-size vendor was moderate on all criteria, but turned out to be dominated by the OSS with support fees option in the assumed context.
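As a minimal sketch of how the Rate and Compare steps could be automated (this is not code from the original study; it simply re-implements the arithmetic described above, and it assumes the Table 8 rating columns are ordered cost, time, flexibility, support, robustness), the swing weights of Table 9 and the weighted-sum scores of Table 10 can be reproduced as follows. The computed scores match the published ones up to small rounding differences; the ASP alternative is omitted because its ratings are not listed in Table 8.

```python
# Minimal SMART scoring sketch for the PIRCS Compare step (demonstrative values from Tables 8-9).

# Raw swing weights: the most important criterion gets 100, the others are judged relative to it (Table 9).
raw_swing = {"time": 100, "robustness": 80, "support": 70, "cost": 40, "flexibility": 30}

# Standardize by dividing each raw weight by the sum of all raw weights.
total = sum(raw_swing.values())
weights = {crit: w / total for crit, w in raw_swing.items()}

# Value matrix (Table 8): ratings on a 0-1 scale, where 1 is best possible performance.
criteria = ["cost", "time", "flexibility", "support", "robustness"]
ratings = {
    "Large vendor":          [0.2, 0.3, 0.1, 1.0, 1.0],
    "Customize vendor":      [0.0, 0.0, 0.8, 0.7, 0.5],
    "Mid-size vendor":       [0.4, 0.6, 0.5, 0.5, 0.6],
    "OSS with support fees": [0.7, 0.9, 0.6, 0.8, 0.7],
    "OSS without support":   [0.6, 0.6, 0.5, 0.4, 0.0],
}

# Weighted-sum value for each alternative (Table 10), printed best first.
scores = {
    name: sum(weights[c] * r for c, r in zip(criteria, vals))
    for name, vals in ratings.items()
}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

Running the sketch ranks the OSS with support fees option first, followed by the large vendor and the mid-size vendor, mirroring the ordering reported in Table 10.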
Conclusions and Future Research
There are many criteria that can be important in the selection of ERP systems. We have tried to show that the context in which such decisions are made is important. While there have been many studies of this matter, there is not universal agreement by any means. Each individual organization should be expected to find various criteria critical, while other criteria may be more important for other organizations. The ERP environment is also highly dynamic. In the 1990s, ERP was usually only feasible for large organizations. That is changing.
A business case for evaluation of software systems of any type is challenging. Cost estimates involve high levels of uncertainty, and benefits are usually in the realm of pure guesswork. A sound analytic approach is called for, especially given the large price tags usually present in ERP systems. There is a need for a method that can consider expected monetary impact along with other factors, to include risk elements such as project time and system robustness, as well as relatively subjective elements of value such as flexibility and availability of support. The PIRCS process and SMART multiattribute analysis offer a means to systematically evaluate ERP software proposals.
Multiattribute analysis has studied decision making under tradeoffs for a long time. It is quite robust, and can support consideration of a varying number of criteria. It usually is the case that for a specific decision, a relatively small number of criteria matter. If nothing else, there is the simple fact that if there are seven other, more important criteria, the highest relative importance an eighth criterion could have is 0.125, with a high likelihood of a much lower weight [START_REF] Miller | The Magical Number Seven Plus or Minus Two: Some Limits on Our Capacity for Processing Information[END_REF]. This paper had the purpose of describing how the PIRCS framework could support the critical process of ERP software alternative evaluation, along with multiattribute analysis to consider the inevitable trade-offs that are encountered in such decisions. The demonstration of the combination of the PIRCS framework and the SMART analysis shows that the huge diversity of different options on ERP systems can be managed. The combined framework has a huge potential when it comes to dealing with context-related factors when making a selection on what option to implement. It also concisely shows tradeoffs among criteria being considered. However, the framework depends on relevant and correct data being available for every option. This is especially true with respect to life-cycle cost, which is very difficult to predict for most organizations, which hopefully do not have to repeat ERP selection decisions very often.
Future research is important in understanding what criteria are important in particular contexts. For instance, free open source ERP systems are emerging, broadening the market for enterprise system support.
Table 1. Comparative Criteria in ERP Selection in Manufacturing Firms
Criteria [11] [12] [13] [14] [15] Mean
Fit with allied organizations * 4.79
Cross module integration * * 4.72
Compatibility with other systems * 4.28
References 4.24
Vision * * 4.22
Functionality * * * * 4.15
System reliability * 4.08
Consultancy 4.06
Technical aspects * * * * 4.01
Implementation time 3.94
Vendor market position * * 3.87
Ease of customization * * 3.84
Software methodology 3.83
Fit with organization * 3.83
Service & support * * * 3.77
Cost * * * * * 3.65
Vendor domain knowledge * 3.46
Table 2. Open Source ERP Software Selection Criteria
Criteria [16] [17] [18]
Technology Technical Complexity of technology Database migration
requirements Ease of database administration
BPR Business drivers Ease of business logic Synchronizing modules
implementation to workflow
User interface Ease of presentation layer User friendly interfaces
implementation
Administration Ease of administration Integration with 3 rd
party software
Cost Cost drivers
Others Flexibility Ease of service exposure User support
Scalability Resource utilization
Business specific
Table 3. ERP Selection Criteria for Outsourcing ERP
Study Context Criteria
[20] Application service providers Customer service
Reliability, availability, scalability
Integration
Total cost
Security
Service level
[21] Outsourcing Market leadership
Functionality
Quality
Price
Implementation speed
Link with other systems
International orientation
Table 4. Results of Keil et al.'s Conjoint Analysis
Attribute Effect t-value P<0.01 P<0.001
Software Reliability 0.464 20.34 Yes Yes
Software Functionality 0.457 20.03 Yes Yes
Software Cost -0.253 -11.08 Yes Yes
Implementation Ease of Customization 0.129 5.67 Yes Yes
Software Ease of Use 0.073 3.19 Yes No
Implementation Vendor Reputation 0.007 0.29 No No
Implementation Ease 0.000 0.01 No No
Table 5. Value Analysis Hierarchy [START_REF] Wei | An AHP-Based Approach to ERP System Selection[END_REF]
Attributes Evaluation items Means
System Total costs Price Project budget
software Maintenance Annual maintenance budget
Consultant expenses Infrastructure budget
Infrastructure costs
Implementation Duration
time Project management
Functionality Module completion Necessary module availability
Function fitness Currency, language, site issues
Security Permission management
Database protection
User Ease of operation Guidebook
friendliness Ease of learning Online learning, help
Flexibility Upgrade ability Common programming language
Ease of integration Platform independence
Ease of in-house development Ease of integration
Reliability Stability Automatic data recovery
Recovery ability Automatic data backup
Vendor factors Reputation Scale of vendor Financial stability
Financial condition Provision of reference sites
Market share
Technical R&D ability Upgrade service
capability Technical support Diverse product line
Implementation Implementation experience
Adequate number of engineers
Cooperation with partners
Domain knowledge
Table 6. Vendor Selection Criteria [START_REF] Perçin | Using the ANP Approach in Selecting and Benchmarking ERP Systems[END_REF]
System Factors Vendor Factors
Functionality Market share
Strategic fitness Financial capability
Flexibility Implementation ability
User friendliness R&D capability
Implementation time Service support
Total costs
Reliability

Kahraman et al. [START_REF] Kahraman | Selection Among ERP Outsourcing Alternatives Using a Fuzzy Multi-Criteria Decision Making Methodology[END_REF] applied fuzzy modeling to a form of AHP for evaluation of selecting an outsourced ERP alternative. Table 7 shows the detailed criteria used by Kahraman et al. through two levels.

Table 7. AHP Hierarchy [START_REF] Kahraman | Selection Among ERP Outsourcing Alternatives Using a Fuzzy Multi-Criteria Decision Making Methodology[END_REF]
Top Level Criteria Second Level Criteria
Market Leadership Relevant technology
Innovative business process
Competitive position
Functionality Consumer preference
Functional capability
Compatibility with third party
Quality Reliability
Security
Information Quality
Configuration
Price Service cost
Operating cost
Set-up cost
Implementation speed Performance
Usability
Training
Interface with other systems Data share
Compatibility with the system
Multi-level user
Flexibility
International orientation National CRM
Web applications
Table 8. Value Matrix

The fourth step is to Compare. Using the SMART approach, this includes identification of relative weights of importance (scale has been removed by identifying value ratings on 0-1 scales for all attributes). Use of swing weighting would begin by ordering criteria by importance, then assigning the most important criterion a value of 100. The other criteria are assessed in turn on the basis of: if the most important criterion was swung from its worst possible state to its best possible state, how relatively important the next criterion would be when swung from its worst possible state to its best possible state. Standardized weights are generated by dividing each assessed relative weighting by the sum of these relative weightings. Our demonstrative weights are presented in Table 9.

Cost Time Flexibility Support Robustness
Large vendor 0.2 0.3 0.1 1.0 1.0
Customize vendor 0.0 0.0 0.8 0.7 0.5
Mid-size vendor 0.4 0.6 0.5 0.5 0.6
OSS with support fees 0.7 0.9 0.6 0.8 0.7
OSS without support 0.6 0.6 0.5 0.4 0.0
Table 9. Swing Weighting
Criteria by order Relative weighting Standardized weighting (/320)
Time 100 0.312
Robustness 80 0.250
Support 70 0.219
Cost 40 0.125
Flexibility 30 0.094
SUM 320 1.000
Table 10. Alternative Relative Scores
Alternative Score
OSS with support fees 0.778
Large vendor 0.597
Mid-size vendor 0.541
ASP 0.446
OSS without support 0.409
Customize vendor 0.360 | 24,682 | [
"1003476",
"1001319",
"1003477"
] | [
"312624",
"344927",
"487704"
] |
01483873 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01483873/file/978-3-642-28827-2_6_Chapter.pdf | Björn Johansson
email: bjorn.johansson@ics.lu.se
Vadim Koroliov
email: v.koroliov@gmail.com
Deployment of Open Source ERPs: What knowledge does it require?
Keywords: Deployment experiment, Enterprise resource planning systems, ERPs, Open source, Small organizations, SMEs
Enterprise resource planning (ERP) systems are rapidly becoming a de facto standard in business activity. While large and medium-sized companies can afford proprietary ERP solutions, small companies struggle with resource poverty, which may make them consider available open source ERP products that are free from licensing fees. However, there is little knowledge available on open source ERP adoption in small companies. In order to shed some light on the first phase of ERP adoption, an experiment on open source ERP deployment was conducted. The experiment aimed at investigating what knowledge is required to successfully deploy open source ERP systems. The experiment was based on a research framework, the Technology Acceptance Model 2 (TAM2), and considered usability testing and user training and education factors. The factors of Perceived Ease of Use (PEOU) and Perceived Usefulness (PU) were used to determine the ease of the deployment process and the usefulness of the deployed open source ERP in relation to the effort made. The findings suggest that users with advanced computer skills perceive the open source ERP deployment process as easy, and the deployed open source ERP was seen as being useful to the organization's business activities.
Introduction
In organizations, there is a growing need for managing information to become competitive and sustain such advantage over a longer period of time. Therefore, many organizations have implemented extensive enterprise information systems, such as enterprise resource planning (ERP) systems, to obtain better control over information and thereby acquire advantages over their competitors [START_REF] Verville | ERP Acquisition Planning: A Critical Dimension for Making the Right Choice[END_REF] by, for instance, getting better and faster access to stored information. However, deployment can be problematic for organizations since it can take a long time and cost a lot of money; and there is also a high risk that it may fail [START_REF] Verville | ERP Acquisition Planning: A Critical Dimension for Making the Right Choice[END_REF][START_REF] Daneva | Understanding Success and Failure Profiles of ERP Requirements Engineering: an Empirical Study[END_REF][START_REF] Standish | [END_REF]. Despite that, there is an increased interest in ERPs among organizations, which has also created an interest in developing simplified versions of ERPs. These simplified versions are developed both by proprietary ERP vendors, such as Microsoft Dynamics or SAP, as well as by new vendor organizations in the open source (OS) area. The open source ERP projects could be seen as an alternative to the traditional proprietary ERP systems available today. In general it can be suggested that OS has grown large and still continues to grow strongly as more and more organizations become interested in how they can benefit from OS in their organization [START_REF] Rapp | Ökat intresse för Open Source[END_REF]. Therefore, it could be suggested that organizations, in their search for having an ERP system implemented in their organization, could consider an open source ERP system. Consequently, an interesting question is whether this is doable or not. From an earlier investigation done by Johansson and Sudzina [START_REF] Johansson | ERP systems and open source: an initial review and some implications for SMEs[END_REF], we know that there are a lot of open source ERPs available for download from an online service such as SourceForge. On the other hand, what we do not know is whether this is done, or whether it is feasible for an organization to do so. This leads to the question discussed in this paper, which is: What knowledge is required to successfully deploy open source ERP systems?
To be able to say something about this, Section 2 defines open source ERP and open source ERP deployment. Section 3 presents the research conducted, which could be briefly described as a controlled experiment of downloading and making a first usage test of an open source ERP system. The experiment aims at describing knowledge needed to successfully deploy open source ERP systems. Section 4 analyzes the results from the experiment and discusses the knowledge required to successfully deploy open source ERP systems. In the final section some conclusions are provided, and some future research questions in the context of open source ERP deployment are suggested.
ERPs, Open Source ERPs and Open Source ERP Deployment
Enterprise Resource Planning systems (ERPs) are major investments for organizations, and according to Morabito et al., [START_REF] Morabito | ERP Marketing and Italian SMEs[END_REF] have high attention among practitioners, academia, and media. They state that ERP research primarily focuses on two aspects: 1) organizational and economic impact of ERP implementation, and 2) how to best manage implementation. There are a number of key characteristics that more or less all ERP system share making them a unique subtype of information systems. Firstly, ERP is defined as a standardized packaged software [START_REF] Xu | Concepts of product software[END_REF] designed to integrate entire organization [START_REF] Lengnick-Hall | The role of social and intellectual capital in achieving competitive advantage through enterprise resource planning (ERP) systems[END_REF][START_REF] Rolland | Bridging the Gap Between Organisational Needs and ERP Functionality[END_REF][START_REF] Kumar | ERP experiences and evolution[END_REF], its business processes and ICT into a synchronized suite of procedures, applications and metrics which transcend organizational boundaries [START_REF] Wier | Enterprise resource planning systems and non-financial performance incentives: The joint impact on corporate performance[END_REF] and that can be bought (or rented) from an external provider and adapted to firm's specific requirements.
The fact that ERPs are assumed to integrate the organization (both interorganizationally as well as intra-organizationally) and its business process into one package, feeds the complexity of ERPs when it comes to development and implementation as well as usage [START_REF] Koch | ERP-systemer: erfaringer, ressourcer, forandringer[END_REF]. Millman [START_REF] Millman | What did you get from ERP, and what can you get[END_REF] posits that ERPs are the most expensive but least-value-derived implementation of information and communication technology (ICT) support. The reason for this, according to Millman, is that a lot of ERPs functionality is either not used or is implemented in a wrong way. In addition, wrong implementation results from ERPs being customized to fit the business processes, instead of changing the processes to fit the ERP [START_REF] Millman | What did you get from ERP, and what can you get[END_REF], described by Hammer and Champy [START_REF] Hammer | Reengineering the corporation: a manifesto for business revolution[END_REF] as "paving the cow path".
Several studies on inspiring success [START_REF] Davenport | Mission Critical: Realizing the Promise of Enterprise Systems[END_REF], but also failures [START_REF] Larsen | When Success Turns into Failure: A Package-Driven Business Process Re-engineering project in the Financial Services Industry[END_REF][START_REF] Scott | Implementing Enterprise Resource Planning Systems: The Role of Learning from Failure[END_REF], associated with implementation and utilization of ERPs [START_REF] Robey | Learning to Implement Enterprise Systems: An Exploratory Study of the Dialectics of Change[END_REF] exist. Benefits are only related in part to the technology, and most come from organizational changes such as new business processes, organizational structure, work procedures, integration of administrative and operative activities, and global standardization of work practices leading to organizational improvements, supported by the technology [START_REF] Hedman | ERP systems impact on organizations[END_REF]. It can definitely be said that implementation of ERP systems is a difficult and costly organizational "experiment" [START_REF] Robey | Learning to Implement Enterprise Systems: An Exploratory Study of the Dialectics of Change[END_REF], and implementation of ERP systems can be described [START_REF] Davenport | Holistic management of mega-package change: The case of SAP[END_REF] as "perhaps the world's largest experiment in business change" and for most organizations "the largest change project in cost and time that they have undertaken in their history". The implementation is a necessary but insufficient prerequisite for benefits and value, at least for having competitive parity [START_REF] Johansson | Competitive advantage in the ERP system's value-chain and its influence on future development[END_REF].
According to Wieder et al. [START_REF] Wieder | The impact of ERP systems on firm and business process performance[END_REF] there is no significant performance difference between ERP adopters and non-adopters either on process or overall firm levels. In conclusion, it can be claimed that there are different opinions on benefits and advantages of ERP systems adoption.
Nevertheless, the market tendency shows that ERP adoption is growing and will grow among organizations worldwide. Jacobson et al. [START_REF] Jacobson | The ERP Market Sizing Report, 2006-2011[END_REF] note -in their report on ERP market sizing -that ERP investments among large corporations as well as small and medium-sized enterprises (SMEs) are continuously increasing. ERP systems have in practice become an industry standard [START_REF] Parr | A Taxonomy of ERP Implementation Approaches[END_REF] and, as argued by Shehab et al. [25], are considered to be the price of entry for running a business.
From this it can be said that the ERP products market has grown. Up to recent years, the ERP systems offer has been primarily characterized by proprietary software products. The largest ERP vendors at present are SAP, Oracle, Infor and Microsoft. At the same time, the industry has witnessed the proliferation of open source ERP packages as well. The open source ERP packages, namely the community-based versions, provide a free alternative to commercial ERP packages. Some notable examples are OpenBravo, Compiere, TinyERP, OFBiz, Adempiere, xTuple/PostBook and others, and the majority of them target SMEs.
One would wonder why they are free. Usually, the companies who stand behind these products earn money in ways other than traditional licensing. Along with the community version, which is completely free, there are commercial versions with better support, updates and upgrades.
In brief, open source ERP vendors offer a couple of reasons why to choose open source ERP. Firstly, there are no upfront licensing fees. Anyone can download the software from the vendor's website and try the product for free, and thus assess whether or not it suits the company's needs. Secondly, open source offers free control of software customization with the support of contributing communities and organizations. There is also professional-quality support available from companies working with that specific ERP. And one of the last arguments is that open source reduces the specification risk characteristic of custom-built software, as well as the risk of losing the vendor (Opentaps.org, 2011). No less important is the promise of being "up and running with a full system in 10 minutes" (xTuple.com, 2011) -this, in particular, makes it very interesting to challenge.
Fougatsaro [START_REF] Fougatsaro | A study of open source ERP systems[END_REF] presents the following seven reasons why organizations should choose open source ERP:
• Flexibility - the available source code makes it easier to customize and integrate the open source ERP with existing systems.
• Quality - as a result of the commitment of vendors and communities to development efforts.
• Ability to adapt to the business environment.
• No hidden costs - as opposed to proprietary ERP systems, where costs for changes and scalability issues might be imminent.
• Ability of specific developments - freedom of customizing and development.
• Free of vendor dependence - the support is provided both by the community and by commercial vendors.
• Freedom to upgrade or not.
Fougatsaro [START_REF] Fougatsaro | A study of open source ERP systems[END_REF] claims that despite all these benefits, open source software also has disadvantages. The comparison of proprietary and open-source (community/free) ERP advantages and disadvantages, as well as a total cost of ownership analysis have been researched to a limited extent. As stated by Carvalho [START_REF] Carvalho | Issues on evaluating Free/open source ERP systems[END_REF] the full picture of open source ERP's issues and benefits still has to be covered. Therefore, it might be difficult to evaluate which type of ERP brings the most value. However, this of course impacts the decision when adopting or not, but this is outside the scope of this paper that focuses on open source ERP deployment and knowledge required for doing that.
Open Source ERP Deployment in SMEs
In recent years, ERP systems have become very attractive to SMEs, i.e. the SMEs' interest towards ERP system has increased. And this happened due to a number of reasons. Firstly, ERP vendors have shifted their development efforts focus from mainly large customers -today a saturated market, to small and medium sized companies -a promising market both in cash and in customers. Consequently, the range of ERP packages offer has considerably increased, adjusting to the needs and pockets of various companies [START_REF] Bajaj | ERP for SMEs[END_REF]. Secondly, it is the highly dynamic business environment which requires ERP adoption in order to gain competitive advantage over rivals [START_REF] Bajaj | ERP for SMEs[END_REF]. In the same context, Jacobson et al., [START_REF] Jacobson | The ERP Market Sizing Report, 2006-2011[END_REF] mention that ERP adoption among SMEs comes as a response to new customer requirements, as well as to the wish to participate in a highly global market. And finally, ERP systems offer unprecedented advantages over any other traditional models of managing businesses [START_REF] Sammon | Justifying an ERP investment with the promise of realising business benefits[END_REF].
However, before any further statements are made about SMEs and ERPs, it is important to define SMEs. In this context SMEs are defined as follows: companies with 10 to 49 employees are considered to be small, companies with 50 to 249 employees are considered to be midsized, and companies with 250+ employees are considered to be large companies. This definition is consistent with how the European Commission [START_REF]SME Definition[END_REF] defines SMEs. However, this paper regards organizations with less than 10 employees (micro) as small, the reason being that constraints and objectives of ERP systems in this group of organizations could be considered the same [START_REF] Laukkanen | Enterprise size matters: objectives and constraints of ERP adoption[END_REF]. However, as Laukkanen et al. state, there are significant differences between small and medium-sized companies and therefore they should not be considered as a homogeneous category. The same is claimed by Carvalho and Johansson [START_REF] Carvalho | ERP Licensing Perspectives on Adoption of ERPs in Small and Medium-sized Enterprises[END_REF].
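As a small, purely illustrative sketch (not part of the cited definition; the function name is hypothetical), the size categorization applied in this paper could be expressed as follows, with micro-enterprises folded into the small category as explained above:

```python
def size_category(employees: int) -> str:
    """Headcount-based size classes; micro-enterprises (fewer than 10 employees)
    are treated as small, following the choice made in this paper."""
    if employees < 50:       # micro and small
        return "small"
    if employees < 250:      # medium-sized
        return "medium"
    return "large"

# Example: a 9-person firm and a 40-person firm are both treated as small here.
print(size_category(9), size_category(40), size_category(120), size_category(400))
```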
In terms of general constraints, small companies, in comparison to medium-sized, have a lower user IT competence and insufficient information, but are less sensitive to changes enforced by ERP implementation. In terms of ERP objectives, medium-sized companies feel eager to develop new strategic ways of doing business, and are more interested in expanding its business activity [START_REF] Laukkanen | Enterprise size matters: objectives and constraints of ERP adoption[END_REF].
It can also be said that small companies, despite having a growing interest in ERP, have a low actual rate of ERP adoption. Laukkanen et al. [START_REF] Laukkanen | Enterprise size matters: objectives and constraints of ERP adoption[END_REF] point out that resource poverty is one of the main constraints of ERP system adoption, i.e. they cannot afford one. Small companies, in comparison to mid-size and large enterprises, have limited or scarce resources. Then the logical question arises: so why not choose the free ERP?
As is somewhat apparent, the financial factor can be crucial and decisive for small companies. However, Johansson and Sudzina [START_REF] Johansson | ERP systems and open source: an initial review and some implications for SMEs[END_REF] mention that cost is not the only factor affecting the selection and adoption processes of an open source ERP. Laukkanen et al. [START_REF] Laukkanen | Enterprise size matters: objectives and constraints of ERP adoption[END_REF] suggest that knowledge is another determining barrier in the way of ERP adoption in small companies specifically, whether it is IT competency, sufficient information for decision-making in ERP selection, or system usage.
It is crucial to understand that knowledge requirements change along the various stages of the ERP life cycle and embrace a large set of skills, experiences, abilities and perspectives [START_REF] Suraweera | Dynamics of Knowledge Leverage in ERP Implementation[END_REF]. In other words, the type of knowledge required in the implementation phase, for example, would be different from the knowledge needed in the system use phase. However, there is no scientific support whatsoever regarding the knowledge requirements for open source ERP deployment.
Given that a small company intends to adopt an open source ERP for free, it would be both interesting and challenging to find out how much knowledge is needed or how easy it is to deploy -choose, implement and use -an open source ERP, and when deployed if it meets the expected outcomes/benefits.
The Open Source ERP Deployment Experiment
The objective of the open source ERP deployment experiment was twofold. On the one hand, it was to find out the required knowledge in the open source ERP adoption process in small companies, namely the actual prerequisites, needs and issues with an open source ERP in the early stage of adoption, i.e. its deployment. The prerequisites, needs and issues are to cover the time and knowledge aspects. On the other hand, it was also of interest whether the open source ERP is found useful in relation to the effort required to deploy it.
The data collected was based on the TAM2 model developed by Venkatesh and Davis [START_REF] Venkatesh | A theoretical extension of the technology acceptance model: Four longitudinal Field Studies[END_REF]. In other words, the TAM2 model was used as a lens for assessing the open source ERP deployment process (Perceived Ease of Use) and its primary impact on business and its users (Perceived Usefulness). The experiment focused on three questions:
-How much effort -time and knowledge -does it take to deploy an open source ERP?
-What is the perceived ease of use in the deployment stage?
-What is the perceived usefulness of open source ERP in the deployment stage?
The Foundation of the Experiment
In the realities of small companies, where resources and knowledge are limited, open source ERPs seem to be a viable solution. First of all, the product is free regarding license cost, so initial costs could be said to be low. However, the problem of knowledge still remains, because it is still unclear how much knowledge is required to adopt and use an open source ERP. Therefore, it would be interesting to find out how much effort is required to deploy and start using open source ERP systems. In order to address this issue, the Technology Acceptance Model 2 was used.
Technology Acceptance Model 2 (TAM2)
TAM2 comes as an extension to the initial Technology Acceptance Model (TAM) developed by Davis [START_REF] Davis | Perceived usefulness, perceived ease of use, and user acceptance of information technology[END_REF]. Both versions of the model address the subject of system usage. The model helps to understand and evaluate the reasons and factors which affect the use and adoption of new or existing systems. Also, the TAM model is viewed as a tool to measure users' will and intention to adopt a system. In this research paper, TAM2 will also serve as a measuring tool to assess the ease and usefulness of an open source ERP.
The initial model was limited to two factors, which in opinion of its author, affect the intention and use of a system: perceived ease of use (PEOU) and perceived usefulness (PU). Despite its popularity and use, Venkatesh and Davis [START_REF] Venkatesh | A theoretical extension of the technology acceptance model: Four longitudinal Field Studies[END_REF] suggested an extension. The extended features take into consideration the external factors which affect perceived usefulness (see Figure 1). Venkatesh and Davis [START_REF] Venkatesh | A theoretical extension of the technology acceptance model: Four longitudinal Field Studies[END_REF] explain that external factors have social and cognitive character.
Figure 1. The TAM2 model and the scope of this research [START_REF] Venkatesh | A theoretical extension of the technology acceptance model: Four longitudinal Field Studies[END_REF]
Perceived usefulness is the extent to which the user thinks the system helps her or him to perform the tasks, i.e. job activities. Perceived ease of use measures the effort required to use a system, i.e. how easy it is to perform the job tasks. PEOU affects PU and the Intention to Use. The user's intention to use in turn affects the actual usage of the system, i.e. determines the Usage Behavior.
The external factors are explained by Venkatesh and Davis [START_REF] Venkatesh | A theoretical extension of the technology acceptance model: Four longitudinal Field Studies[END_REF] as follows: Subjective norm is a "person's perception that most people who are important to him think he should or should not perform the behavior in question". The key idea behind it is that persons are susceptible to other people's ideas and views. That is why some persons perform actions motivated by other people.
Voluntariness is "the extent to which potential adopters perceive the adoption decision to be non-mandatory" [START_REF] Venkatesh | A theoretical extension of the technology acceptance model: Four longitudinal Field Studies[END_REF]. In other words, voluntariness determines whether system use is perceived as obligatory or as a free choice.
Image has a positive effect on perceived usefulness if the use of the system enhances the user's social status or image; otherwise it has a negative effect on perceived usefulness. Experience might also have a positive or negative effect on PU. It is considered that a system is most likely to be perceived as useful if the user is experienced.
Job relevance, as defined by Venkatesh and Davis [START_REF] Venkatesh | A theoretical extension of the technology acceptance model: Four longitudinal Field Studies[END_REF], is "an individual's perception regarding the degree to which the target system is applicable to his or her job". In other words, job relevance measures whether a system use is important to daily job tasks.
Output Quality measures the quality level of the tasks performed by the system. In other words, if the task performed by the system is done well, then the perceived usefulness of the system is rated higher.
Finally, Result Demonstrability measures the tangibility of the results given by a system. Venkatesh and Davis [START_REF] Venkatesh | A theoretical extension of the technology acceptance model: Four longitudinal Field Studies[END_REF] explain that "even effective systems can fail to garner user acceptance if people have difficulty attributing gains in their job performance, specifically to their use of the system". That is why it is important for users to see direct results of their system use in order for the system to be perceived as useful.
This research paper's scope is limited to the deployment of a system, i.e. the installation and first use of an open source ERP system. That is why the use of the TAM2 model would be partial (the dotted line in Figure 1). And namely, the interest lies on perceived ease of use and perceived usefulness of the open source ERP in its installation and first use.
Consequently, the paper will focus on defining the effort needed for and usefulness of deploying an open source ERP in the realities of a small company, in terms of time and knowledge.
The Pre-Study of Open Source ERP Deployment Experiment
The pre-study started with finding information about available open source ERPs on the Web. Following the suggestion of Johansson and Sudzina [START_REF] Johansson | ERP systems and open source: an initial review and some implications for SMEs[END_REF], the first source to look for open source software was www.sourceforge.org, the largest portal promoting open source software. Using this portal's search function, the keywords "open source ERP" were queried. Over 13,000 hits came up, and obviously not all of these were relevant.
According to sourceforge.org, the two most popular and downloaded open source CRM & ERP solutions were OpenBravo and PostBooks. So the next logical step was to check out the website of respective projects, and the information provided.
Both websites made a professional impression. OpenBravo impressed by its number of customers, and both open source ERPs have a large community of users. With no previous experience related to ERP and its activity, the deciding criteria for choosing one were the technical criteria and requirements. It was relatively easy to find the download page and proceed to downloading. Both software products offered solutions for cloud, Linux and Windows platforms; PostBooks/xTuple additionally offered one for Apple/Mac OS users.
Having consulted the available documentation, the first difficulties appeared. OpenBravo promised a very simple, one-click installation for Linux users, but more complicated installation paths for other platforms, i.e. advanced computer skills are needed. The xTuple software promised easy installation on all platforms. Having only Linux and Windows operating systems available, it was decided to try out OpenBravo on Linux and xTuple on Windows XP. Indeed, the installation process was very easy for OpenBravo, but with no transparency whatsoever; only advanced computer users would be able to follow the installation procedure. After installing OpenBravo on Linux/Ubuntu, the software ran in the web browser successfully. Regarding xTuple, the installation procedure was just as easy, and the level of transparency and clarity of the procedure was quite high. The user is guided by explanatory instructions with options for choosing the elements to be installed; by default, all elements are installed. The software also ran with no errors on Windows XP. The approximate time for both installations was ten to fifteen minutes, with high-speed internet available. At this point, it is important to mention that from the technical standpoint the requirements are not high. The computer used had 1 GB RAM, 10 GB of free space on the hard drive, and Windows XP. The basic installation requirements are mentioned on the vendors' websites.
It was decided to proceed with xTuple. The argument behind this was the availability and spread of Windows operating systems, so that theoretically most small companies would have a computer running Windows. The next step was to get acquainted with the xTuple open source ERP.
The log-in procedure was very easy. However, potential users have to pay attention to details while installing, because important information is given, such as credentials, which would be of use later on.
After log-in, the first thing was to get accustomed to the menus available in the software. xTuple offers a very pleasant user interface, with large buttons and well organized modules, such as sales, inventory and others. For screenshots please visit www.xtuple.com.
After getting to know the software, the decision was to consult the available documentation and tutorials on how to work with the ERP. There are many videos available; some of the videos are introductory, and some of them give detailed instructions on the internals and functions of the system. From the video tutorials, it was found that the following steps were required in order to get a valid invoice (the subsequent experiment would ask students to download and install the software and print an invoice with data from a business case):
1. Register a new user for the company
   a. Create a separate account
   b. Enter details such as address, company info, logo, etc.
2. Register the new customer with the corresponding details
3. Create a new product/item
4. Configure the taxation settings
5. Create a new sales order containing the product created
6. Ship and print out the invoice.
These steps were enough to print an invoice valid by Swedish standards, i.e. containing the required information. From the knowledge gained so far, the actual deployment experiment took place.
The Study of Open Source ERP Deployment
Two methods were used for data collection. The first was a structured questionnaire that gathered data on the students' general profile as well as their computer experience and knowledge. The purpose of the questionnaire was to establish whether the experiment participants had any previous experience resembling that required to accomplish the task, i.e. installing and using software.
The other data collection method was semi-structured interviews, giving the participants the freedom to express their issues and thoughts regarding the ease of deploying an open source ERP. The interviews were audio recorded and transcribed immediately after completion.
The experiment involved three students, who were asked to download an open source ERP system and then proceed with installing it. In the next phase, the students were asked to print out an invoice. In order to simulate a real business situation, a business case with enough data to print the first customer invoice was provided to them. For the case to resemble a real business situation, the data provided were sufficient to make an invoice valid under Swedish regulations, i.e. including VAT, address details, organizational number, etc. During the experiment, the participants were continuously asked to assess the level of knowledge and effort required to complete a certain task. Printing the invoice was not set as the ultimate goal, but rather as a guiding objective that would take a certain amount of effort and user interaction with the software. Of great interest was the whole process, from its start to its end -printing the invoice. It is important to mention and understand that this experiment shares features with observational studies and usability tests. Firstly, the intention of the experiment is to answer the research questions posed by observing its participants in settings close to reality, i.e. in front of the computer in an office. Secondly, the experiment shares the features of a usability test, where the system is tested for ease of use. However, the focus of this study is neither the interface nor productivity, but rather the whole experience of the participants, from the very moment of deciding to look for an open source system to the very last moment of printing the first invoice.
The sampling technique was influenced by the organizational characteristics of small companies. According to Laukkanen et al. [START_REF] Laukkanen | Enterprise size matters: objectives and constraints of ERP adoption[END_REF], size is the most significant factor shaping the attitude of organizations towards ERPs. The attitude is expressed in terms of constraints and objectives for ERP adoption. In their study, a small enterprise was characterized by an average of 29 employees, low IT competence, resource poverty, and a variety of industries: wholesale, logistics, retail and manufacturing. This suggests that there are no strict requirements for choosing participants, as long as the types of companies studied are varied, and there are no strict requirements for IT skills. However, in order to avoid bias or misinterpretation, accurate profiling of the participants' computer skills was made.
The questions in the interview were constructed around two important aspects: 1) Perceived Ease of Use and 2) Perceived Usefulness.
The profiling questions, as previously mentioned, had the purpose of establishing the background of participants and their level of computer skills.
The questions related to Perceived Ease of Use had the purpose of assessing the level of ease perceived in the open source ERP deployment process. As TAM2 is a quantifiable model, the participants were asked to answer the PEOU questions on a scale from 1 to 5, where 1 was ranked as very easy and 5 as very difficult. Clarifying questions followed in order to get a better explanation of why participants gave specific grades. The questions related to Perceived Usefulness were designed to assess the positive or negative influence of the TAM2 external factors on PU.
The TAM2 model has been applied in two ways. The interviewees were first asked to grade, on a Likert scale, the Perceived Ease of Use related to the stages of the deployment experiment. This was done in order to assess the amount of effort and knowledge needed to complete the tasks. After that, the interviewees were asked to elaborate on the effect of the external factors on Perceived Usefulness, whether it was negative or positive. This was done in order to determine whether the open source ERP delivered the benefits expected.
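As a purely illustrative aid (not part of the original study), the short Python sketch below shows how such Likert-scale PEOU grades could be summarized per deployment stage; the individual grade values are invented, and only the scale (1 = very easy, 5 = very difficult) and the stage names follow the study.

```python
# Illustrative sketch only: hypothetical Likert-scale answers (1 = very easy,
# 5 = very difficult) for the three deployment stages, one list per participant.
from statistics import mean

peou_grades = {
    "finding/choosing":       [1, 1, 2],
    "installing/configuring": [1, 2, 2],
    "business-case tasks":    [3, 3, 2],
}

for stage, grades in peou_grades.items():
    print(f"{stage:<25} mean PEOU = {mean(grades):.1f}")
```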
In this section a summary of the empirical results will be presented. The structure follows the theoretical model and the research questions, and thus the data will be arranged into the following parts: Profiling, Perceived Ease of Use and Perceived Usefulness. Perceived Ease of Use will cover the subtopics Finding and Choosing an Open Source ERP, Installing and Configuring an Open Source ERP, and Accomplishing the Business Case Tasks. The interviewees who took part in the deployment experiment are all master's students in the Information Systems field. Their computer experience has been assessed as between intermediate and advanced levels, with seven years of computer experience on average. Asked about their use of computers, they commonly replied that the internet and their studies are the main reasons; only one of the students also used the computer for work and multimedia. All students confirmed that they have successfully installed software on computers. Regarding software usage, all students have been using one or more software products for a long time, five years or more, and all of them have an advanced skill level with the product they have been using over that period of time.
All students were given a business case, used as a guide in the deployment experiment. In brief, the experiment participants were asked to choose, install, configure and accomplish the task of printing the first invoice. These stages have been assessed for knowledge and time effort.
Finding and Choosing an Open Source ERP
Having been asked to find and choose an open source ERP, all students searched for one on Internet. The most common search keywords used were "open source ERP", "open source ERP free" and "open source ERP download". Asked to explain the logic behind the search actions, students clarified that they were willing to evaluate the search results given by Google according to their relevance. In other words, the students chose the first results on the Google's webpage, after having queried for open source ERP. The resulting webpage included for instance OpenBravo, xTuple, Compiere, Opentaps.
To the question on how easy it was to find an open source ERP, they all stated that finding one was very easy.
The students proceeded with looking at the websites available in the search results. Asked how they evaluated the different open source ERP choices, they gave different answers. One student took into consideration the professionalism of the website, i.e. its looks and design. The rest of the participants did not have any explanation for their choice. Ultimately, the users chose to proceed with the xTuple/PostBooks open source ERP.
However, it is crucial to mention that none of the students took into consideration the technical parameters and requirements of the open source ERP.
Installing and Configuring an Open Source ERP
All students showed the same behavior during installation process. None of them gave too much consideration to installation and configuring information. They all proceeded with preconfigured elements.
One of the students explained that this behavior is due to lack of knowledge about certain parts of software, such as databases offered and other elements; adding that "sometimes this is scary just because there are things configured you don't know anything about it". But in order to be on the safe side, the students chose to install all elements suggested. Also, the rush can be explained by the wish to run and try out the software at once, skipping the configuration and installation details.
When asked to share the first impression about the installation and configuration process, they expressed that the process was very easy and took little time and effort.
Getting the First Invoice
This part of assignment was the most challenging and most complex for the students. Some of the issues happened right in the beginning of the task, when they could not find the credentials needed to login to the enterprise resource system. However, all three found the interface of the software very "handy" and pleasant.
Having familiarized themselves with the interface and menus, they proceeded with the task. During the task, they all stated that the software was intuitive and helped them in achieving their goal. However, it also created partial confusion for multiple reasons, such as lack of knowledge, lack of supporting help, and no process transparency from the software. The last reason in particular relates to the save function of the software, which did not notify about its results, creating confusion about whether the data had been saved or not.
Before accomplishing their task, only one of the participants decided to turn to available help on Internet, video tutorials and ERP documentation on the vendor's website. It is interesting to mention that the same respondent used the same technique, googling, for solving issues whenever a problem appeared. The other two used their intuition and the menus available in the software.
Ultimately, the experiment participants succeeded in printing out the first invoice. It is crucial to mention, however, that the resulting invoices did not contain all the data required in the business case.
Perceived Ease of Use
Generally speaking, the students graded the deployment process as relatively easy. The average grade, on a scale of one (very easy) to five (very difficult), was two. Most difficulties were faced when configuring and working with the ERP in order to type in the necessary data and print out the invoice. Finding, downloading and installing the open source ERP was described as a very easy task. However, more transparency in the process was seen as necessary.
Figure 2. Perceived ease of use on a Likert scale of 1-5 (1 = very easy, 5 = very difficult).
The students mentioned that not much knowledge is needed in order to accomplish the tasks of finding, downloading and installing the open source ERP. However, they believe that more knowledge is required to learn the system.
Generally, the deployment process of the open source ERP was assessed as easy. It can be claimed that there are multiple reasons for that. First of all, the students mentioned that finding an open source ERP was very easy, as they could use Google and pick the most relevant results. The key to finding relevant results was querying the right keywords. When it comes to ERP selection, the students mentioned that it was very easy as well. However, most of them judged the quality of the software by the Google results. None of the participants thoroughly assessed the advantages and disadvantages of one product compared to others, nor did they evaluate the product's technical requirements. Such decisions could affect the overall performance of any open source ERP if the minimum technical requirements are not met.
Students mentioned throughout the experiment that the software had a very "handy" and good user interface, comparing it to regular software products like Microsoft etc. They also mentioned that the installation and configuration procedure was intuitive and guiding. In other words, they could get clear instructions. Generally, these indicate clear signs of good usability practices as suggested by Nielsen [START_REF] Nielsen | Usability 101: Introduction to Usability[END_REF].
However, the students felt a lack of transparency and information during installation. They felt that more knowledge is required to understand all installation and configuration details. Nevertheless, all three participants succeeded with the installation, being able to run the application. That in turn indicates that little knowledge is needed to install and configure the software, and that the role of the user becomes one of assisting or monitoring the process rather than being obliged to decide.
The first difficulties came when literally working with the ERP user interface. After having familiarized themselves with available menus, two of the students proceeded immediately with the task using the trial-and-error method. Only one of the students tried to get better informed about the available functions and workflows. That is why the former two had consumed more time and committed more errors while executing the task. The latter used information available on Internet, particularly information on the vendor's site.
Consequently, the appropriate use of software documentation and tutorials might have helped the students perform their task faster and with higher quality. That also supports the idea that users with appropriate training find systems easier to use and relevant to their job performance. However, Nielsen [START_REF] Nielsen | Usability 101: Introduction to Usability[END_REF] supports the idea that an easy-to-use system should support learnability, i.e. accomplishing tasks easily at the first encounter with the application.
On one hand, the problem might lie not specifically in the interface design, but rather in the complexity of the ERP software systems. The students confessed that for better usage more knowledge and training is required. On the other hand, the poor performance of two students in terms of time and errors might be also explained by the rush of trying out the system.
Nevertheless, all three students succeeded in placing a sales order, shipping it and printing out the invoice. The resulting invoices do miss important parts of the data, which suggests that the participants partially neglected the assignment. The problems appeared when creating a customized user for the company, i.e. registering the company profile, and when adding a product to the sales order. In one case, the students blamed the interface of the software, as they could not track the changes to the system. These problems resulted from a lack of training and proper education of the users. As Bueno and Salmeron [START_REF] Bueno | TAM-based success modeling in ERP[END_REF] mention, training and education are necessary before, during and after the system implementation. Consequently, the last task of getting the first invoice was rather hard even for our participants, despite their large experience with other software and their high levels of computer skills. Thus, a relevant amount of knowledge of the ERP system is key to performing the job better. In terms of time, the deployment assignment was completed in less than one hour.
Perceived Usefulness
The experiment participants perceived the open source ERP as useful. All of the students believed that the software would enrich and help perform the job tasks, as well as reduce useless paperwork.
However, the students believed that in order to get better output quality, one should first learn the system. The beliefs about result demonstrability were mixed. Some of the students believed that for the moment the results are very limited because of the amount of work done. However, they were satisfied to see tangible results in the form of the invoice.
Regarding image, one of the students mentioned that adopting an open source ERP is becoming a necessity, as well as affecting the status and the image of the company in general.
Generally speaking, the perceived usefulness of the open source ERP was assessed as positive. That is why two of the students stated that if they were free to choose, they would certainly use such software in their company. The remaining respondent hesitated to give an answer, explaining that it depends on the company and its activity.
In general, perceived usefulness of the ERP system tested was evaluated as positive. In other words, the open source ERP system has delivered the expected benefits, and all of the students find it advantageous for any small company.
All students found the ERP system beneficial to their job performance, even though they encountered it for the first time. One of them mentioned that this particular ERP brings more benefits than simply using Excel or doing paperwork. Of course, it is hard to assess the benefits of a system in such a short time; rather, usefulness is a function of time and needs a longer period to become apparent. The same idea is expressed by many researchers in the domain, who claim that the majority of companies cannot realize the benefits in the initial period.
The image, voluntariness and result demonstrability factors have also positively affected the perceived usefulness. This can be explained by the background of the students, who are all studying information systems. Thus, their general knowledge about ERPs and their benefits might have affected their perceived usefulness of the product. However, the students claimed that, in comparison to the effort, they could see clear results in the end.
The resulting invoices are a clear indicator of what can be achieved in less than one hour with an open source ERP. Although the invoices lacked much of the important data, they indicate that with a certain amount of knowledge and training more benefits to the company and its overall performance can be gained.
Conclusions
The research has shown that a relatively small amount of effort -time and knowledge -is needed in order to deploy an open source ERP. The perceived ease of use of the process has been evaluated as relatively easy, with the main difficulties appearing in interacting with the ERP workflow due to lack of training and limited knowledge. However, if appropriate user training and education is applied, greater job performance and output quality can be achieved.
The study has also shown that the output results of the open source ERP are perceived as beneficial to the company and its activity. The students evaluated the perceived usefulness as mostly positive. The factors of job performance, output quality, result demonstrability, voluntariness and image have positively affected the perceived usefulness of the open source ERP system.
"1001319",
"1003478"
] | [
"344927",
"344927"
] |
01483874 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01483874/file/978-3-642-28827-2_7_Chapter.pdf | Thi Anh Duong
email: htaduong@ifs.tuwien.ac.at
Hoang
Hieu Tran
email: tran.hieu@ifs.tuwien.ac.at
Binh Thanh Nguyen
email: nguyenb@iiasa.ac.at
A Min Tjoa
Towards the Development of Large-scale Data warehouse Application Frameworks
Keywords: Business intelligence, Data warehouses, Application framework, Large-scale development
Faced with growing data volumes and deeper analysis requirements, the current development of Business Intelligence (BI) and data warehousing systems (DWHs) is a challenging and complicated task, which largely involves ad-hoc integration and data re-engineering. This raises an increasing requirement for a scalable application framework that can be used for the implementation and administration of diverse BI applications in a straightforward and cost-efficient way. In this context, this paper presents a large-scale application framework for standardized BI applications, supporting the ability to define and construct data warehouse processes and new data analytics capabilities, as well as the deployment requirements of multiple scalable front-end applications. The core of the framework consists of defined metadata repositories with pre-built, function-specific information templates as well as application definitions. Moreover, the application framework is also based on workflow mechanisms for developing and running automatic data processing tasks. Hence, the framework is capable of offering a unified reference architecture to end users, which spans various aspects of the development lifecycle and can be adapted or extended to better meet application-specific BI engineering processes.
Introduction
Within the application scope of developing DWHs, it is necessary to implement the underlying data basis, a task which comprises the selection and configuration of relevant software and hardware components for the DWH architecture [START_REF] Kimball | The Data Warehouse Lifecycle Toolkit[END_REF]. Based on the requirements definitions, the design specification develops suitable database schemas, selects adequate system components, determines partitions of database tables and specifies ETL processes. The size and complexity of DWH systems are increasingly involved in enabling large data services to scale when needed, to ensure consistent and reliable operations. Therefore, it is difficult to design and maintain a large-scale application framework with its own characteristics (dynamicity, scalability, etc.) [START_REF] Chuck | Navigating the Next-Generation Application Architecture[END_REF] over heterogeneous environments.
The existing systems available on the market already offer pre-built data models, preconfigured applications and prearranged contents for common business domains, e.g. customer analytics, financial analytics, HR analytics and supply chain analytics. Moreover, these systems also enable various kinds of data integration and information delivery. Several BI suites also deliver ready-to-use templates and include features to build custom ones. However, these approaches are still tightly bound to the respective BI solution products. As the analysis of various multi-dimensional modeling methods shows, libraries comprising reusable elements of DWH reference models are mostly specialized on particular model element types [START_REF] Goeken | Multidimensional Reference Models for Data Warehouse Development[END_REF].
Although these various emerging services have reduced the cost of data storage and delivery in DWHs, DWH application development still remains a non-automated, risk-prone and costly process. Much of the cost and effort of developing a complete enterprise DWH stems from the continuous re-discovery and re-invention of core concepts and components across the BI industry. In this context, there arises a requirement for a generic application framework that can provide an asset base of reusable information models and a structured platform to develop and incorporate DW and BI applications in a cost-effective manner and with the available set of tools and technologies.
Within the scope of this paper, a large-scale DWH application framework lays the groundwork for a new paradigm to specify, design, host and deliver standardized BI applications, with common consistent procedures such as data processing defined as reusable components. The basis of the framework is a metadata-based architecture, including generic data model libraries and pre-built application libraries along with the associated mappings. Integrated with the data modeling components, the application framework enables an integrated set of services that collaborate to provide a reusable architecture for a family of DWH eco-systems formed of large numbers of decentralized data analysis services. On this basis, the application model can then be executed and replicated up to the deployment model in a flexible way, enhancing reusability as well as enabling diverse data model preprocessing and analysis, which reduces the implementation time and improves the quality of the BI applications.
The rest of this paper is organized as follows: section 2 introduces some approaches related to our work; in section 3, the framework architecture and its core components are presented, along with its design principles; in section 4, the mechanisms for using the framework in developing DWH applications are analyzed; section 5 illustrates our experimental design to highlight the feasibility of the proposed framework; finally, section 6 gives a summary of what has been achieved and of future work.
2 Related works
The concept of the application framework has been proposed to describe a set of reusable designs and code in the development of software applications [START_REF] Mohamed | Object-oriented application frameworks[END_REF]. However, there is currently still a lack of concepts for designing and implementing a possible adaptation of application frameworks to the large-scale DWH context. To sketch the background of the presented research, this section discusses approaches proposed in the domain of BI and DWHs which could help with large-scale development.
The key to BI application development in the large-scale context is scalable, configurable and well-maintained application models. In this context, the scalability can range from complete BI applications down to atomic functional blocks, from data models and ETL tools to deployment packages. Research efforts are very much aware of this trend, and no fewer than a dozen companies have in recent years built specialized analytical data management software (e.g., Netezza, Vertica, DATAllegro, Greenplum, etc.), alongside the well-known systems available on the market such as Sybase PowerDesigner, Business Objects Foundation, Oracle BI, etc.
However, to the best of our knowledge, reference application models for data warehouse projects mostly amount to ad-hoc modifications of existing information models [START_REF] Knackstedt | Configurative reference model-based development of data warehouse systems[END_REF]. The traditional approach proposed by both industry and academia for applying application and data frameworks is one of manually assembling and configuring a set of compatible elements as a reference application model [START_REF] Goeken | Multidimensional Reference Models for Data Warehouse Development[END_REF]. Most of these frameworks still demand a vast amount of repetitive and tedious work to implement similar parts of a DWH application; the products are tied to their own extraction and information delivery tools and lack pre-built data mappings. This is in part because, in current reference DWH models, neither the data models nor the application models have any awareness of each other, and these layers operate independently. Therefore, it is difficult to design and maintain DWH applications in a large-scale context, especially as the data and application models change heterogeneously.
Moreover, to solve the data management problems in a large-scale development context, technologies have been developed to manage high volumes of data efficiently, e.g. Google Bigtable (http://labs.google.com/), Amazon Dynamo [START_REF] Giuseppe | Dynamo: amazon's highly available keyvalue store[END_REF], and so on. Recent years have seen an enormous increase in the adoption of MapReduce, Google's programming model for solving the problem of a massive explosion in data. Due to its convenience and efficiency, MapReduce is used in various applications as a basis for data frameworks. For example, the Apache (http://apache.org/) Hadoop framework is commonly used on a scale-out storage cloud to perform the data transformation necessary to create the analytics database. Hive is a data warehouse infrastructure built on top of Hadoop that provides tools to enable easy data summarization and ad-hoc querying of large datasets stored in Hadoop files.
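To make the MapReduce pattern referred to above concrete, the following minimal Python sketch imitates the Hadoop Streaming convention (mapper and reducer reading from standard input and writing key/value lines); the tab-separated record layout of station, month and value is an assumption made purely for illustration and is not prescribed by Hadoop or Hive.

```python
#!/usr/bin/env python3
# Minimal sketch of the MapReduce pattern: map emits (key, value) pairs,
# the framework sorts them by key, and reduce aggregates each key group.
import sys
from itertools import groupby

def mapper(lines):
    for line in lines:
        station, month, value = line.rstrip("\n").split("\t")
        yield f"{station}|{month}", float(value)

def reducer(pairs):
    for key, group in groupby(pairs, key=lambda kv: kv[0]):
        yield key, sum(v for _, v in group)

if __name__ == "__main__":
    # Local, single-process simulation of map -> shuffle/sort -> reduce
    mapped = sorted(mapper(sys.stdin), key=lambda kv: kv[0])
    for key, total in reducer(mapped):
        print(f"{key}\t{total}")
```

In a real deployment, separate mapper and reducer scripts of this kind would be handed to Hadoop Streaming, which takes care of the distributed shuffle and sort between the two phases.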
In this context, this paper aims to develop a structured application framework that enables a comprehensive, hosted BI solution tailored to scale and reshape to meet changing business needs and requirements. Integrated with the data modeling components, the DWH application framework can support the implementation and deployment of DWH applications by leveraging a pre-built modeling and application framework, providing a high degree of re-usability and configurability.
Large-scale data warehouse application framework
The proposed application framework allows users to develop DWHs by enabling framework reuse across application designs and platforms. The design of a DWH application, along with the tool suites, simplifies the ability to add and remove analytic features without taking the underlying framework logic into account. This section presents an integrated DWH application framework (DWHAF), providing a flexible, scalable solution to enable the rapid development of differentiated DWHs.
Component analysis of DWHAF
A typical method for the implementation and maintenance of a DWH application comprises the steps of: analyzing the critical subject area, existing data models and data needs; designing in sequence the logical data model and the physical database; developing back-end services and end-user applications; and deploying the database implementation, initial load and validation of the data warehouse. In this context, it needs to be taken into account that developing BI applications requires an efficient application framework of tightly interoperating components along with a scalable data modeling framework and environment templates. To model the data warehouse data model, the logical data model and database as well as the mappings from source to target are designed. In this way an information model of the required DW is created. This information model can be analyzed and the results can be used as a basis for the ETL phase. Finally, the development phase is subject to continuous adaptation and maintenance of the cube models.
The data modeling framework enables end users to define the logical data model using a series of graphical user interfaces (GUIs) or application programming interfaces (APIs), and then dynamically translates the logical data model into a corresponding physical data model. Because the application framework is integrated with the data modeling framework, the application framework features are automatically available to end users, enabling them to configure various application features and data management operations, likewise via UI and API. Through the API, BI applications can interact with the application framework, which in turn interacts with the logical data model.
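A hypothetical sketch of such an API is given below; none of the class or function names belong to an existing product, they merely illustrate how a logical entity declared through an API could be translated by the framework into a physical schema.

```python
# Hypothetical API sketch: the names LogicalEntity, TYPE_MAP and
# to_physical_ddl are invented for illustration only.
class LogicalEntity:
    def __init__(self, name, attributes):
        self.name, self.attributes = name, attributes   # {attribute: logical type}

TYPE_MAP = {"id": "INTEGER PRIMARY KEY", "text": "VARCHAR(255)", "number": "DECIMAL(18,4)"}

def to_physical_ddl(entity):
    # Translate each logical attribute into a physical column definition
    cols = ",\n  ".join(f"{a} {TYPE_MAP[t]}" for a, t in entity.attributes.items())
    return f"CREATE TABLE {entity.name} (\n  {cols}\n);"

customer = LogicalEntity("customer",
                         {"customer_key": "id", "name": "text", "credit_limit": "number"})
print(to_physical_ddl(customer))
```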
It is clear that common business domains require common application processing functionality; e.g. data delivery provides users with processing results tailored to their specific needs, for example which data format they accept, which relevant parameters they want to retrieve, etc. The required application framework also supports the description and execution of application-specific data processing workflows as well as predefined workflow patterns between distributed components. When considering the whole application processing chain from input to delivery, it appears that instead of a single workflow there could be a variety of workflow definitions for a single application according to different user needs.
In this context, the DWH application framework, including the data modeling functions and application features, is aimed to be a scalable architecture to serve as the warehouse's technical and application foundation, identifying and selecting the middleware components to implement it. New business requirements or changed requirements may result in the addition of new components to the framework and a modification of the existing framework components. Therefore, it takes a series of iterations to get the framework right for the applications that are built on top of it.
Constructing the relationships between data and application models
The model component provides the access to pre-packaged data and application models, enabling end users to reconfigure and customize as well as provide aids to dimensional modeling in the DW and BI context. The data models can be distinguished as the model that stores fact and its related dimensions and the model that defines cube structure or relational schema [START_REF] Kimball | The Data Warehouse Lifecycle Toolkit[END_REF].
The data model component of the system offers the means to capture all the possible dimensions and their respective attributes/properties as well as to establish mappings between the analytic facts and dimensions. Meanwhile, built-in dimensional models represent best practices for comprehensive analysis relating to particular business functions. Moreover, data models can be generated based on definitions of dimensions, facts and analytical needs. End users can thus reconfigure these data models via user interfaces.
Meanwhile, metadata associated with an application model is used to describe a template of the interface and functionality of the application. These templates consist of ETL and warehouse design, mappings, data storage, deployment templates or application specifications. As an example, application metadata can include defined options to personalize the DWH applications, which are used in the runtime reconfiguration process when the template is instantiated in a specific application. This metadata can be executed at run time to provide the design interface for the designer, and to deploy the model into the output data storage structure of DWH applications.
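As an illustration only, such application metadata could be captured in a structure like the following Python dictionary; the field names are our own assumptions rather than a standardized template format.

```python
# Illustrative application metadata template: it bundles an ETL mapping, a
# storage choice and personalization options, plus the template version that
# is recorded at instantiation time. All field names are assumptions.
sales_analytics_template = {
    "template_id": "sales-analytics",
    "version": "1.2",
    "etl_mapping": {
        "source": "crm.orders",
        "target": "dwh.fact_sales",
        "transformations": ["currency_to_eur", "derive_order_month"],
    },
    "storage": {"engine": "relational", "partition_by": "order_month"},
    "personalization_options": {"default_currency": "EUR", "fiscal_year_start": 1},
}
```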
In accordance with the requirements of an integrated framework, the variability of the application configuration should be based on the consistent variability of the application/business models, i.e. data models, application models or domain models. By merging the application constraints with the data model constraints, we obtain a unified set of constraints that is used by the configuration tool to ensure that the generated template model is consistent and appropriate to the business domain.
The first step towards the construction of a relationship is to capture the variability of a given domain by means of a data modeling service. Similarly, this data model should be represented by configurable data exchange and data store service models, respectively. The application specification should be identified with the variants of the applications, and constraints should be defined to ensure correctness. Once the data model, the application specification and their constraints have been identified, it is possible to define the mapping by means of impact analysis:
─ from data model to application: given a data model specification, we need to estimate what the implications are for the settings in the application model;
─ from application to data model: given a change in the application specification, we need to consider which parts of the data model are impacted.
A valid mapping is constructed when a valid application specification is defined with respect to the data model. The specification space is obtained by the intersection of the two specification spaces (data and application) via the mapping. By representing the data model variability in a separate model, we avoid capturing the interdependencies of the data model in the configurable application model, as these interdependencies are propagated to the application model via the valid mapping. Once each customized data model has been configured, a set of service specifications, attached to the application specification that captures the selected data model, is applied to the application model to define a configuration.
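The following Python sketch illustrates this impact analysis in a deliberately simplified form, representing both the data model and the application specification as plain sets of feature names; the concrete feature names are invented for the example.

```python
# Sketch of the impact analysis: a mapping is accepted only if every
# application-side requirement is satisfied by some element of the data model.
data_model_features = {"fact_sales", "dim_customer", "dim_time", "dim_product"}

application_spec = {
    "monthly_revenue_report": {"fact_sales", "dim_time"},
    "customer_segmentation":  {"fact_sales", "dim_customer"},
    "supplier_scorecard":     {"fact_purchases", "dim_supplier"},   # not covered
}

def valid_mapping(required, available):
    return required <= available          # set inclusion = constraints satisfied

for app, required in application_spec.items():
    status = ("ok" if valid_mapping(required, data_model_features)
              else f"missing {required - data_model_features}")
    print(f"{app}: {status}")
```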
Architecture of proposed DWH application framework
The system architecture consists of abstract services that are the core components of DWHAF applications, namely the Application, Client, DWH and Platform services. Specifically, the framework also contains the main components: the metadata services, the management services and the libraries. The Client layer contains top-level components such as application interfaces and service APIs, as well as specific services related to the application service framework. All instantiated application models, both native and third party, and other supplementary components reside in this layer. The application layer runs within the system runtime, using the templates and services available from the underlying layers.
Client services
This layer is the place where the context information is processed and modeled. This context information acquired by the components at this layer helps services to respond better to varying user activities and environmental changes. Additionally it serves as an "inference engine" that deduces and generalizes facts during the process of updating context models as a result of information coming from Profile interfaces.
DWH services
This layer provides the services used to develop DWH applications as well as a generic abstraction for platform access, and it manages the user interface and application resources. Built around API connectivity, the proposed framework also includes pre-integrated DWH solutions, validated on multiple platforms, cutting down the time needed to launch a DWH model. Moreover, customers can plug in components of their choice, with minimal impact on system integrity.
Platform services
To ensure efficient development, customization and administration of DWH applications, the platform layer offers application delivery. Specifically, DWH development relies on a variety of core services, i.e. security, performance tuning, data storage, visualization packages, data analysis tools, and basic environments.
Along with the libraries, this layer forms the deployment basis for the application framework. For example, the Management Services offer application configuration and reporting capabilities; Virtualization delivers abstraction between physical and functional elements in service management.
Metadata services
As presented in the component analysis process, there's a clear separation of different kinds of metadata, i.e. metadata that describes the data model, the application model, the base functionality of an application, and the customizations. The Metadata repository provides a central, shared source of metadata including pre-built metadata, enabling reduction in implementation and maintenance costs. The metadata repository also serves as a key repository of all mappings between application models and function specific data models housed in the data model component.
Along with the data model component, the system provides user interfaces to reconfigure the reference models, preventing the risks associated with being locked into a specific configuration. Meanwhile, the library layer contains repositories of pre-built templates for DWH application development. This layer will grow as development of the framework continues and common themes are identified. Specifically, the architecture provides common-practice solutions to reduce the effort of data modeling, which can be used as a starting point for the construction of application-specific models. In addition, a metadata repository supports application implementation, i.e. definitions of data objects, application components, runtime customizations, etc.
Applying DWHAF in application development
Built on top of the framework, DWH application service provides the design for specific applications and the query interface required by the DWHs solution. The application service provides both the tools and applications that enable predefined and ad hoc collection, analysis, and presentation of end user's data by means of user interfaces and service API. The result of this design service includes an application prototype, recommended design templates and development tools to support implementation, and deployment plans for the required applications.
Fig. 3 provides an overview of our proposed framework for generating DWH application architecture. DWH designers interact with the service composition environment through a composition user interface, navigating through the generated DWH reference models, edit the process, and deploy DWH applications. The backend includes major components that process the contextual information, generate reference processes and analyze historic usage data to recommend DWH templates.
Fig. 3. DWH application framework and application service
From the modeling viewpoint, the ability to use history data is an important feature. An intuitive view to application-specific DWH design would be to satisfy the need for (non-existing) DWH services by bringing together existing ones. We aim to reuse development knowledge and provide a starting template for the users. This knowledge can be based on past successful deployment, and can be seeded by domain-specific knowledge and task-specific knowledge about the core types of information processing activities.
In this context, the presented application framework enables designers to develop DWHs by enabling pre-built service libraries reused across application designs and platforms. The design of DWH applications, along with the tool suites, simplifies the ability to add and remove analytic features without taking into account the underlying framework logic. The result of the architecture design is the Reference Architecture, which is an architectural framework along with a set of middleware elements facilitating the integration of services, and context modeling. An example of adapting the generic component for the implementation of an ETL specific component is presented in Figure 4. The example shows the implementation of the selective data storage service, and the other components can be implemented in a similar manner.
Fig. 4. Example process of defining ETL reference architecture
The application framework aims at providing a reference template configured to enable end users to leverage pre-built practices and to avoid designing and developing DWH applications from scratch [START_REF] Oracle | Enabling Pervasive BI Through a Practical Data Warehouse Reference Architecture[END_REF]. Moreover, we focus on the stage of dynamic template instantiation, in which we need to identify a specific service instantiation for each of the generic services in the template. The focus on supporting end users means that we aim to automate only the main aspects of the process: selecting suitable services for each task and working out compatibilities between services in terms of data flow and pre- and post-conditions. For example, for querying services, the related pattern can be defined as a warehouse GUI used to enqueue queries, check their status and retrieve results.
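A simplified sketch of this instantiation step is shown below; the service catalogue, task names and pre-/post-conditions are invented, and the matching logic is reduced to set inclusion purely to illustrate the idea of compatibility checking.

```python
# Sketch of template instantiation: for each generic task pick a concrete
# service whose pre-condition is satisfied by the data produced so far, then
# add its post-condition to the available state.
services = {
    "extract_csv":  {"pre": set(),          "post": {"raw_rows"}},
    "clean_rows":   {"pre": {"raw_rows"},   "post": {"clean_rows"}},
    "load_fact":    {"pre": {"clean_rows"}, "post": {"fact_table"}},
    "build_report": {"pre": {"fact_table"}, "post": {"report"}},
}

def instantiate(template, catalog):
    state, plan = set(), []
    for task in template:                      # generic tasks, in order
        candidates = [s for s in catalog
                      if task in s and catalog[s]["pre"] <= state]
        if not candidates:
            raise ValueError(f"no compatible service for task '{task}'")
        chosen = candidates[0]
        plan.append(chosen)
        state |= catalog[chosen]["post"]
    return plan

print(instantiate(["extract", "clean", "load", "report"], services))
```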
Illustrative example
To establish the practical feasibility of our framework, we have designed a toolset that provides end-to-end support for DWH application model configuration. In this case study, we take advantage of the Hadoop environment, which is based on the MapReduce programming model, to optimize data warehouse performance. Specifically, the experimental ETL application acts as a transformation engine, taking the extracted data from multiple data sources and processing this large-scale data into a common format for integration into the data warehouse. As a proof of concept, a data warehouse for the Climate subject area is deployed, in which we use the Pentaho BI suite, whose ETL tool (Pentaho Data Integration, PDI) has been extended to support processes that exploit Hadoop structures. Figure 5 provides an overview of the toolset's architecture. The fact tables can be defined for various subject areas, for example the value of a variable measured at one station on a given date, keyed by station key, time key and variable key, with the measured value and monthly sum as measures.
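Purely as an illustration of the fact table just described, the sketch below creates it in an in-memory SQLite database; SQLite stands in for the warehouse storage, and the exact column types are our own simplification.

```python
# Sketch of the climate fact table: column names follow the keys and measures
# listed in the text, the DDL details are a simplification for illustration.
import sqlite3

ddl = """
CREATE TABLE fact_measurement (
    station_key    INTEGER NOT NULL,
    time_key       INTEGER NOT NULL,
    variable_key   INTEGER NOT NULL,
    measured_value REAL,
    monthly_sum    REAL,
    PRIMARY KEY (station_key, time_key, variable_key)
);
"""

con = sqlite3.connect(":memory:")
con.execute(ddl)
con.execute("INSERT INTO fact_measurement VALUES (1, 20110115, 7, 3.2, 96.4)")
print(con.execute("SELECT COUNT(*) FROM fact_measurement").fetchone()[0])
```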
There is also a version management mechanism used to relate the instances of the model back to the application template they were generated from. Every time an application is instantiated from a template, the instantiated application gets a copy of the version attribute that describes which version of the template was used. When a designer customizes an instance of the application, these customizations are stored with the instantiated application; they are not propagated back to the template.
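The intended behaviour can be sketched in a few lines of Python; the template fields are invented, and only the rule that the instance copies the template version and keeps its customizations locally reflects the mechanism described above.

```python
# Sketch of the version-management behaviour: the instance remembers the
# template version it came from, and customizations stay with the instance.
import copy

def instantiate_application(template):
    instance = copy.deepcopy(template)
    instance["template_version"] = template["version"]   # remember origin
    instance["customizations"] = {}                       # instance-local only
    return instance

template = {"name": "climate-dwh", "version": "2.0", "options": {"granularity": "day"}}
app = instantiate_application(template)
app["customizations"]["granularity"] = "month"            # does not touch template
print(template, app, sep="\n")
```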
Conclusions and future works
The rising demand for BI and DWH applications is fostering the need to support flexible deployment models for all BI application services. With ever growing data volumes and deeper analysis requirements, it also follows that a DWH application framework must be able to scale out to meet the operational and technical requirements of hosting and delivering applications. In this paper, a metadata-driven application framework is presented as the core that enables the platform to deliver configurable and scalable DWH applications. The essence of this framework is that the application framework is tightly integrated with the data modeling components and thus supports an associated and centralized logic to ensure the consistency of the DWH applications. Focusing on the potential of semantic technologies [START_REF] Oscar | Automating multidimensional design from ontologies[END_REF][START_REF] Oscar | Discovering functional dependencies for multidimensional design[END_REF][START_REF] Spahn | Supporting business intelligence by providing ontology-based end-user information self-service[END_REF], we are working on applying sound formal techniques to represent and automate the mapping between the domain data model and the application model. To establish the practical feasibility of our framework, we have designed a toolset that provides end-to-end support for DWH application model configuration, and we conduct DWH development experiments with various subject areas to empirically evaluate the applicability and impact of the proposed configuration approach on the deployment process.
Fig. 1. The required DWH application framework
Fig. 2. DWHAF conceptual architecture
Fig. 5. Prototype implementation architecture
Acknowledgments
This work is supported by a Technology Grant for South East Asia of the Austrian Council for Research and Technology Development and ASEA UNINET. | 27,231 | [
"1003479",
"1003480",
"1003481",
"993405"
] | [
"19098",
"19098",
"487706",
"19098"
] |
01483875 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01483875/file/978-3-642-28827-2_8_Chapter.pdf | Victor Romanov
Alina Poluektova
Olga Sergienko
Adaptive EIS with Business Rules Discovered by Formal Concept Analysis
Keywords: Enterprise Information systems, business rules, formal concept analysis, system adaptation, customer profile
Business rules component became essential part of the software such companies as ORACLE, SAP, IBM and Microsoft and this fact signifies new stage in the Enterprise Information System (EIS) development and applications. The efficiency of application such new tools depends from business rules development technology. The new generation of ЕIS software requires not only deployment strategy, but also tools for extracting business rules from description of existing practice. Along with manual business rules extraction from mountains of documents, there exists the possibility to apply data mining technology based on formal concept analysis. In this paper we are presenting, how suppliers and customers data, being accumulated in data base, may be used in Customer Relationship Management (CRM) system for fitting services and relations to customers and suppliers profiles.
l. Introduction
The modern enterprises' business processes are very complex, especially for medium and large business. They contain many conditions, restrictions and rules that are implicit and hidden in numerous documents, job manuals, applications codes and experience of employees. Such sparse rules dissemination creates difficulties for a company to rebuild the business in time, because of time spent on documents finding, the conditions and rules reveal, rewriting instructions and regulations, and to make changes to the IT components and applications. Great number of business rules and its variability urges the companies to allocate business rules as independent part of the business description [START_REF] James | Smart Enough Systems: How to Deliver Competitive Advantage by Automating Hidden Decisions[END_REF][START_REF] Romanov | Customer-Telecommunications Company's Relationship Simulation Model (RSM), Based on Non-Monotonic Business Rules Approach and Formal Concept Analysis Method[END_REF]. The collecting of business rules together as a separate IT components managed by Business Rules Management System increases the response speed to the changing of competitive environment and speed of decision making.
The companies, such as Oracle, SAP, IBM and Microsoft, which are producing software for enterprise information systems during last two years the main effort have been focused at the problems of integrating broad spectrum of the different their software products, such as Web-applications, SOA, Collaboration suite, Data Warehousing, Application Server and so on. In the same time such business intelligence tools as Data Mining, Knowledge Discovery, and Business rules were regarded for a long time how not essential part or as special software and even were not included into structure of EIS software.
As a consequence, there arose a situation, when very complex and expensive software was targeted mainly at the secondary tasks in the business management, such as routine tasks of data processing, not towards the strategic planning tasks of top managers. As a result, in many cases, the efficiency of enterprise information systems, relative to the expenses for the implementation of these systems, was low and insufficient for the business users.
Meanwhile the concepts of EDM -Enterprise Decision Management [START_REF]What IS Enterprise Decision Management or EDM? FICO Decision Management Blog[END_REF] and ADS -Automated Decision Support [START_REF]Will Automated Decision Support Tools Replace the Business Analytics?[END_REF] were proposed as an approach for possible automated decision making. Having in mind implementation business rules approach at the some enterprise, we will concentrate our efforts on the problem of including business rule capability in CRM business processes. Only in recent years the companies, mentioned above, have included a component of business rules in their software. It is important to note that rules-component has been integrated into business processes component and Data miming tools are applying for rule discovery. Let's consider some essential features of BRMS components EIS software of these vendors.
BRMS Component of the Leading EIS Software Companies
Oracle Business Rules
Oracle Business Rules [5] is a new product that provides all features needed to realize the "agility" and cost reduction benefits of business rules. Oracle Data Mining can analyze historical transaction data and suggest business rules. Oracle Business Rules integrates with SOA/BPM facilities.
The Rule Author is a graphical user interface tool for creating and updating rules. Programmers use the Rule Author to create business rules, extracting them from documents and practice and converting these business terms to Java or XML expressions. Rule Author provides a web-based graphical environment that enables the easy creation of business rules via a web browser.
The Rules engine is implemented as a Java Class, and is deployed as a Java callable library. Java programs directly call the Rules engine. The Rules engine implements the industry standard Rete algorithm making it optimized for efficiently processing large numbers of Rules. There are many types of services including "decision support" services.
SAP NetWeaver Business Rules Management
The SAP NetWeaver Business Rules Management [START_REF]SAP Netweaver Business Rules overview #sapteched09 -JT on EDM[END_REF] component complements and accelerates SAP NetWeaver Business Process Management. Together they have become important components of Enterprise SOA.
The Rules Composer is the rule modeling and implementation environment of SAP NetWeaver BRM. Because it is integrated within SAP NetWeaver Developer Studio, the Rules Composer is the most efficient way for developers to build rules-based applications targeted at the SAP NetWeaver platform.
The Rules Engine is the run-time engine of SAP NetWeaver BRM, available as a predeployed stateless session bean in the SAP NetWeaver Application Server (SAP NetWeaver AS) Java of SAP NetWeaver CE. This tool gives IT developers the ability to generate reusable rules services out of the box, which is particularly helpful for integrating rules into composite applications.
The Rules Analyzer, a targeted environment for business analysts, will allow them to model, test, simulate, and analyze business rules without assistance from developers
MS BizTalk Server
BizTalk Server includes the Business Rules Framework [START_REF]Microsoft BizTalk Server Business Rule Framework[END_REF] as a stand-alone .NET-compliant class library that includes a number of modules, support components, and tools. The primary modules include the Business Rule Composer for constructing policies, the Rule Engine Deployment Wizard for deploying policies created in the Business Rule Composer, and the Run-Time Business Rule Engine that executes policies on behalf of a host application.
The Business Rule Composer enables you to create rules by adding predicates and facts and defining actions. You can add facts and actions by dragging them to the Business Rule Composer design surface. The actions update the nodes in the specified document. You can also add AND, OR, and NOT operators to conditions to create complex comparisons.
The Business Rule Composer helps you create, test, publish, and deploy multiple versions of business rule policies and vocabularies to make the management of these artifacts easier.
IBM WebSphere ILOG Business Rule Management System
The IBM WebSphere ILOG Business Rule Management System [START_REF]IBM WebSphere ILOG JRules BRMS[END_REF] offerings include the next main components: Rule Studio, Rule Team Server, ILOG Decision Validation Services, Rule Solutions for Office, Rule Repository, Rule Execution Server.
From within Rule Studio, a developer can:
• Create a logical business object model (BOM) for the application, and map it to a customized, domain-specific rule vocabulary.
• Create business rules in a natural language syntax, which can be expressed in one or a more localized versions (for example, English or Spanish).
• Create rules in the form of decision tables and decision trees.
• Create technical rules in a platform-specific syntax.
• Separate rules in a rule set into tasks, and specify a rule flow to orchestrate the execution of these tasks.
The rule authoring and management environment for business analysts and policy managers is called IBM WebSphere ILOG Rule Team Server (RTS), a thin-client Web-based environment with a scalable, high-performance enterprise rule repository. The repository provides the BRMS with a central "source of truth", addressing the specific needs of rule-based business policy management.
The WebSphere ILOG BRMS offerings include Rule Execution Server (RES), a managed, monitorable execution environment for rules that can be incorporated into an application by being deployed to a J2EE or .NET application server, or embedded directly in an application.
Business Rules Applications for CRM
The most promising domain for the application of business rules software at the current time is Customer Relationship Management (CRM). CRM is a business strategy directed at building a sustainable business, the kernel of which is a "client-oriented" approach. CRM also includes technology for customer retention and for maintaining a database of the client-enterprise interaction history. This strategy is based on collecting information about clients at all stages of the service life cycle, extracting knowledge from it and using this knowledge to improve the business.
In this paper we consider how the business rule management conception can be applied to a Russian enterprise, the Podolsk Electromechanical Plant (PEMP, http://www.i-mash.ru/predpr/1250), which produces hydraulic equipment. This enterprise specializes in supplying hydraulic and pneumatic equipment for energy companies and for construction, transportation and industrial complexes. The firm produces various hydraulic equipment (pumps, motors, etc.) designed for application in marine hydraulics, transportation systems, facilities for the repair of wells, as well as railway equipment (locomotives). All the necessary spare parts and rubber products for this equipment are also manufactured.
The problem facing this company nowadays consists, on the one hand, of a reduced volume of product orders and, on the other hand, of customer churn. In modern conditions the market is saturated with offerings, and the struggle for client retention becomes one of the major problems of every company.
The approach is to analyze customer data, select different customer categories and the corresponding business rules, and then provide customers with services relevant to their consumption profile. A flexible system of discounts and offers for clients will allow the company to increase sales, identify and select the most valuable and reliable customers, focus on them according to their score, and as a final result improve the efficiency of the business in general.
Business rules can be derived manually from several sources; three main sources may be described as follows: policy statements and objectives of the organization, business processes, and external factors (e.g. laws and regulations). Tracing the sources of business rules can help the personnel to discover the need for changing them. If the content of a source changes, the rules relating to this source have to be modified or removed.
Business rules can be described in simple natural language, by means of decision tables or in the form of decision trees. If simple natural language is chosen, the rules are easily readable and accessible to all interested parties: business owner, business analyst, technical architect, and so on.
According to the business rules, the company offers each category of customers a different set of products and services, matching the different customer profiles.
As a result of this work, hundreds of business rules were selected, several examples of which are presented in Table 1.
Table 1. PEMP customer relationship business rules (columns: Business Rule, Formalized Rule, Source, Type)
Business Rule: If the volume of the sales order is from 80,000 to 120,000 rubles and the status of the client is "risk", then give a discount of 5%.
Formalized Rule: IF 80 000 < order_value < 120 000 AND status = "bad" THEN set discount 5%
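To make the formalized form of such rules concrete, the short sketch below shows one way a discount rule like the one in Table 1 could be encoded as data and evaluated in code. It is only an illustration: the rule list, field names and the discount_for helper are hypothetical and do not correspond to the actual PEMP rule base or to the Visual Rules Modeler representation used later in the paper.

# Hypothetical encoding of discount rules in the spirit of Table 1.
# Thresholds are in rubles; the statuses and the second rule are purely illustrative.
DISCOUNT_RULES = [
    {"min_order": 80_000, "max_order": 120_000, "status": "risk", "discount": 0.05},
    {"min_order": 120_000, "max_order": float("inf"), "status": "good", "discount": 0.10},
]

def discount_for(order_value, status):
    """Return the discount of the first rule whose condition matches, otherwise 0.0."""
    for rule in DISCOUNT_RULES:
        if rule["min_order"] < order_value < rule["max_order"] and status == rule["status"]:
            return rule["discount"]
    return 0.0

print(discount_for(100_000, "risk"))   # 0.05, mirroring the rule of Table 1

Keeping the rules as data rather than hard-coded conditions is what lets their values be changed without touching the application logic, which is the point made about flexible discounts below.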
We suppose that a business model based on business rules makes it possible for the plant to quickly adjust production and services to changing market conditions by applying a procedure of clustering clients according to their significance. The results of such clustering are used for risk rating evaluation and for a flexible discount system, which makes it possible to retain profitable clients who might otherwise churn to other companies.
For the experimental part of our research, the BRMS Visual Rules Modeler was selected and installed. As announced by Bosch Software Innovations GmbH, Visual Rules Modeler has become part of Visual Rules Suite version 11.2 and is now available for download [START_REF]Two Reputedly Opposed Approaches: On the Integration of Business Rules and Data Mining[END_REF][START_REF]Bosch Software Innovations[END_REF]. The following database structure, containing product and customer data, was developed (figure 1).
Figure 1. Database structure
In this database, the table «CUSTOMERS» contains the data of clients that are legal entities, such as title, description, legal address, phone number, e-mail address, branch of industry and the time the client has been operating on the market; in addition, the table contains several computed fields to store data such as the score, the volume of orders and the size of the discounts, computed as a result of applying the business rules. Data about customers' orders are saved in the tables «ORDER» and «ORDER_ITEMS». The table «PRODUCTS» contains the catalog of hydraulic products of the company.
Figure 2. Business rules that assign discounts to clients
With the Visual Rules Modeler, the business rules stream "Discount Assign" was created, which depends on the order volume and the client status. The "Discount Assign" rules stream illustrates the rule set that assigns discounts to customers. Business rules allow management to create a flexible system of discounts whose values can be easily modified (figure 2).
As can be seen from the database and database views, corresponding values have been inserted into database fields such as the score, the status, the order volume and the discounts. These values were computed in accordance with the developed business rules model for each customer and with the logic of assigning discounts to the different users. Thus, through the developed business rules model, companies can quickly adapt to changing environmental conditions and to a dynamic competitive environment.
At the testing stage, the data prepared for model testing were loaded into the database, and the discount percentages computed for different customers according to the business rules are presented in figure 3.
Figure 3. Client discount percentages computed according to the business rules
The rules stream in figure 4 shows that a business rules stream is actually displayed as a graphical interface for developing a program, which may contain both business rules for condition checking and SQL database commands such as "SELECT" or "INSERT", performed in a definite order according to the business process model.
Rules Discovery by FCA Method
One of the problems confronting business analysts is the problem of business rules discovery. This problem is very hard and has a very high dimension. Data mining methods may be used for extracting rules from client and service data (figure 5). The possibility of applying different data mining methods as a means of implementing such a feedback loop has been widely discussed in the scientific literature [START_REF] Poelmans | Formal Concept Analysis in Knowledge Discovery: a Survey[END_REF][START_REF] Lakhal | Efficient Mining of Association Rules Based on Formal Concept Analysis[END_REF]. Formal concept analysis is one such method. Formal concept analysis was originally proposed by R. Wille in 1982 [START_REF] Wille | Restructuring lattice theory: an approach based on hierarchies of concepts[END_REF] and finds application in different domains for discovering data structure.
Aside from selecting groups of clients and visualizing them, this method provides the possibility of searching for attribute dependencies in the form of implications. The clients may be regarded as objects, and their personal data, the kinds of services they use, the intensity of service consumption and the marketing actions performed with them as the attributes that the customers have. From these data, sets of objects with common attribute values may be discovered.
The data are presented as a formal context, that is, as a table whose rows correspond to objects and whose columns correspond to attributes. If an object has a given attribute value, then a one (or a cross) is placed at the intersection. The essence of the method consists in the following [START_REF] Rudolf | Formal Concept Analysis as Mathematical Theory of Concepts and Concepts Hierarchies[END_REF].
A formal context K := (G, M, I) consists of two sets G and M and a binary relation I ⊆ G × M, where G is the set of objects, M is the set of attributes, and (g, m) ∈ I signifies that object g has attribute m. A formal context may be presented as a binary matrix whose rows correspond to objects and whose columns correspond to attributes. Let us define, for A ⊆ G and B ⊆ M, the mappings φ(A) := {m ∈ M | gIm for all g ∈ A} and ψ(B) := {g ∈ G | gIm for all m ∈ B}. A pair (A, B) with A ⊆ G, B ⊆ M is called a formal concept of the context K if φ(A) = B and ψ(B) = A (in prime notation: A′ = B and B′ = A). The object set A is called the extent and the attribute set B the intent of the concept. A formal concept is thus a set of objects together with the corresponding attributes, such that every object has all attributes of the attribute set. When the extent of concept C2 is included in the extent of concept C1, that is Ext(C2) ⊆ Ext(C1), we say that C1 is a superconcept and C2 a subconcept. The concept hierarchy is defined by the subconcept-superconcept relation:
(A1, B1) ≤ (A2, B2) ⟺ A1 ⊆ A2 (⟺ B2 ⊆ B1).
For a formal context (G, M, I) and X ⊆ G, S ⊆ M, the derivation operators satisfy:
1. X1 ⊆ X2 ⟹ X2′ ⊆ X1′ for X1, X2 ⊆ G;
2. S1 ⊆ S2 ⟹ S2′ ⊆ S1′ for S1, S2 ⊆ M;
3. X ⊆ X′′ and X′ = X′′′ for X ⊆ G;
4. S ⊆ S′′ and S′ = S′′′ for S ⊆ M;
5. X ⊆ S′ ⟺ S ⊆ X′.
The ordered set of all formal concepts of (G, M, I) is denoted by L(G, M, I) and is called the concept lattice of (G, M, I). The infimum and supremum in L(G, M, I) are given by:
⋀_{j∈J} (X_j, S_j) = (⋂_{j∈J} X_j, (⋃_{j∈J} S_j)′′),
⋁_{j∈J} (X_j, S_j) = ((⋃_{j∈J} X_j)′′, ⋂_{j∈J} S_j).
The most important question is how to build the concept lattice for the context (G, M, I). The simplest answer is obtained by creating (X′′, X′) for all X ⊆ G or (S′, S′′) for all S ⊆ M.
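To illustrate these definitions, the sketch below computes the derivation operators and enumerates all formal concepts of a small binary context by brute force, and checks one implication. The context is a made-up toy example loosely inspired by the customer attributes of the PEMP case; it is not the context of figure 6, and the exhaustive enumeration is only practical for very small contexts.

from itertools import combinations

# Hypothetical toy context: objects (clients) and the attributes they possess.
INCIDENCE = {
    "c1": {"shipment", "time>10", "repair_discount"},
    "c2": {"shipment", "repair_discount", "service_discount"},
    "c3": {"time>10", "repair_discount"},
    "c4": {"shipment", "service_discount"},
}
OBJECTS = set(INCIDENCE)
ATTRIBUTES = set().union(*INCIDENCE.values())

def common_attributes(objs):          # A -> A' (the mapping phi)
    return ATTRIBUTES.copy() if not objs else set.intersection(*(INCIDENCE[g] for g in objs))

def common_objects(attrs):            # B -> B' (the mapping psi)
    return {g for g in OBJECTS if attrs <= INCIDENCE[g]}

def formal_concepts():
    """Every concept has the form (B', B'') for some attribute set B, so enumerate B."""
    concepts = set()
    for r in range(len(ATTRIBUTES) + 1):
        for B in combinations(sorted(ATTRIBUTES), r):
            extent = common_objects(set(B))
            intent = common_attributes(extent)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts

def implication_holds(premise, conclusion):
    """P => C is valid iff every object having all attributes of P also has all of C."""
    return common_objects(premise) <= common_objects(conclusion)

for extent, intent in sorted(formal_concepts(), key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))
print(implication_holds({"time>10"}, {"repair_discount"}))   # True in this toy context

Tools such as Concept Explorer, mentioned below, implement far more efficient algorithms, but the brute-force version above is enough to see how extents, intents and implications arise from the raw incidence table.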
In practice, in our work we use A. Yevtushenko's data analysis system "Concept Explorer" (http://sourceforge.net/projects/conexp/) [START_REF] Serhiy | System of data analysis "Concept Explorer[END_REF]. The context for our PEMP example is depicted in figure 6; it contains data about customers, their consumption profiles, the score values of the clients, and data on the efficiency of preferences and discounts. The concept lattice for our case is presented in figure 7. This lattice provides an opportunity to explore and interpret the relationships between concepts.
Figure 7. The concept lattice
The notion of dependency between attributes is based on the following idea: if for all objects in the context for which some property P is true, some other property C is also true, then the implication P ⇒ C is valid. More precisely, the implication P → C is valid in a context K = (G, M, I), where P ⊆ M and C ⊆ M, iff C ⊆ P′′.
Rules Quality Criteria
The implication means that all objects of the context which contain the attributes P also contain the attributes C; that is, in situation P the manager must make decision C. Let us define the following measures of rule quality:
Support: supp(P) = Card(P′)/Card(G), the proportion of objects possessing all attributes of P among all objects.
Confidence: conf(P ⇒ C) = supp(P ∪ C)/supp(P).
Lift: lift(P ⇒ C) = supp(P ∪ C)/(supp(P) · supp(C)).
Conviction: conv(P ⇒ C) = (1 − supp(C))/(1 − conf(P ⇒ C)).
We assume that data may be incomplete or contradictory, and therefore the implications derived by Concept Explorer should be considered within the frame of non-monotonic logic [START_REF] Brewka | Nonmonotonic reasoning: logical foundations of commonsense[END_REF] and realized as rules of a defeasible theory [START_REF] Guido | Argumentation Semantics for Defeasible Logic[END_REF]. Defeasible logic is a practical non-monotonic logic containing facts, strict rules, defeasible rules and supporting relations.
Non-monotonic reasoning is an approach that allows reasoning with incomplete or changing information. More specifically, it provides mechanisms for taking back conclusions that, in the presence of new information, turn out to be wrong, and for deriving new, alternative conclusions instead.
The non-monotonic subset of the rules is obtained by computing the lattice corresponding to the subcontext consisting of the original context without those attributes which do not apply to the set of all objects. FCA computes the minimal base of implications corresponding to the actual context and asks the user whether each single implication is valid in the universe of objects or whether a counterexample is known. The counterexamples are then added to the context and the implications are computed anew until all implications are accepted.
Conclusion
In this paper we have considered the conception of an adaptive EIS, which has the possibility of tuning its business rules set by mining customer data with the Formal Concept Analysis method. This approach is especially important when the number of business rules, which change over time, is counted in the thousands.
We have shown that, on the way to including means for automated decision making in the structure of a modern EIS, an essential role may be played by the Formal Concept Analysis method, which can help to find specific dependencies between the observed customer data and the services provided by the business.
The next step may be research on adaptive EIS with a varying structure, including the database structure, the procedures and the semantic concepts of the language describing the application domain.
Figure 4. Rule stream as a program, containing business rules and SQL operators
Figure 5. Adaptive EIS, self-adjusting to the customers' consumption profiles and motivating customers to increase their volume of consumption. The figure presents the adaptive EIS, in which a feedback loop is used to adjust EIS services on the basis of analyzing customer data and to correct services and products to fit the customer consumption profile.
Figure 6. The context describing the customers of the PEMP
The implication sets discovered by Concept Explorer are business rules, some examples of which are presented below.
Business rules with confidence 100%:
IF branch of the client = equipment THEN repair_discount_5%;
IF time on the market > 10 THEN repair_discount_5%;
IF time on the market > 20 THEN repair_discount_10% AND service_discount AND consulting_free;
IF branch of the client = shipment AND time on the market > 20 THEN serv_discount AND sale_discount = 10% AND repair_discount.
Rules with confidence less than 100%:
IF branch of the client = shipment THEN serv_discount (confidence 80%);
IF branch of the client = industry THEN consulting_free (confidence 60%).
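The rule-quality measures defined in the Rules Quality Criteria section can be computed directly from such data. The sketch below does so for a hypothetical set of client rows; the numbers it prints (for example a confidence of 0.8 for the shipment rule) come from this made-up data, not from the real PEMP context.

# Hypothetical client rows: each row is the set of attributes that hold for one client.
ROWS = [
    {"shipment", "serv_discount"},
    {"shipment", "serv_discount", "sale_discount"},
    {"shipment", "serv_discount"},
    {"shipment"},                          # the one shipment client without serv_discount
    {"shipment", "serv_discount"},
    {"industry", "consulting_free"},
]

def supp(items):
    """Fraction of rows that contain every attribute in `items`."""
    return sum(1 for r in ROWS if items <= r) / len(ROWS)

def confidence(P, C):
    return supp(P | C) / supp(P)

def lift(P, C):
    return supp(P | C) / (supp(P) * supp(C))

def conviction(P, C):
    c = confidence(P, C)
    return float("inf") if c == 1 else (1 - supp(C)) / (1 - c)

P, C = {"shipment"}, {"serv_discount"}
print(confidence(P, C))                    # 0.8: four of the five shipment clients get serv_discount
print(lift(P, C), conviction(P, C))        # 1.2 and about 1.67 on this toy data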
"1003482",
"1003483",
"1003484"
] | [
"487707",
"487707",
"487708"
] |
01483887 | en | [
"spi",
"info"
] | 2024/03/04 23:41:48 | 2016 | https://hal.science/hal-01483887/file/New%20Evaluation%20Scheme%20for%20Software%20Function%20Approximation%20with%20Non-Uniform%20Segmentation.pdf | Justine Bonnot
Erwan Nogues
Daniel Menard
New Evaluation Scheme for Software Function Approximation with Non-Uniform Segmentation
New evaluation scheme for software function approximation with non-uniform segmentation
I. INTRODUCTION
The technological progress in microelectronics as well as the Internet of Things era requires that embedded applications integrate more and more sophisticated processing. Most embedded applications incorporate complex mathematical processing based on the composition of elementary functions. The design challenge is to implement these functions with enough accuracy without sacrificing the performance of the application, measured in terms of memory usage, execution time and energy consumption. The targeted processors are Digital Signal Processors (DSP). To implement these functions, several solutions can be used. Specific algorithms can be adapted to a particular function [START_REF] William | Numerical recipes in C (2nd ed.): the art of scientific computing[END_REF], for instance the approximation by Tchebychev polynomials or by convergent series. For a hardware implementation, numerous methods have been proposed. Look-Up Tables (LUT) and bi- or multipartite tables [START_REF] De Dinechin | Multipartite table methods[END_REF] are used. These tables contain the values of the function on the targeted segment and are impossible to include in embedded systems because they may take a lot of memory space for a given precision.
For the time being, few software solutions exist for computing these functions. Libraries such as libm can be used but target scientific computation. They offer high accuracy but are slow on the targeted architecture (DSP). The implementation of the CORDIC (COordinate Rotation DIgital Computer) algorithm, which computes the values of trigonometric, hyperbolic or logarithmic functions [START_REF] Volder | The CORDIC trigonometric computing technique[END_REF], can also be used.
In this work, the software implementation of mathematical functions in embedded systems is considered. Low cost and low power processors are targeted. To achieve the cost and power constraints, no floating-point unit is considered available and processing is carried out with fixed-point arithmetic. Nevertheless, this arithmetic offers a limited dynamic range and precision. Then, the polynomial approximation of a function gives a very accurate result in a few cycles if the segment I on which the function is computed is segmented precisely enough to approximate the function by a polynomial on each segment.
To approximate the function, the Remez algorithm is used. The degree of the approximating polynomials can be chosen and impacts the number of segments needed to suit the accuracy constraint, as well as the computation time and the memory required. Reducing the polynomial order increases the approximation error. Thus, to obtain a given maximal approximation error, the number of segments increases, implying an increasing number of polynomials and then a larger memory footprint. On the contrary, for a given data-path wordlength, increasing the polynomial order raises the computation errors in fixed-point coding due to more mathematical and scaling operations. Even though a higher polynomial order implies a smaller approximation error and consequently fewer segments, a higher fixed-point computation error counteracts this benefit. Thus, for fixed-point arithmetic, the polynomial order is relatively low. Consequently, to obtain a low maximal approximation error, the segment size is reduced. Accordingly, non-uniform segmentation is required to limit the number of polynomials to store in memory and to obtain a moderate approximation error. Then, each segment has its own approximating polynomial and the coefficients of each polynomial are stored in a single table P.
Consequently, the challenge of the polynomial approximation method is to find an accurate segmentation of the segment I as well as the fastest computation of the index of the polynomial in table P corresponding to the input value x. In this paper, the computation of the index of the polynomial associated to an input value x is considered. Different non-uniform segmentations [START_REF] Lee | Hierarchical segmentation for hardware function evaluation[END_REF] associated to a hardware method of indexing the segmentation have been proposed, but they target only hardware implementations. Moreover, these methods do not provide flexibility in terms of segmentation. The segmentation used in the proposed method is a non-uniform segmentation.
In this paper, a new indexing scheme for software function evaluation, based on polynomial evaluation with non-uniform segmentation, is proposed. This recursive scheme enables exploring the trade-off between the memory size and the function evaluation time. Besides, compared to table-based methods, our approach reduces significantly the memory footprint. The proposed method is compared to an indexing method using only conditional structures and shows a significant reduction in the computation time. The determination of the best non-uniform segmentation is beyond the scope of this paper.
The rest of the paper is organized as follows. First, the related work is detailed in Section II. The evaluation scheme using a non-uniform segmentation is detailed in Section III. Finally, the experiment results as well as comparisons with indexing method using conditional structures and table-based method are given in Section IV.
II. RELATED WORK
To compute the value of a function, iterative methods as the CORDIC algorithm [START_REF] Volder | The CORDIC trigonometric computing technique[END_REF] can be a software solution. That algorithm computes approximations of trigonometric, hyperbolic or logarithmic functions with a fixed precision. Nevertheless, that algorithm is composed of a loop whose number of iterations depends on the precision required and may take too long to compute precise values. A table needs to be stored too, whose size is the same as the number of iterations. The asset of that method is the sole use of shifts and additions that makes it particularly adapted to low cost hardware solutions.
Numerous hardware methods have been developed for polynomial approximation. Firstly, the LUT method consists in approximating the function with 0-degree polynomials on the segment I, which is segmented beforehand so that the error criterion is fulfilled on each segment. That method is the most efficient in terms of computation time but the greediest concerning the memory space required, since the segmentation is uniform. Improvements of that method are the bi- or multi-partite methods presented in [START_REF] De Dinechin | Multipartite table methods[END_REF]. The function f is approximated by linear functions on the previously segmented segment I. Two tables need to be stored: a table containing the initial values of each segment obtained by the segmentation, and a table of offsets to compute any value belonging to this segment. The multi-partite method exploits the symmetry on each segment and reduces significantly the size of the tables. That method offers quick computations and reduced tables to store, but it is limited to low-precision requirements and is implemented only for hardware function evaluation. Non-uniform segmentation followed by polynomial approximation is developed in [START_REF] Lee | Hierarchical segmentation for hardware function evaluation[END_REF] for hardware function evaluation. The initial segment is recursively segmented until the error criterion is fulfilled on each segment. The segmentation is done according to 4 predefined segmentation schemes, limiting the ability to fit the function evolution. Afterwards, AND and OR gates are used to find the segment corresponding to an input value x. LUTs are used to store the coefficients of the polynomials. Nevertheless, that method does not allow controlling the depth of segmentation of the initial segment. Finally, that method targets only hardware implementation.
III. PROPOSED METHOD
A. Non-uniform Segmentation
The function f is approximated by the Remez algorithm as in [START_REF] Lee | Hierarchical segmentation for hardware function evaluation[END_REF]. That algorithm seeks the minimax, i.e. the polynomial that best approximates the function according to the infinity norm on a given segment. The Remez algorithm is based on several inputs: the function f, the segment I = [a; b] on which f is approximated (f has to be continuous on I), and the degree N_d of the approximating polynomials. In the proposed method, the Remez algorithm is called through the Sollya tool [START_REF] Chevillard | Sollya: An environment for the development of numerical codes[END_REF].
The Remez approximation algorithm has an approximation error ε_app. That error cannot be controlled by the Remez algorithm but can be computed on the segment I. ε_app is defined as the infinity norm of the difference between the function f and its approximating polynomial P, ε_app = ||f - P||_∞. On the segment I, the approximation error can be greater than the maximal error required by the user. The segmentation of I allows the error criterion provided by the user to be met on each segment. The coefficients of the approximating polynomials on each segment are saved in the table P. Once the segmentation is determined, the challenge is to index P efficiently. To ease the addressing, the bounds of the obtained segments are sums of powers of two.
The segmentation of I is modeled by a tree, as depicted in figure 1. That tree is composed of nodes and edges. The root of the tree contains the bounds of the segment I and the children of the root contain the bounds of the segments obtained at each level of segmentation. The leaves of the tree correspond to the segments to which a polynomial is associated, i.e. the segments on which the error criterion is verified. The depth of the tree is a trade-off between the number of polynomials and the computation time of the index of a polynomial. For instance, the binary tree obtained by subdividing each segment into two equal parts is the deepest: it leads to the minimum number of polynomials but the computation of the index takes the longest. On the contrary, the tree whose depth is 1 has the greatest number of polynomials but the computation of the index is the fastest. If the depth of the binary tree is N, then the number of polynomials in the tree of depth 1 is 2^N. In the tree of figure 1, the segment I is segmented into 2^2 segments, which leads to 2 segments on which the error criterion is fulfilled (leaves n_{1,0} and n_{1,2}). The segment represented by the node n_{1,1} has an approximating polynomial that does not approximate f accurately enough, so that segment is segmented into 2^1 segments, represented by the leaf n_{2,1} and the node n_{2,2}, which is segmented again into 2^1 segments. The segment represented by the node n_{1,3} is segmented into 2^2 segments, which leads to 3 leaves (n_{2,4}, n_{2,5} and n_{2,7}) and a last node to segment, n_{2,6}, into 2^2 segments.
Fig. 2. Indexing tables associated to the tree in figure 1
B. Evaluation scheme
The main contribution of this paper is a new method to index the polynomial coefficients table P in a minimum time. Each line of this table represents a polynomial and each column the coefficient of the degree i monomial. The index corresponding to the input value x is the number of the line of that table used to apply the function to x, i.e. the segment in which the value x is. Once the non-uniform segmentation is obtained, polynomial coefficients are saved in P. The evaluation scheme, to approximate the function f , is composed of two parts corresponding to index computation and polynomial computation as presented in figure 3. The step of index computation determines from the w m most significant bits the index i used to address the polynomial table P. The step of polynomial computation evaluates the polynomial P i with x. 1) Index Computation: The aim of this step is to determine for an input value x the index associated to the segment in which x is located to get the coefficients of the polynomial approximating f on that segment. The problem is to find the path associated to the segment in a minimal time. Timing constraint discards solutions based on conditional structures and requiring comparison, test and jump instructions, as shown in Section IV. The proposed approach is based on the analysis and interpretation of specific bits of x, formatted using fixedpoint coding. Thanks to the sum of powers of two segment bounds, masking and shifting operations can be used to align these bits on the LSB and select them.
Since the tree is not well balanced due to the non-uniform segmentation, for any tree level the number of bits to analyze is not constant and depends on the considered node. Indeed, all the nodes associated to a given level do not necessarily have the same number of children. The indexing method uses a table T which stores, for each level, a structure for each node containing the mask, the shift and the offset to pass from one level to the following, given the bits of x. A line of the table T is associated to each tree level. Each line of T contains N_nl elements, where N_nl is the total number of nodes in this level l. To each intermediate node n_{l,j} (where l is the level and j the node) a mask (T[l][j].M) and a shift (T[l][j].s) are associated and used to select the adequate bits of x to move from node n_{l,j} to the next node n_{l+1,j'} located at level l + 1. Moreover, an offset (T[l][j].o) is used to compute the index of the polynomial associated to the considered segment. The amount of information to store in the table T depends on the depth of the tree, the number of polynomials, and the degree of the approximating polynomials.
The pseudo code for the computation of the index of the polynomial corresponding to an input value x, from the tables detailed previously, is presented in Algorithm 1. For each node n lj , an offset o lj is provided and the index is obtained by summing the offsets to the different ∆ l , corresponding to the result of the bit-masking at each level l:
i = Σ_{l=0}^{N_l - 1} (o_l + Δ_l)    (1)
The pseudo code to compute the index is compact and composed of few operations: three memory readings, a shift, a bitwise logic operation and two additions.
Algorithm 1 Indexing of the approximating polynomials
i := 0
for l = 0 to N_l - 1 do
    o := T[l][i].o
    Δ := (x >> T[l][i].s) & T[l][i].M
    i := i + Δ + o
end for
Return i
The value of the mask T[l][j].M associated to a node n_{l,j} is obtained from the number of children NbCh(n_{l,j}) of that node with the following expression:
T[l][j].M = NbCh(n_{l,j}) - 1    (2)
The value of the shift T[l][j].s corresponding to a node n_{l,j} is obtained from the number of children NbCh(n_{l,j}) and from the shift associated to the node n_{l-1,j'} corresponding to the parent of node n_{l,j}, with the following expression:
T[l][j].s = T[l-1][j'].s - log2(NbCh(n_{l,j}))    (3)
The value of the offset T[l][j].o associated to a node n_{l,j} is obtained from the numbers of children NbCh(n_{l,k}) of the nodes preceding n_{l,j} at level l, with the following expression:
T[l][j].o = Σ_{k=0}^{j-1} (NbCh(n_{l,k}) - 1)    (4)
For the considered example in figure 1, the tree is made up of 3 levels, leading to 3 lines in the table T. As an example, let us consider the 16-bit value x with x[15..10] = 111001b. In the tree of figure 1, at level 1, the 2 most significant bits x[15..14] are tested. The index i_1 = Δ_0 = 3 and leads to node n_{1,3}. Then, due to the mask M_1 for node n_{1,3}, the bits x[13..12] are tested. The index i_2 = i_1 + o_1 + Δ_1 = 3 + 1 + 2 = 6 and leads to node n_{2,6}. Finally, due to the mask M_2 for node n_{2,6}, the bits x[11..10] are tested. The index i_3 = i_2 + o_2 + Δ_2 = 6 + 1 + 1 = 8 and leads to node n_{3,8}. The index of the polynomial associated to x is 8; the associated polynomial is P_8.
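To illustrate the bit-driven descent, the sketch below encodes the tree of figure 1 directly as nested lists and walks it using only shifts and masks on the most significant bits of x, instead of the flattened table T of Algorithm 1. The encoding and helper names are ours and only serve to check that the value with x[15..10] = 111001b ends up on the leaf of polynomial P_8, as in the worked example; the actual method precomputes the masks, shifts and offsets into T so that no tree structure has to be traversed at run time.

LEAF = "leaf"

# Segmentation tree of figure 1: an internal node is the list of its children
# (always a power-of-two count), a leaf is the marker string.
TREE = [
    LEAF,                                          # n_{1,0}
    [LEAF, [LEAF, LEAF]],                          # n_{1,1}: leaf n_{2,1}, node n_{2,2} with 2 leaves
    LEAF,                                          # n_{1,2}
    [LEAF, LEAF, [LEAF, LEAF, LEAF, LEAF], LEAF],  # n_{1,3}: n_{2,4}, n_{2,5}, n_{2,6} (4 leaves), n_{2,7}
]

def count_leaves(node):
    return 1 if node == LEAF else sum(count_leaves(child) for child in node)

def polynomial_index(tree, x, msb=15):
    """Walk the tree by reading bits of x from the MSB down and return the
    left-to-right number of the leaf reached, i.e. the polynomial index."""
    index, node, bit = 0, tree, msb
    while node != LEAF:
        nbits = len(node).bit_length() - 1                 # log2 of the number of children
        child = (x >> (bit - nbits + 1)) & (len(node) - 1)
        index += sum(count_leaves(node[k]) for k in range(child))
        node, bit = node[child], bit - nbits
    return index

x = 0b111001 << 10              # 16-bit word whose bits 15..10 are 111001b (Q6.10 input)
print(polynomial_index(TREE, x))   # 8 -> polynomial P_8, as in the worked example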
2) Polynomial Computation:
To improve the speed performance, fixed-point arithmetic is used to code the coefficients of the polynomials approximating the function, and they are stored in the bidimensional table P[i][j]. The term i is the index of the segment and j the degree of the monomial whose coefficient is stored. Indeed, the computation of the value of P_i(x), where P_i is the approximating polynomial on the segment indexed by i, can be decomposed using the Horner rule, reducing the computation errors. According to the Ruffini-Horner algorithm [START_REF] Taylor | The calculus of observations[END_REF], each polynomial P_i(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 can be factored as:
P_i(x) = ((((a_n x + a_{n-1})x + a_{n-2})x + ...)x + a_1)x + a_0
Using that rule, the calculation scheme can be decomposed in a basic loop kernel presented in Algorithm 2.
Algorithm 2 Computation of the polynomial P i in fixed-point.
IntDP z;
z := P[i][N_d - 1]
for d = N_d to 1 do
    z := ((IntSP) z × x) >> D[i][d] + P[i][d]
end for
Taking into account the loop kernel, shifts are necessary to compute the value of P i (x) in fixed-point coding because the output of the multiplier can be on a greater format than necessary. By using arithmetic of intervals on each segment, the real number of bits necessary for the integer part can be adjusted with left shift operation.
A right shift operation can be necessary to put the two adder operands on the same format. In our case, a quantification from double (IntDP) to single precision (IntSP) is done and creates a source of error ε_fxp. When the two shift values have been computed, they can both be added to perform a single shift on the output of the multiplier; this shift is stored in the bidimensional table D[i][d], where d corresponds to the iteration of the loop of Algorithm 2 and i is the index of the segment.
Finally, since parameter N d is known and constant for all P i on an approximation, the loop can be unrolled so as to avoid overhead due to loop management.
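As an illustration of the fixed-point Horner scheme, the sketch below mimics the structure of Algorithm 2 with plain integer arithmetic: the coefficients and the input are quantized, and each Horner step rescales the double-width product with a right shift before adding the next coefficient. For simplicity it uses a single Q format per operand, so the shift is constant instead of coming from the per-segment, per-iteration table D, and the toy polynomial and formats are ours rather than coefficients produced by the Remez approximation.

def to_fixed(value, frac_bits):
    """Quantize a real value to a signed fixed-point integer with frac_bits fractional bits."""
    return int(round(value * (1 << frac_bits)))

def horner_fixed(coeffs_q, x_q, x_frac):
    """Evaluate a polynomial with Horner's rule on fixed-point operands.
    All coefficients share one Q format; every product is shifted right by x_frac
    so that the accumulator stays in the coefficient format (cf. Algorithm 2)."""
    z = coeffs_q[-1]                        # start from the highest-degree coefficient
    for c in reversed(coeffs_q[:-1]):
        z = ((z * x_q) >> x_frac) + c       # rescale the double-width product, then add
    return z

C_FRAC, X_FRAC = 14, 10                     # coefficients in Q1.14, input in Q6.10
coeffs = [0.5, -0.25, 0.125]                # toy polynomial 0.5 - 0.25*x + 0.125*x^2
coeffs_q = [to_fixed(c, C_FRAC) for c in coeffs]
x_q = to_fixed(0.75, X_FRAC)

fixed_result = horner_fixed(coeffs_q, x_q, X_FRAC) / (1 << C_FRAC)
float_result = sum(c * 0.75**i for i, c in enumerate(coeffs))
print(fixed_result, float_result)           # both 0.3828125 with these formats and this input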
IV. EXPERIMENTS
To illustrate the proposed method, the function exp(-x) is studied on the segment [2^-6; 2^5] and the DSP C55x from Texas Instruments [START_REF]Texas Instruments. C55x v3.x CPU Reference Guide[END_REF] is considered. The considered maximum error is ε = 10^-2. The trees for 1- to 3-degree polynomials have been computed with the depth varying from the maximal depth (binary tree) to a depth of 2. The evolution of the memory footprints S_n-poly and S_n-tot is observed in figure 4, where S_n-poly is the memory footprint for the polynomial coefficients (table P) and shifts (table D for fixed-point computations) and S_n-tot is S_n-poly added to the memory footprint for the indexing table T. The trees are computed with a maximal error criterion for the approximation, ε_app, equal to half of ε (5 · 10^-3). The data and coefficients are in single precision (16 bits) and the coding of the input is Q6,10. The fewer levels the tree has, the greater the number of polynomials. Consequently, S_n-poly is high. Nevertheless, since the tree has a reduced depth, few tables are needed. On the contrary, the more levels the tree has, the fewer polynomials there are and S_n-poly is low. However, the table T is then the largest.
The total memory space can be characterized as a function of the computation time for a given error. This performance can be used as a Pareto curve during the system design phase to select the configuration leading to a good trade-off. The function exp(-x) on the segment [2^-6; 2^5] can be approximated by a polynomial of degree 1 to 4 with data coded on 16 bits while requiring a maximal total error of 10^-2. The maximum values of the fixed-point coding error are presented in Table I. A 5-degree polynomial does not suit this approximation, since the fixed-point coding error ε_fxp obtained is too large. However, a 5-degree polynomial would fulfill the error criterion with data coded in double precision, but at the expense of a high increase in execution time.
The expression of the computation time t depending on the degree N_d and the number of levels in the tree N_l, in single precision with the provided code, is:
t = 8 · N_l + 3 · N_d + 9    (5)
Figure 5 provides the evolution of the total memory footprint S_n-tot and the computation time t. The tree with a low number of levels implies a high memory footprint but a minimum computation time. Then, the computation time increases with the number of levels in the tree while the required memory decreases until reaching a minimum. Finally, the memory slightly increases with the computation time and the number of levels due to the increase of the table T size.
TABLE I. Range of the fixed-point coding error ε_fxp depending on the polynomial degree for the approximation of exp(-x)
Degree 1: [-2.8 · 10^-3; 0]
Degree 2: [-2.5 · 10^-3; 0]
Degree 3: [-2.5 · 10^-3; 0.3 · 10^-3]
Degree 4: [-2.4 · 10^-3; 1.5 · 10^-3]
Degree 5: [-2.7 · 10^-3; 35.2 · 10^-3]
The proposed method is compared to the standard solution libm, to the LUT method and to an indexing method using conditional structures. However, the proposed method cannot be compared to the hardware implementation of bi- or multipartite tables since the proposed method is software-based. The computation time obtained with our method shows a mean speed-up of 98.7 compared to the implementation by Texas Instruments of libm on the DSP C55x [START_REF]Texas Instruments. C55x v3.x CPU Reference Guide[END_REF]. Our approach is compared to the method using only conditional structures to index P. To find the segment in which an input value x lies, each segment is tested using if statements until the right one is found. The computation time of the index with that method is not constant and depends on the segment containing x. To take into account this variability, the mean execution time is considered. The results are presented for 1- to 3-degree approximating polynomials. Given that our approach provides different trade-offs, the minimal, the mean and the maximal speed-ups are considered. The overheads in memory size and execution time of the conditional indexing method compared to our approach are presented in Table II. The segmentation and the approximation conditions are the same as in figure 5. Our approach requires more memory (overhead lower than 1) due to the storage of table T, but reduces significantly the execution time (overhead significantly greater than 1).
Finally, the memory space required by the proposed method (Mem_prop) is compared in Table III to the LUT method (Mem_tab) for several approximations. The memory required is given in bytes. Our approach reduces significantly the memory footprint compared to the LUT method.
V. CONCLUSION
The method proposed in this paper enables system designers to efficiently evaluate the cost of approximating a function. Indeed, Pareto curves giving the memory footprint as a function of the computation time allow a trade-off between computation time and required memory space to be chosen. That trade-off is obtained thanks to the different degrees of the approximating polynomials as well as the depth of the tree storing the segmentation of the segment I on which the function is computed. Besides, the new scheme for indexing the table of polynomials shows a significant reduction in computation time and does not need significant supplementary memory space compared to an indexing method using only conditional structures. Compared to the libm implementation, the proposed method shows a significant computation time reduction for low-degree polynomials, since the mean speed-up of the proposed method on the DSP C55x is 98.7.
Fig. 1. Example of a tree obtained with a non-uniform segmentation
Fig. 3. Evaluation scheme integrating index and polynomial computation. Illustration for a three-level tree
Fig. 4. Evolution of the memory footprints S_n-poly (tables P and D) and S_n-tot (tables P, D and T)
Fig. 5. Pareto curves for approximating exp(-x) on [2^-6; 2^5]
TABLE III. Memory requirements for the proposed method Mem_prop and the LUT method Mem_tab
"961124",
"980"
] | [
"185974",
"185974",
"185974"
] |
01483891 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01483891/file/978-3-642-36611-6_4_Chapter.pdf | Raykenler Yzquierdo-Herrera
Rogelio Silverio-Castro
Manuel Lazo-Cortés
email: manuelslc@uci.cu
Sub-process discovery: Opportunities for Process Diagnostics
Keywords: Business process, process diagnostics, process mining, trace alignment
Most business processes in real life are not strictly ruled by the information systems that support them. This behavior is reflected in the traces stored by information systems. It is useful to diagnose in the early stages of business process analysis. Process diagnostics is part of process mining and encompasses process performance analysis, anomaly detection, and inspection of interesting patterns. The techniques developed in this area have problems detecting the sub-processes associated with the analyzed process and framing anomalies and significant patterns within the detected sub-processes. This proposal allows segmenting the aligned traces and forming representative groups of the sub-processes that compose the analyzed process. The tree of building blocks obtained reflects the hierarchical organization established between the sub-processes, considering the main execution patterns. The proposal allows greater accuracy in the diagnosis. Based on the findings, implications for theory and practice are discussed.
Introduction
Most enterprises and businesses use information systems to manage their business processes [1]. Enterprise Resource Planning systems, Supply Chain Management systems, Customer Relationship Management systems, and systems for Business Process Management themselves are few of the examples that could be mentioned. Information systems register actions in the form of traces as a result of executing instances or cases of a business process. The discovery of processes from the information contained in the traces is part of process mining or workflow mining [2,3]. The discovery of the process model based on traces allows comparisons with the prescribed or theoretical model. Recent research works describe process mining application as support to the "operationalization" of the enterprise processes. "The idea of process mining is to discover, to monitor, and to improve real processes (i.e., not assumed processes) by extracting knowledge from event logs readily available in today's information systems" [4].
Most business processes in real life are not strictly ruled by the information systems on the background. This means that although there is a notion of a process, actors can get away from it, or even ignore it completely. In these environments, it may be wise to start a process improvement or to establish a process quality control to discover the actual running process [5][START_REF] Bose | Trace Alignment in Process Mining: Opportunities for Process Diagnostics[END_REF][START_REF] Song | Trace Clustering in Process Mining[END_REF].
It is useful to diagnose in early stages of business process analysis. Process diagnostics is part of process mining and it encompasses process performance analysis, anomaly detection, and inspection of interesting patterns [START_REF] Bose | Process diagnostics using trace alignment: Opportunities, issues, and challenges[END_REF]. Diagnosis provides a holistic view of the process, the most significant aspects of it and of the techniques that can be useful in further analysis.
The techniques developed in this area have problems detecting the sub-processes associated with the analyzed process and framing anomalies and significant patterns within the detected sub-processes [START_REF] Bose | Trace Alignment in Process Mining: Opportunities for Process Diagnostics[END_REF][START_REF] Bose | Process diagnostics using trace alignment: Opportunities, issues, and challenges[END_REF].
This proposal allows segmenting the aligned traces and forming representative groups of the sub-processes that compose the analyzed process. The tree of building blocks obtained reflects the hierarchical organization established between the sub-processes, considering the main execution patterns. For each case, the building blocks created allow grouping segments of the traces which can be significant for analysis.
The rest of this paper is organized as follows: section 2 introduces some related works; in section 3, the methodological approach is presented. Furthermore, in section 4, the application of the proposed algorithm in a real environment and its results are discussed. Finally, conclusions are given in section 5.
Related Works
Among the most used techniques on log visualization, Dotted chart analysis can be found [START_REF] Song | Supporting process mining by showing events at a glance[END_REF]. This technique is a "Gantt charts analogous technique, showing a `helicopter view' of the event log and assisting in process performance analysis by depicting process events in a graphical way, and primarily focuses on the time dimension of events" [START_REF] Bose | Process diagnostics using trace alignment: Opportunities, issues, and challenges[END_REF]. Business analysis is manually made from the dotted chart. Manual inspection and comprehension of the dotted chart becomes cumbersome and often infeasible to identify interesting patterns over the use of logs with medium to large number of activities (within few tens to hundreds).
Another commonly used visualization technique is stream scope visualization. It is based on the event class correlations [START_REF] Günther | Process Mining in Flexible Environments[END_REF]. Using stream scope visualization, patterns of co-occurring events may be easily recognized by their vicinity. However, the technique is restricted by its inability to provide a holistic view of the event log, although it visualizes each trace separately.
The use of tandem arrays and maximal repeats to capture recurring patterns within and across the traces is proposed by Bose and Van der Aalst [START_REF] Bose | Abstractions in Process Mining: A Taxonomy of Patterns[END_REF]. This work has two limitations, the number of uncovered patterns can be enormous, and the patterns uncovered are atomic (the dependencies/correlations between patterns need to be discovered separately).
The Conformance checking allows to detect inconsistencies/deviations between a process model and its corresponding execution log [START_REF] Rozinat | Conformance checking of processes based on monitoring real behavior[END_REF][START_REF] Adriansyah | Towards Robust Conformance Checking[END_REF]. Conformance checking as a trend has inherent limitations in its applicability, especially for diagnostic purposes.
It assumes the existence of a preceding process model. However, in reality, process models are either not present or if present are incorrect or outdated [START_REF] Bose | Trace Alignment in Process Mining: Opportunities for Process Diagnostics[END_REF].
At this point research works that arise with interesting patterns and anomalies detection were shown. Further on, focus will be pointed to sub-processes detection. In this sense, investigations that obtain cluster activity in the analyzed process can be mentioned, which can be useful to understand the context of certain anomalies. These research works are not highly recommended for real environments either is difficult to know the relationship established between the activities that form a group [5,[START_REF] Aalst | ProcessMining: A Two-Step Approach to Balance Between Underfitting and Overfitting[END_REF]. Those techniques do not provide a holistic view of the process. The Fuzzy Miner discovery technique allows to obtain cluster activities, but it considers that each activity belongs to a single node [START_REF] Günther | Fuzzy Mining: Adaptive Process Simplification Based on Multi-Perspective Metrics[END_REF].
The insufficient ability to detect sub-processes makes it complicated, on many occasions, to contextualize the detected aspects and to understand their causes. This limitation persists in the work developed by Van der Aalst and Bose (2012) [START_REF] Bose | Process diagnostics using trace alignment: Opportunities, issues, and challenges[END_REF], even though that research yields the best results obtained in diagnostics, since it makes it possible to identify recurring patterns and provides a comprehensive holistic view of the process.
Methodological approach
Initially, authors present a set of necessary definitions for a better understanding of the proposal.
Definition 1 (Business process): A business process consists of a set of activities that are performed in coordination in an organizational and technical environment. These activities jointly realize a business goal. Each business process is enacted by a single organization, but it may interact with business processes performed by other organizations [START_REF] Weske | Business Process Management. Concepts, Languages, Architectures[END_REF].■ Definition 2 (Sub-process): A sub-process is just an encapsulation of business activities that represent a coherent complex logical unit of work. Sub-processes have their own attributes and goals, but they also contribute to achieving the goal of the process. A sub-process is also a process and, an activity, its minimal expression.■ A process can be decomposed into multiple sub-processes using the following workflow patterns: Sequence: two sub-processes are arranged sequentially, if one occurs immediately after the other sub-process. Choice (XOR or OR): two sub-processes are arranged as options in a decision point; if on each case or process instance only one (XOR) or both in any order (OR) occur. Parallelism: two sub-processes are arranged in parallel if both occur simultaneously. Loop: A loop occurs when a sub-process is repeated multiple times.
Sub-processes can be decomposed into further sub-processes down to the level of atomic activities. This allows building a tree in which each level corresponds to a lower level of abstraction.
Definition 3 (Trace and event log). Let Σ denote the set of activities. Σ+ is the set of all non-empty finite sequences of activities from Σ. Any T ∈ Σ+ is a possible trace. An event log L is a multi-set of traces [START_REF] Bose | Process diagnostics using trace alignment: Opportunities, issues, and challenges[END_REF]. ■ Definition 4 (Building block and decomposition into building blocks): Let us denote by S the set of all sub-processes that compose the process P, L the event log that represents the executed instances of P, A the matrix obtained by trace alignment from L, and Q the set of all sub-matrices over A (trace alignment uses the technique developed by Bose and Van der Aalst (2012) [START_REF] Bose | Process diagnostics using trace alignment: Opportunities, issues, and challenges[END_REF]).
Let us denote by Q' the set of sub-matrices that represent the sub-processes of S, such that Q' ⊆ Q. Let C_i, C_j, C_j+1 ∈ Q'; the sequence relationship between two sub-processes represented by C_j and C_j+1 is denoted by C_j >' C_j+1. Analogously, the choice relationship is denoted by C_j #' C_j+1 and the parallelism relationship by C_j ||' C_j+1. The loop over C_j is denoted by (C_j)*.
Let s_i ∈ S be the process represented by a matrix C_i ∈ Q' and composed of the sequence of sub-processes represented by C_j, …, C_j+k; then the matrix C_i and the set {C_j, …, C_j+k} are called building blocks. The sub-processes represented by {C_j, …, C_j+k} are related in one way (sequence, parallelism, OR-XOR or loop).■ The general steps of the proposal are presented below.
Trace alignment
Starting from a workflow log, traces are aligned following the algorithm developed by Bose and Van der Aalst [START_REF] Bose | Process diagnostics using trace alignment: Opportunities, issues, and challenges[END_REF]. From the aligned traces, a file that represents the matrix A is generated. Trace alignment is a representation of the activities according to a relative order of occurrence that takes the structure of the cases into account. The order established between activities makes it possible to identify a group of workflow patterns.
Pre-processing aligned traces
Incomplete cases are those which do not reach the process end-event. Such cases have gaps ("-") in the columns of the final process activities. These cases can be repaired or eliminated; afterwards, the traces can be re-aligned. Moreover, the trace alignment can be modified to ensure that each column is occupied by a single task.
Tree of building blocks
The algorithm for determining the tree of building blocks is the following.
Algorithm 1. Determining the tree of building blocks
Input: Matrix A
Output: Tree of building blocks
1: Create an empty tree
2: Create a building block C_1 = A and associate it with the root node of the generated tree
3: if C_i is not a row matrix then
4:   CL = Sequence-Search(C_1). /*
The procedures Sequence-Search, Loop-Search, XOR-OR-Search, Parallelism-Search and Hidden-Sequence-Search are described below.
Sequence-Search: The purpose of this procedure is to determine whether the building block given as input is a process that can be decomposed as a sequence of sub-processes. If the decomposition is possible, it returns the list of detected building blocks; otherwise it returns an empty list. Sequentially ordered sub-processes can be clearly identified: they are separated by one or more activities that each occupy an entire column. Sometimes these activities may not be identified because they could not be mapped in the event log.
Loop-Search: The purpose of this procedure is to determine whether the building block given as input represents a sub-process repetition. If the decomposition is possible, it returns a list with one building block; otherwise it returns an empty list. To determine whether a building block represents a sub-process repetition, it is necessary to identify the initial activity of that sub-process. This initial activity can then be used to separate sequences of activities. The identified sequences constitute the rows of the new building block. Repeated sequences are discarded.
XOR-OR-Search: The purpose of this procedure is to determine whether the building block given as input is a process that can be decomposed as a choice of sub-processes (OR or XOR). If the decomposition is possible, it returns the list of detected building blocks; otherwise it returns an empty list.
The decomposition by XOR is searched first. To determine the building blocks that represent options (XOR) at a decision point, disjoint sets are constructed from the activities which form the analyzed building block. Initially, there is one set of activities per building block row; then, the sets that share some activity are joined. If more than one set remains at the end of this process, building blocks representing each of the resulting options are created. Otherwise, if only one set remains, the search for a decomposition by OR is performed. To do so, base sequences are determined. A base sequence is a row of a building block that is not composed entirely of the union of other rows. Sequences that contain common activities belong to the same set.
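To make this set construction concrete, a minimal C sketch is given below. The integer activity IDs, the fixed bounds MAX_ACT and 16 columns, and the gap encoding (-1) are illustrative assumptions, not part of the authors' tool; only the merging logic mirrors the description above.

```c
#define MAX_ACT 64                     /* assumed bound on distinct activities */

static int parent[MAX_ACT];

static int find_root(int a)            /* union-find with path halving         */
{
    while (parent[a] != a) {
        parent[a] = parent[parent[a]];
        a = parent[a];
    }
    return a;
}

static void join_sets(int a, int b)
{
    parent[find_root(a)] = find_root(b);
}

/* rows[r][c] holds the activity ID (>= 0) in column c of row r; -1 marks a    */
/* gap "-". Returns the number of disjoint activity sets: a result > 1 means   */
/* the block can be decomposed as a XOR, with one option per resulting set.    */
int count_xor_options(const int rows[][16], int n_rows, int n_cols,
                      int present[MAX_ACT])
{
    int n_sets = 0;

    for (int a = 0; a < MAX_ACT; a++) { parent[a] = a; present[a] = 0; }

    for (int r = 0; r < n_rows; r++) {
        int first = -1;
        for (int c = 0; c < n_cols; c++) {
            int a = rows[r][c];
            if (a < 0) continue;                   /* skip gaps                */
            present[a] = 1;
            if (first < 0) first = a;              /* row starts its own set   */
            else join_sets(first, a);              /* same row -> same set     */
        }
    }
    /* rows sharing any activity were merged through that shared activity      */
    for (int a = 0; a < MAX_ACT; a++)
        if (present[a] && find_root(a) == a)
            n_sets++;
    return n_sets;
}
```

Each trace row can then be assigned to the option identified by the root of its first activity, giving one child building block per XOR branch.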
Parallelism-Search: The purpose of this procedure is to determine whether the building block given as input is a process that can be decomposed as a parallelism of sub-processes. If the decomposition is possible, it returns the list of detected building blocks; otherwise it returns an empty list. To determine the building blocks that represent parallel sub-processes, disjoint sets are built from the activities which form the analyzed building block. Activities belonging to different sets are in parallel, while activities belonging to one specific set are related by another workflow pattern. If more than one set is obtained as a result, the building blocks are formed from these parallel sub-processes.
Hidden-Sequence-Search: The purpose of this procedure is to determine whether the building block given as input is a process that can be decomposed as a sequence of sub-processes. If the decomposition is possible, it returns the list of detected building blocks; otherwise it returns an empty list.
In this case, it is assumed that the activity or activities which define the sequentially ordered sub-processes are not recorded in the traces. Consequently, possible solutions (decomposition scenarios) are determined considering the issues set out below.
Each building block that forms a solution can be decomposed by XOR, OR, loop or parallelism. The candidate solutions are evaluated and the best ones are selected; the evaluation favors solutions whose building blocks reduce the number of broken loops and parallelisms (e.g., a broken loop is evident when an activity appears multiple times in a row of the analyzed building block and the different instances of that activity end up in different new building blocks instead of in the same one).
Applying the proposal in a real environment and discussion
The technique presented in section 3 has been implemented, and the traces of the Management of Roles module from the National Identification System (SUIN) were analyzed. The SUIN is a system developed by the Cuban Ministry of Interior in conjunction with the Cuban University of Informatics Sciences. The event log made it possible to determine anomalies in the selected process (31 cases, 804 events, 52 event classes and 3 types of events). The first step was to apply the trace alignment technique developed by Van der Aalst and Bose (2012) [START_REF] Bose | Process diagnostics using trace alignment: Opportunities, issues, and challenges[END_REF]. Fig. 1 shows the alignment obtained from the event log. The proposal was applied to obtain a matrix from the alignment (Fig. 1), and afterwards the tree of building blocks, as shown in Fig. 2 (left panel), was obtained.
It can be noticed that the obtained tree of building blocks may be expanded until all nodes become leaves or can no longer be decomposed. The edges have different colors to differentiate the workflow pattern used; this is also indicated by a text label in each case (SEQUENCE, XOR, HIDDEN_SEQUENCE).
Fig. 2 shows the selected building block BB_2_4 (enclosed in a circle), which corresponds to the last sub-process resulting from the decomposition of BB_1_1. BB_2_4 was chosen because it makes it possible to know how the process ends. It contains two cases, the first with a frequency of 12 and the second with a frequency of 19. This information can be seen in the middle table of Fig. 2, which gives the occurrence frequency of each case. Neither the case occurrence frequencies nor the activity occurrence frequencies are used in Algorithm 1, but they are incorporated in the developed tool to make process diagnosis easier.
The first case of BB_2_4 is associated with activity B, which represents the event "Roles Management activity fault". It is noteworthy that the process failed 12 of the 31 times it was executed, i.e. a fault rate of 38.7%. Consequently, the causes of failure in the tested process were sought. The origin of the faults was searched for in BB_2_3, which includes the possible actions related to adding, editing or deleting a role. From the decomposition of BB_2_3, two building blocks are obtained, BB_3_5 and BB_3_6, both representing choice options. BB_3_6 represents the Create Role sub-process and does not contain any B activity, which indicates that this building block had no influence on the process failure. From the decomposition of BB_3_5, two new building blocks are obtained, of which BB_4_10 represents the end-event of the Edit Role and Delete Role sub-processes. The failure event appears in BB_4_10, which indicates that the failure lies in the Edit Role and Delete Role sub-processes. A detailed analysis of building block BB_4_9 and its decomposition was performed in order to determine the sequence of activities that led to failures in the Edit Role sub-process (represented by BB_5_11) and the Delete Role sub-process (represented by BB_5_12). This detected sequence of activities is useful for issuing future warnings prior to a possible failure. The authors were able to identify specific cases in which failures occurred. Knowing the cases and events where the failure took place, the involved users were identified.
The technique developed by the authors, like the technique developed by Van der Aalst and Bose (2012) [START_REF] Bose | Process diagnostics using trace alignment: Opportunities, issues, and challenges[END_REF], allows interesting patterns to be detected and provides a holistic view of the process. The proposal additionally detects the sub-processes that compose the analyzed process. The detected sub-processes enclose the anomalies and interesting patterns, something that is not achieved by the techniques discussed in section 2.
Another advantage of the present research is that it combines the analysis of case and activity occurrence frequencies with the staged analysis of correctly structured event sequences within sub-processes. This contributes to the understanding of the failure causes and therefore to a subsequent possible process improvement.
An important contribution of this work is that the anomalies detected can be framed in a context. For example, the detected anomalies in the analyzed process are located in sub-processes Edit Role and Delete Role.
The developed tool was also applied to analyze the process "Check Management" in the Gulf View bar and the Aguiar restaurant, both belonging to the National Hotel (Cuba). The main characteristics of the process were identified for both event logs, which supported the auditing of the process [START_REF] González | Procedure for the application of process mining techniques in auditing processes[END_REF].
Conclusion
Process diagnostics can be useful for detecting patterns and anomalies in the analyzed process. The techniques developed in this area have difficulty detecting the sub-processes associated with the analyzed process and framing those anomalies and significant patterns within the detected sub-processes.
This proposal segments the aligned traces and forms representative groups of sub-processes that compose the analyzed process. The obtained tree of building blocks reflects the hierarchical organization established between the sub-processes, considering the main execution patterns. In each case, the created building blocks group segments of the traces which can be significant for the analysis.
The proposal detects interesting patterns and provides a holistic view of the process. Another advantage of the present research is that the interesting patterns detected can be framed in a context. The discovery of the sub-processes that compose the analyzed process, together with their dependencies and correlations, allows greater accuracy in the diagnosis. All this is possible thanks to the combination of the analysis of case and activity occurrence frequencies with the staged analysis of correctly structured event sequences within sub-processes.
Fig. 1. Trace alignment.
Fig. 2. Process decomposition.
6 References
1. Hendricks, K.B., Singhal, V.R., Stratman, J.K.: The impact of enterprise systems on corporate performance: A study of ERP, SCM, and CRM system implementations. Journal of Operations Management, vol. 25, issue 1, pp. 65--82 (2007)
2. Agrawal, R., Gunopulos, D., Leymann, F.: Mining Process Models from Workflow Logs. In: EDBT '98, Proceedings of the 6th International Conference on Extending Database Technology: Advances in Database Technology. Springer-Verlag, London, UK (1998)
3. Cook, J.E., Wolf, A.L.: Discovering Models of Software Processes from Event-Based Data. ACM Transactions on Software Engineering and Methodology, pp. 215--249 (1998)
4. Aalst, W.M.P. van der: Process Mining. Discovery, Conformance and Enhancement of Business Processes. Springer, Heidelberg Dordrecht London New York (2011)
5. Dongen, B.F., Adriansyah, A.: Process Mining: Fuzzy Clustering and Performance Visualization. In: Rinderle-Ma, S., Sadiq, S., Leymann, F. (eds.) Business Process Management Workshops, Springer, Berlin Heidelberg, pp. 158--169 (2010) | 24,208 | [
"1003490",
"1003491",
"1003492"
] | [
"467395",
"467395",
"467395"
] |
01483911 | en | [
"shs"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01483911/file/Thelegitimationofcivillawnotaries.pdf | Jean-Pierre Marguénaud
Corine Dauchez
Benjamin Dauchez
In France, Civil law Notaries were anointed by the civil code, which erected them as guardian of legal certainty in family and land legal relations. However, their national legitimacy is now eroding simultaneously with the authority of the State, which obeys the "Europe
2013 has certainly marked a reassuring hesitation. In fact, considering that civil law Notaries should be excluded from the scope of the directive 2005/36/EC of 7 September 2005, it has refused to transpose, to the professional qualifications recognition, the principles applied by the 2011 decision to judge discriminatory the refusal of foreign access to the profession. The fact remains that the European Union law, which is always obsessed by the paradoxical concern to adapt to the United Kingdom's specific practices, which expresses with increasing insistence its will to leave, appears as a serious threat of delegitimation for civil law Notaries. 2. The profession, which is still tense at the mention of the Mazurek decision 5 , doesn't seem to be fully aware that the other European law, the Council of Europe and Human Rights law, could on the contrary contribute to its legitimation. This study will try, in complete immodesty, to raise awareness of this opportunity. 3. It is worth recalling that the European Court of Human Rights (ECtHR) has consecrated the main role in a democratic society of another legal profession, which is also in the firing line of Macron law: that of bailiffs. In effect, by the little--known Pini and Bertani and Manera and Atripaldi v. Romania decision of 22 June 2004, the ECtHR proclaimed that they "work to ensure the proper administration of justice and thus represent a vital component of the rule of law." 6 Yet, it could one day be a Pini and Bertani decision for civil law Notaries, if it were established that they are a vital component of "the principle of the rule of law, " declared for Bailiffs (1), at the same time and moreover the "watchdog" of authenticity (2).
I. Civil law Notaries as a vital component of the rule of law 4.
Bailiffs are considered by the Pini and Bertani decision as a vital component of the rule of law because of their position of public authority in the execution of final court decisions 7 . Civil law Notaries should also be considered as a vital component of the rule of law because of the particular role they play, and which they should play more, in order "to ensure respect for the rule of law in individual relations of private law" 8 by preventing the need for a court decision 9 . If civil law Notaries must have, prior to the court decision, the same European legitimacy that bailiffs have after the court decision, it is due to the extension of the scope of the legal certainty principle, a critical component of the rule of law 10 (A), which makes them play a privileged role in preventive legal certainty (B).
A. The extension of the scope of the legal certainty principle 5. A Brumarescu v. Romania decision of 28 October 1999 declared that: "One of the fundamental aspects of the rule of law is the principle of legal certainty, which requires, inter alia, that where the courts have finally determined an issue, their ruling should not be called into question" [START_REF] Cedh ; §61 | Brumarescu c/ Roumanie, 28 oct[END_REF] . Through using the terms "inter alia" 12 , the ECtHR explicitly states that legal certainty, as a critical component of the rule of law, has to be interpreted in regard to the right to a fair trial, guaranteed by Article 6 §1 of the European Convention on Human Rights (the Convention), does not limit to the recognition of the res judicata principle. For a long time, the ECtHR admitted that legal certainty could be searched and established in the name of the rule of law, " part of the common heritage of the Contracting States " [START_REF] Cedh ; §61 | Brumarescu c/ Roumanie, 28 oct. 1999[END_REF] , independently and prior to the sole activity of the courts. Perhaps, the importance has not been emphasised enough of the perspective change operated by the Brumarescu decision compared with the Marckx decision which, in § 58, has discreetly driven legal certainty on conventional stage in these terms: " The principle of legal certainty, which is necessarily inherent in the law of the Convention as in Community Law, dispenses the Belgian State from re--opening legal acts or situations that antedate the delivery of the present judgment " 14 . In order to fully understand, the Marckx decision terms have to be placed in their context. The solution was, in effect, intended to limit the chaotic consequences of the ECtHR decision and place Belgium State beyond the reach of the abyssal difficulty which consists in calling into question all the decisions that enshrined unequal distributions of estates to the detriment of « illegitimate » children. Therefore it was essential to protect the national decisions res judicata against the threat of European disruptions. Following the Brumarescu decision, the court has extended this rule, which stemmed from a concern for self--limitation of its own decisions, to the protection of authority of court decisions in the face of purely national threats, since the case concerned the quashing of a final judgment which itself quashed a decree in a nationalisation context. It is therefore entirely legitimate that the examples of case law on legal certainty in the decisions of the ECtHR, rightly stated by the first commission on authentic legal certainty of the 111th congress of French civil law Notaries 15 , relate primarily to cases protecting the res judicata principle. The innovation was already so remarkable it overshadowed the importance of the terms " inter alia ". It echoed the decision made a few months earlier against France. In effect the Mantovanelli decision of 18 March 1997 16 , without reference to the security of legal relations or to the rule of law, had the audacity to extend the adversarial requirements implied by Article 6 §1 of the Convention to the technical expertise phase. Thus, a remarkable enrichment of the domain of Article 6 §1 of the Convention came to be realised by an extension to the phase prior to the court decision 17 . The extension prior to and out of the court decision of the Article 6 §1 influence is therefore engaged for a long time.
6.
The Brumarescu decision, embedded in the mass of conventional applications of the legal certainty principle, deserves to be highlighted as a basis for the European promotion of preventive legal certainty that is inconceivable without civil law Notaries. B. Civil law Notaries as privileged actors of preventive legal certainty 7. The preventive legal certainty is not just a matter of self--interest legal certainty: it is also a matter of economic development and certainty. It contributes to economic certainty as Anglo--Saxon studies attest having established that if the preventive role of civil law Notaries could have been exercised in the USA, the subprime crisis would have been avoided if not at least greatly attenuated 18 . 8. The preventive legal certainty is also a factor in economic development because contrary to certain misconceptions influenced by experts exclusively selected according to the single thought sole criteria 19 , it reduces costs and generates competitive margins. In effect, the preventive legal certainty would prevent disputes, which place an inordinate burden on taxpayers who finance the public service justice, on economic agents whose deployment strategies are paralysed pending the outcome of the trial, on the parties who must pay very high title insurance fees due to the fragility of their real estate property title 20 . The economic benefit of preventive legal certainty still feeds on the effectiveness of benefits in kind, while a title insurance system only allows the ousted owner to obtain a sum of money 21 . In reality, it is likely that the economic arguments for softening or removing the role of civil law Notaries affording preventive legal certainty are completely reversible. Under these conditions, the principle of legal certainty and rule of law, whose fundamental nature has been stressed by the Convention, should tip the balance in favour of maintaining and strengthening the preventive legal certainty. Yet, this economically and legally appropriate consolidation can only be obtained through a public officer.
10.
The public officer is, in effect, required to conduct a review of the legality of the deeds he draws up. This is the reason why we often present " authenticity as a securitisation instrument in legal relations and a mean to avoid disputes", and not only as an attribute of the legal deeds which does not affect its substance 22 . This authentication assignment, central to legal certainty is closely related to the advisory duty 23 . Yet, this advisory duty is inseparable from the guarantees of impartiality that a public officer can only provide.
11.
It is due to this impartiality that the irreplaceable role of civil law Notaries was affirmed by the very important Consorts Richet and Le Ber v. France decision 24 . This decision found a violation of Article 1 of Protocol No. 1 in a famous case in which the former owners of the Porquerolles Island were refused building permits, paralysing their right to build on their few plots of land reserved by the deeds of sale concluded with the State before the Var prefect. The Consorts Richet and Le Ber decision has contributed powerfully in highlighting the advisory role that civil law Notaries also play on behalf of the parties in establishing that " the deeds of sale concluded in 20 For quantified data, P. Lorot, Le Notariat européen en danger, Les notes stratégiques de l'Institut Choiseul, oct. 2012, p. 15 et 18. 21 an administrative form before the Var prefect, as authorised by the State Domain Code, and not before a notary as it is provided for a sale agreement between private parties, the applicants didn't benefit from the notary advice on the eventual legal validity of the contractual clauses in the deeds of sale, but they relied on the prefect, representative of the State. " This is ultimately a tribute to the impartiality of civil law Notaries, who are the only ones able to advise both the parties since, unlike the prefect but also and especially the lawyers, civil law Notaries are equidistant from both. Thus, the idea is consolidated that " the notary affords a genuine public service of advice which is inherent in the achievement of his mission and differentiates him from the lawyer who has to advise only his client. " [START_REF] Sagaut | Déontologie notariale : Defrénois[END_REF] Yet, due to this duty of impartiality, which prevents him from promoting one of the parties, the civil law notary is a contractual relations peacemaker [START_REF] Sagaut | [END_REF] . 12.
It is through impartiality, which requires the civil law notary to " systematically search in all circumstances contract balance after hearing the parties, and ensure that each party has a good comprehension of the legal problems at stake, " 27 that the civil law notary, as already said by Domat, is Justice of the Peace [START_REF] Domat | Le droit public, suite des loix civiles dans leur ordre naturel[END_REF] . Thus, it has long been recognised that impartiality, which must be shown in exercising his duty, makes the civil law notary an authentic organ of preventive justice. The use of the law of the European Convention on Human Rights, expressed by the Consorts Richet and Le Ber decision, establishes that, only the civil law, notary public officer, is able to offer to all parties preventive legal certainty or justice without impartiality. By the inherent impartiality in adversary duty, essential for authentication on which the preventive legal certainty of legal relations, the Notary appears therefore, as much as the bailiff, as an essential element of the rule of law. The European law on Human Rights succeeds in legitimising both of these legal professions.
13.
However, impartiality is not the only major precept of public notarial ethics. Two others exist: integrity and independence [START_REF] Sagaut | [END_REF] , which the civil law Notaries can share with other professions, but which, under the European law on Human Rights, help to increase their legitimacy as « watchdog » of authenticity.
14.
The « watchdog » qualification is sometimes attributed to civil law Notaries in a pejorative sense to denounce the aggressiveness contrary to the image of an impartial public officer [START_REF] Burgan | Au bord du gouffre, prenons du recul[END_REF] . In the language of the ECtHR, the expression « watchdog », constantly used since the Observer and Guardian v. United Kingdom decision of 26 November 1991 [START_REF] Cedh | [END_REF] , is in fact, quite the reverse, positive: it justifies the privileged protection of the right to freedom of expression, of whom the journalists are the most prominent example, contributing to the general interest debate. Like the journalists who fuel the debate, the civil law Notaries guard the authenticity of the deeds. Others, like registrars, judicial auctioneers, bailiffs and clerks of the court contribute to the authentication of the deeds, which is dominated more by general interest than by private interest, since " a correct application of the rules in social relations " 32 depends on it.
15.
However, civil law Notaries deserve to be established as the " watchdog " of authenticity, because among all participants to authentication, they appear the most similar to the judge. It is recognised without extensive discussion since Cujas, that the civil law Notaries are judges out--of--court 33 and, generally speaking, the judge of application of the rules in social relations prior to the court decision. The assimilation has already been consecrated in a spectacular manner by a decision, relatively well known [START_REF] Grimaldi | Des statuts de service public[END_REF] , Estima Jorge v. Portugal of 21 April 1998 35 , following which, whatever the nature of the enforcement order, judgement or notarial deed 36 , Article 6 §1 applies to its enforcement. Yet, from a European law on Human Rights point of view, assimilation of the civil law notary to a guardian judge of authentic deeds provides noteworthy perspectives. In reference to Article 6 §1, we can envisage a right to a fair notary (B), which can be considered as an extension of the Estima Jorge decision. Article 6 §1 includes, since the famous Golder v. United Kingdom decision of 21 February 1975 37 , a right of access to a court, without the guarantee provided by Article 6 §1 would not be applicable, which could have a right of access to a civil law Notary as a counterpart, thus it is logically and chronologically necessary to begin here (A).
A. The right of access to a civil law Notary 16.
Already, from the constitutional perspective, the notary assignment has been assimilated to " a jurisdiction assignment, an authentication assignment, the out--of--court and preventive jurisdiction, exercised on behalf of the State. " [START_REF] Gaudemet | Synthèse générale[END_REF] The access to a civil law notary, who has been delegated the state seal, has accordingly been envisaged regarding the right of access to a court, notarial authentication required by the law as well as simply wished by the parties 39 . For the same reasons that the preventive justice is assimilated to the contentious justice, the right of access to a civil law Notary can be justified by the Convention, transposing the right of access to a court as defined by Article 6 §1 [START_REF]We can also consider that the right of access to a civil law notary is part of the right of effective access to a court, P. Crocq, « Des missions de service public[END_REF] . 17.
In this respect, it should be clarified that the conventional right of access to a civil law Notary is not, as envisaged by Macron law, a right to exercise the profession, but a Human Right of access to the authentic deeds which the citizens, awaiting preventive legal certainty, could benefit from. In regard to article 6 §1, the number of civil law Notaries is not an end in itself, but a question of means sufficiently deployed on the whole territory to allow general access to authentic deeds, when required by the law or wished for by the citizens. Article 6 §1 could be translated as a positive obligation to put in place a sufficient number of civil law Notaries, as veritable judges of authenticity, and would resist the idea of multiplication of the number of civil law Notaries, in order to attain the break--even point of their offices, should have to diversify their activities at the price of a dilution of their fundamental authentication assignment [START_REF] Burgan | Au bord du gouffre, prenons du recul[END_REF] . From then on, the conventional right to access to civil law Notaries, whose number is already satisfactory in regard to the latest CEPEJ study [START_REF] Étude | Rapport sur les systèmes judiciaires européens[END_REF] , should rather challenge the tariff uniqueness and the call for the redefinition of the authentication domain. Regarding the tariff question, which was and could be at the heart of discussions that tend to call into question the redistribution issue [START_REF] Gaudemet | [END_REF] , Article 6 §1 authorised a warning. would be the right to enforcement, already recognised by the Estima Jorge decision in the continuation of the right to enforcement of judicial decisions consecrated by the famous decision Hornsby v. Greece 46 ; this decision which is besides absolutely incompatible with the CJEU affirmation following which the notaries activity is not directly and specifically connected with the exercise of official authority. It would also include the right to the respect of a reasonable delay in the liquidation of the matrimonial property and inheritance, also recognised by the decisions Kanoun v. France of 3 October 2000 47 and Siegel v. France of 28 November 2000 48 . Other transpositions of the requirements of the right to a fair trial are to be expected: they concern integrity, but above all independency and impartiality that Article 6 §1 expects from the court. With regard to the independence requirement, it completely justifies that a civil law notary cannot draw up acts for members of his family and it could make unconventional partnerships between a civil law notary and other professions, which create links of economic dependence, direct or indirect, or statutory.
21.
The independence requirements and integrity that is the corollary, should also trigger profound reforms in the profession organisation. Legitimation of the civil law Notaries by the law of European Convention on Human Rights in effect cannot be a one--way process. In order for this legitimation to maintain its strength and pertinence, the profession still has to be exemplary with regard to the requirements that are its basis. Thus, the disciplinary procedure against civil law Notaries, which for the moment is governed by the case Le Compte, Van Leuven and de Meyere v. Belgium 49 , which is satisfied by a purge of the defects of violations to the rules of a fair trial operated by the court of appeal, should be more inspired by the rules which prevail for the judges in front of the High Council of the Judiciary. Here it will be sufficient to remember that the disciplinary sanctions imposed by the civil law Notaries chambers are not published, as well as the decision immediately enforced they make when they are appealed to decide, in case of non--conciliation, a dispute between civil law Notaries.
22.
At the cost of significant amendments to its internal organisation, the civil law Notaries profession should be considered as the « watchdog » of authenticity, fundamental element of the rule of law and find, on the side of the law of the European Convention on Human Rights, the European legitimacy that the law of European Union occasionally seems to contest. At least, this European legitimacy recovered by the law of European Convention on Human Rights seems to be able to counterbalance the mistrust that emanates from the "Europe of Trade". It would even be possible that it could contribute to its dissipation. From then on, to conclude, the efforts of the two Europes could be joined to achieve the dream of a European civil law Notary already expressed aloud by the professor Michel Grimaldi 50 , twelve years ago, which still echoes today 51 … 50 M. Grimaldi
In L'authenticité, op. cit., p. 144, n°113, see for Quebec, Fr. Brochu, La protection des droits par la publicité foncière, in Un ordre juridique nouveau ? Dialogues avec Louis Josserand, Mare et Martin, 2014, p. 133.
22
In L'authenticité, op. cit., n°57.
23
M. Latina, Le notaire et la sécurité juridique : JCP N 2010, 1325, n° 10. 24 CEDH, Consorts Richet et Le Ber c/ France, 18 nov. 2010, req. n°18990/07 et 22905/07, Nul ne peut être notaire et partie : émergence d'un nouvel adage européen ?, J-P. Marguénaud et B. Dauchez : JCP N, 2011, 1209.
, Colloque du 9 mars 2003, Association du Notariat francophone, « L'efficacité des actes publics dans l'espace francophone en matière civile. Comment concilier souveraineté nationale et mondialisation ? », p. 161. 51 Not. N. Laurent-Bonne, L'avenir du Notariat est-il dans son histoire ? :Point de vue, D. 2014, p. 1771, J. Tarrade, Entretien, in Les cahiers de la fonction publique, avril 2014, p. 49 et 50.
18.
If the actual or future reforms should, more or less mechanically, increase, in particular by tariff based on time cost, the costs of the more modest acts, the situation could activate the application, in notary access subject, of the famous case law Airey v. Ireland of 9 October 1979 44 relative to the right of access to a civil court. This decision, affirming that the rights guaranteed by the Convention should be practical and effective and not theoretical or illusory, has in effect decided, to permit people without resources access to the civil court, that the States assume a positive obligation to provide a free legal aid only required by Article 6 §3 in penal subject. To make the right of access to the judge of authenticity, who is the civil law notary, practical and effective, the positive conventional obligation to put in place material help adapted to circumstances is also perfectly foreseeable. Without doubt this quite unexpected perspective can come sooner as authenticity can be justified for deeds with little patrimonial stakes.
19.
The question of the scope of the civil law Notaries monopoly in the authentication subject, as we see, is the one that provokes the most discussion and greed. Here it will be simply taken up in regard to the law of the European Convention on Human Rights. From this point of view, a very simple criterium should appear to lead: the public notarial monopoly of authenticity has in principle vocation to apply to all deeds necessary to the exercise of the rights guaranteed by the Convention and its additional protocols. Thus, the deeds relevant to the exercise of the right to peaceful enjoyment of property (transfers for a fee or for free of real estate, land registration…), right to marry and to found a family, right of respect of his private and family life (adoption, IVF, end of life…) should, when there is no intervention of the registrar, have in principle and a priori, be conserved or linked with the authentication activity of the « watchdog, » who is the civil law notary. The right of access to the civil law notary now justified, the inherent guarantees to the right to a fair civil law Notary can be concretely envisaged by confrontation with the requirements of the right to a fair court defined by Article 6 §1.
B. The right to a fair civil law Notary 20.
Built on the model of the right to fair expertise 45 , which considered the extension of the guarantee provided by Article 6 §1, prior to the intervention of the contentious judge, the right to a fair civil law notary would include a certain number of procedural conventional rights. At first it 44 CEDH, Airey c/ Irlande, 9 oct. 1979, n°6289/73. 45 J-P. Marguénaud, Le droit à l'expertise équitable : D. 2000, p. 10. | 24,439 | [
"916070",
"12907"
] | [
"466369",
"461303",
"389474"
] |
01483914 | en | [
"info"
] | 2024/03/04 23:41:48 | 2016 | https://hal.science/hal-01483914/file/New%20Type%20of%20Non-Uniform%20Segmentation%20for%20Software%20Function%20Evaluation.pdf | Justine Bonnot
Daniel Menard
Erwan Nogues
New Type of Non-Uniform Segmentation for Software Function Evaluation
Embedded applications integrate more and more sophisticated computations. These computations are generally a composition of elementary functions and can easily be approximated by polynomials. Indeed, polynomial approximation methods allow a trade-off to be found between accuracy and computation time. The software implementation of polynomial approximation in fixed-point processors is considered. To obtain a moderate approximation error, a segmentation of the interval I on which the function is computed is necessary. This paper presents a method to compute the values of a function on I using non-uniform segmentation and polynomial approximation. Non-uniform segmentation minimizes the number of segments created and is modeled by a tree structure. The specifications of the segmentation set the balance between memory space requirement and computation time. The method is illustrated with the function √(-log(x)) on the interval [2^-5; 2^0] and showed a mean speed-up of 97.7 compared to the use of the libm library on the C55x Digital Signal Processor.
I. INTRODUCTION
Technological progress in microelectronics requires the integration, in embedded applications, of numerous sophisticated processing stages composed of computations of mathematical functions. To obtain the value of an intricate function, either the exact value or an approximation can be computed. The challenge is to implement these functions with enough accuracy without sacrificing the performance of the application, namely memory usage, execution time and energy consumption. Several solutions can be used. On the one hand, specific algorithms can be adapted to a particular function [START_REF] William | Numerical recipes in C (2nd ed.): the art of scientific computing[END_REF]. On the other hand, Look-Up Table (LUT) or bi-/multi-partite table methods can be used when low precision is required. Nevertheless, the need to store the characteristics of the approximation in tables can make these methods impossible to include in embedded systems because of the memory footprint. Finally, the majority of the proposed methods have been developed for hardware implementation.
For the time being, two solutions exist for the software computation of these functions: the use of libraries such as libm that targets scientific computation and is very accurate but also quite slow, or the implementation of the CORDIC (COordinate Rotation DIgital Computer) algorithm that computes the approximation of trigonometric, hyperbolic and logarithmic functions.
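To recall how the CORDIC shift-and-add scheme operates, a minimal fixed-point rotation-mode sketch is given below. It is a generic textbook illustration rather than material taken from the cited works, and the Q14 word-length, the iteration count and the precomputed constants are arbitrary choices made for the example.

```c
#include <stdint.h>

#define CORDIC_ITER 14
/* atan(2^-k) in Q14 radians, k = 0..13 (precomputed table to store) */
static const int16_t atan_tab[CORDIC_ITER] = {
    12868, 7596, 4014, 2037, 1023, 512, 256, 128, 64, 32, 16, 8, 4, 2
};
#define CORDIC_K 9949   /* 1/gain = 0.60725... in Q14 */

/* Rotation mode: starting from (K, 0), rotate by the angle z (Q14 radians,
 * |z| <= pi/2) using only shifts and additions; on exit *c ~ cos(z) and
 * *s ~ sin(z), both in Q14. */
void cordic_sincos(int16_t z, int16_t *c, int16_t *s)
{
    int32_t x = CORDIC_K, y = 0, zz = z;

    for (int k = 0; k < CORDIC_ITER; k++) {
        int32_t xs = x >> k, ys = y >> k;   /* the only "multiplications"   */
        if (zz >= 0) { x -= ys; y += xs; zz -= atan_tab[k]; }
        else         { x += ys; y -= xs; zz += atan_tab[k]; }
    }
    *c = (int16_t)x;
    *s = (int16_t)y;
}
```

The number of iterations and the size of the stored angle table grow with the required precision, which is the cost the proposed polynomial approach seeks to avoid.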
In this work, the software implementation of mathematical functions in embedded systems is targeted, as well as low-cost and low-power processors. To meet cost and power constraints, no floating-point unit is considered available and processing is carried out with fixed-point arithmetic. Then, the polynomial approximation of the function can give a very accurate result in a few cycles if the initial interval is segmented precisely enough. The proposed method is a new method of non-uniform subdivision. This new type of subdivision calls the Remez algorithm of the Sollya tool [START_REF] Chevillard | Sollya: An environment for the development of numerical codes[END_REF] to approximate the function by polynomials on the segments obtained. The polynomial order is a trade-off between the computation error and the interval size. For a given maximum approximation error, decreasing the polynomial order implies reducing the segment size, which increases the number of polynomials to store in memory. For a given data-path word-length, increasing the polynomial order raises the fixed-point computation errors, which annihilates the benefit of the lower approximation error obtained with a higher polynomial order. Thus, for fixed-point arithmetic, the polynomial order is relatively low. Consequently, to obtain a low maximal approximation error, the interval size is reduced. Accordingly, non-uniform interval subdivision is required to limit the number of polynomials to store in memory. The proposed method then introduces two different types of error: the approximation error ε_app = ||f - P||_∞ of the Remez algorithm (where f is the function to approximate and P is the approximating polynomial on a segment), and the error caused by the fixed-point computation, ε_fxp.
The challenge of the proposed method is to find the accurate segmentation of the initial interval I. Different nonuniform segmentations [START_REF] Lee | Hierarchical segmentation for hardware function evaluation[END_REF] have been proposed. Nevertheless, they have only been implemented for hardware function evaluation and do not provide flexibility in terms of depth of segmentation.
The proposed method segments the interval I in a non-uniform way, using a tree structure. The parameters of the method are the polynomial order N_d, the memory available for the application, the maximal computation time allowed and the precision ε needed on the approximation. The segmentation of I controls the approximation error. The method provides the user with Pareto curves giving the evolution of the required memory size as a function of the computation time, for a fixed precision, several degrees and several depths of segmentation. The user can then choose the most suitable degree and depth of segmentation given the constraints of the application. Besides, compared to table-based methods and to the CORDIC algorithm, our approach significantly reduces the memory size required and the computation time, respectively.
The rest of the paper is organized as follows. First, the related work is detailed in Section II. The method for obtaining the polynomial corresponding to an input value x is presented in Section III. The tool for approximating functions by polynomials using non-uniform segmentation is introduced in Section IV. The Binary Tree Determination is deepened in Section V. The Reduced Tree Determination is presented in Section VI. The experiments are shown in Section VII.
II. RELATED WORK
Several methods can be used to compute an approximate value of a function. Iterative methods such as the CORDIC algorithm [START_REF] Volder | The CORDIC trigonometric computing technique[END_REF] are generally easy to implement. The CORDIC algorithm computes approximations of trigonometric, logarithmic or hyperbolic functions. This method offers the ability to compute the value using only shifts and additions and is particularly suited to a processor with no floating-point unit. Nevertheless, the algorithm requires storing tables to compute the approximate value, and may require a large memory space as well as a long computation time. Both depend on the required precision, which sets the number of iterations as well as the size of the table to store. It is possible to compute the approximation of a function with other methods whose algorithms are described in [START_REF] William | Numerical recipes in C (2nd ed.): the art of scientific computing[END_REF]. For instance, the Chebyshev orthogonal basis makes it possible to approximate a function using only polynomials, provided the approximation is done on [-1; 1].
Hardware methods have been developed to compute approximate values of a function. The LUT method consumes the most memory space but is the most efficient in computation time. This method consists in using 0-degree polynomials to approximate the value of the function. The initial segment is subdivided until the error between the polynomial and the real value is lower than the maximal error on each segment obtained. However, that segmentation has to be uniform, i.e. if the error criterion is not fulfilled on a single part of the initial segment, all parts of the segment have to be subdivided again. Improvements of the table-based method are the bi-/multi-partite methods detailed in [START_REF] De Dinechin | Multipartite table methods[END_REF]. The multipartite method proposes to segment the interval on which the function f has to be approximated so as to approximate f by a sequence of linear functions. The initial value of each segment, as well as the offsets to add to these initial values to obtain any value in a segment, have to be saved. The size of these tables has been reduced compared to the bi-partite method by exploiting symmetry in each segment. This method offers quick computations and reduced tables to store, but it is limited to low-precision requirements and to a hardware implementation.
Finally, another method has been developed for hardware function evaluation in [START_REF] Lee | Hierarchical segmentation for hardware function evaluation[END_REF]. The interval I is segmented to control the precision. On each segment, the function is approximated by the Remez algorithm. Four different segmentation schemes are used: uniform segmentation, or P2S (left, right or both) with segments increasing or decreasing by powers of two. Once a segmentation has been applied, if the error does not meet the criterion, the segment can be segmented again using one of these schemes. Afterwards, AND and OR gates are used to find the segment corresponding to an input value x. LUTs are used to store the coefficients of the polynomials. Nevertheless, that method does not allow the depth of the segmentation of I to be controlled, which the presented method offers, and it has only been implemented for hardware function evaluation.
III. COMPUTATION OF THE APPROXIMATING VALUE
A. Method for Indexing the Polynomials
The initial interval I is beforehand segmented using a non-uniform segmentation that is stored in a tree structure T. The segmentation algorithm is recursive: the Remez algorithm is called on a segment and the approximation error ||f - P||_∞ is compared to the maximum approximation error ε_app. If ||f - P||_∞ > ε_app, the segment is split and the algorithm is applied to each resulting segment until the criterion is fulfilled on every segment. Each time the segmentation algorithm is called, a node is added to T to store the bounds of the created segment. The segmentation tree T is illustrated in figure 1.
Then, to each segment of the final segmentation corresponds an approximating polynomial. All the polynomials are stored in the table P whose lines correspond to the polynomials associated to the different segments. Consequently, the aim of this step is to determine for an input value x the index associated to the segment in which x is located. The problem is to find the path associated to the interval in a minimum time. The approach is based on the analysis and interpretation of specific bits of x formatted in fixed-point coding. The segment bounds are sums of powers of two, so these bits can be selected with a mask and interpreted as a binary number after having shifted them to align them on the LSB.
The tree storing the segmentation is not well balanced. For a given tree level, the number of bits to analyze is not constant and depends on the considered node, since all the nodes associated with a given level do not necessarily have the same number of children. Thus, the mask and shift operations are specific to each node. Consequently, a mask and an offset (the difference between the index of the node's first child and its own index) are associated with each intermediate node n_l,j (where l is the level and j the node). The mask is used to select the adequate bits of x to move from node n_l,j to the next node n_l+1,j located at level l + 1.
The indexing method uses a bi-dimensional table T which stores, for each node, the mask, the shift and the offset needed to pass from one level to the next. For the example presented in figure 1, the tree possesses 3 levels. Consequently, three tables are computed; they are presented in figure 2.
Each table contains as many lines as the total number of nodes in the level. This number is obtained by adding the number of leaves of the previous levels to the number of nodes in the level. Consequently, the amount of information to store depends on the depth of the tree, the number of polynomials, and the degree of the approximating polynomials.
To find the shift and mask values in the tables, the tree is examined level by level. The mask value corresponding to a node is the binary conversion of the number of children of that node. The corresponding shift value is obtained by subtracting the number of children from the shift of the previous level.
Fig. 1: Illustrating the segmentation method on a tree of depth 3
(Figure 2 shows the three per-level indexing tables: for each node they give the bits of x selected by the mask M_l (x[15..14] at level 0; x[13] or x[13..12] at level 1; x[12] or x[11..10] at level 2), the offset o_l, and the index update i_0 = 0, i_1 = Δ_0, i_l+1 = i_l + o_l + Δ_l.)
Fig. 2: Indexing tables associated to the tree T in figure 1
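A possible C sketch of this table walk is shown below. The index recurrence i_l+1 = i_l + o_l + Δ_l follows figure 2, while the 16-bit word-length, the structure layout, the fixed number of levels and the table name T_idx (standing for the per-level tables T of the text) are assumptions made for the sake of the example.

```c
#include <stdint.h>

#define N_LEVELS  3                       /* depth of the reduced tree          */
#define MAX_NODES 32                      /* assumed upper bound per level      */

typedef struct {
    uint16_t mask;                        /* bits of x selecting the child      */
    uint8_t  shift;                       /* right shift aligning them on LSB   */
    uint16_t offset;                      /* o_l: first-child index - own index */
} node_entry_t;

/* T_idx[l][i] describes node i reached at level l (filled offline).           */
extern const node_entry_t T_idx[N_LEVELS][MAX_NODES];

/* Returns the index of the polynomial (row of table P) whose segment          */
/* contains the fixed-point input x.                                            */
uint16_t find_segment_index(uint16_t x)
{
    uint16_t i = 0;                       /* i_0 = 0: start at the root         */

    for (int l = 0; l < N_LEVELS; l++) {
        const node_entry_t *n = &T_idx[l][i];
        uint16_t delta = (uint16_t)((x & n->mask) >> n->shift);
        i = (uint16_t)(i + n->offset + delta);   /* i_{l+1} = i_l + o_l + delta */
    }
    return i;
}
```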
B. Fixed-point computation
To improve the speed performance, the polynomial coefficients are formatted in fixed-point and stored in the bi-dimensional table P[i][d]. The term i is the index of the interval and d is the degree of the monomial the coefficient belongs to. Each coefficient has its own fixed-point format to reduce the fixed-point computation errors. The computation of P_i(x), where P_i is the approximating polynomial on segment i, can be decomposed thanks to the Horner rule, which reduces the computation errors. According to the Ruffini-Horner algorithm [START_REF] Taylor | The calculus of observations[END_REF], each polynomial P_i(x) = a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0 can be factored as:
P_i(x) = ((((a_n x + a_(n-1)) x + a_(n-2)) x + ...) x + a_1) x + a_0
Using that factorization, the calculation scheme can be decomposed in a basic loop kernel as in figure 3.
Fig. 3: The basic cell n-i of the n-degree polynomial computation using the Horner rule (the running value is multiplied by the input x, shifted, then added to the coefficient a_(i-1))
According to figure 3, shifts are necessary to compute the value of P_i(x) in fixed-point coding. Indeed, the output of the multiplier can be on a larger format than necessary. By using interval arithmetic on each segment, the actual number of bits necessary for the integer part can be adjusted with a left shift operation.
Then, a right shift operation can be necessary to put the two adder operands on the same format. In our case, a quantization from double precision to single precision is performed, which creates a source of error. Once the two shift values have been computed, they can both be added so that a single shift is applied to the multiplier output; this combined shift is stored in the bi-dimensional table D[i][d], where d corresponds to the loop iteration and i is the index of the segment.
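The loop kernel described above can be sketched in C as follows. The 16-bit data path, the array dimensions and the sign convention of the stored shifts (positive meaning a right shift of the 32-bit product) are assumptions for illustration; only the structure — Horner iterations with one combined shift per iteration read from D[i][d] — follows the text.

```c
#include <stdint.h>

#define N_DEG 2                           /* polynomial degree N_d              */
#define N_SEG 64                          /* assumed number of segments         */

extern const int16_t P[N_SEG][N_DEG + 1]; /* P[i][d]: coefficient of degree d   */
extern const int8_t  D[N_SEG][N_DEG];     /* D[i][it]: combined shift, iter. it */

/* Evaluates P_i(x) in fixed point with the Horner rule; i is the segment       */
/* index returned by the table walk, x the fixed-point input.                   */
int16_t poly_eval_fxp(uint16_t i, int16_t x)
{
    int32_t acc = P[i][N_DEG];            /* start with the leading coefficient */
    int it = 0;

    for (int d = N_DEG - 1; d >= 0; d--, it++) {
        int32_t prod = (int32_t)acc * x;  /* double-precision product           */
        int8_t  s    = D[i][it];          /* realign integer part + requantize  */
        acc = (s >= 0 ? (prod >> s) : (prod << -s)) + P[i][d];
    }
    return (int16_t)acc;                  /* result back on the single format   */
}
```

After each shift, the product is assumed to fit the single-precision format of the coefficient it is added to, as described above.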
IV. TOOL FOR POLYNOMIAL APPROXIMATION
Numerous approximation methods have been developed for hardware function evaluation and consequently cannot be used for software function evaluation because of the memory space they require. Therefore, a new method for software function evaluation is proposed in this paper. This method is composed of six different stages sharing the data of the algorithm. To begin with, the algorithm needs several input parameters:
1) The function to approximate, f
2) The interval I on which f is approximated (f has to be continuous on I)
3) The maximum approximation error ε, i.e. ε_app + ε_fxp < ε
4) The degree of the approximating polynomials, N_d
1) The function to approximate f 2) The interval I on which f is approximated (f has to be continuous on I) 3) The maximum error of approximation , i.e. app + f p < 4) The degree of the approximating polynomials N d These parameters are progressing through the diagram presented in figure 4 and the output of that diagram is the final tree giving the optimal segmentation of I. That final tree is determined after two stages: the binary tree T bin has, in a first stage, to be determined, and then the user can specify a number of levels in the final tree. Then, another tree, T reduced , with a reduced number of levels, is determined. In other words, the final tree is the one of the required depth. To get the final tree, several steps are then needed.
Firstly, the four inputs are needed in the first step of the approximation method, the Binary Tree Determination. This step segments I in a dichotomous way so that the function can be approximated, using the Remez algorithm, on each segment obtained by that decomposition, while satisfying the error criterion. The Remez algorithm is used in the approximation method to give the best approximating polynomials of the function on each segment, in the sense of the infinity norm, and is called through the Sollya tool [START_REF] Chevillard | Sollya: An environment for the development of numerical codes[END_REF]. The Binary Tree Determination is detailed in Section V. The obtained segmentation is saved in T_bin and sent to the Reduced Tree Determination, where the degree of the approximating polynomials N_d is required too. A supplementary parameter is needed: the determination of the binary tree gives the maximal depth of the segmentation, and the goal of this step is to reduce it, so the number of levels required, N_l, has to be an input of this step. Reducing the depth of the segmentation reduces the computation time, as explained in Section VI. The output of this step is the reduced tree T_reduced.
Once the final tree is obtained, computing the approximation of a given value requires knowing the segment in which that value lies. The tables presented in Section III are used to determine the index of that segment.
The error of the whole approximation, namely ε_app and ε_fxp, can then be computed. Finally, the C code of the approximation is generated.
V. BINARY TREE DETERMINATION
A. The Dichotomous Approach Segmentation
The dichotomous approach segmentation of the initial interval is presented in Algorithm 1.
Algorithm 1 The Polynomial Approximation
procedure RECURSE(f, I = [a; b], N_d, ε_app)
    P = Remez(f, I, N_d)
    if ||f - P||_∞ ≥ ε_app then
        I_1 = [a; a + (b - a)/2]
        I_2 = [a + (b - a)/2; b]
        RECURSE(f, I_1, N_d, ε_app)
        RECURSE(f, I_2, N_d, ε_app)
    else
        return P
    end if
end procedure
The Sollya tool calls the Remez algorithm on each segment as long as the approximation error is higher than ε_app. Indeed, the user specifies the maximal approximation error ε, so half of that budget, ε_app, is allocated to the Remez approximation and the other half, ε_fxp, to the error committed during the fixed-point computation of the approximation. The algorithm is recursive: if the error criterion is not fulfilled on a segment, the segment is cut into two equal parts and the algorithm is applied to each of them.
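The recursion of Algorithm 1 can be sketched as the following off-line C code. Here remez() and sup_error() stand for the calls to the Sollya/Remez machinery and to the evaluation of ||f - P||_∞ on a segment; these prototypes, the poly structure and the emit() callback used to store a leaf are hypothetical helpers introduced only for this illustration.

typedef struct { double c[9]; int degree; } poly;      /* up to degree 8, assumed */

poly   remez(double (*f)(double), double a, double b, int degree);        /* hypothetical */
double sup_error(double (*f)(double), const poly *p, double a, double b); /* hypothetical */

static void recurse(double (*f)(double), double a, double b, int n_d,
                    double eps_app, void (*emit)(double, double, const poly *))
{
    poly p = remez(f, a, b, n_d);             /* best degree-n_d approximation on [a; b] */
    if (sup_error(f, &p, a, b) >= eps_app) {  /* criterion not met: split the segment    */
        double mid = a + (b - a) / 2.0;
        recurse(f, a, mid, n_d, eps_app, emit);
        recurse(f, mid, b, n_d, eps_app, emit);
    } else {
        emit(a, b, &p);                       /* store a leaf of T_bin: bounds + polynomial */
    }
}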
B. Computation of the Binary Tree
The segmentation of I is modeled by a tree T_bin. Each time the recursive algorithm is called, the bounds of the obtained segment are stored in a node of a tree structure. The interval bounds are sums of powers of 2 to ease the addressing. The root of the tree contains the bounds of I, and the leaves contain the bounds of the segments on which the error criterion is verified; a polynomial is thus associated with each leaf.
The depth of the binary tree T_bin corresponds to the number of bits necessary to index the table P: in the binary tree, one bit is tested at each level. The binary tree obtained by subdividing each segment into two equal parts is the deepest one but leads to the minimal number of polynomials; on the contrary, the tree of depth 1 has the greatest number of polynomials. If the depth of the binary tree is N, then the number of polynomials in the tree of depth 1 is 2^N.
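For instance, with the depth-6 binary tree obtained for the example of Section V-C, a tree reduced to a single level would need 2^6 = 64 polynomials; this is where the figure of 64 polynomials mentioned for that example comes from.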
C. Binary Tree Determination on an example
The dichotomous segmentation is detailed in Figure 5. The function ( -log(x)) is approximated on the interval [2^-5; 1] with a specified error ε_app of 10^-3 and 2-degree polynomials. According to the tree obtained, the function possesses a strong non-linearity in the neighbourhood of 1, since the tree branches particularly on this part of I. The correspondence between the polynomials P_i and the segments is given under the tree in Figure 5.
VI. REDUCED TREE DETERMINATION
To minimize the computation time, the proposed method optimizes the dichotomous segmentation of I by reducing the number of levels of the tree modeling the segmentation. The first step of the algorithm gives the number of bits that have to be tested to know in which segment the value x lies. The tree can then be modified as long as the sum of the bits allocated to the levels is equal to the depth of the binary tree T_bin. To find the best allocation of these bits, a tree of solutions T_sol is computed; it minimizes the computation time as well as the hardware resources necessary to store the tables P and T.
A. The Tree of Solutions
Given the depth N_lmax of the binary tree T_bin, the number of bits to be tested to know in which interval the value x lies is known. The same number of bits has to be allocated to the reduced tree T_reduced. Besides, reducing the depth of T_reduced should not increase the memory space required to store the polynomials (table P) and their associated tables (table T). The tree of solutions explores all the possible allocation configurations and computes the theoretical memory space required for each of them. The depth of the tree of solutions is equal to the number of levels required, and each path in the tree corresponds to an allocation configuration.
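For instance, reducing the depth-6 binary tree of the example of Section V-C to N_l = 3 levels means distributing 6 bits over 3 levels; assuming each level receives at least one bit, this gives C(5,2) = 10 candidate allocations, i.e. 10 paths to compare in T_sol. The path retained in Figure 6 corresponds to the allocation (2, 1, 3).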
The tree computed in this step is the tree of solutions T_sol. Each node contains the number of bits allocated to the corresponding level, and the leaves contain the value of the equation giving the memory space required to store tables P and T. To save time, the Remez algorithm is not called again when computing T_sol. Instead, the list of segments produced by the dichotomous segmentation is extracted from T_bin and compared with the list obtained for each allocation configuration. If a segment obtained with the new allocation is not included in a segment from the T_bin list, then the approximation error on that segment is larger than the required error. Conversely, if a segment of the new segmentation is included in a segment from the T_bin list, then the approximation error is smaller than the required error and that segment does not have to be subdivided again.
Finally, to determine quickly which path of T_sol is the best, only the values contained in the leaves need to be compared. A branch-and-bound algorithm could be used here to reduce the computation time of T_sol, but it is not needed since this computation is done off-line and not on the DSP. When the best allocation is found, the new tree T_reduced is computed and the polynomial coefficients as well as the list of created segments are saved.
B. Reduced Tree Determination on an example
The Reduced Tree Determination is illustrated on the example given in Subsection V-C. The depth of the binary tree obtained in the dichotomous segmentation step is 6. The tree of solutions obtained when requiring depth(T_reduced) = 3 is shown in Figure 6. The characteristics of the different allocation configurations are given in the nodes themselves: the number of bits allocated to the associated level is written in each node, and the memory required by each allocation is indicated under each branch, in bytes. According to the memory space required by each path indicated in Figure 6, the allocation configuration leading to the minimum memory space is the sixth path (nodes drawn in red in Figure 6). To build T_reduced of depth 3, 2 bits are allocated to the first level (the initial segment is subdivided into 4 segments) and 1 bit is allocated to the second level, i.e. if needed (by comparison with the list of segments given by T_bin), the segments obtained at the previous level are subdivided into 2 parts. Finally, 3 bits are allocated to the last level, so the same comparison is made with the list given by T_bin and, if needed, the segments obtained at the previous level are subdivided into 8 parts.
On this example, the resulting T_reduced is represented in Figure 7, and Table I details the segments corresponding to the polynomials. Each segment bound can be written as a sum of powers of two, which eases the addressing. As Figure 7 shows, 13 polynomials are created with this decomposition instead of the 64 expected with a depth-1 tree. This allocation therefore leads to the minimum memory space required to store both the polynomial coefficients and the tables that index them.
Table I (excerpt, N_d = 2), P_i and associated segment: P_0: [2^-5; 2^-3], P_1: [2^-3; 2^-2], P_2: [2^-2; 2^-1], P_3: [2^-1; ...]
VII. EXPERIMENTS
The proposed method aims to minimize the computation time and the memory footprint needed to compute the approximated value of f(x). Nevertheless, to find the polynomial degree and the depth of T_reduced that best suit the criteria of the targeted processor (in terms of memory space) and of the application (in terms of computation time), Pareto curves are needed. These curves provide the system designer with the optimal points for each polynomial degree tested. They take into account the number of levels of the tree as well as the required memory space, which can be computed from the sizes of the indexing table T and of the coefficients table P, and the computation time, which is given by an equation depending on the degree N_d and the number of levels N_l. These curves represent the memory space required depending on the depth of T_reduced, or the memory space required depending on the computation time in cycles. The functions chosen for the experiments are two composite functions (hard to compute with a basic CORDIC algorithm) and a trigonometric function, so that the results can be compared with an implementation of CORDIC.
Finally, the proposed method is compared to the use of libm, to an implementation of the CORDIC method and to the LUT method.
A. Experiments with the function ( -log(x))
The curves are drawn in Figures 8 and 9 for the function ( -log(x)) approximated on [2^-5; 2^0], allocating a maximal error ε_app = 0.01 for the Remez approximation and a maximal error ε_fxp = 0.01 for the fixed-point coding. The equation giving the computation time as a function of the degree N_d and the number of levels N_l depends on the processor used and on the precision of the fixed-point coding (single or double). These results have been obtained with a DSP C55x [START_REF]Texas Instruments. C55x v3.x CPU Reference Guide[END_REF].
Experimentally, the only degrees suited to the approximation of this function are 1 and 2. The maximal fixed-point coding error obtained with 1-degree polynomials, whatever the number of levels of the tree and in single precision, is 4.878·10^-4 and is consequently smaller than ε_fxp. With 2-degree polynomials in single precision, however, the maximal fixed-point coding error is 6.21·10^-2 > ε_fxp. To meet the fixed-point error criterion, the data have to be coded in double precision; in double precision with 2-degree polynomials, the maximal fixed-point coding error is 3.9·10^-3. Thus 2-degree polynomials coded in double precision are suitable, but higher-degree polynomials lead to a fixed-point coding error that is too high.
Besides, the computation time is given by an equation that depends on the fixed-point coding and on the processor, and that has been determined experimentally. With the generated C code, in single precision on the target C55x, the computation time as a function of the degree N_d and the number of levels N_l of the tree is:
t = 9 + 8·N_l + 3·N_d    (1)
In double precision, the cost associated with the coefficients rises dramatically, hence the preference for 1-degree polynomials in the case of this example:
t = 2 + 8·N_l + 63·N_d    (2)
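For instance, with N_l = 3 levels and N_d = 2, equation (1) gives t = 9 + 8·3 + 3·2 = 39 cycles in single precision, whereas equation (2) gives t = 2 + 8·3 + 63·2 = 152 cycles in double precision, against t = 2 + 8·3 + 63·1 = 89 cycles for N_d = 1; this is why 1-degree polynomials are preferred when double precision is required.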
Finally, the computation time using the libm library is constant and equal to 4528 cycles. The mean speed-up of the proposed method compared to such a library is then 98.82 with N_d = 1 and 96.51 with N_d = 2.
Finally, the mean speed-up of the proposed method compared to the use of libm is 98.7 on the DSP C55x.
B. Experiments with the function exp(-(x))
1) Experiments on DSP C55x: The function exp(-(x)) is studied on the interval [2^-6; 2^5]. The trees for 1-degree to 3-degree polynomials have been computed with a depth varying from the maximal depth (binary tree) down to a depth of 2. The evolution of the memory space required to store, on the one hand, the polynomial coefficients table P and the shifts table D for fixed-point computations (S_n-pol) and, on the other hand, the polynomial coefficients table P, the shifts table D and the indexing table T together (S_n-tot), depending on the number of levels N_l of the tree, is drawn in Figure 10. The trees are computed with a maximal error criterion ε_app of 5·10^-3. The data and coefficients are on 16 bits, the fixed-point coding of the input is Q_6,10 and the tests have been run on the DSP C55x. The fewer levels the tree has, the greater the number of polynomials: S_n-pol is then high, since the sizes of tables P and D are large, but few indexing entries are needed (the depth of the table T is equal to the number of levels of the tree). On the contrary, the more levels the tree has, the fewer polynomials there are and S_n-pol is low, since the sizes of tables P and D decrease; however, the table T is then the largest because of the number of levels of the tree.
Then, computing the total memory space required as a function of the computation time for a required error provides the user with the Pareto curves of Figure 11, which give the optimal points for the computation time or the memory footprint. The function exp(-(x)) on the interval [2^-6; 2^5] can be approximated by polynomials of degree 1 to 4 with data coded on 16 bits, for a maximal total error (including ε_app and ε_fxp) of 10^-2. The maximal values of the fixed-point coding error are given in Table II. A 5-degree polynomial is not suitable for this approximation since the fixed-point coding error obtained is greater than the maximal required error.
TABLE II: Intervals of the fixed-point coding error ε_fxp depending on the polynomial degree for the approximation of exp(-(x))
Degree   ε_fxp
1        [-2.8·10^-3; 0]
2        [-2.5·10^-3; 0]
3        [-2.5·10^-3; 0.3·10^-3]
4        [-2.4·10^-3; 1.5·10^-3]
5        [-2.7·10^-3; 35.2·10^-3]
According to the Pareto curves of Figure 11, when the tree has a low number of levels the required memory is high but the computation time is minimal. The computation time then increases with the number of levels of the tree while the required memory decreases until it reaches a minimum; past that minimum, the required memory tends to increase slightly with the computation time and the number of levels.
2) Experiments on ARM Cortex M3: The same function is studied under the same approximation conditions on the ARM target, the Cortex M3 microcontroller. The Pareto curves representing the evolution of the memory space required depending on the computation time are shown in Figure 12. The computation time of the approximation of f(x) using the proposed method with data in single precision on the Cortex M3 is given by the following equation, determined experimentally:
t = 27 + 17·N_l + 15·N_d    (3)
C. Experiments with the function sin(x)
The proposed method is tested with a trigonometric function to compare the computation time and the memory space required with those obtained using the CORDIC method. The value to compute is sin(π/3). The segment on which the function is approximated is [0; π/2]. The depth of T_bin is then 3 with 1-degree polynomials, and the total approximation error is ε = 0.01; with 2-degree polynomials the binary tree has a depth of 1. The results obtained are presented in Table III. So as to compare the CORDIC method to the proposed method on a trigonometric function (sin(x)), the curve of the computation time depending on the required precision (Figure 13) and the curve of the memory space required depending on the required precision (Figure 14) are drawn. Two degrees of polynomial approximation are tested.
Fig. 14: The evolution of the memory space required depending on the precision required, for the CORDIC method and the proposed method with 1-degree and 2-degree polynomials
The results obtained show that the computation time is always lower for the proposed method. Besides, up to a precision of 0.01, the proposed method consumes less memory space (with 2-degree polynomials) than the CORDIC method.
D. Comparison of the memory space required by the proposed method and by the table-based method
VIII. CONCLUSION
The non-uniform segmentation scheme followed by polynomial approximation proposed in this paper provides the system designer with Pareto curves giving the optimal points for the memory footprint or the computation time. With these Pareto curves, the system designer can choose the degree N_d and the depth of the tree storing the non-uniform segmentation that best suit the targeted application. Besides, the proposed method has been compared to existing methods: for software implementations, the methods currently used are the CORDIC method or libraries such as libm. Compared to the CORDIC method, the proposed method is always faster and, if the parameters of the approximation are well chosen, consumes less memory. The proposed method also has a mean speed-up of 97.7 on the DSP C55x compared to the use of libm. Finally, compared to a hardware method such as the LUT method, the gain in memory is significant.
Fig. 4: The different stages of the algorithm for approximating a function by polynomials
Fig. 5: T_bin corresponding to f = ( -log(x)), ε_app = 10^-3, I = [0.03125; 1] and N_d = 2
Fig. 6: T_sol with f = ( -log(x)) on I = [2^-5; 1], with ε_app = 10^-3 and N_d = 2
Fig. 7: T_reduced with f = ( -log(x)) on I = [2^-5; 1], with ε_app = 10^-3 and N_d = 2
Fig. 8: Evolution of the memory space required to store the polynomials (table P) and the shifts (table D) (circles), and the polynomials (P), shifts (D) and indexing table (T) together (diamonds), depending on N_l
Fig. 10: Evolution of the memory space required to store S_n-pol (circles) and S_n-tot (diamonds)
Fig. 11: The Pareto curves for the approximation of exp(-(x)) on [2^-6; 2^5]
Fig. 12: The Pareto curves of the memory space required depending on the computation time for the approximation of exp(-(x)) on Cortex M3
Fig. 13: The evolution of the computation time depending on the precision required, for the CORDIC method and the proposed method with 1-degree and 2-degree polynomials
TABLE I: Correspondence between the polynomials P_i and the segments
Fig. 9: Evolution of the memory space required (bytes) to store the polynomials (table P) and the shifts (table D) (circles), and the polynomials (P), shifts (D) and the indexing table (T) together (diamonds), depending on the computation time t (cycles), for degrees 1 and 2
TABLE III: Results obtained approximating sin(x) on [0; π/2] with an error of 0.01 and the proposed method
Degree   Depth   Memory (bytes)   Time (cycles)
1        2       38               28
1        3       42               36
2        1       16               23
TABLE IV :
IV Comparison of the memory required for approximating the functions below using the proposed method M em prop and using the table-based method M em tab | 35,950 | [
"980",
"961124"
] | [
"185974",
"185974",
"185974"
] |
01484071 | en | [
"phys"
] | 2024/03/04 23:41:48 | 2017 | https://hal.sorbonne-universite.fr/hal-01484071/file/Alarcon-Diez_Charge_Collection.pdf | V Alarcon-Diez
I Vickridge
M Jakšić
V Grilj
H Lange
Charge Collection Efficiency in a Segmented Semiconductor Detector Interstrip Region
Keywords: Segmented Detector, IBIC, Charge Collection Efficiency, Interstrip
Charged particle semiconductor detectors have been used in Ion Beam Analysis (IBA) for over four decades without great changes in either design or fabrication. However one area where improvement is desirable would be to increase the detector solid angle so as to improve spectrum statistics for a given incident beam fluence. This would allow the use of very low fluences opening the way, for example, to increase the time resolution in real-time RBS or for analysis of materials that are highly sensitive to beam damage. In order to achieve this goal without incurring the costs of degraded resolution due to kinematic broadening or large detector capacitance, a single-chip segmented detector (SEGDET) was designed and built within the SPIRIT EU infrastructure project. In this work we present the Charge Collection Efficiency (𝐶𝐶𝐸) in the vicinity between two adjacent segments focusing on the interstrip zone. Microbeam Ion Beam Induced Charge (IBIC) measurements with different ion masses and energies were used to perform X-Y mapping of 𝐶𝐶𝐸, as a function of detector operating conditions (bias voltage changes, detector housing possibilities and guard ring configuration). We show the 𝐶𝐶𝐸 in the edge region of the active area and have also mapped the charge from the interstrip region, shared between adjacent segments. The results indicate that the electrical extent of the interstrip region is very close to the physical extent of the interstrip and guard ring structure with
Introduction
The advance of IBA towards studying more complex materials, together with new technical possibilities, is a driving factor for the development of IBA detection and data acquisition systems. In particular, the statistics of charged particle detection play a great role in the quantity and quality of information that can be extracted from a given RBS or NRA experiment [START_REF] Wang | Handbook of Modern Ion Beam Materials Analysis[END_REF]. Increasing the detector solid angle allows increased statistics for a given beam fluence, which enables some limitations to be overcome. These include the detection or quantification of elements currently below the detection or quantification limit, measurements on materials sensitive to ion beam damage, such as some monocrystals or organic materials [START_REF] Auret | Mechanisms of damage formation in semiconductors[END_REF], [START_REF] Benzeggouta | Handbook on Best Practice for Minimising Beam Induced Damage during IBA[END_REF], and the use of very low ion currents, such as doubly charged alphas from a standard RF ion source accelerated to twice the beam energy of singly charged ions in a single-ended electrostatic accelerator. Increasing the overall detection solid angle must be accomplished whilst maintaining a low kinematic energy spread. A limited detector surface area is also desirable, since a large detector surface generates significant electrical and thermal noise, and under standard analysis conditions the count rate could be very high, leading to significant deadtime and pileup. A segmented detector design, composed of an array of individual detectors with appropriately chosen geometries, meets these criteria so long as a suitable number of pulse-shaping and data acquisition channels are also available. A segmented detector will also contribute to resolving the mass-depth ambiguity in RBS, since spectra are collected at several detection angles [START_REF] Spieler | Semiconductor Detector Systems[END_REF].
In the present work, we have studied the Charge Collection Efficiency (CCE) of a large solid-angle semiconductor segmented strip detector (SEGDET) built in the framework of the SPIRIT EU project at HZDR. We focus on the charge collection in the interstrip region between two adjacent segments, applying the Ion Beam Induced Charge (IBIC) technique with a scanning ion microprobe. We have investigated how the charge from the interstrip region may be shared between adjacent active zones, as well as the role of the GR, using different incident ions and energies.
Instruments and methodology
The SEGDET used in this work is made by standard semiconductor processing techniques: ion implantation, lithography, thermal oxidation, metal deposition, annealing and so on. The substrate is a (100)-oriented, n-type Si wafer (N_D ≈ 1·10^12 cm^-3, ρ = 5500 Ω·cm). The p+ entrance window is formed by implanting the acceptor B+ (10 keV, dose 5·10^14 cm^-2), and the rear n+ contact by implanting the donor P+ (50 keV, dose 5·10^14 cm^-2). The implantation was made through a SiO2 layer (60 nm), which was removed after the implantation. Al was deposited to create the ohmic contacts.
The structure of two adjacent segments with the interstrip region in between is shown in Figure 1. The complete detector area is 29x29 mm², divided into 16 segments (29x1.79 mm²), as shown in Figure 2.a. A microscopic view of the interstrip region is shown in Figure 2.b, where the two adjacent segments are the active areas (segments A and B), SiO2 is the passivation zone which delimits the segment border, and the central part is the aluminium GR, which is intended to isolate the segments by avoiding crosstalk as much as possible, keeping the electric field at the edge under control and reducing the leakage current.
To characterize the detector electronically, the leakage current was measured under different bias and GR configurations. The segment depletion thicknesses were measured via standard C-V curves.
The IBIC technique has been used over the last two decades to study the CCE of various semiconductor devices [START_REF] Vittone | Semiconductor Characterization By Scanning Ion Beam Induced Charge[END_REF], [START_REF] Breese | A review of ion beam induced charge microscopy[END_REF]. It consists in measuring the charge carriers induced by an incident high-energy ion in the depletion region of the p-n junction (Figure 3). IBIC measurements were made at the Ruđer Bošković Institute, using a microprobe of 1x1 µm² size and scan lengths from 250 µm up to 650 µm in X and Y (Figure 3). The incident ions used were 1H1+ at 4.5 MeV, 12C3+ at 5.5 MeV and 12C4+ at 6 MeV; the differences in charge state of the C ions are negligible for the IBIC measurements. The incident ion fluxes were between 10^2 and 10^3 s^-1. Two classical analogue charge acquisition channels were used to acquire the induced charge as a function of the X-Y scan position.
For the C ions, three different GR configurations were used: GR biased at the same potential as the segments, GR not biased, and GR floating.
Results and discussion
The main subject of this study is the role of the GR in the SEGDET; therefore, the first stage is to measure the electronic response without beam. The leakage current through both segments and the GR was measured and plotted as a function of GR bias, as shown in Figure 4. Note that the 'no bias' configuration is when the power supply connected to the GR is switched off; nevertheless, a parasitic voltage due to the electric field of the adjacent segment appears on the display (when the segment voltage is -80 V, for example, the GR voltage is -10.9 V). The graph shows that almost no leakage current is found in the segments when the GR is biased, whatever the applied voltage, whereas the current is large when there is no bias on the GR. When the GR is floating, we are in an intermediate case. Since the energy resolution depends on the leakage current [START_REF] Spieler | Semiconductor Detector Systems[END_REF], the optimal configuration so far is to bias the GR.
Figure 5 shows the C-V curve and the p-n junction depletion thickness as functions of segment bias.
The maximum depletion thickness of 260 µm is reached at -100 V applied bias. For the main bias voltages of -20 V, -30 V and -80 V used in the later IBIC experiments, the depletion thicknesses are 162 µm, 192 µm and 255 µm, respectively. Note that at 0 V segment bias there is an intrinsic depletion thickness of 32 µm.
IBIC measurements were normalised by taking the charge collection efficiency (CCE) within the segments (detector active zone) as equal to 1 at the highest bias voltage (-80 V), neglecting the energy loss in the dead layer (~100 nm). Figure 6 shows the variation of CCE in the segments with the applied bias voltage for a 4.5 MeV 1H+ beam. The ion range in silicon given by SRIM [START_REF] Ziegler | SRIM-2003[END_REF] is 180 µm in this case. We note that CCE reaches its maximum at a bias of -20 V, which corresponds to a depletion region of 162 µm as measured above. This is 10% less than the proton range; however, with the time constants of the preamplifier and pulse-shaping amplifier used here, the charge induced beyond the depletion zone can still be collected. The two adjacent segments showed identical behaviour.
The CCE behaviour in the interstrip region has been studied by extracting, from the 3D IBIC map, a line scan across the interstrip region and representing it as a 2D graph with the average CCE values projected onto the Y axis, as shown in Figure 7 for 1H+ incident ions at 4.5 MeV, in this case with the GR floating and a segment bias of -30 V. Three zones can be identified: A) the detector segment, where CCE = 1; B) the SiO2 passivation, in which the CCE slowly decreases; and C) the GR, where the CCE drops drastically and the signal is present in both acquisition channels. The number of events present simultaneously in both acquisition channels was calculated using an off-line coincidence data treatment and showed that the induced charge is shared (i.e. adjacent segments cross-talk). The coincident events decrease with the bias from 5·10^-2 to 2.4·10^-2 % coincidences/µm²; therefore, taking into account that the active detector area is of the order of 10^5-10^6 µm², these events are negligible.
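The off-line coincidence treatment mentioned above can be illustrated by the following C sketch, which counts events recorded by both acquisition channels in the same scan pixel within a time window. The event structure, the window and the normalisation to a percentage of coincidences per µm² are assumptions made for the illustration, not the actual acquisition format used in the experiment.

#include <stddef.h>
#include <math.h>

typedef struct { int x, y; double t; } ibic_event;   /* scan pixel and time stamp (assumed) */

static double coincidence_density(const ibic_event *a, size_t na,
                                  const ibic_event *b, size_t nb,
                                  double window, double scanned_area_um2)
{
    size_t shared = 0;
    for (size_t i = 0; i < na; i++)
        for (size_t j = 0; j < nb; j++)
            if (a[i].x == b[j].x && a[i].y == b[j].y &&
                fabs(a[i].t - b[j].t) <= window)
                shared++;                            /* charge seen by both segments */
    /* percentage of coincident events per unit of scanned area */
    return 100.0 * (double)shared / (double)(na + nb) / scanned_area_um2;
}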
To define the different zones within the interstrip region more clearly, it is desirable to use a heavier incident ion, since it increases the CCE contrast in the X-Y maps [START_REF] Breese | Materials analysis using a nuclear microprobe[END_REF]. Here we used 12C3+ and 12C4+ ions, equivalent from the point of view of induced charge, at 5.5 MeV for 12C3+ and 6 MeV for 12C4+. SRIM gives ion ranges in silicon of 5.42 µm and 5.86 µm, respectively. Using the same line-scan averaging as above, Figure 8 shows the CCE for the three GR configurations when the segments are biased at -30 V. When the GR is biased (black line), the edge of the electric field in both segments is well defined and the crosstalk is eliminated; however, the CCE within the segments is 15% lower.
On the other hand, both the floating (blue line) and the no-bias (red line) GR configurations show a CCE > 1 in the interstrip region and its surroundings. This might be explained by the high charge states of the C ions and the high charge density along the ion tracks in this set-up, where either light generation in the SiO2 or electron cascades could generate extra charge inside the detector interstrip, resulting in a CCE > 1. To characterize this behaviour, further studies using CCE simulations and systematic experiments with different device configurations are needed. Nevertheless, these heavy ions in the latter two GR configurations give us more information about the interstrip structure.
When the GR is not biased, the CCE vs Y graph is mostly symmetric: the region with CCE > 1 extends about 15-20 µm into the detector segment; the CCE is then stable in the Al contact region, increasing up to 1.6 in the SiO2 passivated zone. Approximately in the central part of the SiO2, the CCE starts to drop dramatically until the zone below the GR is reached. The charge is shared from the SiO2 edge; however, the number of these coincident events is very low, around 3·10^-4 % coincidences/µm².
The floating GR case is more asymmetric since there is no reference for the generated electric field.
Nonetheless, a CCE peak can be seen at the Al contact edge between regions A and B, rather than the smooth drop observed in the no-bias case. Here the number of shared events is even smaller, around 1·10^-4 % coincidences/µm².
Conclusions
The role of the GR in our SEGDET has been clarified for correct installation and routine use. We have shown that biasing the GR at the same voltage as the segments reduces the leakage current significantly; hence the electrical and thermal noise contributions to the energy resolution are also reduced. For 1H1+ ions, the CCE variation with applied bias seems to be in good agreement with the C-V measurements.
The charge generated in the interstrip region may be shared by two adjacent segments, however the number of these events is very small and can be either treated by anticoincidence methods or even neglected. Nevertheless biasing the GR further reduces this charge sharing, and therefore, there is no need for any external segment shield (such as a strip mask in front of the detector interstrip regions) to avoid crosstalk between the segments.
The CCE > 1 generated by the 12C ions in the interstrip region cannot be unequivocally explained with the present experimental results. Detailed electric field calculations and device simulations may shed further light on these observations.
List of Figures
Figure 1: Two adjacent segments and interstrip region schema
Figure 2: a) Segmented detector photos. b) Detailed interstrip region view: segments A and B, SiO2 passivation and guard ring
Figure 3: a) Schema of the IBIC technique fundamentals. b) Schema of the X-Y scanning of the interstrip region
Figure 4: Leakage current vs applied bias in a segment (equivalent values in all segments) for the three GR configurations: floating (black), no bias (red) and biased as the segments (blue)
Figure 5: Capacitance (right Y axis, black) and depletion thickness (left Y axis, blue) vs bias voltage in segment A. Equivalent values are found in all the segments
Figure 6: CCE for two adjacent segments in the GR floating configuration using 4.5 MeV incident protons
Figure 7: CCE using 1H+ at 4.5 MeV (GR floating) and -30 V segment bias across the interstrip region, compared with its schematic structure. There are three zones: A) detector segment, B) SiO2 and C) GR
Figure 8: CCE using 12C3+ at 5.5 MeV and 12C4+ at 6 MeV, with -30 V bias on each segment, for the three GR configurations: biased (black line), no bias (red line) and floating (blue line)
Acknowledgements
This work has been supported by Marie Curie Actions -Initial Training Networks (ITN) as an Integrating
Activity Supporting Postgraduate Research with Internships in Industry and Training Excellence (SPRITE) under EC contract no. 317169. We are very grateful to Isabelle Trimalle from INSP for the kind assistance in the C-V and microscopy measurements. Also we thank Ivan Sudić from RBI for his very kind welcome in Zagreb and the excellent help during the IBIC measurements. | 15,705 | [
"13477",
"735218"
] | [
"439879",
"558847",
"558847",
"58036",
"58036"
] |
01484072 | en | [
"chim",
"phys"
] | 2024/03/04 23:41:48 | 2017 | https://hal.univ-lorraine.fr/hal-01484072/file/JEEP%202017%20oral.pdf | Michel Ferriol
Marianne Cochez
Queny Kieffer
Michel Aillerie
Alain Maillard
Patrice Bourson
Attempts to grow β-BaB 2 O 4 (β-BBO) crystal fibers by the micro-pulling down technique. Characterization by Raman micro-spectroscopy
Attempts to grow β-BaB 2 O 4 (β-BBO) crystal fibers by the micro-pulling down technique. Characterization by Raman micro-spectroscopy | 430 | [
"5245",
"7748",
"4548",
"5406",
"742596"
] | [
"202503",
"202503",
"202503",
"202503",
"202503",
"202503"
] |
01484079 | en | [
"chim",
"phys"
] | 2024/03/04 23:41:48 | 2017 | https://hal.univ-lorraine.fr/hal-01484079/file/poster-JEEP2017-final.pdf | Growth of borate-based crystals for blue-UV laser generation by the micro-pulling down technique
Marianne COCHEZ, Michel FERRIOL, Michel AILLERIE Université de Lorraine-CentraleSupélec, Laboratoire Matériaux Optiques, Photonique et Systèmes, Metz, France marianne.cochez@univ-lorraine.fr
Conclusions
Growth conditions:
-Atmosphere: air -Seed: platinum wire -Pulling rates: 0.15 -12 mm.h -1 -Pt/Rh (95/5) or Pt crucible fitted with a capillary (0.7-1.2 mm internal diameter)
Interest
rate (up to 12 mm/h) -Fiber diameter : 50 µm to several mm Final goal -studying suitable materials for frequency conversion in the blue-UV range -growth of good optical quality fiber crystals by the micro-pulling down technique ( µ-PD) Growth of borate-based crystals by m-PD : not an easy task ! rate : 3 mm.h -1 -Capillary diameter : 0,9-1 mm Transparent fiber BZBO: growth of transparent and colorless crystals possible with a very low pulling rate. CBF: growth conditions well defined. Good quality crystal fibers obtained BCBF: growth impossible by m-PD LBGO: LiF-B 2 O 3 flux needs to be improved and optimized to hope obtaining single-crystal fibers | 1,155 | [
"7748",
"1003503",
"4548"
] | [
"202503",
"202503",
"202503"
] |
01484113 | en | [
"info"
] | 2024/03/04 23:41:48 | 2017 | https://inria.hal.science/hal-01484113/file/ipdps-final.pdf | Mathieu Faverge
Julien Langou
Yves Robert
Jack Dongarra
Bidiagonalization and R-Bidiagonalization: Parallel Tiled Algorithms, Critical Paths and Distributed-Memory Implementation
Keywords: bidiagonalization, R-bidiagonalization, critical path, greedy algorithms, auto-adaptive reduction tree
We study tiled algorithms for going from a "full" matrix to a condensed "band bidiagonal" form using orthogonal transformations: (i) the tiled bidiagonalization algorithm BIDIAG, which is a tiled version of the standard scalar bidiagonalization algorithm; and (ii) the R-bidiagonalization algorithm R-BIDIAG, which is a tiled version of the algorithm which consists in first performing the QR factorization of the initial matrix, then performing the band-bidiagonalization of the Rfactor. For both BIDIAG and R-BIDIAG, we use four main types of reduction trees, namely FLATTS, FLATTT, GREEDY, and a newly introduced auto-adaptive tree, AUTO. We provide a study of critical path lengths for these tiled algorithms, which shows that (i) R-BIDIAG has a shorter critical path length than BIDIAG for tall and skinny matrices, and (ii) GREEDY based schemes are much better than earlier proposed algorithms with unbounded resources. We provide experiments on a single multicore node, and on a few multicore nodes of a parallel distributed sharedmemory system, to show the superiority of the new algorithms on a variety of matrix sizes, matrix shapes and core counts.
I. INTRODUCTION
This work is devoted to the design and comparison of tiled algorithms for the bidiagonalization of large matrices. Bidiagonalization is a widely used kernel that transforms a full matrix into bidiagonal form using orthogonal transformations. In many algorithms, the bidiagonal form is a critical step to compute the singular value decomposition (SVD) of a matrix. The necessity of computing the SVD is present in many computational science and engineering areas. Based on the Eckart-Young theorem [START_REF] Eckart | The approximation of one matrix by another of lower rank[END_REF], we know that the singular vectors associated with the largest singular values represent the best way (in the 2-norm sense) to approximate the matrix. This approximation result leads to many applications, since it means that SVD can be used to extract the "most important" information of a matrix. We can use the SVD for compressing data or making sense of data. In this era of Big Data, we are interested in very large matrices. To reference one out of many application, SVD is needed for principal component analysis (PCA) in Statistics, a widely used method in applied multivariate data analysis.
We consider algorithms for going from a "full" matrix to a condensed "band bidiagonal" form using orthogonal transformations. We use the framework of "algorithms by tiles". Within this framework, we study: (i) the tiled bidiagonalization algorithm BIDIAG, which is a tiled version of the standard scalar bidiagonalization algorithm; and (ii) the R-bidiagonalization algorithm R-BIDIAG, which is a tiled version of the algorithm which consists in first performing the QR factorization of the initial matrix, then performing the band-bidiagonalization of the R-factor. For both bidiagonalization algorithms BIDIAG and R-BIDIAG, we use HQRbased reduction trees, where HQR stands for the Hierarchical QR factorization of a tiled matrix [START_REF] Dongarra | Hierarchical QR factorization algorithms for multi-core cluster systems[END_REF]. Considering various reduction trees gives us the flexibility to adapt to matrix shape and machine architecture. In this work, we consider many types of reduction trees. In shared memory, they are named FLATTS, FLATTT, GREEDY, and a newly introduced auto-adaptive tree, AUTO. In distributed memory, they are somewhat more complex and take into account the topology of the machine. The main contributions are the following:
• The design and comparison of the BIDIAG and R-BIDIAG tiled algorithms with many types of reduction trees. There is considerable novelty in this. Previous work [START_REF] Haidar | An improved parallel singular value algorithm and its implementation for multicore hardware[END_REF], [START_REF] Haidar | A comprehensive study of task coalescing for selecting parallelism granularity in a twostage bidiagonal reduction[END_REF], [START_REF] Ltaief | Parallel two-sided matrix reduction to band bidiagonal form on multicore architectures[END_REF], [START_REF] Ltaief | High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures[END_REF] on tiled bidiagonalization has only considered one type of tree (FLATTS tree) with no R-BIDIAG. Previous work [START_REF] Ltaief | Enhancing parallelism of tile bidiagonal transformation on multicore architectures using tree reduction[END_REF] has considered GREEDY trees for only half of the steps in BIDIAG and does not consider R-BIDIAG. This paper is the first to study R-BIDIAG for tiled bidiagonalization algorithm. and to study GREEDY trees for both steps of the tiled bidiagonalization algorithm.
• A detailed study of critical path lengths for FLATTS, FLATTT, GREEDY with BIDIAG and R-BIDIAG (so six different algorithms in total), which shows that: (i) the newly-introduced GREEDY based schemes (BIDIAG and R-BIDIAG) are much better than earlier proposed variants with unbounded resources and no communication: for matrices of p × q tiles, p ≥ q, their critical paths have a length Θ(q log2(p)) instead of Θ(pq) for FLATTS and FLATTT; (ii) BIDIAGGREEDY has a shorter critical path length than R-BIDIAGGREEDY for square matrices; it is the opposite for tall and skinny matrices, and the asymptotic ratio is 1/(1 + α/2) for tiled matrices of size p × q when p = βq^(1+α), with 0 ≤ α < 1.
• Implementation of our algorithms in DPLASMA [START_REF]Flexible development of dense linear algebra algorithms on massively parallel architectures with DPLASMA[END_REF], which runs on top of the PARSEC runtime system [START_REF] Bosilca | DAGuE: A generic distributed DAG engine for High Performance Computing[END_REF], and which enables parallel distributed experiments on multicore nodes. All previous tiled bidiagonalization studies [START_REF] Haidar | An improved parallel singular value algorithm and its implementation for multicore hardware[END_REF], [START_REF] Haidar | A comprehensive study of task coalescing for selecting parallelism granularity in a twostage bidiagonal reduction[END_REF], [START_REF] Ltaief | Parallel two-sided matrix reduction to band bidiagonal form on multicore architectures[END_REF]- [START_REF] Ltaief | High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures[END_REF] were limited to shared-memory implementations.
• Experiments on a single multicore node, and on a few multicore nodes of a parallel distributed shared-memory system, show the superiority of the new algorithms on a variety of matrix sizes, matrix shapes and core counts. AUTO outperforms its competitors in almost every test case, hence standing as the best algorithmic choice for most users.
The rest of the paper is organized as follows. Section II provides a detailed overview of related work. Section III describes the BIDIAG and R-BIDIAG algorithms with the FLATTS, FLATTT and GREEDY trees. Section IV is devoted to the analysis of the critical paths of all variants. Section V outlines our implementation, and introduces the new AUTO reduction tree. Experimental results are reported in Section VI. Conclusion and hints for future work are given in Section VII.
II. RELATED WORK
This section surveys the various approaches to compute the singular values of a matrix, and positions our new algorithm with respect to existing numerical software kernels.
Computing the SVD. Computing the SVD of large matrices in an efficient and scalable way, is an important problem that has gathered much attention. The matrices considered here are rectangular m-by-n, with m ≥ n. We call GE2VAL the problem of computing (only) the singular values of a matrix, and GESVD the problem of computing the singular values and the associated singular vectors.
From full to bidiagonal form. Many SVD algorithms first reduce the matrix to bidiagonal form with orthogonal transformations (GE2BD step), then process the bidiagonal matrix to obtain the sought singular values (BD2VAL step). These two steps (GE2BD and BD2VAL) are very different in nature. GE2BD can be done in a known number of operations and has no numerical difficulties. On the other hand, BD2VAL requires the convergence of an iterative process and is prone to numerical difficulties. This paper mostly focuses on GE2BD: reduction from full to bidiagonal form. Clearly, GE2BD+BD2VAL solves GE2VAL: computing (only) the singular value of a matrix. If the singular vectors are desired (GESVD), one can also compute them by accumulating the "backward" transformations; in this example, this would consist in a VAL2BD step followed by a BD2GE step. Golub and Kahan [START_REF] Golub | Calculating the singular values and pseudoinverse of a matrix[END_REF] provides a singular value solver based on an initial reduction to bidiagonal form. In [START_REF] Golub | Calculating the singular values and pseudoinverse of a matrix[END_REF]Th. 1], the GE2BD step is done using a QR step on the first column, then an LQ step on the first row, then a QR step on the second column, etc. The steps are done one column at a time using Householder transformation. This algorithm is implemented as a Level-2 BLAS algorithm in LAPACK as xGEBD2. For an m-by-n matrix, the cost of this algorithm is (approximately) 4mn 2 -4 3 n 3 . Level 3 BLAS for GE2BD. Dongarra, Sorensen and Hammarling [START_REF] Dongarra | Block reduction of matrices to condensed forms for eigenvalue computations[END_REF] explains how to incorporate Level-3 BLAS in LAPACK xGEBD2. The idea is to compute few Householder transformations in advance, and then to accumulate and apply them in block using the WY transform [START_REF] Bischof | The WY representation for products of Householder matrices[END_REF]. This algorithm is available in LAPACK (using the compact WY transform [START_REF] Schreiber | A storage-efficient WY representation for products of householder transformations[END_REF]) as xGEBRD. Großer and Lang [START_REF] Großer | Efficient parallel reduction to bidiagonal form[END_REF]Table 1] explain that this algorithm performs (approximately) 50% of flops in Level 2 BLAS (computing and accumulating Householder vectors) and 50% in Level 3 BLAS (applying Householder vectors). In 1995, Choi, Dongarra and Walker [START_REF] Choi | The design of a parallel dense linear algebra software library: Reduction to Hessenberg, tridiagonal, and bidiagonal form[END_REF] presents the SCALAPACK version, PxGEBRD, of the LAPACK xGEBRD algorithm of [START_REF] Dongarra | Block reduction of matrices to condensed forms for eigenvalue computations[END_REF].
Multi-step approach. Further improvements for GE2BD (detailed thereafter) are possible. These improvements rely on combining multiple steps. These multi-step methods will perform in general much better for GE2VAL (when only singular values are sought) than for GESVD (when singular values and singular vectors are sought). When singular values and singular vectors are sought, all the "multi" steps have to be performed in "reverse" on the singular vectors adding a non-negligible overhead to the singular vector computation.
Preprocessing the bidiagonalization with a QR factorization (preQR step). Chan [START_REF] Chan | An improved algorithm for computing the singular value decomposition[END_REF] explains that, for talland-skinny matrices, in order to perform less flops, one can pre-process the bidiagonalization step (GE2BD) with a QR factorization. In other words, Chan propose to do preQR(m,n)+GE2BD(n,n) instead of GE2BD(m,n). A curiosity of this algorithm is that it introduces nonzeros where zeros were previously introduced; yet, there is a gain in term of flops. Chan proves that the crossover points when preQR(m,n)+GE2BD(n,n) performs less flops than GE2BD(m,n) is when m is greater than 5 3 n. Chan also proved that, asymptotically, preQR(m,n)+GE2BD(n,n) will perform half the flops than GE2BD(m,n) for a fixed n and m going to infinity. If the singular vectors are sought, preQR has more overhead: (1) the crossover point is moved to more tall-andskinny matrices, and there is less gain; also [START_REF] Bischof | The WY representation for products of Householder matrices[END_REF] there is some complication as far as storage goes.
Two-step approach: GE2BND+BND2BD. In 1999, Großer and Lang [START_REF] Großer | Efficient parallel reduction to bidiagonal form[END_REF] studied a two-step approach for GE2BD: (1) go from full to band (GE2BND), (2) then go from band to bidiagonal (BND2BD). In this scenario, GE2BND has most of the flops and performs using Level-3 BLAS kernels; BND2BD is not using Level-3 BLAS but it executes much less flops and operates on a smaller data footprint that might fit better in cache. There is a trade-off for the bandwidth to be chosen. If the bandwidth is too small, then the first step (GE2BND) will have the same issues as GE2BD. If the bandwidth is too large, then the second step BND2BD will have many flops and dominates the run time.
Tiled Algorithms for the SVD. In the context of massive parallelism, and of reducing data movement, many dense linear algebra algorithms operates on tiles of the matrix, and tasks are scheduled thanks to a runtime. In the context of the SVD, tiled algorithms naturally leads to band bidiagonal form. Ltaief, Kurzak and Dongarra [START_REF] Ltaief | Parallel two-sided matrix reduction to band bidiagonal form on multicore architectures[END_REF] present a tiled algorithm for GE2BND (to go from full to band bidiagonal form). Ltaief, Luszczek, Dongarra [START_REF] Ltaief | High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures[END_REF] add the second step (BND2BD) and present a tiled algorithm for GE2VAL using GE2BND+BND2BD+BD2VAL. Ltaief, Luszczek, and Dongarra [START_REF] Ltaief | Enhancing parallelism of tile bidiagonal transformation on multicore architectures using tree reduction[END_REF] improve the algorithm for tall and skinny matrices by using "any" tree instead of flat trees in the QR steps. Haidar, Ltaief, Luszczek and Dongarra [START_REF] Haidar | A comprehensive study of task coalescing for selecting parallelism granularity in a twostage bidiagonal reduction[END_REF] improve the BND2BD step of [START_REF] Ltaief | High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures[END_REF]. Finally, in 2013, Haidar, Kurzak, and Luszczek [START_REF] Haidar | An improved parallel singular value algorithm and its implementation for multicore hardware[END_REF] consider the problem of computing singular vectors (GESVD) by performing GE2BND+BND2BD+BD2VAL+VAL2BD+BD2BND+BND2GE. They show that the two-step approach (from full to band, then band to bidiagonal) can be successfully used not only for computing singular values, but also for computing singular vectors.
BND2BD step. The algorithm in LAPACK for BND2BD is xGBBRD. In 1996, Lang [START_REF] Lang | Parallel reduction of banded matrices to bidiagonal form[END_REF] improved the sequential version of the algorithm and developed a parallel distributed algorithm. Recently, PLASMA released an efficient multithreaded implementation [START_REF] Haidar | A comprehensive study of task coalescing for selecting parallelism granularity in a twostage bidiagonal reduction[END_REF], [START_REF] Ltaief | High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures[END_REF], and Rajamanickam [START_REF] Rajamanickam | Efficient algorithms for sparse singular value decomposition[END_REF] also worked on this step.
BD2VAL step. Much research has been done on this kernel. Much software exists. In LAPACK, to compute the singular values and optionally the singular vectors of a bidiagonal matrix, the routine xBDSQR uses the Golub-Kahan QR algorithm [START_REF] Golub | Calculating the singular values and pseudoinverse of a matrix[END_REF]; the routine xBDSDC uses the divide-and-conquer algorithm [START_REF] Gu | A divide-and-conquer algorithm for the bidiagonal svd[END_REF]; and the routine xBDSVX uses bisection and inverse iteration algorithm. Recent research was trying to apply the MRRR (Multiple Relatively Robust Representations) method [START_REF] Willems | Computing the bidiagonal svd using multiple relatively robust representations[END_REF] to the problem.
BND2BD+BD2VAL steps in this paper. This paper focuses neither on BND2BD nor BD2VAL. As far as we are concerned, we can use any of the methods mentioned above. The faster these two steps are, the better for us. For this study, during the experimental section, for BND2BD, we use the PLASMA multi-threaded implementation [START_REF] Haidar | A comprehensive study of task coalescing for selecting parallelism granularity in a twostage bidiagonal reduction[END_REF], [START_REF] Ltaief | High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures[END_REF] and, for BD2VAL, we use LAPACK xBDSQR.
III. TILED BIDIAGONALIZATION ALGORITHMS
A. QR factorization Tiled algorithms are expressed in terms of tile operations rather than elementary operations. Each tile is of size n b × n b , where n b is a parameter tuned to squeeze the most out of arithmetic units and memory hierarchy. Typically, n b ranges from 80 to 200 on state-of-the-art machines [START_REF] Agullo | Comparative study of one-sided factorizations with multiple software packages on multi-core hardware[END_REF]. Consider a rectangular tiled matrix A of size p×q. The actual size of A is thus m × n, where m = pn b and n = qn b . In Algorithm 1, k is the step, and also the panel index, and elim(i, piv(i, k), k) is an orthogonal transformation that combines rows i and piv(i, k) to zero out the tile in position (i, k). To implement elim(i, piv(i, k), k), one can use six different kernels, whose costs are given in Table I. In this table, the unit of time is the time to perform n 3 b 3 floating-point operations. There are two main possibilities. The first version eliminates tile (i, k) with the TS (Triangle on top of square) kernels, while the second version uses TT (Triangle on top of triangle) kernels. In a nutshell, TT kernels allow for more parallelism, using several eliminators per panel simultaneously, but they reach only a fraction of the performance of TS kernels. See [START_REF] Bouwmeester | Tiled QR factorization algorithms[END_REF], [START_REF] Dongarra | Hierarchical QR factorization algorithms for multi-core cluster systems[END_REF] or the extended version of this work [START_REF] Faverge | Bidiagonalization with parallel tiled algorithms[END_REF] for details. There are many algorithms to compute the QR factorization of A, and we refer to [START_REF] Bouwmeester | Tiled QR factorization algorithms[END_REF] for a survey. We use the three following variants:
• FLATTS: This algorithm with TS kernels is the reference algorithm used in [START_REF] Buttari | Parallel tiled QR factorization for multicore architectures[END_REF], [START_REF] Buttari | A class of parallel tiled linear algebra algorithms for multicore architectures[END_REF]. At step k, the pivot row is always row k, and we perform the eliminations
elim(i, k, k) in sequence, for i = k + 1, i = k + 2 down to i = p.
• FLATTT: This algorithm is the counterpart of the FLATTS algorithm with TT kernels. It uses exactly the same elimination operations, but with different kernels.
• Greedy: This algorithm is asymptotically optimal, and turns out to be the most efficient on a variety of platforms [START_REF] Bouwmeester | Tiled Algorithms for Matrix Computations on Multicore Architectures[END_REF], [START_REF] Dongarra | Hierarchical QR factorization algorithms for multi-core cluster systems[END_REF]. It eliminates many tiles in parallel at each step, using a reduction tree (see [START_REF] Bouwmeester | Tiled QR factorization algorithms[END_REF] for a detailed description).
Algorithm 1: QR(p, q) algorithm for a tiled matrix of size (p, q).
for k = 1 to min(p, q) do
Step k, denoted as QR(k):
for i = k + 1 to p do elim(i, piv(i, k), k) Algorithm 2:
Step LQ(k) for a tiled matrix of size p × q.
Step k, denoted as LQ(k): for j = k + 1 to q do col-elim(j, piv(j, k), k) Table I: Kernels for tiled QR. The unit of time is
n 3 b 3
, where n b is the blocksize.
B. Bidiagonalization
Consider a rectangular tiled matrix A of size p×q, with p ≥ q. The bidiagonalization algorithm BIDIAG proceeds as the QR factorization, but interleaves one step of LQ factorization between two steps of QR factorization (see Figure 1). More precisely, BIDIAG executes the sequence QR(1); LQ(1); QR(2); LQ(2) . . . QR(q-1); LQ(q-1); QR(q)
where QR(k) is the step k of the QR algorithm (see Algorithm 1), and LQ(k) is the step k of the LQ algorithm. The latter is a right factorization step that executes the column-oriented eliminations shown in Algorithm 2.
Figure 1: Snapshots of the bidiagonalization algorithm BIDIAG (successive panels QR(1), LQ(1), QR(2), LQ(2), QR(3), LQ(3), QR(4), LQ(4), QR(5), LQ(5), QR(6)).
In Algorithm 2, col-elim(j, piv(k, j), k) is an orthogonal transformation that combines columns j and piv(k, j) to zero out the tile in position (k, j). It is the exact counterpart to the row-oriented eliminations elim(i, piv(i, k), k) and can be implemented with the very same kernels, either TS or TT.
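For reference, here is a minimal Python sketch (ours, not library code) that spells out this alternating sequence together with the trailing matrix sizes used in the critical path analysis of Section IV.

# Python sketch (ours): the alternating step sequence executed by BIDIAG on a p x q
# tile matrix; the third field is the trailing size (rows, cols) reduced at that step.
def bidiag_sequence(p, q):
    seq = []
    for k in range(1, q):
        seq.append(("QR", k, (p - k + 1, q - k + 1)))
        seq.append(("LQ", k, (p - k + 1, q - k)))
    seq.append(("QR", q, (p - q + 1, 1)))
    return seq

print(bidiag_sequence(4, 3))
# [('QR', 1, (4, 3)), ('LQ', 1, (4, 2)), ('QR', 2, (3, 2)), ('LQ', 2, (3, 1)), ('QR', 3, (2, 1))]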
C. R-Bidiagonalization
When p is much larger than q, R-bidiagonalization should be preferred, if minimizing the operation count is the objective. This R-BIDIAG algorithm does a QR factorization of A, followed by a bidiagonalization of the upper square q × q matrix. In other words, given a rectangular tiled matrix A of size p × q, with p ≥ q, R-BIDIAG executes the sequence QR(p, q); LQ(1); QR(2); LQ(2); QR(3) . . . LQ(q-1); QR(q). Let m = pn_b and n = qn_b be the actual size of A (element wise). The number of arithmetic operations is 4n²(m − n/3) for BIDIAG and 2n²(m + n) for R-BIDIAG [18, p. 284]. These numbers show that R-BIDIAG is less costly than BIDIAG whenever m ≥ 5n/3, or equivalently, whenever p ≥ 5q/3. One major contribution of this paper is to provide a comparison of BIDIAG and R-BIDIAG in terms of parallel execution time, instead of operation count.
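The crossover quoted above is easy to check numerically; the following Python sketch (ours) evaluates both operation counts.

# Python sketch (ours): flop counts of BIDIAG and R-BIDIAG and the m >= 5n/3 crossover.
def flops_bidiag(m, n):
    return 4 * n * n * (m - n / 3.0)

def flops_rbidiag(m, n):
    return 2 * n * n * (m + n)

n = 3000
for ratio in (1.0, 5.0 / 3.0, 3.0):
    m = int(ratio * n)
    print(ratio, flops_bidiag(m, n) / flops_rbidiag(m, n))
# ratio 1 -> ~0.67 (BIDIAG cheaper), ratio 5/3 -> 1.0, ratio 3 -> ~1.33 (R-BIDIAG cheaper)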
IV. CRITICAL PATHS
In this section, we compute exact or estimated values of the critical paths of the BIDIAG and R-BIDIAG algorithms with the FLATTS, FLATTT, and GREEDY trees.
A. Bidiagonalization
To compute the critical path, given the sequence executed by BIDIAG, we first observe that there is no overlap between two consecutive steps QR(k) and LQ(k). To see why, consider w.l.o.g. the first two steps QR(1) and LQ(1) on Figure 1. Tile (1, 2) is used at the end of the QR(1) step to update the last row of the trailing matrix (whichever it is). In passing, note that all columns in this last row are updated in parallel, because we assume unlimited resources when computing critical paths. But tile (1, 2) is the first tile modified by the LQ(1) step, hence there is no possible overlap. Similarly, there is no overlap between two consecutive steps LQ(k) and QR(k +1). Consider steps LQ(1) and QR(2) on Figure 1. Tile (2, 2) is used at the end of the LQ(1) step to update the last column of the trailing matrix (whichever it is), and it is the first tile modified by the QR(2) step.
As a consequence, the critical path of BIDIAG is the sum of the critical paths of each step. From [START_REF] Bouwmeester | Tiled Algorithms for Matrix Computations on Multicore Architectures[END_REF], [START_REF] Bouwmeester | Tiled QR factorization algorithms[END_REF], [START_REF] Dongarra | Hierarchical QR factorization algorithms for multi-core cluster systems[END_REF] we have the following values for the critical path of one QR step applied to a tiled matrix of size (u, v):
FLATTS: QR-FTS_1step(u, v) = 4 + 6(u − 1) if v = 1, and 4 + 6 + 12(u − 1) otherwise.
FLATTT: QR-FTT_1step(u, v) = 4 + 2(u − 1) if v = 1, and 4 + 6 + 6(u − 1) otherwise.
GREEDY: QR-GRE_1step(u, v) = 4 + 2⌈log2(u)⌉ if v = 1, and 4 + 6 + 6⌈log2(u)⌉ otherwise.
The critical path of one LQ step applied to a tiled matrix of size (u, v) is LQ 1step (u, v) = QR 1step (v, u). Finally, in the BIDIAG algorithm, the size of the matrix for step QR(k) is (p -k + 1, q -k + 1) and the size of the matrix for step LQ(k) is (p -k + 1, q -k). We derive the following values:
• FLATTS: BIDIAGFLATTS(p, q) = 12pq − 6p + 2q − 4
• FLATTT: BIDIAGFLATTT(p, q) = 6pq − 4p + 12q − 10
• GREEDY: BIDIAGGREEDY(p, q) = Σ_{k=1}^{q−1} (10 + 6⌈log2(p + 1 − k)⌉) + Σ_{k=1}^{q−1} (10 + 6⌈log2(q − k)⌉) + (4 + 2⌈log2(p + 1 − q)⌉)
If q is a power of two, we derive that BIDIAGGREEDY(q, q) = 12q log2(q) + 8q − 6 log2(q) − 4. If both p and q are powers of two, with p > q, we obtain BIDIAGGREEDY(p, q) = 6q log2(p) + 6q log2(q) + 14q − 4 log2(p) − 6 log2(q) − 10. For the general case, see [START_REF] Faverge | Bidiagonalization with parallel tiled algorithms[END_REF] for the exact but complicated formula. Simpler bounds are obtained by rounding down and up the ceiling function in the logarithms [START_REF] Faverge | Bidiagonalization with parallel tiled algorithms[END_REF]. Here, we content ourselves with an asymptotic analysis for large matrices. Take p = βq^{1+α}, with 0 ≤ α. We obtain that
lim_{q→∞} BIDIAGGREEDY(βq^{1+α}, q) / ((12 + 6α) q log2(q)) = 1.   (1)
Equation (1) shows that BIDIAGGREEDY is an order of magnitude faster than FLATTS or FLATTT. For instance when α = 0, hence when p = βq and q are proportional (with β ≥ 1), we have BIDIAGFLATTS(βq, q) = 12βq² + O(q), BIDIAGFLATTT(βq, q) = 6βq² + O(q), and BIDIAGGREEDY(βq, q) = 12q log2(q) + O(q).
In fact, we have derived a stronger result: the optimal critical path of BIDIAG(p, q) with p = βq 1+α is asymptotically equivalent to (12 + 6α)q log 2 (q), regardless of the reduction tree used for each QR and LQ step: this is because GREEDY is optimal (up to a constant) for each step [START_REF] Bouwmeester | Tiled Algorithms for Matrix Computations on Multicore Architectures[END_REF], hence BIDIAGGREEDY is optimal up to a linear factor in q, hence asymptotically optimal.
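These formulas are straightforward to evaluate; the Python sketch below (ours) computes the exact BIDIAGGREEDY critical path from the per-step costs above and compares it with the 12q log2(q) asymptote for square matrices.

# Python sketch (ours): BIDIAGGREEDY critical path by summation of the per-step costs.
from math import ceil, log2

def qr_greedy_step(u, v):
    # critical path of one GREEDY QR step on a (u, v) tile matrix (formulas above)
    return 4 + 2 * ceil(log2(u)) if v == 1 else 10 + 6 * ceil(log2(u))

def bidiag_greedy_cp(p, q):
    cp = sum(qr_greedy_step(p + 1 - k, q + 1 - k) for k in range(1, q))   # QR(1)..QR(q-1)
    cp += sum(qr_greedy_step(q - k, p + 1 - k) for k in range(1, q))      # LQ(1)..LQ(q-1)
    return cp + qr_greedy_step(p + 1 - q, 1)                              # final QR(q)

for q in (64, 256, 1024):
    print(q, bidiag_greedy_cp(q, q), round(12 * q * log2(q)))             # exact vs asymptote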
B. R-Bidiagonalization
Computing the critical path of R-BIDIAG is more difficult than for BIDIAG, because kernels partly overlap. For example, there is no need to wait for the end of the (left) QR factorization to start the first (right) factorization step LQ(1). In fact, this step can start as soon as the first step QR( 1) is over because the first row of the matrix is no longer used throughout the whole QR factorization at this point. However, the interleaving of the following kernels gets quite intricate. Since taking it into account, or not, does not change the higher-order terms, in the following we simply sum up the values obtained without overlap, adding the cost of the QR factorization of size (p, q) to that of the bidiagonalization of the top square (q, q) matrix, and subtracting step QR(1) as discussed above.
Due to lack of space, we refer to [START_REF] Faverge | Bidiagonalization with parallel tiled algorithms[END_REF] for critical path values of R-BIDIAG(p, q) with FLATTS and FLATTT. Here, we concentrate on the most efficient tree, GREEDY. The key result is the following: combining [5, Theorem 3.5] with Theorem 3 of [START_REF] Cosnard | Parallel QR decomposition of a rectangular matrix[END_REF], we derive that the cost QR-GRE of the QR factorization with GREEDY is QR-GRE(p, q) = 22q + o(q) whenever p = o(q²). This leads to R-BIDIAGGREEDY(p, q) ≤ (22q + o(q)) + (12q log2(q) + (20 − 12 log2(e))q + o(q)) − o(q) = 12q log2(q) + (42 − 12 log2(e))q + o(q) whenever p = o(q²).
Again, we are interested in the asymptotic analysis of R-BIDIAGGREEDY, and in the comparison with BIDIAG. In fact, when p = o(q 2 ), say p = βq 1+α , with 0 ≤ α < 1, the cost of the QR factorization QR(p, q) is negligible in front of the cost of the bidiagonalization BIDIAGGREEDY(q, q), so that R-BIDIAGGREEDY(p, q) is asymptotically equivalent to BIDIAGGREEDY(q, q), and we derive that:
lim_{q→∞} BIDIAGGREEDY(βq^{1+α}, q) / R-BIDIAGGREEDY(βq^{1+α}, q) = 1 + α/2.   (2)
Asymptotically, BIDIAGGREEDY is at least as costly (with equality if p and q are proportional) and at most 1.5 times as costly as R-BIDIAGGREEDY (the maximum ratio being reached when α = 1 − ε for small values of ε).
Just as before, R-BIDIAGGREEDY is asymptotically optimal among all possible reduction trees, and we have proven the following result, where for notational convenience we let BIDIAG(p, q) and R-BIDIAG(p, q) denote the optimal critical path lengths of the algorithms:
Theorem 1. For p = βq^{1+α}, with 0 ≤ α < 1:
lim_{q→∞} BIDIAG(p, q) / ((12 + 6α) q log2(q)) = 1,   lim_{q→∞} BIDIAG(p, q) / R-BIDIAG(p, q) = 1 + α/2.
When p and q are proportional (α = 0, β ≥ 1), both algorithms have the same asymptotic cost 12q log2(q). On the contrary, for very elongated matrices with fixed q ≥ 2, the ratio of the critical path lengths of BIDIAG and R-BIDIAG gets high asymptotically: the cost of the QR factorization is equivalent to 6 log2(p) and that of BIDIAG(p, q) to 6q log2(p). Since the cost of BIDIAG(q, q) is a constant for fixed q, we get a ratio of q. Finally, to give a more practical insight, we provide detailed comparisons of all schemes in [START_REF] Faverge | Bidiagonalization with parallel tiled algorithms[END_REF].
C. Switching from BIDIAG to R-BIDIAG
For square matrices, BIDIAG is better than R-BIDIAG. For tall and skinny matrices, this is the opposite. For a given q, what is the ratio δ = p/q for which we should switch between BIDIAG and R-BIDIAG? Let δ_s denote this crossover ratio. The question was answered by Chan [START_REF] Chan | An improved algorithm for computing the singular value decomposition[END_REF] when considering the operation count, showing that the optimal switching point between BIDIAG and R-BIDIAG when singular values only are sought is δ = 5/3. We consider the same question but when critical path length (instead of number of flops) is the objective function. We provide some experimental data in [START_REF] Faverge | Bidiagonalization with parallel tiled algorithms[END_REF], focusing on BIDIAGGREEDY and R-BIDIAGGREEDY and writing some code snippets that explicitly compute the critical path lengths for given p and q, and find the intersection for a given q. Altogether, we find that δ_s is a complicated function of q, oscillating between 5 and 8.
V. IMPLEMENTATION
To evaluate experimentally the impact of the different reduction trees on the performance of the GE2BND and GE2VAL algorithms, we have implemented both the BIDIAG and R-BIDIAG algorithms in the DPLASMA library [START_REF]Flexible development of dense linear algebra algorithms on massively parallel architectures with DPLASMA[END_REF], which runs on top of the PARSEC runtime system [START_REF] Bosilca | DAGuE: A generic distributed DAG engine for High Performance Computing[END_REF]. PARSEC is a high-performance fully-distributed scheduling environment for generic data-flow algorithms. It takes as input a problem-size-independent, symbolic representation of a Directed Acyclic Graph (DAG) in which each node represents a task, and each edge a dependency, or data movement, from one task to another. PARSEC schedules those tasks on a distributed parallel machine of potentially heterogeneous multi-core nodes, while complying with the dependencies expressed by the programmer. At runtime, task executions trigger data movements and create new ready tasks, following the dependencies defined by the DAG representation. The runtime engine is responsible for actually moving the data from one machine (node) to another, if necessary, using an underlying communication mechanism, like MPI. Tasks that are ready to compute are scheduled according to a data-reuse heuristic: each core will try to execute close successors of the last task it ran, under the assumption that these tasks require data that was just touched by the terminated one. This policy is tuned by the user through a priority function: among the tasks of a given core, the choice is made following this function. To balance load between the cores, tasks of the same cluster in the algorithm (residing on the same shared-memory machine) are shared among the computing cores, and a NUMA-aware job stealing policy is implemented. The user is then only responsible for providing the algorithm, the initial data distribution, and potentially the task distribution. The latter is usually correlated to the data distribution when the (default) owner-compute rule is applied. In our case, we use a 2D block-cyclic data distribution as used in the SCALAPACK library, and we map the computation together with the data. A full description of PARSEC can be found in [START_REF] Bosilca | DAGuE: A generic distributed DAG engine for High Performance Computing[END_REF].
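As a small illustration of the data mapping just mentioned (a sketch of ours, not the actual DPLASMA code; zero-based tile indices and the row-major numbering of the process grid are our assumptions), a 2D block-cyclic distribution assigns tile (i, j) to a process of the R × C grid as follows.

# Python sketch (ours): 2D block-cyclic ownership of tiles on an R x C process grid.
def tile_owner(i, j, R, C):
    # tile (i, j) belongs to process (i mod R, j mod C) of the grid
    return (i % R, j % C)

def node_rank(i, j, R, C):
    r, c = tile_owner(i, j, R, C)
    return r * C + c                      # row-major numbering of the R x C grid

print([[node_rank(i, j, 2, 2) for j in range(4)] for i in range(4)])
# [[0, 1, 0, 1], [2, 3, 2, 3], [0, 1, 0, 1], [2, 3, 2, 3]]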
The implementation of the BIDIAG and R-BIDIAG algorithms has then been designed as an extension of our previous work on HQR factorization [START_REF] Dongarra | Hierarchical QR factorization algorithms for multi-core cluster systems[END_REF] within the DPLASMA library. The HQR algorithm proposes to perform the tiled QR factorization of a (p × q)-tile matrix, with p ≥ q, by using a variety of trees that are optimized for both the target architecture and the matrix size. It relies on multi-level reduction trees. The highest level is a tree of size R, where R is the number of rows in the R × C two-dimensional grid distribution of the matrix, and it is configured by default to be a flat tree if p ≥ 2q, and a Fibonacci tree otherwise. The second level, the domino level, is an optional intermediate level that enhances the pipeline of the lowest levels when they are connected together by the highest distributed tree. It is by default disabled when p ≥ 2q, and enabled otherwise. Finally, the last two levels of trees are used to create parallelism within a node and work only on local tiles. They correspond to a composition of one or multiple FLATTS trees that are connected together with an arbitrary tree of TT kernels. The bottom FLATTS tree enables highly efficient kernels while the TT tree on top of it generates more parallelism to feed all the computing resources of the architecture. The default is to have FLATTS trees of size 4 that are connected by a GREEDY tree in all cases. This design is for QR trees; a similar design exists for LQ trees. Using these building blocks, we have crafted an implementation of BIDIAG and R-BIDIAG within the abridged representation used by PARSEC to represent algorithms. This implementation is independent of the type of trees selected for the computation, thereby allowing the user to test a large spectrum of configurations without the hassle of rewriting all the algorithm variants.
One important contribution is the introduction of two new tree structures dedicated to the BIDIAG algorithm. The first tree, GREEDY, is a binomial tree which reduces a panel in the minimum number of steps. The second tree, AUTO, is an adaptive tree which automatically adapts to the size of the local panel and to the number of computing resources. We developed the auto-adaptive tree to take advantage of (i) the higher efficiency of the TS kernels with respect to the TT kernels, (ii) the highest degree of parallelism of the GREEDY tree with respect to any other tree, and (iii) the complete independence of each step of the BIDIAG algorithm, which precludes any possibility of pipelining. Thus, we propose to combine in this configuration a set of FLATTS trees connected by a GREEDY tree, and to automatically adapt the number of FLATTS trees, and by construction their sizes, a, to provide enough parallelism to the available computing resources. Given a matrix of size p×q, at each step k, we need to apply a QR factorization on a matrix of size (p − k − 1) × (q − k − 1), and the number of parallel tasks available at the beginning of the step is given by (p − k − 1)/a × (q − k − 1). Note that we consider the panel as being computed in parallel with the update, which is the case when a is greater than 1, with an offset of one time unit. Based on this formula, we compute a at each step of the factorization such that the degree of parallelism is greater than a quantity γ × nb_cores, where γ is a parameter and nb_cores is the number of cores. For the experiments, we set γ = 2. Finally, we point out that AUTO is defined for a resource-limited platform, hence computing its critical path would have no meaning, which explains a posteriori why it was not studied in Section IV.
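The rule used by AUTO can be sketched as follows (our own simplification of the description above; the function name and the way the bound is rounded are our assumptions): pick the largest FLATTS sub-tree size a that still exposes at least γ × nb_cores parallel tasks.

# Python sketch (ours): FLATTS sub-tree size a chosen by the AUTO tree at step k.
def auto_subtree_size(p, q, k, nb_cores, gamma=2):
    rows, cols = p - k - 1, q - k - 1          # trailing sizes used in the formula above
    if rows < 1 or cols < 1:
        return 1
    a = (rows * cols) // (gamma * nb_cores)    # largest a with (rows/a)*cols >= gamma*cores
    return max(1, min(a, rows))

print([auto_subtree_size(100, 100, k, nb_cores=24) for k in (0, 40, 80, 95)])
# [99, 59, 7, 1]: long, efficient FLATTS chains early on, smaller ones near the end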
VI. EXPERIMENTS
In this section, we evaluate the performance of the proposed algorithms for the GE2BND kernel against existing competitors.
A. Architecture
Experiments are carried out using the PLAFRIM experimental testbed¹. We used up to 25 nodes of the miriel cluster, each equipped with 2 Dodeca-core Haswell Intel Xeon E5-2680 v3 and 128GB of memory. The nodes are interconnected with an Infiniband QDR TrueScale network which provides a bandwidth of 40Gb/s. All software is compiled with gcc 4.9.2 and linked against the sequential BLAS implementation of the Intel MKL 11.2 library. For the distributed runs, the MPI library used is OpenMPI 2.0.0. The practical GEMM performance is 37 GFlop/s on one core, and 642 GFlop/s when the 24 cores are used. For each experiment, we generated a matrix with prescribed singular values using the LAPACK LATMS matrix generator and checked that the computed singular values were satisfactory up to machine precision.
B. Competitors
This paper presents new parallel distributed algorithms and implementations for GE2BND using DPLASMA. To compare against competitors on GE2VAL, we follow up our DPLASMA GE2BND implementation with the PLASMA multi-threaded BND2BD algorithm, and then use the Intel MKL multi-threaded BD2VAL implementation. We thus obtain GE2VAL by doing GE2BND+BND2BD+BD2VAL.
It is important to note that we do not use parallel distributed implementations for either BND2BD or BD2VAL; we only use shared-memory implementations for these last two steps. Thus, for our distributed-memory runs, after the GE2BND step is performed in parallel distributed mode using DPLASMA, the band is gathered on a single node, and BND2BD+BD2VAL is performed by this node while all other nodes are left idle. We will show that, despite this current limitation in the parallel distributed case, our implementation outperforms its competitors.
Figure 2: Shared memory performance of the multiple variants for the GE2BND algorithm on the first row, and for the GE2VAL algorithm on the second row, using a single 24-core node of the miriel cluster.
On the square test cases, only 23 cores of a 24-core node were used for computation, and the 24th core was left free to handle MPI communication progress. The implementation of the algorithm is available in a public fork of the DPLASMA library at https://bitbucket.org/mfaverge/parsec. PLASMA is the closest alternative to our proposed solution, but it only uses FLATTS as its reduction tree, is limited to single-node platforms, and is supported by a different runtime. For both our code and PLASMA, the tile size parameter is critical to get good performance: a large tile size yields higher kernel efficiency and a faster computation of the band, but it increases the number of flops of the BND2BD step, which is heavily memory bound. On the contrary, a small tile size speeds up the BND2BD step by fitting the band into cache memory, but decreases the efficiency of the kernels used in the GE2BND step. We tuned the n_b (tile size) and i_b (internal blocking in TS and TT kernels) parameters to get the best performance on the square cases m = n = 20000 and m = n = 30000 with the PLASMA code. The selected values are n_b = 160 and i_b = 32. We used the same parameters in the DPLASMA implementation for both the shared memory runs and the distributed ones. The PLASMA 2.8.0 library was used.
Intel MKL proposes a multi-threaded implementation of the GE2VAL algorithm which gained an important speedup when switching from version 11.1 to 11.2 [START_REF] Vipin | Significant performance improvement of symmetric eigensolvers and SVD in Intel MKL 11[END_REF]. While it is unclear which algorithm is used underneath, the speedup reflects the move to a multi-stage algorithm. Intel MKL is limited to single-node platforms.
SCALAPACK implements the parallel distributed version of the LAPACK GEBRD algorithm, which interleaves phases of memory-bound BLAS2 calls with compute-bound BLAS3 calls. It can be used either with one process per core and a sequential BLAS implementation, or with one process per node and a multi-threaded BLAS implementation. The latter being less efficient, we used the former for the experiments. The blocking size n_b is critical for performance since it impacts the phase interleaving. We tuned the n_b parameter to get the best performance on a single node with the same test cases as for PLASMA, and n_b = 48 was selected.
Elemental implements an algorithm similar to SCALAPACK, but it automatically switches to Chan's algorithm [START_REF] Chan | An improved algorithm for computing the singular value decomposition[END_REF] when m ≥ 1.2n. As for SCALAPACK, it is possible to use it either as a pure MPI implementation or as a hybrid MPI-thread implementation. The first one being recommended, we used this solution. Tuning the n_b parameter as for the previous libraries gave us the value n_b = 96. Elemental also provides a better algorithm developed on top of LibFLAME [START_REF] Gunnels | Flame: Formal linear algebra methods environment[END_REF], but it is used only when singular vectors are sought.
In the following, we compare all these implementations on the miriel cluster with 3 main configurations: (i) square matrices; (ii) tall and skinny matrices with n = 2,000 (this choice restricts the level of parallelism induced by the number of panels to half the cores); and (iii) tall and skinny matrices with n = 10,000 (this choice enables more parallelism). For all performance comparisons, we use the same operation count as in [3, p. 123] for the GE2BND and GE2VAL algorithms. The BD2VAL step has a negligible cost O(n²). For R-BIDIAG, we use the same number of flops as for BIDIAG; we do not assess the absolute performance of R-BIDIAG, instead we provide a direct comparison with BIDIAG.
Figure 3: Distributed memory performance of the multiple variants for the GE2BND and the GE2VAL algorithms, respectively on the top and bottom row, on the miriel cluster. Grid data distributions are √nb_nodes × √nb_nodes for square matrices, and nb_nodes × 1 for tall and skinny matrices. For the square case, solid lines are for m = n = 20,000 and dashed lines for m = n = 30,000; the tall and skinny cases are (m = 2,000,000, n = 2,000) and (m = 1,000,000, n = 10,000).
C. Shared Memory
The top row of Figure 2 presents the performance of the three configurations selected for our study of GE2BND. On the top left, the square case perfectly illustrates the strengths and weaknesses of each configuration. On small matrices, FLATTT (in blue) and GREEDY (in green) illustrate the importance of algorithmically creating more parallelism to feed all resources. However, on large problem sizes, the performance is limited by the lower efficiency of the TT kernels. The FLATTS tree behaves in the opposite way: it provides better asymptotic performance thanks to the TS kernels, but lacks parallelism when the problem is too small to feed all cores. AUTO is able to combine the advantages of both the GREEDY and FLATTS trees to provide a significant improvement on small matrices, and a 10% speedup on the larger matrices.
For the tall and skinny matrices, we observe that the R-BIDIAG algorithm (dashed lines) quickly outperforms the BIDIAG algorithm, and is up to 1.8 times faster. On the small case (n = 2,000), the crossover point is immediate, and both FLATTT and GREEDY, exposing more parallelism, are able to get better performance than FLATTS. On the larger case (n = 10,000), the parallelism from the larger matrix size allows FLATTS to perform better, and postpones the crossover point due to the ratio in the number of flops. In both cases, AUTO provides the best performance with an extra 100 GFlop/s.
On the bottom row of Figure 2, we compare our best solutions, namely the AUTO tree with BIDIAG for square cases and with R-BIDIAG for tall and skinny cases, to the competitors on the GE2VAL algorithm. The difference between our solution and PLASMA, which uses the FLATTS tree, is not as impressive due to the additional BND2BD and BD2VAL steps, which have limited parallel efficiency. Furthermore, in our implementation, due to the change of runtime, we cannot pipeline the GE2BND and BND2BD steps to partially overlap the second step. However, these two solutions still provide a good improvement over MKL, which is slower on the small cases but overtakes at larger sizes. For such sizes, Elemental and SCALAPACK are not able to scale and reach a maximum of only 50 GFlop/s due to their highly memory-bound algorithm.
On the tall and skinny cases, differences are more pronounced. We see the limitation of using only the BIDIAG algorithm in MKL, PLASMA and SCALAPACK, while our solution and Elemental keep scaling up with matrix size. We also observe that MKL behaves correctly on the second test case, while it quickly saturates on the first one, where the parallelism is less important. In that case, we are able to reach twice the MKL performance.
Figure 4: Study of the distributed weak scalability on tall and skinny matrices of size (80,000 nb_nodes) × 2,000 on the first row, and (100,000 nb_nodes) × 10,000 on the second row. The first column presents the GE2BND performance, the second column the GE2VAL performance, and the third column the GE2VAL scaling efficiency.
D. Distributed Memory
a) Strong Scaling: Figure 3 presents a scalability study of the three variants on 4 cases: two square matrices with BIDIAG, and two tall and skinny matrices with R-BIDIAG. For all of them, we couple high-level distributed trees with low-level shared memory trees. The FLATTS and FLATTT configurations are coupled with a high-level flat tree, while GREEDY and AUTO are coupled with a high-level GREEDY tree. The configuration of the pre-QR step is set up similarly, except for AUTO, which uses the automatic configuration described previously.
In all cases, the performance is as expected. FLATTS, which provides more efficient kernels, barely behaves better on the large square case; GREEDY, which provides better parallelism, is the best solution out of the three on the first tall and skinny case. We also observe the impact of the high-level tree: GREEDY doubles the number of communications on square cases [START_REF] Dongarra | Hierarchical QR factorization algorithms for multi-core cluster systems[END_REF], which impacts its performance and gives an advantage to the flat tree, which performs half the communication volume. Overall, AUTO keeps benefiting from its flexibility, and scales well despite the fact that local matrices are less than 38 × 38 tiles, so less than 2 columns per core.
When considering the full GE2VAL algorithm in Figure 3, we observe a huge drop in the overall performance. This is due to the integration of the shared-memory BND2BD and BD2VAL steps, which do not scale when adding more nodes. For the square case, we added the upper bound that we cannot beat due to those two steps. However, despite this limitation, our solution brings an important speedup to algorithms looking for the singular values, with respect to the competitors presented here. Elemental again benefits from the automatic switch to the R-BIDIAG algorithm, which allows a better scaling on tall and skinny matrices. However, it surprisingly reaches a plateau after 10 nodes, where the performance stops increasing significantly. Our solution automatically adapts to create more or less parallelism, and reduces the amount of communication, which allows it to sustain a good speedup up to 25 nodes (600 cores).
b) Weak Scaling: Figure 4 presents a weak scalability study with tall and skinny matrices of width n = 2,000 on the first row, and n = 10,000 on the second row². As before, FLATTS quickly saturates due to its lack of parallelism. FLATTT is able to compete with, and even to outperform, GREEDY on the larger case due to its lower communication volume. AUTO offers a better scaling and is able to reach 10 TFlop/s, which represents 400 to 475 GFlop/s per node. When comparing to Elemental and SCALAPACK on the GE2VAL algorithm, the proposed solution offers much better scalability. Both Elemental and SCALAPACK suffer from their memory-bound BIDIAG algorithm. With the switch to an R-BIDIAG algorithm, Elemental is able to provide better performance than SCALAPACK, but the lack of scalability of the Elemental QR factorization compared to the HQR implementation quickly limits the overall performance of the GE2VAL implementation.
VII. CONCLUSION
In this paper, we have presented the use of many reduction trees for tiled bidiagonalization algorithms. We proved that, during the bidiagonalization process, the alternating QR and LQ reduction trees cannot overlap. Therefore, minimizing the time of each individual tree will minimize the overall time. Consequently, if one considers an unbounded number of cores and no communication, one will want to use a succession of greedy trees. We show that BIDIAGGREEDY is asymptotically much better than previously presented approaches with FLATTS. In practice, in order to have an effective solution, one has to take into account load balancing and communication, hence we propose trees that adapt to the parallel distributed topology (highest level tree) and enable more sequential but faster kernels on a node (AUTO). We have also studied R-bidiagonalization in the context of tiled algorithms. While R-bidiagonalization is not new, it had never been used in the context of tiled algorithms. Previous work compared bidiagonalization and R-bidiagonalization in terms of flops, while our comparison is conducted in terms of critical path lengths. We show that bidiagonalization has a shorter critical path than R-bidiagonalization for square matrices, that the opposite holds for tall and skinny matrices, and we provide an asymptotic analysis. Throughout this work, we give detailed critical path lengths for many of the algorithms under study. Our implementation is the first parallel distributed tiled algorithm implementation for bidiagonalization. We show the benefit of our approach (DPLASMA) against existing software on a multicore node (PLASMA, Intel MKL, Elemental and ScaLAPACK), and on a few multicore nodes (Elemental and ScaLAPACK) for various matrix sizes, for computing the singular values of a matrix. Future work will be devoted to gaining access to a large distributed platform with a high count of multicore nodes, and to assessing the efficiency and scalability of our parallel distributed BIDIAG and R-BIDIAG algorithms. Other research directions are the following: (i) investigate the trade-off of our approach when singular vectors are requested; a previous study [START_REF] Haidar | An improved parallel singular value algorithm and its implementation for multicore hardware[END_REF] in shared memory was conclusive for FLATTS and no R-BIDIAG (square matrices only); the question is to study the problem on parallel distributed platforms, with or without R-BIDIAG, for various shapes of matrices and various trees; and (ii) develop a scalable parallel distributed BND2BD step; for now, for parallel distributed experiments on many nodes, we are limited in scalability by the BND2BD step, since it is performed using the shared memory library PLASMA on a single node.
Inria PlaFRIM development action with support from Bordeaux INP, LABRI and IMB and other entities: Conseil Régional d'Aquitaine, Université de Bordeaux and CNRS, see https://www.plafrim.fr/.
Experiments for the n = 10,000 case stop at 20 nodes due to the 32-bit integer default interface for all libraries.
Acknowledgements Work by J. Langou was partially supported by NSF award 1054864 and NSF award 1645514. | 52,780 | [
"261",
"739318"
] | [
"409745",
"90029",
"135613",
"136200",
"135613",
"179718",
"6818",
"135613",
"121172",
"28160"
] |
01266215 | en | [
"spi"
] | 2024/03/04 23:41:48 | 2015 | https://hal.science/hal-01266215/file/Usman_CT_Loc_v7_accepted.pdf | Usman A Khan
Anton Korniienko
Karl H Johansson
An H ∞ -based approach for robust sensor localization
In this paper, we consider the problem of sensor localization, i.e., finding the positions of an arbitrary number of sensors located in a Euclidean space, R^m, given at least m+1 anchors with known locations. Assuming that each sensor knows pairwise distances in its neighborhood and that the sensors lie in the convex hull of the anchors, we provide a DIstributed LOCalization algorithm in Continuous-Time, named DILOC-CT, that converges to the sensor locations. The underlying (barycentric) representation is linear and is further decoupled in the coordinates.
By adding a proportional controller in the feed-forward loop of each location estimator, we show that the convergence speed of DILOC-CT can be made arbitrarily fast. Since a large gain may result in unwanted transients, especially in the presence of disturbance introduced, e.g., by communication noise in the network, we use H∞ theory to design local controllers that guarantee a certain global performance while maintaining the desired steady-state. Simulations are provided to illustrate the concepts described in this paper.
I. INTRODUCTION
Localization is often referred to as finding the position of a point in a Euclidean space, R m , given a certain number of anchors, with perfectly known positions, and point-toanchor distances and/or angles. Traditionally, distance-based localization has been referred to as trilateration, whereas angle-based methods are referred to as triangulation. Trilateration is the process of finding a location in R m , given only the distance measurements to at least m + 1 anchors, see Fig. 1 (Left). With m + 1 sensor-to-anchor distances, the nonlinear trilateration problem is to find the intersection of three circles. Triangulation, Fig. 1 (Right), employs the angular information to find the unknown location.
The literature on localization is largely based on the triangulation and trilateration principles, or in some cases, a combination of both. Recent work may be broadly characterized into centralized and distributed algorithms, see [START_REF] Destino | Positioning in Wireless Networks: Non-cooperative and Cooperative Algorithms[END_REF] where a comprehensive coverage of cooperative and noncooperative strategies is provided. Centralized localization algorithms include: maximum likelihood estimators, [START_REF] Moses | A self-localization method for wireless sensor networks[END_REF], [START_REF] Patwari | Relative location estimation in wireless sensor networks[END_REF]; multi-dimensional scaling (MDS), [START_REF] Shang | Localization from mere connectivity[END_REF], [START_REF] Shang | Improved MDS-based localization[END_REF]; optimizationbased methods to include imprecise distance information, see [START_REF] Cao | Localization with imprecise distance information in sensor networks[END_REF]; for additional work, see [START_REF] Patwari | Manifold learning algorithms for localization in wireless sensor networks[END_REF]- [START_REF] Anderson | Formal theory of noisy sensor network localization[END_REF]. Optimization based techniques can be found in [START_REF] Biswas | Semidefinite programming based algorithms for sensor network localization[END_REF], [START_REF] Ding | Sensor network localization, Euclidean distance matrix completions, and graph realization[END_REF] and references therein, whereas, polynomial methods are described in [START_REF] Shames | Polynomial methods in noisy network localization[END_REF].
UAK is with the Department of Electrical and Computer Engineering at Tufts University, Medford, MA 02155, USA, khan@ece.tufts.edu.
His work is supported by an NSF Career award # CCF-1350264.
AK is with Laboratoire Ampère, École Centrale de Lyon, 69134 Ecully Cedex, France, anton.korniienko@ec-lyon.fr. His work is supported by a grant from la Région Rhône-Alpes. KHJ is with the KTH ACCESS Linnaeus Center, School of Electrical Engineering, Royal Institute of Technology (KTH), Stockholm, Sweden, kallej@ee.kth.se. His work is supported by the Knut and Alice Wallenberg Foundation and the Swedish Research Council. Distributed localization algorithms can be characterized into two classes: multilateration and successive refinements. In multilateration algorithms, [START_REF] Savvides | The bits and flops of the n-hop multilateration primitive for node localization problems[END_REF], [START_REF] Nagpal | Organizing a global coordinate system from local information on an ad-hoc sensor network[END_REF], each sensor estimates its distance from the anchors and then calculates its location via trilateration; multilateration implies that the distance computation may require a multi-hop communication. Distributed multidimensional scaling is presented in [START_REF] Costa | Distributed weightedmultidimensional scaling for node localization in sensor networks[END_REF]. Successive refinement algorithms that perform an iterative minimization of a cost function are presented in, e.g., [START_REF] Albowicz | Recursive position estimation in sensor networks[END_REF], which discusses an iterative scheme where they assume 5% of the nodes as anchors. Reference [START_REF] Čapkun | GPS-free positioning in mobile ad-hoc networks[END_REF] discusses a Self-Positioning Algorithm (SPA) that provides a GPS-free positioning and builds a relative coordinate system. Other related work also consists of graph-theoretic approaches [START_REF] Fang | Sequential localization of sensor networks[END_REF], [START_REF] Deghat | Distributed localization via barycentric coordinates: Finite-time convergence[END_REF], and probabilistic methods, [START_REF] Ihler | Nonparametric belief propagation for self-calibration in sensor networks[END_REF], [START_REF] Thrun | Probabilistic robotics[END_REF].
Of significant relevance to this paper is Ref. [START_REF] Khan | Distributed sensor localization in random environments using minimal number of anchor nodes[END_REF], which describes a discrete-time algorithm, named DILOC, assuming a global convexity condition, i.e., each sensor lies in the convex hull of at least m + 1 anchors in R^m. A sensor may find its location as a linear-convex combination of the anchors, where the coefficients are the barycentric coordinates; attributed to August F. Möbius, [23]. However, this representation may not be practical as it requires long-distance communication to the anchors. To overcome this issue, each sensor finds m + 1 neighbors, its triangulation set, such that it lies in their convex hull and iterates on its location as a barycentric-based representation of only the neighbors. Assuming that each sensor can find a triangulation set, DILOC converges to the true sensor locations. Ref. [START_REF] Khan | Distributed sensor localization in random environments using minimal number of anchor nodes[END_REF] analyzes the convergence and provides tests for finding triangulation sets with high probability in a small radius.
In this paper, we provide a continuous-time analog of DILOC-DT in [START_REF] Khan | Distributed sensor localization in random environments using minimal number of anchor nodes[END_REF], that we call DILOC-CT, and show that by using a proportional controller in each sensor's location estimator, the convergence speed can be increased arbitrarily. Since this increase may come at the price of unwanted transients especially when there is disturbance introduced by the network, we replace the proportional gain with a dynamic controller that guarantees certain performance objectives. We thus consider disturbance in the communication network that is incurred as zero-mean additive noise in the information exchange. In this context, we study the disturbance rejection properties of the local controllers and tune them to withstand the disturbance while ensuring some performance objectives. Our approach is based on H ∞ design principles and uses the input-output approach, see e.g., [START_REF] Moylan | Stability criteria for large-scale systems[END_REF]- [START_REF] Korniienko | Performance control for interconnection of identical systems: Application to pll network design[END_REF].
Notation: The superscript, 'T', denotes the real matrix transpose, while the superscript, '*', denotes the complex-conjugate transpose. The N × N identity matrix is denoted by I_N and the n × m zero matrix is denoted by 0_{n×m}. The dimension of the identity or zero matrix is omitted when it is clear from the context. The diagonal aggregation of two matrices A and B is denoted by diag(A, B). The Kronecker product, denoted by ⊗, between two matrices, A and B, is defined as A ⊗ B = [a_{ij} B]. We use T_{x→y}(s) to denote the transfer function between an input, x(t), and an output, y(t). With a matrix, G, partitioned into four blocks, G_{11}, G_{12}, G_{21}, G_{22}, the notation G ⋆ K denotes the Redheffer (star) product, [START_REF] Doyle | Review of LFT's, LMI's and µ[END_REF], i.e.,
G ⋆ K = G_{11} + G_{12} K (I − G_{22} K)^{-1} G_{21}.
Similarly,
K ⋆ G = G_{22} + G_{21} K (I − G_{11} K)^{-1} G_{12}.
For a stable LTI system, G, ‖G‖_∞ denotes the H∞ norm of G. For a complex matrix, P: σ(P) denotes its maximal singular value; λ_i(P) denotes the i-th eigenvalue; and ρ(P) denotes its spectral radius. Finally, the symbols '≥' and '>' denote positive semi-definiteness and positive-definiteness of a matrix, respectively. We now describe the rest of the paper. Section II describes the problem, recaps DILOC-DT, and introduces the continuous-time analog, DILOC-CT. Section III derives the convergence of DILOC-CT with a proportional gain and investigates disturbance-rejection. In Section IV, we describe dynamic controller design using the H∞ theory according to certain objectives that ensure disturbance rejection. Section V illustrates the concepts and Section VI concludes the paper.
II. PRELIMINARIES AND PROBLEM FORMULATION
Consider a network of M sensors, in the index set Ω, with unknown locations, and N anchors, in the index set κ, with known locations, all located in R m , m ≥ 1; let Θ = Ω∪κ be the set of all nodes. Let x i * ∈ R m denote the true location of the ith sensor, i ∈ Ω; similarly, u j ∈ R m , j ∈ κ, is the true location of the anchors. We assume that each sensor is able to compute its distances to the nearby nodes (sensors and/or anchors) by using, e.g., the Received Signal Strength (RSS) or camera-based methods, [START_REF] Moses | A self-localization method for wireless sensor networks[END_REF], [START_REF] Kim | Localization of mobile robot based on fusion of artificial landmark and RF TDOA distance under indoor sensor network[END_REF]. The problem is to find the locations of the sensors in Ω. Below, we describe DILOC, which was originally introduced in [START_REF] Khan | Distributed sensor localization in random environments using minimal number of anchor nodes[END_REF].
A. DILOC-DT
DILOC-DT assumes that each sensor lies in the convex hull, denoted by C(κ), of the anchors. Using only the internode distances, each sensor, say i, finds a triangulation set, Θ i , of m+1 neighbors such that i ∈ C(Θ i ),
|Θ_i| = m+1. A convex hull inclusion test is given by: i ∈ C(Θ_i) if Σ_{j∈Θ_i} A_{Θ_i ∪ {i} \ j} = A_{Θ_i},
where A Θi denotes the m-dimensional volume, area in R 2 or volume in R 3 , of C(Θ i ), see Fig. 2 in R 2 , and can be computed by Cayley-Menger determinant, [START_REF] Khan | Distributed sensor localization in random environments using minimal number of anchor nodes[END_REF], [START_REF] Manfred | Cayley-menger coordinates[END_REF], using only the pairwise distances of the nodes in {i, Θ i }.
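As an illustration of this test in R², the following Python sketch (ours) computes the triangle areas from pairwise distances — Heron's formula being the two-dimensional instance of the Cayley-Menger determinant — and checks the inclusion of a node in the convex hull of a candidate triangulation set; node labels and the tolerance are our choices.

# Python sketch (ours, m = 2): triangle areas from distances and the inclusion test.
from math import sqrt

def area(dab, dac, dbc):
    # Heron's formula: triangle area from its three side lengths
    s = 0.5 * (dab + dac + dbc)
    return sqrt(max(s * (s - dab) * (s - dac) * (s - dbc), 0.0))

def in_hull(d, i, hull, tol=1e-9):
    # d[u][v]: pairwise distances; i is inside C(hull) iff the three sub-triangle
    # areas obtained by replacing one hull node with i sum up to the hull area
    a, b, c = hull
    big = area(d[a][b], d[a][c], d[b][c])
    sub = (area(d[i][b], d[i][c], d[b][c]) + area(d[a][i], d[a][c], d[i][c])
           + area(d[a][b], d[a][i], d[b][i]))
    return abs(sub - big) <= tol * max(big, 1.0)

pts = {1: (0, 0), 2: (4, 0), 3: (0, 4), 'in': (1, 1), 'out': (5, 5)}
d = {u: {v: sqrt((pts[u][0] - pts[v][0]) ** 2 + (pts[u][1] - pts[v][1]) ** 2)
         for v in pts} for u in pts}
print(in_hull(d, 'in', (1, 2, 3)), in_hull(d, 'out', (1, 2, 3)))   # True False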
Fig. 2. A sensor i inside the triangle formed by nodes 1, 2, and 3; the sub-areas A_{i23}, A_{i13}, and A_{i12} partition the triangle.
Given a triangulation set, Θ_i, for every i ∈ Ω, each sensor updates its location estimate, x_k^i, as follows:
x_{k+1}^i = \sum_{j \in \Theta_i \cap \Omega} \underbrace{\frac{A_{\Theta_i \cup \{i\} \setminus j}}{A_{\Theta_i}}}_{p_{ij}} x_k^j + \sum_{j \in \Theta_i \cap \kappa} \underbrace{\frac{A_{\Theta_i \cup \{i\} \setminus j}}{A_{\Theta_i}}}_{b_{ij}} u^j,   (1)
where k is discrete-time, the coefficients, p ij 's and b ij 's, are the barycentric coordinates that are positive and sum to 1. Let x k ∈ R M ×m be the vector of location estimates, x i k , i ∈ Ω; similarly, u for anchors; DILOC-DT is given by
x k+1 = P x k + Bu, (2)
where P = {p ij } is an M ×M matrix of the sensor-to-sensor barycentric coordinates; similarly, B = {b ij } is M × N . The following result is from [START_REF] Khan | Distributed sensor localization in random environments using minimal number of anchor nodes[END_REF].
Lemma 1: Let |κ| ≥ m+1 and i ∈ C(κ), ∀i ∈ Ω. Assume non-trivial configurations, i.e., A_κ ≠ 0, A_{Θ_i} ≠ 0, ∀i ∈ Ω.
Then, DILOC-DT, Eq. ( 2), is such that ρ(P ) < 1 and
lim k→∞ x k = (I -P ) -1 Bu = x * , (3)
where x * is the vector of true sensor locations.
Clearly, the proof relies on the fact that ρ(P ) < 1, which can be shown with the help of an absorbing Markov chain analogy1 , see [START_REF] Khan | Distributed sensor localization in random environments using minimal number of anchor nodes[END_REF]. In particular, each transient state (sensor) has a path (possibly over multiple links) to each absorbing state (anchor); subsequently, the (transient) state-transition matrix, P , is such that ρ(P ) < 1. That (I -P ) -1 Bu is the desired steady-state can be verified as the true sensor locations follow: x * = P x * + Bu.
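A minimal numerical illustration of DILOC-DT (our own toy example in R² with two sensors and three anchors; the positions and triangulation sets are chosen by us so that the convexity assumptions hold):

# Python/NumPy sketch (ours): DILOC-DT on a toy network in R^2.
import numpy as np

def bary(x, tri):
    # barycentric coordinates of point x with respect to the triangle tri = [t1, t2, t3]
    T = np.column_stack((tri[0] - tri[2], tri[1] - tri[2]))
    l12 = np.linalg.solve(T, x - tri[2])
    return np.array([l12[0], l12[1], 1.0 - l12.sum()])

u = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])          # anchor locations
xs = np.array([[1.0, 0.8], [2.0, 2.0]])                     # true (unknown) sensor locations
w1 = bary(xs[0], [u[0], u[1], xs[1]])                       # sensor 1 uses {a1, a2, sensor 2}
w2 = bary(xs[1], [u[1], u[2], xs[0]])                       # sensor 2 uses {a2, a3, sensor 1}
P = np.array([[0.0, w1[2]], [w2[2], 0.0]])                  # sensor-to-sensor weights
B = np.array([[w1[0], w1[1], 0.0], [0.0, w2[0], w2[1]]])    # sensor-to-anchor weights

x = np.zeros((2, 2))                                        # arbitrary initial guess
for _ in range(200):
    x = P @ x + B @ u                                       # DILOC-DT iteration, Eq. (2)
print(np.round(x, 4))                                       # recovers the true locations xs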
B. DILOC-CT
We now provide DILOC-CT, a continuous-time analog to DILOC-DT. To this aim, let x i (t) denote the m-dimensional row-vector of sensor i's location estimate, where t ≥ 0 is the continuous-time variable. Since the anchors' locations are known, we have u j (t) = u j , ∀t ≥ 0, j ∈ κ. Borrowing notation from Section II-A, DILOC-CT is given by, where (t) is dropped in the sequel for convenience:
ẋ^i = −x^i + r^i;   r^i ≜ Σ_{j∈Θ_i∩Ω} p_{ij} x^j + Σ_{j∈Θ_i∩κ} b_{ij} u^j,   (4)
and Θ_i is the triangulation set at sensor i. Note that Θ_i may not contain any anchor, in which case |Θ_i ∩ Ω| = m + 1 and |Θ_i ∩ κ| = 0.
In other words, a sensor may not have any anchor as a neighbor and the barycentric coordinates are assigned to the neighboring sensors, with unknown locations.
In order to improve the convergence rate, we may add a proportional gain, α ∈ R, in the feed-forward loop of each sensor's location estimator, denoted by an identical system, T s , at each sensor:
T s : ẋi = α(-x i + r i ). (5)
We now assume that the received signal at each sensor incurs a zero-mean additive disturbance, z i (t), whose frequency spectrum lies in the interval, [ω - z , ω + z ]. With this disturbance, the location estimator is given by
ẋi = α(-x i + r i + z i ). (6)
Here, z i (t) can be thought of as communication noise that effects the information exchange. Note that Eq. ( 4) is special case of Eq. ( 6) with α = 1 and z = 0. Finally, we replace the proportional gain, α, with a local controller, K(s). The overall architecture is depicted in Fig. 3, where we separate the desired signal, r i , and the disturbance, z i , as two distinct inputs to each T s .
Fig. 3. DILOC-CT architecture: each sensor's local loop, T_s, consists of the controller K(s) followed by the integrator 1/s; the network supplies the reference r_i(t) and the disturbance z_i(t), and the loop produces the tracking error ε_i(t), the control signal y_i(t), and the location estimate x_i(t).
The contributions of this paper are as follows: First, we show that DILOC-CT converges to the true sensor locations and characterize the range of gains that ensures this convergence. We then study the disturbance rejection properties of the proportional gain. This analysis is provided in Section III. Second, we note that an arbitrary high proportional gain may result in unwanted transients and disturbance amplification. In order to add design flexibility and guarantee certain performance objectives, we replace the proportional gain, α, with local controllers, K(s). We use H ∞ design procedure to derive these local controllers meeting global objectives. This analysis is carried out in Section IV. Finally, we study the disturbance rejection properties of the local controllers with the help of the H ∞ design in Sections IV and V.
III. DILOC-CT WITH PROPORTIONAL GAIN
We now analyze the convergence properties of the proportional gain controller without disturbance in Eq. ( 5). Let x(t) collect the location estimates at the sensors and let u collect the true locations of the anchors. Borrowing notation from Section II-A, we use the matrices, P and B, to denote the corresponding barycentric coordinates. Eq. ( 5) can be equivalently written in the following matrix form:
ẋ = −α(I − P)x + αBu ≜ P_α x + B_α u.   (7)
We have the following result.
Lemma 2: If ρ(P) < 1, then ℜ{λ_i(P_α)} < 0, ∀i, α > 0.
Proof: Let λ_i(P) = a_i + √−1 b_i for some a_i, b_i ∈ R, and note that |a_i| < 1, ∀i, since ρ(P) < 1; then
λ_i(I − P) = 1 − a_i − √−1 b_i,   (8)
and the lemma follows for any α > 0.
The following theorem studies the DILOC-CT convergence. Theorem 1: DILOC-CT, Eq. ( 7), converges to the true sensor locations, x * , for all α > 0, i.e.,
lim t→∞ x(t) = (I -P ) -1 Bu = x * .
(9)
Proof: From Lemma 2, we have ℜ{λ_i(P_α)} < 0, ∀i. Starting from Eq. (7), we get
x(t) = e^{P_α t} x(0) + ∫_0^t e^{P_α (t−τ)} B_α u dτ,
and, since P_α is Hurwitz, x(t) → −P_α^{-1} B_α u = (I − P)^{-1} B u = x*, as t → ∞.
That the convergence speed is exponential in α > 0 can also be easily verified. In fact, the real part of the eigenvalues of P_α moves further into the left-half plane as α increases. However, in the presence of network-based disturbances, z_i(t)'s, an arbitrarily large α also amplifies the disturbance. To guarantee certain performance objectives, a natural extension is to replace the proportional (static) gain with a dynamic controller, K(s). We study this scenario using H∞ design in Section IV.
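A quick numerical check of this behavior (our own sketch; it reuses the rounded barycentric matrices of the previous example and a simple forward-Euler integration, which is only an approximation of the continuous-time dynamics):

# Python/NumPy sketch (ours): Eq. (7) integrated for several gains alpha.
import numpy as np

P = np.array([[0.0, 0.4], [0.4762, 0.0]])                   # rounded weights, rho(P) < 1
B = np.array([[0.5667, 0.0333, 0.0], [0.0, 0.2540, 0.2698]])
u = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])
x_star = np.linalg.solve(np.eye(2) - P, B @ u)              # steady state (I - P)^-1 B u

dt, T = 1e-3, 10.0
for alpha in (1.0, 5.0, 20.0):
    x = np.zeros((2, 2))
    for _ in range(int(T / dt)):
        x += dt * alpha * (-(np.eye(2) - P) @ x + B @ u)    # x_dot = P_alpha x + B_alpha u
    print(alpha, np.linalg.norm(x - x_star))                # the error shrinks faster as alpha grows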
In the following, we analyze the disturbance rejection with the proportional controller. To proceed, we use the fact that DILOC-CT is decoupled in the coordinates. Hence, each local system, T_s, see Eq. (5) and Fig. 3, can be analyzed per coordinate and the same analysis can be extended to the other coordinates. Recall that the network location estimate, x(t), is an M × m matrix where each column is associated to a location coordinate in R^m. We let x be an arbitrarily chosen column of x corresponding to one chosen dimension. Similarly, we let z, r, ε, and y (signals in Fig. 3) represent M-dimensional vectors; for any of such vectors, the subscript i denotes the chosen coordinate at sensor i.
A. Rejection vs. Localization tradeoff, M = 1
We first consider the simplest case of DILOC-CT in R², with static controller, K(s) = α, N = 3 anchors and one sensor, M = 1, see Fig. 2 (Left). In this case, the dynamics of the overall network is equivalent to the dynamics, T_s in Eq. (5), of one sensor and a constant input r_i defined by Eq. (4) with p_{ij} = 0. Since the H∞ design approach used in the next sections is frequency-based, the localization performance will be expressed thereafter in the frequency domain. Since P = 0, we have r_i = Σ_j b_{ij} u^j, where u^j is the true location of the anchors, and the performance (steady-state error, convergence speed) is defined by the tracking performance of the dynamics, T_s. Let us define a transfer function, S(s), between the reference input, r_i, and the tracking error, defined as
ε_i^{ref} ≜ x_i^* − x_i, which for K(s) = α is
S(s) = (1 + K(s)·(1/s))^{-1} = s (s + α)^{-1}.   (10)
For all α > 0, S(s) is stable with a zero at the origin, implying 0 steady-state error for constant inputs. It is important to note that the cutoff frequency, ω_S, for which |S(jω_S)| = √2/2, is ω_S = α. The magnitude of S is close to zero in the Low Frequency (LF) range (ω ≪ ω_S) and approaches 1 in the High Frequency (HF) range (ω ≫ ω_S). Increasing α increases the cutoff frequency of S, implying an increase in the convergence speed, which follows Theorem 1.
To proceed with the subsequent analysis, note that
T_{z_i→x_i}(s) = T_{r_i→x_i}(s) = α/(s + α) = 1 − S(s) ≜ T(s),   T_{z_i→y_i}(s) = T_{r_i→y_i}(s) = αs/(s + α) = K(s)S(s) ≜ KS(s).   (11)
The cutoff frequency, ω T , of T (s) is equal to the cutoff frequency, ω S , of S(s). Increasing α increases both ω T and ω S , and thus the bandwidth of T (s). Since the magnitude of T (s) is equal to 1 in the LF range and decreases in the HF range, increasing α implies the transmission of a broader frequency range of disturbance, z i , on the output, x i . It is therefore not possible to increase the convergence speed and disturbance rejection at the same time. This is also true for the dynamic controller, K(s); we may, however, impose a larger slope of magnitude decay in T (s). Let us now consider the transfer function, KS(s). In the HF range, |KS(s)| = α, and thus an increase in α amplifies the disturbance. A logical extension is to consider a dynamic controller, K(s), which is frequency-dependent such that it has a high gain in the LF range, for a good tracking performance; and a low gain in the HF range for a better disturbance rejection.
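These frequency-domain trade-offs are easy to visualize numerically; the sketch below (ours) evaluates |S|, |T|, and |KS| on a frequency grid for several proportional gains.

# Python/NumPy sketch (ours): magnitudes of S, T = 1 - S and KS for K(s) = alpha.
import numpy as np

w = np.logspace(-2, 4, 400)                      # frequency grid (rad/s)
for alpha in (1.0, 10.0, 100.0):
    s = 1j * w
    S = s / (s + alpha)                          # sensitivity, Eq. (10)
    T = alpha / (s + alpha)                      # T = 1 - S
    KS = alpha * S                               # K(s) S(s)
    wc = w[np.argmin(np.abs(np.abs(S) - np.sqrt(2) / 2))]
    print(alpha, wc, abs(KS[-1]))                # cutoff of S is ~alpha; |KS| tends to alpha in HF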
B. Rejection vs. Localization tradeoff, M > 1
To study the general case, let us define the global transfer function, S^g_{u→ε^{ref}}, as an M × N matrix between the input, u (the anchor positions), and the network tracking error, defined by ε^{ref} ≜ x* − x. Note that we can write the input as u = ‖u‖ (u/‖u‖), where ‖u‖ is the Euclidean norm of u. In this case, the tracking performance of the network can be evaluated by the M × 1 transfer function:
S^g_{‖u‖→ε^{ref}} = S^g_{u→ε^{ref}} (u/‖u‖) ≜ S^g,   (12)
whose j-th component, S^g_j, is the transfer function between the constant, ‖u‖, and the j-th sensor's tracking error, ε^{ref}_j. Let us define additional global transfer functions as follows:
T_{z→x}(s) ≜ T^g(s),   T_{z→y}(s) ≜ KS^g(s).   (13)
The transfer functions T g and KS g are M × M matrices, components of which, T g ij and KS g ij , represent transfer functions between j-th sensor disturbance, z j , and i-th sensor location estimate, x i , and control signals, y i , respectively.
In general, the components of S^g, T^g, and KS^g are different from the local dynamics, S, T, and KS. However, increasing α has the same consequences as discussed before in the simple case. We illustrate this numerically in Fig. 4, which shows the maximal singular value, σ(·), of the frequency responses of S^g, T^g, and KS^g for a network of M = 20 sensors with P = 0 and for α varying from 1 to 10^4. Briefly, σ(·) is a generalization of gain for MIMO systems, [START_REF] Skogestad | Multivariable Feedback Control, Analysis and Design[END_REF], and its maximum value for a given frequency is the maximum amplification between the Euclidean norms of the input-output vectors (over all directions of the applied input vector). We observe that, similar to the local case, M = 1, an increase of α increases the convergence speed but also increases the HF disturbance amplification. As in the case with M = 1, in order to reduce the HF disturbance amplification, it is possible to use a dynamic controller, K(s), that decreases the maximum HF singular value of T^g and KS^g. However, the design of such a controller in the general case of M > 1 is a non-trivial problem and could result in poor performance (low speed, high oscillations) and global system instability for some choices of K(s). In fact, it is a special case of the decentralized control problem, which is proved to be NP-hard even in the LTI case, [START_REF] Blondel | NP-hardness of some linear control design problems[END_REF]. However, since the sensors are identical, it is possible to link the global network to the local dynamics and then perform a local design by the traditional H∞ approach. This method is proposed in [START_REF] Korniienko | Control law design for distributed multi-agent systems[END_REF], [START_REF] Korniienko | Performance control for interconnection of identical systems: Application to pll network design[END_REF] and is applied, with some changes, to DILOC-CT in the next section.
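For completeness, here is a numerical sketch of the global disturbance transfer (ours): from Eq. (6), each coordinate satisfies x = T(s)(Px + Bu + z) with T(s) = α/(s + α), so T^g(jω) = (I_M − T(jω)P)^{-1} T(jω). We evaluate its maximal singular value for a randomly generated sub-stochastic P; note that the paper's Fig. 4 uses P = 0, so the random P here is only our illustrative choice.

# Python/NumPy sketch (ours): maximal singular value of T^g(jw) over frequency.
import numpy as np

rng = np.random.default_rng(0)
M = 20
P = rng.random((M, M))
P *= 0.8 / P.sum(axis=1, keepdims=True)          # nonnegative rows summing to 0.8, so rho(P) < 1
w = np.logspace(-1, 4, 300)
for alpha in (1.0, 100.0):
    sig = []
    for wk in w:
        t = alpha / (1j * wk + alpha)            # local T(jw) for K(s) = alpha
        Tg = np.linalg.inv(np.eye(M) - t * P) * t
        sig.append(np.linalg.svd(Tg, compute_uv=False)[0])
    k = int(np.argmin(np.abs(w - 50.0)))
    print(alpha, round(sig[0], 2), round(sig[k], 3))
# same LF gain for both, but at 50 rad/s the disturbance is strongly attenuated for
# alpha = 1 and barely attenuated for alpha = 100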
IV. DILOC-CT: H ∞ DESIGN
We now consider DILOC-CT introduced earlier in Section II-B with local controllers. We assume that the barycentric matrices, P and B, are given. Our objective is to design identical (local) controllers, K(s), to achieve, besides the global system stability, certain performance objectives, summarized in Table I. We note that the location estimator at each sensor is identical, and we define the following global system description for each coordinate, recall Eq. (11):
x = (I_M ⊗ T(s)) r_z,   (14)
where r_z = r + z = P x + B_u u_0 + z, with u_0 = ‖u‖ and B_u = B u/u_0. Next note that ε^{ref} = x* − x = (I_M − P)^{-1} B_u u_0 − x, and ε = r_z − x, from Fig. 3. We have the following relation:
\begin{bmatrix} r_z \\ \varepsilon^{ref} \\ \varepsilon \end{bmatrix} = \underbrace{\begin{bmatrix} P & B_u & I_M \\ -I_M & (I_M - P)^{-1} B_u & 0 \\ P - I_M & B_u & I_M \end{bmatrix}}_{H} \begin{bmatrix} x \\ u_0 \\ z \end{bmatrix}.   (15)
The local transfer function, T(s), identical at each sensor, is
T(s) = (K(s)·(1/s)) / (1 + K(s)·(1/s)).   (16)
Given the representation in Eqs. ( 14) and ( 15), we have
T_{[u_0; z] → [ε^{ref}; ε]} = (I_M ⊗ T(s)) ⋆ H,   (17)
and T u0→ε ref (s) = S g (s), see Eq. ( 12). Furthermore, T g (s) and KS g (s) can be written in terms of T z→ε (s) as
T^g(s) = (K(s)/s) T_{z→ε}(s),   KS^g(s) = K(s) T_{z→ε}(s).   (18)
We formulate the following control design problem: Problem 1 (Control problem): Given the global system in Eqs. ( 14) and ( 15), find the local controller, K(s), such that the global system is stable and satisfies the following frequency constraints:
σ (S g (jω)) ≤ Ω S (ω) , in LF range, σ (T g (jω)) ≤ Ω T (ω) , in HF range, σ (KS g (jω)) ≤ Ω KS (ω) , in HF range. ( 19
)
We now briefly explain the frequency constraints. The first constraint, Ω S , ensures zero steady-state error and provides a handle on the speed of convergence. The second constraint, Ω T , imposes a maximum bandwidth on T g , which, in turn, limits the disturbance amplification in high-frequency. The last constraint, Ω KS , reduces the amplification of noise on the local input, y, to each sensor's local dynamics in high-frequency. Specifics on these constraints are tabulated in Table I and are further elaborated in Section V. In order to solve the above control problem, we will use the well-known input-output approach, which was introduced to deal with interconnected systems, see [START_REF] Moylan | Stability criteria for large-scale systems[END_REF]- [START_REF] Korniienko | Performance control for interconnection of identical systems: Application to pll network design[END_REF] for details.
A. Input-output Approach
We now describe the input-output approach over which we will formulate the DILOC-CT controller design problem. We use the concept of dissipitavity taken from [START_REF] Moylan | Stability criteria for large-scale systems[END_REF]- [START_REF] Korniienko | Performance control for interconnection of identical systems: Application to pll network design[END_REF], a simplified version of which is defined below.
Definition 1 (Dissipativity): An LTI, stable, and causal operator, H, is strictly {X, Y, Z}-dissipative, where X = X T , Y, Z = Z T , are real matrices such that
X Y Y T Z is full-rank; if ∃ ε > 0 such that for almost all ω > 0 I H(jω) * X Y Y T Z I H(jω) ≤ -εI. (20)
If the inequality in Eq. ( 20) is satisfied with ε = 0, the operator is said to be {X, Y, Z}-dissipative.
Consider a large-scale system represented as an interconnection, H, of identical subsystems, T s :
p = (I ⊗ T s ) (q) , q z = H p w , (21)
where
H = H11 H12 H21 H22
is a finite-dimensional, stable LTI system, T s = G K, w(t) is the input vector, z(t) is the output vector, and q(t), p(t), are internal signals. The LTI systems, G and K, are finitedimensional and are referred to as the local plant and controller. The global transfer function between external input, w, and output, z, is
T w→ z = (I ⊗ T s ) H,
and its H ∞ norm is ensured by the local controller, K, by the following theorem.
Theorem 2: Given η > 0, a stable LTI system, H, a local plant, G, and real matrices,
X = X T ≥ 0, Y , Z = Z T , if there exist (i) a positive-definite matrix, Q, such that H is {diag(Q⊗X, -η 2 I), diag (Q ⊗ Y, 0) , diag (Q ⊗ Z, I)}- dissipative, and (ii) a local controller, K, such that T s = G K is strictly {-Z, -Y T , -X}-dissipative,
then the local controller, K, ensures that the global system, (I Ns ⊗ T s ) H, is stable and
(I ⊗ T s ) H ∞ ≤ η. (22)
The proof of Theorem 2 can be found in [START_REF] Korniienko | Control law design for distributed multi-agent systems[END_REF], [START_REF] Korniienko | Performance control for interconnection of identical systems: Application to pll network design[END_REF], and it relies on a version of the graph-separation theorem used in [START_REF] Moylan | Stability criteria for large-scale systems[END_REF] for global stability and an S-procedure, [START_REF] Yakubovich | The S-procedure in non-linear control theory[END_REF], for global performance. It can also be seen as a generalization of the Kalman-Yakubovich-Popov lemma, [START_REF] Rantzer | On the Kalman-Yakubovich-Popov lemma[END_REF].
B. Local Control for Global Performance
Note that based on the properties of the H ∞ norm:
T 11 T 12 T 21 T 22 ∞ ≤ η ⇒ T 11 ∞ ≤ η, T 22 ∞ ≤ η, ⇔ σ (T 11 (jω)) ≤ η σ (T 22 (jω)) ≤ η , ∀ω ∈ R + , (23)
Theorem 2 is applicable to the system in Eqs. ( 14) and ( 15) to find a controller, K, ensuring a global bound, η, on the H ∞ norm of, in this case, T 11 = S g and T 22 = T z→ε . However, such imposed constraints are frequency-independent. We now present a result allowing to impose frequency-dependent bounds, Eq. ( 19), constructed with the help of local transfer functions, KS, S, or T , and constant gains. Consider the following augmented localization system:
x x = (I M +1 ⊗ T (s)) r r , r r ε ref ε = l 0 g1l 0 lBu P g1lBu g3I g -1 1 lP -g -1 1 I lP 0 g2lBu g2(P -I M ) g2g1lBu g2g3I H x x u z , (24)
with real positive scalars, g 1 , g 2 , g 3 , l = 1 1+β , 0 < β 1, P = (I M -P )
-1 B u , and one additional local sensor dynamics, T (s), with additional input, r, and output, x.
The main result of this paper is now provided in the following theorem that solves Problem 1 following a similar argument as in Section IV-A.
Theorem 3 (Control Design): Given η > 0, the system described in Eq. ( 24), and real scalars, X ≥ 0, Y, Z ≤ 0, if there exists a positive-definite matrix, Proof: Let us define weighted version of input-output signals of the original system in Eq. ( 15) as:
Q ∈ R (M +1)×(M +1) , such that (i) H is diag XQ, -η 2 I , diag (Y Q, 0), diag (ZQ, I)}-
ε ref = g -1 1 ε ref , ε = g 2 ε, z = g -1 3 z, u = g -1 1 (S + β) u 0 .
Based on this notation, one can define the following relation:
T [ u z ]→ ε ref ε = W 1 T [ u0 z ]→ ε ref ε W 2 , with 4 W 1 = g -1 1 0 0 g2I M and W 2 = g1(S(s)+β) -1 0 0 g3I M .
Since S(s) = T (s) -1, the matrix transfer function, W 2 , can be represented in the form of an interconnection (LFT) of one system T (s):
W 2 = T (s) H W , with H W = l g 1 l 0 l g 1 l 0 0 0 g 3 .
The global transfer function, Eq. ( 15), is an LFT of M systems, T (s), and is defined by
T [ u0 z ]→ ε ref ε = (T (s)I M ) H. The augmented system, T [ u z ]→ ε ref ε = (I M +1 ⊗ T (s)) H, is an LFT of M + 1 systems, T (s), representing a series connection of W 1 T [ u0 z ]→ ε ref ε
and W 2 , and is given in Eq. [START_REF] Moylan | Stability criteria for large-scale systems[END_REF]. The corresponding expression of H is computed based on the LFT algebra, see Section 2.4 in [START_REF] Doyle | Review of LFT's, LMI's and µ[END_REF], . Note that the first two conditions of Theorem 3 correspond to the two conditions of Theorem 2. Applying Theorem 2, a controller that ensures second condition of Theorem 3, therefore, ensures the global transfer function bound:
(T (s)I M +1 ) H ∞ = W 1 T [ u0 z ]→ ε ref ε W 2 ∞ ≤ η.
Using Eq. ( 23), the last inequality implies ∀ω:
σ (T u0→ε ref (jω)) ≤ η |S (jω) + β| , σ (T z→ε (jω)) ≤ η (g 2 g 3 ) -1 . (25)
For frequency range where β can be neglected compared to |S (jω)|, condition (iii) of the Theorem 3 ensures the first condition of Problem 1. Note that in the HF range:
|KS (jω)| = |K (jω)| 1 -j ω K (jω) ≈ |K (jω)| .
Therefore, together with Eq. ( 18), the condition (iv) implies the second and third conditions of the Problem 1.
Remark 1: Theorem 3 can be applied to efficiently design a controller, K, that solves Problem 1, if X, Y , and Z, are fixed. In this case, the first condition of Theorem 3 is a Linear Matrix Inequality (LMI) with respect to the decision variables, η and Q, and thus, convex optimization can be applied to find the smallest η such that it is satisfied. The conditions, (ii)-(iv), are ensured by the local traditional H ∞ design. However, if X, Y , and Z, are the decision variables, the underlying optimization becomes bilinear. In this case, a quasi-convex optimization problem for finding X, Y , and Z, that satisfy the condition (i) and relaxes the condition (ii) of Theorem 3 is proposed in [START_REF] Korniienko | Control law design for distributed multi-agent systems[END_REF], [START_REF] Korniienko | Performance control for interconnection of identical systems: Application to pll network design[END_REF] and is used thereafter.
Remark 2: The weighting filters, W 1 and W 2 , are used to impose frequency-dependent bounds on the global transfer function magnitudes, Eq. ( 19), in a relative fashion, i.e., global performance in Eq. ( 19) is ensured by the local system performance, see conditions (iii)-(iv) of Theorem 3. The reason for using different gains, g i , is to impose the constraint on diagonal blocks, T u0→ε ref and T z→ε , while reducing this constraint on the cross transfer functions, T u0→ε and T z→ε ref , if such constraints are not needed from application point of view. ] rad/sec. To reduce the contribution of this noise on the location estimate, x, and the control command, y, i.e., the input to the integrator in the local dynamics, T s , see Fig. 3, while ensuring the imposed tracking performance, see Table I, we add the frequency constraints, Ω S (ω), Ω T (ω), Ω KS (ω) in Problem 1, shown as red dotted lines in Fig. 6. We define the augmented system, Eq. ( 24), with g 1 = 1, g 2 = 14, g 3 = 1.7 and β = 10 -3 . Using the quasi-convex optimization problem proposed in [START_REF] Korniienko | Control law design for distributed multi-agent systems[END_REF], [START_REF] Korniienko | Performance control for interconnection of identical systems: Application to pll network design[END_REF], the sensor dissipativity characterization is defined by X = -4.99, Y = 1.99, and Z = 1. First condition of Theorem 3 is ensured with minimum η = 48.85 by convex LMI optimization. The local controller, K(s), is then computed using standard H ∞ design [START_REF] Skogestad | Multivariable Feedback Control, Analysis and Design[END_REF] to ensure conditions (ii)-(iv) of Theorem 3:
K(s) =
3.71 • 10 9 (s + 2094)(s + 6712) . As expected, the local controller is a low-pass filter with high gain, G K ≈ 264, in the LF range, and the negative slope (-40 dB/dec) in the HF range. According to the Theorem 3, the designed controller solves the Problem 1, i.e., it ensures the frequency constraints, Eq. ( 19), as can be verified in Fig. 6. Furthermore, the performance of this controller is compared to the static (proportional) gain with α = 264, which ensures the same convergence speed, see red dashed lines in Fig. 6. It is interesting to note that the LF gain of the dynamic controller is the same as with the static gain, α = 264; however, in the HF range, the dynamic controller allows to significantly reduce the maximum singular values of T g and KS g , which subsequently results in disturbance reduction. All these frequency domain observations are confirmed by temporal simulations presented in Fig. 7 where mean estimation error, ε ref , and command signals, y i 's, are presented for both cases.
VI. CONCLUSIONS
In this paper, we describe a continuous-time LTI algorithm, DILOC-CT, to solve the sensor localization problem in R m with at least m + 1 anchors who know their locations. Assuming that each sensor lies in the convex hull of the anchors, we show that DILOC-CT converges to the true sensor locations (when there is no disturbance) and the convergence speed can be increased arbitrarily by using a proportional gain. Since high gain results into unwanted transients, large input to each sensors internal integrator, and amplification of network-based disturbance, e.g., communication noise; Note the high values of the control command, y i (t)'s, with the proportional controller, that could overexcite the local system, integrator, at each sensor.
we design a dynamic controller with frequency-dependent performance objectives using the H ∞ theory. We show that this dynamic controller does not only provide disturbance rejection but is also able to meet certain performance objectives embedded in frequency-dependent constraints. Finally, we note that although the design requires the knowledge of the entire barycentric matrices, the approach described in this paper serves as the foundation of future investigation towards decentralized design of dynamic controllers.
Fig. 1 .
1 Fig. 1. Localization in R 2 , anchors: red triangles; unknown location: blue circle. (Left) Trilateration-the unknown location is at the intersection of three circles. (Right) Triangulation-the line segments, h, d c1 , d c2 , d h1 , and d h2 , are computed from trigonometric operations.
Fig. 2 .
2 Fig. 2. R 2 : (Left) Agent i lies in the convex hull of three anchors. (Right) Sensor 4, 6 and 7 form a triangulation set for sensor 5. Blue circles and red triangles indicate agents and anchors, respectively.
0 e
0 x(0) + t Pατ αBu(tτ )dτ, = e Pαt x(0) + P -1 α e Pαt -I αBu, which asymptotically goes to (since lim t→∞ e Pαt = 0) lim t→∞ x(t) = -P -1 α αBu = -(-α(I -P )) -1 αBu, and the theorem follows.
Fig. 4 .
4 Fig. 4. Maximal singular value of S g , T g , and KS g vs. α
2 Y 2
22 dissipative, and a local controller, K, such that:(ii) T (s) is strictly {-Z, -Y, -X}-dissipative with T (s) ∞ < 1, andT (s) = T (s) + Y X X -XZ ; (iii) |S (jω)| ≤ η -1 Ω S (ω) , in the LF range; (iv) |KS (jω) | ≤ η -1 g 2 g 3 min {Ω KS (ω) , ωΩ T (ω)} , in the HF range;then the local controller K solves the Problem 1.
Fig. 5 .
5 Fig. 5. DILOC-CT: Network and convergence speed
Fig. 6 .
6 Fig. 6. Singular value of S g , T g and KS g for different frequencies and for dynamic K(s) (solid blue line) and static α (red dashed line) cases together with corresponding frequency constraints (red dotted line).
Fig. 7 .
7 Fig. 7. Temporal simulations: (Left) Static, α; (Right): Dynamic, K(s).Note the high values of the control command, y i (t)'s, with the proportional controller, that could overexcite the local system, integrator, at each sensor.
Refs.[START_REF] Khan | Distributed sensor localization in random environments using minimal number of anchor nodes[END_REF],[START_REF] Usman A Khan | Distributed sensor localization in euclidean spaces: Dynamic environments[END_REF] further characterize the probability of successful triangulation, imperfect communication, and noise on the distance measurements, among many other refinements.
It is possible to compute the closed-form expressions of S g , T g , and KS g , e.g., by using the Lower Fractional Transformation algebra (LFT),[START_REF] Doyle | Review of LFT's, LMI's and µ[END_REF], but this computation is beyond the scope of this paper.
Note that the controller design requires the knowledge of all barycentric coordinates, P and B, which may be restrictive. However, we stress that the resulting controller is local and the procedure described here is critical to any future work on decentralized design. In addition, computing the locations at a central location from the matrices, P and B, and then transmitting them to the sensors incurs noise in the communication that is not suppressed; this procedure is further susceptible to cyber attacks revealing the sensor locations to an adversary.
The reason of using parameter β is that in order to properly define the H∞ norm, the weighting filters W i should be stable transfer functions. Since S has zero at zero, Eq. (10) (because of presence of integrator in local sensor dynamics, seeFig 3), the weighting filter W 2 would contain integrator and thus be unstable with β = 0. This is a classical problem in the H∞ design,[START_REF] Skogestad | Multivariable Feedback Control, Analysis and Design[END_REF], which is practically solved by perturbing the pure integrator,[START_REF] Destino | Positioning in Wireless Networks: Non-cooperative and Cooperative Algorithms[END_REF] s , by a small real parameter, β : 1 s → 1 s+β . | 40,308 | [
"975927",
"1228",
"927732"
] | [
"248373",
"408749",
"398719"
] |
01484244 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01484244/file/978-3-642-28827-2_4_Chapter.pdf | Roman Veynberg
email: veynberg@gmail.com
Victor Romanov
Different instrumental methods which can be used in new EIS: theory and practical approach
Keywords: enterprise information systems, business rules management systems, enterprise decision management, instrumental methods, Oracle Business Rules, SAP Business Rules
In this paper we study Business rules management system (BRMS) approach in Enterprise information systems (EIS) development. The approach is closely connected with Enterprise decision management (EDM) concept and usage of EIS in business and world economy. The paper has survey nature and gives retrospective analysis of today's marketing situation in EIS and BRMS industry. Highlight examples of EIS BRMS modules (ORACLE and SAP), their advantages and goals in the real high-tech industry.
Introduction
Sooner or later top management of any company faced with the problems of systematization of data and process of automation during working with the information. Over time, growth in the volume of data makes within the company problems in creating modern enterprise information systems (EIS), which cover all aspects of the business enterprise. Purchase of EIS is not the final purpose. EIS is only a tool which allows organizations to operate effectively. This applies to work not only ordinary performers, but also top-managers at any company's level [1,2].
Thus, purchase of EIS -is only purchase of a tool to maintain control over the company or to increase the effectiveness of this control. "Automatic Control", unfortunately, does not usually happen. Therefore, if after the implementation of EIS, the process of collecting and processing information is not accelerating and increasing the accuracy and completeness of the data, and management of the organization is not receiving new data, or can not use it properly, then the information remains unclaimed, and it does not lead to more effective solutions. EIS itself does not increase profitability [START_REF] Chisholm | How to build a business rules engine[END_REF]. It can increase efficiency and expedite the processing of data, which can provide information for decision making. Manager can increase profitability from effective solutions based on this information. It is therefore necessary not only to choose and implement EIS, but also learn how to use it with maximum efficiency. Moreover, understanding whether and how the use of EIS should be preceded, more precisely determine the choice of the supplier and the process of implementing EIS. The main thing that allows manager to make EIS is to unite the activities of the enterprise with its instrumental methods [START_REF] Chisholm | How to build a business rules engine[END_REF]. Author's concept of EIS with instrumental methods is presented in the figure 1. Instrumental methods include: business rules tech., scenario analysis and precedent approach.
Figure 1. Enterprise information system with instrumental methods within
For industrial EIS important information is: production data, finance data, procurement, and marketing. Based on the information, manager can quickly adjust and plan the activities of the company. He or she gets a chance to see the whole enterprise and business processes from the inside, to see the basic functioning of the system, where and how he or she can minimize the costs, which prevents from the increasing of the profits. Management team is interested in consolidation of information from the whole branches and central offices of their enterprises, as well as having possibility of monitoring remotely all units.
EDM (Enterprise Decision Management) Approach
Since the late 80's companies were focused on improving the efficiency of business processes and building a data structure. In the early nineties has become apparent insufficiency of the approach. Describing the processes and data, the researchers found that there is one other area of expertise, critical for understanding, the nature of any organization -its business rules. Designed to support business structure, control and influence the behavior of businesses, business rules appear as a result of restrictions imposed on business. This trend popularized by Ron Ross, Barbara von Halle, James Taylor, Tony Morgan and other authors [START_REF] Harmon | Business Rules //Business Process Trends[END_REF][START_REF] Chisholm | How to build a business rules engine[END_REF]. So, it was determined that this approach should include parameters such as terms, facts and circumstances. The concept of business rules are widely used today, there are special groups of organizations, such as, Business Rules Group (BRG), Semantics of Business Vocabulary & Business Rules association (SBVRa) and International Business Rules Forum, which develop different standards for business rules. While no single standard format for business rules was created, there were developed a number of related standards, for example, Semantics of Business Vocabulary and Business Rules (SBVR), Production Rule Representation (PRR).
The integration of business rules and EIS was facilitated to appear in the mid-90's into concept of Enterprise Decision Management (EDM), oriented to a higher degree of automation in decision-making process, replacing an approach based on business process management, when the main task was automatically chosen, and algorithms were presented in the form of business rules [START_REF] Chisholm | How to build a business rules engine[END_REF]. Enterprise Decision Management has three major components: organizational component, developing component and management component (figure 2).
Figure 2. Enterprise Decision Management approach with its three major components
Due to the complexity of decision-making process within organizations in recent years, EDM has acquired special importance to the business. It must match rapidly changing laws and market situations. To automate the decision-making process, EDM involves pooling of analytical tools, forecasting and automating solutions, business rules, processes and business procedures, electronic control systems and organizational structure.
The key points of EDM concept in terms of business rules are:
1.
business users who have the ability to control key points of solutions in business processes;
2. business rules are ideally suited to illustrate the correctness of the decision-making process;
3. the possibility of replacing part of the application code for business rules;
4.
visualization of business regulations and their relationships for easy management and the possibility of substitution code.
With the advent of EDM concept, business rules have strong architecture and clear justification from business standpoint, because business flexibility is unstable without right management decisions within the organization. The application of business rules in the way of automating decisions reduces development costs and maintenance; stops the dependence of system's update from IT industry [START_REF] Romanov | Customer-Telecommunications Company's Relationship Simulation Model (RSM), Based on Non-Monotonic Business Rules Approach and Formal Concept Analysis Method[END_REF].
BRMS (Business Rules Management System) Approach together with EIS Leaders
Every company use several hundreds or thousands of specific rules (business rules), such as legislative initiatives, agreements with partners, inside restrictions, certain internal rules of the organization which determine its behavior, business policy and distinguish the enterprise from others. Business rules are indirectly determined by a large number of inconsistent analytical and project documents, and mostly they can be transformed into logic and application programs.
Often they are not available or unconsciousness in general, developers make assumptions about the business rules that may be incorrect and poorly aligned with the objectives of the enterprise, and can not be easily modified and adapted [START_REF] Harmon | Business Rules //Business Process Trends[END_REF]. This fact leads to various inconsistencies and errors and makes it difficult to change business rules, and this is necessary response for change in the external and internal environment.
A leading providers of business rules management systems have been successfully developed their systems in parallel with the suppliers of EIS, and now two markets have become closer [1].
One of the leaders in EIS business is Oracle Corporation, has its own Business Rules module, which consists of three components:
1. A Rules engine 2. A Rules SDK for use by applications that modify and/or create Rules.
The Rule Author GUI for Rules creation
Architectural structure of Oracle BRMS is presented in figure 3.
Figure 3. Oracle Business rules components
Another EIS leader is German company SAP, which provides users with its own BRM system: SAP NetWeaver BRM [2].
SAP NetWeaver BRM helps managers to manage the growing set of business rules in any organizations. Therefore, SAP provides the following tools:
Competitive Preferences of BRMS Approach in Business
Business rules management system speeds up all processes occurring within the enterprise, where it was embedded. As a consequence, the company more favorably responds to changes in the environment or on the global economic crisis. Accelerated decision-making process and automate operational decisions inside and outside of the enterprise which has beneficial effect on business itself [START_REF] Chisholm | How to build a business rules engine[END_REF].
Examples are: the use of BRMS improve business efficiency by 25% in General Electric customer service; 15% of efficiency at the pharmaceutical giant Bayer; Swiss Medical's profit, using BRMS, was increased by 23.5% during the reporting period (second quarter 2010); in Delta Airlines, with the introduction of BRMS, processing speed of customer service was increased by 2.5 times, which resulted in increasing of revenue grows (15.8%). Wodafone, one of the biggest telecommunication companies in the world, implementing BRMS for its order processing system of personalized service packages for customers of different consumer clusters, increased processing speed operational decision-making by 2.5 times, increasing net income by 25 % for the report period (2010, Q2) [START_REF] Harmon | Business Rules //Business Process Trends[END_REF][START_REF] Chisholm | How to build a business rules engine[END_REF].
Conclusion
The above description of EIS, its basic definition and practical examples of the following BRMS systems give managers simple possibility to identify problems on time and solve them with immediate actions automatically. Together with BRMS approach Enterprise Information Systems can identify the relationships of objects in the presence of incomplete information, and decrypt the conceptual lattice, obtained by the algorithm which does not require additional knowledge, because of its simplicity.
1. 3 . 5 .
35 Rules composer -Enables process architects and IT developers to create and modify business rules via rule representation formats, such as decision tables 2. Rules analyzer -Enables business users to test, refine, analyze, and optimize business rules Rules manager -Enables business users to edit and manage business rules in a Web-based collaborative environment 4. Rules repository -Provides the environment for rules versioning, permissions management, access control, alerts, and additional repository services Rules engine -Executes rules, integrated with the run-time technology provided by SAP NetWeaver Composition Environment Architectural structure of SAP BRMS is presented in figure 4.
Figure 4 .
4 Figure 4. SAP Business rules components | 11,789 | [
"1003515"
] | [
"487707",
"487707"
] |
01484249 | en | [
"spi"
] | 2024/03/04 23:41:48 | 2016 | https://hal.science/hal-01484249/file/ACC_Sjoerd_AV.pdf | Sjoerd Boersma
email: s.boersma@tudelft.nl
Anton Korniienko
email: anton.korniienko@ec-lyon.fr
Khaled Laib
email: khaled.laib@doctorant.ec-lyon.fr
J W Van Wingerden
email: j.w.vanwingerden@tudelft.nl
J W Van
Robust performance analysis for a range of frequencies
de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
Robust Performance Analysis for a Range of Frequencies
S.Boersma 1 , A.Korniienko 2 , K.Laib 3 , J.W. van Wingerden 4 Abstract-Time domain specifications such as overshoot, rise time and tracking behaviour can be extracted from an amplitude frequency response. For uncertain systems we use for this an upper bound on the maximum amplitude frequency response. There are tools which can compute this upper bound for each frequency in a grid. Computing this upper bound can be computational expensive when studying a large scale system hence it is interesting to have a low dense frequency grid. However, in such a case, it can for example occur that the maximum peak of the amplitude frequency response occurs at a frequency which is not in this grid. A consequence is that the overshoot will not be determined well for the system. In this paper we will present a method such that this can not occur. We will augment the uncertainty set with an additional uncertain parameter. This uncertain parameter will cover the frequencies which are not covered by the grid. This allows us to do a robustness analysis for a range of frequencies. In this case we are sure that we do not miss any crucial information with respect to the amplitude frequency response lying in between the frequencies in the grid. We illustrate this using two simulation examples.
I. INTRODUCTION
It is possible to extract time domain specifications as e.g. overshoot, rise time and tracking behaviour from the amplitude frequency response of a system. The slope in the low frequency regions can give us for example information about the tracking behaviour of the system, the cross over frequency can give us information on the speed of the system and the maximum peak of the frequency response can give us information on the overshoot of the system. However, when dealing with uncertain systems as defined in for example [START_REF] Skogestad | Multivariable feedback control analysis and design[END_REF] and [START_REF] Zhou | Robust and Optimal Control[END_REF], it is not sufficient to study one amplitude frequency response to extract this kind of information since the system is then a function of an infinite set containing the uncertainties. Hence, in order to make statements on the previously mentioned time domain specifications while looking at the amplitude frequency response, it is necessary to find the maximum among these responses. By using the latter we can guarantee that we for example will find the maximum peak of the frequency response among all the uncertainties in the set. This maximum amplitude frequency response can then be used to make statements on e.g. the overshoot. Due to the fact that the set containing the uncertainties is infinite, it is from a practical point of view difficult to find the maximum amplitude frequency response. It is however possible to compute, per frequency, an upper bound on this maximum amplitude frequency response with the aid of µanalysis tools and corresponding convex optimization. We can then make statements on the time domain specifications using this upper bound.
As stated before, with µ-analysis tools we can compute an upper bound on the maximum amplification of an uncertain system. The method has been studied well over the past few years. See for example [START_REF] Doyle | Structured uncertainty in control system design[END_REF] and [START_REF] Fan | Robustness in the presence of mixed parametric uncertainty and unmodeled dynamics[END_REF]. In this paper we exploit µ-analysis tools in order to solve a problem which can occur when applying it in the standard way. The problem is that when using µ-analysis tools, the analysis is done only for specific frequencies in a grid. We do not analyse the intermediate frequencies hence we can not say anything about the maximum amplification of the system for these intermediate frequencies. It is in addition not always interesting to increase the density of the frequency grid since computing an upper bound is computational expensive. In this paper we introduce a method which allows us to also ensure the maximum amplification through the system for these intermediate frequencies while using the standard µanalysis tools. In other words, the method allows us to do a robustness analysis for a range of frequencies. The method can be applied to systems with multiple inputs and multiple outputs (MIMO) as well as single input single output (SISO) systems.
In [START_REF] Iwasaki | Generalized KYP lemma: unified frequency domain inequalities with design applications[END_REF], the authors propose a method to solve a similar problem. They suggest to test a frequency dependent infinite linear matrix inequality (LMI) conditions in the form of frequency independent finite LMI conditions. The latter conditions directly include the information about a specified frequency range to which they are applied. The method proposed in this paper is different. Instead of using "special LMI conditions", we represent the frequency as an uncertain parameter and then build our conditions based on the augmented system. The advantage is that we can use the traditional µ-analysis tools which implies that people familiar with these techniques are able to directly apply the novel method proposed in this paper. The outcome will then not be an upper bound on the maximum amplification through an uncertain system for only one specific frequency but for a range of frequencies. The magnitude of this symmetric range then depends on the size of uncertainty we put on the frequency under consideration. In [START_REF] Ferreres | A µ analysis technique without frequency gridding[END_REF], the authors also propose to make the frequency an uncertain parameter and then use classical µ-analysis tools to make statements on the system. To be more precise, a bound on the maximum amount of uncertainty is found for which the system is robustly stable.
In this paper we will however analyse if the system has a robust performance property instead of stability. The definition of robust performance in this paper will in addition differ from how it is defined in for example [START_REF] Skogestad | Multivariable feedback control analysis and design[END_REF] and [START_REF] Zhou | Robust and Optimal Control[END_REF] and the final result will be generalised. Moreover, we illustrate the effectiveness of the approach using two examples. The first example will be a relatively simple second order system and serves to illustrate the method. The second example will be a large scale system where we, in addition to our method, will use the Hierarchical Approach [START_REF] Dinh | Embedding of uncertainty propagation: application to hierarchical performance analysis[END_REF].
This paper is structured as follows. In Section II, we introduce the necessary preliminary mathematical tools used throughout this paper. In Section III, we will give the problem formulation using an example. This example will illustrate the necessity of the novel method presented in this paper. In Section IV, we will introduce the novel method which will solve the problem as formulated in Section III. We will revisit the example and will illustrate the benefit of the new method while applying it on the same example. In Section V we will demonstrate the novel method on a large scale system. Since computing an upper bound on the amplitude frequency of such a high dimensional system is computational expensive, the Hierarchical Approach is used. The interested reader is referred to [START_REF] Dinh | Embedding of uncertainty propagation: application to hierarchical performance analysis[END_REF], [START_REF] Dinh | Convex hierarchicalrchical analysis for the performance of uncertain large scale systems[END_REF] and [START_REF] Laib | Phase IQC for the hierarchical performance analysis of uncertain large scale systems[END_REF] for more detailed information on the Hierarchical Approach.
II. PRELIMINARIES
In this section we summarise the theoretical background used in this paper. We begin by giving the considered system interconnection followed by a definition of robust performance and a theorem with which we can ensure this robust performance. Figure 1 presents the system interconnection under consideration. We define z ∈ C nz and w ∈ C nw as the performance and exogenous input signal respectively. The signals p ∈ C np and q ∈ C nq are the output and input respectively of the uncertainty block ∆ ∈ R np×nq . The transfer M ⋆ ∆ is the performance transfer function which we would like to analyse with stable M ∈ C nz+nq×nw+np having the following state space representation:
ẋ = Ax + B p B w p w q z = C q C z x + D qp D qw D zp D zw p w (1)
The signal z * is the complex conjugate transpose of z and the operator ⋆ is defined as the Redheffer star product [START_REF] Doyle | Review of LFTs, LMIs, and µ[END_REF]. In [START_REF] Skogestad | Multivariable feedback control analysis and design[END_REF] this is referred to as the upper linear fractional transformation. We furthermore have the infinite uncertainty set
∆ = {∆ ∈ R np×nq | ∆ = diag(δ 1 I n1 , . . . , δ r I nr ), ||∆|| ∞ ≤ 1}
with real scalar uncertainties δ k ∈ R, for k = 1, . . . , r with r the number of uncertainties and n k the number of repetitions of the uncertainty δ k and n = r k=1 n k , the size of the uncertainty ∆. We note that the work presented in this paper can be extended to a more general class of uncertainties e.g. we could include complex uncertainties in the set ∆. This is however not necessary for illustrating the novel method hence we only consider real uncertainties in the set ∆. We define the following definition of robust performance. Note that this definition is different from the one used in e.g. [START_REF] Skogestad | Multivariable feedback control analysis and design[END_REF] and [START_REF] Zhou | Robust and Optimal Control[END_REF].
Definition 1 (Robust Performance): If for a specific frequency ω i there exists a γ ∈ R such that, for M and ∆ evaluated at the frequency ω i , the LMI
M ⋆ ∆ I * I 0 0 -γ 2 I M ⋆ ∆ I < 0 ∀∆ ∈ ∆ (2)
holds, robust performance is ensured for the M ∆-structure with performance transfer M ⋆ ∆ at the frequency ω i . When minimising γ 2 , the best possible robust performance can be found. Note that in the single input single output (SISO) case, the minimum γ is the maximum amplitude frequency response of the performance transfer M ⋆∆ among all ∆ ∈ ∆ for the frequency ω i . In the multiple input multiple output (MIMO) case, this will be the maximum singular value representing the maximum amplification through the performance transfer M ⋆ ∆ among all ∆ ∈ ∆ for the frequency ω i . The problem of testing if robust performance is satisfied is that it should hold for all ∆ ∈ ∆ with ∆ being an infinite set. In other words, we try to find a maximum frequency response among an infinite number of frequency responses when considering the SISO case. The following theorem is taken from [START_REF] Dinh | Convex hierarchicalrchical analysis for the performance of uncertain large scale systems[END_REF] and provides us with tools to ensure, for a specific frequency ω i , robust performance as defined in Definition 1.
Theorem 1 (Robust Performance Theorem): Robust performance as defined in Definition 1 is ensured for a specific frequency ω i if and only if there exists a matrix Φ ∆ with partitions X ∆ , Y ∆ and Z ∆ such that
∆ I * X ∆ Y ∆ Y * ∆ Z ∆ Φ∆ ∆ I ≥ 0 ∀∆ ∈ ∆ (3)
I M * X ∆ 0 Y ∆ 0 0 -γ 2 I 0 0 Y * ∆ 0 Z ∆ 0 0 0 0 I I M < 0 (4)
hold.
As stated in Theorem 1, two LMIs should be verified to ensure robust performance where the one given in (3) depends on ∆ and should hold for all ∆ ∈ ∆. It is shown in [START_REF] Scherer | Linear Matrix Inequalities in Control[END_REF] that by choosing a parametrisation of the matrix Φ ∆ , the LMI in ( 3) is always ensured. A consequence of introducing such a parametrisation is that the "if and only if" condition to ensure robust performance as given in Theorem 1 will become an "if" condition. This is stated in the following corollary.
Corollary 1: Let the matrix Φ ∆ belong to a bounded set Φ ∆ such that (3) is always satisfied. Then robust performance as given in Definition 1 is ensured for a specific frequency ω i if there exists a Hermitian matrix
Φ ∆ ∈ Φ ∆ with partitions X ∆ = X * ∆ , Y ∆ , Z ∆ = Z * ∆ ≥ 0 and X ∆ ∈ C np×np and Z ∆ ∈ C nq×nq such that (4) holds.
One possible parametrisation is the DG-scaling [START_REF] Scherer | Linear Matrix Inequalities in Control[END_REF] and appropriate for the set of uncertainties we take into consideration in this paper. The interested reader is referred to [START_REF] Scorletti | Improved efficient analysis for systems with uncertain parameters[END_REF] for other parametrisations. The DG-scaling is defined as the set:
Φ ∆ = Φ ∆ | Φ ∆ = -D G G * D with D = bdiag(D 1 , . . . , D r ) and G = bdiag(G 1 , • • • , G r ) (5)
And we have that
D k = D * k > 0 ∈ C n k ×n k with D having the property D∆ = ∆D, G k = -G * k ∈ C n k ×n k for k = 1, . . . , r.
We note that if one wants to take a more general uncertainty set into account, i.e. include also complex uncertainty, the G matrix in [START_REF] Iwasaki | Generalized KYP lemma: unified frequency domain inequalities with design applications[END_REF] has to be redefined as e.g. given in [START_REF] Scherer | Linear Matrix Inequalities in Control[END_REF]. We omit this here since it does not contribute to the demonstration of the new method presented in this paper.
Suppose now that we are given a nominally stable uncertain system M ⋆ ∆ with ∆ ∈ ∆. The objective is to find the maximum amplitude frequency response among all ∆ ∈ ∆ for each frequency in the grid. Finding such a maximum response is from a practical point of view not interesting since the set ∆ is infinite as discussed previously. It is however possible to compute, for each frequency in a grid, an upper bound on the maximum amplitude frequency response based on Corollary 1 and the DG-scaling given in [START_REF] Iwasaki | Generalized KYP lemma: unified frequency domain inequalities with design applications[END_REF]. This upper bound can be found by solving for each frequency ω i in the grid the following problem:
min γ 2 ,D,G γ 2 s.t. I M * -D 0 G 0 0 -γ 2 I 0 0 G * 0 D 0 0 0 0 I I M < 0 (6)
With D and G as defined in [START_REF] Iwasaki | Generalized KYP lemma: unified frequency domain inequalities with design applications[END_REF]. The value γ opt = arg min γ 2 is then the upper bound on the maximum amplitude frequency response for the frequency ω i . If we solve the problem given in [START_REF] Ferreres | A µ analysis technique without frequency gridding[END_REF] for the grid ω = [ω 1 , ω 2 , . . . , ω N ] we get an upper bound for each of these frequencies. This upper bound can give us then information about previously discussed time domain performance specifications, the original objective. Now that we have given all the necessary mathematical tools, we can proceed by giving a problem which can occur when using these tools to compute an upper bound and to make statements on time domain specifications by using this upper bound.
III. PROBLEM FORMULATION
Since the problem given in ( 6) is a frequency dependent problem, an upper bound for only the frequencies in the grid under consideration can be ensured. Hence we do not know what happens in between the subsequent frequencies in the grid. This implies that it is possible, for example, that we do not detect the maximum peak of the amplitude frequency response with, as a consequence, a guarantee of time domain specifications other than the system actually exhibits. Indeed, the more dense we make the frequency grid, the more likely it is that we do not miss any important information. However, for high dimensional systems, it can be computationally expensive to compute the upper bound hence it is interesting to have a non dense frequency grid. We will illustrate a possible problem which can occur when applying the classical method to a relatively simple system in the following subsection.
A. Numerical Example
Given the following nominally stable uncertain SISO system
M ⋆ ∆ = 1 ms 2 + bs + k (7) with b = b 0 (1 + W b δ b ) and k = k 0 (1 + W k δ k ) with |δ b | ≤ 1, |δ k | ≤ 1, ∆ = diag(δ b , δ k ), i.e we have that n z = n w = 1, n q = n p = 2.
We furthermore have that m = 10, b 0 = .3, k 0 = 10, W b = .25b 0 and W k = .05k 0 . We can define M accordingly as
M = -W b s -W b s W b s -W k -W k W k -1 -1 1 1 ms 2 + b 0 s + k 0 (8)
Then, for the system given in [START_REF] Dinh | Embedding of uncertainty propagation: application to hierarchical performance analysis[END_REF], we can solve the problem given in (6) for a frequency grid ω i.e. we can compute an upper bound on the maximum frequency response among all ∆ ∈ ∆ for the frequencies in the grid. In Figure 2 we depict the results including the frequency response of the nominal model. The latter is considered as continuous to illustrate the possible problem which can arise using standard µanalysis tools. When using the upper bound to, for example, make statements on the maximum peak of the frequency response, we obtain a maximum of 6.91 [dB]. However, the nominal model already has a maximum of 10.46 [dB] though on a different frequency. This frequency is however not in the set ω used for computing the upper bound. Hence, when using the upper bound, we will guarantee time domain specifications which the system actually does not exhibits. It could thus be interesting to be able to guarantee an upper bound for the complete grid ω, i.e. also for the intermediate frequencies. We will present in the following subsection a method which allows us to do this. We will see that we need to solve problem [START_REF] Ferreres | A µ analysis technique without frequency gridding[END_REF] with modified ∆ and M for a grid of frequencies and that we are able to ensure an upper bound for each frequency in the grid and for a small range around each of the frequencies. The proposed method will be illustrated using the same numerical example as presented in this section.
IV. PROPOSED METHOD
In order to ensure an upper bound for a range around a frequency we introduce an additionally uncertain parameter
ω i = ω 0 i (1 + W ω δ ω ) with |δ ω | ≤ 1.
In other words, we make each frequency in the grid an uncertain parameter with ω 0 i being one nominal frequency out of the grid and W ω defining the (symmetric) range around this nominal frequency ω 0 i . A consequence is that we will be able to guarantee an upper bound on the maximum amplitude frequency response for the nominal frequency ω 0 i and also for a symmetric range with amplitude defined by W ω around the frequency ω 0 i . Note that in this new case, the frequency grid under consideration will then be a grid with nominal frequencies defined as ω = [ω 0 1 , ω 0 2 , . . . , ω 0 N ]. Then each of these frequencies in the grid can be considered as a nominal frequency and for each of these frequencies, the upper bound can be computed, i.e. the problem given in (6) can be solved. In order to practically clarify the proposed method, we present in Figure 3 the integrator block in the original situation (left) and in the situation when the frequency is an uncertain parameter (right). Indeed, if we close the loop of the block scheme on the right and let s → iω 0 i we get the transfer:
1 iω 0 i (1 + W ω δ ω ) , |δ ω | ≤ 1 (9)
After the replacement of all the integrator blocks in our system, we can define an augmented uncertainty block
∆ a = diag(∆, I n δ ω ), ||∆ a || ∞ ≤ 1 (10)
with n the number of integrators in the system under consideration. Then we can compute an augmented M a matrix accordingly. Note that the augmented uncertainty block ∆ a will always contain one additional n times repeated uncertainty block. It is possible to generalise the above and depict the augmented uncertainty ∆ a and M a in the M ∆structure. This is illustrated in Figure 4. The partitions in the matrix M a are given in [START_REF] Skogestad | Multivariable feedback control analysis and design[END_REF]. Now that we have explained the necessary steps to ensure an upper bound for a range of frequencies and gave the definitions of the augmented uncertainty ∆ a and M a , we are ready to give the following theorem.
Theorem 2 (RP Theorem for a range of frequencies): Robust performance as defined in Definition 1 is ensured for a nominal frequency ω 0 i and a symmetric frequency range with magnitude |W ω | around ω 0 i if we solve: min
γ 2 ,D,G γ 2 s.t. I M a * -D 0 G 0 0 -γ 2 I 0 0 G * 0 D 0 0 0 0 I I M a < 0 (11
) with D and G as defined in [START_REF] Iwasaki | Generalized KYP lemma: unified frequency domain inequalities with design applications[END_REF] according to the new uncertainty block ∆ a as given in [START_REF] Doyle | Review of LFTs, LMIs, and µ[END_REF]. The value γ opt = arg min γ 2 is then the upper bound on the maximum frequency response for the frequency ω 0 i and a symmetric frequency range with magnitude |W ω | around ω 0 i . The proof of this theorem follows directly from Theorem 1 and Corollary 1. If we solve the problem as defined in [START_REF] Scherer | Linear Matrix Inequalities in Control[END_REF] for the grid ω = [ω 0 1 , ω 0 2 , . . . , ω 0 N ] we get an upper bound for each of these frequencies and a range around each of these frequencies. This upper bound can then give us information about time domain performance specifications, the original objective. Observe that the range for which we can ensure robust performance depends on |W ω | hence the latter should be chosen such that each subsequent frequency will be overlapped by its neighbours. In the following subsection, we will give the same numerical example as we presented before though now having uncertainty on the frequency as discussed in this subsection.
A. Numerical Example
In this section we reconsider again the system as given in [START_REF] Dinh | Embedding of uncertainty propagation: application to hierarchical performance analysis[END_REF]. We will now make the frequency an uncertain parameter as we have discussed in the previous subsection. Since the system in [START_REF] Dinh | Embedding of uncertainty propagation: application to hierarchical performance analysis[END_REF] has two integrators, the new uncertain system will have one two times repeated uncertain parameter in addition to the two uncertain parameters b and k. The new uncertainty block is defined as ∆ a = bdiag(δ b , δ k , δ ω I 2 ) (hence n p = n q = 4) and the new M a matrix can then be defined accordingly as:
M a = -W b s -W b s -W b msi -W b k0i W b s -W k -W k -W k mi -W k (ms+b0)i W k -Wω s -Wω s -Wωmsi Wωk0i Wω s -Wω -Wω -Wω mi -Wω (ms+b0)i Wω -1 -1 -mi -(ms+b0)i 1 1 ms 2 + b 0 s + k 0
When solving the problem as given in (11) while using appropriate matrices D and G as given in [START_REF] Iwasaki | Generalized KYP lemma: unified frequency domain inequalities with design applications[END_REF], we obtain, after properly choosing W ω , the (continuous) upper bound as illustrated in Figure 5. Note that we use the same frequency grid as we did when computing the upper bound as depicted in Figure 2. However, due to the fact that we made the frequency an uncertain parameter, we are now also able to guarantee an upper bound in between the frequencies under consideration. It can be seen that we guarantee a maximum peak of 13.18 [dB] and that the nominal frequency response is situated below the computed upper bound. It should be clear that the upper bound is a bound on the maximum amplitude frequency response hence we can conclude that the maximum amplification of the system is not more than 13.18 [dB]. Using the previous method we guaranteed an maximum amplification of 6.91 [dB] which was already violated by the nominal model. This implies that by using the new method, we able to ensure time domain specifications of the system which it really exhibits. The example given in this section illustrates that, by using the proposed method, we are able to ensure time domain specifications of a system which are guaranteed not violated. By using the standard technique we can guarantee a better performance due to a "wrong" choice of the frequency grid while this performance is not exhibited by the system under consideration. The novel method only needs a proper choice of the weight W ω such that the subsequent frequencies overlap. For the example considered until now, we could argue to increase the density of the grid ω to solve the frequency gridding problem because we are dealing with a relatively simple system. Then we will most likely not miss the peak in the maximum amplitude frequency response. However, when dealing with high dimensional systems, increasing the density of ω is not always interesting. This is due to the fact that computing an upper bound for high dimensional systems are computational expensive. In the following section we apply the presented method on such a high dimensional system.
V. LARGE SCALE NETWORK APPLICATION
The large scale system studied in this section will be a network of N = 16 phase locked loops (PLLs) and is taken from [START_REF] Korniienko | Control law synthesis for distributed multi-agent systems: Application to active clock distribution networks[END_REF]. Such a network can for example be used to distribute and synchronise a clock signal in a multicore processor [START_REF] Korniienko | Control law synthesis for distributed multi-agent systems: Application to active clock distribution networks[END_REF]. The network can be presented using the M ∆-structure as depicted in Figure 6. Here we have the static matrix M which defines how the PLLs influence each other in order to synchronize the network [START_REF] Dinh | Embedding of uncertainty propagation: application to hierarchical performance analysis[END_REF] (see Equation ( 13)) and ∆ = diag(T 1 , T 2 , . . . , T 16 ) with T l defined as one PLL in the network. We furthermore have the SISO performance transfer M ⋆ ∆ as the global transfer function for which we would like to compute an upper bound, i.e. for which we would like to ensure the optimal robust performance as defined in Definition 1. This example is suitable for illustration since the performance is naturally evaluated in the frequency domain [START_REF] Korniienko | Control law synthesis for distributed multi-agent systems: Application to active clock distribution networks[END_REF].
A. PLL network description
In the network considered in this paper, all the PLLs are homogeneous and their individual uncertainty blocks belong to the same uncertainty set ∆. These uncertainties "capture" the technological dispersions which occur due to the manufacturing process. They can be presented as parametric uncertainties belonging to the same set ∆. Then, the description of the N PLLs is:
T l (iω 0 i ) = k l (iω 0 i + a l ) -(ω 0 i ) 2 + k l iω 0 i + k l a l , ∀l ∈ {1, . . . , N }
where k l ∈ [0.76, 6.84] × 10 4 , a l ∈ [91. 1, 273.3] and ω 0 i is the current frequency. Furthermore, T l (iω 0 i ) can written as the interconnection of a certain and an uncertain part:
T l (iω 0 i ) = ∆ l ⋆ M PLL , ∆ l ∈ ∆
with ∆ l denoted as:
∆ = {∆ l ∈ R 2×2 | ∆ l = diag(δ k l , δ a l )}
B. Performance analysis
The performance analysis of this network consists in computing an upper bound on the maximum amplitude frequency response. The problem of using standard µ-analysis tools on such high dimensional system is the computation effort needed to compute an upper bound on the maximum amplitude frequency response due to the large scale aspect. In order to reduce this effort, the authors in [START_REF] Dinh | Embedding of uncertainty propagation: application to hierarchical performance analysis[END_REF], [START_REF] Dinh | Convex hierarchicalrchical analysis for the performance of uncertain large scale systems[END_REF] and [START_REF] Laib | Phase IQC for the hierarchical performance analysis of uncertain large scale systems[END_REF] use a method called the Hierarchical Approach initially proposed in [START_REF] Safonov | Propagation of conic model uncertainty in hierarchical systems[END_REF]. This method allows to reduce the computational effort with respect to the time necessary when using standard techniques. Nevertheless, it is interesting to reduce the resolution of the frequency grid for which we compute the upper bound in both methods: direct and hierarchical. Indeed, the denser the frequency grid, the higher the computation effort. However, we can miss important information if we reduce the density and as a consequence, make false statements on the time performance of the system as we have seen in the numerical example presented in Section III. Therefore, it is interesting to apply our method on the network of PLLs using the Hierarchical Approach.
C. Hierarchical Approach for a range of frequencies
For the network considered here, the Hierarchical Approach consists of two steps, the local and the global step. The final goal is to compute an upper bound on the maximum amplitude frequency response of the SISO performance transfer M ⋆ ∆, i.e. we would like to ensure the optimal robust performance as defined in Definition 1 for the performance transfer. In order to do so we can use Theorem 1. We can unfortunately not use the parametrisation as defined in [START_REF] Iwasaki | Generalized KYP lemma: unified frequency domain inequalities with design applications[END_REF] since in that case, the LMI in ( 3) is not by definition verified. Hence the first (local) step of the Hierarchical Approach consists of finding a suitable parametrisation of the set Φ ∆ such that (3) always holds. We will, in the following subsections, describe the steps done in the Hierarchical Approach in more detail.
1) Local step: In this step we make the frequency in each PLL uncertain, such that we obtain ∆_a^l = diag(δ_{k_l}, δ_{a_l}, δ_ω I_2). We then define T_a^l accordingly, which we can write as T_a since the PLLs are homogeneous. Now we are interested in characterizing the input-output behaviour of each PLL using integral quadratic constraints (IQCs) which can, in the complex plane, be interpreted as simple geometric forms: disc [START_REF] Dinh | Embedding of uncertainty propagation: application to hierarchical performance analysis[END_REF], band [START_REF] Dinh | Convex hierarchicalrchical analysis for the performance of uncertain large scale systems[END_REF] and cone [START_REF] Laib | Phase IQC for the hierarchical performance analysis of uncertain large scale systems[END_REF], such that for each frequency ω_i^0 we have:
[T_a; I]^* [X_k  Y_k; Y_k^*  Z_k] [T_a; I] < 0
with k ∈ {disc,band,cone}. Details concerning the formulation of the IQCs are in [START_REF] Dinh | Embedding of uncertainty propagation: application to hierarchical performance analysis[END_REF], [START_REF] Dinh | Convex hierarchicalrchical analysis for the performance of uncertain large scale systems[END_REF] and [START_REF] Laib | Phase IQC for the hierarchical performance analysis of uncertain large scale systems[END_REF]. Finding the matrices X k , Y k and Z k is also referred to as finding a suitable embedding which can be characterised by IQCs. We can apply Theorem 2 to obtain these IQCs for a range of frequencies around ω 0 i .
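As a rough illustration of the disc embedding, the sketch below samples T_a(jω_i^0) over a grid of (k_l, a_l) values at one arbitrarily chosen frequency, covers the samples with a simple enclosing disc of center c and radius r, and checks the resulting IQC triple (X_disc, Y_disc, Z_disc) = (1, −c, |c|² − r²). The frequency uncertainty δ_ω and the band/cone IQCs are ignored here for simplicity, and the enclosing disc is not the minimal one.

```python
import numpy as np

w0 = 2.0e3                                             # assumed frequency of interest
k_grid = np.linspace(0.76e4, 6.84e4, 15)
a_grid = np.linspace(91.1, 273.3, 15)
samples = np.array([k * (1j * w0 + a) / (-(w0**2) + k * 1j * w0 + k * a)
                    for k in k_grid for a in a_grid])

c = samples.mean()                                     # simple (non-minimal) enclosing disc
r = 1.001 * np.abs(samples - c).max()
X_disc, Y_disc, Z_disc = 1.0, -c, abs(c)**2 - r**2

# scalar form of the IQC: X|z|^2 + 2 Re(conj(Y) z) + Z <= 0 for every sample z
quad = X_disc * np.abs(samples)**2 + 2 * np.real(np.conj(Y_disc) * samples) + Z_disc
print("all samples satisfy the disc IQC:", bool((quad <= 0).all()))
```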
2) Global step:
The different IQCs obtained in the local step can be used to characterize the 16 PLLs gathered in ∆ such that:
[∆; I]^* [X  Y; Y^*  Z] [∆; I] ≥ 0
holds with:
X = −diag(Σ_k τ_{1k} X_k, . . . , Σ_k τ_{Nk} X_k),   Y = −diag(Σ_k τ_{1k} Y_k, . . . , Σ_k τ_{Nk} Y_k),   Z = −diag(Σ_k τ_{1k} Z_k, . . . , Σ_k τ_{Nk} Z_k)
where k ∈ {disc, band, cone} and τ_ik > 0 are introduced to increase the number of decision variables in the global step, which can reduce the possible conservatism. Now we can apply Theorem 2 to solve the problem of finding an upper bound on the maximum amplitude global transfer frequency response by solving, for each frequency ω_i^0 in the grid, the following problem:

min_{γ², X, Y, Z} γ²   s.t.   [I; M]^* [−X  0  −Y  0;  0  −γ²I  0  0;  −Y^*  0  −Z  0;  0  0  0  I] [I; M] < 0   (12)
The value γ_opt = arg min γ² is then the upper bound on the maximum amplitude frequency response of the transfer M ⋆ ∆ for the frequency ω_i^0 and a range around this frequency defined by W_ω. Note that the problem defined in (12) is an adapted version of the problem defined in [START_REF] Ferreres | A µ analysis technique without frequency gridding[END_REF]. When we solve the problem given in [START_REF] Scorletti | Improved efficient analysis for systems with uncertain parameters[END_REF] for the grid ω = [ω_1^0, ω_2^0, . . . , ω_N^0], the (continuous) upper bound depicted in Figure 7 is obtained. We observe in Figure 7 that the performance transfer can be seen as a sensitivity function from which the bandwidth ω_b, the maximum peak M_s and the slope in the low frequency region can be extracted. These give us information on the time-domain specifications rise time, overshoot and tracking behaviour of the system, respectively. Because we use the proposed method, we are able to ensure a continuous upper bound and will therefore not miss any important information about the system.
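A minimal cvxpy sketch of the per-frequency problem (12) is given below. Everything in it is placeholder data: a small random static interconnection M (scalar subsystem channels and a SISO performance channel) and a single disc-type IQC triple (X_k, Y_k, Z_k) shared by all subsystems. The SCS solver and the toy sizes are arbitrary choices, and the problem may turn out infeasible for other random draws.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N = 4                                           # toy network (16 PLLs in the paper)
M = rng.standard_normal((N + 1, N + 1)) * 0.1   # assumed interconnection [p; w] -> [q; z]
Xk, Yk, Zk = 1.0, -0.8, 0.3                     # assumed local IQC (disc) parameters

tau = cp.Variable(N)                            # multipliers tau_i
gam2 = cp.Variable()                            # gamma^2

# middle matrix of (12), blocks ordered (p, w, q, z); here -X = diag(tau*Xk), etc.
Zc = np.zeros((N, 1))
mid = cp.bmat([
    [cp.diag(Xk * tau), Zc,                cp.diag(Yk * tau), Zc],
    [Zc.T,              -gam2 * np.eye(1), Zc.T,              np.zeros((1, 1))],
    [cp.diag(Yk * tau), Zc,                cp.diag(Zk * tau), Zc],
    [Zc.T,              np.zeros((1, 1)),  Zc.T,              np.eye(1)],
])
IM = np.vstack([np.eye(N + 1), M])              # [I; M]
S = cp.Variable((N + 1, N + 1), symmetric=True)
prob = cp.Problem(cp.Minimize(gam2),
                  [S == IM.T @ mid @ IM,        # the quadratic form of (12)
                   S << -1e-8 * np.eye(N + 1),
                   tau >= 1e-6, gam2 >= 0])
prob.solve(solver=cp.SCS)
print(prob.status, "gamma_opt ~",
      None if gam2.value is None else float(np.sqrt(gam2.value)))
```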
VI. DISCUSSION
In this paper we presented a method which allows us to compute an upper bound, valid over a range of frequencies, on the maximum amplitude frequency response of a system. The range depends on the weight W_ω. If the latter is chosen such that the ranges associated with subsequent frequencies in the frequency grid overlap, then we are able to guarantee a continuous upper bound on the maximum amplitude frequency response from the first frequency in the grid to the last. It should be noted that the solution can become conservative. The source of this conservatism is the additional uncertainty on the frequency that the new method introduces into the system. This conservatism depends on |W_ω|, which in turn depends on the resolution of the frequency grid. The denser the frequency grid, the smaller we can make |W_ω|, since it only needs to be large enough for the ranges of subsequent frequencies to overlap. The upper bound then becomes less conservative.
VII. CONCLUSIONS
The method presented in this paper allows us to ensure an upper bound on the maximum amplitude frequency response for a range of frequencies. Its interest is illustrated by the example given in Section III. There we showed, using a relatively simple SISO example, that time-domain specifications can appear to be guaranteed even though the system does not actually exhibit them. This overestimation occurred since, with classical techniques in the frequency domain, we are not able to ensure an upper bound for a range of frequencies. The method in this paper, however, allows us to do so.
We have also shown that the new method can be applied to a more advanced, high-dimensional system. This is even more interesting since, in such a case, it is important to keep the density of the frequency grid low because of the otherwise increasing computational load. The proposed method allows us to keep the density of the frequency grid low while still ensuring a continuous upper bound from the first to the last frequency in the grid. By doing so we are sure that the time-domain specifications guaranteed using the upper bound are the specifications the system actually has.
Fig. 1. The M ∆-structure.
Fig. 2. Frequency response of the nominal model (black line) and the upper bound on the maximum frequency response.
Fig. 3. Integrator block (left) and integrator block with uncertain frequency (right).
Fig. 4. The M ∆-structure with augmented uncertainty ∆_a and matrix M_a.
Fig. 5. Frequency response of the nominal system (black line) and the upper bound on the maximum frequency response.
Fig. 6. Block scheme representation of the large scale system.
Fig. 7. The upper bound on the maximum frequency response of the performance transfer function. | 36,788 | [
"1003516",
"1228",
"8005"
] | [
"333368",
"408749",
"408749",
"333368"
] |
01484252 | en | [ "spi" ] | 2024/03/04 23:41:48 | 2016 | https://hal.science/hal-01484252/file/ACC2016_Usman_AV.pdf | Anton Korniienko
Usman A Khan
email: khan@ece.tufts.edu
Gerard Scorletti
Robust sensor localization with locally-computed, global H ∞ -design
Keywords: Sensor localization, Robust control, Decentralized control, Control of networks, Sensor fusion.
Of significant relevance to this paper are Refs. [START_REF] Khan | Linear theory for self-localization: Convexity, barycentric coordinates, and Cayley-Menger determinants[END_REF], [START_REF] Khan | Distributed sensor localization in random environments using minimal number of anchor nodes[END_REF], where a discrete-time algorithm, named DILOC, is described assuming that the sensors lie inside the convex hull of at least m+1 anchors in R m . DILOC is completely local and distributed: each sensor finds a triangulation set of m+1 neighboring nodes (sensors and/or anchors) such that the sensor in question lies inside the convex hull of this set. The locations estimates are subsequently updated as a linear combination of the neighboring locations (estimates) in the triangulation set. The coefficients in this linear update are the barycentric coordinates, attributed to August F. Möbius, [17]. Assuming that a triangulation set exists at each sensor, DILOC converges to the true sensor locations. It is important to note that DILOC is linear and only requires m + 1 anchors regardless of the number of sensors as long as the deployment (convexity) conditions are satisfied.
In [START_REF] Khan | An H∞ -based approach for robust sensor localization[END_REF], we provide DILOC-CT that is a continuous-time analog of DILOC and show that by using a proportional controller in the location estimators, the convergence speed can be increased arbitrarily. Since this increase may come at the price of unwanted transients especially when there is disturbance introduced by the network, we consider dynamic controllers that guarantee certain performance and disturbance rejection properties. However, the controller design in [START_REF] Khan | An H∞ -based approach for robust sensor localization[END_REF] is not local, i.e., it requires all of the barycentric coordinates to be known in order to compute the controller parameters. In this paper, we describe a distributed control design that is completely local and only relies on a spectral bound of the matrix of barycentric coordinates. The design in this paper, thus, can be directly implemented at the sensors and no central computation is required. Our approach is based on H ∞ design principles and uses the input-output approach, see e.g., [START_REF] Moylan | Stability criteria for large-scale systems[END_REF]- [START_REF] Korniienko | Performance control for interconnection of identical systems: Application to PLL network design[END_REF].
We now describe the rest of the paper. Section II describes the problem and recaps DILOC-CT while Section III presents the interpretation of the problem under consideration in frequency domain (s-domain). Section IV discusses the main results on using the input-output approach for the decentralized controller design. Section V illustrates the concepts and Section VI concludes the paper.
Notation:
The superscript, 'T ', denotes a real matrix transpose while the superscript, ' * ', denotes the complexconjugate transpose. The N × N identity matrix is denoted by I N and the n × m zero matrix is denoted by 0 n×m . The dimension of the identity or zero matrix is omitted when it is clear from the context. The diagonal aggregation of two matrices A and B is denoted by diag(A, B). The Kronecker product, denoted by ⊗, between two matrices, A and B, is defined as
A ⊗ B = [a ij B].
We use T_{x→y}(s) to denote the transfer function between an input, x(t), and an output, y(t). With a matrix, G, partitioned into four blocks, G_11, G_12, G_21, G_22, G ⋆ K denotes the Redheffer product, [START_REF] Doyle | Review of LFT's, LMI's and µ[END_REF], i.e., G ⋆ K = G_11 + G_12 K (I − G_22 K)^{−1} G_21. Similarly, K ⋆ G = G_22 + G_21 K (I − G_11 K)^{−1} G_12. For a stable LTI system, G, ‖G‖_∞ denotes the H_∞ norm of G. For a complex matrix, P, σ(P) denotes its maximal singular value while ρ(P) denotes its spectral radius. Finally, the symbols '≥' and '>' denote positive semi-definiteness and positive-definiteness of a matrix, respectively.
II. PRELIMINARIES AND PROBLEM FORMULATION
In this paper we consider the problem of Distributed Localization problem in continuous time (DILOC-CT) see [START_REF] Khan | Linear theory for self-localization: Convexity, barycentric coordinates, and Cayley-Menger determinants[END_REF], [START_REF] Khan | Distributed sensor localization in random environments using minimal number of anchor nodes[END_REF], [START_REF] Khan | An H∞ -based approach for robust sensor localization[END_REF], [START_REF] Korniienko | Robust sensor localization with locally-computed, global H∞-design[END_REF] for details. It consists in a network of M sensors, in the set Ω, with unknown locations, and N anchors, in the set κ, with known locations, all located in R m , m ≥ 1; let Θ = Ω ∪ κ. Let x * i ∈ R 1×m denote the true location of the ith sensor, i ∈ Ω; similarly, u j ∈ R 1×m , j ∈ κ, for an anchor. We assume that each sensor knows its distances to the nearby nodes (sensors and/or anchors). The problem is to find the locations of the sensors in Ω. For this purpose, let x i (t) ∈ R 1×m denote the m-dimensional row-vector of sensor i's location estimate, where t ≥ 0 is the continuous-time variable. DILOC-CT is given by the following equation, where (t) is dropped in the sequel for convenience:
ẋ_i = α(−x_i + r_i);   r_i ≜ Σ_{j∈Θ_i∩Ω} p_ij x_j + Σ_{j∈Θ_i∩κ} b_ij u_j,   (1)
where p ij 's are the sensor-to-sensor and b ij 's are sensor-to-anchor barycentric coordinates and Θ i is the triangulation set at sensor i. Note that Θ i may not contain any anchor and all barycentric coordinates are positive and sum to 1.
Let x(t) ∈ R^{M×m} (and u(t) ∈ R^{(m+1)×m}) denote the matrix regrouping the vectors of sensor (and anchor) locations, and let P ≜ {p_ij} and B ≜ {b_ij}. Since ρ(P) < 1, it can be shown that the resulting dynamics, ẋ = −α(I − P)x + αBu ≜ P_α x + B_α u, ∀α > 0, converge to the true sensor locations:
lim_{t→∞} x(t) = (I − P)^{−1} Bu = x*.
A. Problem formulation
Towards a more practical system, we consider the received signal at each sensor to incur a zero-mean additive disturbance, n_i(t), whose frequency spectrum lies in the interval [ω_z^−, ω_z^+]. With this disturbance, the location estimator is given by
ẋ_i = α(−x_i + r_i + n_i).   (2)
We replace the proportional gain, α, with a dynamic controller, K(s); see Fig. 1 leading to the dynamical system, T s , at each sensor. We note that the above localization algorithm is linear and in continuous-time, which allow us to use frequency-based design approach, which has advantages in handling multi-objective specifications and in treating the disturbance, usually present in High-Frequency (HF) range. The control design problem is to ensure global stability and global performance expressed in terms of a certain transient behavior, convergence rate, and disturbance rejection, all at the same time. In this paper, we relax the knowledge of P and B, required for controller design in [START_REF] Khan | An H∞ -based approach for robust sensor localization[END_REF]. In particular, we propose a controller design not for a given topology, P, B, but for a set of all topologies bounded in the sense of maximum singular value.
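The short simulation below illustrates the nominal DILOC-CT dynamics (1)-(2) with a proportional gain α. The topology (P, B) is a random placeholder whose rows are nonnegative and sum to one together with the anchor weights — not a real geometric triangulation — so it only demonstrates the convergence x(t) → (I − P)^{-1}Bu; the gain, horizon and seed are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
Ms, Na, m = 20, 3, 2                                   # sensors, anchors, dimension
u = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])     # anchor positions in R^2

W = rng.random((Ms, Ms + Na))                          # barycentric-like weights
W /= W.sum(axis=1, keepdims=True)                      # each row sums to 1
P, B = W[:, :Ms], W[:, Ms:]
assert np.max(np.abs(np.linalg.eigvals(P))) < 1        # rho(P) < 1

x_star = np.linalg.solve(np.eye(Ms) - P, B @ u)        # fixed point of the dynamics
alpha = 5.0

def rhs(t, xflat):
    x = xflat.reshape(Ms, m)
    return (alpha * (-x + P @ x + B @ u)).ravel()

sol = solve_ivp(rhs, (0.0, 20.0), np.zeros(Ms * m), rtol=1e-8)
x_end = sol.y[:, -1].reshape(Ms, m)
print("max deviation from x*:", np.abs(x_end - x_star).max())
```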
Fig. 1. DILOC-CT architecture, see [START_REF] Khan | An H∞ -based approach for robust sensor localization[END_REF].
III. DILOC-CT WITH DYNAMIC CONTROLLER: s-DOMAIN ANALYSIS
Since DILOC-CT is decoupled in the coordinates, each local system, T s , see Fig. 1, can be analyzed per coordinate and the same analysis can be extended to other coordinates. Recall that the network location estimate, x(t), is an M × m matrix where each column is associated to a location coordinate in R m . We let x to be an arbitrarily chosen column of x corresponding to one chosen dimension. Similarly, we let n, r, ε, and y (signals in Fig. 1) to represent M -dimensional vectors; for any of such vectors, the subscript i denotes the chosen coordinate at sensor i.
Similar to [START_REF] Khan | An H∞ -based approach for robust sensor localization[END_REF], the global system under consideration is described by:
x = (I_M ⊗ T_s(s)) r_n,   (3)

[r_n; ε_ref; ε] = [P  B  I_M;  −I_M  (I_M − P)^{−1}B  0;  P − I_M  B  I_M] [x; u; n] ≜ H_0 [x; u; n].   (4)
where s is the Laplace variable, the input vector u ∈ R^N represents the anchor positions sent to the sensors, n ∈ R^M is the input vector regrouping all sensor disturbance signals n_i, and x* ∈ R^M and x ∈ R^M are the vectors regrouping the true and estimated sensor locations x*_i and x_i. The signal r_n ∈ R^M: r_n ≜ r + n = Px + Bu + n is the vector regrouping all local sensor inputs r_n,i. The signal ε_ref ∈ R^M: ε_ref ≜ x* − x = (I_M − P)^{−1}Bu − x is the vector regrouping all sensor localization tracking errors ε_ref,i, and the signal ε ∈ R^M: ε ≜ r_n − x is the vector regrouping all local sensor errors ε_i. The local transfer function, T_s, is identical at each sensor; it is the complementary sensitivity function, related to the sensitivity transfer function S(s) by
T_s(s) = K(s) / (s + K(s));   S(s) = 1 − T_s(s) = s / (s + K(s))   (5)
Given the representation in Eqs. (3) and (4), we have
T_{[u; n]→[ε_ref; ε]} = [T_{u→ε_ref}  T_{n→ε_ref};  T_{u→ε}  T_{n→ε}] = (I_M ⊗ T_s) ⋆ H_0.   (6)
Let us define the following global transfer functions:
S_g(s) ≜ T_{u→ε_ref}(s),   T_g(s) ≜ (K(s)/s) T_{n→ε}(s),   KS_g(s) ≜ K(s) T_{n→ε}(s).   (7)
In [START_REF] Khan | An H∞ -based approach for robust sensor localization[END_REF], we proposed a method for control design, K(s), that ensures the global stability and performance expressed in terms of frequency-dependent constraints on transfer functions, S g , T g and KS g . However, as pointed out before, the control law could be designed only if the interconnection topology, i.e., P and B, is known in advance. In this paper, we propose an extension of the method in [START_REF] Khan | An H∞ -based approach for robust sensor localization[END_REF] to a set of possible interconnection topologies defined in terms of maximum singular value on interconnection matrix, P :
σ(P ) ≤ γ < 1. (8)
Clearly, the convergence requirement for DILOC-CT, i.e., ρ(P) < 1, is implied by the above inequality. However, the conditions for the existence of a dynamic controller that ensures the global stability and performance for all interconnections satisfying Eq. (8) are potentially more restrictive than for only one given interconnection, P. For this reason, we propose a reduction that is justified by the following Theorem.
Theorem 1: Consider DILOC-CT described in Fig. 1. If the local dynamic controller, K(s), (i) ensures the global system stability and (ii) does not contain pure derivative actions, i.e., K(0) ≠ 0, then it ensures global tracking performance, i.e., DILOC-CT converges to the true sensor locations, x*:
lim t→∞ x(t) = (I -P ) -1 Bu = x * . (9)
Proof: See [START_REF] Korniienko | Robust sensor localization with locally-computed, global H∞-design[END_REF] for the proof.
Based on Theorem 1, and the fact that the speed of convergence is proportional to the static gain of controller, see [START_REF] Khan | An H∞ -based approach for robust sensor localization[END_REF], we can reduce the global performance transfer function by eliminating the inputs and outputs related to the tracking performance: u and ε ref , in Eqs. ( 4) and ( 6). We thus obtain the following reduced global system description:
T_{n→ε} = (I_M ⊗ T_s) ⋆ H_1,   (10)
where '⋆' is the Redheffer product, with T_s defined by Eq. (5) and H_1 given by
H_1 = [P  I;  P − I  I].
Hence, the following control design problem can formulated:
Problem 1 (Control problem): Given the global system in Eq. [START_REF] Albowicz | Recursive position estimation in sensor networks[END_REF], for all possible sensor interconnection topology such that, P satisfies Eq. (8), find the local controller, K(s), such that the global system is stable and satisfies the following frequency constraints:
|K(jω)| ≥ Ω_K(ω), in the LF range,
σ(T_g(jω)) ≤ Ω_T(ω), in the HF range,
σ(KS_g(jω)) ≤ Ω_KS(ω), in the HF range.   (11)
Furthermore, each sensor should be able to solve the problem locally, i.e., the computation of K is to be performed at each sensor.
We now briefly explain the frequency constraints. The first constraint, Ω K , ensures zero steady-state error (K(0) = 0) and provides a handle on the speed of convergence for each sensor to its true location. The second and third constraints, Ω T and Ω KS , impose a maximum bandwidth on T g and a maximum gain on KS g , which, in turn, limits the disturbance amplification by each sensor in high-frequency range. Specifics on these constraints are tabulated in Table I and are further elaborated in Section V.
It is important to note, that the proposed result is general enough to cover the case of not reduced problem, i.e., imposing the frequency constraints for all transfer functions, S g , KS g and T g , as it is proposed in [START_REF] Khan | An H∞ -based approach for robust sensor localization[END_REF]. However, to reduce the computational load such that the proposed design method can by easily implemented at each sensor, the reduced version of the Problem 1 is solved in section V.
IV. MAIN RESULTS
We now describe the main results of this paper. Before we start, in subsection IV-A, we recap the input-output approach and present its main Theorem: Theorem of graph separation which can be used to solve the problem under consideration for one given interconnection topology. Then, in subsection IV-B, we extend the result to a more general case where the interconnection topology is not fixed but supposed to belong to a bounded set. Finally, in subsection IV-C, we specialize the obtained result to the case of sensor localization under consideration.
A. Input-output Approach
We now describe the input-output approach used to solve Problem 1. We use the concept of dissipativity taken from [START_REF] Moylan | Stability criteria for large-scale systems[END_REF]- [START_REF] Korniienko | Performance control for interconnection of identical systems: Application to PLL network design[END_REF], [START_REF] Korniienko | Control law synthesis for distributed multi-agent systems: Application to active clock distribution networks[END_REF], a simplified version of which is defined below.
Definition 1 (Dissipativity): An LTI, stable, and causal operator, H, is strictly {X, Y, Z}-dissipative, where X = X T , Y, Z = Z T , are real matrices such that
[X  Y;  Y^T  Z] is full-rank; if ∃ ε > 0 such that for almost all ω > 0
[I; H(jω)]^* [X  Y;  Y^T  Z] [I; H(jω)] ≤ −εI.   (12)
If the inequality in Eq. ( 12) is satisfied with ε = 0, the operator is said to be {X, Y, Z}-dissipative.
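For a quick numerical feel of Definition 1, the SISO check below verifies on a frequency grid that H(s) = 1/(s + 1) is {X, Y, Z}-dissipative for the illustrative triple X = −1, Y = 0.5, Z = 0; both the example system and the triple are arbitrary choices, not taken from the paper.

```python
import numpy as np

X, Y, Z = -1.0, 0.5, 0.0
w = np.logspace(-3, 3, 2000)
H = 1.0 / (1j * w + 1.0)
# SISO form of Eq. (12): X + 2*Y*Re(H) + Z*|H|^2 <= 0 for (non-strict) dissipativity
quad = X + 2.0 * Y * H.real + Z * np.abs(H)**2
print("max of the quadratic form:", quad.max())   # <= 0 on the grid
```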
Based on the dissipativity characterization of two interconnected systems, T and H, the following result could be obtained ensuring the stability and performance of the interconnection T ⋆ H.
Theorem 2: Given η > 0, a stable LTI system, H, interconnected with an LTI system, T, and real matrices X = X^T ≥ 0, Y, Z = Z^T ≤ 0 of appropriate dimensions, if
(i) H is {diag(X, −η²I), diag(Y, 0), diag(Z, I)}-dissipative, and
(ii) T is strictly {−Z, −Y^T, −X}-dissipative,
then the global system, T ⋆ H, is stable and
‖T ⋆ H‖_∞ ≤ η.   (13)
Proof: The proof of Theorem 2 can be found in [START_REF] Korniienko | Performance control for interconnection of identical systems: Application to PLL network design[END_REF], and it relies on a version of the graph-separation theorem used in [START_REF] Moylan | Stability criteria for large-scale systems[END_REF] for the stability and an S-procedure, [START_REF] Yakubovich | The S-procedure in non-linear control theory[END_REF], for the performance. It can also be seen as a generalization of the Kalman-Yakubovich-Popov lemma, [START_REF] Rantzer | On the Kalman-Yakubovich-Popov lemma[END_REF].
Remark 1: We note that for a given interconnection topology, P, Problem 1 can be solved by applying Theorem 2 with H = H_1, T = (I ⊗ T_s) and dissipativity matrices chosen as X = Q ⊗ X, Y = Q ⊗ Y, Z = Q ⊗ Z with a positive-definite symmetric matrix Q ∈ R^{M×M}; see [START_REF] Khan | An H∞ -based approach for robust sensor localization[END_REF] for more details. However, for this purpose, the knowledge of the interconnection matrix H_1 (and thus P) is necessary. In turn, the knowledge of all barycentric coordinates, P, to compute the local controller may be restrictive from an application point of view. Indeed, if the barycentric coordinates are known in advance, the position of each sensor could be easily computed at some central computer and then transmitted to the sensors. In order to avoid such knowledge, the condition (i) of Theorem 2 should be satisfied not only for one given interconnection P but for a set of all possible interconnections. In this paper, we present this extension based on the idea of representing the interconnection matrix, P, as an uncertainty and exploiting the fact that it is bounded in terms of its spectral radius, ρ(P) ≤ γ < 1, implying Eq. (8).
B. Global Performance for a set of Topologies
One approach to solve the Problem 1 could be to derive a similar constraint on the augmented interconnection matrix H 1 based on the constraint [START_REF] Savvides | The bits and flops of the n-hop multilateration primitive for node localization problems[END_REF], see Eq. ( 10):
σ(P ) ≤ γ < 1 ⇒ σ(H 1 ) ≤ γ < 1 (14)
Depending on the performance under consideration, the last condition could be very restrictive and actually never be applied in practice. For this reason, in this paper we propose to transform the system Eq. ( 10) in the form of an LFT (Linear Fractional Transform) interconnection of local subsystem dynamics, T s , and one or several (repeated) interconnection matrices, P . Such a transformation is general enough to cover a large set of performance specifications. It could be easily performed in the case of sensor localization problem under consideration as shown later. In this case, it is possible to derive less restrictive global performance conditions based on the constraint in Eq. ( 8), instead of Eq. ( 14). Let us thus consider the following general description of the transformed global system:
p = diag(I_M ⊗ T_s, I_m ⊗ P) q,   [q; z] = [H_11  H_12;  H_21  H_22] [p; w] ≜ H [p; w],   (15)
where H is a finite-dimensional, stable LTI system, w(t) is the input vector of dimension n_w, z(t) is the output vector of dimension n_z, and q(t), p(t) are internal signals of dimension n_g. The local subsystem is T_s = G ⋆ K with n_l inputs and outputs¹. We note that n_g = (1 + m) n_l × M. Please also note that, to cover the general result, the systems under consideration in Eq. (15) could be Multi-Input Multi-Output (MIMO) if n_l > 1. In this case, the interconnection matrix P̃ = P ⊗ I_{n_l} and its maximum singular value respects the same condition as P, i.e., Eq. (8).
The global transfer function between external input, w, and output, z, is
T_{w→z} = diag(I_M ⊗ T_s, I_m ⊗ P) ⋆ H,
and its H ∞ -norm is ensured by the local controller, K, by the following theorem providing the main result of the paper.
Theorem 3: Given η > 0 and γ > 0, a stable LTI system, H, a local plant, G, and real matrices X = X^T ≥ 0, Y, Z = Z^T, if there exist
(i) positive-definite matrices Q and D of appropriate dimensions such that H is {diag(Q ⊗ X, −D ⊗ I, −η²I), diag(Q ⊗ Y, 0, 0), diag(Q ⊗ Z, γ²D ⊗ I, I)}-dissipative, and
(ii) a local controller, K, such that T_s = G ⋆ K is strictly {−Z, −Y^T, −X}-dissipative,
then the local controller, K, ensures that the global system, diag(I ⊗ T_s, I ⊗ P̃) ⋆ H, is stable and
‖diag(I ⊗ T_s, I ⊗ P̃) ⋆ H‖_∞ ≤ η
for all interconnection matrices P̃ satisfying
σ(P̃) ≤ γ.   (16)
Proof: See [START_REF] Korniienko | Robust sensor localization with locally-computed, global H∞-design[END_REF] for the proof.
C. Local Control for Global Performance of DILOC-CT
In the previous subsection, we presented the Theorem that allows us to solve Problem 1 for all interconnection topologies that satisfy Eq. (8) if the global system is transformed into the LFT form of Eq. [START_REF] Thrun | Probabilistic robotics[END_REF]. In this subsection, we show how this transformation is made for the DILOC-CT application under consideration in order to ensure the frequency-dependent bounds, Eq. (11), by decentralized control, K.
Note that H 1 in Eq. ( 10) can be written in form of an LFT of one interconnection matrix, P , since:
H_1 = [0  I_M;  −I_M  I_M] + [I_M; I_M] P [I_M  0].
Therefore, Eq. ( 10) can be equivalently written as:
[x; x_p] = diag(I_M ⊗ T_s, P) [r_n; r_p],   [r_n; r_p; ε] = [0  I_M  I_M;  I_M  0  0;  −I_M  I_M  I_M] [x; x_p; n] ≜ H_2 [x; x_p; n],   (17)
with x p = P r p , r p = x, and matrix, H 2 , independent of uncertain interconnection matrix, P .
The Problem 1 is now solved by the following theorem according to a similar argument as in Section IV-B.
Theorem 4 (Control Design): Given η > 0 and γ > 0, the system described in Eq. (17), and real scalars X ≥ 0, Y, Z ≤ 0, if there exist a positive-definite matrix Q ∈ R^{M×M} and a real scalar D > 0 such that
(i) H_2 is {diag(XQ, −D, −η²I), diag(YQ, 0, 0), diag(ZQ, γ²D, I)}-dissipative,
and a local controller, K, such that for T_s and S in Eq. (5):
(ii) T_s is strictly {−Z, −Y, −X}-dissipative, with ‖T̃_s(s)‖_∞ < 1 for T̃_s(s) = (T_s(s) + Y/X) · √(X²/(Y² − XZ));
(iii) |S(jω)| ≤ ω Ω_K^{−1}(ω), in the LF range;
(iv) |K(jω) S(jω)| ≤ η^{−1} min{Ω_KS(ω), ω Ω_T(ω)}, in the HF range;
then the local controller K solves Problem 1 for all possible interconnection matrices P satisfying σ(P) ≤ γ.
Proof: See [START_REF] Korniienko | Robust sensor localization with locally-computed, global H∞-design[END_REF] for the proof.
Remark 2: Conditions (ii)-(iv) of Theorem 3 are decentralized local conditions since they are only related to local subsystem dynamics. However, due to the condition (i), these local conditions imply the appropriate global system behavior: global stability and global system performance defined in Problem 1. In the scalar case, it is possible to find Q, η 2 and X, Y, Z dissipativity parameters (based on quasi-convex optimization) that, for a given γ, satisfy condition (i), and maximally relax condition (ii), see [START_REF] Korniienko | Performance control for interconnection of identical systems: Application to PLL network design[END_REF] for more details. These algorithms can be implemented locally at each sensor provided the information on the number of sensors, M , is available to each sensor. In this case, the controller design reduces to a local H ∞ design problem, i.e., find a controller, K, such that it satisfies the conditions (ii)-(iv). It is a standard H ∞ design problem and can be easily solved (see [START_REF] Skogestad | Multivariable Feedback Control, Analysis and Design[END_REF]) and implemented by each sensor for a computed η and frequency constraints fixed by Ω K , Ω T , Ω KS .
V. SIMULATIONS
We consider a network of M = 20 sensors and N = m + 1 = 3 anchors in R². The sensors lie in the convex hull (triangle) formed by the anchors. The sensor deployment is assumed to be free as soon as each sensor is connected to three neighbors such that it lies in their convex hull and the resulting network topology P respects constraint [START_REF] Savvides | The bits and flops of the n-hop multilateration primitive for node localization problems[END_REF].
The first performance specification to be ensured by the local controllers, K, is the sensor localization objective,
lim_{t→∞} x = x*.
The second is the rejection of the disturbance, n, each component of which is a realization of a bandlimited noise with amplitude A < 5 in the frequency range [ω_z^−, ω_z^+] = [600, 10^5] rad/sec. To achieve the specifications, we consider the frequency constraints Ω_K(ω), Ω_T(ω), Ω_KS(ω), see Problem 1 and Table I, shown as red dotted lines in Fig. 2.
Based on the proposed approach, see Theorem 4, each sensor is able to compute its local controller, K, that ensures the global performance specifications. The only information needed to be transmitted to the sensors is the total number of sensors, M = 20, in the network, the spectral bound for the network topology, γ = 0.99, and the frequency constraints Ω_K(ω), Ω_T(ω), Ω_KS(ω), see Remark 2. With this information, the quasi-convex optimization problem proposed in [START_REF] Korniienko | Performance control for interconnection of identical systems: Application to PLL network design[END_REF] could be solved by each sensor, which for the global system description in Eq. (17), yields X = −3.33, Y = 1.17, Z = 1 and the global transfer function bound, η = 9.7. The local controller, K(s), is then computed using standard H∞-design, [START_REF] Skogestad | Multivariable Feedback Control, Analysis and Design[END_REF], to ensure the conditions (ii)-(iv) of Theorem 4: K(s) = 8.8 · 10^8 / ((s + 777.5)(s + 7508)).
As expected, the local controller is a low-pass filter with high gain, K(0) ≈ 150.9 ≥ K_0, in the Low Frequency (LF) range, and a negative slope (−40 dB/dec) in the HF range. According to Theorem 4, the designed controller solves Problem 1, i.e., it ensures the frequency constraints, Eq. (11), as can be verified in Fig. 2. It is important to note that even if the tracking performance specification was not directly imposed by the augmented global system under consideration, Eq. (17), since the global stability is ensured and the computed controller has no pure derivative actions, K(0) ≠ 0, by Theorem 1, the steady-state localization error
ε ref = x * -x → 0.
This is confirmed by temporal simulations of sensor network for 10 randomly chosen interconnection topologies that respect the constraint in Eq. ( 8). The temporal evolution of the mean value of localization errors, ε ref i , and command signals, y i 's, are presented in Fig. 3. To illustrate the interest of using the dynamic controller instead of a static one, Fig. 3 presents also the temporal simulations for the case of proportional gain controller K = α = K 0 .
Even though it ensures the same tracking performance (speed of convergence) as the dynamic controller proposed in this paper, it can be seen that due to the frequency constraint imposed on global transfer functions, KS g and T g , the influence of the disturbance, n, at each sensor is significantly reduced.
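The check behind Fig. 2 can be reproduced numerically along the following lines: take the controller K(s) designed above, draw a random topology with σ(P) ≤ γ = 0.99 (a placeholder, not a real triangulation), and evaluate the largest singular values of KS_g(jω) and T_g(jω) on a frequency grid using T_{n→ε} = I + (P − I)(I − T_s P)^{-1} T_s.

```python
import numpy as np

rng = np.random.default_rng(2)
Ms, gamma = 20, 0.99
P = rng.random((Ms, Ms))
P *= gamma / np.linalg.svd(P, compute_uv=False)[0]      # enforce sigma_max(P) = gamma

w = np.logspace(0, 6, 400)
sv_KSg, sv_Tg = [], []
for wk in w:
    s = 1j * wk
    K = 8.8e8 / ((s + 777.5) * (s + 7508.0))            # the designed controller
    Ts = K / (s + K)                                     # local complementary sensitivity
    Tne = np.eye(Ms) + (P - np.eye(Ms)) @ np.linalg.solve(
        np.eye(Ms) - Ts * P, Ts * np.eye(Ms))            # T_{n->eps}(jw)
    sv_KSg.append(np.linalg.svd(K * Tne, compute_uv=False)[0])
    sv_Tg.append(np.linalg.svd((K / s) * Tne, compute_uv=False)[0])

print("max singular value of KS_g:", max(sv_KSg))
print("max singular value of T_g :", max(sv_Tg))
```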
VI. CONCLUSIONS
In this paper, we consider a continuous-time, LTI algorithm to solve the sensor localization problem in R m with at least m + 1 anchors which know their locations. Towards a more practical scenario, we consider the information exchange to incur a zero-mean additive disturbance whose frequency spectrum lies in a certain HF range. To maintain certain performance objectives while reducing the impact of the disturbance, we design a dynamic controller at each sensor with frequency-dependent performance objectives using the H ∞ theory. This controller is computed with local information and relies on a spectral bound on the interconnection matrices. We show that this dynamic controller does not only ensure global system stability but is also able to meet certain performance objectives embedded in frequency-dependent constraints.
Fig. 2. Maximum (solid red line) and minimum (solid blue line) singular value of K, KS_g, and T_g for different network topologies of 20 sensors, respecting constraint (8), together with corresponding frequency constraints (red dotted line).
Fig. 3. Temporal simulations for 10 different topologies: (Left) Static controller, α = K_0; (Right) Dynamic controller, K(s). Note the high values of the control command, y_i(t)'s, with the static controller, that could overexcite the local system, an integrator, at each sensor.
Without loss of generality and for the ease of notation, the square local system case is presented here.
Their work is supported by a grant from la Région Rhône-Alpes. His work is supported by an NSF Career award # CCF-1350264. | 28,599 | [
"1228",
"1271"
] | [
"408749",
"248373",
"408749"
] |
01484259 | en | [ "shs", "math" ] | 2024/03/04 23:41:48 | 2018 | https://hal.science/hal-01484259/file/tournes_2015d_hyperbolas_parabolas.pdf | Dominique Tournès
Calculating with hyperbolas and parabolas
Keywords: Abaque, Nomogram, Hyperbola, Parabola, Multiplication abaque, Multiplication nomogram, Graphical table, Nomography, John Clark
Graphical tables (abaques and nomograms) can give rise to original activities for 16 to 18 year olds with a strong historical and cross-curricular element. These activities lend themselves to a practical way of dealing with information and highlighting the changes in presentation (graphic, numerical, algebraic and geometric) as well as offering a motivating topic area for the usual functions required by the programme of study. They also allow the active use of the basic techniques of geometry in an unusual setting. This chapter deals with practical work trialled in a class of 16 year olds, based on two types of multiplication abaques situated in their historical and cultural background: a concurrent-line abaque using a family of hyperbolas and an alignment nomogram with a plotted parabola. The use of these graphical tables allowed the students to revisit their knowledge of inverse square functions, to use freely equations of straight lines and curves, and to anticipate the graphical methods for solving second degree equations.
Before the widespread use of electronic calculators and computers, people frequently had recourse to numerical tables which brought together the results of numerous calculations carried out once and for all so as to spare the user from repeating tedious calculations. In an analogue way, graphical tables allowed people to find the result of certain calculations with minimum effort. In general, a graphical table appears as a network of marked lines or marked points, with suitably graduated scales, moveable or not, giving, simply by reading off, the required value depending on those of the parameters. Without doubt such tables have been in evidence since the Middle Ages on astrolabes and sundials. In any case, the first ones which were specifically designed for calculation are to be found in connection with linear or circular slide rules, these graphic equivalents of the logarithmic tables invented by the British in the 17 th century. From the beginning of the 19 th century, graphical tables, initially called 'abaques', then 'nomograms', spread progressively in numerous professional bodies (engineers, artillerymen, navigators, industrialists, physicians etc.) to the point of becoming, a century later, the main instruments of graphical calculation. An entirely separate discipline, called 'nomography', even arose around their study and use. Hardly burdensome, scarcely encumbering, and sufficiently precise for current needs and practice, the abaques held sway most of all through the swift calculations they afforded, essential speed for the professionals having to use them in real time for complex formulae.
Nowadays, nomography has seen an inescapable decline, even if it continues to be used in certain areas of activity. Abaques are often found in technical manuals, catalogues of mechanical parts or catalogues of electrical components. Physicians and chemists still use such graphs, for example to calculate quickly the dose of a drug dependent on various parameters to be taken into account such as the sex, age, height or weight of the patient.
The first objective of this chapter is to show that nomography, although belonging chiefly to the past, retains a strong educational interest. In it the teacher can find a rich source of inspiration to devise motivating activities for all levels of ability. By way of illustration, I will describe some of these activities, as they were tried out on a class of 16 year olds.
To study nomography in depth, one can refer to numerous publications by Maurice d'Ocagne, in particular the 1891 work, which fulfilled the role of founding work in the discipline [START_REF] Ocagne | Nomographie. Les calculs usuels effectués au moyen des abaques[END_REF].
Let us begin by defining the basic mathematical notions which are hidden behind graphical tables. The central problem of nomography is that of the flat two-dimensional representation of relationships between three variables, F(α, β, γ) = 0. The general idea of abaques known as 'concurrent-line abaques' is to make this relationship appear as the result of the elimination of two auxiliary variables between three equations, each only dependent on one of the three main variables:
F(α, β, γ) = 0  ⇔  ∃(x, y):  F_1(x, y, α) = 0,  F_2(x, y, β) = 0,  F_3(x, y, γ) = 0.
The abaque is therefore formed from three families of marked curves with respective equations F_1(x, y, α) = 0, F_2(x, y, β) = 0 and F_3(x, y, γ) = 0, drawn on a plane equipped with Cartesian coordinates x and y (Figure 8.1). For each value of the parameter α, the first equation determines a curve which is marked on the graph by writing the value of α near to it. Similarly for the other two families. The most common approach, which applies to any relationship of the three variables, simply consists of taking x = α and y = β for the first two equations. In this case, the curves parameterised by α are parallel to the axis of the ordinates and the curves parameterised by β are parallel to the axis of the abscissae. In practice, all that is required is to construct the curves of the equation F(α, β, γ) = 0 on squared paper. It amounts to the topographical representation of a surface by its contour lines. Louis-Ézéchiel Pouchet (1748-1809), a cotton manufacturer from Rouen, was one of the first to employ this idea. In 1795, he represented the multiplication αβ = γ by taking x = α and y = β, and by drawing the hyperbolas xy = γ corresponding to particular values of γ. A little later, about 1843, a civil engineer in the Department of Bridges and Highways, Léon-Louis Lalanne (1811-1892), had the idea of geometric anamorphosis: by placing on the axes non-regular graduations, that is by taking x = φ(α) and y = ψ(β) for the first two equations where φ and ψ are suitably chosen functions, one manages in certain cases to make it so that the curves of the third family should also be straight lines. This was how Lalanne managed to turn into straight lines the hyperbolas of equal value used by Pouchet: indeed all it takes is to write x = log α and y = log β for the equation αβ = γ to become x + y = log γ.
In 1884 the Belgian engineer Junius Massau (1852-1909), professor at the University of Ghent, studied more generally the conditions which would allow one to arrive at abaques in which the curves of the three families are straight lines, provided that parallels were no longer used on the coordinate axes. We then talk of 'concurrent-straight-line abaques' (Figure 8.2). When writing that the equations of the three bundles of curves are equations of straight lines, Massau reached the condition
∃(x, y):  f_1(α)x + g_1(α)y + h_1(α) = 0,  f_2(β)x + g_2(β)y + h_2(β) = 0,  f_3(γ)x + g_3(γ)y + h_3(γ) = 0
⇔  det [f_1(α)  g_1(α)  h_1(α);  f_2(β)  g_2(β)  h_2(β);  f_3(γ)  g_3(γ)  h_3(γ)] = 0.
So when the initial equation F(α, β, γ) = 0 can be placed in such a determinant, called 'Massau's determinant', it can be represented by a concurrent-straight-line abaque.
The following advancement in nomography happens in 1884 when Philibert Maurice d'Ocagne (1862-1938), a young engineer in the Department of Bridges and Highways, imagines a new type of abaque. By exploiting the advances in projective geometry, notably the principle of duality, he transforms the concurrent-straight-line abaques into abaques with aligned points. Indeed, if the nullity of Massau's determinant expresses the concurrence of three straight lines, this nullity equally expresses the alignment of three points, that is the points of the parameters α, β and γ taken respectively on the parameterised curves
x = f_1(α)/h_1(α), y = g_1(α)/h_1(α);   x = f_2(β)/h_2(β), y = g_2(β)/h_2(β);   and   x = f_3(γ)/h_3(γ), y = g_3(γ)/h_3(γ).
Thus the three systems of marked straight lines become three marked curves, forming what d'Ocagne calls an 'alignment nomogram' (Figure 8.3). To solve an equation F(α, β, γ) = 0 represented by such a nomogram is simple: if, for example, the values of α and β are given, one draws a straight line passing through the marked points α and β on the first two curves and this straight line meets the third curve at a point whose value is γ. In practice, so as not to spoil the abaque, the line is not actually drawn on the paper: one either uses a transparency marked with a fine straight line, or a thin thread which one stretches between the points to be joined. The alignment nomograms are easier to read and, most of all, take up less space than the old concurrent-line abaques, which allows for the setting out of several on the same piece of paper. If d'Ocagne introduced the new term 'nomogram' it was mainly to distinguish himself from his predecessors. Later, some authors continued to use the word 'abaque' to indicate any kind of graphical table. From the start of the 20th century, alignment nomograms won the day through their ease of construction and use, and became the most widespread abaques in all areas. Some still remain in current use, like the one in Figure 8.4, which allows a physician to evaluate quickly the bodily surface area of an adult patient according to height and weight (the line marked on the figure shows, for example, that a patient 170 cm tall weighing 65 kg has a body surface area of 1.75 m²).
Graphical tables with 16-18 year olds
As we have already seen, graphical tables (linear or circular slide rules, abaques and nomograms) have been among the instruments of calculation most commonly used before the appearance of electronic calculators, and they remain in use today in certain sectors. Consequently, it seemed to me quite pertinent to bring them back into current favour and to exploit them educationally to practise certain points in the programmes of study for 16-18 year olds. Indeed, for 16 year olds, they are valuable for active reading of information, emphasising the networking of registers (graphic, numerical, algebraic and geometrical), they offer a motivating area of application of the topics in the programme (affine functions, square function, reciprocal function, polynomials of the second degree, homographic functions) and they allow the practice of the first techniques of coordinate geometry in a rich context (alignment of points, intersection of straight lines or curves, graphical solution of equations). At ages 17 and 18 they can equally be used to give meanings to questions often treated in a purely technical way: representation and reading of contours, simple examples of functions of two variables, logarithmic scales. This is why the IREM of Réunion set up a working group on abaques and nomograms with the following objectives:
• historic research of ancient graphical tables and methods of graphical representation of equations likely to be studied with the basic equipment owned by 16-18 year olds;
• construction of precise graphical tables on large sheets of paper;
• simulation of abaques and nomograms with dynamic geometry software;
• conception and trialling practical tasks with 16-18 year olds, hinging on the use of graphical tables, both in paper form and dynamic electronic form, and on the justification of underlying mathematical properties.
It was one of the trials carried out in this context that I am going to recount here (On the Réunion IREM website accounts of other trials on abaques and nomograms can be found under the leadership of M. Alain Busser, teacher at the lycée Roland-Garros (Le Tampon)). It took place at the Bellepierre High School, at Saint-Denis in Réunion. It was Mr Jean-Claude Lise's class and I am most grateful for his welcome and collaboration. I devised and led two sessions of practical work of two hours each with the whole class (35 students). As the students had just studied the reciprocal function and the square function, I chose to base my input on the methods of graphical calculation using the hyperbola y = 1/x and the parabola y = x². First I will describe the activities carried out during the two sessions. I will then elaborate on the historical elements which inspired these activities.
Calculating with hyperbolas
At the start of the first session, we set off from the multiplication table, familiar to the students since primary school. While analysing this table (Figure 8.5), we wondered how we could improve it to access more numbers directly. Curves appeared linking equal products. The students, fresh from their teacher's lesson on the reciprocal function, immediately recognised hyperbolas. After a brief review of the properties of curves, we realised that they could allow the creation of a 'continuous' table. To achieve that, all it needed was to have a network of hyperbolas xy = k, drawn permanently on a sheet of squared paper and marked by the values of the product k. Armed with such a graphical table, a multiplication is carried out in the following manner (Figure 8.6): if you want to calculate the product, let us say of 6 and 2, you follow the vertical line of the equation x = 6 and the horizontal line of the equation y = 2 until they intersect at A; then we see that this point A is on the hyperbola of value 12, so 6 × 2 = 12. The abaque functions in the opposite way to carry out a division: to divide 12 by 2, you look along the hyperbola of value 12 until arriving at its point of intersection A with the horizontal line of the equation y = 2; we then see that the vertical passing through A corresponds to the abscissa 6, so 12 ÷ 2 = 6. Once the principle of this graphical multiplication table had been elucidated, the class was able to practise its use: I gave out an abaque on A3 paper to each pair of students to encourage discussion (Figure 8.7). The students settled quickly: we did a whole series of calculations, learning how to interpolate visually between the lines of the abaque when the numbers did not correspond to the lines already drawn, we estimated the accuracy, we wondered what to do when the numbers were outside the range [0, 10], or how to change the zone of the abaque to have greater accuracy when the number range is [0, 1]. The first hour ended by examining an extended version of the abaque (Figure 8.8), still on A3 paper, allowing working in positive and negative numbers. It was an opportunity for a brief review of negative numbers and the rule of signs.

During the second hour we tackled equations of the second degree. I gave the students some second degree expressions to work with and to factorise, thus clarifying the idea that the solution of the equation z² − sz + p = 0 still amounted to finding two numbers x and y such that x + y = s and xy = p. Being 16 years old, you are hardly used to working with parameters, so I had planned to work solely with numerical examples, but one student said: "You are drawing conclusions based on a few examples, but who says that it is generally true with other numbers?". Pleasantly surprised by such maturity, I therefore continued with calculations in a more general form. Given that the abaque directly provides the hyperbola of the equation xy = p, it just remains to draw the straight line of equation x + y = s to read off graphically the abscissae of the points of intersection of the hyperbola and straight line, and thus to solve the second degree equation z² − sz + p = 0. For example (see Figure 8.6), to solve the equation z² − 8z + 12 = 0, we draw the straight line of equation x + y = 8 and we read the abscissae of its points of intersection with the hyperbola of value 12: the solutions are therefore 2 and 6. Drawing the straight line of the equation x + y = 8 is very simple, since all that is needed is to join the points on the axes of the coordinates (8, 0) and (0, 8). If you do not want to damage the abaque by writing on it, you can use a ruler or a taut string between these two points.
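For readers who want to reproduce the abaque used in class, the matplotlib sketch below draws a family of marked hyperbolas xy = k, overlays the line x + y = 8, and recovers the roots of z² − 8z + 12 = 0 from the intersection points; the plotting window and the set of hyperbolas drawn are arbitrary choices.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.5, 10, 400)
for k in range(1, 101, 3):                      # a few marked hyperbolas xy = k
    plt.plot(x, k / x, color="0.7", lw=0.6)

s, p = 8.0, 12.0                                # solve z^2 - s z + p = 0
plt.plot([0, s], [s, 0], "r")                   # the straight line x + y = s
roots = np.roots([1.0, -s, p])                  # what the graphical reading gives
plt.scatter(roots, p / roots, color="k", zorder=3)
plt.xlim(0, 10); plt.ylim(0, 10)
plt.title("x + y = 8 meets xy = 12 at the abscissae 2 and 6")
plt.savefig("hyperbolic_abaque.png", dpi=150)
print("roots read on the abaque:", sorted(roots))
```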
Curiously, the more the students were at ease with the hyperbolas, the more they seemed to have forgotten everything about straight lines. It took quite a time to recall how to calculate the equation of a straight line between two given points, but after a while we managed and the students were able to immerse themselves in the activity again. Some, having become experts, quickly solved several equations and discovered all the different possible situations (two roots, a double root, no roots). Others even protested because they did not think the abaque was accurate enough: they wanted me to make them one with more hyperbolas! I had intended to end this first session by solving systems such as x + y = 5/2 and xy = −3, also to exploit the second and fourth quadrants of the extended abaque, but we did not have time to get that far.
Calculating with parabolas
During the second session, I suggested that the students should work with the parabola of the equation y = x², which they had studied recently as a curve representative of the square function. First of all, with numerical examples, I asked them to determine an equation for the straight line passing through two points A and B of the parabola, the first with a negative abscissa and the second positive, then to calculate the ordinate of the point of intersection, C, of this straight line with the y-axis (Figure 8.9). This work took a lot of time for the same reasons as in the first session with the hyperbolas, as the majority of students still had not completely mastered straight line equations. In spite of everything, we managed to observe that the ordinate of C seemed to be the product (give or take the sign) of the abscissae of A and B. Once this conjecture had been clarified, I took it upon myself to demonstrate the general case on the board: if A = (a, a²) and B = (b, b²), the line (AB) has equation y = (a + b)x − ab, so it cuts the y-axis at the point C of ordinate −ab. The parabola, together with its vertical axis, is therefore a graphical multiplication table. Whereas the hyperbolic multiplication table was an abaque (the result being obtained by the concurrence of three lines), the new parabolic table is a nomogram (the result being obtained by the alignment of three points). Once the points of the parabola and the vertical axis have been marked with the values of a, b and c, this nomogram can be used with a ruler or taut thread which is simply placed on the points A and B, which allows the product required to be read directly on the vertical axis.
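The property demonstrated on the board is easy to confirm symbolically: the chord of y = x² through the points of abscissae a and b has equation y = (a + b)x − ab, so it cuts the y-axis at −ab. A short sympy check:

```python
import sympy as sp

a, b, x = sp.symbols("a b x", real=True)
slope = sp.simplify((b**2 - a**2) / (b - a))    # slope of the chord = a + b
line = sp.expand(a**2 + slope * (x - a))        # y = (a + b) x - a b
print(line)                                     # a*x + b*x - a*b
print(sp.simplify(line.subs(x, 0) + a * b))     # 0: the y-intercept is -a*b
```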
To learn how to use this table, I first gave out to each pair of students a nomogram using the parabola y = x² on A3 paper so they could follow the theoretical work as closely as possible. We quickly realised that this nomogram was not well adapted to calculations because of the rapid increase in the square function. I then gave them a second nomogram based on a rescaled parabola, better adapted to the range of numbers in use.

The second hour was dedicated to a return to second degree equations. On the parabolic nomogram (see Figure 8.9), we have at our direct disposal, from the points of the abscissae a and b, a graphical construction of the sum a + b and the product ab. From it we deduce a new solution of the equation z² − sz + p = 0: we place the ruler on the point C marked p on the vertical axis and we pivot the ruler around this point until we reach a sum a + b equal to s; the numbers a and b are then the required solutions.
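The final configuration of this ruler-pivoting procedure can be pictured with a few lines of matplotlib: for z² − 5z + 6 = 0 (an arbitrary example), the roots 2 and 3 are the abscissae of the two parabola points aligned with the axis point carrying the product 6 (plotted at ordinate −6, in line with the chord property above).

```python
import numpy as np
import matplotlib.pyplot as plt

s_, p_ = 5.0, 6.0                          # z^2 - 5z + 6 = 0, roots 2 and 3
a, b = np.roots([1.0, -s_, p_])
t = np.linspace(-4, 4, 400)
plt.plot(t, t**2, "b")                     # the parabola y = x^2
plt.plot(t, (a + b) * t - a * b, "r", lw=0.8)   # the aligned ruler y = (a+b)x - ab
plt.scatter([a, b], [a**2, b**2], color="r")
plt.scatter([0], [-p_], color="k")         # the axis point carrying the mark "6"
plt.axvline(0, color="0.6", lw=0.8)
plt.title("Parabolic nomogram: the ruler through 2 and 3 meets the axis at -6")
plt.savefig("parabolic_nomogram.png", dpi=150)
```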
The session ended by a comparison of the two techniques studied: hyperbolic multiplication abaque with concurrent lines versus parabolic nomogram with aligned points. The majority of students preferred the first method, the second seeming less precise. Generally speaking, they liked this work on graphical tables very much although they found it quite difficult. They even asked me when I was going to come back for other sessions of similar practical work.
Some history of graphical tables using hyperbolas and parabolas
In conclusion, let us give some information on the historic sources of the previously described activities. As already mentioned earlier, the hyperbolic multiplication abaque is an invention of Louis-Ézéchiel Pouchet. The context is that of the French Revolution's attempts to impose a new system of weights and measures. To help the population get used to the reform, article 19 of the law of 18 Germinal year III of the French Revolution prescribed a simplification of the conversion tools: "Instead of the conversion tables between the old and new measures, which had been ordered by the decree of 8 May 1790, it will be done by graphical scales to estimate those conversions without needing any calculation." It was in response to this that Pouchet drew up a book on metrology which went through three editions, including graphical tables which became more and more elaborate. In the third edition of his book (1797), he suggested real abaques for the first time, that is, graphs from which you could read the results of calculations directly without any manipulation [START_REF] Pouchet | Métrologie terrestre, ou Tables des nouveaux poids, mesures et monnoies de France[END_REF]. These tables allowed basic calculations to be carried out: addition, subtraction, multiplication, division, squaring, square rooting, the rule of three and converting units.
Then we had to wait until 1891 for Lieutenant Julius Mandl of the Imperial Corps of Austrian Engineers to come up with the idea of using Pouchet's multiplication abaque to solve equations of the second, third and fourth degrees [START_REF] Mandl | Graphische Auflösung von Gleichungen zweiten, dritten und vierten Grades[END_REF]). Mandl's article was almost translated into English in 1893 by Major W. H. Chippindall of the Royal Engineers [START_REF] Chippindall | Graphic solution for equations of the second, third and fourth powers[END_REF].
The solving of the second degree equation x² + Ax + B = 0 by the intersection of the hyperbola xy = B given by the abaque and the straight line x + y = −A, shown by a ruler or a taut thread, was explained earlier. For the third degree equation x³ + Ax² + Bx + C = 0, the roots satisfy the relationships:
x_1 + x_2 + x_3 = −A,
x_1 x_2 + x_1 x_3 + x_2 x_3 = B,
x_1 x_2 x_3 = −C.

Supposing x_2 + x_3 = z and x_2 x_3 = y, the previous system becomes

x_1 + z = −A,
x_1 z + y = B,
x_1 y = −C.

The elimination of z between the first two equations leads to the relationship y = x² + Ax + B between x (= x_1) and y; the third equation, xy = −C, is a hyperbola of the abaque, and the intersection of this parabola with that hyperbola gives x_1 and y = x_2 x_3, hence x_2 and x_3. The previous parabola, written as y − (B − A²/4) = (x + A/2)², has its vertex at (−A/2, B − A²/4). For the graphical solution we have a parabola permanently drawn on a transparent sheet. Therefore all you have to do is to place this parabola on the abaque with its vertex at the point with coordinates (−A/2, B − A²/4) and read the coordinates of where it intersects the hyperbola with equation xy = −C on the abaque. This solution of the third degree equation, undoubtedly too difficult for a class of 16 year olds, should be beneficial for older students. In his article, Mandl finally explains how to solve the fourth degree equation thanks to Pouchet's abaque and the transparency with the fixed parabola. Without going into details, it is enough to say that we resort in a classical manner to the successive solving of a third degree equation and several second degree equations by methods we have already seen.
Let us move on to the origins of the parabolic multiplication nomogram, used during the second session of practical work. For the first time we meet something like it in 1841, in the work of August Ferdinand Möbius (1790-1868) (Figure 8.11): on each of the parabolas shown in the table, if we draw a straight line joining two of the numbers marked on the parabola then the straight line passes through their product on the line at the top of the table [START_REF] Möbius | Geometrische Eigenschaften einer Factorentafel[END_REF]. However it does not seem that Möbius' work was noticed, nor that it had any influence on later authors. Then it was the engineer John Clark (a person about whom we know practically nothing except that he was a mathematics teacher at the Polytechnic School in Cairo at the time) who brought to light in 1905 the parabolic multiplication nomogram as we presented it to the 16 year old students [START_REF] Clark | Théorie générale des abaques d'alignement de tout ordre[END_REF][START_REF] Clark | Théorie générale des abaques d'alignement de tout ordre[END_REF]. He achieved it by a seemingly complex route but whose value lies in the fact that this method can be used for a whole range of relationships with three variables. The main idea is to try to construct nomograms using a straight line and a doubly marked conic, called 'conic nomograms'. In the case of the multiplication αβ = γ, we can write, when α and β are distinct:
αβ = γ  ⇔  there exist m and p such that  α^2 = mα + p,  β^2 = mβ + p  and  -γ = p
       ⇔  | α   α^2   1 |
          | β   β^2   1 |  =  0.
          | 0   -γ    1 |
The nullity of this last determinant fully expresses the alignment of the three points, of which two, marked by α and β, are on the parabola with equation y = x^2, and whose third, marked by -γ, lies on the y-axis.
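As a quick plausibility check of this alignment property (again my own addition, not in Clark's paper), the snippet below verifies numerically that the chord of the parabola y = x^2 through the points marked α and β always cuts the y-axis at the ordinate -αβ:

```python
import random

def y_axis_intercept(alpha, beta):
    """Ordinate at x = 0 of the line through (alpha, alpha^2) and (beta, beta^2)."""
    slope = (beta**2 - alpha**2) / (beta - alpha)      # equals alpha + beta
    return alpha**2 - slope * alpha                    # equals -alpha * beta

for _ in range(5):
    a, b = random.uniform(-9, 9), random.uniform(-9, 9)
    if abs(a - b) < 1e-6:
        continue
    assert abs(y_axis_intercept(a, b) - (-a * b)) < 1e-9
print("the chord through the points marked a and b always cuts the y-axis at -a*b")
```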
In conclusion
This trial reassured me that nomography is a choice area for rich and attractive practical activities with students of age 16 onwards. Practising the principal notions of the programme of study in a non-routine context, permanent interaction between algebra and geometry, simple and inexpensive materials and the clear enthusiasm of the students are so many arguments which, I hope, will convince teachers to explore this avenue.
Figure 8.1. Concurrent-line abaque from (Ocagne, 1891).
Figure 8.2. Concurrent-straight-line abaque from (Ocagne, 1891).
Thus the three systems of marked straight lines become three marked curves, forming what d'Ocagne calls an 'alignment nomogram' (Figure 8.3). To solve an equation F(α, β, γ) = 0 represented by such a nomogram is simple: if, for example, the values of α and β are given, one draws a straight line passing through the marked points α and β on the first two curves and this straight line meets the third curve at a point whose value is γ.
Figure 8.3. Alignment nomogram from (Ocagne, 1891).
Figure 8.4. Body surface area of an adult.
Figure 8.5. Multiplication table.
Figure 8.6. How to use a hyperbolic abaque for multiplication.
Figure 8.7. Multiplication abaque.
The first hour ended by examining an extended version of the abaque (Figure 8.8), still on A3 paper, allowing working in positive and negative numbers. It was an opportunity for a brief review of negative numbers and the rule of signs.
Figure 8.8. Extended multiplication abaque.
During the second hour we tackled equations of the second degree. I gave the students some second degree expressions to work with and to factorise, thus clarifying the idea that the solution of the equation z^2 - sz + p = 0 still amounted to finding two numbers x and y such that x + y = s and xy = p. Being 16 years old, you are hardly used to working with parameters, so I had planned to work solely with numerical examples, but one student said: "You are drawing conclusions based on a few examples, but who says that it is generally true with other numbers?". Pleasantly surprised by such maturity, I therefore continued with calculations in a more general form. Given that the abaque directly provides the hyperbola of the equation xy = p, it just remains to draw the straight line of equation x + y = s to read off graphically the abscissae of the points of intersection of the hyperbola and straight line, and thus to solve the second degree equation z^2 - sz + p = 0. For example (see Figure 8.6), to solve the equation z^2 - 8z + 12 = 0, we draw the straight line of equation x + y = 8 and we read the abscissae of its points of intersection with the hyperbola of value 12: the solutions are therefore 2 and 6. Drawing the straight line of the equation x + y = 8 is very simple, since all that is needed is to join the points on the axes of the coordinates (8, 0) and (0, 8). If you do not want to damage the abaque by writing on it, you can use a ruler or a taut string between these two points. Curiously, the more the students were at ease with the hyperbolas, the more they seemed to have forgotten everything about straight lines. It took quite a time to recall how to draw them.
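The reading of the abaque can also be imitated numerically. The short sketch below is my own illustration, not part of the classroom material; it intersects the straight line x + y = s with the positive branch of the hyperbola xy = p and returns the two solutions of z^2 - sz + p = 0, reproducing the example above.

```python
import numpy as np

def solve_by_abaque(s, p, x_max=20.0, n=200_000):
    """Solutions of z^2 - s*z + p = 0 read as the abscissae of the intersections
    of the straight line x + y = s with the hyperbola x*y = p (positive p and roots)."""
    x = np.linspace(1e-6, x_max, n)      # first-quadrant branch, as on the classroom abaque
    gap = (s - x) - p / x                # line minus hyperbola
    sgn = np.sign(gap)
    idx = np.where(sgn[:-1] * sgn[1:] <= 0)[0]
    return sorted({round(0.5 * (x[i] + x[i + 1]), 3) for i in idx})

print(solve_by_abaque(8, 12))            # z^2 - 8z + 12 = 0  ->  approximately [2.0, 6.0]
```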
Figure 8.9. How to use a parabolic abaque for multiplication. For that, let the three points in question be A, B and C; equating the gradient of AB with that of AC we can obtain the required equation.
A parabolic multiplication nomogram (Figure 8.10) allowed them to work efficiently in the interval [1, 10]. This interval can always be reached by multiplying or dividing the given numbers by powers of 10.
Figure 8.10. Parabolic multiplication nomogram.
Figure 8.11. One of Möbius' tables from (Möbius, 1841).
Acknowledgements: My thanks to Janet and Peter Ransom, who translated my text from the French. | 27,647 | [
"12623"
] | [
"54305",
"1004988"
] |
01484262 | en | [
"shs",
"math"
] | 2024/03/04 23:41:48 | 2018 | https://hal.science/hal-01484262/file/tournes_2015e_euler_method.pdf | Dominique Tournès
A graphic approach to Euler's method
Keywords: Tangent to a curve, Euler's polygonal method, Graphic method, Differential equations, Integral calculus, Exponential curve, Augustin-Louis Cauchy, Leonhard Euler, Gottfried Wilhelm Leibniz
To solve differential equations and study transcendental curves appearing in problems of geometry, celestial mechanics, ballistics and physics, mathematicians have imagined numerous approaches since the 17 th century. Alongside integration by quadratures and the series method, we can notably quote the polygonal method formalised by Euler in 1768. He directly used Leibniz's vision of curves as polygons made up of segments of infinitely tiny tangents. After an historical introduction and the study of an appropriate extract from the work by Euler on integral calculus, this chapter recounts a teaching experiment with 18 year olds, the aim of which was to introduce the notion of differential equations with support from the graphic version of the polygonal method. Through the purely geometric construction of integral curves formed from tiny segments of tangents, the students were able to make useful transfers between algebra and geometry and actively discover the first concepts of infinitesimal calculation.
The origins of Euler's method
Since 2001, Euler's method has played a significant role in the 16-18 year old curriculum for those studying the science subjects, both in mathematics and physics. This construction process, close to integrating differential equations, is generally performed numerically, the necessary calculations being carried out with the help of a programmable calculator or spreadsheet. In a previous paper (Tournès, 2007, pp. 263-285), I proposed that one could rely on a purely graphical version of the same method so that the notion of differential equations at high school level had greater meaning. To illustrate this in a concrete way, I am going to describe the teaching method I devised taking inspiration from history and that I was pleased to try out with my final year of upper secondary school students.
In the 17 th century the initial problems that led to differential equations were from either geometry or physics. The geometrical problems were linked to the properties of tangents, curves, squaring the circle and rectification (the process of finding the length of a curve segment). Physics involved the swinging of a pendulum and research into isochronal curves, the paths of light rays in a medium with a variable refractive index, orthogonal trajectory problems and the vibration of a string fixed at both ends. These problems, sometimes anecdotal, often appeared as challenges between corresponding scientists. Closely linked to the invention of infinitesimal calculus by Newton and Leibniz, these led to the most simple differential equations and everyday cases of quadrature (numerical integration). For nearly a century the principal approach to these equations was algebraic in nature because mathematicians tried to express their solutions in finite form using traditional algebraic operations as well as the new methods of differentiation and integration.
From the start of the 18 th century however, more systematic and more difficult problems arose which did not yield to the basic methods formulated by Newtonian mechanics. Mathematical physics provided numerous equations with partial derivatives which led, by separating the variables, to ordinary differential equations. The mechanics of points and solid bodies gave rise directly to such equations. At the heart of this proliferation there were two areas that played specific roles in perfecting these new methods of dealing with differential equations: celestial mechanics and projectiles. Certainly two-body problems and the trajectory of a cannonball in a vacuum that can be integrated by quadrature are different from three-body problems under the influence of gravity and taking air resistance into account when dealing with a projectile. Alongside integration by quadrature two other key routes were explored.
The first is writing the unknown functions as infinite series. This was started by Newton in 1671 and for a long time favoured by the English school of thought. In 1673 Leibniz began using infinite series closely followed by the Bernoulli brothers and other continental mathematicians. This was widely practised in mathematical physics and celestial mechanics sparking a considerable revolution in functions reaching far beyond Descartes' algebraic expressions. Little by little the explosion of infinity into algebraic calculation led to questioning the formal calculations and on to deep reflection about the notion of convergence.
The second method, the polygonal method is found in the early works of the founding fathers of infinitesimal calculus. It is linked to the Leibnizian concept of curves as polygons consisting of an infinity of infinitesimally tiny sides as elements of tangents. For example, in 1694 Leibniz constructed the paracentric isochrone (the curve traced out by a mass moving under gravity such that it distances itself from a fixed point at a constant speed) by means of a succession of segments of tangents as close as possible to the actual arc. On this occasion he writes (Leibniz, 1989, p. 304): Thus we will obtain a polygon […] replacing the unknown curve, that is to say a Mechanical curve replacing a Geometric curve, at the same time we clearly see that it is possible to make the Geometric curve pass through a given point, since such a curve is the limit where the convergent polygons definitely fade.
We can recognise the idea in this that Cauchy uses at a later date to demonstrate the famous theory of existence (Cauchy, 1981, p. 55) which, after a few tweaks, became the Cauchy-Lipschitz theorem. Between Leibniz and Cauchy it was Euler who formalised the polygonal method and from it created the numerical method. Since this really worked for its applications history has remembered it as Euler's method.
A wordy mathematician, master of the pen
Leonhard Euler (1707-1783) could single-handedly embody the mathematics of the 18 th century. Euler studied in his natal town of Basel, Switzerland where his father was the protestant pastor. In 1726 he was offered a post at the Academy of Science in St Petersburg to take over from Nicholas Bernoulli. A few years later, after his position in society was assured, he married Katharina Gsell. She was the daughter of a painter from St Petersburg and like him, of Swiss origin. They had thirteen children, of whom only five survived: Euler took pleasure in recounting that he had made some of his most important mathematical discoveries while holding a baby in his arms while the other children played around him. In 1741, on the invitation of Frederick the Great, Euler joined the Academy of Science in Berlin, where he was to stay until 1766. He then returned to St Petersburg for the last years of his life. Although he had become blind, he pursued his scientific activities without pause with the help of his sons and other members of the Academy.
Writing equally in Latin, German or French Euler maintained a regular correspondence with most of the continental mathematicians, finding himself at the crossroads of contemporary research. Gifted with an extraordinary creative power, he constructed an immense work which significantly enhanced progress in all areas of mathematics and physics. Begun in 1911, the publication of his complete works is still not finished in spite of the 76 volumes that have already appeared: 29 volumes on mathematics, 31 for mechanics and astronomy, 12 for physics and various works and 4 for correspondence.
In particular, Euler's work on differential equations is considerable. With great skill he explored the ideas launched by his predecessors and pushed the majority of them a great deal further. In his research on Riccati's equation
y' = a(x)y^2 + b(x)y + c(x)
, so important because its integration is equivalent to that of the second order linear equation
y'' = a(x)y' + b(x)y + c(x)
omnipresent in mathematical physics, Euler had recourse to all methods imaginable: series, definite integrals depending on a parameter, continuous fractions, tractional motion etc. This determination cannot just be explained by mathematical reasons as Euler needed to integrate second order linear equations in many of his works on geometry and physics. From 1728 he met second order equations dealing with the movement of a pendulum in a resistant medium. In 1733 the calculation of the length of a quarter of an ellipse led him to a second order linear equation, then on to one of Riccati's equations. In 1736, following Daniel Bernoulli, he tackled the oscillations of a vertically hung chain, homogenous or not. Later, in 1764, he took an interest in the vibrations of a circular membrane. In this later research he deals with different second order differential equations which he does not know how to integrate exactly. We see him making use more and more frequently of series, and this use becomes systematic from 1750. This was how, on several occasions, he came across Bessel equations and their equivalents leading to the first general expression of Bessel functions.
Euler's text
For equations that cannot be integrated by quadratures and do not lend themselves readily to the series method, but for which a solution has to be found at all costs, at least approximately, for practical reasons, Euler resorted to the polygonal method. Euler's method makes its initial appearance in the first volume of Institutiones calculi integralis, published in St Petersburg in 1768. However, Euler had already used this method on at least two occasions: in 1753 for his research on the trajectory of a body in a resistant medium and in 1759 to determine the perturbations of a planet or comet (Tournès, 1997, pp. 158-167). This work on ballistics and celestial mechanics shows that, for Euler, practice preceded theory: it is only after having rubbed shoulders at length with substantial applications that the great mathematician was able to perfect the simplified didactic text of 1768. We give below an English translation of the extract from the work which corresponds to what is actually taught in upper secondary. This will allow interested teachers to let their students discover Euler's method based on the original text. Here is the passage in question (Euler, 1768, pp. 424-425), which does not require commentary as it is so clear and instructive.
Problem 85 650. Whatever the differential equation might be, determine its complete integral in the most approximate way.
Solution
Let there be a differential equation between two variables x and y. This equation will be in the form dy/dx = V, where V is any function of x and y. Moreover, when you are calculating a definite integral, you must interpret it in such a way that if you give x a fixed value, for example x = a, the other variable should take on a given value, for example y = b. Let us first deal with finding the value of y when x is given a slightly different value to a. In other words, let us find y when x = a + ω. Now since ω is a small quantity, the value of y remains close to b. That is why, if x only varies from a to a + ω, it is possible to consider the quantity V as constant in that interval. So, having said x = a and y = b, it will follow that V = A, and for this slight change we will have dy/dx = A and by integration y = b + A(x - a), a constant having been added so that y = b when x = a. Let us therefore assume that, when x = a + ω, y = b + Aω. In the same way we can allow ourselves to advance further from these last steps by means of more small steps until we finally reach values as far from the initial values as we wish. In order to show this more clearly, let us set them out in succession the following way:
Variables   Successive values
x:          a, a', a'', a''', a^iv, ..., x
y:          b, b', b'', b''', b^iv, ..., y
V:          A, A', A'', A''', A^iv, ..., V
Obviously, from the first values x = a and y = b, one can derive V = A, but then for the second values we will have b' = b + A(a' - a), the difference a' - a having been chosen as small as desired. From that, supposing x = a' and y = b', we will calculate V = A' and so for the third values we will obtain b'' = b' + A'(a'' - a'), from which, having established x = a'' and y = b'', it will follow that V = A''. So for the fourth values, we will have b''' = b'' + A''(a''' - a'') and from that, supposing x = a''' and y = b''', we determine V = A''' and we can advance towards values that are as far removed from the originals as we wish. Now the first series which illustrates the successive values of x can be increasing or decreasing provided the change is by very small amounts.
Corollary 1 651. For each individual tiny interval, the calculation is done in the same way and thus the values which depend successively on each other are obtained. By this method for all the individual values given for x, the corresponding values can be determined.
Corollary 2 652. The smaller the values of ω, the more precise are the values obtained for each interval. However, the mistakes made in each interval, even if they are individually smaller, accumulate over a greater number of intervals.
Corollary 3 653. Now in this calculation, errors derive from the fact that we consider at each interval the two values of x and y as constants and consequently the function V is held to be a constant. It follows that the more the value of V varies from one interval to the next, the greater the errors.
If Euler's treatise of 1768 has been recorded by history it is undoubtedly because for the first time the polygonal method was clearly set out for didactic purposes at the same time as it took on the entirely numerical form which we know today. Before Euler, according to the ancient geometric view of analysis, one did not study functions but constructed curves. Therefore the polygonal method was initially a way to determine geometrically the new transcendental curves which appeared alongside infinitesimal calculus rather than a numerical process. This aspect, moreover, survived well after Euler at the heart of graphical construction practised by engineers up to the Second World War: numerous variations on and improvements for the polygonal method then came to light to calculate the integrals of differential equations graphically (Tournès, 2003, pp. 458-468).
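Read with modern eyes, §650 is exactly the algorithm now programmed in schools. The following sketch is my own illustration of that numerical form (the right-hand side V chosen for the example is arbitrary); it advances the tabulated values a, b, A step by step as Euler describes:

```python
def euler_polygonal(V, a, b, omega, n_steps):
    """Euler's scheme of §650: starting from x = a, y = b, repeatedly treat
    V(x, y) as constant over a small step omega and set y <- y + V(x, y) * omega."""
    xs, ys = [a], [b]
    x, y = a, b
    for _ in range(n_steps):
        A = V(x, y)               # the quantity Euler calls A, A', A'', ...
        x, y = x + omega, y + A * omega
        xs.append(x)
        ys.append(y)
    return xs, ys

# Illustration with dy/dx = V(x, y) = x + y, y(0) = 1 and step 0.1:
xs, ys = euler_polygonal(lambda x, y: x + y, a=0.0, b=1.0, omega=0.1, n_steps=10)
print(list(zip(xs, ys))[:3])      # first points: (0.0, 1.0), (0.1, 1.1), (0.2, about 1.22)
```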
In the last year of upper secondary school, in order to introduce the notion of differential equations, I think it could be pertinent to return to the original geometric meaning of the Euler-Cauchy method: constructing a curve based on the knowledge of its tangents and carrying out this construction entirely by geometric means without any recourse to numerical calculation. This is what I demonstrated at the IREM in Réunion and what I am going to present here.
Account of classroom activities
I set up this strategy in a final year science class covering two two-hour sessions. The trial took place at the Le Verger High School, at Sainte-Marie in Réunion. It was Mr Jean-Claude Lise's class and I am most grateful for his welcome and collaboration. The students had previously met Euler's methods with their mathematics and physics teachers in the traditional numeric form, carrying out the calculations on a spreadsheet.
First session: where the students see the exponential in a new light
The first session was dedicated to the construction of an exponential function, the keystone of the final year programme. I began with a brief historical outline of Euler: the main stages of his life in Basel, St Petersburg and Berlin; the immensity of his writings in mathematics and physics; some details on certain of his works which link with what is taught in high school. Then after having quickly run through the extract given earlier from the 1768 Institutiones calculi integralis and having made the link between the students' knowledge on differential equations, I told them that my objective was to get them to apply Euler's method in a different way, no longer numerically but purely graphically, by replacing all the calculations by geometric constructions with a ruler and compass. For that they first had to learn the basics of graphical construction, as they appear in the first pages of Descartes' Géométrie (Descartes, 1637, pp. 297-298). So I suggested to the class the following preparatory activity: "given one segment one unit in length and two segments of length x and y, construct segments of length x + y, x - y, x × y and x ÷ y". Getting underway was laborious, the students having great difficulty in recalling Thalès' theorem and in applying it in context. They nevertheless managed the synthesis of Figure 7.1, completed by several of them on the interactive whiteboard.
Then we moved on to the exponential function by replacing the differential equation y' = y by the equation with finite differences Δy = y Δx. I first asked the class to explain the basic construction which would allow progress from a given point (x, y) to the neighbouring point (x + Δx, y + Δy) by drawing a small segment of the tangent. The students easily understood how to transform the original ordinate y at the starting point into a slope for the required tangent (see Figure 7.2). To do this it was enough to extend by a unit to the left of the point (x, 0) and then to join the point (x - 1, 0) to the point (x, y); thus one obtains a segment of slope y which now only needs to be extended. Once this basic construction was understood they had to repeat it in their own way from the starting point (0, 1) to obtain a polygonal line approaching the graph of the exponential function. Figure 7.3 shows three quite different examples of students' work; on the third a confusion between the chosen unit (2 cm) and the step of the subdivision used for the construction (1 cm) can be seen, which means that the student treated the equation as y' = 2y.
At this point I digressed so we could delve deeper into Euler's method and show its implicit variation. Explicitly one moves from a point (x, y) to a neighbouring point (x + Δx, y + Δy) using the tangent at the starting point. In a symmetrical way, we can use the tangent at the end point, i.e. replace the differential equation y' = y by the equation that uses finite differences Δy = (y + Δy) Δx. We speak of the implicit method because the difference Δy is not directly given, but determined implicitly by the previous equation. In the case of the exponential this equation is easily solved and we get: Δy = y Δx / (1 - Δx).
I then asked the students to do this basic construction with finite differences and to repeat it to arrive at a second construction approached from the exponential (see Figure 7.4). Some quicker students neatly incorporated the two figures on one single figure. The session ended with an analysis of this final figure: we tried to understand why the true curve defined by the differential equation, supposing it existed, had to be situated between the two polygonal lines provided by the explicit and implicit methods. We concluded by saying that a much better approximate polygonal line would be obtained by taking the average of the two y values for each x value.
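Readers who prefer to check this "sandwiching" numerically rather than on paper can use the minimal sketch below (my own, not part of the classroom material), which builds the explicit and implicit polygonal lines for y' = y from the starting point (0, 1), together with their average and the exact exponential:

```python
import math

def polygonal_lines(dx=0.1, n_steps=10):
    """Explicit, implicit and averaged Euler polygonal lines for y' = y, y(0) = 1."""
    y_exp, y_imp = 1.0, 1.0
    rows = []
    for k in range(1, n_steps + 1):
        x = k * dx
        y_exp = y_exp + y_exp * dx              # Δy = y Δx (tangent at the start point)
        y_imp = y_imp + y_imp * dx / (1 - dx)   # Δy = y Δx / (1 - Δx) (tangent at the end point)
        rows.append((x, y_exp, y_imp, 0.5 * (y_exp + y_imp), math.exp(x)))
    return rows

for x, lo, hi, avg, exact in polygonal_lines():
    print(f"x={x:.1f}  explicit={lo:.4f}  implicit={hi:.4f}  average={avg:.4f}  e^x={exact:.4f}")
```

The explicit line stays below the exponential and the implicit line above it, which is exactly the situation the students analysed on their final figure.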
Second session: where the students dealt with two Baccalaureate topics in an unusual way
During my second session with the students they had to deal with two topics from the course on Euler's method by practising the new techniques of graphical construction they had just discovered. One of these topics had been given by their teacher two weeks ago in a mock exam and I asked them to work on the other as a piece of homework in the week between my two sessions. In this way they had all the elements on hand to compare the numeric and graphic approaches of the two individual differential equations. Here is the start of the first topic we worked on:
We were to study the functions f which could be derived in [0, +∞[ subject to
(1): for all x ∈ [0, +∞[, f(x) f'(x) = 1 and f(0) = 1.
Part A
Let a function f exist which will satisfy (1). Euler's method allows the construction of a series of points (M_n) near the curve represented by f. The step h = 0.1 is chosen.
Then the coordinates (x_n, y_n) of the points M_n obtained by this method satisfy x_0 = 0, y_0 = 1 and, for all natural numbers n, x_{n+1} = x_n + 0.1 and y_{n+1} = y_n + 0.1/y_n. The problem continues by getting the students to check that the function f(x) = √(2x + 1) is the only solution and asking them to compare the values of f(0.1), f(0.2), f(0.3), f(0.4), f(0.5) to those previously obtained by Euler's method.
I gave the students a challenge: graphically construct a polygonal line from x = 0 to x = 0.5 with step h = 0.1, without doing any numerical calculation, then measure with a 20 cm ruler the values of the corresponding ordinates and compare them to those found by the numerical method. At this stage in the progress of the work I gave no more guidance and left the students to fend for themselves. The completion of the basic construction associated with the equation Δy = Δx/y took most of them an extremely long time. Figure 7.5 illustrates a way of organising this construction, but the students using their own initiative found many other ways. Figure 7.6 shows four pieces of student work, all very different. Reaching such a result took a little more than an hour of intense work. Often there were several false starts or careless errors. I came away convinced that if one allows the students time to get involved in what they are doing they achieve remarkable results.
The more advanced students could then turn their attention in a similar way to the second syllabus topic, following the same path as before. The start of the problem, reproduced below, deals with a differential equation using Euler's method. Here one is given the explicit function f(x) = 2(e^{4x} - 1)/(e^{4x} + 1), which allows it to be studied directly and brings attention to the asymptote y = 2.
The plane is given an orthonormal basis (O, i, j). We are interested in the functions f derivable in [0, +∞[ satisfying the conditions
(1): for all real x in [0, +∞[, f'(x) = 4 - [f(x)]^2;
(2): f(0) = 0.
We admit that there is a unique function f satisfying (1) and ( 2) simultaneously.
The two parts can be dealt with independently. The annex will be completed and submitted with the work at the end of the text.
Part B Follow-up study
To obtain a representative approximation of the curve f we use Euler's method with a step length of 0.2. Thus we obtain a succession of points marked (M_n), of abscissa x_n and ordinate y_n, such that x_0 = 0, y_0 = 0 and, for all natural numbers n, x_{n+1} = x_n + 0.2 and y_{n+1} = -0.2 y_n^2 + y_n + 0.8. The coordinates of the first few points are shown in the table below. Complete the table. Give your answers to the nearest …
Now well versed in graphical construction, the students are tasked with finding a basic construction using the equation with finite differences (see Figure 7.7) and with carefully constructing an approximate integral curve (see Figure 7.8), which will allow them to compare the diagram with the numerical values in the table. The old expression of 'the inverse problem of tangents' takes on its full meaning here: the students experience this problem by physically drawing the tangent and following its movement step by step. After these very telling graphic investigations it should be easier for them to move from the small to the infinitely small; from the discrete to the continuous; and to imagine the ideal curve defined by the differential equation which they will eventually study more abstractly.
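Teachers who want to prepare the comparison tables for both topics in advance may find the following sketch useful (mine, not part of the examination papers); it computes the Euler points and the exact values side by side for the two equations discussed above:

```python
import math

def euler_points(f_prime, x0, y0, h, n):
    """Explicit Euler points (x_k, y_k) for y' = f_prime(x, y) starting at (x0, y0)."""
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f_prime(x, y)
        x = x + h
        pts.append((x, y))
    return pts

# Topic 1: f(x) f'(x) = 1, f(0) = 1, i.e. y' = 1/y; exact solution sqrt(2x + 1).
for x, y in euler_points(lambda x, y: 1 / y, 0.0, 1.0, 0.1, 5):
    print(f"x={x:.1f}  Euler y={y:.4f}  exact={math.sqrt(2 * x + 1):.4f}")

# Topic 2: y' = 4 - y^2, f(0) = 0, step 0.2; exact solution 2(e^{4x} - 1)/(e^{4x} + 1).
for x, y in euler_points(lambda x, y: 4 - y * y, 0.0, 0.0, 0.2, 5):
    exact = 2 * (math.exp(4 * x) - 1) / (math.exp(4 * x) + 1)
    print(f"x={x:.1f}  Euler y={y:.4f}  exact={exact:.4f}")
```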
Figure 7.1. Basic constructions.
Figure 7.2. Basic construction of a tangent to the exponential.
Figure 7.3. Three constructions of the exponential.
Figure 7.4. Constructing the exponential using Euler's implicit method.
Figure 7.5. Basic construction of a tangent for the equation y' = 1/y.
At the end of the trial I am convinced that these practical tasks inspired by the history of the polygonal method allow a revision of basic geometric knowledge learnt in secondary school (ages 11-16). They also create the opportunity for fruitful interaction between algebra and geometry as well as offering a gentle introduction to analysis. They lead to acquiring a kinaesthetic feel of the tangent describing the curve from the differential equation.
Acknowledgements: My thanks to Janet and Peter Ransom, who translated my text from the French. | 25,291 | [
"12623"
] | [
"54305",
"1004988"
] |
01482901 | en | [
"shs"
] | 2024/03/04 23:41:48 | 2017 | https://minesparis-psl.hal.science/hal-01482901/file/IMDS_Manuscript_online.pdf | Using Customer-related Data to Enhance E-grocery Home Delivery
Keywords: City Logistics, Food Delivery, E-commerce, Data Mining, Freight Transportation
Purpose -The development of e-grocery allows people to purchase food online and benefit from home delivery service. Nevertheless, a high rate of failed deliveries due to the customer's absence causes significant loss of logistics efficiency, especially for perishable food. This paper proposes an innovative approach to use customer-related data to optimize e-grocery home delivery. The approach estimates the absence probability of a customer by mining electricity consumption data, in order to improve the success rate of delivery and optimize transportation.
Design/methodology/approach -The methodological approach consists of two stages: a data mining stage that estimates absence probabilities, and an optimization stage to optimize transportation.
Findings -Computational experiments reveal that the proposed approach could reduce the total travel distance by 3% to 20%, and theoretically increase the success rate of first-round delivery by approximately 18%-26%.
Research limitations/implications -The proposed approach combines two attractive research streams on data mining and transportation planning to provide a solution for e-commerce logistics.
Practical implications -This study gives an insight to e-grocery retailers and carriers on how to use customer-related data to improve home delivery effectiveness and efficiency.
Social implications -The proposed approach can be used to reduce environmental footprint generated by freight distribution in a city, and to improve customers' experience on online shopping.
Originality/value -Being an experimental study, this work demonstrates the effectiveness of data-driven innovative solutions to e-grocery home delivery problem. The paper provides also a methodological approach to this line of research.
Introduction
Recent developments of e-commerce have had a significant impact on food supply chains. Today, many traditional grocery retailers offer their customers the opportunity to purchase food items online and have them delivered to their home utilizing their existing distribution network [START_REF] Ogawara | Internet grocery business in Japan: current business models and future trends[END_REF][START_REF] Agatz | E-fulfillment and multi-channel distribution -A review[END_REF]. At the same time, new companies enter the retail groceries market by providing online supermarkets with no physical stores, fulfilling home deliveries from their warehouses (e.g. the case of Ocado [START_REF] Saskia | Innovations in e-grocery and Logistics Solutions for Cities[END_REF]). In addition to that, internetbased retailers, like Amazon, exploit their e-commerce expertise to build their own online grocery shops thus extending even more the options end-customers have for purchasing groceries online [START_REF] Kang | Why Consumers Go to Online Grocery: Comparing Vegetables with Grains[END_REF].
In e-grocery commerce, home delivery -the process of delivering goods from a retailer's storage point (e.g. distribution centers, shops) to a customer's home-plays a crucial role [START_REF] Punakivi | Identifying the success factors in e-grocery home delivery[END_REF]. In fact, due to its convenience to customers, home delivery has become a dominant distribution channel of business-toconsumer e-commerce [START_REF] Campbell | Incentive Schemes for Attended Home Delivery Services[END_REF]. Nevertheless, a certain challenge faced in e-grocery is that the perishability and storage condition-sensitivity of food and drink items requires the attendance of the customer at the moment of delivery [START_REF] Hsu | Vehicle routing problem with time-windows for perishable food delivery[END_REF]. At the same time, this makes alternative methods for unattended delivery, such as delivery boxes, reception boxes and shared reception boxes, hard and unsafe to use. This has led e-tailers to introduce strict policies for deliveries that could not be completed due to the customer's non-attendance (Ehmke, 2012a), while aiming to increase the probability of an attended delivery by allowing their customers to choose their preferred time slot. However, it is still common for end-customers to be absent at the time of delivery either due to their own fault (e.g. failing to remember) or due to a delayed delivery (e.g. due to traffic).
In this paper, we aim to address the attended home delivery problem (AHDP) (Ehmke, 2012a, Ehmke and[START_REF] Ehmke | Customer acceptance mechanisms for home deliveries in metropolitan areas[END_REF]) in e-grocery motivated by the fact the attendance of the customer is often hard to predict. We do this by investigating a new approach that utilizes customer-related data to improve attended home delivery efficiency. The approach consists of two stages. The first stage concerns a data mining process, whose objective is to estimate the purchaser's absence probability at a given time window according to his/her electricity consumption behavior. The second stage uses the calculated absence probabilities as an input to an optimization model for managing the fleet of trucks that execute the home deliveries. This paper aims to make both a theoretical and a practical contribution to the AHDP. With regards to the theoretical contribution, this study is among the first ones applying data mining techniques to AHDP. It provides a novel methodology to investigate the AHDP from the aspect of customer-related data, which can be thought of as a new research line on AHDP. From a practical point of view, the two-stage approach proposed could serve as a decision-making model for e-grocery retailers, or other retail businesses that provide (attended) home delivery service, to organize or enhance their delivery service.
Following this introduction, Section 2 provides a review of related work. Then, Section 3 presents the two-stage approach, that is, respectively, data mining stage and the transportation planning stage. Section 4 presents an application example to demonstrate the practicability and performance of the proposed approach. Finally Section 5 concludes this work.
A brief review of e-grocery and its logistics
In this section we briefly discuss the literature from three related problems: current challenges to grocery and food e-commerce, delivering grocery and food items in ecommerce, and innovations in home delivery.
On-line grocery retailing
On-line grocery retailing, also known as e-grocery, is a type of business to consumer e-commerce that has enjoyed great growth in the last decade and is expected to continue growing in the years to come [START_REF] Mortimer | Online grocery shopping: the impact of shopping frequency on perceived risk[END_REF]. Similarly to other Internet retailing examples, on-line grocery shopping offers significant benefits to end-customers including time-savings, access to multiple retailers and products and home delivery. Nevertheless, there are a number of factors that can affect the decision of end-customers to use on-line channels for their grocery shopping. These important factors include:
(1) Ordering interface and product information [START_REF] Tanskanen | The way to profitable Internet grocery retailing -six lessons learned[END_REF][START_REF] Boyer | Customer Behavior in an Online Ordering Application: A Decision Scoring Model*[END_REF][START_REF] Boyer | Customer behavioral intentions for online purchases: An examination of fulfillment method and customer experience level[END_REF]): a well-designed and easy to use online shop is critical to the overall customer experience of online shopping. In grocery shopping in particular, where customers are used to visually checking the products they buy (for nutrition details, expiry dates, ingredients etc.), it is important that label information and accurate product photos are available to the end-customer via a website or app. Moreover, customers often expect that managing their basket and checking out online should be a straighforward process, similar to placing products in a shopping cart and visiting a cashier in a physical store. (2) Product range availability [START_REF] Colla | E-commerce: exploring the critical success factors[END_REF][START_REF] Anckar | Creating customer value in online grocery shopping[END_REF][START_REF] Zott | Strategies for value creation in e-commerce:: best practice in Europe[END_REF]: customers expect the products they can normally purchase in a physical store to be available online. A certain limitation here has to do with the purchase of non-packaged food (or even fresh packaged food) products, especially in cases when the customer is used to buying a product after visually checking its condition. (3) Suitable logistical delivery options (Wang et al., 2016a[START_REF] Anu | E-Commerce Logistics: A Literature Research Review and Topics for Future Research[END_REF][START_REF] Koster | Distribution strategies for online retailers[END_REF]: having chosen the products they want to purchase, customers need to also make a decision about the delivery of the physical goods. Besides choosing the preferable option, this factor also includes the retailer's response to delivery problems, cost of different options and subscription models for delivery services. (4) Consistency between all sales and media channels [START_REF] Ishfaq | Realignment of the physical distribution process in omni-channel fulfillment[END_REF][START_REF] Hübner | Last mile fulfilment and distribution in omni-channel grocery retailing: A strategic planning framework[END_REF][START_REF] Breugelmans | Cross-Channel Effects of Price Promotions: An Empirical Analysis of the Multi-Channel Grocery Retail Sector[END_REF]: this is a relatively recent trend due to the emergence of omni-channel retail. With retailers offering more and more options for customers to purchase goods, they need to make sure that different channels offer the same information and functionality.
In this paper, we focus on the third factor described above -the delivery options offered to the customer -due to its continuous importance for e-commerce success [START_REF] Ricker | Order fulfillment: the hidden key to e-commerce success[END_REF][START_REF] Lee | Winning the last mile of e-commerce[END_REF][START_REF] Zhang | Repurchase intention in B2C e-commerce-A relationship quality perspective[END_REF][START_REF] Ramanathan | The moderating roles of risk and efficiency on the relationship between logistics performance and customer loyalty in e-commerce[END_REF]. We discuss this challenge in more detail in the next section.
Delivery challenges in grocery and food e-commerce
Unlike traditional in-store sales where customers are able to receive physical products directly after their purchase, e-grocery requires a set of logistics operations that are crucial not only for the right delivery of a product but also for the overall satisfaction of the end-customer [START_REF] Hübner | Last mile fulfilment and distribution in omni-channel grocery retailing: A strategic planning framework[END_REF]. It has been noted that the retailers that provide a grocery home delivery service are the ones that face the greatest logistical challenges [START_REF] Fernie | Retail logistics in the UK: past, present and future[END_REF]. These logistical challenges refer to both the back-end fulfillment and the last mile distribution of an order [START_REF] Hübner | Last mile fulfilment and distribution in omni-channel grocery retailing: A strategic planning framework[END_REF]. Back-end fulfillment mainly deals with the picking and preparation of an order. Last mile distribution (which is the focus of this study) involves decisions related to the delivery method, time and area as well as the returns of unwanted products.
Two methods are commonly available for e-grocery delivery: home delivery and click and collect. As regards the first method, due to the perishability and storage condition-sensitivity of food, attendance of customer (or receiver in general) is often required at the moment of home delivery [START_REF] Hsu | Vehicle routing problem with time-windows for perishable food delivery[END_REF]. This problem is known in the literature as the Attended Home Delivery Problem (AHDP) (Ehmke, 2012a, Ehmke and[START_REF] Ehmke | Customer acceptance mechanisms for home deliveries in metropolitan areas[END_REF]. As the attendance of a customer is hard to predict, home delivery usually results in high rate of failures [START_REF] Agatz | Time Slot Management in Attended Home Delivery[END_REF][START_REF] Gevaers | Characteristics and typology of last-mile logistics from an innovation perspective in an urban context[END_REF][START_REF] Lowe | The Last Mile: Exploring the online purchasing and delivery journey[END_REF] and can lead to high delivery costs, waste of waiting time (for the customer) and waste of energy spent in transportation. Effectively and efficiently tackling AHDP is becoming a key success factor to food delivery, as well as an important challenge with regards to sustainability of freight transportation (Ehmke and Mattfeld, 2012[START_REF] De | Factor Influencing Logistics Service Providers Efficiency' in Urban Distribution Systems[END_REF][START_REF] Gevaers | Characteristics and typology of last-mile logistics from an innovation perspective in an urban context[END_REF].
At the stage of transportation, the AHDP has mainly been addressed as a Vehicle Routing Problem (VRP), or VRP with time windows (VRPTW) if a delivery time window is imposed (Ehmke andMattfeld, 2012, Hsu et al., 2007). The latter is also studied as time slot management problem [START_REF] Agatz | Time Slot Management in Attended Home Delivery[END_REF]. Reliability and width of the time windows -slots -strongly impact on the results of routing optimization as well as on customer's experience. At the stage of reception, some practical solutions can be employed when a customer is absent at delivery (i.e. unattended delivery), such as leaving the package to someone nearby (neighbors, gatekeeper etc.) or at a secure place (mailbox, yard, garage etc.) or calling the purchaser to confirm attendance [START_REF] Punakivi | Identifying the success factors in e-grocery home delivery[END_REF]Saranen, 2001, Ehmke, 2012a). These solutions are helpful to the problem, but their disadvantages are significant; time wasted for phone calls or for waiting for delivery, product security issues, lockers storage condition issue, reception boxes size issue [START_REF] Iwan | Analysis of Parcel Lockers' Efficiency as the Last Mile Delivery Solution -The Results of the Research in Poland[END_REF][START_REF] Lowe | The Last Mile: Exploring the online purchasing and delivery journey[END_REF][START_REF] Ehmke | Customer acceptance mechanisms for home deliveries in metropolitan areas[END_REF]. These disadvantages are particularly noticeable in food delivery.
In order to tackle the above challenges and improve home delivery, e-tailers and logistics providers have been driven to invest in delivery innovation and technology [START_REF] Lowe | The Last Mile: Exploring the online purchasing and delivery journey[END_REF]. In the competitive delivery market, this is expected to create solutions that can better meet customer needs. We discuss relevant innovations in the next section.
Innovation in home delivery
Innovation in home delivery has received considerable attention in both real-world practice and research in recent years. Generally, innovations can be classified in three lines of research that are constantly and interactively developing: organizational, technology-enabled, and data technique-enabled innovations. Organizational innovations refer to the implementation of innovative organization models or methods for last mile distribution (LMD). Examples include urban consolidation centers [START_REF] Allen | A review of urban consolidation centres in the supply chain based on a case study approach[END_REF][START_REF] Van Duin | New challenges for urban consolidation centres: A case study in The Hague[END_REF], synchronization and horizontal collaboration [START_REF] De Souza | Collaborative urban logistics-synchronizing the last mile a Singapore research perspective[END_REF] and crowdsourcing [START_REF] Paloheimo | Transport reduction by crowdsourced deliveries -a library case in Finland[END_REF][START_REF] Chen | Using taxis to collect citywide Ecommerce reverse flows: a crowdsourcing solution[END_REF].
Technology-enabled innovations are the result of the application of emerging technologies in LMD. Automated lockers and drones are the most noticeable examples in this line as both industrial practice (DHL, 2016[START_REF] Amazon | Amazon Prime Air[END_REF] and academic work indicate [START_REF] Gevaers | Characteristics and typology of last-mile logistics from an innovation perspective in an urban context[END_REF][START_REF] Iwan | Analysis of Parcel Lockers' Efficiency as the Last Mile Delivery Solution -The Results of the Research in Poland[END_REF].
Data technique-enabled innovations refer to applications of data techniques (data mining, data analytics, big data etc.) in LMD aiming at improving effectiveness and efficiency. Academic research has concentrated heavily on the exploitation of historical traffic data sets to optimize time-dependent routes in LMD. For example, Ehmke and Mattfeld (2012) and Ehmke et al. (2012) propose mining taxi-Floating Car Data (FCD) to determine time-dependent travel times in a city and use that information to optimize routing. Chen et al. ( 2016) mines taxi-FCD to plan taxi-based crowdsourcing LMD in big cities. These examples have indicated a great potential for data techniques to improve LMD, especially in the context of city logistics. Nevertheless, data other than historical traffic data are rarely studied, even though they could provide significant benefits to LMD. We notice here that customer-related data have been previously used in the context of e-commerce but mainly for the purpose of marketing. Examples include mining customer data to understand consumption performance and design online shopping services [START_REF] Liao | Mining customer knowledge to implement online shopping and home delivery for hypermarkets[END_REF], or for the purpose of customer relationship management [START_REF] Mithas | Why do customer relationship management applications affect customer satisfaction[END_REF][START_REF] Karakostas | The state of CRM adoption by the financial services in the UK: an empirical investigation[END_REF].
Inspired by the potential data-enabled techniques have shown for LMD in particular but also for logistics and supply chain management in general (Wang et al., 2016b), this study is focused on this type of home delivery innovation. More specifically, the fact that the attended home delivery could benefit from predictions regarding a customer's absence makes data-enabled techniques -which are capable of providing such predictions -a promising tool for tackling this problem.
A data-driven approach for e-grocery home-delivery
The discussion in the previous section has highlighted the importance of the AHDP problem in e-grocery delivery as well as the potential of data usage in last-mile logistics. Motivated by this, this work introduces an innovative approach that utilizes customer-related data to improve the home-delivery service of grocery items. In summary, we attempt to exploit customer-related data in order to provide a method to determine the optimal time slot profiles for carriers or shippers, in order to optimize the success rate of home delivery.
The approach proposed in this paper consists of two stages (see Figure 1). The first stage aims to estimate the probability the purchaser of an online grocery order will be at the chosen delivery location at different points of time during the day. At this stage, customer-related data are collected and used as an input to a data mining model, capable of calculation the customer attendance (or absence) probabilities. A time series of customer telemetry data is taken as the input to the framework. The time series is arranged as a temporally ordered telemetry measurement collected from each customer, noted as (x 1 , x 2 , x 3 ...x n ), as x i to be the measurement obtained at the i-th time slot during the whole observation time window. The physical sense of x i depends on the intrinsic of different telemetry sources. For example, if we use in-house electrical power loads data of each customer, x i then denotes the instant power load level of one particular customer measured at the i-th time slot. Other examples include water consumption data and historical data from previous deliveries. With more and more data being collected nowadays from households and individuals (due to emerging trends like the Internet of Things, Big Data and Open Data), we expect the availability of customer-related data that can be used in this approach to increase considerably in the years to come. Such an open dataset on electricity consumption is provided by the Irish Commission for Energy Regulation (CER) and is used as an example in the remaining of this study. Once the customer-related data are collected, they are used in a data mining model to estimate home attendance. As an extra input, expert knowledge might also be required at this stage, especially with regards to energy consumption, in order to link consumption data with occupancy detection (home attendance or absence). In this paper, we estimate home absence at given time windows according to one's electricity consumption behavior. Building statistical correlations between energy consumption traces and social-economical factors of customers has received a lot of attention lately. The major purpose of the research topic is to find out the underlying patterns of customers' daily living habits based on their energy usage behaviors, in order to provide a strong base to not only the energy supplying service, but also to other value added services, such as on-line shopping recommendation and targeted advertisement.
Generally, the previous research efforts in this domain may be divided into two categories: end-user models and econometric methods. End-user models are commonly used as an alternative to black-box methods [START_REF] Wood | Dynamic energy-consumption indicators for domestic appliances: environment, behaviour and design[END_REF][START_REF] Abreu | Household electricity consumption routines and tailored feedback[END_REF][START_REF] Aman | Improving energy use forecast for campus micro-grids using indirect indicators[END_REF][START_REF] Kolter | A large-scale study on predicting and contextualizing building energy usage[END_REF][START_REF] Beckel | Towards automatic classification of private households using electricity consumption data[END_REF]. They require information about housing conditions, electrical appliance usage and environmental factors. Such background information is used with energy domain knowledge to disaggregate the daily electricity consumption measurements of a specific user into elementary components, including heating/cooling, water usage, cooking and other behaviors. The disaggregation result is then applied to find out their usage preferences. The shortcoming of end-user models is that forecasting performance depends heavily on the quality of available information, which makes them sensitive to noise and unable to perform automatically. Econometric methods [START_REF] Beckel | Towards automatic classification of private households using electricity consumption data[END_REF][START_REF] Abreu | Household electricity consumption routines and tailored feedback[END_REF][START_REF] Kolter | A large-scale study on predicting and contextualizing building energy usage[END_REF] estimate the relationship between energy consumption profiles and the factors influencing consumption behavior using statistical machine learning approaches, such as support vector regressors, decision trees and so on. Econometric models are built by learning the mapping from pairs of the factors and energy consumption profiles automatically, which is appealing for realistic application deployments. Recently, this category has gained popularity. Most research efforts along this direction focus on estimating the users' general social economical factors, such as professions, family status, salary levels and so on.
The outcome of the first stage, which is in the form of a set of time windows with attendance probabilities for each customer, serves as an input to the second stage. In this stage, the approach aims to provide transportation plans for carriers who need to schedule their home-delivery operations. Here, information about delivery requests, such as origin and destination, delivery option and order size is combined with the estimated attendance probabilities to decide when and how a delivery should be made.
As already discussed in Section 2.2, scheduling of attended home deliveries is often done by first modeling the problem as a vehicle routing problem (usually with time windows and limited capacity). In short, in the context of last mile delivery, the vehicle routing problem involves the planning of a set of routes for a capacitated vehicle that starts from and ends with a (retailer's) warehouse, aiming at delivering the demanded goods to a set of customers [START_REF] Ehmke | Integration of information and optimization models for routing in city logistics[END_REF][START_REF] Cattaruzza | Vehicle routing problems for city logistics[END_REF].
Once the VRP problem is modeled, an optimization technique can be employed to solve it. In this paper, we propose a model that takes into account the customer absence probabilities and aims to establish transportation plans that maximize delivery success rates while minimizing total transportation distances. Other models can also be considered in the second stage of the approach, depending on the key performance indicators defined by each retailer or logistics provider as well as the limitations imposed by each business scenario.
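To make the second stage more concrete, the sketch below is a deliberately simplified illustration of such a model (my own, not the exact formulation evaluated in Section 4): each order is assigned to the candidate time slot with the highest estimated attendance probability, and a nearest-neighbour route is then built per slot as a stand-in for a full VRP solver.

```python
import math

def plan_deliveries(orders, depot=(0.0, 0.0)):
    """orders: list of dicts with 'id', 'xy' (coordinates) and 'p_home'
    (attendance probability per candidate slot). Returns slot -> visiting order."""
    # 1. Slotting: pick, for each order, the slot in which the customer is most likely home.
    by_slot = {}
    for o in orders:
        slot = max(o["p_home"], key=o["p_home"].get)
        by_slot.setdefault(slot, []).append(o)

    # 2. Routing: greedy nearest-neighbour tour per slot (placeholder for a real VRP solver).
    plans = {}
    for slot, group in by_slot.items():
        route, pos, remaining = [], depot, group[:]
        while remaining:
            nxt = min(remaining, key=lambda o: math.dist(pos, o["xy"]))
            route.append(nxt["id"])
            pos = nxt["xy"]
            remaining.remove(nxt)
        plans[slot] = route
    return plans

orders = [
    {"id": "A", "xy": (2, 1), "p_home": {"18-20": 0.9, "10-12": 0.3}},
    {"id": "B", "xy": (1, 4), "p_home": {"18-20": 0.2, "10-12": 0.8}},
    {"id": "C", "xy": (3, 2), "p_home": {"18-20": 0.7, "10-12": 0.6}},
]
print(plan_deliveries(orders))   # e.g. {'18-20': ['A', 'C'], '10-12': ['B']}
```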
Having introduced our approach, in the next section we will demonstrate its application using an example. We will also evaluate the effectiveness of the approach (and thus the effectiveness of data-driven techniques for e-grocery home delivery) by comparing it with a base scenario where absence probabilities are unknown.
Application example
This section aims to illustrate how the proposed methodological approach can be applied in a real world case. It also demonstrates potential benefits of the approach over conventional solutions. The illustration of the application follows the two-stage approach, in which Stage 1 relates to the customer-related data collection and mining; and Stage 2 to the transportation planning optimization.
Stage 1
Customer-related data collection
Due to the scarcity of available customer-related data, this work uses a publicly available electricity consumption trace data set, named CER ISSDA to simulate and demonstrate the use of estimating occupancy patterns of private households for estimating absence probabilities. The CER ISSDA dataset is collected by the Irish Commission for Energy Regulation (CER) in a smart meter study1 : it contains electricity consumption data of 4,225 private households and 485 small and medium enterprises (all called customer hereafter); the trace covers 1.5 years (from July 2009 to December 2010). For each customer, the daily load curve is sampled every 30 minutes: energy data can be thought of as a series of timestamps and energy readings.
In addition to energy data, the dataset includes a series of survey sheets and answers for each customer, describing their housing condition, occupancy, employment status, income level, social class, appliance usage information and other socio-economic factors. Forty-one survey questions belonging to five categories were selected that enable the customer profiles to be built based on heating and lighting behavior, hot water and other electrical appliances usage. As a sample for this work, we select 20 private households randomly from the dataset to simulate the shipping network in a practical application. To avoid the effects of seasonal variation on the consumption profiles, e.g. air conditioning use in summer and heating use in winter changing daily consumption patterns, we only focus on the time period from April 1 st to June 30 th in 2009 to collect electricity consumption data, which in total involves 91 days.
Data mining model
The detection of occupancy within a residential household is primarily based on activity detection. Activity within a household is often linked to electric consumption [START_REF] Kleiminger | Occupancy Detection from Electricity Consumption Data[END_REF]. Therefore, the variance and magnitude of a residential consumption profile can give indications about whether a household is occupied or not. Peaks and high variation in a consumption profile (indicating typical active behaviors) were labeled in our model and then compared to low variation and low overall consumption periods. The combination of low overall consumption and low consumption profile variation indicates a period of inoccupation (absence). Automated devices within a household can interfere with occupancy detection. High overall consumption due to heating devices left on can also interfere with occupancy detection. Thus it is necessary to identify patterns for each specific client. For each client, a typical profile variation for occupied and unoccupied periods was established in our model. A threshold of average consumption was also determined. When overall consumption was below the threshold and variation of the profile was relatively low, the period was determined to be unoccupied.
Based on the above qualitative analysis, we propose a computational model to estimate occupied periods for a given user, as described below. Hereafter, we use x_{i,j} to denote the electricity consumption level at the j-th time step of the i-th day for one specific user. Given the context of delivery planning, we especially focus on estimating occupancy within the time interval ranging from 8:00 am to 8:00 pm. Our estimation approach consists of 4 successive steps:
1. Indicator estimation: calculate the consumption magnitude x_{i,j} and the absolute values of the first-order differences D^1_{i,j} and D^2_{i,j}, as defined in Eq. (1). The three measurements form an indicator cell (x_{i,j}, D^1_{i,j}, D^2_{i,j}), which is used as the feature to detect occupancy patterns. Given all 91 days in the database, we have in total 91 (days) * 12 (delivery hours per day) * 2 (timestamps per hour) indicator cells for each user.
D^1_{i,j} = |x_{i,j} - x_{i,j-1}|  and  D^2_{i,j} = |x_{i,j} - x_{i,j+1}|   (1)
2. Label the L p indicator cells for each user with the most typical active behaviors manually by human experts on energy consumption. Treating occupancy detection as an anomaly detection problem, the labeled L p indicator cells form a reference set to identify whether the customer's activities are present within a given time period. Based on the labeled indicator cells, we build a linear kernel one-class SVM based detector F [START_REF] Schölkopf | Estimating the support of a high-dimensional distribution[END_REF]. The output Y i,j of the SVM detector is 1 or 0, deciding whether the human activity is present at the given time step or not (1 corresponds to the presence of the customer and vice versa).
Y_{i,j} = F(x_{i,j}, D^1_{i,j}, D^2_{i,j})   (2)
3. Apply the built SVM based detector on the 24 time steps between 8:00 am to 8:00 pm for each day besides the labeled L p time steps. The binary output of the detector is used as an estimated occupancy label of the concerned time steps. As such, for the i-th day, we can obtain a 24-dimensional binary valued vector as the estimated occupancy label vector of the concerned day.
4. Accumulate estimated occupancy label vectors derived from the total 91 days.
For each day of the week (from Monday to Sunday), we calculate the empirical expectation of the occupancy label at each time step as the occupancy probability of that time step for that day of the week. As a result, we generate a probability map M ∈ R^{24×7}. Each entry M_{k,j} represents the estimated occupancy probability of time step k (i.e., the (k+16)-th half-hour slot of the day) on day of the week j, where Monday to Sunday are denoted 0 to 6.
According to the algorithmic description, the occupancy detection procedure is defined as a one-class anomaly detection problem in our work. Compared with the wide variety of inactive consumption behaviors, typical active behavior patterns indicating occupancy are easier for human experts to identify manually. Therefore, the occupancy detector is built to describe the common characteristics of the active consumption profiles and, at the same time, to differentiate active profiles from inactive ones. This is a typical one-class learning problem in machine learning theory [START_REF] Schölkopf | Estimating the support of a high-dimensional distribution[END_REF] and one-class SVM matches this goal well. In previous research, it has been widely used to detect events of interest against background data when only a limited number of samples of the event are available. The indicator cell designed in this work covers both the instantaneous electricity consumption level and dynamical information about the consumption variation, represented by the two first-order differences. The feature design is based on the qualitative analysis of active consumption profiles: a high instantaneous consumption level and a high variance of the consumption measurements are strong indicators of active behaviors, so the three chosen features sensitively indicate the potential human activities behind the consumption measurements. Finally, the binary output of the occupancy detector can be noisy due to its intrinsic hard threshold. Furthermore, to insert the occupancy estimate as a constraint into the delivery planning problem, we need to smooth the binary decision into a soft, continuously valued confidence of occupancy for each specific time step. As a result, the empirical expectation is used as the estimate of the underlying true occupancy probability.
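As an illustration only (this is not the code used in the study), the pipeline described above can be sketched as follows, assuming half-hourly readings stored in a pandas DataFrame with hypothetical columns "day", "slot" (0..47) and "kwh", and manually labelled "clearly active" indicator cells provided in active_idx:

```python
# Sketch of the occupancy-estimation pipeline (indicator cells + one-class SVM).
import numpy as np
from sklearn.svm import OneClassSVM

def indicator_cells(df):
    # x_{i,j}, |x_{i,j}-x_{i,j-1}|, |x_{i,j}-x_{i,j+1}| restricted to 8:00-20:00.
    pivot = df.pivot(index="day", columns="slot", values="kwh").to_numpy()
    d1 = np.abs(np.diff(pivot, axis=1, prepend=pivot[:, :1]))   # backward difference
    d2 = np.abs(np.diff(pivot, axis=1, append=pivot[:, -1:]))   # forward difference
    cells = np.stack([pivot, d1, d2], axis=-1)                  # shape (days, 48, 3)
    return cells[:, 16:40, :]                                   # 24 delivery-hour steps

def occupancy_probability_map(cells, active_idx, weekday_of_day):
    # Fit a linear one-class SVM on the labelled "active" cells, predict all cells,
    # then average the binary labels per (time step, day of week) to obtain M in R^{24x7}.
    flat = cells.reshape(-1, 3)
    det = OneClassSVM(kernel="linear", nu=0.1).fit(flat[active_idx])
    y = (det.predict(flat) == 1).astype(int).reshape(cells.shape[:2])  # 1 = occupied
    M = np.zeros((24, 7))
    for dow in range(7):
        M[:, dow] = y[weekday_of_day == dow].mean(axis=0)
    return M
```

The column names, the nu value and the labelling interface are assumptions made for the sake of the example; only the overall structure (indicator features, one-class SVM, empirical averaging) follows the description above.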
Customer absence probability
Finally, the time slots with their absence probability are obtained for each customer, as shown in Figure 2. Among the 20 customers randomly selected, 5 are excluded from this study because the variation of their consumption curve is not significant (thus making these cases irrelevant to this study, e.g. absence probability always > 90%). Moreover, the daily working hours from 8h to 17h (common delivery hours in many countries) are chosen. The customer attendance probability is then the complement of the absence probability (knowing that both staying out and being inactive at home are considered as absence here). The results show that most customers have similar absence probability curves during most days of the week except Saturday; the figure therefore shows only the average weekday probability compared with Saturday (Sunday is not considered for delivery) for each customer. Accordingly, a suitable best delivery time window profile for every customer can be defined.
Stage 2
Delivery requests
This part aims to set up the delivery requests for the 15 customers identified in Stage 1. A request is defined by four attributes: size, origin, destination, and delivery option.
In order to describe origins and destinations, we use a two-dimensional (x,y) plane to simulate a city of 18×18 km², in which the 15 customers are randomly located (see Figure 3 and Table 1). In particular, (0,0) represents the e-tailer's storage point for the city.
Figure 3. Locations of the e-tailer's storage point and of the 15 customers (see also Table 1).
The objective of this stage is to establish delivery routes that satisfy all delivery requests, while minimizing the total distance generated by the first-round delivery and, in case of failure, by the rescheduled delivery. Several assumptions are made here.
(1) Every customer has one delivery request of size 1 in the week and the truck capacity is 5.
(2) The delivery option offered to the customers is the expected day of delivery. It is then up to the carrier to select the optimal delivery time on that day and propose it to the customers. Two types of days are considered here: weekdays and Saturday. With this assumption we explore the interest of Saturday delivery.
(3) All customers accept the proposed delivery time windows. The time window is set to 1 hour and the associated absence probability is the average of the values in Figure 2, e.g. the average of 8h-8h30 and 8h30-9h for the 8h-9h window.
(4) A delivery that fails due to the absence of the customer is rescheduled as a direct delivery from the e-tailer's storage point to the customer on the next day. Accordingly, the expected distance generated by the rescheduling is equal to the round-trip distance * the absence probability for each customer. In other words, we do not consider another VRPTW for the failed deliveries, due to the service constraint and the lack of knowledge on new deliveries.
(5) The truck's speed is set to 20 km/h in the city.
(6) The service time per customer is set to 5 minutes.
A short computational sketch of how assumptions (3)-(4) are used is given below.
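The sketch below shows how assumptions (3)-(4) translate the half-hourly probabilities into planning inputs; the dictionary keys and coordinates are illustrative only.

```python
# Hourly absence probability (assumption 3) and expected rescheduling distance (assumption 4).
import math

def hourly_absence(absence, start_hour):
    # 1-hour window = average of the two half-hour absence probabilities.
    a = absence[f"{start_hour}h-{start_hour}h30"]
    b = absence[f"{start_hour}h30-{start_hour + 1}h"]
    return (a + b) / 2.0

def expected_reschedule_distance(depot, customer, absence_prob):
    # A failed delivery is re-done as a direct round trip from the depot,
    # so its expected cost is (round-trip distance) * (absence probability).
    dist = math.hypot(customer[0] - depot[0], customer[1] - depot[1])
    return 2 * dist * absence_prob

# Example: a customer located at (10, 5) with a 30% absence probability in the chosen window.
extra_km = expected_reschedule_distance((0, 0), (10, 5), 0.3)
```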
Delivery optimization model
We assume that in each time slot the delivery customers' attendance probability is known and that the optimal time slot for each customer's delivery is that with the highest attendance probability. It is therefore a typical capacitated VRPTW [START_REF] Baldacci | New Route Relaxation and Pricing Strategies for the Vehicle Routing Problem[END_REF][START_REF] Solomon | Algorithms for the vehicle routing and scheduling problems with time window constraints[END_REF][START_REF] Azi | An exact algorithm for a single-vehicle routing problem with time windows and multiple routes[END_REF], in which the time window is the optimal time slot for delivery. We propose a Mixed Integer Linear Programming (MILP) model for the capacitated VRPTW which follows the guidelines in [START_REF] Toth | Models, relaxations and exact approaches for the capacitated vehicle routing problem[END_REF] and incorporates the classical constraints to enforce time windows that can be found in [START_REF] Azi | An exact algorithm for a single-vehicle routing problem with time windows and multiple routes[END_REF]. The MILP can be described as follows. Given a set of customers V={1,2…n} with known demands of q i for any i∈V, we have a fleet of homogeneous vehicles of capacity Q to deliver those demands, from a depot noted as 0 to customers. The directed graph can be thus noted as G=(V + ,A), where V + =V∪{0} is the set of nodes and A is the set of arcs. Each arc (i,j)∈A is associated with a travel time t ij >0 and a distance d ij >0. Each customer i∈V is associated with a service time s i and a time window [a i ,b i ] that presents respectively the earliest and latest time at which the service must begin at i. The objective is to minimize the total distance traveled to serve all customers while satisfying the capacity and time window constraints:
Min Σ_{i∈V+} Σ_{j∈V+} d_ij x_ij   (3)
s.t.
Σ_{j∈V+} x_ij = 1,  i ∈ V   (4)
Σ_{i∈V+} x_ih − Σ_{j∈V+} x_hj = 0,  h ∈ V   (5)
q_i ≤ u_i ≤ Q,  i ∈ V   (6)
u_i − u_j + Q·x_ij ≤ Q − q_j,  i, j ∈ V   (7)
t_i + (t_ij + s_i)·x_ij − N(1 − x_ij) ≤ t_j,  i, j ∈ V   (8)
a_i ≤ t_i ≤ b_i,  i ∈ V   (9)
x_ij ∈ {0,1},  i, j ∈ V+   (10)
u_i, t_i ≥ 0,  i ∈ V   (11)
With decision variables: x ij is 1 if arc (i,j) is included in any route, 0 otherwise; t i indicates the start time of service on every node i.
In the model, constraints (4)-(5) guarantee that every customer is visited exactly once and that every route begins and ends at the depot. Constraints (6)-(7) ensure that the total demand on every route does not exceed the vehicle capacity Q; in particular, u_i is a variable indicating the accumulated demand up to customer i. Constraints (8)-(9) ensure the feasibility of the time schedule on every route, with N a large number.
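For illustration, the MILP (3)-(11) can be written almost verbatim with PuLP; this is only a sketch (the experiments below were run with GUSEK/GLPK), and the data containers (distance dict d, travel-time dict tt, service times s, demands q, time windows a, b) are placeholder names.

```python
# Minimal PuLP sketch of the capacitated VRPTW (3)-(11); depot is node 0.
import pulp

def solve_vrptw(d, tt, s, q, a, b, Q, N=1e5):
    V = list(q)                      # customers
    Vp = [0] + V                     # nodes including the depot
    prob = pulp.LpProblem("capacitated_vrptw", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", [(i, j) for i in Vp for j in Vp if i != j], cat="Binary")
    u = pulp.LpVariable.dicts("u", V, lowBound=0)
    t = pulp.LpVariable.dicts("t", V, lowBound=0)
    prob += pulp.lpSum(d[i, j] * x[i, j] for i in Vp for j in Vp if i != j)           # (3)
    for i in V:
        prob += pulp.lpSum(x[i, j] for j in Vp if j != i) == 1                         # (4)
    for h in V:
        prob += (pulp.lpSum(x[i, h] for i in Vp if i != h)
                 - pulp.lpSum(x[h, j] for j in Vp if j != h)) == 0                     # (5)
        prob += u[h] >= q[h]                                                           # (6)
        prob += u[h] <= Q
        prob += t[h] >= a[h]                                                           # (9)
        prob += t[h] <= b[h]
    for i in V:
        for j in V:
            if i != j:
                prob += u[i] - u[j] + Q * x[i, j] <= Q - q[j]                          # (7)
                prob += t[i] + (tt[i, j] + s[i]) * x[i, j] - N * (1 - x[i, j]) <= t[j]  # (8)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [(i, j) for (i, j) in x if (pulp.value(x[i, j]) or 0) > 0.5]
```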
Results and optimal transportation plan
Five scenarios are designed for this case, as shown in Table 2. S0 serves as baseline scenario that does not consider time windows in transportation planning (thus not taking into account absence probabilities). In other words, S0 is a classic VRP optimizing transportation distance and it represents a conventional approach to the problem. S1 and S2 take into account the customers' optimal time windows profile only in weekdays. S3 and S4 consider both weekdays and Saturday so that customers are divided in two clusters: one for the customers with highest attendance probability appeared on weekdays and another one for the others. Scenarios 1-4 are therefore used as different ways the proposed approach can be implemented in this business scenario.
All time windows can be observed from Figure 2. Since the optimal time windows can be dispersive in a day, in S2 and S4 we deliberately added the constraint (12) to limit the waiting time in-between two successive customers to 60 minutes. Constraints ( 8) and ( 12) define that, when x ij =1, t j -60≤t i +s i +t ij ≤t j .
t i + (t ij + s i )x ij -N(1-x ij ) ≥ t j -60, i, j ∈ V (12)
All scenarios were run using GUSEK on a ThinkPad T440 with 4 GB of RAM. For all scenarios, every single computation process required about one minute, except S0, which required nearly 7 hours. The results are summarized in Table 2 and Table 3.

Table 3. Optimal tours in the scenarios
Sc. | Optimal Tours
S0 | R1=(0-10-6-1-13-8-0), R2=(0-7-12-5-4-2-0), R3=(0-11-15-14-9-3-0)
S1 | R1=(0-4-2-5-12-7-0), R2=(0-8-10-1-13-6-0), R3=(0-11-9-14-15-3-0)
S2 | R1=(0-7-0), R2=(0-4-0), R3=(0-6-12-5-0), R4=(0-8-13-15-3-2-0), R5=(0-9-14-11-1-10-0)
S3 | WD: R1=(0-4-8-13-12-7-0); Saturday: R2=(0-1-11-14-15-6-0), R3=(0-10-9-3-5-2-0)
S4 | WD: R1=(0-4-8-13-0), R2=(0-12-7-0); Saturday: R3=(0-5-2-0), R4=(0-1-11-10-0), R5=(0-9-14-3-15-6-0)
Theoretically, S0 provides the shortest routes for the first-round delivery, since it considers neither time windows nor failure probability. However, this routing plan leads to a low successful-delivery rate of 37% and, as a result, generates the highest total distance because of a significant number of rescheduled deliveries. A low rate of successful delivery can also be seen as poor service to customers, since it means that customers might not receive their orders on time. In terms of total distance, S3, which considers Saturday delivery, performs the best, reducing the distance by 20% compared to S0 thanks to a higher successful-delivery rate of 63%. This is because 9 of the 15 customers have a lower absence probability on Saturday. As shown by S2 vs S1, or S4 vs S3, constraint (12), which limits each waiting time, increases the total distance. However, without this constraint, the waiting time in-between two successive customers can be longer than 3 hours in S1 and S3 (the timeline results are not provided here due to lack of space). In practice, the waiting time depends closely on the number of customers to deliver in a tour.
Conclusions and Discussion
With the emergence of e-commerce in grocery retail, the food supply chain faces new challenges. In this paper, we focused on such a challenge regarding the successful delivery of grocery orders placed online. Due to the perishability and sensitivity of some grocery items, customer attendance is often critical for the successful delivery of online orders. As a solution to this problem, this paper introduced a two-stage methodological approach that utilizes customer-related data to schedule transportation plans. This is done by first estimating the probabilities of customer attendance/absence at different point of times during a day and then using these estimations in a way that satisfies a company's key performance indicators (e.g. maximize the probability of attended delivery while minimizing travel distance covered by delivery trucks).
As an application example, this paper presented an experimental study to investigate how a customer's historical electricity consumption data can be used to estimate time windows with a lower probability of inoccupation (absence). The best time windows were then used in a VRP model in order to plan the deliveries of online orders to customers aiming at improving delivery success rate. A numerical study has been conducted to demonstrate the effectiveness of the proposed approach that shows its potential in the delivery of online grocery shopping.
Besides increasing the rate of successful deliveries, the proposed approach can help etailers better understand the habits of their customers and thus the optimal delivery time for them. It can also be considered as a useful tool for dynamically pricing different delivery options and as a mechanism for time slot management. From a customer's point of view, the approach can improve customer satisfaction as it can reduce unnecessary traveling to pick up missed orders or long telephone calls required to re-arrange deliveries. It is also obvious that the approach can easily be used in different business cases in urban freight transportation and last mile logistics (e.g. non-food items, general merchandise) where attended home delivery is critical and alternative solutions cannot be easily offered.
This study is among the first ones integrating data mining techniques in urban freight transportation. Some prospects can thus be identified to this line of research. For example, one may test the approach with other customer-related datasets, e.g. water consumption, historical deliveries, or with different data mining techniques in order to compare accuracy and performance. The attendance probabilities can also be used in different VRP models or for different reasons such as slot pricing. Further, the proposed methodological approach can also be generalized from e-grocery commerce to all businesses that provide home delivery service, i.e. the general attended home delivery problem (AHDP).
Several limitations have to be carefully considered with regards to the proposed approach. Firstly, some legal issues, (e.g. data privacy and security), can arise from accessing and using energy consumption data of households by third parties. Secondly, some e-grocery retailers offer the possibility of time slot selection from the customer during the placement of an order. Even though the proposed approach is generic enough to cover this case, the application example in this paper did not demonstrate this interesting scenario. Finally, as any other data-driven approach, our approach might be limited by computational capacity. The first stage of our approach could be limited by the size of the training data and the data mining algorithms used on them. The second stage is mostly limited by algorithmic efficiency. Further field testing to select the appropriate, case-specific training data and algorithms is required before putting the approach into practice.
Figure 1. Methodology Flowchart
Figure 2. Heat map of time slots with absence probability during working hours (Weekdays vs Saturday)
Table 2. Scenarios and results
Sc. | Time Windows | Saturday Delivery | Waiting Time ≤ 60 mn | Distance of First Delivery (km) | Total Distance (km) | Δkm vs S0 | Average Probability of Successful First Delivery
S0 | N | N | N | 111 | 367 | - | 37%
S1 | Y | N | N | 130 | 320 | -13% | 55%
S2 | Y | N | Y | 166 | 356 | -3% | 55%
S3 | Y | Y | N | 143 | 295 | -20% | 63%
S4 | Y | Y | Y | 176 | 327 | -11% | 63%
http://www.ucd.ie/issda/data/commissionforenergyregulationcer/ | 47,758 | [
"1343",
"972513",
"989184"
] | [
"39111",
"160825",
"39111",
"214579",
"39111"
] |
01386174 | en | [
"info"
] | 2024/03/04 23:41:48 | 2017 | https://inria.hal.science/hal-01386174v2/file/heteroPrioApproxProofsRR.pdf | Olivier Beaumont
Lionel Eyraud-Dubois
Suraj Kumar
Approximation Proofs of a Fast and Efficient List Scheduling Algorithm for Task-Based Runtime Systems on Multicores and GPUs
Keywords: List scheduling, Approximation proofs, Runtime systems, Heterogeneous scheduling, Dense linear algebra
In High Performance Computing, heterogeneity is now the norm with specialized accelerators like GPUs providing efficient computational power. The added complexity has led to the development of task-based runtime systems, which allow complex computations to be expressed as task graphs, and rely on scheduling algorithms to perform load balancing between all resources of the platforms. Developing good scheduling algorithms, even on a single node, and analyzing them can thus have a very high impact on the performance of current HPC systems. The special case of two types of resources (namely CPUs and GPUs) is of practical interest. HeteroPrio is such an algorithm which has been proposed in the context of fast multipole computations, and then extended to general task graphs with very interesting results. In this paper, we provide a theoretical insight on the performance of HeteroPrio, by proving approximation bounds compared to the optimal schedule in the case where all tasks are independent and for different platform sizes. Interestingly, this shows that spoliation allows to prove approximation ratios for a list scheduling algorithm on two unrelated resources, which is not possible otherwise. We also establish that almost all our bounds are tight. Additionally, we provide an experimental evaluation of HeteroPrio on real task graphs from dense linear algebra computation, which highlights the reasons explaining its good practical performance.
Introduction
Accelerators such as GPUs are more and more commonplace in processing nodes due to their massive computational power, usually beside multicores. When trying to exploit both CPUs and GPUs, users face several issues. Indeed, several phenomena are added to the inherent complexity of the underlying NP-hard optimization problem.
First, multicores and GPUs are unrelated resources, in the sense that depending on the targeted kernel, the performance of the GPUs may be much higher, close or even worse than the performance of a CPU. In the literature, unrelated resources are known to make scheduling problems harder (see [START_REF] Brucker | Complexity results for scheduling problems[END_REF] for a survey on the complexity of scheduling problems, [START_REF] Lenstra | Approximation algorithms for scheduling unrelated parallel machines[END_REF] for the specific simpler case of independent tasks scheduling and [START_REF] Bleuse | Scheduling Independent Tasks on Multi-cores with GPU Accelerators[END_REF] for a recent survey in the case of CPU and GPU nodes). Second, the number of available architectures has increased dramatically with the combination of available resources (both in terms of multicores and accelerators). Therefore, it is almost impossible to develop optimized hand tuned kernels for all these architectures. Third, nodes have many shared resources (caches, buses) and exhibit complex memory access patterns (NUMA effects), that render the precise estimation of the duration of tasks and data transfers extremely difficult.
All these characteristics make it hard to design scheduling and resource allocation policies even on very regular kernels such as linear algebra. On the other hand, this situation favors dynamic strategies where decisions are made at runtime based on the state of the machine and on the knowledge of the application (to favor tasks that are close to the critical path for instance). In recent years, several task-based systems have been developed such as StarPU [START_REF] Augonnet | Starpu: A unified platform for task scheduling on heterogeneous multicore architectures[END_REF], StarSs [START_REF] Planas | Hierarchical taskbased programming with StarSs[END_REF], SuperMatrix [START_REF] Chan | SuperMatrix: A multithreaded runtime scheduling system for algorithms-by-blocks[END_REF], QUARK [START_REF] Yarkhan | QUARK Users' Guide: QUeueing And Runtime for Kernels[END_REF], XKaapi [START_REF] Hermann | Multi-GPU and Multi-CPU Parallelization for Interactive Physics Simulations[END_REF] or PaRSEC [START_REF] Bosilca | PaRSEC: A programming paradigm exploiting heterogeneity for enhancing scalability[END_REF]. All these runtime systems model the application as a DAG, where nodes correspond to tasks and edges to dependencies between these tasks. At runtime, the scheduler knows (i) the state of the different resources (ii) the set of tasks that are currently processed by all non-idle resources (iii) the set of (independent) tasks whose all dependencies have been solved (iv) the location of all input data of all tasks (v) possibly an estimation of the duration of each task on each resource and of each communication between each pair of resources and (vi) possibly priorities associated to tasks that have been computed offline. Therefore, the scheduling problem consists in deciding, for an independent set of tasks, given the characteristics of these tasks on the different resources, where to place and to execute them. This paper is devoted to this specific problem.
On the theoretical side, several solutions have been proposed for this problem, including PTAS (see for instance [START_REF] Bonifaci | Scheduling unrelated machines of few different types[END_REF]). Nevertheless, in the target application, dynamic schedulers must take their decisions at runtime and are themselves on the critical path of the application. This reduces the spectrum of possible algorithms to very fast ones, whose complexity to decide which task to execute next should be sublinear in the number of ready tasks.
Several scheduling algorithms have been proposed in this context and can be classified in several classes. The first class of algorithms is based on (variants of) HEFT [START_REF] Topcuouglu | Performance-Effective and Low-Complexity Task Scheduling for Heterogeneous Computing[END_REF], where the priority of tasks is computed based on their expected distance to the last node, with several possible metrics to define the expected durations of tasks (given that tasks can be processed on heterogeneous resources) and data transfers (given that input data may be located on different resources).
To the best of our knowledge there is not any approximation ratio for this class of algorithms on unrelated resources and Bleuse et al. [START_REF] Bleuse | Scheduling Independent Tasks on Multi-cores with GPU Accelerators[END_REF] have exhibited an example on m CPUs and 1 GPU where HEFT algorithm achieves a makespan Ω(m) times worse the optimal. The second class of scheduling algorithms is based on more sophisticated ideas that aim at minimizing the makespan of the set of ready tasks (see for instance [START_REF] Bleuse | Scheduling Independent Tasks on Multi-cores with GPU Accelerators[END_REF]). In this class of algorithms, the main difference lies in the compromise between the quality of the scheduling algorithm (expressed as its approximation ratio when scheduling independent tasks) and its cost (expressed as the complexity of the scheduling algorithm). At last, a third class of algorithms has recently been proposed (see for instance [START_REF] Agullo | Are Static Schedules so Bad? A Case Study on Cholesky Factorization[END_REF]), in which scheduling decisions are based on the affinity between tasks and resources, i.e. try to process the tasks on the best suited resource for it.
In this paper, we concentrate on HeteroPrio that belongs to the third class and that is described in details in Section 2. More specifically, we prove that HeteroPrio combines the best of all worlds. Indeed, after discussing the related work in Section 3 and introducing notations and general results in Section 4, we first prove that contrarily to HEFT variants, HeteroPrio achieves a bounded approximation ratio in Section 5 and we provide a set of proved and tight approximation results, depending on the number of CPUs and GPUs in the node. At last, we provide in Section 6 a set of experimental results showing that, besides its very low complexity, HeteroPrio achieves a better performance than the other schedulers based either on HEFT or on an approximation algorithm for independent tasks scheduling. Concluding remarks are given in Section 7.
HeteroPrio Principle
Affinity Based Scheduling
HeteroPrio has been proposed in the context of task-based runtime systems responsible for allocating tasks onto heterogeneous nodes typically consisting of a few CPUs and GPUs [START_REF] Agullo | Task-based FMM for heterogeneous architectures[END_REF].
Historically, in most systems, tasks are ordered by priorities (computed offline) and the highest priority ready task is allocated on the resource that is expected to complete it first, given the estimation of the transfer times of its input data and the expected processing time of this task on this resource. These systems have shown some limits in strongly heterogeneous and unrelated systems, what is typically the case of nodes consisting of both CPUs and GPUs. Indeed, the relative efficiency of accelerators, that we call the affinity in what follows, strongly differs from one task to another. Let us for instance consider the case of Cholesky factorization, where 4 types of tasks (kernels dpotrf, dtrsm, dsyrk and dgemm) are involved. The acceleration factors are depicted in Table 1.
In all what follows, the acceleration factor is always defined as the ratio between the processing time on a CPU and on a GPU, so that the acceleration factor may be smaller than 1. From this table (Table 1 reports the ratio of CPU time to GPU time for the dpotrf, dtrsm, dsyrk and dgemm kernels), we can extract the main features that will influence our model. The acceleration factor strongly depends on the kernel. Some kernels, like dsyrk and dgemm, are almost 30 times faster on GPUs, while others, like dpotrf, are only slightly accelerated. Based on this observation, a different class of runtime schedulers for task based systems has been developed, in which the affinity between tasks and resources plays the central role. HeteroPrio belongs to this class. In these systems, when a resource becomes idle, it selects among the ready tasks the one for which it has a maximal affinity. For instance, in the case of Cholesky factorization, among the ready tasks, CPUs will prefer dpotrf to dtrsm to dsyrk to dgemm and GPUs will prefer dgemm to dsyrk to dtrsm to dpotrf. The HeteroPrio allocation strategy has been studied in the context of StarPU for several linear algebra kernels and it has been proved experimentally that it enables a better utilization of slow resources than other strategies based on the minimization of the completion time. Nevertheless, in order to be efficient, HeteroPrio must be associated with a spoliation mechanism. Indeed, in the above description, nothing prevents the slow resource from executing a task for which it is arbitrarily badly suited, thus leading to arbitrarily bad results. Therefore, when a fast resource is idle and would be able to restart a task already started on a slow resource and to finish it earlier than on the slow resource, the task is spoliated and restarted on the fast resource. Note that this mechanism does not correspond to preemption since all the progress made on the slow resource is lost. It is therefore less efficient than preemption, but it can be implemented in practice, which is not the case of preemption on heterogeneous resources like CPUs and GPUs.
In what follows, since task based runtime systems see a set of independent tasks, we will concentrate on this problem and we will prove approximation ratios for HeteroPrio under several scenarios for the composition of the heterogeneous node (namely 1 GPU and 1 CPU, 1 GPU and several CPUs and several GPUs and several CPUs).
HeteroPrio Algorithm for a set of Independent Tasks
When priorities are associated with tasks, Line 1 of Algorithm 1 takes them into account for breaking ties among tasks with the same acceleration factor, and puts the highest (resp. lowest) priority task first in the scheduling queue for acceleration factors ≥ 1 (resp. < 1). The queue of ready tasks in Algorithm 1 can be implemented as a heap. Therefore, the time complexity of Algorithm 1 is O(N log(N)), where N is the number of ready tasks.
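As an illustration only (and not the StarPU implementation), the list phase of this procedure, i.e. the schedule S^NS_HP built before any spoliation by Algorithm 1 below, can be simulated in a few lines; the data structures and worker names are assumptions made for the example, and the spoliation step (Line 11) is deliberately omitted.

```python
# Sketch of the list phase of HeteroPrio for independent tasks:
# tasks are sorted once by non-increasing acceleration factor p/q,
# GPUs take tasks from the front of the queue and CPUs from the back.
import heapq
from collections import deque

def heteroprio_list_phase(tasks, n_cpu, n_gpu):
    """tasks: dict name -> (cpu_time, gpu_time). Returns (task, worker, start, end) tuples."""
    queue = deque(sorted(tasks, key=lambda T: tasks[T][0] / tasks[T][1], reverse=True))
    workers = [(0.0, f"gpu{g}", "gpu") for g in range(n_gpu)] + \
              [(0.0, f"cpu{c}", "cpu") for c in range(n_cpu)]
    heapq.heapify(workers)              # (time at which the worker becomes idle, id, kind)
    schedule = []
    while queue:
        idle_at, wid, kind = heapq.heappop(workers)
        T = queue.popleft() if kind == "gpu" else queue.pop()
        duration = tasks[T][1] if kind == "gpu" else tasks[T][0]
        schedule.append((T, wid, idle_at, idle_at + duration))
        heapq.heappush(workers, (idle_at + duration, wid, kind))
    return schedule   # spoliation (Line 11 of Algorithm 1) would post-process this schedule

# Example with two CPUs and one GPU:
# heteroprio_list_phase({"A": (10.0, 1.0), "B": (3.0, 3.0)}, n_cpu=2, n_gpu=1)
```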
Algorithm 1: The HeteroPrio Algorithm for a set of independent tasks
1: Sort ready tasks in queue Q by non-increasing acceleration factors
2: while all tasks did not complete do
3:   if all workers are busy then
4:     Wait until a worker becomes idle
5:   end if
6:   Select an idle worker W
7:   if Q ≠ ∅ then
8:     Remove a task T from the beginning of Q if W is a GPU worker, otherwise from the end of Q
9:     W starts processing T
10:  else
11:    Consider the tasks running on the other type of resource in decreasing order of their expected completion time. If the expected completion time of such a task T′ running on a worker W′ can be improved on W, T′ is spoliated and W starts processing T′.
12:  end if
13: end while

Related Works

The problem considered in this paper is a special case of the standard unrelated scheduling problem R||C_max. Lenstra et al [START_REF] Lenstra | Approximation algorithms for scheduling unrelated parallel machines[END_REF] proposed a PTAS for the general problem with a fixed number of machines, and a 2-approximation algorithm, based on the rounding of the optimal solution of the linear program which describes the preemptive version of the problem. This result has recently been improved [START_REF] Shchepin | An optimal rounding gives a better approximation for scheduling unrelated machines[END_REF] to a 2 − 1/m approximation. However, the time complexity of these general algorithms is too high to allow using them in the context of runtime systems.
The more specialized case with a small number of types of resources has been studied in [START_REF] Bonifaci | Scheduling unrelated machines of few different types[END_REF] and a PTAS has been proposed, which also contains a rounding phase whose complexity makes it impractical, even for 2 different types of resources. Greedy approximation algorithms for the online case have been proposed by Imreh on two different types of resources [START_REF] Imreh | Scheduling problems on two sets of identical machines[END_REF]. These algorithms have linear complexity, however most of their decisions are based on comparing task execution times on both types of resources and not on trying to balance the load. The result is that in the practical test cases of interest to us, almost all tasks are scheduled on the GPUs and the performance is significantly worse. Finally, Bleuse et al [START_REF] Bleuse | Scheduling Independent Tasks on Multi-cores with GPU Accelerators[END_REF][START_REF] Bleuse | Scheduling Data Flow Program in XKaapi: A New Affinity Based Algorithm for Heterogeneous Architectures[END_REF] have proposed algorithms with varying approximation factors (4/3, 3/2 and 2) based on dynamic programming and dual approximation techniques. These algorithms have better approximation ratios than the ones proved in this paper, but their time complexity is higher. Furthermore, as we show in Section 6, their actual performance is not as good when used iteratively on the set of ready tasks in the context of task graph scheduling. We also exhibit that HeteroPrio performs better on average than the above mentioned algorithms, despite its higher worst case approximation ratio.
In homogeneous scheduling, list algorithms (i.e. algorithms that never leave a resource idle if there exists a ready task) are known to have good practical performance. In the context of heterogeneous scheduling, it is well known that list scheduling algorithms cannot achieve an approximation guarantee. Indeed, even with two resources and two tasks, if one resource is much slower than the other, it can be arbitrarily better to leave it idle and to execute both tasks on the fast resource. The HeteroPrio algorithm considered in this paper is based on a list algorithm, but the use of spoliation (see Section 2.2) avoids this problem.
Notations and First Results
General Notations
In this paper, we study the theoretical guarantee of HeteroPrio for a set of independent tasks. In the scheduling problem that we consider, the input is thus a platform of n GPUs and m CPUs and a set I of independent tasks, where task T i has processing time p i on CPU and q i on GPU, and the goal is to schedule those tasks on the resources so as to minimize the makespan. We define the acceleration factor of task T i as ρ i = pi qi and C Opt max (I) denotes the optimal makespan of set I.
To analyze the behavior of HeteroPrio, it is useful to consider the list schedule obtained before any spoliation attempt. We will denote this schedule S NS HP , and the final HeteroPrio schedule is denoted S HP . Figure 1 shows S NS HP and S HP for a set of independent tasks I. We define T FirstIdle as the first time any worker is idle in S NS HP , this is also the first time any spoliation can occur. Therefore after time T FirstIdle , each worker executes at most one task in S NS HP . Finally, we define C HP max (I) as the makespan of S HP on instance I.
Area Bound
In this section, we present and characterize a lower bound on the optimal makespan. This lower bound is obtained by assuming that tasks are divisible, i.e. can be processed in parallel on any number of resources. More specifically, any fraction x i of task T i is allowed to be processed on CPUs, and this fraction overall consumes CPU resources for x i p i time units. Then, the lower bound AreaBound(I) for a set of tasks I on m CPUs and n GPUs is the solution (in rational numbers) of the following linear program.
Minimize AreaBound(I) such that
Σ_{i∈I} x_i p_i ≤ m · AreaBound(I)   (1)
Σ_{i∈I} (1 − x_i) q_i ≤ n · AreaBound(I)   (2)
0 ≤ x_i ≤ 1
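This linear program is small enough to be solved directly; the sketch below (illustrative only) transcribes it into scipy, with the CPU fractions x_i and the bound B as variables.

```python
# AreaBound(I): minimize B subject to sum x_i p_i <= m*B and sum (1-x_i) q_i <= n*B.
import numpy as np
from scipy.optimize import linprog

def area_bound(p, q, m, n):
    p, q = np.asarray(p, float), np.asarray(q, float)
    nb = len(p)
    c = np.zeros(nb + 1); c[-1] = 1.0                 # objective: minimize B (last variable)
    A_ub = np.zeros((2, nb + 1))
    A_ub[0, :nb], A_ub[0, -1] = p, -m                 # sum x_i p_i - m*B <= 0
    A_ub[1, :nb], A_ub[1, -1] = -q, -n                # sum (1-x_i) q_i - n*B <= 0
    b_ub = np.array([0.0, -q.sum()])
    bounds = [(0.0, 1.0)] * nb + [(0.0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun                                    # AreaBound(I)

# Example: area_bound(p=[10, 3, 4], q=[1, 3, 2], m=2, n=1)
```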
Since any valid solution to the scheduling problem can be converted into a solution of this linear program, it is clear that AreaBound(I) ≤ C Opt max (I). Another immediate bound on the optimal is ∀T ∈ I, min(p T , q T ) ≤ C Opt max (I). By contradiction and with simple exchange arguments, one can prove the following two lemmas. Lemma 1. In the area bound solution, the completion time on each class of resources is the same, i.e. constraints (1) and (2) are both equalities.
Proof. Let us assume that one of the inequality constraints of area solution is not tight. Without loss of generality, let us assume that Constraint (1) is not tight. Then some load from the GPUs can be transferred to the CPUs which in turn decreases the value of AreaBound(I). This achieves the proof of the Lemma.
Lemma 2. In AreaBound(I), the assignment of tasks is based on the acceleration factor, i.e. ∃k > 0 such that ∀i,
x i < 1 ⇒ ρ i ≥ k and x i > 0 ⇒ ρ i ≤ k.
Proof. Let us assume ∃(T 1 ,T 2 ) such that (i) T 1 is partially processed on GPUs (i.e., x 1 < 1), (ii) T 2 is partially processed on CPUs (i.e., x 2 > 0) and (iii)
ρ 1 < ρ 2 .
Let W C and W G denote respectively the overall work on CPUs and GPUs in AreaBound(I). If we transfer a fraction 0
< 2 < min(x 2 , (1-x1)p1 p2 ) of T 2 work from CPU to GPU and a fraction 2q2 q1 < 1 < 2p2 p1 of T 1 work from GPU to CPU, the overall loads W C and W G become W C = W C + 1 p 1 -2 p 2 W G = W G -1 q 1 + 2 q 2 Since p1 p2 < 2 1 < q1 q2
, then both W C < W C and W G < W G hold true, and hence the AreaBound(I) is not optimal. Therefore, ∃ a positive constant k such that ∀i on GPU, ρ i ≥ k and ∀i on CPU, ρ i ≤ k.
Summary of Approximation Results
This paper presents several approximation results depending on the number of CPUs and GPUs. The following table presents a quick overview of the main results proven in Section 5.
(#CPUs, #GPUs) | Approximation ratio | Worst case example
(1, 1) | (1+√5)/2 | (1+√5)/2
(m, 1) | (3+√5)/2 | (3+√5)/2
(m, n) | 2 + √2 ≈ 3.41 | 2 + 2/√3 ≈ 3.15
5 Proof of HeteroPrio Approximation Results
General Lemmas
The following lemma gives a characterization of the work performed by Het-eroPrio at the beginning of the execution, and shows that HeteroPrio performs as much work as possible when all resources are busy. At any instant t, let us define I (t) as the sub-instance of I composed of the fractions of tasks that have not been entirely processed at time t by HeteroPrio. Then, a schedule beginning like HeteroPrio (until time t) and ending like AreaBound(I (t)) completes in AreaBound(I). Proof. HeteroPrio assigns tasks based on their acceleration factors. Therefore, at instant t, ∃k 1 ≤ k 2 such that (i) all tasks (at least partially) processed on GPUs have an acceleration factor larger than k 2 , (ii) all tasks (at least partially) allocated on CPUs have an acceleration factor smaller than k 1 and (iii) all tasks not assigned yet have an acceleration factor between k 1 and k 2 . After t, AreaBound(I ) satisfies Lemma 2, and thus ∃k with k 1 ≤ k ≤ k 2 such that all tasks of I with acceleration factor larger than k are allocated on GPUs and all tasks of I with acceleration factor smaller than k are allocated on CPUs. Therefore, combining above results before and after t, the assignment S beginning like HeteroPrio (until time t) and ending like AreaBound(I (t)) satisfies the following property: ∃k > 0 such that all tasks of I with acceleration factor larger than k are allocated on GPUs and all tasks of I with acceleration factor smaller than k are allocated on CPUs. This assignment S, whose completion time on both CPUs and GPUs (thanks to Lemma 1) is t + AreaBound(I ) clearly defines a solution of the fractional linear program defining the area bound solution, so that t + AreaBound(I ) ≥ AreaBound(I).
Similarly, AreaBound(I) satisfies both Lemma 2 with some value k and Lemma 1 so that in AreaBound(I), both CPUs and GPUs complete their work simultaneously. If k < k, more work is assigned to GPUs in AreaBound(I) than in S, so that, by considering the completion time on GPUs, we get AreaBound(I) ≥ t + AreaBound(I ). Similarly, if k > k, by considering the completion time on CPUs, we get AreaBound(I) ≥ t + AreaBound(I ). This achieves the proof of Lemma 3.
Since AreaBound(I) is a lower bound on C Opt max (I), the above lemma implies that (i) at any time t ≤ T FirstIdle in S NS HP , t + AreaBound(I (t)) ≤ C Opt max (I), (ii) T FirstIdle ≤ C Opt max (I), and thus all tasks start before
C Opt max (I) in S NS HP , (iii) if ∀i ∈ I, max(p i , q i ) ≤ C Opt max (I), then C HP max (I) ≤ 2C Opt max (I).
Another interesting characteristic of HeteroPrio is that spoliation can only take place from one type of resource to the other. Indeed, since assignment in S NS HP is based on the acceleration factors of the tasks, and since a task can only be spoliated if it can be processed faster on the other resource, we get the following lemmas.
Lemma 4. If, in S^NS_HP, a resource processes a task whose execution time is not larger on the other resource, then no task is spoliated from the other resource.
Proof. Without loss of generality let us assume that there exists a task T executed on a CPU in S NS HP , such that p T ≥ q T . We prove that in that case, there is no spoliated task on CPUs, which is the same thing as there being no aborted task on GPUs.
T is executed on a CPU in S NS HP , and p T q T ≥ 1, therefore from HeteroPrio principle, all tasks on GPUs in S NS HP have an acceleration factor at least p T q T ≥ 1. Non spoliated tasks running on GPUs after T FirstIdle are candidates to be spoliated by the CPUs. But for each of these tasks, the execution time on CPU is at least as large as the execution time on GPU. It is thus not possible for an idle CPU to spoliate any task running on GPUs, because this task would not complete earlier on the CPU. Lemma 5. In HeteroPrio, if a resource executes a spoliated task then no task is spoliated from this resource.
Proof. Without loss of generality let us assume that T is a spoliated task executed on a CPU. From the HeteroPrio definition, p T < q T . It also indicates that T was executed on a GPU in S NS HP with q T ≥ p T . By Lemma 4, CPUs do not have any aborted task due to spoliation.
Finally, we will also rely on the following lemma, which gives the worst case performance of a list schedule when all task lengths are large (i.e. > C^Opt_max) on one type of resource.

Lemma 6. Let B ⊆ I be a set of tasks whose processing times on one type of resource are all larger than C^Opt_max(I). Then any list schedule of B on the k resources of the other type has length at most (2 − 1/k) C^Opt_max(I).

Proof. Without loss of generality, let us assume that the processing time of each task of set B on CPU is larger than C^Opt_max(I). All these tasks must therefore be processed on the GPUs in an optimal solution. If scheduling this set B on k GPUs can be done in time C, then C ≤ C^Opt_max(I). The standard list scheduling result from Graham implies that the length of any list schedule of the tasks of B on GPUs is at most (2 − 1/k)C ≤ (2 − 1/k)C^Opt_max(I).
Approximation Ratio with 1 GPU and 1 CPU
Thanks to the above lemmas, we are able to prove an approximation ratio of φ = (1+√5)/2 for HeteroPrio when the node is composed of 1 CPU and 1 GPU. We will also prove that this result is the best achievable by providing a task set I for which the approximation ratio of HeteroPrio is φ.
Theorem 7. For any instance I with 1 CPU and 1 GPU, C HP max (I) ≤ φC Opt max (I).
Proof. Without loss of generality, let us assume that the first idle time (at instant T_FirstIdle) occurs on the GPU and that the CPU is processing the last remaining task T. We will consider two main cases, depending on the relative values of T_FirstIdle and (φ − 1)C^Opt_max.
– T_FirstIdle ≤ (φ − 1)C^Opt_max. In S^NS_HP, the finish time of task T is at most T_FirstIdle + p_T. If task T is spoliated by the GPU, its execution time is T_FirstIdle + q_T. In both cases, the finish time of task T is at most T_FirstIdle + min(p_T, q_T) ≤ (φ − 1)C^Opt_max + C^Opt_max = φC^Opt_max.
– T_FirstIdle > (φ − 1)C^Opt_max.
If T ends before φC Opt max on the CPU in S NS HP , since spoliation can only improve the completion time, this ends the proof of the theorem. In what follows, we assume that the completion time of T on the CPU in S NS HP is larger than φC Opt max (I), as depicted in Figure 2.
It is clear that T is the only unfinished task after C Opt max . Let us denote by α the fraction of T processed after C Opt max on the CPU. Then αp T > (φ -1)C Opt max since T ends after φC Opt max by assumption. Lemma 3 applied at instant t = T FirstIdle implies that the GPU is able to process the fraction α of T by C Opt max (see Figure 3) while starting this fraction at
T FirstIdle ≥ (φ -1)C Opt max so that αq T ≤ (1 -(φ -1))C Opt max = (2 -φ)C Opt max .
Therefore, the acceleration factor of T is at least (φ − 1)/(2 − φ) = φ. Since HeteroPrio assigns tasks to the GPU based on their acceleration factors, all tasks of the set S of tasks assigned to the GPU in S^NS_HP also have an acceleration factor at least φ.
Let us now prove that the GPU is able to process S ∪ {T} in time φC^Opt_max. Let us split S ∪ {T} into two sets S1 and S2 depending on whether the tasks of S ∪ {T} are processed on the GPU (S1) or on the CPU (S2) in the optimal solution. By construction, the processing time of S1 on the GPU is at most C^Opt_max and the processing of S2 on the CPU takes at most C^Opt_max. Since the acceleration factor of tasks of S2 is larger than φ, the processing time of tasks of S2 on the GPU is at most C^Opt_max/φ. The GPU can therefore process all tasks of S ∪ {T} within (1 + 1/φ)C^Opt_max = φC^Opt_max, so that T, either on the CPU or after spoliation, completes before φC^Opt_max, which ends the proof.

Theorem 8. Theorem 7 is tight, i.e. there exists an instance I with 1 CPU and 1 GPU such that C^HP_max(I) = φ C^Opt_max(I).

Proof. Let us consider the instance I consisting of 2 tasks X and Y, with p_X = φ, q_X = 1, p_Y = 1 and q_Y = 1/φ, such that ρ_X = ρ_Y = φ. The minimum length of task X is 1, so that C^Opt_max ≥ 1. Moreover, allocating X on the GPU and Y on the CPU leads to a makespan of 1, so that C^Opt_max ≤ 1 and finally C^Opt_max = 1. On the other hand, consider the following valid HeteroPrio schedule, where the CPU first selects X and the GPU first selects Y. The GPU becomes available at instant 1/φ = φ − 1 but does not spoliate task X since it cannot complete X earlier than its expected completion time on the CPU. Therefore, the completion time of HeteroPrio is φ = φ C^Opt_max.
Approximation Ratio with 1 GPU and m CPUs
In the case of a single GPU and m CPUs, the approximation ratio of HeteroPrio becomes 1 + φ = (3+√5)/2, as proved in Theorem 9, and this bound is tight (asymptotically when m becomes large) as proved in Theorem 11.
Theorem 9. HeteroPrio achieves an approximation ratio of 1 + φ = (3+√5)/2 for any instance I on m CPUs and 1 GPU.
Proof. Let us assume by contradiction that there exists a task T whose completion time is larger than (1 + φ)C^Opt_max. We know that all tasks start before C^Opt_max in S^NS_HP. If T is executed on the GPU in S^NS_HP, then q_T > C^Opt_max and thus p_T ≤ C^Opt_max. Since at least one CPU is idle at time T_FirstIdle, T should have been spoliated and processed by 2C^Opt_max < (1 + φ)C^Opt_max, a contradiction. Therefore T is processed on a CPU in S^NS_HP, and finishes later than (1 + φ)C^Opt_max in S_HP. Let us denote by S the set of all tasks spoliated by the GPU (from a CPU to the GPU) before considering T for spoliation in the execution of HeteroPrio, and let us denote by S′ = S ∪ {T}. The following lemma will be used to complete the proof.
Lemma 10. The following holds true (i) p i > C Opt max for all tasks i of S , (ii) the acceleration factor of T is at least φ, (iii) the acceleration factor of tasks running on the GPU in S NS HP is at least φ. Proof. of Lemma 10. Since all tasks start before T FirstIdle ≤ C Opt max in S NS HP , and since T finishes after (1 + φ)C Opt max in S NS HP , then p T > φC Opt max . Since HeteroPrio performs spoliation of tasks in decreasing order of their completion time, the same applies to all tasks of S : ∀i ∈ S , p i > φC Opt max , and thus q i ≤ C Opt max . Since p T > φC Opt max and q T ≤ C Opt max , then ρ T > φ. Since T is executed on a CPU in S NS HP , all tasks executed on GPU in S NS HP have an acceleration factor at least φ.
Since T is processed on the CPU in S^NS_HP and p_T > q_T, Lemma 4 applies and no task is spoliated from the GPU. Let A be the set of tasks running on the GPU right after T_FirstIdle in S^NS_HP. We consider only one GPU, therefore |A| ≤ 1.
1. If A = {a} with q_a ≤ (φ − 1)C^Opt_max, then Lemma 6 applies to S′ (with n = 1) and the overall completion time is at most T_FirstIdle + q_a + C^Opt_max ≤ (φ + 1)C^Opt_max.
2. If A = {a} with q_a > (φ − 1)C^Opt_max, then by Lemma 10 (iii), p_a ≥ φ q_a > φ(φ − 1)C^Opt_max = C^Opt_max. Lemma 6 therefore applies to S′ ∪ {a} on the single GPU: starting from the beginning of a, which occurs before T_FirstIdle ≤ C^Opt_max, the GPU processes a and then the tasks of S′ within at most C^Opt_max, so that the overall completion time is at most 2C^Opt_max ≤ (φ + 1)C^Opt_max. The case A = ∅ is identical to the first case with q_a = 0.
In all cases, T completes before (φ + 1)C^Opt_max, a contradiction.

Theorem 11. Theorem 9 is tight, i.e. for any δ > 0, there exists an instance I such that C^HP_max(I) ≥ (φ + 1 − δ)C^Opt_max(I).

Proof. ∀ε > 0, let I denote the following set of tasks:

Task | CPU Time | GPU Time | # of tasks | accel. ratio
T1 | 1 | 1/φ | 1 | φ
T2 | φ | 1 | 1 | φ
T3 | ε | ε | (mx)/ε | 1
T4 | εφ | ε | x/ε | φ
where x = (m -1)/(m + φ).
The minimum length of task T 2 is 1, so that C Opt max ≥ 1. Moreover, if T 1 , T 3 and T 4 are scheduled on CPUs and T 2 on the GPU (this is possible if is small enough), then the completion time is 1, so that C Opt max = 1. Consider the following valid HeteroPrio schedule. The GPU first selects tasks from T 4 and the CPUs first select tasks from T 3 . All resources become available at time x. Now, the GPU selects task T 1 and one of the CPUs selects task T 2 , with a completion time of x + φ. The GPU becomes available at x + 1/φ but does not spoliate T 2 since it would not finish before x + 1/φ + 1 = x + φ. The makespan of HeteroPrio is thus x + φ, and since x tends towards 1 when m becomes large, the approximation ratio of HeteroPrio on this instance tends towards 1 + φ.
Approximation Ratio with n GPUs and m CPUs
In the most general case of n GPUs and m CPUs, the approximation ratio of HeteroPrio is at most 2 + √2, as proved in Theorem 12. To establish this result, we rely on the same techniques as in the case of a single GPU, but the result of Lemma 6 is weaker for n > 1, which explains that the approximation ratio is larger than for Theorem 9. We have not been able to prove, as previously, that this bound is tight, but we provide in Theorem 14 a family of instances for which the approximation ratio is arbitrarily close to 2 + 2/√3.

Theorem 12. For any instance I with m CPUs and n GPUs, C^HP_max(I) ≤ (2 + √2) C^Opt_max(I).
Lemma 13. The following holds true
(i) ∀i ∈ S , p i > C Opt max (ii) ∀T processed on GPU in S NS HP , ρ T ≥ 1 + √ 2.
Proof. of Lemma 13. In S NS HP , all tasks start before T FirstIdle ≤ C Opt max . Since T ends after (2 + √ 2)C Opt max in S NS HP (since spoliation can only improve the completion time), then
p T > (1 + √ 2)C Opt max .
The same applies to all spoliated tasks that complete after T in S NS HP . If T is not considered for spoliation, no task that complete before T in S NS HP is spoliated, and the first result holds. Otherwise, let s T denote the instant at which T is considered for spoliation. The completion time of T in S HP is at most s T + q T , and since q T ≤ C Opt max , s T ≥ (1 + √ 2)C Opt max . Since HeteroPrio handles tasks for spoliation in decreasing order of their completion time in S NS HP , tasks T is spoliated after T has been considered and not finished at time s T , and thus
p T > √ 2C Opt max . Since p T > (1 + √ 2)C Opt max and q T ≤ C Opt max , then ρ T ≥ (1 + √ 2)
. Since T is executed on a CPU in S NS HP , all tasks executed on GPU in S NS HP have acceleration factor at least 1 + √ 2.
Let A be the set of tasks executed on GPUs after time T FirstIdle in S NS HP . We partition A into two sets A 1 and
A 2 such that ∀i ∈ A 1 , q i ≤ C Opt max √ 2+1 and ∀i ∈ A 2 , q i > C Opt max √ 2+1 .
Since there are n GPUs, |A 1 | ≤ |A| ≤ n. We consider the schedule induced by HeteroPrio on the GPUs with the tasks A S (if T is spoliated, this schedule is actually returned by HeteroPrio, otherwise this is what HeteroPrio builds when attempting to spoliate task T ). This schedule is not worse than a schedule that processes all tasks from A 1 starting at time T FirstIdle , and then performs any list schedule of all tasks from A 2 S . Since |A 1 | ≤ n, the first part takes time at most
C Opt max √ 2+1 . For all T i in A 2 , ρ i ≥ 1 + √ 2 and q i > C Opt max (I) √ 2+1
imply p i > C Opt max . We can thus apply Lemma 6 to A 2 S and the second part takes time at most 2C Opt max . Overall, the completion time on GPUs is bounded by
T FirstIdle + C Opt max √ 2+1 +(2-1 n )C Opt max < C Opt max +( √ 2-1)C Opt max +2C Opt max = ( √ 2+2)C Opt max , which is a contradiction.
Theorem 14. The approximation ratio of HeteroPrio is at least 2 + 2 √ 3
3.15.
Proof. We consider an instance I, with n = 6k GPUs and m = n 2 CPUs, containing the following tasks.
Task CPU Time GPU Time # of tasks accel ratio
T 1 n n r n r T 2 rn 3 see below see below r 3 ≤ ρ ≤ r T 3 1 1 mx 1 T 4 r 1 nx r
, where x = (m-n) m+nr n and r is the solution of the equation n r + 2n -1 = nr 3 . Note that the highest acceleration factor is r and the lowest is 1 since r > 3. The set T 2 contains tasks with the following execution time on GPU, (i) one task of length n = 6k, (ii) for all 0 ≤ i ≤ 2k -1, six tasks of length 2k + i.
This set T 2 of tasks can be scheduled on n GPUs in time n (see Figure 4). ∀1 ≤ i < k, each of the six tasks of length 2k + i can be combined with one of the six tasks of length 2k + (2k -i), occupying 6(k -1) processors; the tasks of length 3k can be combined together on 3 processors, and there remains 3 processors for the six tasks of length 2k and the task of length 6k. On the other hand, the worst list schedule may achieve makespan 2n -1 on the n GPUs. ∀0 ≤ i ≤ k -1, each of the six tasks of length 2k + i is combined with one of the six tasks of length 4k -i -1, which occupies all 6k processors until time 6k -1, then the task of length 6k is executed. The fact that there exists a set of tasks for which the makespan of the worst case list schedule is almost twice
2k + 1 4k -1 2k + 2 4k -2 2k + 3 4k -3 • • • 3k -1 3k + 1 uses k -1 procs repeated 6 times 3k 3k 3k 3k 3k 3k 2k 2k 2k 2k 2k 2k 6k uses 6 procs t 2k 4k -1 2k + 1 4k -2 2k + 2 4k -3 • • • 3k -1 3k uses k procs repeated 5 times 2k 4k -1 2k + 1 4k -2 2k + 2 4k -3 • • • 3k -1 3k 6k uses k procs t
Fig. 4: 2 schedules for task set T 2 on n = 6k homogeneous processors, tasks are labeled with processing times. Left one is an optimal schedule and right one is a possible list schedule.
the optimal makespan is a well known result. However, the interest of set T 2 is that the smallest execution time is C Opt max (T 2 )/3, what allows these tasks to have a large execution time on CPU in instance I (without having a too large acceleration factor).
Figure 5a shows an optimal schedule of length n for this instance: the tasks from set T 2 are scheduled optimally on the n GPUs, and the sets T 1 , T 3 and T 4 are scheduled on the CPUs. Tasks T 3 and T 4 fit on the m -n CPUs because the total work is mx + nxr = x(m + nr) = (m -n)n by definition. On the other hand, Figure 5b shows a possible HeteroPrio schedule for I. The tasks from set T 3 have the lowest acceleration factor and are scheduled on the CPUs, while tasks from T 4 are scheduled on the GPUs. All resources become available at time x. Tasks from set T 1 are scheduled on the n GPUs, and tasks from set T 2 are scheduled on m CPUs. At time x + n r , the GPUs become available and start spoliating the tasks from set T 2 . Since they all complete at the same time, the order in which they get spoliated can be arbitrary, and it can lead to the worst case behavior of Figure 4, where the task of length n is executed last. In this case, spoliating this task does not improve its completion time, and the resulting makespan for HeteroPrio on this instance is C HP
Experimental evaluation
In this section, we propose an experimental evaluation of HeteroPrio on instances coming from the dense linear algebra library Chameleon [START_REF]Chameleon, A dense linear algebra software for heterogeneous architectures[END_REF]. We evaluate our algorithms in two contexts, (i) with independent tasks and (ii) with dependencies, which is closer to real-life settings and is ultimately the goal of the HeteroPrio algorithm. In this section, we use task graphs from Cholesky, QR and LU factorizations, which provide interesting insights on the behavior of the algorithms. The Chameleon library is built on top of the StarPU runtime, and implements tiled versions of many linear algebra kernels expressed as graphs of tasks. Before the execution, the processing times of the tasks are measured on both types of resources, which then allows StarPU schedulers to have a reliable prediction of each task's processing time. In this section, we use this data to build input instances for our algorithms, obtained on a machine with 20 CPU cores of two Haswell Intel Xeon E5-2680 processors and 4 Nvidia K40-M GPUs. We consider Cholesky, QR and LU factorizations with a tile size of 960, and a number of tiles N varying between 4 and 64. We compare three algorithms from the literature: HeteroPrio, the well-known HEFT algorithm (designed for the general R|prec|C_max problem), and DualHP from [START_REF] Bleuse | Scheduling Data Flow Program in XKaapi: A New Affinity Based Algorithm for Heterogeneous Architectures[END_REF] (specifically designed for CPU and GPU, with an approximation ratio of 2 for independent tasks). The DualHP algorithm works as follows: for a given guess λ on the makespan, it either returns a schedule of length 2λ, or ensures that λ < C_max^Opt. To achieve this, any task with processing time more than λ on any resource is assigned to the other resource, and then all remaining tasks are assigned to the GPUs by decreasing acceleration factor while the overall load is lower than nλ. If the remaining load on CPU is not more than mλ, the resulting schedule has makespan below 2λ. The best value of λ is then found by binary search.
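The following Python sketch is one possible reading of this description (not the authors' implementation); it shows the assignment test for a guess λ and the surrounding binary search. Within each resource type, any list schedule of the assigned tasks then yields a makespan of at most 2λ.

    def dualhp_assign(tasks, m_cpu, n_gpu, lam):
        """tasks: list of (p_cpu, q_gpu). Returns (cpu, gpu) task lists, or None if lam is too small."""
        cpu, gpu, rest = [], [], []
        for t in tasks:
            p, q = t
            if p > lam and q > lam:
                return None                # no valid placement: lam < C_opt
            elif p > lam:
                gpu.append(t)              # too long for a CPU
            elif q > lam:
                cpu.append(t)              # too long for a GPU
            else:
                rest.append(t)
        # remaining tasks go to the GPUs by decreasing acceleration factor p/q,
        # as long as the total GPU load stays within n*lam; the others go to the CPUs
        rest.sort(key=lambda t: t[0] / t[1], reverse=True)
        gpu_load = sum(q for _, q in gpu)
        for t in rest:
            if gpu_load + t[1] <= n_gpu * lam:
                gpu.append(t)
                gpu_load += t[1]
            else:
                cpu.append(t)
        if sum(p for p, _ in cpu) > m_cpu * lam:
            return None
        return cpu, gpu

    def dualhp(tasks, m_cpu, n_gpu, iters=50):
        lo, hi = 0.0, sum(max(p, q) for p, q in tasks)
        for _ in range(iters):             # binary search for the best guess
            mid = (lo + hi) / 2
            if dualhp_assign(tasks, m_cpu, n_gpu, mid) is None:
                lo = mid
            else:
                hi = mid
        return hi                          # the schedule built from this guess is at most 2*hi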
Independent Tasks
To obtain realistic instances with independent tasks, we have taken the actual measurements from tasks of each kernel (Cholesky, QR and LU) and considered these as independent tasks. For each instance, the performance of all three algorithms is compared to the area bound. Results are depicted in Figure 6, where the ratio to the area bound is given for different values of the number of tiles N. The results show that both HeteroPrio and DualHP achieve close to optimal performance when N is large, but HeteroPrio achieves better results for small values of N (below 20). This may be surprising, since the approximation ratio of DualHP is actually better than the one of HeteroPrio. On the other hand, HeteroPrio is primarily a list scheduling algorithm, and such algorithms usually achieve good average-case performance. In this case, its advantage comes from the fact that DualHP tends to balance the load between the set of CPUs and the set of GPUs, but for such values of N the task processing times on CPU are not negligible compared to the makespan. Thus, it happens that average loads are similar for both kinds of resources, but one CPU actually has a significantly higher load than the others, which results in a larger makespan. HEFT, on the other hand, has rather poor performance because it does not take the acceleration factor into account, and thus assigns to GPUs tasks that would be better suited to CPUs, and vice versa.
Task Graphs
Both HeteroPrio and DualHP can be easily adapted to take dependencies into account, by applying the algorithm at any instant to the set of (currently) ready tasks. For DualHP, this implies recomputing the assignment of tasks to resources each time a task becomes ready, and also slightly modifying the algorithm to take into account the load of currently executing tasks. Since HeteroPrio is a list algorithm, the HeteroPrio rule can be used to assign a ready task to any idle resource. If no ready task is available for an idle resource, a spoliation attempt is made on currently running tasks.
When scheduling task graphs, a standard approach is to compute task priorities based on the dependencies. For homogeneous platforms, the most common priority scheme is to compute the bottom-level of each task, i.e. the maximum length of a path from this task to the exit task, where nodes of the graph are weighted with the execution time of the corresponding task. In the heterogeneous case, the priority scheme used in the standard HEFT algorithm is to set the weight of each node as the average execution time of the corresponding tasks on all resources. We will denote this scheme avg. A more optimistic view could be to set the weight of each node as the smallest execution time on all resources, hoping that the tasks will get executed on their favorite resource. We will denote this scheme min.
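As an illustration of the two ranking schemes (a sketch with our own naming, not code from the paper), bottom-levels can be computed by a memoized traversal of the task graph, with the node weight taken either as the average or the minimum of the CPU and GPU times.

    def bottom_levels(successors, weight):
        """successors: dict task -> list of successor tasks (a DAG); weight: dict task -> node weight."""
        memo = {}
        def bl(t):
            if t not in memo:
                memo[t] = weight[t] + max((bl(s) for s in successors.get(t, [])), default=0)
            return memo[t]
        # may hit the recursion limit on very deep graphs; an explicit reverse
        # topological order would avoid that
        return {t: bl(t) for t in weight}

    # 'avg' and 'min' weighting schemes, given CPU times p and GPU times q per task:
    # w_avg = {t: (p[t] + q[t]) / 2 for t in p}
    # w_min = {t: min(p[t], q[t]) for t in p}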
In both HeteroPrio and DualHP, these ranking schemes are used to break ties. In HeteroPrio, whenever two tasks have the same acceleration factor, the highest priority task is assigned first; furthermore, when several tasks can be spoliated for some resource, the highest priority candidate is selected. In DualHP, once the assignment of tasks to CPUs and GPUs is computed, tasks are sorted by highest priority first and processed in this order. For DualHP, we also consider another ranking scheme, fifo, in which no priority is computed and tasks are assigned in the order in which they became ready.
We thus consider a total of 7 algorithms: HeteroPrio, DualHP and HEFT with min and avg ranking schemes, and DualHP with fifo ranking scheme. We again consider three types of task graphs: Cholesky, QR and LU factorizations, with the number of tiles N varying from 4 to 64. For each task graph, the makespan with each algorithm is computed, and we consider the ratio to the lower bound obtained by adding dependency constraints to the area bound [START_REF] Agullo | Are Static Schedules so Bad? A Case Study on Cholesky Factorization[END_REF]. Results are depicted in Figure 7.
The first conclusion from these results is that scheduling DAGs corresponding to small or large values of N is relatively easy, and all algorithms achieve a performance close to the lower bound: with small values of N , the makespan is constrained by the critical path of the graph, and executing all tasks on GPU is the best option; when N is large, the available parallelism is large enough, and the runtime is dominated by the available work. The interesting part of the results is thus for the intermediate values of N , between 10 and 30 or 40 depending on the task graph. In these cases, the best results are always achieved by HeteroPrio, especially with the min ranking scheme, which is always within 30% of the (optimistic) lower bound. On the other hand, all other algorithms get significantly worse performance for at least one case.
To obtain a better insight on these results, let us further analyze the schedules produced by each algorithm by focusing on the following metrics: the amount of idle time on each type of resource (CPU and GPU)¹, and the adequacy of task allocation (whether the tasks allocated to each resource are a good fit or not). To measure the adequacy of task allocation on a resource r, we define the acceleration factor A_r of the "equivalent task" made of all the tasks assigned to that resource: let J be the set of tasks assigned to r, then A_r = (Σ_{i∈J} p_i) / (Σ_{i∈J} q_i). A schedule has a good adequacy of task allocation if A_GPU is high and A_CPU is low. The values of equivalent acceleration factors for both resources are shown on Figure 8. On Figure 9, the normalized idle time on each resource is depicted, which is the ratio of the idle time on a resource to the amount of that resource used in the lower bound solution.
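In code, the two metrics are straightforward (a minimal sketch assuming dictionaries p and q of CPU and GPU times per task):

    def equivalent_acceleration(assigned, p, q):
        """Acceleration factor A_r of the 'equivalent task' assigned to one resource type."""
        return sum(p[t] for t in assigned) / sum(q[t] for t in assigned)

    def normalized_idle_time(idle, used_in_lower_bound):
        """Idle time of a resource divided by its usage in the lower bound solution."""
        return idle / used_in_lower_bound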
On Figure 8, one can observe that there are significant differences in the acceleration factor of tasks assigned to the CPU between the different algorithms. In particular, HeteroPrio usually assigns to the CPU tasks with low acceleration factor (which is good), whereas HEFT usually has a higher acceleration factor on CPU. DualHP is somewhat in the middle, with a few exceptions in the case of LU when N is large. On the other hand, Figure 9 shows that HEFT and HeteroPrio are able to keep relatively low idle times in all cases, whereas DualHP induces very large idle time on the CPU. The reason for this is that optimizing locally the makespan for the currently available tasks makes the algorithm too conservative: especially at the beginning of the schedule, where there are not many ready tasks, DualHP assigns all tasks to the GPU because assigning one to the CPU would induce a larger completion time. HeteroPrio, however, is able to find a good compromise by keeping the CPU busy with the tasks that are less suited to the GPU.
Conclusion
In this paper, we consider HeteroPrio, a list-based algorithm for scheduling independent tasks on two types of unrelated resources. This scheduling problem has strong practical importance for the performance of task-based runtime systems, which are used nowadays to run high performance applications on nodes made of multicores and GPU accelerators. This algorithm has been proposed in a practical context, and we provide theoretical worst-case approximation proofs in several cases, including the most general, and we prove that our bounds are tight. Furthermore, these algorithms can be extended to schedule tasks with precedence constraints, by iteratively scheduling the (independent) set of currently ready tasks. We show experimentally that in this context, HeteroPrio produces very efficient schedules, whose makespans are better than the state-of-the-art algorithms from the literature, and very close to the theoretical lower bounds. A practical implementation of HeteroPrio in the StarPU runtime system is currently under way.
Fig. 1: Example of a HeteroPrio schedule
Lemma 3. At any time t ≤ T_FirstIdle in S_HP^NS, t + AreaBound(I(t)) = AreaBound(I).
Lemma 6. Let B ⊆ I be such that the execution time of each task of B on one resource is larger than C_max^Opt(I); then any list schedule of B on k ≥ 1 resources of the other type has length at most (2 - 1/k)·C_max^Opt(I).
Fig. 2: Situation where T ends on CPU after φ·C_max^Opt(I).
Theorem 12. For all I, C_max^HP(I) ≤ (2 + √2)·C_max^Opt(I).
Fig. 5: Optimal and HeteroPrio schedules on the instance of Theorem 14
Fig. 6: Results for independent tasks
Fig. 7: Results for different DAGs
Fig. 9: Normalized idle time
since ρ_a > φ by Lemma 10, p_a > φ(φ - 1)·C_max^Opt = C_max^Opt. Lemma 6 applies to S \ A, so that the overall completion time is bounded by T_FirstIdle + C_max^Opt ≤ 2·C_max^Opt. 3. If A = ∅, Lemma 6 applies to S and we get C_max^HP(I) ≤ T_FirstIdle + C_max^Opt ≤ 2·C_max^Opt. Therefore, in all cases, the completion time of task T is at most (φ + 1)·C_max^Opt, which ends the proof of Theorem 9.
For fairness, any work made on an "aborted" task by HeteroPrio is also counted as idle time, so that all algorithms have the same amount of work to execute. | 49,201 | [
"181224",
"174911",
"964256"
] | [
"56002",
"56002",
"409747",
"56002",
"90029"
] |
01484381 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484381/file/978-3-642-34549-4_13_Chapter.pdf | Iyad Zikra
email: iyad@dsv.su.se
Implementing the Unifying Meta-Model for Enterprise Modeling and Model-Driven Development: An Experience Report
Keywords: Model-Driven Development, Tools, MetaEdit+, Eclipse Modeling Framework, Graphical Modeling Project
Model-Driven Development (MDD) is becoming increasingly popular as a choice for developing information systems. Tools that support the principles of MDD are also growing in number and variety of available functionality. MetaEdit+ is a meta-modeling tool used for developing Domain Specific Languages and is identified as an MDD tool. The Eclipse Modeling Framework (EMF) and Graphical Modeling Project (GMP) are two Eclipse projects that provide plug-ins to support the principles of MDD. In this paper, we report on our experience in using MetaEdit+ and the Eclipse plug-ins for developing a graphical editor for the unifying meta-model, which is an MDD approach that extends the traditional view of MDD to cover Enterprise Modeling. The two modeling environments are reviewed using quality areas that are identified by the research community as necessary in MDD tools. This report will provide useful insights for researchers and practitioners alike concerning the use of MetaEdit+ and the Eclipse plug-ins as MDD tools.
Introduction
The increasing reliance on Model-Driven Development (MDD) for creating Information Systems (IS) in recent years has led to a stream of research projects, approaches, and tools that support the use of models as the main development artifacts. Models in MDD are used to capture various aspects of an IS, and (automatic) transformations enable the derivation of models from each other and generate executable code.
Attempts to extend the use of MDD to describe the organization are starting to emerge [START_REF] Da Silva | Integration of RE and MDD Paradigms: The ProjectIT Approach and Tools[END_REF][START_REF] Navarro | Architecture Traced from Requirements applying a Unified Methodology[END_REF][START_REF] Pastor | Linking Goal-Oriented Requirements and Model-Driven Development[END_REF]. Models can be used to capture the underlying motivation for which IS are developed, which provides deeper understanding of the IS models and improves the design decisions that are made. In order to capitalize on the full potential of MDD principles, MDD approaches need to be supported with tools to facilitate the creation and management of models, meta-models, and transformations. Tools also enable the execution of model transformations and, eventually, code generation.
Aside from that, tools should offer practical and usable functionalities that simplify the complexity associated with managing models and transformations [START_REF] Henkel | Pondering on the Key Functionality of Model Driven Development Tools: The Case of Mendix[END_REF].
This paper presents an account of using two tooling environments that are associated with MDD; namely, MetaEdit+ and the modeling environment available in Eclipse through the Eclipse Modeling Framework (EMF) and the Graphical Modeling Project (GMP) plug-ins. The aim of this paper is to relay the experience of using MetaEdit+ and EMF/GMP for creating a tool for the unifying meta-model for Enterprise Modeling (EM) and MDD [START_REF] Zikra | Bringing Enterprise Modeling Closer to Model-Driven Development[END_REF], thus helping researchers and practitioners alike in gaining useful insights into the two modeling environments.
The remainder of the paper is organized as follows. Section 2 gives an overview of the quality areas that MDD tools should have as described in the literature. Section 3 presents the unifying meta-model. Sections 4 and 5 include reflections on how MetaEdit+ and the Eclipse plug-ins fulfill these properties in relation to implementing the unifying meta-model. Finally, Section 6 gives concluding remarks.
Qualities of Model-Driven Development Tools
The importance of MDD tools that is emphasized in the literature has yet to lead to the creation of a tool that covers all the necessary aspects of MDD. Several authors have addressed the requirements for MDD tools, pointing out the drawback of existing MDD approaches in terms of tooling, and highlighting the lack of tools that can realize the theoretical benefits of MDD. As pointed out in [START_REF] Atkinson | Concepts for Comparing Modeling Tool Architectures[END_REF], current works that tackle MDD tools are limited to superficial comparisons of feature lists, with no deep insights to select a tool based on realistic project requirements and available technical abilities and standards compliance. As a result, [START_REF] Atkinson | Concepts for Comparing Modeling Tool Architectures[END_REF] suggests a conceptual framework for MDD tool architectures that covers major existing MDD theoretical architectures. The proposed tool architecture captures a three-dimensional view of modeling levels by identifying three types of modeling abstractions: form instantiation, where instances of one level follow the structure defined by concepts of a higher level; content instantiation, where instances capture content instances of the content types described on a higher level; and generalization, where instances are themselves types that extend the definition of the types of a higher level. By introducing the notions of embedded levels and spanning, it becomes possible to project the three-dimensional architecture as a two-dimensional view. Using this architecture, MDD tool developers can plan the number and types of modeling levels that will be supported by the tool.
Aside from the core MDD principles which tools must support (i.e. the creation and management of models and meta-models, and the creation and execution of transformations), the following quality areas are highlighted in the literature as necessary in MDD tools to enhance the process of creating and managing the models:
• Understandability: this refers to the ease with which tool users are able to create meta-models and models [START_REF] Pelechano | Building Tools for Model Driven Development. Comparing Microsoft DSL Tools and Eclipse Modeling Plug-ins[END_REF]. Understandability can be enhanced by tools through facilities that highlight and explain the intended purpose of parts of the models to tool users [START_REF] Henkel | Pondering on the Key Functionality of Model Driven Development Tools: The Case of Mendix[END_REF]. The graphical notation, which is generally part of the definition of the modeling language, can be enhanced by the graphical editor in the tool [START_REF] Pelechano | Building Tools for Model Driven Development. Comparing Microsoft DSL Tools and Eclipse Modeling Plug-ins[END_REF]. Furthermore, tools should support the representation of details on several levels of abstraction [START_REF] Henkel | Pondering on the Key Functionality of Model Driven Development Tools: The Case of Mendix[END_REF], which can occur along multiple axes [START_REF] Atkinson | Concepts for Comparing Modeling Tool Architectures[END_REF]. A usability framework, including a conceptual model for capturing tool usability and an experimental process for conducting the usability evaluation, is proposed in [START_REF] Panach | Towards an Experimental Framework for Measuring Usability of Model-Driven Tools[END_REF] to measure the satisfaction, efficiency, and effectiveness of an MDD tool.
• Model evaluation: tools can offer support for model analysis and evaluation [START_REF] Oldevik | Evaluation Framework for Model-Driven Product Line Engineering Tools[END_REF]. Evaluation is referred to as observability or "model-level debugging" [START_REF] Uhl | Model-Driven Development in the Enterprise[END_REF] when the reporting of warnings and errors is done during the creation of the model and the execution of the transformations, similarly to how compilers do for programming languages.
• Executability: the use of model-to-text transformations to derive (generate) executable program code from models that describe the desired IS [START_REF] Henkel | Pondering on the Key Functionality of Model Driven Development Tools: The Case of Mendix[END_REF][START_REF] Oldevik | Evaluation Framework for Model-Driven Product Line Engineering Tools[END_REF][START_REF] Pelechano | Building Tools for Model Driven Development. Comparing Microsoft DSL Tools and Eclipse Modeling Plug-ins[END_REF].
• Model repositories: tools must provide mechanisms for serializing models in order to transport them to other tools or store them for later reuse [4, 7, 10]. Serialization is another form of model-to-text transformation. However, it does not generate executable programming code. Storing models in repositories also raises the question of model integration [4].
• Traceability and change management: MDD involves the use of transformations to advance from one stage of modeling to the next. Changes in earlier models need to be tracked and reflected in later models, and tools must provide the necessary facilities to realize that [7].
• Other Software Engineering (SE) activities: since MDD tools are used in IS development projects, activities related to the development process itself are not isolated from MDD activities and must be supported by the tool. Project planning [START_REF] Oldevik | Evaluation Framework for Model-Driven Product Line Engineering Tools[END_REF], collaborative development [START_REF] Henkel | Pondering on the Key Functionality of Model Driven Development Tools: The Case of Mendix[END_REF], and Quality of Service (QoS) management [START_REF] Oldevik | Evaluation Framework for Model-Driven Product Line Engineering Tools[END_REF] are some of the activities highlighted in the literature.
• Tool documentation: finally, tools need to provide self-documentation, i.e. guidelines for using the tool itself.
The list of quality criteria described above is by no means complete. There could be other desirable qualities for MDD tools. However, we focus on criteria which are relevant for reporting our experience with MetaEdit+ and the Eclipse MDD plug-ins.
The Unifying Meta-Model
With the increased popularity of MDD, several efforts have attempted to exploit models and transformations to cover activities that precede the development of IS [START_REF] Zikra | Analyzing the Integration between Requirements and Models in Model Driven Development[END_REF]. Such efforts try to enhance the content captured by design models to represent aspects that affect IS development but is not directly related to it, such as the intention and motivation expressed in enterprise models. However, the survey presented in [START_REF] Zikra | Analyzing the Integration between Requirements and Models in Model Driven Development[END_REF] highlights the hitherto existing need for an MDD approach that spans aspects of both Enterprise Modeling (EM) and MDD. The unifying meta-model was proposed in [START_REF] Zikra | Bringing Enterprise Modeling Closer to Model-Driven Development[END_REF] as a response to that need.
The unifying meta-model provides an overall platform for designing enterprise models, which are then used to derive IS models that can subsequently be used to generate a functioning and complete system following MDD principles. Six complementary views, illustrated in Fig. 1, constitute the unifying meta-model and cover aspects that describe enterprise-level information in addition to information relevant for IS development. Organizational business goals are captured by the Goal Model (GM) view, and the internal rules and regulations that govern the enterprise and its operations are provided by the Business Rules Model (BRM) view. The Concepts Model (CM) view covers the concepts and relationships that describe the static aspects of the enterprise and its supporting IS, while business processes that describe the activities needed to realize the business goals are part of the Business Process Model (BPM) view. A Requirements Model (RM) view captures the high-level requirements that are associated with the development of the IS and relates them to components of other views. The IS Architecture Model (ISAM) covers the implementation architecture which will be used to realize the IS, describing how the components that implement the business processes, business rules, concepts, and requirements will work together in an operational system. The GM, BRM, CM, BPM, and RM views represent the EM side of the unifying meta-model. Simultaneously, the RM, CM, BPM, BRM, and ISAM views represent the IS and are part of the MDD side of the unifying meta-model. This overlap between the EM and MDD views, supported by inter-model relationships that relate components across the different views and provide built-in traceability support, guarantees the combined overview offered by the unifying meta-model. A complete description of the complementary views of the unifying meta-model and how they can be utilized to develop enterprise-aligned IS can be found in [START_REF] Zikra | Bringing Enterprise Modeling Closer to Model-Driven Development[END_REF]. The work is still ongoing for implementing all views of the unifying meta-model. In this paper, the implementations of the CM and BPM views are discussed.
Implementation in MetaEdit+
MetaEdit+ is a tool for developing Domain Specific Languages (DSLs), which are non-generic modeling languages that are designed for a specific application domain [START_REF] Mernik | When and How to Develop Domain-Specific Languages[END_REF]. DSLs substitute the generic modeling capabilities offered by general-purpose languages (GPLs), such as UML, for more expressiveness that results from tailoring the language to the needs of a defined domain. Notations that are familiar to domain experts are utilized instead of generic shapes with broad semantics. Domain conventions and abstractions are also incorporated in DSLs. Generally, DSLs are not required to be executable [START_REF] Mernik | When and How to Develop Domain-Specific Languages[END_REF]. However, when used in the context of MDD, a DSL needs to be transformed into other models and eventually into an executable form, following the MDD principles.
The modeling environment of MetaEdit+ is divided into two main parts: the MetaEdit+ workbench and the MetaEdit+ modeler. The workbench, shown in Fig. 2, includes the necessary facilities for creating the meta-model of the language (in this case the unifying meta-model), organized as a set of tools. The whole meta-model is called a graph, and components of the meta-model are created in the workbench using the object tool, the property tool, and the relationship tool.
The object tool (Fig. 3-a) is used to define concepts in the meta-model, where each concept is defined as an object that has a name, an ancestor (a super type), a description, and a list of properties created using the property tool (Fig. 3-b). In turn, each property has a name and a data type, and it is possible to define the input method for creating property values and constraints on those values. For example, a property can have the type "string," an editable list input widget to indicate that the property can have multiple values, and a regular expression that governs the values that can be entered by the user.
Relationships are defined in the workbench using the relationship tool in a similar manner to objects. In fact, relationships are treated in MetaEdit+ as individual modeling components that can have properties of their own. Roles are used to connect concepts to relationships following the principles of Object Role Modeling (ORM) 6 , a conceptual modeling method that focuses on the separation of concepts, relationships, and the roles which concepts play in the relationships in which they participate. To realize that, the workbench includes a role tool that can be used to create roles and assign properties to them. In general, and for the purposes of implementing the unifying meta-model, roles can be kept as simple connection points. However, the workbench allows for more control over the definition of roles because they are also treated as standalone modeling components. The graph tool (Fig. 4) is where all modeling components are combined to create the meta-model. In the graph tool, concepts and relationships that are part of the metamodel are selected, and concepts are assigned to relationships using the defined roles. Additional management is also possible, such as creating constraints on how many different relationships a concept can play a role in.
MDD Tool Qualities in MetaEdit+
According to [START_REF] Atkinson | Concepts for Comparing Modeling Tool Architectures[END_REF], MetaEdit+ implements the two-level cascading tool architecture. Tool users create a model on an upper level using the built in tool format in the workbench, constituting the meta-model of the DSL that is being developed. Then, the tool uses the meta-model to generate another tool, the MetaEdit+ modeler, which is used to create models on a lower level, representing instance models that follow the definition of the DSL. When implementing the CM and BPM views of the unifying meta-model in MetaEdit+, the two views were created using form based interfaces (some of which are shown in figures 3 and 4) in the workbench, and the resulting meta-model was maintained using a proprietary format. In other words, the metamodels were stored in a binary format that is only accessible in MetaEdit+. Then, the workbench generated a new modeler (Fig. 5) that was used to create instance models.
When it comes to understandability, the form based interfaces of the workbench tools hinder the ability of the tool user to gain an overview of the whole meta-model that is being implemented. Lists of objects, properties, relationships, and roles can be viewed separately, and only using the graph tool can they be viewed at the same time. Even then, the connections between objects and relationships using roles can be seen individually, without a full overview. This limitation became a real obstacle during the development of the unifying meta-model because repetitive change cycles meant that the tool user had to maintain a mental image over the way the meta-model is being changed. Eventually, a separate copy of the meta-model was created in MS Visio 7 , adding extra complexity for maintaining two versions of the meta-model. Unlike the workbench, the MetaEdit+ modeler has a graphical interface for creating the models. The graphical representations of modeling components used in the modeler are defined in the workbench using a WYSIWYG tool. However, no additional support is offered by the tool in terms of explaining what each graphical symbol in the modeler means, limiting the understandability that can be offered during the creation of models.
Meta-models and models created using MetaEdit+ are both stored in a model repository that is part of the tool. The repository is a binary model database, the content of which can only be viewed using the MetaEdit+. Model reuse is supported by the existence of the repository. However, transporting models requires additional user intervention and is not directly supported by the tool. MetaEdit+ includes a scripting language, called MERL, which can be used to traverse models and generate corresponding textual reports. Using MERL, tool users can create generator scripts that can be used to output any text based on the structure and content of the model, including serialization standards (e.g. XMI 8 ) or human readable reports (e.g. in HTML). During our work, model executability was enabled using generators which translated BPM models into Java classes and CM models into XML schemas, but this required additional effort since the classes and schemas needed to be constructed from scratch using output statements in MERL. The need for observability is eliminated by the tool architecture used in MetaEdit+, since models are not allowed to deviate from the way they are supposed to be constructed as dictated by the meta-model. For example, attempting to create a relationship between two instances of the wrong classes would prompt the tool to display an illegal operation message. While being an advantage when creating models, the inability to arbitrarily connect instances or assign attributes limited our experimentation ability during the development of the unifying meta-model, because testing new structures was not possible. This limitation is also stressed by [START_REF] Atkinson | Concepts for Comparing Modeling Tool Architectures[END_REF] while discussing the tool architecture.
Among the various SE activities that can be supported by MDD tools, only collaborative development is available in MetaEdit+, realized using multiple user accounts which can simultaneously access the same model repository.
The available documentation for using different parts of MetaEdit+, including the tools in the workbench, the modeler, and the scripting language, is quite extensive. Tutorials are available to guide beginners, and there is an active community that can provide help when needed. MetaEdit+ offers additional modeling functionalities that can be helpful for complex DSLs, such as model embedding, where a single modeling component can be further described using another model, as well as the ability to reuse modeling components across multiple models. However, these advantages are overshadowed by the inflexibility of the form-based interfaces and the two-level cascading architecture.
Implementation in EMF/GMP
Since its introduction as an Integrated Development Environment (IDE) over a decade ago, Eclipse has grown to be one of the major software development platforms available today. The open source and free software nature and the plug-in based architecture contribute to a wide and highly customizable range of modules that are available in Eclipse. A large community is involved in developing Eclipse plug-ins, and the development is organized in projects that focus on certain domains. The Eclipse modeling project focuses on modeling standards, frameworks, and tooling, and includes the Eclipse Modeling Framework (EMF) and the Graphical Modeling Project (GMP), which were used to implement the unifying meta-model and develop its graphical editor.
EMF provides facilities for building applications using models and model transformations. Meta-models can be created in EMF using Ecore, which is an implementation of Essential MOF (EMOF), a subset of MOF that is aligned with implementation technologies and XML. This highlights the mindset assumed by EMF as a component in an implementation of the Model-Driven Architecture (MDA). Ecore meta-models can be acquired from models written in annotated Java code, XML, or UML. The meta-models are transformed into Java code that can be used to create, edit, and serialize models. EMF is also able to generate a basic editor for the models. However, creating a rich graphical editor for models is made possible through the functionalities of the GMP plug-ins [START_REF] Gronback | Eclipse Modeling Project: A Domain-Specific Language (DSL) Toolkit[END_REF]. The combination of EMF and GMP enables the development of DSLs and accompanying rich graphical editors in a manner that is aligned with MDD principles.
The process for implementing the unifying meta-model using EMF and GMP is outlined in Fig. 6, and is based on the recommended process in GMP. The first step of the process is to create the domain Ecore model, which in our case covers the CM and BPM views of the unifying meta-model. The domain model is acquired from a UML model designed in Papyrus [START_REF] Steinberg | EMF: Eclipse Modeling Framework[END_REF], which is an Eclipse plug-in for creating UML models. EMF includes a GMP-based graphical editor that can be used for directly creating the domain model; however, it suffers from synchronization problems and changes are not always correctly reflected in the Ecore model. Papyrus represented a good alternative, especially since it offers a similar modeling experience, itself being built using GMP. The domain model is then used to create the generator model, which is a facility available in EMF to extend the domain model with implementation-specific details which lie outside the scope of the meta-model. These details include features that can be enabled, disabled, or customized during code generation, such as the interface naming pattern and operation reflection. The generator model is the one used in EMF to actually generate the Java classes for creating and editing models. An excerpt from the generator model is shown in Fig. 7.
For instance, Fig. 8 shows the graphical notation of the "concept" modeling component of CM, defined as a rounded rectangle that includes labels for the id, name, and description properties of the concept. Similarly, the graphical representations of other concepts and relationships can be defined, and GMP offers a range of basic shapes with the ability to customize them and embed them in each other, contributing to a flexible notation definition tool. A mapping model is created to integrate the domain, graphical definition, and tooling definition models. The integrated model is finally used to derive an editor generator model that adds the necessary implementation-specific details, and has a role similar to that of the generator model in EMF. Fig. 9 illustrates the mapping model, showing how a node mapping is created for the concept modeling component to associate the concept definition in the domain model with its graphical notation in the graphical definition model and its tool in the palette that is created from the tooling definition model.
The code generated from the editor generator model is combined with the code generated from the EMF generator model to create the graphical editor. GMP offers a choice to generate the editor as a standalone Rich Client Platform (RCP) that can be used solely to create and edit models, or as an Eclipse plug-in that can be installed in an Eclipse environment and used in combination with other plug-ins.
MDD Tool Qualities in EMF/GMP
According to the tool architecture proposed by [START_REF] Atkinson | Concepts for Comparing Modeling Tool Architectures[END_REF], the combination of EMF and GMP constitutes an MDD tool that realizes level compaction while providing spanning at the same time. Level compaction refers to the use of one representation format to capture two modeling levels [START_REF] Atkinson | Concepts for Comparing Modeling Tool Architectures[END_REF], and this is achieved in EMF by using Ecore to describe the structure of domain and instance models. As mentioned earlier, Ecore is an implementation of a subset of MOF called EMOF. Since MOF represents the format abstraction of both meta-models and models, the models can serve as meta-models that can be further instantiated. In other words, the graphical editor generated for the unifying meta-model can be used to design models that can serve as domain models for creating new graphical editors, and this cycle can be repeated endlessly. As for spanning, the use of MOF and the model editing framework provided in EMF enable the creation of different modeling languages, be it a DSL or a GPL, using the same generic mechanisms. A desired side-effect is the increased understandability by tool users; developing a new DSL does not require additional knowledge of the tool. Understandability is further enhanced by using a standardized language for meta-modeling, i.e. MOF, because the semantics of the language are well-defined. Another factor affecting understandability is the ability to acquire domain models from a variety of sources. Tool users who are familiar with Java code can use annotated Java to design the meta-model, while users who prefer XML or UML can continue to work in their usual ways, and EMF will take care of deriving the Ecore domain model from the input meta-model. The existence of a plethora of UML graphical modeling tools, both as standalone environments and as Eclipse plug-ins, allows further flexibility in creating the domain model. In fact, the switch from the EMF built-in Ecore graphical editor to Papyrus was seamless, made easier by the fact that both editors are built using GMP and present similar user experiences.
Both EMF and GMP include a general Eclipse plug-in for error highlighting and management. Using the wrong value for a property or not entering the value of a required property when editing EMF and GMP models prompts Eclipse to highlight the involved property and display an explanatory message. Suggested solutions are also offered for common and recurring types of problems.
The models that are created in EMF and GMP are encoded in XML and eventually transformed into Java code, making them human-readable as well as transportable to other tools or platforms. The open source nature of Eclipse enables tool users to access the source code of the plug-ins and customize their functionality if necessary, changing the generated code to suit their needs. The Eclipse plug-ins also support model integration, which occurs at two points in the process illustrated in Fig. 6: the mapping model integrates the domain, graphical definition, and tooling definition models, and the final graphical editor is created by integrating code generated from the editor generator model with code generated from the EMF generator model. Wizards exist to guide users through repetitive and common activities.
Traceability is partially managed by the facilities of EMF and GMP through the use of naming conventions. This way, model parts and generated code can be traced back to their sources. However, traceability is limited in parts of GMP due to the separation between the domain model and its graphical notation and tooling. Changes to the graphical definition and tooling definition models will be lost if the models are re-generated following changes to the domain model.
The large community that stands behind the development of Eclipse guarantees a wide range of plug-ins that can support the software development process. Code repositories, task lists, collaboration, project planning, and integration with other tools are only a few domains for which plug-ins can be found. Furthermore, resources associated with Eclipse in general and with EMF are plentiful, both as books and on the internet. However, it was difficult to find up-to-date resources on GMP.
Conclusion
The increased interest in MDD as an effective means for developing IS in recent years has led to plenty of research in this domain. Many open questions still stand [START_REF] Zikra | Analyzing the Integration between Requirements and Models in Model Driven Development[END_REF], among which is the availability of tools that are able to support all the principles of MDD. In this paper, we report on our experience of using two MDD tools: MetaEdit+ and the Eclipse plug-ins of EMF and GMP. The tools were used to implement the unifying meta-model, which is an attempt to bridge the gap between EM and MDD and streamline the development of IS that are more aligned with organizational goals. The results of this report can help both tool developers and practitioners in gaining helpful insights into the investigated tools. Table 1 summarizes the observations of this report.
Our observations show that both tools have advantages and drawbacks. On one hand, the separate tools available in MetaEdit+ for creating and managing concepts, properties, relationships, and ports, in addition to the graph itself (i.e. the meta-model), provided increased control over the definition of the meta-model. MetaEdit+ combined all modeling components in a single graph (albeit using multiple user interfaces). A single transition was then required to generate an editor, and only one modeling technique was involved. But these advantages were limited by the use of a proprietary modeling technique and the lack of a single and complete view of the whole meta-model. The need for the overall view forced us to rely on an external tool (MS Visio), multiplying the effort needed to maintain the meta-model, which was still under development.
On the other hand, EMF and GMP implemented well-known and open standards. The visual representation of the domain model in EMF offered the necessary overall view and shortened the cycle of updating the meta-model and testing the changes for suitability. EMF and GMP used many models and automatic transformations between the models, constituting an MDD approach that covered several layers of modeling. But while this separation is necessary to decouple unrelated information, it resulted in a long development process that involved many types of models. Consequently, different types of modeling knowledge were required.
The implementation of the unifying meta-model is part of a research effort that involves developing an MDD approach that extends to cover EM aspects as well. Acquiring a tool that supports the creation and editing of models described by the unifying meta-model is only one step. In terms of tooling, the next step will investigate the possibility of adding executability support to the editors generated using MetaEdit+ and the Eclipse plug-ins. (This is not to be confused with the executability support already available in MetaEdit+ and the Eclipse plug-ins; both tools are able to generate editors from models). Executability in the generated editor is not directly supported in MetaEdit+, but arises as a side effect of MERL, which could be used to generate executable code. Several possibilities exist in Eclipse for extending the generated editor with executability support, such as Java Emitter Templates (JET), which is another part of the Eclipse modeling project.
Fig. 1. The Complementary Views of the Unifying Meta-Model.
Fig. 2. The MetaEdit+ Workbench.
Fig. 3. The MetaEdit+ Object Tool (a) and Property Tool (b).
Fig. 4. The MetaEdit+ Graph Tool.
Fig. 5. The MetaEdit+ Modeler for the CM view.
Fig. 6. The Implementation Process of the Unifying Meta-Model Using Eclipse Plug-ins.
Fig. 7. The Generator Model of EMF.
Fig. 8. The Graphical Definition Model of GMP.
Fig. 9. The Mapping Model of GMP.
Table 1. Summary of our experience in using MetaEdit+ and EMF/GMP.
Understandability
  MetaEdit+: + Consistent interfaces of all tools. + Graphically create models. - No overview of the whole meta-model. - No explanation during the creation of instance models.
  Eclipse plug-ins of EMF and GMP: + Uniform usage of models and meta-models because MOF is the meta-meta-modeling language. + Support for many sources and formats to acquire domain models, hence adapting to user skills.
Model Evaluation
  MetaEdit+: + Tool architecture eliminates the need for evaluation: not possible to create models that do not conform to the meta-model.
  Eclipse plug-ins of EMF and GMP: + Ability to use a general Eclipse plug-in that provides error highlighting and suggested solutions during development time.
Executability
  MetaEdit+: + Scripting language for generating any text from models. - No generation framework (e.g. for Java code).
  Eclipse plug-ins of EMF and GMP: + Many Eclipse plug-ins to support executability (e.g. Java Emitter Templates, JET).
Model Repositories
  MetaEdit+: + Model reuse. - Proprietary binary storage format.
  Eclipse plug-ins of EMF and GMP: + XML-based: human-readable and accessible by other tools. + Java-based, enabling access and editing of code. + Many Eclipse plug-ins to support model integration.
Traceability and Change Management
  MetaEdit+: Not applicable since only one model is used.
  Eclipse plug-ins of EMF and GMP: + Partially supported using naming conventions. - Limited in parts of GMP, causing changes to be lost if some models are re-generated.
Other SE Activities
  MetaEdit+: + Collaborative development.
  Eclipse plug-ins of EMF and GMP: + Many Eclipse plug-ins to support a myriad of activities.
Tool Documentation
  MetaEdit+: + Extensive documentation. + Large support community.
  Eclipse plug-ins of EMF and GMP: + Extensive documentation. + Large support community. - No up-to-date resources on GMP.
http://www.metacase.com/
http://www.eclipse.org/
http://www.eclipse.org/modeling/emf/
http://www.eclipse.org/modeling/gmp/
http://www.uml.org/
http://www.orm.net/
http://office.microsoft.com/en-us/visio/
http://www.omg.org/spec/XMI/
http://www.eclipse.org/modeling/
http://www.omg.org/mof/
http://www.omg.org/mda/
www.papyrusuml.org/ | 37,766 | [
"1003523"
] | [
"300563"
] |
01484384 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484384/file/978-3-642-34549-4_15_Chapter.pdf | Muhammad Wasimullah Khan
Jelena Zdravkovic
email: jelenaz@dsv.su.se
A User-Guided Approach for Large-Scale Multi-Schema Integration
Keywords: Schema Integration, Business Intelligence, System Interoperability
Schema matching plays an important role in various fields of enterprise system modeling and integration, such as in databases, business intelligence, knowledge management, interoperability, and others. The matching problem relates to finding the semantic correspondences between two or more schemas. The focus of the most of the research done in schema and ontology matching is pairwise matching, where 2 schemas are compared at the time. While few semi-automatic approaches have been recently proposed in pairwise matching to involve user, current multi-schema approaches mainly rely on the use of statistical information in order to avoid user interaction, which is largely limited to parameter tuning. In this study, we propose a userguided iterative approach for large-scale multi-schema integration. Given n schemas, the goal is to match schema elements iteratively and demonstrate that the learning approach results in improved accuracy during iterations. The research is conducted in SAP Research Karlsruhe, followed by an evaluation using large e-business schemas. The evaluation results demonstrated an improvement in accuracy of matching proposals based on user's involvement, as well as an easier accomplishment of a unified data model.
Introduction
A schema represents a formal structure and can be of many types such as a database schema, XML schema, ontology description, or domain conceptual description. The schema matching problem relates to finding the semantic correspondences between two or more schemas. Semantic heterogeneity arises from differences in naming, structure and the context in which these schemas are being used. In the database field, schema matching is used to merge different relational schemas to produce a mediated schema. In e-business, it may be used to align business documents with varying data structures. In healthcare, records of the same patient may exist in various hospitals, needing alignment in order to present a single view. However, this alignment comes at a cost. It takes a number of domain experts to manually inspect schemas in order to perfectly align them. Over time, and especially with the proliferation of the Web, the number and size of schemas to be matched increase significantly. The complexity forces companies to find the matches manually, often with the help of commercial tools such as [START_REF] Zdravkovic | Adaptive Technology for the Networked Enterprise[END_REF], [START_REF]Microsoft BizTalk Server website[END_REF] and [START_REF]InfoSphere Platform[END_REF]. However, pure manual specification of mappings can be both time consuming and error-prone considering the number and size of schemas to be matched in this information age.
In order to overcome this shortcoming, a lot of research has been done over the last decade. [START_REF] Madhavan | Generic Schema Matching with Cupid[END_REF] were the first to propose treating schema matching as an independent problem. The focus of the most of the research done in schema and ontology matching is pairwise matching [START_REF] Rahm | A Survey of Approaches to Automatic Schema Matching[END_REF][START_REF] Berlin | Autoplex: Automated Discovery of Content for Virtual Databases[END_REF][START_REF] Doan | Reconciling Schemas of Disparate Data Sources: A Machine-Learning Approach[END_REF][START_REF] Madhavan | Generic Schema Matching with Cupid[END_REF][START_REF] Rahm | Matching large XML schemas[END_REF][START_REF] Bernstein | Industrial-strength Schema Matching[END_REF][START_REF] Euzenat | A survey of schema-based matching approaches[END_REF]. On the other hand, we do not find many examples where schemas are matched holistically. The goal of holistic or multischema integration is to integrate more than two schemas at once which result usually in creating a mediated schema where all matching elements are represented only once. Thus holistic schema matching resembles the pairwise matching in a sense that it generalizes the problem from matching two schemas to n schemas.
While a few semi-automatic approaches have been proposed in pairwise matching to involve the user, current holistic approaches rely on statistical information extracted from schemas in order to avoid user interaction. The user interaction is thus largely limited to parameter tuning. These n-way approaches are able to avoid user interaction and still match schemas to a considerable degree of accuracy due to the fact that their algorithms operate in domains with a quite limited number of distinct concepts [START_REF] Rahm | Towards Large-Scale Schema and Ontology Matching. Schema Matching and Mapping[END_REF]. The approaches are considered primarily for matching and integrating Web forms. The schemas are small and simple and consist of lists of attributes.
The problem arises when dealing with domains where schemas are large and complex. One example of such a domain is e-business, where several standards exist that provide common message structures for organizations to exchange business information. For instance, CIDX is a data exchange standard for the chemical industry and RosettaNet for the high-tech industry. Organizations adapt these standards to their specific needs. When organizations define mappings to conduct business together, those mappings are always their own interpretations of the standard. Thus, what is valid in one setting may not be true in another setting.
In this study, we present a user-guided iterative approach for large-scale multischema integration. Given n schemas, the goal is to match the elements iteratively and demonstrate that a learning approach results in an improved accuracy with each iteration. This results in a unified data model consisting of a set of schemas and a set of correspondences. For the realization of the unified data model, we complement human activities with a data mining technique for detecting similar structures in a repository of schemas. The purpose of the research is to show a mechanism of user involvement in n-way schema matching, benefits derived from the pairing of users and machine, while addressing the issues that possibly arise from a user involvement.
This paper is structured as follows: Section 2 presents our approach to a userguided large-scale multi-schema integration. Section 3 is devoted to the evaluation of results. Section 4 reports the works related to ours, and Section 5 summarizes our conclusions and indicates the steps forward.
Approach to a User-Guided Schema Integration
This section describes the process of the user-guided iterative approach for holistic schema matching. The main idea is to leverage the system's growing confidence from the user feedback loop to better rank the matching proposals with each iteration. Figure 1 shows a sample InvoiceTo schema. An XML schema consists of a root, intermediary elements (optional) and the leaves, together forming a hierarchy. Leaves are where the actual information within the schema lies. The rest of the elements are used to structure the schema. Here we make the assumption that leaf correspondences are already given by the user, as is the situation with e-business standards (http://help.sap.com/bp_bniv2604/BBLibrary/Documentation/705_BB_ConfigGuide_EN_DE.doc).
Fig. 1. A sample schema
Based on the input schemas and leaf correspondences, the system makes proposals to the user with varying degrees of confidence as we move up the hierarchy of a schema. For elements higher in the hierarchy, less information is available, so the system will be less confident than for those which are lower in the hierarchy. As the user judges proposals and new matchings are found, this information gets added to the system repository and is used for making decisions in subsequent iterations. Consequently, the system's confidence increases with each iteration and it is likely to produce higher quality proposals. Figure 2 below shows the iterative process.
In the first step, the hierarchical input schemas are transformed into the linear input format of the data mining step. These schemas, along with the given leaf correspondences, then act as input to Closed Frequent Itemset Mining (CFIM) [START_REF] Uno | LCM ver. 2: Efficient mining algorithms for frequent/closed/maximal itemsets[END_REF]. CFIM was originally used for market basket analysis to detect common patterns such as: 'Shoppers who buy oatmeal and sugar also buy milk.' In the context of this paper, CFIM is applied to detect similar structures in a repository of electronic business document schemas. The result of mining is a set of redundancy groups (step 4) which are presented to the user for judgment and are therefore also referred to as 'proposals' (Figure 3). Each redundancy group is composed of redundant transactions; they are called redundant because the transactions in a group share the same set of items. Transaction and item are the abstract terms used in the frequent itemset mining literature: the input to frequent itemset mining is a set of transactions, and each transaction is associated with a set of items. In XML schemas we only have elements, which form the trees of the type definitions, so elements are mapped to the notions of transaction and item in order to apply frequent itemset mining. If we want to know the similarity between two elements e1 and e2 of two different schemas S1 (containing e1) and S2 (containing e2), then e1 is a transaction and all elements below e1 in S1 are its items, and e2 is another transaction with all elements below e2 in S2 as its items.
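To make the transaction/item mapping concrete, the following minimal Python sketch enumerates closed redundancy groups for the toy data of Table 1. It is our own illustration of the idea, not the LCM implementation cited above; the naive enumeration and the element names are assumptions made for a small example, and a real system would use an efficient CFIM algorithm.

```python
from itertools import combinations

# Toy transactions: each schema element (transaction) maps to the set of items
# it contains, with leaves already unified via the given leaf correspondences.
transactions = {
    "Customer":   {"ID", "Name", "DateOfBirth", "Phone", "City", "Street", "State", "Zip", "Country"},
    "Partner":    {"ID", "DateOfBirth", "Email", "Fax", "City", "Country"},
    "Party":      {"ID", "Name", "Email", "Address"},
    "BuyerParty": {"ID", "Phone", "Fax", "Address"},
}

def redundancy_groups(transactions, min_transactions=2):
    """Enumerate groups of transactions sharing a non-empty common itemset,
    keeping only 'closed' groups: no transaction outside the group also
    contains the shared items."""
    names = list(transactions)
    groups = []
    for size in range(min_transactions, len(names) + 1):
        for combo in combinations(names, size):
            common = set.intersection(*(transactions[t] for t in combo))
            if not common:
                continue
            closed = all(not common <= transactions[t]
                         for t in names if t not in combo)
            if closed:
                groups.append((set(combo), common))
    return groups

for redundant_types, common_items in redundancy_groups(transactions):
    print(sorted(redundant_types), "->", sorted(common_items))
# Reproduces the seven redundancy groups listed in Table 2.
```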
The very first iteration simply presents the mining results to the user as proposals. In subsequent iterations, however, user feedback is taken into account to improve the results.
The ranked results are then presented to the user (Figure 3). The user judges these proposals; judging may involve approving a proposal, disapproving it, or making corrections to it. The user judgment produces agreed matchings (correspondences). The process repeats, with the matchings agreed in the previous iteration added to the set of leaf correspondences, until no new agreed matching is found.
In a learning approach, it is natural for the user to expect that their judgments will increase the quality of the proposals while decreasing the effort required to construct a correspondence. We therefore take a number of measures, shown in step 5 of Figure 2, based on the user judgment in order to meet these expectations. The measures include hiding of correspondences, adaptation of correspondences and backtracking.
Hiding correspondences
The hiding of proposals is needed because the user does not want to go through a long list of proposals in every iteration, especially proposals which they have already (dis)approved. Hiding thus helps us reduce the number of proposals presented to the user in each iteration. Hiding takes place in two cases: when a user approves a proposal and when a user disapproves one. Disapproval of a proposal results in hiding only that particular correspondence, as it does not add any information about other proposals. Approval of a proposal, on the other hand, provides valuable insight. From the hiding perspective, not only the approved correspondence is hidden but any sub-correspondence, if one exists, is also removed from the list. S is a sub-correspondence of correspondence T if the transactions of S are a subset of the transactions of T. To see the motivation behind hiding sub-correspondences, consider the example: let A.InvoiceTo, B.BillTo and C.Invoice_To be a correspondence, where A, B and C are different schemas. Then any subset of the set {A.InvoiceTo, B.BillTo, C.Invoice_To} is called a sub-correspondence. Thus, if the user agrees that {A.InvoiceTo, B.BillTo, C.Invoice_To} is a match, then {A.InvoiceTo, B.BillTo}, {B.BillTo, C.Invoice_To} and {A.InvoiceTo, C.Invoice_To} must be matches too. This implies that the system needs to hide not only {A.InvoiceTo, B.BillTo, C.Invoice_To} but also any of its subsets present in the list of proposals. Conversely, if the user approves {A.InvoiceTo, B.BillTo} and {B.BillTo, C.Invoice_To} instead of {A.InvoiceTo, B.BillTo, C.Invoice_To}, we can conclude by the transitive property that {A.InvoiceTo, C.Invoice_To}, and hence {A.InvoiceTo, B.BillTo, C.Invoice_To}, is a match too. In this case as well, the system needs to hide not only the user-approved proposals but also any other proposals that match as a result of those approvals. However, approval of {A.InvoiceTo, B.BillTo} alone does not lead to any conclusion about {A.InvoiceTo, B.BillTo, C.Invoice_To}, since the relationship between {A.InvoiceTo, C.Invoice_To} and {B.BillTo, C.Invoice_To} is still unknown; {A.InvoiceTo, B.BillTo, C.Invoice_To} therefore remains part of the list of proposals.

Adapting correspondences

Adaptation of a correspondence is possible only in the case of approval of a correspondence. To understand the concept behind adaptation, assume the mining algorithm proposes the following redundancy groups:
g1: A.InvoiceTo, B.BillTo, C.Invoice_To
g2: A.InvoiceTo, B.Address, C.Address
g3: A.InvoiceTo, C.Organization
g4: A.InvoiceTo, B.BillTo, D.Address
Let us assume the user approves group g1. That would mean A.InvoiceTo can only form a correspondence with the BillTo element from schema B and the Invoice_To element from schema C. If a transaction (e.g. A.InvoiceTo) from an approved group (e.g. g1) forms a correspondence with any other schema element (e.g. B.Address, C.Address) besides the already approved ones (e.g. B.BillTo, C.Invoice_To), that transaction is said to be in conflict. To determine a conflict, the adaptation algorithm checks whether any of the transactions (e.g. A.InvoiceTo, B.BillTo or C.Invoice_To) from the approved group exists in some other proposed group (e.g. g2, g3 and g4). In our example, A.InvoiceTo exists in all three other groups g2, g3 and g4, while B.BillTo exists in g4. Thus, A.InvoiceTo is in conflict with B.Address and C.Address in g2 and with C.Organization in g3. The transaction(s) in conflict must be removed from the proposal in order to make it valid. Group g2 is then adapted to:
g2': B.Address, C.Address
Similarly, A.InvoiceTo is removed from g3, which reduces the group to a single transaction; such groups are removed (hidden) from the list of proposals. Now assume there exists a fourth schema D which takes no part in the approved correspondence g1. If any of the transactions from the approved group (A.InvoiceTo, B.BillTo, C.Invoice_To) forms a correspondence with a schema with which no correspondence currently exists (D), such a group is not affected. In our example, g4 remains unaffected, as it could still result in a valid correspondence.
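The adaptation step just described can be sketched in a few lines of Python. This is our own illustration of the behaviour in the running example, not the authors' implementation; the schema() helper and the representation of groups as sets of qualified names are assumptions made for the example.

```python
# Groups are sets of qualified transaction names such as "A.InvoiceTo".
def schema(transaction):
    return transaction.split(".")[0]

def adapt_proposals(proposals, approved):
    """After an approval, hide the approved group, remove conflicting
    transactions from the remaining groups, and drop groups that shrink
    to a single transaction."""
    approved_schemas = {schema(t) for t in approved}
    adapted = []
    for group in proposals:
        if group == approved:
            continue  # the approved group itself is hidden
        new_group = set()
        for t in group:
            # a transaction of the approved group is in conflict if the rest of
            # the group pairs it with other elements of already-covered schemas
            conflict = t in approved and any(
                schema(u) in approved_schemas and u not in approved
                for u in group if u != t)
            if not conflict:
                new_group.add(t)
        if len(new_group) > 1:
            adapted.append(new_group)
    return adapted

proposals = [{"A.InvoiceTo", "B.BillTo", "C.Invoice_To"},   # g1
             {"A.InvoiceTo", "B.Address", "C.Address"},     # g2
             {"A.InvoiceTo", "C.Organization"},             # g3
             {"A.InvoiceTo", "B.BillTo", "D.Address"}]      # g4
approved = {"A.InvoiceTo", "B.BillTo", "C.Invoice_To"}
print(adapt_proposals(proposals, approved))
# g2 becomes {B.Address, C.Address}, g3 disappears, g4 stays unchanged
```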
Backtracking
Besides hiding and adaptation, the system allows the user to backtrack. Backtracking means that the user can retrace earlier judgments if they feel a mistake has been made. One possible way of detecting that a mistake may have been made is to check the list of proposals: approval of undesired correspondences may lead to an unwanted list of proposals.
To facilitate backtracking, two lists are displayed. One is the list of previously approved correspondences, where the user can click on each approved correspondence to see which other correspondences were hidden when that particular correspondence was approved. The other is the list of hidden correspondences; clicking on a hidden correspondence shows the approved correspondence because of which it was hidden. Backtracking can be performed on two levels: the individual correspondence level and the iteration level. The sketch following this paragraph outlines one possible bookkeeping structure for this.
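A hypothetical bookkeeping structure for backtracking could look as follows. The class and attribute names are our own assumptions for illustration; the paper does not prescribe a particular implementation.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class ApprovalRecord:
    approved: frozenset                                            # the approved correspondence
    hidden_because: List[Set[str]] = field(default_factory=list)   # proposals hidden by this approval

class BacktrackLog:
    def __init__(self):
        self.records = []   # ordered by approval time / iteration

    def log_approval(self, approved, hidden):
        self.records.append(ApprovalRecord(frozenset(approved), list(hidden)))

    def undo_last_approval(self, proposal_list):
        """Correspondence-level backtracking: restore the proposals hidden by
        the most recent approval, including the approved group itself."""
        if not self.records:
            return
        record = self.records.pop()
        proposal_list.extend(record.hidden_because)
        proposal_list.append(set(record.approved))
```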
Ranking of Proposals
The list of proposals generated by the mining algorithm is ranked (step 7, Figure 2). To understand why ranking is important, consider the example in Tables 1 and 2: Table 2 shows, for each redundancy group, the types that the group shares and the elements those types have in common (one group per row). Even from this small example it can be seen that some redundancy groups are more interesting than others. For example, g2, although shared among fewer types, has more common elements and is thus more interesting than g1, which only shares the ID of the types. In a real-world scenario there can be many such cases. Hence, it is important to rank the mining results so that potentially interesting results are ordered higher than others.
A redundancy group gi could be more interesting than another group gj based on three factors. Two of them are evident from the motivating example above: the number of types (transactions) in a group and the number of common elements, which form the core of a group. However, it is also important to take the uncommon items in a redundancy group into account as well. If the ratio of common to uncommon elements for one of the groups (gi) is higher than for the other (gj), then gi could potentially be more interesting than gj. The rank of a redundancy group is thus the product of three components: the number of transactions, the number of common elements and the ratio of common elements to the average number of elements in a group.
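A minimal sketch of this ranking heuristic, as we read it from the description above, is given below; the exact formula used in the implementation may differ.

```python
def rank(group_transactions):
    """group_transactions: dict mapping transaction name -> set of items."""
    item_sets = list(group_transactions.values())
    common = set.intersection(*item_sets)
    n_transactions = len(item_sets)
    n_common = len(common)
    avg_elements = sum(len(s) for s in item_sets) / n_transactions
    # product of: number of transactions, number of common elements, and the
    # ratio of common elements to the average number of elements in the group
    return n_transactions * n_common * (n_common / avg_elements)

# Group g2 from the running example (Customer, Partner):
g2 = {
    "Customer": {"ID", "Name", "DateOfBirth", "Phone", "City", "Street", "State", "Zip", "Country"},
    "Partner":  {"ID", "DateOfBirth", "Email", "Fax", "City", "Country"},
}
print(rank(g2))  # 2 * 4 * (4 / 7.5), approximately 4.27
```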
In an iterative approach, where the user is not likely to go through all the proposals in a single iteration, it is important to rank the proposals in a way that makes it easier for the user to find the right correspondences. To facilitate this, we rank elements that are lower in the hierarchy higher. The reason is that this prevents the user from being overwhelmed with many elements to compare: the lower an element is in the hierarchy (the fewer children it has), the fewer elements need to be matched. This not only decreases the user effort, but as more and more information becomes available in subsequent iterations (by matching lower-level elements), the user also becomes more confident in decisions regarding the matching of higher-level elements.
Evaluation
In this section, we present the results and analyze them to verify our claim that a user-guided iterative approach to n-way schema matching results in increasingly improved accuracy. First we describe the experimental design and settings, and then present the results in the light of those settings.
Experimental Design & Settings
Any schema integration approach is judged by the quality of the correspondences it produces. It is also necessary to examine whether the approach is able to find every correspondence needed to unify the schemas. Since our approach is iterative, we not only evaluate the quality of the final set of correspondences, but also assess the quality of the proposals during the iterations and at the end of all iterations. But what is quality? Quality encompasses the correctness and completeness of an integration proposal, which we define in the following ways:
Correctness of proposal
Here correctness is defined in terms of an individual proposal; it determines the quality of a single proposal. Assuming a proposal is composed of 50 transactions and 45 of them correspond, the precision or correctness of the proposal would be 0.9. Thus, if there are 10 proposals in total, and 5 of them have a precision of 1.0, 3 have 0.9 and 2 have 0.8, then the overall precision of the 10 proposals would be (5*1.0 + 3*0.9 + 2*0.8)/10 = 0.93. This measure is used to assess the user effort required to correct a proposal that is not entirely correct. If 15 or 20 out of 50 transactions need to be corrected, it would make sense for the user to ignore or discard such a proposal.
Correct correspondences-to-incorrect-correspondences ratio
Here the quality criterion refers to how many of the proposals in total correspond. A proposal is either precise or it is not, and the criterion measures the overall quality of the proposals in a list. For example, if there are 10 proposals in a list and 5 of them have a precision of 1.0 according to the criterion defined in point 1, then the precision according to this criterion would be 0.5 (5/10). This measure takes into account the fact that the user may not have complete knowledge of the domain: if the entire list of proposals is correct, the user is less likely to make a mistake.
Correctness and completeness of proposal
Besides correctness, this criterion also includes completeness. Completeness means that transactions from each corresponding schema should be part of the correspondence. Correct and complete proposals are thus the actual correspondences to be identified, while correct but not complete proposals are sub-correspondences. The sketch below illustrates how these three measures can be computed.
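The following minimal sketch computes the three quality measures, assuming a gold standard that lists the actual correspondences as sets of transactions. The function names and the gold-standard representation are our own assumptions for illustration.

```python
def proposal_precision(proposal, gold_correspondence):
    """Fraction of the proposal's transactions that really correspond (measure 1)."""
    return len(proposal & gold_correspondence) / len(proposal)

def overall_precision(proposals, gold):
    """Average per-proposal precision over a list of proposals."""
    scores = [max(proposal_precision(p, g) for g in gold) for p in proposals]
    return sum(scores) / len(scores)

def correct_ratio(proposals, gold):
    """Share of proposals that are entirely correct, i.e. precision 1.0 (measure 2)."""
    fully_correct = [p for p in proposals if any(p <= g for g in gold)]
    return len(fully_correct) / len(proposals)

def correct_and_complete(proposals, gold):
    """Proposals that equal an actual correspondence exactly (measure 3)."""
    return [p for p in proposals if p in gold]

gold = [frozenset({"A.InvoiceTo", "B.BillTo", "C.Invoice_To"})]
proposals = [frozenset({"A.InvoiceTo", "B.BillTo"}),                   # correct but incomplete
             frozenset({"A.InvoiceTo", "B.BillTo", "C.Invoice_To"})]   # correct and complete
print(overall_precision(proposals, gold), correct_ratio(proposals, gold),
      len(correct_and_complete(proposals, gold)))
```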
Based on these definitions of quality, we evaluate whether or not we were able to achieve the following goals:
• G1: Have quality proposals. The iterative process can be seen as a journey towards an ideal UDM consisting of only semantically correct correspondences. To reach that target, the objective is to have proposals of high quality.
• G2: Have better quality proposals at the top. Achieving 100% correct proposals cannot be expected, because matching is a subjective task; there will be better and worse proposals. At the same time, there will be many proposals when integrating multiple large schemas, and we expect the user to only go through a certain number of top proposals before moving on to the next iteration. Therefore, the ordering of the proposals is of the utmost importance: proposals of better quality must be ranked higher than others.
• G3: Improve over time. This is the whole purpose of the learning that we gain from user actions. The more information we obtain, the higher the chances that the results will improve. Improvement can occur in many ways: generating more semantically correct proposals, the way these proposals are ordered, hiding or removing incorrect proposals, automatic adaptation of proposals, etc.
• G4: Minimize the number of iterations. This helps reduce the user effort, since in the case of very large schemas the computation can take days.
• G5: Find a Unified Data Model (UDM). This is the output that the system is expected to produce. It consists of a set of schemas and a set of correspondences.
For space reasons, we show the evaluation results for three of the above five goals. We have used five hierarchical XML schemas. These schemas (CIDX, Noris, Excel, Paragon and Apertum) belong to the purchase order domain. We have used them because, on the one hand, they are fairly complex real-world schemas and, on the other, their mappings are available, which helped us simulate the user and verify our results.
In order to avoid manual specification of mappings and manual acknowledgement or rejection of proposed matches, we have simulated a user which performs these tasks automatically. The simulated user approves the first correct and complete proposal in the list of proposals, while ignoring any proposal above it. The user interaction is thus minimal, because the simulated user only approves proposed matches; incorrect proposals are ignored and no corrections are made.
One proposal is approved per iteration as long as there remains an actual correspondence to identify; otherwise the algorithm stops. This was necessary to demonstrate improvement during the iterations, although the algorithm is able to find approximately 80% of the actual correspondences in the very first iteration. There were 14 correspondences to identify for the 5 schemas, and it took 14 iterations, as one correspondence is identified in each.
Experimental Results
Have better proposals at the top (G2): analyze the top N proposals presented to the user with respect to quality.
For the five schemas, the number of proposals generated in the first iteration was 75. We have set a limit of 10 proposals for which accuracy is demonstrated. If the accuracy of the top 10 proposals is high, the user is most likely able to find all correspondences within these proposals over a certain number of iterations. To demonstrate the degree of accuracy of the top 10 proposals, we show where the first correct and complete proposal lies in the list of proposals and how the average quality of the top 10 proposals compares with the average quality of all proposals. Table 3 presents the data for the average index of the first correct and complete proposal. On average, every 2nd proposal was the actual correspondence which the user needed to identify. This implies that our simulated user only needed to process 2 proposals in each iteration, ignoring the first and approving the 2nd one. The worst index of 8 means that the user should be able to find every correspondence within the top 10 proposals.
Figure 4 shows the average quality comparison for the top 10 vs. all proposals. It can be seen that there is a significant increase in the quality of the top 10 proposals in comparison to all proposals. On average, 4 out of 5 transactions correspond, while 8 out of 10 proposals were correct. This signifies that our ranking algorithm is able to order the proposals such that quality proposals are processed first by the user. It also shows that even a proposal list that is only 56% correct overall offers significant quality at the top. To demonstrate the improvement, we show how proposal quality varies over the iterations and whether the additional information leads to finding missing correspondences. Figure 5 shows the average quality of the top 10 proposals over 14 iterations. The graph shows all the quality variants defined in the previous section. All three curves show a similar trend of increasing accuracy, reaching a peak by the 10th iteration. The ratio of correct to incorrect correspondences increases as the number of iterations increases (see the correct-to-incorrect ratio curve). While only 50% of the top 10 proposals were 'correct' when only leaf information was available, from the 4th to the 9th iteration the percentage increases to 90%. In the 10th iteration, there was no 'incorrect' proposal in the top 10. This trend demonstrates the improvement in the accuracy of proposals as additional information is gained through user feedback. However, it can also be noted that there is a falling trajectory after the 10th iteration. One possible reason for this behavior is that there are too few correspondences left to identify: since our simulated user does not reject any 'incorrect' proposals but only ignores them, the number of incorrect proposals may be much higher than the number of correct proposals at this point.
The demonstrated improvement in the accuracy of the proposals also leads to finding missing correspondences during the iterations. Figure 6 shows that there were 6 missing correspondences in the first iteration, which were reduced to zero by the 13th iteration. Thus our improvement measures of hiding and adapting proposals were successful in finding all the correspondences needed to unify the schemas. It can be argued that the improvement during iterations, having the best proposals at the top and having quality proposals, would mean nothing if the unified data model could not be accomplished because of missing correspondences. A correct and complete UDM is obtained if the number of correct and complete identified correspondences equals the number of actual correspondences to be found, and the numbers of false and missing correspondences are zero. Our results show that we are able to identify every correspondence needed to unify the schemas and that there were no missing correspondences. In addition, our simulated user was 'informed', so no incorrect proposals are expected to be approved. Therefore, we can say that the ideal (correct and complete) UDM is obtained.
Summing up, we can claim that the user-guided iterative approach to multi-schema integration leads to an increase in the precision of proposals as additional information becomes available. Although the overall precision of the proposals is low, we have shown that proper ordering of the proposals, combined with the flexibility to iterate without going through each and every proposal, still makes that precision significant. Moreover, the reliance on user feedback means that it can affect the results both positively and negatively. The user has the final word when it comes to decisions regarding the proposed matches; if the user approves a seemingly 'incorrect' proposal, perhaps in the user's context it was the 'correct' match. However, through the backtracking feature the user can always make corrections if a proposal was approved by mistake.
Related work
A lot of research has been done on schema matching over the past decade. An overview of the approaches is given in [START_REF] Bellahsene | Schema Matching and Mapping[END_REF]. There are also many surveys reviewing matching and mapping tools for schemas [START_REF] Do | Comparison of Schema Matching Evaluations[END_REF][START_REF] Rahm | A Survey of Approaches to Automatic Schema Matching[END_REF][START_REF] Shvaiko | A Survey of Schema-Based Matching Approaches[END_REF] and for ontologies [START_REF] Noy | Semantic Integration: A Survey Of Ontology-Based Approaches[END_REF]. The iterative approach has so far been studied only in the context of two-way schema matching; n-way matching is not solved and remains a large area of research [START_REF] Rahm | Towards Large-Scale Schema and Ontology Matching. Schema Matching and Mapping[END_REF].
The two approaches closest to our work are [START_REF] Bernstein | Incremental Schema Matching[END_REF][START_REF] Chen | A User Guided Iterative Alignment Approach for Ontology Mapping[END_REF], which take existing mappings and user action history into account in order to match schemas incrementally and iteratively. To address the problems associated with the single-shot approach, [START_REF] Bernstein | Incremental Schema Matching[END_REF] proposed to match schemas incrementally, asking the user to select a single element for which top-k matches are generated. The user action history is exploited to rank the match candidates of the selected element. Our approach is similar in the sense that we also generate top-k match candidates and use existing mappings and user action history to rank these candidates. However, we generate top-k match candidates for each element of the schema. It is true that this results in many false positives, but we negate their impact by ranking the potentially correct matches higher, as our results show, and by allowing the user to iterate without going through each match candidate. Additionally, we exploit user action history not only for ranking the match candidates but also for molding the result set. For large schemas, the approach of selecting each element and generating candidates does not seem feasible due to the user effort and time involved. In this respect, our approach is heuristic and much more applicable.
In [START_REF] Chen | A User Guided Iterative Alignment Approach for Ontology Mapping[END_REF] the user is involved by being presented with best-guess results, which the user can tailor to their needs. More specifically, the user can reject the alignments they find to be incorrect. The rejected alignments are then excluded from the result set in the next iteration. In an evaluation, the authors demonstrated iterative improvement by executing five iterations. For large ontologies or schemas, it appears that their approach would take a significant number of iterations before the final result set can be achieved. Moreover, their approach does not allow undoing previous user actions. This is a significant drawback, because if the user accidentally rejects a correct alignment, the process has to be started all over again. Our objective of improving the precision of the result set during iterations is similar, but our approach is more interactive in the sense that we allow the user to approve, disapprove, and correct the correspondences. Major learning occurs from the approval action, as it helps reduce the set of match candidates significantly: not only is the approved correspondence hidden, but any sub-correspondences and conflicting correspondences are hidden as well. We also learn when the user revokes an incorrect decision.
Finally, the two approaches described above perform pairwise matching. Although some approaches have been proposed for holistic schema matching, they are of the single-shot type. From the user interaction perspective, they follow a different matching process: learning occurs through statistical methods without user involvement. This usually limits their success to specific domains, especially domains with a limited number of concepts. In our case, learning occurs through user involvement, which makes the approach much more generic. It can be expected that our approach performs worse than other holistic approaches in the domains where those approaches are expert.
Conclusions and Future Work
In this paper, we have described a user-guided iterative approach for large-scale multi-schema integration. The approach learns from user feedback and produces matching proposals with increasing accuracy over the iterations. The role of the user is to judge the proposals presented and to guide the iterative matching process until a unified data model is accomplished. The matching problem is abstracted to finding closed frequent itemsets, using a CFIM implementation. The output of CFIM is a set of redundancy groups, or proposals. Among these proposals, some are more interesting than others. Three components make a proposal interesting: the number of transactions, the number of common elements and the ratio of common elements to the average number of elements. Potentially more interesting proposals are thus ranked higher than others. In the ranking of proposals, preference is given to elements lower in the hierarchy, which are ranked higher. This prevents the user from being overwhelmed with many correspondences to compare. Moreover, as additional information becomes available, the user becomes more confident in decisions regarding the higher-level correspondences.
We conducted a comprehensive evaluation of our work using large e-business schemas. In the evaluation, we confirmed our hypothesis that the iterative approach leverages the growing confidence gained from the user feedback loop to better rank the proposals with each iteration. At present we cannot guarantee an overall higher level of precision, but we are confident that the proposals become more precise than they were in earlier iterations, irrespective of the number and size of the schemas.
In the future, our work can be improved in different ways. Specifically, fragment-based matching as proposed by [START_REF] Rahm | Matching large XML schemas[END_REF] could be considered in order to address the execution time problem that may arise with a greater number of schemas. Furthermore, more and larger schemas demand an advanced graphical user interface; the interactive techniques proposed by [START_REF] Falconer | Interactive Techniques to Support Ontology Matching[END_REF] for matching tasks need to be taken into account. Finally, a community-driven approach where a group of users (a governance board) matches schemas together is also a possibility [START_REF] Rech | Intelligent assistance for collaborative schema governance in the German agricultural eBusiness sector[END_REF][START_REF] Zhdanova | Community-Driven Ontology Matching[END_REF].
Fig. 2. Iterative approach to finding UDM
Fig. 3. Output of mining - Redundancy groups (proposals)
Fig. 4. Average quality comparison - top 10 proposals vs. all proposals
Fig. 5. Average quality of top 10 proposals (all variants)
Fig. 6. Missing correspondences
Table 1. Data Structures. The table shows the types (transactions) and their corresponding elements (one type per row), each belonging to a different schema. Obviously, there are some overlapping elements among the types. These overlapping or common elements lead to redundant types, that is, types which share the same set of elements (not necessarily all of them). Mining results in the set of redundancy groups shown in Table 2.

Type       | Elements
Customer   | ID, Name, DateOfBirth, Phone, City, Street, State, Zip, Country
Partner    | ID, DateOfBirth, Email, Fax, City, Country
Party      | ID, Name, Email, Address
BuyerParty | ID, Phone, Fax, Address
Table 2. Redundancy Groups (Proposals)

Group | Redundant Types                      | Common Elements
g1    | Customer, Partner, Party, BuyerParty | ID
g2    | Customer, Partner                    | ID, DateOfBirth, City, Country
g3    | Customer, Party                      | ID, Name
g4    | Customer, BuyerParty                 | ID, Phone
g5    | Partner, Party                       | ID, Email
g6    | Partner, BuyerParty                  | ID, Fax
g7    | Party, BuyerParty                    | ID, Address
Table 3. Data for the average index of the first actual correspondence to identify

Index of first actual correspondence to identify: Best = 1, Worst = 8, Avg. = 2
Acknowledgments: This research is conducted at SAP Research Karlsruhe, Germany, under the supervision of Jens Lemcke. It is part of the iGreen project for providing users with "standardized, industry-wide connectivity". To support this, the aim is to build a service network where small businesses can have access to secure e-services. Semi-automatic integration of schemas is used to generate transformations between different schemas. These transformations are then used in a service network to facilitate the German agricultural sector.
"1003525",
"942421"
] | [
"366312",
"300563"
] |
Janis Stirna
Anne Persson
email: anne.persson@his.se
Evolution of an Enterprise Modeling Method - Next Generation Improvements of EKD
Keywords: Enterprise Modeling, Enterprise Modeling method, method evolution
The field of Enterprise Modeling (EM) consists of many methods and method development is one of the key activity areas of EM practitioners and researchers. This paper ponders on future improvements for one EM method, namely Enterprise Knowledge Development (EKD). A number of improvements to the EKD method are identified and discussed, based on empirical observations. The improvements fall into four categories: the modeling language, the modeling process, tool support, and other improvements. The paper can be seen as a step towards a new and improved version of EKD.
Introduction
Enterprise Modeling (EM) is a process where an integrated and negotiated model describing different aspects of an enterprise is created. In [START_REF] Persson | An explorative study into the influence of business goals on the practical use of Enterprise Modelling methods and tools[END_REF] and [START_REF] Bubenko | An Intentional Perspective on Enterprise Modeling[END_REF] we have argued that EM usage is heavily influenced by a large number of situational factors, one of which is the intention behind its use. Knowledge about the purpose of a particular EM venture is essential when making decisions about which modeling language, way of working, tool support etc. is appropriate. It is important to bear in mind that organizations do not use EM methods only for the sake of using methods. They want to solve a particular business problem and EM is only one of several instruments in the problem solving process. In [START_REF] Persson | An explorative study into the influence of business goals on the practical use of Enterprise Modelling methods and tools[END_REF] and [START_REF] Bubenko | An Intentional Perspective on Enterprise Modeling[END_REF] we have stated that EM projects usually have the following purposes:
- To develop the business. This entails, e.g., developing business vision, strategies, redesigning business operations, developing the supporting information systems, etc. Business development is one of the most common purposes of EM. It frequently involves change management - determining how to achieve visions and objectives from the current state in organizations. Business process orientation is a specific case of business development - the organization wants to restructure/redesign its business operations.
- To ensure the quality of the business operations. This purpose primarily focuses on two issues: 1) sharing the knowledge about the business, its vision, and the way it operates, and 2) ensuring the acceptance of business decisions through committing the stakeholders to the decisions made. A motivation to adopt EM is to ensure the quality of operations. Two important success factors for ensuring quality are that stakeholders understand the business and that they accept/are committed to business decisions. Recently, organizations have taken an increased interest in Knowledge Management (KM), which concerns creating, maintaining and disseminating organizational knowledge between stakeholders. Sharing business knowledge becomes instrumental when organizations merge or collaborate in carrying out a business process. One aspect of this is terminology. EM has a role to play here as it aims to create a multifaceted "map" of the business as a common platform for communicating between stakeholders. One KM perspective is keeping employees informed with regard to how the business is carried out. Most modern organizations consider that the commitment of stakeholders to carry out business decisions is a critical success factor for achieving high quality business operations. Differences in opinion about the business must hence be resolved, requiring that communication between stakeholders be stimulated. EM, particularly using a participative approach, can be effective to obtain such commitment.
- To use EM as a problem solving tool. EM is here only used for supporting the discussion among a group of stakeholders trying to analyze a specific problem at hand.
In some cases, carrying out an EM activity is helpful when capturing, delimiting, and analyzing the initial problem situation and deciding on a course of action. In such cases EM is mostly used as a problem solving and communication tool. The enterprise model created during this type of modeling is used for documenting the discussion and the decisions made. The main characteristics of this purpose are that the company does not intend to use the models for further development work and that the modeling activity has been planned to be only a single iteration. In some cases this situation changes into one of the other EM purposes because the organization sees EM as beneficial or because the problem turns out to be more complex than initially thought and more effort is needed for its solution.
EM is usually organized in the form of a project, or it is part of a larger project, e.g. an organizational or information system (IS) development project. The resulting models, however, might be used on a more permanent basis, e.g. during run-time of an IS or for knowledge management purposes.
In this paper we focus on a particular EM method, Enterprise Knowledge Development (EKD) [START_REF] Bubenko | User Guide of the Knowledge Management Approach Using Enterprise Knowledge Patterns[END_REF]. Both authors of the paper have been involved in developing and using its previous and current versions since the beginning of the 1990s. We firmly believe that it is essential that method developers from time to time critically assess their method(s), take potential improvements into consideration and consequently develop a new and improved version of the method.
The goal of the paper is to identify and discuss next generation improvements to the EKD method, based on empirical observations. The improvements fall into four categories: 1) the modeling language, 2) the modeling process, 3) tool support, and 4) other improvements. Hence, the paper is a step towards a new and improved version of EKD.
The remainder of the paper is organized as follows. Section 2 describes the research method while section 3 introduces the EKD EM method in its recent version. Section 4 presents the potential developments in terms of the EKD's modeling language, while section 5 focuses on how the proposed modeling process of EKD can be improved. Section 6 addresses requirements for tool support and section 7 outlines some other possible directions for improvement of EKD. The paper ends with some concluding remarks in section 8.
Research Approach
The empirical sources of this paper are:
• Extensive fieldwork applying versions of EKD to a variety of problems,
• Interview studies involving experienced EM consultants and method developers.
The most influential fieldwork cases were, for the most part, carried out within international research projects financed by the European Commission. An overview of the cases is given in Table 1. The applications that contributed to this paper took place in the years 1993-2008. Their processes and their outcomes were observed and analyzed. Collected data and experiences from method development, fieldwork and interviews were analyzed. Two interview studies focusing on the intentional and situational factors that influence participatory EM and EM tool usage, as reported in [START_REF] Persson | An explorative study into the influence of business goals on the practical use of Enterprise Modelling methods and tools[END_REF] and [START_REF] Persson | The Practice of Participatory Enterprise Modelling -a Competency Perspective[END_REF], were also carried out. A more extensive presentation of these cases is available in [START_REF] Stirna | An Enterprise Modeling Approach to Support Creativity and Quality in Information Systems and Business Development, Innovations in Information Systems Modeling: Methods and Best Practices[END_REF]. In addition, EKD and its earlier versions have been used in a number of smaller problem solving and organizational design cases at many organizations in Sweden and Latvia. The application context of these cases has been change management: the organizations have had intentions to either develop new solutions to their business problems or to improve the efficiency of existing ones. All these projects have applied the participatory approach to EM involving one or two modeling facilitators. There have been projects with more than 30 EM sessions (e.g. at Vattenfall and Riga City Council) involving people from all management levels as well as from operational levels. There have also been projects where the modeling has been limited to a few (2-4) modeling sessions with top and middle level management (e.g. at Verbundplan and SYSteam). Most of the modeling participants did not have any prior training with EKD, but some of them were experienced with other conceptual modeling methods. Empirical data from modeling activities of the above mentioned types were documented as written notes and analyzed. In addition, interviews with EM practitioners about EM practice were transcribed and analyzed using Grounded Theory [START_REF] Glaser | The Discovery of Grounded Theory: Strategies for Qualitative Research[END_REF] data analysis. The data and analyses have then been used as input for a series of argumentative syntheses targeting:
1. Requirements on EM related to the purposes of using EM, reported in [START_REF] Bubenko | An Intentional Perspective on Enterprise Modeling[END_REF],
2. Core competencies of an EM practitioner, reported in [START_REF] Persson | Towards Defining a Competence Profile for the Enterprise Modeling Practitioner[END_REF], and
3. The relationship between core competencies and the purposes of EM, reported in [START_REF] Stirna | Purpose Driven Competency Planning for Enterprise Modeling Projects[END_REF].
The goal of this series of analyses has been to establish a line of research that addresses different aspects of EM from a purpose and situational perspective.
Throughout the research process we have also reflected on the advantages and drawbacks of the EKD EM method as related to different purposes, in particular when analyzing the requirements that different purposes pose on an EM method. Most relevant to this paper are the requirements on the modeling language, modeling process and tool support.
Hence, the results from these analyses in combination with our general knowledge about the field of EM have guided the identification of a number of potential improvements to the EKD method.
The EKD Enterprise Modeling Method - History and Current State
In Scandinavia, methods for Business or Enterprise Modeling (EM) were initially developed in the 1980s by Plandata, Sweden [START_REF] Willars | Handbok i ABC-metoden[END_REF] and later refined by the Swedish Institute for System Development (SISU). A significant innovation in this strand of EM was the notion of business goals as part of an Enterprise Model, enriching traditional model component types such as entities and business processes. The SISU framework was further developed in the ESPRIT projects F3 - "From Fuzzy to Formal" and ELEKTRA - "Electrical Enterprise Knowledge for Transforming Applications". The current framework is denoted EKD - "Enterprise Knowledge Development" [START_REF] Bubenko | User Guide of the Knowledge Management Approach Using Enterprise Knowledge Patterns[END_REF]. The method is, hence, a representative of the Scandinavian strand of EM methods.
In our view, an EM method is more than a modeling language. An EM method has an intended process -including ways of working, EM project management and competency management -by which the enterprise models are produced. It also proposes which tools should be used during that process.
The EKD modeling language
EKD -Enterprise Knowledge Development method [START_REF] Bubenko | User Guide of the Knowledge Management Approach Using Enterprise Knowledge Patterns[END_REF] is a representative of the Scandinavian strand of EM methods. It defines the modeling process as a set of guidelines for a participative way of working and the modeling product in terms of six sub-models, each focusing on a specific aspect of an organization (see table 2).
The modeling components of the sub-models are related to each other within a sub-model (intra-model relationships), as well as to components of other sub-models (inter-model relationships). Figure 1 shows the inter-model relationships. The ability to trace decisions, components and other aspects throughout the enterprise depends on the use and understanding of these relationships. For instance, statements in the GM need to be defined more clearly as different concepts in the CM. A link is then specified between the corresponding GM component and the concepts in the CM. In the same way, goals in the GM motivate particular processes in the BPM. The processes are needed to achieve the goals stated. A link is therefore defined between a goal and the process. Links between models make the model traceable. They show, for instance, why certain processes and information system requirements have been introduced. While different sub-models address the problem domain from different perspectives, the inter-model links ensure that these perspectives are integrated and provide a complete view of the problem domain. They allow the modeling team to assess the business value and impact of the design decisions. There are two alternative approaches to notation in EKD: (1) a fairly simple notation, suitable when the domain stakeholders are not used to modeling and the application does not require a high degree of formality, and (2) a semantically richer notation, suitable when the application requires a higher degree of formality and/or the stakeholders are more experienced with modeling. The modeling situation at hand should govern the choice of notation, which will be shown in the subsequent discussion about the method. The full notation of EKD can be found in [START_REF] Bubenko | User Guide of the Knowledge Management Approach Using Enterprise Knowledge Patterns[END_REF].
The EKD Modeling Process
In order to achieve high quality results, the modeling process is as important as the modeling language used. There are two levels of the EM process: the EM project level and the modeling level.
- The EM project level, where the modeling activities are placed in a context of purpose. In [START_REF] Persson | Towards Defining a Competence Profile for the Enterprise Modeling Practitioner[END_REF] we described the generic process including the activities listed in Table 3.
- The modeling level, where domain knowledge is gathered and enterprise models are created and refined.
When it comes to gathering domain knowledge to be included in enterprise models, the main EKD way of working is facilitated group sessions. In a facilitated group session, participation is consensus-driven in the sense that domain stakeholders "own" the models and govern their contents. In contrast, consultative participation means that analysts create models and domain stakeholders are then consulted in order to validate the models. In the participatory approach, stakeholders meet in modeling sessions, led by a facilitator, to create models collaboratively. In the sessions, models are often documented on large plastic sheets using paper cards. The resulting "plastic wall" is viewed as the official "minutes", for which every domain stakeholder in the session is responsible [START_REF] Persson | Enterprise Modelling in Practice: Situational Factors and their Influence on Adopting a Participative Approach[END_REF]. There are two main arguments for using the participative approach, namely:
1. The quality of models is enhanced if they result from collaboration between stakeholders, rather than from consultants' interpreting stakeholder interviews.
2. The approach involves stakeholders in the decision making process, which facilitates the achievement of acceptance and commitment. This is particularly important when modeling is focused on changing some aspect of the domain, such as its visions/strategies, business processes and information system support.
In a modeling session, the EKD process populates and refines the sub-model types used in that particular session gradually and in parallel. When working with a model type, driving questions are asked in order to keep this parallel modeling process going. This process has three goals: (1) to define the relevant inter-model links, (2) to drive the modeling process forward, and (3) to ensure the quality of the model. Figure 1 illustrates driving questions and their consequences for establishing inter-model links in the model. It is also argued that shifting between model types while focusing on the same domain problem enhances the participants' understanding of the problem domain and the specific problem at hand. More about the modeling process used in EKD and about facilitating modeling group sessions can be found in [11 and 12].
Tool Support
The EM process needs to be supported by tools. The tool requirements depend on the organization's intentions (e.g. will the models be kept "alive") and situational factors (e.g. the presence of skillful tool operators and resources). More on how to select and introduce EM tools in organizations is available in [START_REF] Stirna | The Influence of Intentional and Situational Factors on EM Tool Acquisition in Organisations[END_REF]. There are several categories of tools that can be considered.
Group meeting facilitation tools. There is a variety of tools supporting collaboration and meetings, e.g. GroupSystems, Adobe Connect, CURE. These tools can be used to support modeling, and they have become more sophisticated and popular. However, they still lack specific support for participative EM, e.g. for guiding the modeling process or "close to reality" graphic resolution. We recommend using a large plastic sheet and colored notes to document the model during a modeling session. Modeling can then be set up in almost any room with a sufficiently large and flat wall. It also allows the participants to work on the model without impeding each other. If a computerized tool and a large projection screen are used, the participants have to "queue" in order to enter their contributions, which usually slows down the creative process. In addition, the "plastic wall" is cheap and does not require technicians to set it up.
After the modeling session the models on plastic may be captured with a digital camera. If they are to be preserved, e.g. included in reports or posted on the intranet, they need to be documented in a computerized modeling tool. This category of tools includes simple drawing tools and more advanced model development and management tools. In "stand-alone" projects only drawing support may be needed. If so, simple drawing tools such as Microsoft Visio and iGrafx FlowCharter have proven to be useful and cost-effective [START_REF] Persson | An explorative study into the influence of business goals on the practical use of Enterprise Modelling methods and tools[END_REF]. In other cases, e.g. when enterprise models need to be communicated to large audiences or linked with existing information systems, more advanced tools should be used. In this category of tools we find, for instance, Aris (IDS Scheer) and Metis (Troux Technologies). Apart from modeling tools, EM projects need group communication and collaboration tools. We have successfully used the Basic Support for Collaborative Work (BSCW) tool (Fraunhofer).
Business requirements for EM tools include integration of EM tools with MS Office, model visualization and presentation requirements (often in web format) as well as reporting and querying requirements. We have also observed a growing need to connect models to information systems, thus making the models executable. An extended presentation of requirements for EM tools is available in [START_REF] Stirna | The Influence of Intentional and Situational Factors on EM Tool Acquisition in Organisations[END_REF]. In the following sections we discuss potential developments of the EKD method, based on previous research and experiences from using the method in various contexts and for various purposes.
Evolution of the Modeling Language
The EKD modeling language was intended to be fairly simple and flexible in order to be effective in diverse modeling contexts. We intend to keep this principle. There are, however, several improvement areas that need to be worked on.
Attributes of modeling components. In the current version of EKD only a few modeling components have attributes: namely, the "supports" and "conflicts" relationships in the Goal Model have the attribute "strength". Other properties of modeling components have been expressed either within the textual formulation, in a comment field linked to the relevant modeling component, or with a certain loosely defined annotation symbol. The benefit of this way of working is flexibility of representation, but the drawbacks are a lack of formalism, reduced reusability, and poor scalability. We also envision that future modeling situations will need to visualize various prioritized solutions and alternatives as well as to deal with other kinds of model annotations. Hence, EKD models should include attributes similar to those used for requirements management, such as priority, risk, status, iteration, difficulty, and cost to implement. More on requirement attributes is available in e.g. [START_REF] Weigers | Software Requirements[END_REF]. In addition, there should be a possibility for a modeler to define custom attributes. A considerable limitation in this regard is tool support. The use of simple drawing tools for up to medium size projects and models is still widespread, and management of such attributes in tools like Visio is cumbersome. Hence, a shift to more advanced tools is needed.
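A minimal sketch of how such attributes could be attached to a modeling component is given below. The class design and attribute defaults are our own assumptions for illustration; they are not part of the current EKD meta-model.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class ModelingComponent:
    component_id: str
    component_type: str                 # e.g. "Goal", "Process", "Concept"
    name: str
    # requirement-style attributes, following the examples in the text
    priority: int = 0
    risk: str = ""
    status: str = ""
    iteration: int = 0
    difficulty: str = ""
    cost_to_implement: float = 0.0
    # modeler-defined custom attributes
    custom_attributes: Dict[str, Any] = field(default_factory=dict)

goal = ModelingComponent("G12", "Goal", "Reduce invoice processing time",
                         priority=1, status="approved")
goal.custom_attributes["owner"] = "Finance department"
```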
Use of known (modern) modeling languages. The EKD notation has been assembled from a number of known modeling notations, such as Data Flow Diagrams and the Crow's Foot notation for data modeling. The benefit of the current notation is its simplicity. At the same time, we have argued that the notation does not really influence the modeling result and hence different notations can also be used. The same is true for modeling languages. In principle, a modeling language used for a specific EKD sub-model can be replaced with another language addressing the same modeling perspective. For example, the EKD goal modeling language could in principle be replaced with the MAP approach [START_REF] Rolland | A Multi-Model View of Process Modelling[END_REF], and the business process modeling language with BPMN [START_REF] Omg | Business Process Model and Notation, version 2[END_REF]. Using these languages in the EKD framework could be seen as quite close to their overall design. A more advanced interchange could also be possible; e.g. we should investigate the possibility of using RuleSpeak [START_REF][END_REF] for representing business rules instead of the current BRM. In this case new guidelines for modeling would also have to be developed. Ontologies could be used for documenting the domain language instead of the current EKD Concepts Model. Similar approaches to integrating ontologies with enterprise models have been proposed in [START_REF] Gailly | Ontology-Driven Business Modelling: Improving the Conceptual Representation of the REA Ontology[END_REF]. The challenge is to develop the meta-model in a way that facilitates these customizations of the modeling language. Using many such customizations would require new modeling guidelines, and the meta-model should have a "placeholder" for documenting them.
New modeling dimensions. The current principle of modeling with EKD assumes that in most modeling cases everything that needs to be modeled can be modeled with the existing sub-models and modeling components. Hence, if a new modeling requirement emerges, e.g. to model a new and specific perspective of the enterprise, two alternatives can be followed. The first (1) is to use the existing modeling constructs and, for example, define a concepts model with a specific purpose. This approach is useful if the modeling perspective needed is similar to one of the EKD sub-models; e.g. if we would like to model products and product structures, we could make a specialization of the Concepts Model and call it a Product Model. The other approach (2) is to tailor the EKD meta-model by defining new modeling components and/or sub-models/perspectives to represent the required perspectives. Examples of the need for new modeling components could be the following:
• Capability, expressing an ability to reach a certain business objective within the range of certain contexts by applying a certain solution. Capability would essentially link business goals with reusable business process patterns that would be applicable to reach the goals within the specified contexts.
• Key Performance Indicator (KPI), linked with goals in the Goal Model. In this case we would have to establish a stereotyped concept and a new type of association between the KPI concept and the business goal. Having such a construct would also require defining modeling guidelines, e.g. driving questions, for defining the KPIs during the modeling session or for discovering them from the existing management IS. A small sketch of how these constructs could be represented is given after this list.
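The following minimal sketch outlines one possible way of representing the proposed capability and KPI constructs and their links to goals, contexts and reusable process patterns. The class and attribute names are our own assumptions for illustration and are not part of the published EKD meta-model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Goal:
    name: str

@dataclass
class Context:
    description: str

@dataclass
class ProcessPattern:
    name: str                      # reusable business process pattern

@dataclass
class Capability:
    goal: Goal                     # the business objective the capability serves
    contexts: List[Context]        # the range of contexts in which it applies
    solution: ProcessPattern       # the solution applied within those contexts

@dataclass
class KPI:
    name: str
    target_value: float
    measures: Goal                 # the goal whose fulfilment the KPI indicates

g = Goal("Deliver orders within 24 hours")
cap = Capability(g, [Context("domestic orders")], ProcessPattern("express fulfilment"))
kpi = KPI("share of orders delivered on time", 0.95, g)
```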
Examples of new modeling perspectives that could potentially be needed in the future are:
• Context modeling. In this case we could use the Concepts Model to represent the contexts and context properties. A new set of inter-model links would have to be established to the Goals Model and to the Business Process Model.
• Reuse modeling. Enterprise Models are reused, and often they become part of patterns. This transformation of models into reusable artifacts should often be modeled. Hence, introducing a reuse perspective in EKD is potentially useful. Modeling components of the reuse perspective would be modeling problem, context, consequences, and usage guidelines.
Evolution of the EKD Modeling Process
The EKD modeling process is participatory, as described in section 3.2. This section discusses areas for its improvement.
Two levels of EM process -the EM project level and the modeling level. We consider the development of knowledge about the project level of the process related to the intended use of enterprise models to be necessary in order for the potential benefits of EM to appear. Taking this view of the process means that other issues than the modeling language and the way of working come into play -issues that target the whole model life cycle as well as EM project management. Examples of more detailed issues that need to be addressed are model quality assurance, model implementation in real life organizational/systems development, model maintenance, reuse of models, model retirement, model project preparation and management, competency management. Some of these issues are addressed in the EKD method handbook, but the guidelines need to be refined, complemented and structured according to the project level process.
Competence of EM actors. In order to manage this more complex EM process, a number of different competencies are needed. In [START_REF] Persson | Towards Defining a Competence Profile for the Enterprise Modeling Practitioner[END_REF] and [START_REF] Stirna | Purpose Driven Competency Planning for Enterprise Modeling Projects[END_REF] we have outlined a few core competencies based on the two levels of the EM process and also related them to the purposes of EM. These developments constitute preliminary steps towards the creation of competency profiles for different roles and purposes in EM that can be used both for EM project planning and for training of EM practitioners. As modeling projects become more and more complex, we believe that there is a need to clearly identify essential roles that can be played by one or several individuals. This would contribute to the quality of EM project planning and execution. One side effect is also that these roles can function as a career path for modeling practitioners. A novice could initially assume the simpler roles (e.g. assistant facilitator) and then develop towards being an advanced modeling practitioner in a planned manner.
Integration with pre-existing models. It is quite unusual for an EM project to start from scratch, without models already existing in the organization. Every EM method therefore needs to describe how such pre-existing models should be integrated in the project at hand. On the whole this is somewhat at odds with the EKD view that the domain experts create the models in facilitated modeling sessions, but the method has to be adjustable to such cases as well.
Selection and adoption of an EM method in a project is an important issue that needs to be addressed in the process of modeling, particularly on the project level. Since most organizations already use various organizational development methods, EKD needs to provide guidance on how to connect to such methods, creating "method chains" that help solve the problem at hand on the project level. This can be facilitated by defining how the output from these other methods can be used as input to EKD enterprise models. Guidelines of this kind can improve the chances that new or non-commercial methods are actually used in practice. Otherwise there is a risk that only methods and tools with strong vendor and/or consultant support will be used.
Method customization and packaging is a potential development that also relates to the project level of the process and to the purposes of EM. Method support for this would address the need to "bundle" parts of the generic method into customized method versions, i.e. "dialects", that fit certain situations. The aspects of customization could be the modeling notation, semantics, modeling processes and guidelines, as well as tool support. Such bundles could target different types of organizational domains or different types of problems, such as cloud computing, IT governance, etc. This approach would also facilitate the development and selling of consulting services based on EM.
Evolution of Tool Support for the EKD Modeling Process
We addressed tool support for EM in [START_REF] Stirna | The Influence of Intentional and Situational Factors on EM Tool Acquisition in Organisations[END_REF] more than a decade ago. To a large extent, many of the EM tool requirements and the usage context are still valid. EM practitioners often use simple drawing tools such as Visio and FlowCharter to document the models, because their projects do not require the models to be processed by a tool. That is, the models are mostly used as documentation of the modeling effort and serve as input for organizational change. They are not automatically imported into some other modeling tool that takes over the development and realizes the models. There is, however, a growing number of projects where EM is part of a larger development venture and serves as "input" to subsequent development activities. At present we can see a trend of tools for business process management, IS development, ERP configuration and governance becoming more mature and widely used. This increases the overall method and tool usage maturity of organizations. As a result, the need to extend the coverage of the current tools supporting EM is more apparent than ten years ago. Hence, EKD needs tool support for managing a repository and for open import/export of models. More specifically, the tool support for EKD should evolve in the following directions:
Generation of IS from enterprise models. The current MDD approaches and tools do not support the early stages of system development, such as enterprise modeling and requirements, in an integrated manner, e.g. see [START_REF] Zikra | Bringing Enterprise Modeling Closer to Model-Driven Development[END_REF]. In [START_REF] Zikra | Aligning Communication Analysis with the Unifying Meta-Model for Enterprise Modeling[END_REF], this challenge is addressed by proposing a unifying meta-model that integrates EM (namely, a modification of EKD) with MDD artifacts.
Support for various notations. EKD has followed the principle of using a simple and relatively generic modeling notation. But considering the need to share and/or reuse models among different projects and organizations, customizing or even replacing the modeling notation should be possible. Support for known (modern) modeling languages is also needed.
Support for reuse. There are two main cases of reuse: (1) development of generic models that are then instantiated for a specific application case, e.g. by adding additional details or introducing variations. In the MAPPER project this was achieved by introducing the concept of task patterns [START_REF] Sandkuhl | Evaluation of Task Pattern Use in Web-based Collaborative Engineering[END_REF] supported by the Troux Architect (formerly Metis) tool and the AKM platform [START_REF] Lillehagen | Active Knowledge Modeling of Enterprises[END_REF]; and (2) integration of organizational and analysis patterns. Patterns have proven to be useful for EM [START_REF] Sandkuhl | Evaluation of Task Pattern Use in Web-based Collaborative Engineering[END_REF][START_REF] Rolland | Evaluating a Pattern Approach as an Aid for the Development of Organizational Knowledge: An Empirical Study[END_REF]. Enterprise models contain many patterns and they are often built by using patterns. Currently, tool support for reuse is limited to storing, searching and retrieving patterns from a corporate knowledge repository. The actual application of a pattern, which requires customization and adaptation, is a manual process done by developers. There is, however, a trend of developing consulting services based on existing best practices and patterns. Since EM is part of delivering these services, the supporting tools should provide support for designing organizations with patterns.
Simple tools and cloud-based tools. Currently, EM tools chiefly consist of graphical editors, model management services and repositories. Models from the repository can also be exported and displayed on the web, thus not requiring a tool installation for browsing. In the future, cloud-based applications could be used to support EM. An early example of offering cloud-based collaborative modeling is Creately, developed by Cinergix Pty Ltd., Australia. Cloud-based group support and collaboration tools will most likely merge with modeling tools.
Tool support for "keeping models alive". This means tool support for ensuring that the models are kept up to date and reflect reality -the way the organization actually functions. Lately there have been significant advancements in the area of run-time business process management, but achieving this for other types of enterprise models (e.g. goals and actors) requires high organizational maturity [START_REF] Stirna | The Influence of Intentional and Situational Factors on EM Tool Acquisition in Organisations[END_REF][START_REF] Wesenberg | Enterprise Modeling in an Agile World[END_REF], because model maintenance roles and processes have to be established within the organization. In addition, there should also be tool functionality supporting this. A key area of research here is to provide an approach for collecting feedback on a model without actually manipulating the model: there should be a way to annotate and/or version a model or a model fragment (a possible structure for such feedback is sketched below).
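As a minimal illustration of the feedback mechanism suggested above, the Python sketch below shows one conceivable way to attach annotations and versions to a model fragment without manipulating the modelled content itself; the structure and all names are illustrative assumptions.

# Hypothetical sketch: collecting feedback on a model fragment without changing the model content.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Annotation:
    author: str
    text: str

@dataclass
class ModelFragment:
    fragment_id: str
    content: str                                   # the modelled content itself stays untouched
    version: int = 1
    annotations: List[Annotation] = field(default_factory=list)

    def annotate(self, author: str, text: str) -> None:
        self.annotations.append(Annotation(author, text))   # feedback is stored next to the fragment

    def new_version(self, content: str) -> "ModelFragment":
        return ModelFragment(self.fragment_id, content, self.version + 1)  # changes create a new version

fragment = ModelFragment("BPM-12", "Process: handle customer complaint")
fragment.annotate("process owner", "This no longer reflects how complaints are routed.")
print(fragment.fragment_id, "v" + str(fragment.version), "has", len(fragment.annotations), "annotation(s)")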
Efficient model presentation and manipulation. Tools are often used for presenting models to a larger audience. Using only the scrolling and zooming functionality that is built into contemporary operating systems is insufficient and often slows down the presentation. Therefore, easier and more advanced zooming and model navigation are needed, perhaps allowing a set of multi-touch gestures to be defined for various model presentation actions.
There are other areas of improvement as well. Chiefly, EKD should also provide more explicit and formal support for various manipulations of the models. The following two areas are of primary concern:
• Quality assurance. The area of EM has accumulated a great deal of knowledge on improving model quality, cf. e.g. [START_REF] Krogstie | Process models representing knowledge for action: a revised quality framework[END_REF][START_REF] Stirna | Anti-patterns as a Means of Focusing on Critical Quality Aspects in Enterprise Modeling[END_REF][START_REF] Mendling | What Makes Process Models Understandable?[END_REF][START_REF] Krogstie | Model-Based Development and Evolution of Information Systems: A Quality Approach[END_REF]. However, in practice quality assurance is done by modelers, largely without specific tool support. Not all quality factors can be related to formal properties of the model, but a significant number of factors can be supported by tools. Hence, more development should target automatic analysis and suggestions for improvement (a simple illustration of such a check is given below).
• Code or model generation. There should be algorithms for generating design models or code from enterprise models. This aspect is not currently supported in EKD. In general, transforming business and enterprise models into other development artifacts is an underdeveloped research area, and more attention should be devoted to it. This improvement aspect should be seen together with the generation of an IS from enterprise models, mentioned in the previous section.
Method deployment and user support is another aspect where EKD needs some development. An EM method can be seen as successful only if it is successfully used in practice. EKD has been used by a significant number of organizations in various projects. However, the uptake of EKD, i.e. organizations continuing to use it without the support of external experts and consultants, has not been widespread. We do not have reliable data, but some evidence suggests that such organizations number no more than seven or eight. Some more organizations have chosen different EM approaches after successful experiences with EKD. This leads us to conclude, at least initially, that for an EM method to be taken up by an organization, there should be support for: (1) method acquisition and implementation throughout the organization, (2) defining roles and responsibilities, (3) method usage procedures, and (4) tool usage. These should be described in the method manual. Furthermore, the method vendor should be able to provide user support, not only by answering questions when they arise, but also by informing the users about the latest developments, as well as providing training and mentoring. The latter could be considered too resource-consuming for an academic method.
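To make the quality-assurance point above more concrete, the following Python sketch illustrates one simple, tool-supported check on a formal model property, namely flagging goals that are not supported by any business process; the check, the model content and all names are hypothetical and are not part of EKD.

# Hypothetical sketch of an automatic quality check on an enterprise model:
# flag goals that no business process claims to support.
goals = {"G1": "Increase market share", "G2": "Reduce lead time", "G3": "Improve staff competence"}
processes = {
    "P1": {"name": "Order handling", "supports": ["G2"]},
    "P2": {"name": "Marketing campaign", "supports": ["G1"]},
}

def unsupported_goals(goals, processes):
    supported = {g for p in processes.values() for g in p["supports"]}
    return [g for g in goals if g not in supported]

for g in unsupported_goals(goals, processes):
    print(f"Quality warning: goal {g} ('{goals[g]}') has no supporting process; consider refining the model.")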
The name of a method is not unimportant. EKD stands for Enterprise Knowledge Development, and the name originated from one of the projects in which the method was originally developed. Preferably, a name should be "pronounceable" and to some extent signal what the method is all about. This is something that needs to be considered for a future version of EKD.
Figure 1: Working with inter-model links (dashed arrows) through driving questions
Table 1. Overview of main application cases

Organization | Domain | Period | Problems addressed
British Aerospace, UK | Aircraft development and production | 1992-1994 | Requirements engineering
Telia AB, Sweden | Telecommunications industry | 1996 | Requirements validation; project definition
Volvo Cars AB, Sweden | Car manufacturing | 1994-1997 | Requirements engineering
Vattenfall AB, Sweden | Electrical power industry | 1996-1999 | Change management, process development, competence management
Riga City Council, Latvia | Public administration | 2001-2003 | Development of vision and supporting processes for knowledge management
Verbundplan GmbH, Austria | Electrical power industry | 2001-2003 | Development of vision and supporting processes for knowledge management
Skaraborg Hospital, Sweden | Health care | 2004-2007 | Capturing knowledge assets and development of a knowledge map of a knowledge repository
SYSteam AB, Sweden | Management consulting | 2008 | Development of a vision for an employee knowledge management portal
Table 2. Overview of the sub-models of the EKD method ([START_REF] Stirna | Participative Enterprise Modelling: Experiences and Recommendations[END_REF])

Goals Model (GM). Focus: vision and strategy. Issues: What does the organization want to achieve or to avoid, and why? Components: goal, problem, external constraint, opportunity.
Business Rules Model (BRM). Focus: policies and rules. Issues: What are the business rules, and how do they support the organization's goals? Components: business rule.
Concepts Model (CM). Focus: business ontology. Issues: What are the things and "phenomena" addressed in other sub-models? Components: concept, attribute.
Business Process Model (BPM). Focus: business operations. Issues: What are the business processes? How do they handle information and material? Components: process, external process, information set, material set.
Actors and Resources Model (ARM). Focus: organizational structure. Issues: Who are responsible for goals and processes? How are the actors interrelated? Components: actor, role, organizational unit, individual.
Technical Component & Requirements Model (TCRM). Focus: information system needs. Issues: What are the business requirements to the IS? How are they related to other models? Components: IS goal, IS problem, IS requirement, IS component.
Table 3. Activities in EM (adapted from [START_REF] Persson | Towards Defining a Competence Profile for the Enterprise Modeling Practitioner[END_REF])
Define scope and objectives of the modeling project
Plan for project activities and resources
Plan for modeling session
Gather and analyze background information
Interview modeling participants
Prepare modeling session
Conduct modeling session
Write meeting minutes
Analyze and refine models
Present the results to stakeholders
Concluding Remarks
EKD is an EM method that has been developed by researchers. Its usefulness for business and systems development has been established in a number of cases. However, it is clear to us that even if its principles and components are sound, it takes considerable effort to make a research-based method mature, so that it can be easily adopted by organizations and linked to other established and complementary methods and tools (e.g. ARIS), approaches (e.g. Balanced Scorecards, SAP reference models), and consulting products and services. At the same time, the method should not give up its overall philosophy of a participatory and agile way of working and its process of iterative and incremental development of models.
In this paper we have taken the first steps towards maturing the EKD method by identifying and describing some improvements, based on empirical research, experience, and literature on EM. Several of the improvements discussed here also pose research challenges as well as practical challenges. Future work will commence by first prioritizing the suggested improvements. Implemented improvements will then be tested in suitable empirical settings. | 43,449 | [
"977607",
"977606"
] | [
"300563",
"301198"
] |
01484386 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484386/file/978-3-642-34549-4_2_Chapter.pdf | Stijn Hoppenbrouwers
email: stijnh@cs.ru.nl
Asking Questions about Asking Questions in Collaborative Enterprise Modelling
Keywords: Collaborative Modelling, Modelling Process, Question Asking, Answer Structuring, Enterprise Modelling, Collaboration Systems
In this paper we explore the subject of question asking as an inherent driver of enterprise modelling sessions, within the narrower context of the 'dialogue game' approach to collaborative modelling. We explain the context, but mostly report on matters directly concerning question asking and answer pre-structuring as a central issue in an ongoing effort aiming for the practiceoriented development of a series of dialogue games for collaborative modelling. We believe that our findings can be relevant and helpful to anyone concerned with planning, executing or facilitating collaborative modelling sessions, in particular when involving stakeholders untrained in systems thinking and modelling.
Introduction
In the field of collaborative enterprise modelling [START_REF] Renger | Challenges in collaborative modelling: a literature review and research agenda[END_REF][START_REF] Barjis | Collaborative, Participative and Interactive Enterprise Modeling[END_REF], in particular in combination with information systems and service engineering, an increasing industrial and academic interest is becoming visible in the combining of advanced collaborative technologies with various types of modelling [START_REF] Hoppenbrouwers | From Dialogue Games to m-ThinkLets: Overview and Synthesis of a Collaborative Modeling Approach[END_REF], e.g. for business process modelling, domain modelling, business rules modelling, or enterprise architecture modelling. This includes support for well established, even traditional setups for modelling sessions (like workshops, interview-like sessions, and multi-participant model reviews) but also more innovative, on-line incarnations thereof, both synchronous and asynchronous, both facilitated and unfacilitated, often related to social media, and often geographically distributed [START_REF] Hoppenbrouwers | Stakeholder Communication, in Agile Service Development -Combining Adaptive Methods and Flexible Solutions[END_REF]. In addition, collaborative modelling is increasingly interwoven with operational (in addition to development) processes in enterprises; it may be initiated as part of a development project but will often become integrated with long-term, persistent 'maintenance' processes realizing enterprise model evolution. This shift in the context of application for enterprise modelling entails increasingly intense collaboration with business stakeholders not trained in established forms of systems modelling [START_REF] Zoet | An Agile way of Working, in Agile Service Development: Combining Adaptive Methods and Flexible Solutions[END_REF].
Collaborative enterprise modelling, as positioned above, includes a small number of approaches focusing on understanding and supporting the process of modelling. Specific approaches to this very much reflect views of what such a process essentially is, which may vary greatly. In most cases, the emphasis is on 'collaborative diagram drawing' (for example [START_REF] Pinggera | Tracing the process of process modeling with modeling phase diagrams[END_REF]). A different (though not unrelated) approach chooses to view collaborative modelling as a model-oriented conversation in which propositions are exchanged and discussed [START_REF] Rittgen | Negotiating Models[END_REF][START_REF] Ssebuggwawo | Interactions, Goals and Rules in a Collaborative Modelling Session[END_REF].
Beyond theories concerning the nature of collaborative modelling lies the question of how to support collaborative model conceptualisation efforts (other than merely by providing some model editor), either by means of software or by less high-tech means. Our own, ongoing attempt to devise an effective practice-oriented framework for structuring and guiding modelling sessions has led us to develop something called 'dialogue games for modelling': game-like, open procedures in which explicit rules govern the interactions allowed and required within a structured conversation-for-modelling ([START_REF] Hoppenbrouwers | Towards Games for Knowledge Acquisition and Modeling[END_REF][START_REF] Hoppenbrouwers | A Dialogue Game Prototype for FCO-IM, in On the Move to Meaningful Internet Systems[END_REF][START_REF] Hoppenbrouwers | A Dialogue Game for Analysing Group Model Building: Framing Collaborative Modelling and its Facilitation[END_REF]; see section 2 for more on this). For some time it has been clear to us that the questions underlying models and modelling efforts are (or should be) an explicit driving force behind the conversations that constitute modelling processes [START_REF] Hoppenbrouwers | Focused Conceptualisation: Framing Questioning and Answering in Model-Oriented Dialogue Games[END_REF]. In this paper, we directly address the issue of question asking, as well as the pre-structuring and guiding of the answers to be given.
This paper is written more from a design point of view than from an analytical or observational (descriptive) point of view. It works directly towards application of the results presented in the design of operational dialogue games. We therefore work under the Design Science paradigm [START_REF] Hevner | Design Science in Information Systems Research[END_REF]. The ideas presented are the result of some experimental designs that were empirically validated on a small scale, but as yet they have merely a heuristic status; they are not established practices, nor have they been exhaustively validated. And yet, we believe that the presented approach to question asking, and to the pre-structuring and guiding of answers, is approximately 'right' as well as simply 'useful', since it was not 'just thought up' but carefully distilled through a focused and multifaceted effort to understand, guide and support the systematic asking of questions in detailed conversations-for-modelling.
The main problem addressed is thus 'how to ask particular questions in order to guide and drive a conversation for modelling', down to the level of structuring and aiding the actual phrasing of questions. To the best of our knowledge, this matter has never been addressed with a similar, dedicated and detailed design focus in the field of enterprise modelling, or anywhere else. Purposeful question asking in general has received plenty of attention in the context of interviewing skills (see for example [START_REF] Bryman | Social Research Methods[END_REF], Chapter 18), but we could not find an adequately content-related, generative approach. In the field of speech generation, some attention has been given to model-based question generation (see for example [START_REF] Olney | Question generation from Concept Maps[END_REF]), but here the results are too theoretical and too limited to be of help for our purpose. This is why we took a grassroots design approach grounded in observation of and reflection on what modellers and facilitators do (or should do) when they formulate questions to drive and guide a modelling process. The result is a small but useful set of concepts and heuristics that can help participants in, and facilitators of, modelling sessions to think about and make explicit the questions to be asked, from the main questions behind the session as a whole down to specific questions asked in highly focused parts of the session. While (as discussed) the results have not been tested at great length, they do reflect a focused effort to come up with generally useful concepts and heuristics, spanning several years and a fair number of experimental projects (only some of which have been published; most were graduate projects). For a considerable part, these experiments and studies were conducted in the wider context of the Agile Service Development project as reported in [START_REF] Lankhorst | Agile Service Development: Combining Adaptive Methods and Flexible Solutions[END_REF], and are now continued under the flag of the Collaborative Modelling Lab (CoMoLab) [17].
Dialogue Games for Collaborative Modelling
Our approach to developing means of guiding and structuring conversations-for-modelling has led to the design and use of Dialogue Games. Prior to this, it had already been theorized [START_REF] Rittgen | Negotiating Models[END_REF][START_REF] Hoppenbrouwers | Formal Modelling as a Grounded Conversation[END_REF] (backed up by analysis of observed collaborative modelling sessions [START_REF] Ssebuggwawo | Interactions, Goals and Rules in a Collaborative Modelling Session[END_REF]) that collaborative modelling as a conversation involves the setting and use of Rules constraining both the Interactions of the conversation and its chief outcome (the Model). The Interactions include both the stating of propositions and the discussion of those propositions, leading to the acceptance of some propositions by the participants. The accepted propositions at a given time constitute the Model at that time [START_REF] Ssebuggwawo | Interactions, Goals and Rules in a Collaborative Modelling Session[END_REF]. Apart from the primary result of modelling (the Model), results may be social in nature, e.g. reaching some level of understanding or consensus, or achieving a sense of ownership of the model. Such goals can also be part of the rule set, and they are also achieved through Interactions. The notions of Rules, Interactions and Models (the basics of the 'RIM framework') can be used for the analysis of any modelling session, but they can also be used as a basis for designing support and guidance for such sessions -which is what we did next.
Dialogue Games initially are a theoretical notion from Argumentation Theory going back to [START_REF] Mann | Dialogue Games: Conventions of Human Interaction[END_REF]. A more operational incarnation of dialogue games, an educational tool, was devised in the form of the InterLoc system as reported in [START_REF] Hoppenbrouwers | A Dialogue Game for Analysing Group Model Building: Framing Collaborative Modelling and its Facilitation[END_REF][START_REF] Ravenscroft | Designing interaction as a dialogue game: Linking social and conceptual dimensions of the learning process[END_REF]. The core of this tool is an augmented 'chatbox' in which every contribution of the participants in a chat has to be preceded by an 'opener' chosen from a limited, preset collection (for example "I think that …"; "I want to ask a question: …"; "I agree, because: …"). Thus, the argumentation/discourse structure of the chat is constrained, and users become more aware of the structure of their conversation as it emerges. Also, the resulting chat log (available to the participants throughout) reflects the discourse structure quite transparently, including who said what, and when; this has proved useful both during the conversation and for later reference.
We took this concept and added to it the use of openers to constrain not only the type of contribution to the conversation, but also the format of the answer, for example "I propose the following Activity: …". This blended syntactic constraints with conversational constraints, and allowed us to introduce into dialogue games conceptual elements stemming from modelling languages (a minimal sketch of such opener-constrained contributions is given below). In addition, we showed that diagram techniques could easily and naturally be used in parallel to the chat, augmenting the verbal interaction step-by-step (as is common in most types of collaborative modelling) [START_REF] Hoppenbrouwers | A Dialogue Game for Analysing Group Model Building: Framing Collaborative Modelling and its Facilitation[END_REF].
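A minimal Python sketch of how such opener-constrained contributions could be represented and checked is given below; the opener list and the checking logic are illustrative assumptions, not a description of the InterLoc tool or of any existing prototype.

# Hypothetical sketch: chat contributions that must start with a preset opener.
OPENERS = [
    "I think that ...",
    "I want to ask a question: ...",
    "I agree, because: ...",
    "I propose the following Activity: ...",
]

def make_entry(author, opener, text):
    if opener not in OPENERS:
        raise ValueError(f"'{opener}' is not an allowed opener in this dialogue game")
    return {"author": author, "opener": opener, "text": text}  # opener constrains the contribution type

log = [
    make_entry("Ann", "I propose the following Activity: ...", "Register incoming complaint"),
    make_entry("Bob", "I agree, because: ...", "this is the first thing the help desk does"),
]
for entry in log:
    print(entry["author"] + ": " + entry["opener"].replace("...", entry["text"]))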
Some new ground was broken by our growing awareness that most conversations-for-modelling do not have one continuous and undivided focus: one big dialogue game (the whole modelling session) typically consists of a number of successive smaller dialogue games, each focusing on a small, easily manageable problem or question [START_REF] Ssebuggwawo | Interactions, Goals and Rules in a Collaborative Modelling Session[END_REF]; the 'divide and conquer' principle. This principle is confirmed in the literature [START_REF] Prilla | Fostering Self-direction in Participatory Process Design[END_REF][START_REF] Andersen | Scripts for Group Model Building[END_REF]. It led to the introduction of the notion of 'Focused Conceptualisation' or FoCon [START_REF] Hoppenbrouwers | Focused Conceptualisation: Framing Questioning and Answering in Model-Oriented Dialogue Games[END_REF]: 'functional requirements' for modelling sessions (and parts thereof) including the expected type of 'input' (e.g. people, documents, conceptual structures) and desired 'output' (models in some modelling language, for some specific use; also, social results) as well as 'means to achieve the output': focus questions, sub-steps, and possibly some 'rules of the game'. Thus FoCons can help define highly focused dialogue games, with small sets of openers dedicated to answering focus questions that are just a part of the modelling conversation as a whole. Within such limited scopes of interaction, it is much easier to harness known principles from collaboration and facilitation technology (e.g. from brainstorming, prioritizing, problem structuring) to guide and support people in generating relevant and useful answers to questions [START_REF] Hoppenbrouwers | From Dialogue Games to m-ThinkLets: Overview and Synthesis of a Collaborative Modeling Approach[END_REF]. Importantly, this combines the 'information demand' of the general modelling effort with the HCI-like 'cognitive ergonomics' of the tasks set for the participants, which has to match their skills and expertise [START_REF] Wilmont | Abstract Reasoning in Collaborative Modeling[END_REF].
Part of the FoCon approach also is the distinction between the pragmatic goal/focus of a modelling effort and its semantic-syntactic goal/focus. As explained in [START_REF] Hoppenbrouwers | Focused Conceptualisation: Framing Questioning and Answering in Model-Oriented Dialogue Games[END_REF], pragmatic focus concerns the informational and communicational goal of the model: its intended use. One process model, for example, is not the other, even if it is drawn up in the same language (say, BPMN). What are the questions that the model needs to answer? Do they work towards, for example, generation of a workflow system? Process optimization? Establishing or negotiating part of an Enterprise Architecture? Do they concern an existing situation, or a future one? And so on.
Semantic-syntactic focus concerns the conceptual constraints on the model: typically, its modelling language. In some cases, such constraints may actually hardly be there, in which case the constraints are perhaps those of some natural language, or of a subset thereof (a controlled natural language). Practically speaking, a real-life modelling effort may or may not have a clearly preset semantic-syntactic focus, but it should always have a reasonably clear pragmatic focus -if not, why bother about the model in the first place? In any case, the pragmatic focus is (or should be) leading with respect to the semantic-syntactic focus.
The pragmatic and semantic-syntactic goals are crucial for identifying and setting questions for modelling.
Questions and Answer Types as Drivers and Constraints
Perhaps the most central argument underlying this paper is this: 'if models are meant to provide information, then they aim to answer questions [START_REF] Hoppenbrouwers | A Fundamental View on the Process of Conceptual Modeling[END_REF] -explicitly or not. In that case, in order to provide pragmatic focus to a conversation-for-modelling, it seems quite important to be aware of what questions are to be asked in the specific modelling context; if people are not aware, how can they be expected to model efficiently and effectively?' This suggests that making the 'questions asked' explicit (before, during or even after the event) is at the least a useful exercise for any modelling session. There is of course a clear link here with standard preparations for interviews and workshops. Yet it transpires that in some of the more extreme (and unfortunate) cases, the explicit assignments given or questions asked remain rather coarse-grained, like 'use language L to describe domain D' (setting only the semantic-syntactic focus clearly). If experienced, context-aware experts are involved, perhaps the right questions are answered even if they are left implicit. However, if stakeholders are involved who have little or no modelling experience, and who generally feel insecure about what is expected of them, then leaving all but the most generic questions implicit seems suboptimal, to say the least. Disaster may ensue, and in many cases it has. We certainly do not claim that modellers 'out there' never make explicit the lead questions underlying and driving their efforts. We do feel confident, however, in stating that in many cases a lot can be gained in this respect. This is not just based on a 'professional hunch', but also on focused discussions with practitioners on this topic, and on a considerable number of research observations of modelling sessions in the field.
Once the importance of questions as a driving force behind conversations for modelling became clear, we became interested in the structures and mechanisms of question asking. It was a natural choice for us to embed this question in the context of dialogue games, where questions are one of the chief Interactions, following Rules, and directly conveying the goals underlying the assignment to create a Model (see section 2).
Questions are a prominent way of both driving and constraining conversations. They coax people into generating or at least expressing propositions aimed to serve a specific purpose (fulfil an information need), but they are also the chief conversational means by which 'answer space' is conceptually restricted, by setting limits of form (syntax) or meaning (semantics) that the answers have to conform to. As explained in Section 2, modelling languages put a 'semantic-syntactic focus' on the expressions that serve to fulfil the pragmatics goal of a modelling effort. Thus, even the demand or decision to use a modelling language is closely related to the asking of questions, and can be actively guided by them.
In the FoCon approach [START_REF] Hoppenbrouwers | Focused Conceptualisation: Framing Questioning and Answering in Model-Oriented Dialogue Games[END_REF] (Section 2), only minimal attention was paid to the subject of 'focus questions'. We are now ready to address this subject in more depth, and head-on.
Structuring Questions and Answers in Dialogue Games
In our ongoing effort to better understand and structure 'dialogue games for modelling', we have developed a number of prototype dialogue games, still mostly in unpublished bachelor's and master's thesis projects (but also see [START_REF] Hoppenbrouwers | A Dialogue Game Prototype for FCO-IM, in On the Move to Meaningful Internet Systems[END_REF][START_REF] Hoppenbrouwers | A Dialogue Game for Analysing Group Model Building: Framing Collaborative Modelling and its Facilitation[END_REF], as well as [START_REF] Hoppenbrouwers | Stakeholder Communication, in Agile Service Development -Combining Adaptive Methods and Flexible Solutions[END_REF]). Recently, these prototypes and studies have explicitly confronted us with questions about question asking. This has led us to define the following heuristic Question Asking Framework for coherently combining questions and answers, which is put forward in an integrated fashion for the first time in this paper. The following main concepts are involved:
• The main conceptualization Goal(s) behind the questions to ask (G): pragmatic, and possibly also semantic-syntactic, goals underlying the creation of the model.
• The Questions to ask (Q): the actual, complete phrases used in asking focus questions within the conversation-for-modelling.
• The Answers, which are the unknown variable: the result to be obtained (A).
• Possibly, Form/Meaning constraints on the answer (F): an intensional description of the properties the answer should have (for example, that it should be stated in a modelling language, or that it should be an 'activity' or 'actor').
• Possibly, one or more Examples (E) of the kind of answer desired: an extensional suggestion for the answer.
While the QAF is by no means a big theoretical achievement, it does provide a good heuristic for the analysis and design of 'question structures' in dialogue games. It is helpful in systematically and completely identifying and phrasing questions and related items (the latter being rather important in view of active facilitation).
Below we will proceed to discuss the concepts of the QAF in more detail, as well as matters of sequence and dynamic context. We will use an explanatory example throughout, taken from our previous work in 'Group Model Building' (GMB), an established form of collaborative modelling in the field of Problem Structuring. Space is lacking here for an elaborate discussion of GMB; we will very briefly provide some information below, but for more we have to refer to [START_REF] Hoppenbrouwers | A Dialogue Game for Analysing Group Model Building: Framing Collaborative Modelling and its Facilitation[END_REF].
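Before turning to the example, one possible (and purely hypothetical) rendering of the QAF items as a simple data structure is sketched below in Python; the field names and example values are assumptions for illustration only.

# Hypothetical sketch of the QAF items as a simple data structure.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QAFItem:
    goal: str                                            # G: pragmatic (and possibly semantic-syntactic) goal
    question: str                                        # Q: the complete focus question as asked
    form: Optional[str] = None                           # F: intensional constraint on the answer (e.g. an opener)
    examples: List[str] = field(default_factory=list)    # E: extensional suggestions for the answer
    answer: Optional[str] = None                         # A: the result to be obtained

item = QAFItem(
    goal="Identify factors that influence student enrolment",
    question="What might influence the number of students enrolling?",
    form="I propose the following variable: ...",
    examples=["Number of open days organized", "Time spent on recruitment"],
)
item.answer = "Number of school visits per year"
print(item.question, "->", item.answer)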
Illustration: Group Model Building
GMB is rooted in System Dynamics and involves the collaborative modelling of causal relations and feedback loops. It aims for shared understanding among participants of the complex influences between system variables in some system (typically, a business situation calling for an intervention). The process of group model building aims to gradually tease out quantitative variables (providing an abstract analysis and representation of the problem focused on), causal relations between the variables (cause-effect, positive and negative), and feedback loops consisting of sets of circularly related variables. For our current purposes, we will only refer to some basic items in GMB, and show how the QAF items can be deployed in this context.
Goal Questions
As drivers of the modelling session as a whole, Goal Questions can be posed. These should clearly describe the pragmatic goals of the session. Semantic-syntactic goals may in principle also be posed here, but things are more complicated for such goals: whether or not they should be explicitly communicated to the participants depends on whether the participants will or will not be directly confronted with the models and the modelling language in the session. If not (and in many approaches, including ours, this is common enough), the semantic-syntactic goal is a covert one that is implicitly woven into the operational focus questions and answer restrictions (i.e. openers) of the Dialogue Games (Sections 4.3 and 4.4). This is in fact one of the main points of the dialogue game approach. We will therefore assume here that the semantic-syntactic goals are not explicitly communicated to the participants, though it is certainly always necessary that the overall semantic-syntactic goals of the modelling effort are established (not communicated) as well as possible and known to the organizers of the session.
Typically, Goal Questions consist of two parts: the main question (of an informative nature), and the intended use that this information will be put to, the purpose. For example:
Main question: "Please describe what factors play a role in increasing the number of students enrolling in the Computer Science curriculum, and how they are related".
Purpose: "This description will be used to identify possible ways of taking action that could solve the problem."
Typically, the main question has a 'WH word' (why, what, how, who, etc.) in it, but this is no requisite. Clearly formulating the main questions is important, and may be hard in that the question may be difficult to formulate (a language issue), but in principle the main question as an item is straightforward. There may be more main questions or assignments (for example expressing social goals like 'reach consensus'), but obviously too many questions will blur the pragmatic focus. As for explicitly stating the purpose: as argued in [START_REF] Hoppenbrouwers | Focused Conceptualisation: Framing Questioning and Answering in Model-Oriented Dialogue Games[END_REF], the purpose very much influences the way people conceptualise the model, even at a subconscious level; this is why we advocate including it. Again, it is possible to include more than one purpose here, but this may decrease the clarity of focus and can easily reduce the quality of the conceptualisation process and its outcome.
Importantly, main questions and purposes are not reflected in the openers of a Dialogue Game. They give a clear general context for the whole session, i.e. for the entire set of 'minigames' (FoCons) constituting the conversation-for-modelling. The Goal questions should be clearly communicated (if not discussed) before the session starts, and perhaps the participants should be reminded of them occasionally (possibly by displaying them frequently, if not continuously).
Focus Questions: Guiding the Conversation
The focus questions are by nature the most crucial item in the QAF. Without exception, they should be covered by at least one opener in their dialogue game, meaning that they are explicitly available as an interaction type to at least one type of participant (role) in at least one dialogue game. In most cases, focus questions will be posed by the facilitator; whether or not they can also be asked by other participants depends on the further game design.
We found that it is helpful to explicitly distinguish two parts of focus questions: the question part, and the topic part. A question such as "What might influence …?" is of course incomplete without also mentioning a (grammatical) object of the sentence: the specific entity or domain the main question is applied to. This may seem trivial, but it is crucial in view of the actual 'generation' of questions, because the topic part of a focus question is as important as the question part, and is highly context-dependent. The topic part may be derived from an answer to a previous question that was given only seconds before. Also, the topic part is typically much more context-dependent with respect to terminology: whereas the question part phrasing may be generically useful in diverse contexts (fields, enterprises, departments, situations), the topic will require accurate knowledge of the way participants talk about their enterprise and refer to bits of it. The set of possible topic descriptions is most safely assumed to be infinite, or at least to be quite unpredictable and situational, and therefore 'open' (a minimal sketch of this two-part structure is given below).
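A minimal Python sketch of this two-part structure is given below: a generic question part is combined with a context-dependent topic part taken from the ongoing conversation; all phrasings are illustrative assumptions.

# Hypothetical sketch: a focus question = generic question part + situational topic part.
QUESTION_PARTS = {
    "influences": "What might influence {topic}?",
    "effects": "What could be affected by a change in {topic}?",
}

def focus_question(kind, topic):
    return QUESTION_PARTS[kind].format(topic=topic)

# The topic part may be derived from an answer given only seconds before.
previous_answer = "the number of students enrolling in Computer Science"
print(focus_question("influences", previous_answer))
print(focus_question("effects", "the number of open days organized"))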
As for the more generic 'question part': here too, many questions (being open questions more often than yes/no questions) will be started off with a phrase including a WH-word (often accompanied by a preposition, as in "for what", "by who", etc.).
Clearly many questions are possible, but we do believe that for a particular set of topics/dialogue game types, their number is limited ('closed' sets seem possible, at least at a practical level). Points of view reflected by questions can be based on many different concepts and sources, for example:
• Meta-models (the syntax of a modelling language may dictate, for example, that every 'variable' should be at least a 'cause' or an 'effect' of another variable; causal relations should be marked as either positive (+) or negative (-), and so on)
• Aspects of enterprise systems (e.g. following the Zachman framework: why-how-what-who-where-when combined with the contextual-conceptual-logical-physical-detailed 'levels')
• Methods (e.g. questions based on intervention methods: brainstorming, categorizing, prioritizing, and so on)
• The classic 'current system' versus 'system-to-be' distinction
In fact, it is largely through the asking of focus questions that participants make explicit how they look at and conceptually structure the domains and systems under scrutiny; it is also the way their 'world view' is imposed upon the conversation, and on other participants.
For all the QAF items, but for the focus questions in particular, great care must be taken that they are phrased clearly and above all understandably in view of the participants' capacities, skills, and expertise [START_REF] Hoppenbrouwers | Stakeholder Communication, in Agile Service Development -Combining Adaptive Methods and Flexible Solutions[END_REF]. This requires quite a high level of language awareness, proficiency and instinct on the part of, at least, the facilitator. Standard questions (or partial questions) that may have been tested and improved throughout a number of games may offer some foothold here, but one must also be very much aware that question phrasings fit for one situation may be less appropriate and effective for others.
Forms: Constraining the Answer
Forms are the conceptual frames (in both the syntactic and the semantic sense) in which the answers are to be 'slotted'. The term refers to the 'form' (shape, structure) of the answer but also, and perhaps even more so, to the type of form that needs 'filling in' (template). Importantly, it is possible that the Form is in fact absent, meaning that in such cases the Goal and Focus questions do all the constraining. However, in particular in cases where some conceptual constraint (a modelling language) is involved, offering a Form can be extremely helpful. If we indeed deal with collaborative 'modelling' (instead of, for example, 'decision making', 'authoring' or 'brainstorming'), some conceptual constraining by means of some structured language seems as good as mandatory, by definition. Yet this does not mean that such restricting Forms should necessarily accompany all focus questions: it is quite possible that in earlier phases of conceptualization no strict form constraint is imposed, and that such a constraint is introduced only as the effort is driven home to its end goals. Thus, some (sub) DGs may include Forms, while others may not.
In the basic Dialogue Game designs we have discussed so far, 'answer-openers' are provided that restrict the answer textually, as in "I propose the following variable: …". However, more advanced types of interfacing have always been foreseen here in addition to the basic opener [START_REF] Hoppenbrouwers | Exploring Dialogue Games for Collaborative Modeling, in E-Collaboration Technologies and Organizational Performance: Current and Future Trends[END_REF], for example the use of GUI-like forms [START_REF] Hoppenbrouwers | A Dialogue Game Prototype for FCO-IM, in On the Move to Meaningful Internet Systems[END_REF], and even interactive visualizations (simple diagrams). In principle, we can include good old 'model diagram drawing' here as well, though admittedly this does not fit in too well with our general FoCon approach and the verbal nature of conversations. Yet in the end, our credo is: whatever works, works.
Checking and enforcement of form-conformant answering can be implemented in degrees. Below we suggest some (increasing) levels of form checking:
• Unrestricted except by goal and focus questions
• Mere textual constraint (e.g. by using simple openers)
• Using typed fields for individual words
• Using typed fields and checking the fields syntactically
• Using typed fields and checking the fields semantically
• Offering a limited set of (checked) choices
Note that such checking/enforcing mechanisms are of course already well known in common information- and database system interfaces and functionality (data integrity checks, etc.) and in various kinds of advanced model and specification editors.
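The Python sketch below illustrates one of the intermediate checking levels listed above (a typed field with a simple syntactic check); the field rules are assumptions chosen for the GMB 'variable' example and are not prescriptive.

# Hypothetical sketch: a typed form field with a simple syntactic check,
# here for proposing a GMB 'variable' of preferably no more than four words.
def check_variable_field(value):
    problems = []
    words = value.split()
    if not words:
        problems.append("the field may not be empty")
    elif len(words) > 4:
        problems.append("a variable should preferably be no more than four words")
    if value[:1].isdigit():
        problems.append("a variable should start with a word, not a number")
    return problems

for candidate in ["Number of items produced", "Total number of rejections recorded this year"]:
    issues = check_variable_field(candidate)
    print(candidate, "->", "accepted" if not issues else "; ".join(issues))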
In addition to offering template-like forms, we found that it is a good idea to add some explicit verbal description and explanation of the conceptual constraints, for example: "A 'variable' is described as a short nominal phrase, preferably of no more than four words, describing something that causes changes in the problem variable, or is affected by such changes. Variables should concern things that are easily countable, usually a 'number' or 'quantity' of something".
A final note on openers: while in this section we focused on conceptually constrained answer-openers, in view of Dialogue Games at large it is important to realize that more generic, conversation-oriented openers can be used alongside Forms, e.g. "I don't think that is a good idea, because …", "I really don't know what to say here", "I like that proposition because …", and so on. This makes it possible to blend discussion items and highly constrained/focused content items. Based on our experience with and observations of real-life modelling sessions, such a blend is required to mirror and support the typical nature and structure of conversations-for-modelling. Given that a chat-like interface and log underlies the whole modelling process, advanced interfacing can still produce chat entries (automatically generated), while conversational entries can be entered more directly and manually in the chat.
Auxiliary Examples of Answers
The last QAF item is perhaps the least crucial one, and certainly an optional one, but it can still be of considerable help in effectively communicating constraints on answers. Examples of answers are complementary to Forms: in logical terms, Examples offer more of a (partial) 'extensional definition' than the 'intensional definition' which can be associated with Forms. In addition, it is possible to provide some (clearly marked!) negative examples: answers that are not wanted.
Generally it seems to work well enough to give examples that are illustrative rather than totally accurate. For example, 'variables' in GMB need to be quantifiable, i.e. they should concern 'things that can be easily counted' (a phrasing typically used in constraining answers suggesting variables). Positive examples for 'variables' could thus be:
• "Number of items produced"
• "Time spent on preparations"
• "Number of kilometres travelled"
• "Number of rejections recorded"
whereas negative examples could be:
• NOT "willingness to cooperate"
• NOT "liberty to choose alternatives"
• NOT "aggressive feelings towards authority"
The need for the use of Examples varies. In general, they will be most useful when participants are confronted with some Question-Form combination for the first time, leaving them somewhat puzzled and insecure. Experience shows that it is often advisable to remove examples as soon as 'the coin drops', but to keep them close at hand in case confusion strikes again.
Dynamic Sets and Sequences of Questions
When analysing, describing and supporting structured processes, it is always tempting to picture them as deterministic flows. As reported in [START_REF] Ssebuggwawo | Interactions, Goals and Rules in a Collaborative Modelling Session[END_REF][START_REF] Hoppenbrouwers | Method Engineering as Game Design: an Emerging HCI Perspective on Methods and CASE Tools[END_REF], actual dialogue structures are far too unpredictable to capture by such means, often switching between various foci and modes. This is one of the main reasons why we have opted for a rule-based, game-like approach from the start. However, this does not mean that modelling sessions and dialogue games are wholly unstructured. There certainly can be a logic behind them, reflecting the way they work towards their Goals in a rational fashion (often by means of interrelated sub-goals). Our way out of this is indeed to define a number of complementary FoCons (DGs) that cover all 'interaction modes' to be expected in a particular modelling session. The participants, and especially the facilitator, are then free (to a greater or lesser degree) to choose when they go to which DG, and thus also in which order. However, there may be some input required to start a certain FoCon; for example, in GMB it is no use trying to determine the nature of a feedback loop if its variables have not been adequately defined. Thus, a simple logic does present itself. In our experience, this logic is best operationalized by the plain mechanism of preconditions on DGs, making them available (or not) given the presence of some minimal information they need as 'input' (a simple sketch of this mechanism is given below). In addition, the facilitator has an important role to play in switching between DGs: determining both when to switch, and where to jump to. The definition of heuristics or even rules for making such decisions is a main interest for future research. Besides the simple input-based logic mentioned above, we expect that other aspects and best practices will be involved here, but we cannot put our finger on them yet.
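As a simple illustration of this input-based logic, the Python sketch below derives the availability of dialogue games from preconditions on the information gathered so far in the session; the game names, the session state and the preconditions are hypothetical.

# Hypothetical sketch: dialogue games become available only when their input preconditions hold.
session_state = {"variables": ["Number of enrolments", "Number of open days"], "causal_relations": []}

DIALOGUE_GAMES = {
    "Identify variables": lambda s: True,                            # always available
    "Relate variables": lambda s: len(s["variables"]) >= 2,          # needs at least two variables
    "Identify feedback loops": lambda s: len(s["causal_relations"]) >= 2,
}

available = [name for name, precondition in DIALOGUE_GAMES.items() if precondition(session_state)]
print("Games the facilitator can currently switch to:", available)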
The above implies that the sequence in which questions are to be asked cannot be predicted, nor does it need to be. Which questions are asked in which order is determined by:
• Which questions are part of a particular DG (with some specific focus)
• In what order the questions are asked within that DG, which depends on active question choosing by the facilitator, but equally so on the highly unpredictable conversational actions taken by the participants
• In what order the session jumps from one DG to another, as mostly determined by the facilitator.
In this sense, a modelling session has the character of a semi-structured interview rather than that of a structured one.
Finally we consider the challenge of generating, dynamically and on the spot, the detailed content of each question item during a series of interrelated DGs. We believe that in many cases, a manageable number of basic interaction modes can be discerned beforehand, i.e. in the preparatory phase of organized modelling sessions, and perhaps even as part of a stable 'way of working' in some organizational context. Thus, DGs can be designed, including:
• the question parts of Focus Questions
• the Forms
• the Examples
However, this excludes some more context-dependent items:
• both the main question and the purpose parts of the Goal questions
• the topic parts of the Focus Questions
These items will have to be formulated for and even during every specific DG. Some of them may be predictable, since they may be based on specific information about the domain available before the session is initiated. However, a (large) part of the domain-specific information may emerge directly from the actual session, and also 'previously available information' may change because of this. The main question and the purpose parts of the Goal questions at least can be determined in preparation of a particular session, typically in project context [START_REF] Hoppenbrouwers | Stakeholder Communication, in Agile Service Development -Combining Adaptive Methods and Flexible Solutions[END_REF][START_REF] Zoet | An Agile way of Working, in Agile Service Development: Combining Adaptive Methods and Flexible Solutions[END_REF], and will usually remain pretty stable during a modelling session. This leaves the topic parts of the Focus Questions: what topic the individual, opener-born question phrasings are applied to.
As discussed in Section 4.3, such topic phrasings are highly context-specific. If they are to be inserted on the spot by facilitators or other participants in an unsupported environment, they will demand a lot from the domain awareness and language capacity of those involved. Fortunately, such capacity is usually quite well developed, and the task is challenging but not infeasible -as has often been shown in practice. Yet let us also consider tool support. If partially automated support is involved (DGs as a case of collaboration technology [START_REF] Hoppenbrouwers | From Dialogue Games to m-ThinkLets: Overview and Synthesis of a Collaborative Modeling Approach[END_REF]), close interaction will be required between the question generator and the structured repository of information available so far. Needless to say, this poses rather high demands on the accessibility, performance, and well-structuredness of such a repository. Yet not in all cases will the generation of questions (based on the knowledge repository) be fully automatic: in many cases, the facilitator or other participants may be offered a choice from the (limited) number of items relevant in the present context.
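A small Python sketch of such partially automated support is given below: candidate topic phrasings are drawn from the repository of propositions accepted so far and offered to the facilitator as a limited choice; the repository structure and the selection rule are assumptions for illustration.

# Hypothetical sketch: offering the facilitator a limited choice of topics for the next focus question,
# based on the repository of propositions accepted so far in the session.
repository = [
    {"type": "variable", "label": "Number of students enrolling", "discussed": True},
    {"type": "variable", "label": "Number of open days organized", "discussed": False},
    {"type": "variable", "label": "Time spent on recruitment", "discussed": False},
]

def candidate_topics(repo, max_choices=3):
    fresh = [item["label"] for item in repo if not item["discussed"]]  # prefer items not yet used as topics
    return fresh[:max_choices]

question_part = "What might influence {topic}?"
for topic in candidate_topics(repository):
    print("Suggested question:", question_part.format(topic=topic.lower()))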
Conclusion and Further Research
We have presented a discussion of a number of issues with respect to 'asking questions' in the context of collaborative modelling sessions in enterprise engineering. Central in this discussion was the Question Asking Framework (QAF), a heuristic construct whose concepts can support the analysis and design of question-related aspects of (in particular) highly focused sub-conversations. Our discussion was set against the background of 'Dialogue Games for modelling'. The findings have already been used, to a greater or lesser extent, in the design of prototype Dialogue Games in various modelling contexts.
We are now collaborating with two industrial parties who have taken up the challenge of bringing the Dialogue Game idea to life in real projects. We work towards the creation of a reasonably coherent set of support modules that enable the rapid development and evolution of Dialogue Games for many different purposes and situations, involving a number of different flavours of modelling -both in view of the modelling languages and techniques involved, and of the style and setup of collaboration [START_REF] Lankhorst | Agile Service Development: Combining Adaptive Methods and Flexible Solutions[END_REF]. Most of the ideas and concepts put forward in this paper have already played a role in design sessions, in which they turned out to be extremely helpful. Together with other concepts from the Dialogue Game approach, they enabled us to create a good and clear focus for talking about modelling sessions in a highly specific, support-oriented way. While further validation of the presented concepts certainly needs to be pursued in the near future, we do claim that a first reality check and operational validation has in fact been performed, with satisfactory results. Among many possible topics for further research, we mention some interesting ones:
• Effective capturing of generic rules for facilitation in DGs
• Decision making for jumping between DGs
• Optimal ways of communicating rules, goals, assignments and directives in DGs
• Interactive use of advanced visualisations blended with chat-like dialogues
• Limitations and advantages of on-line, distributed collaborative modelling using DGs
• Using DGs in system maintenance and as an extension of helpdesks
• Making intelligent suggestions based on design and interaction patterns and using AI techniques
• Automatically generating questions and guiding statements for use in DGs, based on natural language generation and advanced HCI techniques
Fig. 1. Concepts of the heuristic Question Asking Framework (QAF)
In Fig. 1, we show the basic concepts plus an informal indication (the arrows) of how the elements of the QAF are related in view of a generative route from Goal to Answer: based on the pragmatic and possibly also the semantic-syntactic goal of the effort at hand, a set of questions is to be asked. For each Question, and also very much dependent on its Goal, the auxiliary means are both intensional (F) and extensional (E) descriptions of the sort of answer fulfilling Q. Combinations of Q, F and E should lead to A: the eventual Answer (which as such is out of scope of the framework). While the QAF is by no means a big theoretical achievement, it does provide a good heuristic for the analysis and design of 'question structures' in dialogue games. It is helpful in systematically and completely identifying and phrasing questions and related items (the latter being rather important in view of active facilitation). Below we will proceed to discuss the concepts of the QAF in more detail, as well as matters of sequence and dynamic context. We will use an explanatory example throughout, taken from our previous work in 'Group Model Building' (GMB), an established form of collaborative modelling in the field of Problem Structuring. Space lacks here for an elaborate discussion of GMB; we will very briefly provide some information below, but for more we have to refer to [START_REF] Hoppenbrouwers | A Dialogue Game for Analysing Group Model Building: Framing Collaborative Modelling and its Facilitation[END_REF].
Fig. 2. Example of a causal loop diagram resulting from a GMB dialogue game
Acknowledgements
We are grateful for early contributions made to the ideas presented in this paper by Niels Braakensiek, Jan Vogels, Jodocus Deunk, and Christiaan Hillen. Also thanks to Wim van Stokkum, Theodoor van Dongen, and Erik van de Ven. | 44,995 | [
"1003526"
] | [
"348023",
"300856"
] |
01484387 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484387/file/978-3-642-34549-4_3_Chapter.pdf | Julia Kaidalova
email: julia.kaidalova@jth.hj.se
Ulf Seigerroth
email: ulf.seigerroth@jth.hj.se
Tomasz Kaczmarek
email: t.kaczmarek@kie.ue.poznan.pl
Nikolay Shilov
Practical Challenges of Enterprise Modeling in the light of Business and IT Alignment
Keywords: Enterprise Modeling, Business and IT Alignment, EM practical challenges
The need to reduce a gap between organizational context and technology within enterprise has been recognized and discussed by both researchers and practitioners. In order to solve this problem it is required to capture and analyze both business and IT dimensions of enterprise operation. In this regard, Enterprise Modeling is currently considered as widely used and powerful tool that enables and facilitates alignment of business with IT. The central role of EM process is EM practitioner -a person who facilitates and drives EM project towards successful achievement of its goals. Conducting EM is a highly collaborative and nontrivial process that requires considerable skills and experience since there are various challenges to manage and to deal with during the whole EM project. Despite quite wide range of related research, the question of EM challenges needs further investigation, in particular concerning the viewpoint of EM practitioners. Thus, the purpose of this paper is to identify challenges that EM practitioners usually face during their modeling efforts taking into consideration potential influence of these challenges on successful conduct of EM and on alignment of Business and IT thereafter.
Introduction
Successful business management in the dynamically evolving environment demands considerable agility and flexibility from decision makers in order to remain competitive. As a part of business changes and business redesign, there is also a need to have clear understanding about current way of business operation. [START_REF] Stirna | Anti-patterns as a Means of Focusing on Critical Quality Aspects in Enterprise Modeling[END_REF] argue that Enterprise Modeling (EM) is one of the most powerful and widely used means that meets both types of needs. They mark out two general purposes that EM can be used for. The first purpose is business development, for example, development of business vision and strategies, business operations redesign, development of the supporting information systems, whereas the second one is ensuring business quality, for example, knowledge sharing about business or some aspect of business operation, or decision-making.
EM is a process for creating enterprise models that represent different aspects of enterprise operation, for example, goals, strategies, needs [START_REF] Stirna | Integrating Agile Modeling with Participative Enterprise Modeling[END_REF]. The ability of enterprise models to depict and represent enterprise from several perspectives to provide a multidimensional understanding makes EM a powerful tool that also can be used for Business and IT alignment (BITA) [START_REF] Seigerroth | Enterprise Modelling and Enterprise Architecture: the constituents of transformation and alignment of Business and IT[END_REF][START_REF] Wegmann | Business and IT Alignment with SEAM[END_REF]. In general the problem of BITA has received great attention from both practitioners and researchers [START_REF] Chan | IT alignment: what have we learned[END_REF];; [START_REF] Luftman | Key issues for IT executives[END_REF]. This branch of EM focuses on the gap between the organizational context and technology (information systems in particular) that is pervasive in organization operations and provides a backbone as well as communication means for realizing the organization goals. Particularly, in the domain of modeling similar calls for alignment of information systems and business emerged within various modeling efforts [START_REF] Grant | Strategic alignment and enterprise systems implementation: the case of Metalco[END_REF][START_REF] Holland | A Framework for Understanding Success and Failure in Enterprise Resource Planning System Implementation[END_REF][START_REF] Seigerroth | Enterprise Modelling and Enterprise Architecture: the constituents of transformation and alignment of Business and IT[END_REF].
EM is usually a participative and collaborative process, where various points of view are considered and consolidated [START_REF] Stirna | Integrating Agile Modeling with Participative Enterprise Modeling[END_REF]. Two parties of EM are participants from the enterprise itself and EM practitioner (or facilitator) that leads modeling session(s). The first group of stakeholders consists of enterprise employees who have to share and exchange their knowledge about enterprise operations (domain knowledge). There are various factors that can hinder the process of sharing knowledge between enterprise members, for example, as the project progresses the enterprise becomes less interested to allocate their most knowledgeable human resources to modeling sessions, since it can be considered as waste of time (Barjis, 2007). The second party of EM is the EM practitioner a person who facilitates and drives EM project process (partly or fully) towards effectively achieving its goals [START_REF] Persson | Towards Defining a Competence Profile for the Enterprise Modeling Practitioner[END_REF]. This role is responsible for making sure that the project resources are used properly in order to achieve the goals of the project and to complete the project on time (ibid, [START_REF] Rosemann | Four facets of a process modeling facilitator[END_REF]. Thus, EM practitioner needs to have considerable experience and broad range of knowledge regarding EM execution, since various problems and challenges occur both during execution of EM sessions and follow-up stages of EM [START_REF] Stirna | Anti-patterns as a Means of Focusing on Critical Quality Aspects in Enterprise Modeling[END_REF].
The need for documentation guidelines related to EM has been revealed and highlighted by several researchers, i.e. cf. [START_REF] Stirna | Anti-patterns as a Means of Focusing on Critical Quality Aspects in Enterprise Modeling[END_REF]. Identification of factors that can hinder successful application of EM can be considered as one aspect of such guidelines. Several researchers have claimed that there is a need to investigate challenging factors as an important component of EM practice (Bandara et al., 2006;[START_REF] Seigerroth | Enterprise Modelling and Enterprise Architecture: the constituents of transformation and alignment of Business and IT[END_REF][START_REF] Kaczmarek | Multi-layered enterprise modeling and its challenges in business and IT alignment[END_REF]. This has surfaced the need to investigate factors that are considered as challenging from the viewpoint of EM practitioners. In particular, it is interesting to identify challenges that EM practitioner are facing during both EM sessions and the follow-up stages of EM project. Identification and description of these challenges can serve as a considerable help for EM practitioners, which can facilitate successful accomplishment of EM project and in turn support BITA within modeled enterprise. The research question of the paper is therefore defined according to below.
What challenges do enterprise modeling practitioners face during EM? The rest of the paper is structured in the following way: Section 2 presents related research, Section 3 describes the research method that has been applied to address the research question, and Sections 4 and 5 present the results. The paper then ends with conclusions and a discussion of future work in Section 6.
Related Research
A need to deal with a gap between organizational context and technology within enterprise has been recognized and discussed by research community for quite some time [START_REF] Orlikowski | An improvisational model for change management: the case of groupware[END_REF]. Several researchers have emphasized the need to capture dimensions of both business and IT during design and implementation of IS (i.e. cf. [START_REF] Gibson | IT-enabled Business Change: An Approach to Understanding and Managing Risk[END_REF]. In this respect, EM serves as a widely-used and effective practice, because of the core capability of enterprise models to capture different aspects of enterprise operation. Thus, EM currently gets more and more recognition as a tool that can be used for alignment of business with IT [START_REF] Seigerroth | Enterprise Modelling and Enterprise Architecture: the constituents of transformation and alignment of Business and IT[END_REF].
Performing EM successfully is a nontrivial task that requires considerable skills and experience since there are various issues to manage and to deal with during the whole EM project [START_REF] Stirna | Participative Enterprise Modeling: Experiences and Recommendations[END_REF]. Among core challenges of EM [START_REF] Barjis | Collaborative, Participative and Interactive Enterprise Modeling[END_REF] highlights the complex sociotechnical nature of an enterprise and conflicting descriptions of the business given by different actors. [START_REF] Indulska | Business Process Modeling: Current Issues and Future Challenges[END_REF] present the work that is dedicated to current issues and future challenges of business process modeling with regard to three points of views: academics, practitioners, and tool vendors. The main findings of their work are two lists with top ten items: current business process modeling issues and future business process modeling challenges. They also mention a number of areas that attract attention of practitioners, but still have not been considered by academics, for example, value of business process modeling, expectations management and others. [START_REF] Delen | Integrated modeling: the key to holistic understanding of the enterprise[END_REF] investigates challenges of EM and identified four challenges with regard to decision mak point of view: heterogeneous methods and tools, model correlation, representation extensibility, and enterprise model compiling.
Another piece of research that investigates the question of EM challenges is presented by [START_REF] Kaczmarek | Multi-layered enterprise modeling and its challenges in business and IT alignment[END_REF]. Their work identifies four challenges of EM, which will serve as a basis for our work. The first challenge is Degree of formalism. There are different modeling notations (from formal, machine-interpretable languages to very informal rich pictures), and the expressivity of the selected formalism impacts the final model. The second one is Degree of detail. It is the problem of deciding how many things need to be put into a model at different layers of EM in order to describe a certain situation. The third challenge is Accuracy of the view. It is the challenge of selecting a point of view during modeling. The fourth one is Change and model dependencies. This challenge refers to the fact that modeling is usually done in a constantly changing environment. Models should direct the change in the enterprise, but models also undergo changes. In multi-layered modeling, a change at one layer of the model might have consequences on other layers, and can reflect the change that the enterprise undergoes.
Apart from that, there are several research directions that we consider as related research, below we present three of them. The first are practical guidelines to perform EM. Guidelines are always created in response to challenges and problematic issues that arise during practical activities, therefore it can be possible to get an idea about EM challenges by looking on practical guidelines to perform EM. The second research direction is facets and competence of EM practitioner, which focuses on key factors that determine competence of EM practitioner and highlights, first and foremost, the core questions that EM practitioner is supposed to solve. The third related research direction is EM critical success factors, which focuses on identification of factors that are crucial for success of EM efforts. Since significant part of EM efforts is done by EM practitioner, it is possible to get an idea about EM challenges based on EM critical success factors. Combined overview of these related research directions provided us with a broad foundation regarding potential EM challenges. It helped us on further stages of research, including construction of interview questions and conducting of interviews with respondents.
Practical guidelines to perform EM
There are several papers that are introducing different kinds of guidelines for carrying out EM. [START_REF] Stirna | Participative Enterprise Modeling: Experiences and Recommendations[END_REF] describe a set of experiences related to applying EM in different organizational contexts, after what they present a set of generic principles for applying participative EM. Their work marks out five high-level recommendations of using participative EM. Presented generic recommendations are the following: assess the organizational context, assess the problem at hand, assign roles in the modeling process, acquire resources for the project in general and for preparation efforts in particular, conduct modeling sessions. [START_REF] Stirna | Anti-patterns as a Means of Focusing on Critical Quality Aspects in Enterprise Modeling[END_REF] introduce guidelines for carrying out EM in form of antipatterns EM common and reoccurring pitfalls of EM projects. Presented antipatterns address three aspects of EM the modeling product, the modeling process, and the modeling tool support. For example, the second group consists of the following anti-patterns: everybody is a facilitator, the facilitator acts as domain expert, concept dump and others. Group addressing EM tool support contains the next everyone embraces a new tool and others.
Facets and Competence of Enterprise Modeling Practitioner
The significance of the EM practitioner role for overall success of EM project is admitted and discussed by several researchers. Among others, [START_REF] Persson | Towards Defining a Competence Profile for the Enterprise Modeling Practitioner[END_REF] have presented a work that analyses competence needs for the EM practitioner with regard to different steps in the EM process. They consider that EM process consists of the following activities: project inception and planning, conducting modeling sessions, delivering a result that can be used for subsequent implementation project. Two main competence areas that are identified here are competences related to modeling (ability to model;; ability to facilitate a modeling session) and competences related to managing EM projects (for example, ability to select an appropriate EM approach and tailor it in order to fit the situation at hand;; ability to interview involved domain experts).
Another view on competence of EM practitioner is presented by [START_REF] Rosemann | Four facets of a process modeling facilitator[END_REF]. They argue that key role of the modeling facilitator has not been researched so far and present a framework that describe four facets (the driving engineer, the driving artist, the catalyzing engineer, and the catalyzing artist) that can be used by EM practitioner.
Critical success factors
Critical success factors within the context of EM research can be defined as key factors that ensure the modeling project to progress effectively and complete successfully [START_REF] Bandara | Factors and Measures of Business Process Modelling: Model Building Through a Multiple Case Study[END_REF]. [START_REF] Bandara | Factors and Measures of Business Process Modelling: Model Building Through a Multiple Case Study[END_REF] divide critical success factors of business process modeling into two groups: project-specific factors (stakeholder participation, management support, information resources, project management, modeler experience) and modeling-related factors (modeling methodology, modeling language, modeling tool).
The work of [START_REF] Rosemann | Critical Success Factors of Process Modelling for Enterprise Systems[END_REF] identifies the factors that influence process modeling success. Among them they mention: modeling methodology, modeling management, user participation, and management support.
Research Method
General overview of the research path is presented in Figure 1. As a basis for the present work we have used the work of [START_REF] Kaczmarek | Multi-layered enterprise modeling and its challenges in business and IT alignment[END_REF] that is dedicated to multi-layered EM and its challenges in BITA. Our study started from interview design that could fulfill two purposes: validate EM challenges that have been preliminary presented by [START_REF] Kaczmarek | Multi-layered enterprise modeling and its challenges in business and IT alignment[END_REF] and identify other EM challenges. It is important to mention that both kinds of challenges were supposed to be identified considering their potential influence on successful EM execution and, in its turn, on alignment of business and IT subsequently.
Interview design
In order to identify practical challenges that EM practitioners face it was decided to conduct semi-structured interviews. This kind of empirical research strategy is able to provide in-depth insight into practice of EM and, what is even more important, it allows steering respondents into desired direction in order to receive rich and detailed feedback.
The interview questions consisted of two parts that could support the investigation of EM challenges that have a potential to influence BITA of the modeled enterprise: questions with the purpose to identify challenges that EM practitioners face, and questions with the purpose to validate the preliminary set of EM challenges (identified in [START_REF] Kaczmarek | Multi-layered enterprise modeling and its challenges in business and IT alignment[END_REF]). In combination, these two groups of questions were supposed to provide a comprehensive and integral picture of EM practical challenges. The questions were constructed in such a way that it was possible to identify challenges in both direct and indirect ways. Apart from the few examples given below, the full list of questions can be accessed for download1.
The first part of the interviews had the intention to disclose the most significant challenges that respondents face during EM. In order to carry out this part of the interview we designed a set of direct questions (among others, "..."). The second group of questions had the particular intention to validate the preliminary set of EM challenges. This group included both direct and indirect questions. For example, validation of the Degree of formalism challenge has been done with the help of a direct question ("... consider degree of formalism ...") and a number of indirect questions (among others, "..."). Having these two types of questions helped us to look into the real fact of the matter instead of just checking it superficially. It should be noted that during the further analysis we did not differentiate between direct and indirect answers regarding one or another challenge. In other words, a challenge was considered as admitted by a particular respondent even if he/she admitted it only in answers to indirect questions.
The final question of the interviews has been designed in such a way that we could conclude the discussion and get a filtered and condensed view on EM practical challenges ("..."). The intention here was to make the respondents reconsider and rank the challenges that they had just mentioned, so that it is possible to see which of those they consider the most important.
Selection of respondents
Since we have chosen interviews as an empirical method of our work a significant part of the work was dedicated to choosing the right respondents. It was important to find people with considerable EM experience within SMEs. Finally, four respondents with 10-16 years of EM experience have been chosen. Chosen EM practitioners have mostly been working with SMEs within Sweden: Respondent 1 (Managing partner Skye AB), Respondent 2 (Test Manager at The Swedish Board of Agriculture), Respondent 3 (Senior Enterprise Architect at Enterprise Design, Ferrologic AB), and Respondent 4 (Senior Business Consultant at Department for Enterprise Design Ferrologic AB).
Conduct of interviews
Interviews started from a preliminary stage during which respondents have been provided with brief description of previously identified EM challenges (in work of [START_REF] Kaczmarek | Multi-layered enterprise modeling and its challenges in business and IT alignment[END_REF]. This stage had a goal to start and facilitate further discussion by either admitting or denying identified challenges. It also served as a warm-up that opens the main part of the interview, which came right after. The rest of interviews consisted in discussion of prepared question in a very open-ended manner. In other words, respondents were able to build their answers and argumentation quite freely and unconstrained, however, prepared interview questions served as a directive frame for our conversation.
Analysis of interview data and results generation
Interviews have been recorded and analyzed afterwards. During analysis of interview data our goal was to detect all challenges that have been mentioned by interview respondents, but, what is even more important, it was necessary to logically group detected challenges. This was done by documenting mentioned challenges in a structured manner and putting those challenges, which were related to each other, into one coherent category. Thus, it was possible to generate the main part of results: a set of conceptually structured EM practical challenges. Moreover, we could introduce another part of results, which are general recommendations to deal with presented challenges. However, it is important to make clear differentiation between two deliverables of the present study, since the way to obtain the general structure of EM practical challenges (analysis of interview data as such) differs from the way to obtain general recommendations to deal with those challenges (analysis of generated challenges taking into consideration interview data). Results of interview study are presented in the next section.
Results of Interview Study
As would be expected, EM practitioners are facing various challenges during EM. Several statements of the respondents (Respondent 1 and Respondent 3) helped us to identify two central activities that unite these challenges (cf. Figure 2 below); for example: "... room with computer and capturing political aspect and human aspect. These aspects are the most difficult in modeling" (Respondent 3).
Fig. 2. Two challenging activities of EM: (1) extracting information about the enterprise; (2) transforming information into enterprise models.
Thus, it was possible to distinguish the first challenging activity, which is extracting information about enterprise by EM practitioner, from the second one, which is further transformation of this information into enterprise models. Interestingly enough, two out of four respondents have strongly emphasized the importance and complexity of the first activity, not the second one.
"... those challenges together are much smaller than challenges with getting the ..." (Respondent 3); "...-related issues are underestimated! It is people that we are working with. We create models, we build models and we can be very specific about relations between them, but that is just technical stuff. The important thing is to get people that ..." (Respondent 4).
Below we present a detailed description of the challenges that have been identified. In order to generate the presented items we considered and, where possible, grouped all challenges that have been mentioned by the interview respondents. The statements of the interview respondents that we relied on when identifying and generating the EM challenges are available for download2.
Challenges that are related to extracting information about enterprise
This group includes challenges that EM practitioner face while obtaining information about enterprise operation during EM workshops and other fact-finding activities.
Right information
This challenge is related to the fact that it is usually quite problematic to get information that is really relevant for solving particular modeling problem. According to our respondents, quite often they need to be very persistent and astute while communicating with enterprise in order to make them share their knowledge about enterprise operation. Often it leads to the situation when EM practitioner finally has too much information, with different degree of validity and accuracy. The answers also indicate the problem of fuzziness of information, white spots that the participants don't know about and possible inaccuracies in the information obtained from them. This might pose the challenge for modeling which typically requires accurate, complete and clear information.
Group dynamic and human behavior
Another challenge is that EM practitioner is supposed to deal with group of people that have various tempers, models of behavior and, what is even more important, relations between them. It undoubtedly leads to building unique group dynamic that has to be considered and controlled by EM practitioner in order to steer modeling sessions efficiently.
Shared language and terminology
During EM project different stakeholders usually have different background and consequently different understanding of used terms and relations between these terms. It leads to various problems during EM sessions when stakeholders use different names to address the same concept or, on the contrary, the same names when talking about totally different things. In addition, in some cases employees of an enterprise use some unique terminology that EM practitioner is not familiar with, so EM practitioner needs to adapt in-flight. All these factors lead to the strong need to create shared terminology between project stakeholders in order to create a common ground for efficient communication.
The purpose of EM and roles of stakeholders within it
One of the most problematic issues during EM project is to make project stakeholders understand the essence of EM as such, since in most of the cases they are not familiar with executive details of EM and with idea of EM in general. Clarification of it might include different aspect: general enlightenment of purposes and goals of EM project;; description of roles and relevant responsibilities that different stakeholders are supposed to have within EM project together with description of EM practitioner role;; explanation of key capabilities of enterprise models, for example, difference between enterprise models and other representative artifacts.
Challenges that are related to transforming information into enterprise models
This area includes challenges that EM practitioner face while transforming information about enterprise operation into enterprise models. In contrast to the process of obtaining information, this process mostly does not involve collaboration of EM practitioner with other stakeholders. It is a process of enterprise models creation in some tangible or intangible form, so that it will be possible to use them further.
Degree of formalism
This challenge is related to degree of formalism that is supposed to be used during whole EM project, since existing modeling notations vary from very formal machine interpretable languages to very informal with quite rich pictures (when EM practitioner decides how to document different kinds of findings). From one point of view, it is preferable to use quite formal notation, since in this way enterprise models can be used and reused further even during other projects. However, using formal notation with some stakeholders can hinder the process of modeling, since they might become overloaded and stressed by describing enterprise operations in a way that is too formal for them. Thus, the choice of formalism degree is a quite challenging task that EM practitioner is supposed to solve.
Degree of detail
This challenge is about how many details each layer of enterprise model should have. Degree of detail can be high (which includes plenty of details within the model) and low (which includes quite general view on enterprise operation). From one point of view it is important to describe enterprise operation with a high degree of detail, so that it will be possible to see as much elements and interaction between them as possible. However, sometimes it is crucial to have a general view on enterprise functioning, since stakeholders, to the contrary, are interested in rather overall view on it. Thus, the challenge is to leave on enterprise model only important and required details.
Modeling perspective
It is a challenge of selecting point of the view during EM. Certainly, enterprise models are able to represent various views on enterprise functioning, which makes them indispensable to deal with different views of stakeholders and with different aspects of enterprise operation. However, in some cases it can be problematic to understand the consequences of adopting certain point of the view on one layer of modeling. In addition, it might be not easy to see how this point of view on one layer will affect other layers.
Change and model dependencies
This challenge is related to the fact that EM is always done in constantly changing environment, which cause the need to keep track of coming changes and update models accordingly. In multi-layered EM it can be quite problematic to keep track of influence of model change on one layer on models on other layers. Some tools enable automatic fulfillment of this task, whereas others do not have such capability.
Scope of the area for investigation
This is a challenge that is related to limiting the scope of the interest during EM. On the one hand, it is important to have rather broad overview of enterprise functioning, since it can provide comprehensive and clear view on all actors and cause-effect relationships that take place within modeled enterprise. However, having very broad view can hinder efficient EM, since in this case EM practitioner need to analyze enormous amount of information instead of focusing on the most problematic areas. Thus, it can be quite problematic to define the scope of investigation in properly.
Overall conceptual structure of EM practical challenges
Taking into consideration interview findings and previous work of [START_REF] Kaczmarek | Multi-layered enterprise modeling and its challenges in business and IT alignment[END_REF], it was possible to build conceptual structure of challenges that EM practitioners face. With the help of interviews it was possible to reveal general conceptual distinction between two challenging areas of EM, that is why it was reasonable to divide EM challenges into two groups. The first group consist of challenges that are related to extraction of information about enterprise, i.e. extract right information, manage group dynamic and human behavior, use shared language and terminology, clarify the purpose of EM and roles of stakeholders within it. The second group consisted of challenges that are related to transforming extracted information into models, i.e. choose degree of formalism, choose degree of detail, adapt modeling perspective, keep track of change and model dependencies, define and stick to the scope of the area for investigation (see Figure 3 below).
General Recommendations to Deal with EM Practical Challenges
In Section 4 we have presented the results of the performed interview study, which concluded in building a conceptual structure of EM practical challenges. Afterwards we could analyze the created structure of EM challenges, while keeping in mind the views and opinions of the interview respondents; therefore it was also possible to generate a number of general recommendations that can help EM practitioners to cope with the identified EM challenges (see Table 1 below).
Conclusions and Future Work
The need to successfully conduct EM in order to align business and IT is acknowledged and discussed, thereby practical challenges of EM are turning out to be an important aspect to investigate. The main purpose of the work was to identify challenges that EM practitioners face during EM. Correspondingly, the main finding of the work is a set of conceptually structured practical challenges of EM. It includes two groups of challenges that take place within EM: extracting of information that is related to enterprise operation and transforming this information into models. Challenges that have been discovered within the first activity are right information, group dynamic and human behavior, shared language and terminology, the purpose of EM and roles of stakeholders within it. The second group involves the following challenges: degree of formalism, degree of detail, modeling perspective, change and model dependencies, and scope of the area for investigation. Moreover, work introduced a number of general recommendations that can help EM practitioner to deal with identified challenges.
From practical point of view presented challenges and general recommendations can be considered as supportive guidelines for EM practitioners, which, in its turn, can facilitate successful EM execution and subsequently ensure BITA. From scientific point of view identified challenges and general recommendations can serve as a contribution to the particular areas of EM practical challenges and documented guidelines for conducting EM, which, in a broad sense, makes an input to the question of EM successful execution and, correspondingly, to the question of BITA.
The study has several limitations, which we plan to address in future research. One of them is related to the fact that the data collected at this stage of the study was limited to the Swedish context. We plan to validate the results also for other regions. An important aspect of future work is therefore to elaborate created conceptual structure of EM challenges into comprehensive framework with the help of solid empirical contribution from international EM practitioners, since it is interesting to get a broader picture of EM practical challenges taking into consideration international modeling experience. Second, it would be useful to validate the results obtained from our initial group of practitioners with a larger, more diverse group. This is also subject to our future work. Another aspect that should be considered in future is enhancement of recommendations to deal with EM challenges.
Fig. 1. General research path.
Fig. 3. Overall conceptual structure of EM practical challenges.
The next stage included the selection of respondents, after which it was possible to conduct the interviews. Then the collected empirical data was analyzed, after which it was possible to generate the results in order to answer the research question.
Fig. 1 depicts the research path: the basis in multi-layered Enterprise Modeling and its challenges in Business and IT Alignment (Kaczmarek et al., 2012); interview design (aimed at validation of the preliminarily identified EM challenges and identification of new EM challenges); selection of respondents (EM practitioners with significant experience of modeling with SMEs); conduct of interviews; analysis of data from the interviews; and results generation (the overall conceptual structure of EM practical challenges and general recommendations to deal with the identified challenges).
Table 1. EM challenges and general recommendations to deal with them
General recommendations have been generated taking into consideration the opinions of the interview respondents. For example, recommendation R15 ("Keep the balance between readability of the model and the functionality of it, depending on the given ...") was formulated considering statements of Respondent 1 and Respondent 3: "Sometimes you end up in a need to decide what would be the best: to create good graphical representation or to create sound and valid model. In some cases customers want to generate code from the model, so if the model is inconsistent they definitely get problems with their code generation." (Respondent 3); "The problem when you make the model in formal way is that, when you try to describe it, you can really get in trouble with communication" (Respondent 1). Another example is recommendation R23 that can deal with the challenge of defining the scope of the area for investigation. It has been formulated considering statements of Respondent 1 and Respondent 2: "We need to know what we should do and to focus on that." (Respondent 1); "If you have a problem and stakeholders think it lies in this area, it is not enough to look at that area, because you need larger picture to really understand the problem. That is why you always need to look at a bigger area in the beginning to get a total picture. It is important that you do not go too ..." (Respondent 2).
Other recommendations listed in Table 1 include: Lift the focus if models are unnecessarily detailed. R17. It is usually reasonable to work with different degrees of detail, since it is often important to see the business on different levels. R18. When communicating with participants it is usually reasonable to step up from the current level of detail and start asking the WHY question instead of the HOW question. R19. Define the degree of detail at the initial stage of EM, taking into consideration the goals and the purpose of the EM project. At the initial stage of EM, look at a larger area than what the stakeholders are describing; however, stay focused on the identified problematic areas during further stages.
Challenge area: Extracting enterprise-related information
  Challenge: Right information
    R1. Capture what stakeholders know for sure, not what they believe is true.
    R2. Build group of participants for modeling session from people with relevant knowledge and suitable social skills.
  Challenge: Group dynamic and human behavior
    R3. Make everyone involved.
    R4. Work with session participants as with group.
    R5. Avoid working with too large groups of participants during EM sessions.
    R6. Make sure that you are solving the right task that is given by right people.
  Challenge: Shared language and terminology
    R7. Conduct some kind of education (for example, warm-up introduction as start of modeling sessions).
    R8. Depending on audience ground your explanation on literature, experiences from previous projects or even on
http://hem.hj.se/~kaijul/PoEM2012/
http://hem.hj.se/~kaijul/PoEM2012/
Acknowledgements
This work was conducted in the context of the COBIT collaboration project, which is financed by the Swedish Foundation for International Cooperation in Research and Higher Education. COBIT is an international collaboration project between Jönköping University (Sweden), Poznan University of Economics (Poland) and St. Petersburg Institute for Informatics and Automation (Russia).
We acknowledge Kurt Sandkuhl, Karl Hammar and Banafsheh Khademhosseinieh for their valuable advice and interesting conceptual discussions during the process of paper writing. | 39,160 | [
"1003527",
"1003528",
"1003529",
"992762"
] | [
"452135",
"452135",
"300731",
"471046"
] |
01484389 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484389/file/978-3-642-34549-4_5_Chapter.pdf | Ilia Bider
Erik Perjons
email: perjons@dsv.su.se
Mturi Elias
Untangling the Dynamic Structure of an Enterprise by Applying a Fractal Approach to Business Processes
Keywords: Business Process, Enterprise Modeling, Fractal Enterprise
A promising approach for analyzing and designing an enterprise is to consider it as a complex adaptive system (CAS) able to self-adjust to the changes in the environment. An important part of designing a CAS model is to untangle the dynamic structure of an enterprise. This paper presents a procedure for identifying all processes that exist in an enterprise as well as their interconnections. The procedure makes use of a number of process-assets and asset-processes archetypes. The first ones help to find out what assets are needed for a particular process, the second ones help to find out supporting processes that are needed to have each type of assets ready available for deployment. The procedure is based on the ideas of fractal organization where the same pattern is repeated on different levels. The uncovered dynamic structure of an enterprise can support strategic planning, change management, as well as discovering and preventing misbalances between its business processes. The paper also presents an example of applying the procedure to research activities of a university.
Introduction
One of the main characteristics of the environment in which a modern enterprise functions is its high dynamism due to globalization and speedy technological progress. To survive and grow in the dynamic environment with global competition for customers, capital and skilled workforce, a modern enterprise should be able to quickly adapt itself to changes in the environment, which includes using opportunities these changes offer for launching new products and services.
This new enterprise environment has already attracted the attention of researchers who have started to consider an enterprise as a complex adaptive system (CAS) able to self-adjust to the changes in the environment [START_REF] Piciocchi | Managing Change in Fractal Enterprises and IS Architectures from a Viable Systems Perspective[END_REF][START_REF] Valente | Demystifying the struggles of private sector paradigmatic change: Business as an agent in a complex adaptive system[END_REF][START_REF] Engler | Modeling an Innovation Ecosystem with Adaptive Agents[END_REF][START_REF] Ramanathan | Fractal architecture for the adaptive complex enterprise[END_REF]. The long-term goal of our research project is to create a practical methodology for modeling an enterprise as a multi-layered CAS capable of self-adaptation without a centralized planning mechanism. Building such a model requires finding interconnections between various components of the enterprise. Such interconnections should allow efficient information exchange between the layers so that changes in various parts of the enterprise environment are promptly discovered and dealt with. The objective of having such a model is to help an enterprise to better understand its existing structure so that it could be fully exploited and/or improved.
In the short term, our research is currently focused on getting answers to the following two interconnected questions:
• How to find all processes that exist in an enterprise? This is not a trivial matter as only most visible processes catch attention of management and consultants. These processes represent only the tip of an iceberg of what exists in the enterprise in half-documented, or in totally undocumented form (tacit knowledge).
• What types of interconnections exist between different business processes and how they can be represented in an enterprise model? The answer is needed to get a holistic view on the enterprise processes which is one of the objectives of having an enterprise model.
Besides helping to achieve our long terms goals, such answers, if found, have their own practical application. Without knowing all business processes and their interconnections, it is difficult to plan any improvement, or radical change. Changes introduced in some processes without adjusting the associated processes may have undesirable negative consequences. Having a map of all processes and their connections could help to avoid such situations.
This paper is devoted to finding answers to the above two questions. This is done based on the enterprise model from [START_REF] Bider | Modeling an Agile Enterprise: Reconciling Systems and Process Thinking[END_REF] that represents an enterprise as consisting of three types of components: assets (e.g., people, infrastructure, equipment, etc.), sensors and business process instances. The working hypothesis, when answering the questions above, is that the processes and their relationships can be uncovered via the following procedure. One starts with the visible part of the iceberg, so-called main processes. Here, as main we count processes that produce value for which some of the enterprise external stakeholders are ready to pay, e.g., customers of a private enterprise, or a local government paying for services provided to the public. Typical examples of main processes are hard (e.g., a computer) or soft (e.g., software system) product manufacturing, or service delivery (e.g., educational process at a university). When the main processes are identified, one proceeds "under water" following up assets that are needed to run the main processes. Each assets type requires a package of so-called supporting processes to have the corresponding assets in "working order" waiting to be deployed in the process instances of the main process. To supporting processes belong, for example, human resources (HR) processes (e.g., hiring or retiring members of staff) that insure the enterprise having right people to be engaged in its main processes.
To convert the working hypothesis above into a procedure that could be used in practice, we introduce:
• Process-assets archetypes (patterns) that help to find out what assets are needed for a particular process, especially for a main process from which we start unwinding, • Assets-processes archetypes (patterns) that help to find out supporting processes that are needed to have each type of assets ready available for deployment.
Having these archetypes/patterns will help us to unveil the dynamic process structure of an enterprise, starting from the main process and going downwards via the repeating pattern "a main process -> its assets -> processes for each asset -> assets for each process -> …". As a result we will get an indefinite tree consisting of the same types of elements. Such structures are known in the scientific literature under the name of fractal structures [START_REF] Mcqueen | Physics and fractal structures[END_REF].
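To give an impression of the kind of structure this unwinding produces, the following sketch (in Python, and purely illustrative: the example process and asset names, the assets_for/processes_for lookup functions and the depth cut-off are our own assumptions, not part of the archetypes themselves) builds the alternating process-assets-process tree to a chosen depth.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AssetNode:
    name: str
    supporting_processes: List["ProcessNode"] = field(default_factory=list)

@dataclass
class ProcessNode:
    name: str
    required_assets: List[AssetNode] = field(default_factory=list)

def unwind(process_name: str,
           assets_for: Callable[[str], List[str]],      # process name -> needed asset names
           processes_for: Callable[[str], List[str]],   # asset name   -> supporting process names
           depth: int) -> ProcessNode:
    """Repeat the pattern process -> assets -> processes -> ... down to 'depth' levels."""
    node = ProcessNode(process_name)
    if depth == 0:
        return node
    for asset_name in assets_for(process_name):
        asset = AssetNode(asset_name)
        for sub_process in processes_for(asset_name):
            asset.supporting_processes.append(
                unwind(sub_process, assets_for, processes_for, depth - 1))
        node.required_assets.append(asset)
    return node

if __name__ == "__main__":
    # Toy knowledge used only to show the shape of the tree.
    ASSETS = {"service delivery": ["workforce", "paying stakeholders"]}
    PROCESSES = {"workforce": ["hiring"], "paying stakeholders": ["marketing"]}
    tree = unwind("service delivery",
                  lambda p: ASSETS.get(p, []),
                  lambda a: PROCESSES.get(a, []),
                  depth=2)
    print(tree.name, "->", [a.name for a in tree.required_assets])

In practice the two lookup functions correspond to applying the process-assets and asset-processes archetypes by hand; the sketch only shows that the same question pattern is repeated at every level.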
Based on the deliberations above, the goal of this paper is to introduce the process-assets and asset-processes archetypes/patterns, and show how to use them in practice to untangle the dynamic structure of an enterprise. The example we use for the latter is from the academic world. We start from one of the main processes -research project -in the university world and unwind it according to the procedure outlined above. The example was chosen based on the authors having their own experience of this process type as well as easy access to the expertise of colleagues. The choice of example does not mean that the procedure is applicable only to the university world. When discussing the archetypes, we will give examples from other types of enterprises as well.
The research presented in the paper is done in the frame of the design science paradigm [START_REF] Peffers | Design Science Research Methodology for Information Systems Research[END_REF][START_REF] Bider | Design science research as movement between individual and generic situation-problem-solution spaces[END_REF]. The goal of such kind of research is finding and testing a generic solution [START_REF] Bider | Design science research as movement between individual and generic situation-problem-solution spaces[END_REF], or artifact in terms of [START_REF] Peffers | Design Science Research Methodology for Information Systems Research[END_REF], for a class of practical problems. The archetypes and procedure of using them suggested in the paper constitutes a design science artifact for getting an answer for the two main questions discussed. Though most of the concepts used in building this artifact are not new, the artifact itself, which is the main contribution of the paper, as a whole is new and original. In addition, we do not know any research work specifically devoted to finding answers to the questions above. So our solution, even if not perfect, can be used in practice until a better one could be found.
The rest of the paper is structured in the following way. In Section 2, we present an overview of our three-layered enterprise model from [START_REF] Bider | Modeling an Agile Enterprise: Reconciling Systems and Process Thinking[END_REF]. In Section 3, we discuss process and assets archetypes (patterns). In section 4, we apply these patterns to unwind parts of the dynamical structure of a university. In Section 5, we discuss some related works. Section 6 discusses the results achieved and plans for the future.
The Assets-Sensors-Processes Model of an Enterprise
Our starting point is the systemic approach to enterprise modeling from [START_REF] Bider | Modeling an Agile Enterprise: Reconciling Systems and Process Thinking[END_REF]. We consider an enterprise as a system that reacts to different situations constantly emerging in its environment or inside itself, in order to maintain the balance between itself and the environment, or inside itself. An emerging situation is dealt with by creating a respondent system [START_REF] Lawson | A Journey Through the Systems Landscape[END_REF] that is disbanded after the situation has been dealt with. The respondent system is built from the assets that the larger system already has. Some of these assets are people, or other actors (e.g., robots). Other assets are control elements, e.g., policy documents, which define the behavior of the respondent system.
To deal with emerging situations effectively, an enterprise creates templates for the majority of known types of situations. Such a template is known under different names, like project template, business process definition, business process type, or business process model. We will refer to it as to Business Process Template (BPT). BPT contains two parts:
1. Start conditions that describe a situation which warrants creation of a respondent system
2. Execution rules that describe the composition and behavior of a respondent system
A respondent system created according to a BPT has different names, e.g., a project or a case. We will refer to such a system as a Business Process Instance (BPI).
Note that BPTs can exist in an organization in an explicit or implicit form, or a combination of both. Explicit BPTs can exist as written documents (e.g., employee handbooks or position descriptions), process diagrams, or be built into computerized systems that support running BPIs according to the given BPTs. Implicit BPTs are in the heads of the people engaged in BPIs that follow the given BPTs. These BPTs belong to what is called tacit knowledge.
Based on the systemic view above, we consider an enterprise as consisting of three types of components, assets, sensors and BPIs, depicted in Fig. 1, and explained below:
1. Assets, which include:
─ People with their knowledge and practical experiences, beliefs, culture, sets of values, etc.
─ Physical artifacts - computers, telephone lines, production lines, etc.
─ Organizational artifacts, formal as well as informal - departments, teams, networks, roles, etc.
─ Information artifacts - policy documents, manuals, business process templates (BPTs), etc. To information artifacts belong both written (documented) artifacts and tacit artifacts - the ones that are imprinted in people's heads (e.g., culture).
The assets are relatively static, which means that by themselves they cannot change anything. Assets are activated when they are included in the other two types of components. Assets themselves can be changed by other types of components when the assets are set in motion for achieving some goals. Note that assets here are not regarded in pure mechanical terms. All "soft" assets, like sense of common goals, degree of collaborativeness, shared vision, etc., belong to the organizational assets. Note also that having organizational artifacts does not imply a traditional function oriented structure. Any kind of informal network or resource oriented structural units are considered as organizational artifacts.
2. Sensors are a set of (sub)systems, the goal of which is to watch the state of the enterprise itself and its environment and catch impulses and changes (trends) that require firing of BPIs of certain types. We need a sensor (which might be a distributed one) for each BPT. The work of a sensor is governed by the Start Conditions of the BPT description (which is an informational artifact). A sensor can be fully automatic for some processes (an order placed by a customer in a web-based shop), or require human participation to detect changes in the system or its surroundings.
3. BPIs - a set of respondent systems initiated by sensors for reaching certain goals and disbanded when these goals are achieved. The behavior of a BPI system is governed by the Execution Rules of the corresponding BPT. Depending on the type, BPIs can lead to changes being made in the assets layer. New people are hired or fired, departments are reorganized, roles are changed, new policies are adopted, BPT descriptions are changed, new BPTs are introduced, and obsolete ones are removed.
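Read operationally, the three layers can be thought of as in the small Python sketch below; the predicate and handler signatures, the example template and the order data are invented here purely for illustration and are not prescribed by the model.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class BPT:
    """Business Process Template: start conditions plus execution rules."""
    name: str
    start_condition: Callable[[dict], bool]   # watches enterprise/environment state
    execution_rules: Callable[[dict], None]   # governs the behaviour of an instance

@dataclass
class BPI:
    """Business Process Instance: a respondent system created from a template."""
    template: BPT
    context: dict

    def run(self):
        self.template.execution_rules(self.context)

class Sensor:
    """One (possibly distributed) sensor per BPT, firing instances when needed."""
    def __init__(self, template: BPT):
        self.template = template

    def poll(self, state: dict) -> List[BPI]:
        if self.template.start_condition(state):
            return [BPI(self.template, dict(state))]
        return []

if __name__ == "__main__":
    order_handling = BPT(
        name="order handling",
        start_condition=lambda s: len(s.get("new_orders", [])) > 0,
        execution_rules=lambda ctx: print("handling", ctx["new_orders"]),
    )
    sensor = Sensor(order_handling)
    for bpi in sensor.poll({"new_orders": ["#1042"]}):
        bpi.run()   # the instance is disbanded once its goal is reached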
Process-Assets and Asset-Processes Archetypes
In [START_REF] Bider | Modeling an Agile Enterprise: Reconciling Systems and Process Thinking[END_REF], we have discussed several types of interrelationships between the components of an enterprise overviewed in the previous section, namely:
1. Sensors and BPIs use assets to complete their mission: to discover the need to fire a BPI in the case of a sensor, or to attain a goal in the case of a BPI.
2. BPIs can change the assets.
3. A sensor, as well as a BPI, can be recursively decomposed using the assets-sensors-processes model of Fig. 1.
In this paper, we concentrate only on the first two types of relationships between the components of the enterprise, leaving the third type, process decomposition, outside the scope of this paper. In other words, we will not be discussing any details of the internal structure of processes, focusing only on what types of assets are needed for running process instances of a certain type and in what way process instances can affect the assets.
The Process-Assets Archetype for Main Processes
We consider as enterprise any organization the operational activities of which are financed by external stakeholders. It can, for example, be a private company that gets money for its operational activities from the customers, a head office of an interest organization that gets money from the members, or a public office that gets money from the taxpaying citizens or inhabitants. We consider a main (or core) process to be a process that produces value to the enterprise's external stakeholders for which they are willing to pay. Our definition of the term main (or core) process may not be the same as those of others [START_REF] Hammer | How Process Enterprises Really Work[END_REF][START_REF] Scheer | ARIS -Business Process Modeling[END_REF]. For example, we consider as main processes neither sales and marketing processes, nor product development processes in a product manufacturing company. However, our definition of the main process does cover processes of producing and delivering products and services for external stakeholders, which is in correspondence with other definitions of main processes [START_REF] Hammer | How Process Enterprises Really Work[END_REF][START_REF] Scheer | ARIS -Business Process Modeling[END_REF].
Main processes are the vehicles for generating money for operational activities. To get a constant cash flow, an enterprise needs to ensure that new business process instances (BPIs) of main processes are started with some frequency. To ensure that each started BPI can be successfully finished, the enterprise needs to have assets ready to be employed so that the new BPI gets enough of them when started. We consider that any main process requires the following six types of assets (see also Fig. 2 and 3):
1. Paying stakeholders. Examples: customers of a private enterprise, members of an interest organization, local or central government paying for services provided for the public.1 2. Business Process Templates (BPTs). Examples are as follows. For a production process in a manufacturing company, the BPT includes the product design and the design of a technological line to produce the product. For a software development company that provides custom-built software, the BPT includes a software methodology (project template) according to which its systems development is conducted. For a service provider, the BPT is a template for service delivery. 3. Workforce - people trained and qualified for employment in the main process.
Examples: workers at the conveyor belt, physicians, researchers. 4. Partners. Examples: suppliers of parts in a manufacturing process, a lab that completes medical tests on behalf of a hospital. Partners can be other enterprises or individuals, e.g., retired workers that can be hired in case there is a temporary lack of skilled workforce to be engaged in a particular process instance. 5. Technical and Informational Infrastructure - equipment required for running the main process. Examples: production lines, computers, communication lines, buildings, software systems, etc. 6. Organizational Infrastructure. Examples: management, departments, teams, policies regulating areas of responsibility and behavior.
Below we give some additional clarification on the list of assets above.
• The order in which the asset types are listed is arbitrary and does not reflect the importance of assets of a given type; all of them are equally important. • Our notion of asset does not coincide with the one accepted in the world of finance [START_REF] Elliott | Financial Accounting and Reporting[END_REF]. Except for the technical infrastructure, all assets listed above belong to the category of so-called intangible assets of the finance world. Intangible assets usually lack physical substance and their value is difficult to calculate in financial terms. Technical infrastructure belongs to the category of fixed (i.e., having physical substance) tangible (i.e., the value of which can be calculated in financial terms) assets.
• All of the following three types of assets - paying stakeholders, skilled workforce, and partners - belong to the category of stakeholders. We differentiate them by the role they play in the main business processes. Paying stakeholders, e.g., customers, pay for the value produced in the frame of process instances. The workforce directly participates in the process instances and gets compensation for its participation (e.g., in the form of salary). Partners provide the process with resources needed for process instances to run smoothly, e.g., electricity (power provider), money (banks or other types of investors), parts, etc. Partners get compensation for their products and services in the form of payment, profit sharing, etc.
Fig. 2. The process-assets archetype for main processes
Fig. 3. An example of instantiation of the process-assets archetype for main processes
The type of process (main) together with the types of assets required for running it constitutes a process-assets archetype2 for main processes. Graphically it is depicted in the form of Fig. 2, in which the process type is represented by an oval and asset types by rectangles. An arrow from the process to an asset shows the need to have this type of asset in order to successfully run process instances of the given type. A label on an arrow shows the type of asset. Instantiation of the archetype is done by inserting labels inside the oval and rectangles. Fig. 3 is an example of such an instantiation for a product manufacturing process.
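The archetype/instantiation distinction can also be expressed as a small data structure. The sketch below is our own illustrative encoding (the names ProcessNode, AssetNode and instantiate_main_process are ours, not part of the model); it instantiates the process-assets archetype for a main process roughly as in Fig. 3.

```python
from dataclasses import dataclass, field
from typing import Dict, List

MAIN_PROCESS_ASSET_TYPES: List[str] = [
    "Paying stakeholders", "BPT", "Workforce", "Partners",
    "Technical & informational infrastructure", "Organizational infrastructure",
]

@dataclass
class AssetNode:
    label: str          # filled in when the archetype is instantiated
    asset_type: str     # the label on the arrow in Fig. 2

@dataclass
class ProcessNode:
    """A process (oval) with one labelled asset slot (rectangle) per required asset type."""
    label: str
    assets: Dict[str, AssetNode] = field(default_factory=dict)

def instantiate_main_process(process_label: str, labels: Dict[str, str]) -> ProcessNode:
    node = ProcessNode(label=process_label)
    for asset_type in MAIN_PROCESS_ASSET_TYPES:
        node.assets[asset_type] = AssetNode(labels.get(asset_type, "<empty>"), asset_type)
    return node

# Instantiation loosely following Fig. 3 (labels abbreviated).
manufacturing = instantiate_main_process("Product manufacturing", {
    "Paying stakeholders": "Customers",
    "BPT": "Product & technological process design",
    "Workforce": "Skilled workers",
    "Partners": "Parts suppliers",
    "Technical & informational infrastructure": "Production lines, IT systems",
    "Organizational infrastructure": "Management, departments",
})
print(manufacturing.assets["BPT"].label)
```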
3.2 The Asset-Processes Archetype
In Section 3.1, we have introduced six types of assets that are needed to ensure that BPIs of a main process run smoothly and with the required frequency. Each asset type requires a package of supporting processes to ensure that it is in a condition ready to be employed in BPIs of the main process. We present this package as consisting of three types of processes connected to the life-cycle of each individual asset (see also an example in Fig. 4):
1. Acquire - processes that result in the enterprise acquiring a new asset of a given type. The essence of this process depends on the type of asset, the type of the main process and the type of the enterprise. For a product-oriented enterprise, acquiring new customers (paying stakeholders) is done through marketing and sales processes. Acquiring a skilled workforce is a task completed inside a recruiting process. Acquiring a new BPT for a product-oriented enterprise is a task of new product and new technological process development. Creating a new BPT also results in introducing a new process in the enterprise. 2. Maintain - processes that help to keep existing assets in the right shape to be employable in the BPIs of a given type. For customers, it could be Customer Relationship Management (CRM) processes. For the workforce, it could be training.
For BPTs, it could be product and process improvement. For technical infrastructure, it could be servicing. 3. Retire - processes that phase out assets that can no longer be used in the main process. For customers, it could be discontinuing serving a customer that is no longer profitable. For BPTs, it could be phasing out a product that no longer satisfies the customer needs. For the workforce, it could be actual retirement.
Fig. 4. An example of instantiation of the asset archetype
The asset-processes archetype can be graphically presented in the form of Fig. 4. In it, the asset type is represented by a rectangle and a process type by an oval. An arrow from the asset to a process shows that this process is aimed at managing assets of the given type. The label on the arrow shows the type of the process - acquire, maintain, or retire. Instantiation of the archetype is done by inserting labels inside the rectangle and ovals. Fig. 4 is, in fact, an example of such an instantiation for the customer assets in a manufacturing company (on the difference between archetypes and instantiations, see Fig. 2 and 3 and the text related to them in Section 3.1).
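Continuing the sketch given after Section 3.1, the asset-processes archetype can be encoded by attaching one acquire/maintain/retire process slot to an asset. Again, the class and field names are ours, and the instantiation only loosely follows Fig. 4.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

LIFE_CYCLE = ("acquire", "maintain", "retire")

@dataclass
class ManagedAsset:
    """An asset (rectangle) with one supporting process (oval) per life-cycle label."""
    label: str
    asset_type: str
    supporting: Dict[str, Optional[str]] = field(
        default_factory=lambda: {phase: None for phase in LIFE_CYCLE})

# Instantiation loosely following Fig. 4: the customer asset of a manufacturing company.
customers = ManagedAsset(label="Customers", asset_type="Paying stakeholders")
customers.supporting["acquire"]  = "Marketing & sales"
customers.supporting["maintain"] = "Customer relationship management (CRM)"
customers.supporting["retire"]   = "Discontinue serving unprofitable customers"

for phase in LIFE_CYCLE:
    print(f"{phase:8s} -> {customers.supporting[phase]}")
```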
Archetypes for Supporting Processes
Types of assets that are needed for a supporting process can be divided into two categories: general asset types and specific ones. General types are the same as for the main process, except that a supporting process does not need paying stakeholders.
The other five types of assets needed for a main process - BPT, workforce, partners, technical and informational infrastructure, and organizational infrastructure - might be needed for a supporting process as well. Note also that some supporting processes, e.g., servicing a piece of infrastructure, can be totally outsourced to a partner. In this case, only the partner's rectangle will be filled when instantiating the archetype for such a process.
In addition to the five types of assets listed above, other types of assets can be added for a specific category of supporting processes. We have identified two additional assets for supporting processes that acquire an asset belonging to the category of stakeholders, e.g., paying stakeholders, workforce, and partners: • Value proposition, for example, a description of the products and/or services delivered to the customer, or the salary and other benefits that an employee gets. • Reputation, for example, of being a reliable vendor, or of being a great place to work.
Adding the above two asset types to the five already discussed gives us a new process-assets archetype, i.e., the archetype for acquiring stakeholders. An example of instantiation of such an archetype is presented in Fig. 5. There might be other specific archetypes for supporting processes, but so far we have not identified any more of them.
Fig. 5. An example of instantiation of the process-assets archetype for acquiring stakeholders
Harnessing the Growth of the Processes-Assets Tree
Using the archetypes introduced above, we can unwind the process structure of the enterprise. Potentially, the resulting tree will grow down and in breadth indefinitely.
As an enterprise has a limited size, there should be some mechanisms that contain this growth and, eventually, stop it. We see several such mechanisms (a small sketch of the unwinding procedure, with sharing cutting off the recursion, is given after the list):
• Some processes, e.g., maintenance of infrastructure, can be outsourced to a partner.
In this case, only the partner part of the corresponding archetype will be filled. • Some processes can share assets, e.g., workforce and BPT. For example, recruiting of staff can be done according to the same template and by the same employees working in the HR department, independently of whether the recruitment is done for the employees of main or supporting processes.
• Some processes can be used for managing more than one asset. For example, the assets Product offers from Fig. 5 (Value proposition asset) and Product&Technological process design from Fig. 3 (BPT asset) are to be acquired by the same process of New product development. The interconnection between these two assets is so tight that they cannot be created separately, e.g.:
─ The offers should be attractive to the customer, so the product should satisfy some customer needs. ─ The price should be reasonable, so the technological process should be designed to ensure this kind of price. • A process on an upper level of the tree can be employed as a supporting process on a lower level, which terminates the growth from the corresponding node. For example, one of the "supporting" processes for acquiring and maintaining the asset Brand reputation from Fig. 5 is the main production process itself, which should provide products of good quality.
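The sketch below gives one possible operational reading of this section: unwinding is a recursive procedure in which outsourcing and sharing cut off infinite growth. It is only our interpretation; the toy knowledge base, the registry that returns an already-built node instead of expanding it again, and the depth limit are all illustrative devices of ours, not part of the model itself.

```python
from typing import Dict, List

# Toy knowledge base: which assets a process needs, and which process acquires each asset.
PROCESS_NEEDS: Dict[str, List[str]] = {
    "Product manufacturing": ["Customers", "Skilled workers", "Production lines"],
    "Marketing & sales": ["Product offers", "Sales staff"],
    "Recruiting": ["Employer reputation"],
    "Servicing (outsourced)": [],            # outsourced to a partner: not expanded further
    "New product development": [],
}
ACQUIRED_BY: Dict[str, str] = {
    "Customers": "Marketing & sales",
    "Skilled workers": "Recruiting",
    "Sales staff": "Recruiting",             # shared supporting process
    "Production lines": "Servicing (outsourced)",
    "Product offers": "New product development",
    "Employer reputation": "Product manufacturing",   # upper-level process reused lower down
}

def unwind(process: str, registry: Dict[str, dict], depth: int = 0, max_depth: int = 5) -> dict:
    if process in registry:                  # sharing: reuse the node already built
        return registry[process]
    node = {"process": process, "assets": {}}
    registry[process] = node
    if depth >= max_depth:
        return node
    for asset in PROCESS_NEEDS.get(process, []):
        supporting = ACQUIRED_BY.get(asset)
        node["assets"][asset] = (
            unwind(supporting, registry, depth + 1, max_depth) if supporting else None)
    return node

tree = unwind("Product manufacturing", registry={})
print(sorted(tree["assets"]))   # the tree stops growing once every process is in the registry
```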
Testing the Model
The archetypes introduced in Section 3 were obtained by abstracting known facts about the structure and functioning of a manufacturing company. Therefore, testing the ideas should be done in a different domain. We chose to apply the model to an academic "enterprise"; more exactly, we start unwinding the Research project process. The result of applying the process-assets archetype from Fig. 2 to this process is depicted in Fig. 6.
Fig. 6. Instantiation of the process-assets archetype for the main process: Research project
The main difference between Fig. 3, which instantiates product manufacturing, and Fig. 6 is that Research project has financiers rather than customers as paying stakeholders. The result of a research process is new knowledge that is accessible to everybody, but it is financed by few, including private donors who might not directly benefit from their payments. Financiers can be of several sorts: • research agencies giving grants, created by local or central governments or international organizations; • industrial companies that are interested in the development of certain areas of science; • individuals that sponsor research in certain areas.
Let us consider that a financier is a research agency giving research grants. Then, applying the asset-processes archetype from Section 3.2 to the leftmost node (Financiers) of Fig. 6, we get an instantiation of this archetype depicted in Fig. 7.
Fig. 7. Instantiation of the assets-processes archetype for a financier Research agency
Applying the Acquiring stakeholders archetype from Section 3.3 to the leftmost node of Fig. 7 (Identifying & pursuing funding opportunities), we get its instantiation depicted in Fig. 8 (only the first four assets are presented in this figure).
Fig. 8. Instantiation of the Acquiring stakeholders archetype to Identifying and pursuing funding opportunities
We conducted an experiment of interviewing two research team leaders in our institution based on Fig. 6, 7 and 8. They managed to identify their core research areas and what kind of reputation they use when applying for grants. This took some time, as they did not have explicit answers ready. They also noted that the model helps to better understand the supporting processes around their research work. This experiment, albeit limited, shows that the model can be useful in understanding the dynamic structure of an enterprise. However, more experiments are required to validate the usefulness of our approach.
Related Research
Analysis of enterprises based on the idea of fractality has been done by several researchers and practitioners, e.g., [START_REF] Ramanathan | Fractal architecture for the adaptive complex enterprise[END_REF], [START_REF] Hoverstadt | The Fractal Oragnization: Creating Sustainable Oragnization with the Viable System Model[END_REF], [START_REF] Sandkuhl | Analysing Enterprise Models from a Fractal Organisation Perspective -Potentials and Limitations[END_REF], [START_REF] Canavesio | Enterprise modeling of a project-oriented fractal company for SMEs networking[END_REF]. Their approaches differ from ours, which comes as no surprise as there is no accepted definition of what fractals mean with respect to the enterprise world. In essence, fractals are a high-level abstract idea of a structure with a recurring (recursive) pattern repeating on all levels. Depending on the perspective chosen for modeling a real-life phenomenon, this pattern will be different for different modelers. Below, due to size limitations, we only briefly summarize the works on fractal structures in enterprise modeling, and show the difference between them and our approach.
The book of Hoverstadt [START_REF] Hoverstadt | The Fractal Oragnization: Creating Sustainable Oragnization with the Viable System Model[END_REF] uses the viable system model (VSM) to unfold the fractal structure of the enterprise via system-subsystems relationships. Subsystems are considered as having the same structure and generic organizational characteristics as the system in which they are enclosed. The resulting structure helps to analyze whether there is a balance between the subsystems. Overall, our long-term goal is similar to Hoverstadt's: create a methodology for modeling an enterprise as a multilayered complex adaptive system. However, we use a completely different approach to enterprise modeling: instead of system-subsystems relationships, we interleave processes and assets when building an enterprise model.
Another approach to the analysis of enterprise models based on the idea of fractality can be found in Sandkuhl & Kirikova [START_REF] Sandkuhl | Analysing Enterprise Models from a Fractal Organisation Perspective -Potentials and Limitations[END_REF]. The idea is to find fractal structures in an enterprise model built using a general modeling technique. [START_REF] Sandkuhl | Analysing Enterprise Models from a Fractal Organisation Perspective -Potentials and Limitations[END_REF] analyzes two such models in order to find fractals in them. The results are mixed: some fractals are found, but the suspicion remains that many others are missed, because they may not be represented in the models analyzed. The approach in [START_REF] Sandkuhl | Analysing Enterprise Models from a Fractal Organisation Perspective -Potentials and Limitations[END_REF] radically differs from ours. We have a hypothesis of a particular fractal structure to be found when analyzing an enterprise, while [START_REF] Sandkuhl | Analysing Enterprise Models from a Fractal Organisation Perspective -Potentials and Limitations[END_REF] tries to find any type of fractal structure based on the generic characteristics of organizational fractals.
Canavesio and Martinez [START_REF] Canavesio | Enterprise modeling of a project-oriented fractal company for SMEs networking[END_REF] present a conceptual model for analyzing a fractal company aiming at supporting a high degree of flexibility to react and adapt quickly to environmental changes. The main concepts are project, resource, goal, actor, plan, and relationships thereof. The approach from [START_REF] Canavesio | Enterprise modeling of a project-oriented fractal company for SMEs networking[END_REF] differs from ours in the kind of fractals used for enterprise modeling. Fractals from [START_REF] Canavesio | Enterprise modeling of a project-oriented fractal company for SMEs networking[END_REF] concern the detailed structure of business processes, while we are looking only at the relationships between processes and assets.
The focus on process organization when applying fractal principles can be found in [START_REF] Ramanathan | Fractal architecture for the adaptive complex enterprise[END_REF]. [START_REF] Ramanathan | Fractal architecture for the adaptive complex enterprise[END_REF] uses a pattern of sense-and-respond processes on different organizational levels, each consisting of the same pattern: requirement, execution and delivery. The difference between our approach and that of [START_REF] Ramanathan | Fractal architecture for the adaptive complex enterprise[END_REF] is the same as mentioned above: [START_REF] Ramanathan | Fractal architecture for the adaptive complex enterprise[END_REF] looks at the details of individual processes, while we are trying to catch general relationships between different processes.
Discussion and Future Research
This paper suggests a new type of enterprise modeling that connects enterprise processes in a tree-like structure where the main enterprise processes serve as a root of the tree. The tree expands via finding all assets needed for smooth functioning of the main processes, and after that, via finding all supporting processes that are needed to handle these assets. The tree has a recursive/fractal form, where instantiations of process-assets archetypes are interleaved with those of asset-processes archetypes.
We see several practical areas where a model connecting all processes and assets in an enterprise could be applied, e.g.:
• As a help in strategic planning, for finding all branches of the processes-assets tree that require adjustments. For example, when sales plans a new campaign that will bring new customers, all assets required by the corresponding main process should be adjusted to satisfy the larger number of customers. This includes workforce, suppliers, infrastructure, etc. The calculation itself can be done with one of the known Systems Thinking methods, e.g., System Dynamics.
• To prevent "organizational cancer" as described in [START_REF] Hoverstadt | The Fractal Oragnization: Creating Sustainable Oragnization with the Viable System Model[END_REF], p. 57, when a supporting process starts behaving as if it were a main one, disturbing the balance of the organizational structure. This is typical for IT departments that may start finding external "customers" for software developed for internal needs.
• As a help in radically changing direction. When all supporting processes are mapped in the tree, it will be easier for the enterprise to change its business activity by picking a supporting process and converting it into a main one, while making appropriate adjustments to the tree. For example, a product manufacturing company could decide to become an engineering company. Such a decision can be made when manufacturing becomes unprofitable, while the company still has a very strong engineering department. An example of such a transformation is described in [START_REF] Hoverstadt | The Fractal Oragnization: Creating Sustainable Oragnization with the Viable System Model[END_REF], p. 74. Another example comes from the experience of the first author, who worked for a US company that made such a transformation twice. The first transformation was from being a software consulting business to becoming a software product vendor, when the consulting business could not accommodate the existing workforce. The second time it was done in the reverse order, when the market for their line of products suddenly collapsed.
As far as future research is concerned, we plan to continue our work in several directions: • Continuing testing. The model presented in this paper has been tested only in a limited scope, and it requires further testing and elaboration. The next major step in our research is to build a full tree with Research project as a root. This will help us to further elaborate the model, and improve our catalog of process archetypes. Furthermore, we need to test this modeling technique in another domain, for example, to build a model for a software development company.
• Continuing working on the graphical representation of the model. Two aspects need to be covered in this respect: ─ representing multiplicity, e.g., multiple and different assets of the same kind that require different supporting processes; ─ representing the sharing of assets and processes in the model, as discussed in Section 3.4. • Using the processes-assets model as a foundation for modeling and designing an enterprise as a CAS (complex adaptive system). Different processes discovered with the procedure suggested in this paper are connected to different parts of the external and/or internal environment of the enterprise. If participants of these processes are entrusted to watch and report on changes in their parts of the environment, this could create a set of effective sensors (see Section 2) covering all aspects of the enterprise environment. Connecting these sensors to firing adaptation processes will close the "adaptation loop". As an example of the above, assume that the recruiting process shows that it becomes difficult to recruit a skilled workforce for a main process. This fact can fire an investigative process to find out the reason for these difficulties. It could be that nobody is willing to learn such skills any more, or that competitors are expanding and offer better conditions (e.g., salary), or that the enterprise's reputation as a good place to work has been shattered. Based on the result of the investigation, appropriate changes can be made in the HR processes themselves or in completely different parts of the enterprise.
• Another application area of our processes-assets model is analyzing and representing process models in a repository. As pointed out in [START_REF] Shahzad | Requirements for a Business Process Model Repository: A Stakeholders' Perspective[END_REF], an attractive alternative to designing business processes from scratch is redesigning existing models. Such an approach requires the use of process model repositories that provide a location for storing and managing process knowledge for future reuse. One of the key challenges of designing such a repository is to develop a method to analyze and represent a collection of related processes [START_REF] Elias | A Business Process Metadata Model for a Process Model Repository[END_REF]. The process-assets and asset-processes archetypes provide a mechanism to analyze and represent the relationships between business processes in a repository. The processes-assets relationship structure, when represented in the repository, will serve as a navigation structure that determines the possible paths for accessing process models by imposing an organized layout on the repository's content.
Fig. 1. An enterprise model consisting of three types of components: assets, sensors and BPIs
In some works, all paying stakeholders are considered as customers. We prefer to differentiate these two terms so as not to be engaged in discussions not relevant to the issues of this paper.
In this paper, we use the term archetype in its general meaning of "the original pattern or model of which all things of the same type are representations or copies", and not as a pattern of behavior, as is widely accepted in the Systems Thinking literature.
Acknowledgements
We are grateful to our colleagues, Paul Johannesson, Hercules Dalianis and Jelena Zdravkovic who participated in interviews related to the analysis of the research activity reported in Section 4. We are also thankful to David Alman, Gene Bellinger, Patrick Hoverstadt, Harold Lawson and anonymous reviewers whose comments on the earlier draft of this paper helped us to improve the text.
https://inria.hal.science/hal-01484390/file/978-3-642-34549-4_7_Chapter.pdf
Jaap Gordijn
email: j.gordijn@vu.nl
Ivan Razo-Zapata
email: i.s.razozapata@vu.nl
Pieter De Leenheer
email: pieter.de.leenheer@vu.nl
Roel Wieringa
email: r.j.wieringa@utwente.nl
Challenges in Service Value Network Composition
Keywords: service value network, bundling, composition, e 3 service
Commercial services become increasingly important. Complex bundles of these services can be offered by multiple suppliers in a service value network. The e 3 service ontology proposes a framework for semi-automatically composing such a network. This paper addresses research challenges in service value network composition. As a demonstration of the state of the art, the e 3 service ontology is used. The challenges are explained using an example of an Internet service provider
Introduction
Services comprise a significant part of the economy. For instance, in the USA approximately 81.1 % of the employees worked in the service industry in 2011 1 .
Increasingly, such services are ordered and/or provisioned online. For instance, a cinema ticket can be ordered via the Internet, but the customer still has to travel to the cinema, where the service is delivered. Viewing a film, by contrast, can be ordered and provisioned online. Other examples are an email inbox, web-page hosting, or voice over IP (VoIP). The focus of this paper is on services that can be offered and provisioned online; see also Sect. 2 about the virtual ISP example.
Services are ordered and provisioned in a service value network (SVN) (see e.g. [START_REF] Hamilton | Service value networks: Value, performance and strategy for the services industry[END_REF][START_REF] Christopher | Services Marketing: People, Technology, Strategy[END_REF][START_REF] Allee | A value network approach for modeling and measuring intangibles[END_REF][START_REF] Ivan | Fuzzy verification of service value networks[END_REF] for SVN and related concepts). At minimum, a SVN consists of two actors, namely a supplier and a customer. However, in many cases, the SVN will consist of multiple suppliers, each offering a service, who together satisfy a complex customer need. The package of services satisfying the complex customer need is called the service bundle. By using multi-supplier service bundles, each supplier can concentrate on its own core competence, and can participate in satisfying a complex customer need, which it never could satisfy on its own. Moreover, a SVN may contain the suppliers of the suppliers and so on, until we reach the suppliers for which we safely can assume that their services can be provisioned in a known way.
The observation that a SVN may consist of many suppliers leads to the conclusion that the formation, or composition, of the SVN is a research question in its own right. Specifically, if the customer need is ordered and provisioned online, the composition process should be software-supported and at least semi-automatic. To this end, we introduce the notion of computational services; these are commercial services which are represented in a machine-readable way, so that software can (semi-)automatically reason about the required service bundle and the corresponding suppliers. We employ ontologies (see Sect. 3) for representation and reasoning purposes.
The e 3 service ontology [START_REF] Kinderen | Reasoning about customer needs in multi-supplier ict service bundles using decision models[END_REF][START_REF] Ivan | Fuzzy verification of service value networks[END_REF] and its predecessor serviguration [START_REF] Baida | Software-aided Service Bundling -Intelligent Methods & Tools for Graphical Service Modeling[END_REF] are approaches to semi-automatically compose service value networks. The e 3 service approach takes two perspectives on service composition, namely a customer and a supplier perspective, and tries to generate a multi-supplier service bundle and the corresponding SVN to satisfy a complex customer need. We use the e 3 service ontology as the baseline for service value network composition.
The e 3 service ontology is not to be confused with Web service technologies such as SOAP, WSDL and UDDI [START_REF] Curbera | Unraveling the web services web: An introduction to soap, wsdl, and uddi[END_REF]. Whereas the focus of e 3 service is on the composition of commercial services, SOAP, WSDL and UDDI facilitate interoperability between software services executing on various software and hardware platforms. Nevertheless, commercial services can be (partly) implemented by means of web service technology. After sketching the state of the art of e 3 service , the contribution of this paper is to explain research challenges with respect to e 3 service , including potential solution directions. Although the research challenges are described in terms of the e 3 service work, we believe the challenges themselves are present in a broader context.
To facilitate the discussion, we create a hypothetical example about a virtual Internet service provider (ISP) (Sect. 2). Thereafter, we discuss the state of the art with respect to e 3 service , using the virtual ISP example (Sect. 3). Then we briefly state our vision about the composition of SVNs (Sect. 4). Subsequently, we present the research directions (Sect. 5). Finally, we present our conclusions (Sect. 6).
Example: The virtual Internet service provider
To illustrate the capabilities of, and research issues with respect to, e 3 service , we have constructed a hypothetical educational example about a virtual Internet service provider. This example is inspired by the example in [START_REF] Chmielowiec | Technical challenges in market-driven automated service provisioning[END_REF][START_REF] Kinderen | Reasoning about customer needs in multi-supplier ict service bundles using decision models[END_REF].
The virtual Internet service provider example assumes that an end user (the customer) wants to compose an Internet service provider out of elementary services offered by potentially different suppliers. For example, an offered service bundle may include only basic Internet access (then the bundle consists of only one service). In contrast, a service bundle may be complex, such as basic Internet access, an email inbox, an email sending service (e.g. an SMTP service), web page hosting, voice over IP (telephony), a helpdesk, remote disk storage and backup, and news. All these services can potentially be offered by different suppliers, so that a multi-supplier service bundle emerges. Moreover, some services may be self-services. For example, the helpdesk service may consist of 1st, 2nd and 3rd line support, and the customer performs the 1st line helpdesk by himself.
3 e 3 service : State of the art
This section summarizes the current state of the art of e 3 service . For a more detailed discussion, the reader is referred to [START_REF] Ivan | Fuzzy verification of service value networks[END_REF][START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF][START_REF] Ivan | Handbook of Service Description: USDL and its Methods, chapter Service Network Approaches[END_REF] and [START_REF] Kinderen | Needs-driven service bundling in a multi-supplier setting ? The computational e3service approach[END_REF]. Although the identified research challenges exist outside the context of e 3 service , we take the state of the art of e 3 service as our point of departure.
Impedance mismatch between customer and supplier
A key problem in the composition of service value networks is the mismatch between the customer need and the service (bundle) offered by the supplier(s). The service bundle may contain several features (later called consequences) which are unwanted by the customer, or the bundle may miss features required by the customer.
Example. The user may want to communicate via text (e.g. email). However, the provider is offering the bundle consisting of email, voice over IP (VoIP), and Internet access. The mismatch is in the VoIP service, which is not requested by the customer; the latter service (Internet access) is a required service needed to enable email and VoIP.
To address this mismatch, e 3 service proposes two ontologies: (1) the customer ontology, and (2) the supplier ontology, including automated reasoning capacity.
Customer ontology
The customer ontology borrows concepts and terminology from marketing (see e.g. [START_REF] Kotler | Marketing Management[END_REF] and [START_REF] Kinderen | Needs-driven service bundling in a multi-supplier setting ? The computational e3service approach[END_REF]). Key notions in the customer ontology are need [START_REF] Kotler | Marketing Management[END_REF][START_REF] Arnd | How broad should the marketing concept be[END_REF][START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF][START_REF] Kinderen | Needs-driven service bundling in a multi-supplier setting ? The computational e3service approach[END_REF] and consequence [START_REF] Gutman | Laddering theory-analysis and interpretation[END_REF][START_REF] Kinderen | Needs-driven service bundling in a multi-supplier setting ? The computational e3service approach[END_REF][START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF]. According to [START_REF] Gutman | Laddering theory-analysis and interpretation[END_REF][START_REF] Kinderen | Needs-driven service bundling in a multi-supplier setting ? The computational e3service approach[END_REF][START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF], a consequence is the result of consuming valuable service outcomes. A need may be specified by various consequences [START_REF] Kinderen | Needs-driven service bundling in a multi-supplier setting ? The computational e3service approach[END_REF][START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF]. In the current work on e 3 service (of Razo-Zapata et al., ibid) we focus mainly on functional consequences. In the previous example, we have already exemplified the notions of need and consequence.
Supplier ontology
The supplier ontology is fully integrated with the e 3 value ontology [START_REF] Gordijn | Value based requirements engineering: Exploring innovative e-commerce idea[END_REF] and therefore borrows many concepts from the e 3 value ontology [START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF]. Key concepts in the e 3 value ontology are actors who perform value activities [START_REF] Gordijn | Value based requirements engineering: Exploring innovative e-commerce idea[END_REF]. Actors can exchange things of economic value (value objects) with each other via value transfers [START_REF] Gordijn | Value based requirements engineering: Exploring innovative e-commerce idea[END_REF].
Example. An actor can be an Internet service provider (ISP) who performs the activities of access provisioning, email inbox provisioning and email SMTP relaying, web / HTTP hosting, and more. To other actors (customers) a range of services (in terms of value objects) is offered, amongst others email inbox, SMTP relay and hosting of web pages.
To be able to connect the supplier ontology with the customer ontology, value objects have consequences too [START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF]. These consequences are, from an ontological perspective, similar to the consequences identified by the customer ontology. This allows for matching both kinds of consequences. The fact that a value object can have multiple consequences (and vice versa) models the situation that a customer obtains a value object as a whole (thus with all the consequences it consists of), whereas the customer might be interested in only a subset of the consequences. It is not possible to buy consequences separately, as they are packaged into a value object.
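As an illustration of how the two ontologies meet in the consequence concept, the fragment below encodes needs, consequences and value objects as small Python classes. The encoding is ours and greatly simplifies the e 3 service and e 3 value ontologies; it also makes the impedance mismatch of Sect. 3.1 visible as a simple set comparison.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass(frozen=True)
class Consequence:
    name: str                      # e.g. "communicate via text"

@dataclass
class Need:                        # customer ontology: a need refined into consequences
    description: str
    wanted: Set[Consequence] = field(default_factory=set)

@dataclass
class ValueObject:                 # supplier ontology: sold as a whole, with all its consequences
    name: str
    consequences: Set[Consequence] = field(default_factory=set)

text   = Consequence("communicate via text")
voice  = Consequence("communicate via voice")
access = Consequence("be connected to the Internet")

need = Need("stay in touch online", wanted={text, access})
bundle: List[ValueObject] = [
    ValueObject("email inbox + SMTP", {text}),
    ValueObject("VoIP subscription", {voice}),
    ValueObject("Internet access", {access}),
]

offered = set().union(*(vo.consequences for vo in bundle))
print("missing:",      need.wanted - offered)    # consequences the bundle fails to provide
print("non-required:", offered - need.wanted)    # e.g. VoIP, packaged along but not wanted
```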
Reasoning support
In [START_REF] Ivan | Fuzzy verification of service value networks[END_REF] different reasoning processes are employed than in [START_REF] Kinderen | Needs-driven service bundling in a multi-supplier setting ? The computational e3service approach[END_REF]. We restrict ourselves to [START_REF] Ivan | Fuzzy verification of service value networks[END_REF]. In [START_REF] Ivan | Fuzzy verification of service value networks[END_REF], reasoning is explained as a Propose-Critique-Modify (PCM) [START_REF] Balakrishnan Chandrasekaran | Design problem solving: A task analysis[END_REF] problem solving method, consisting of the following reasoning steps:
- Propose:
  - Laddering: a technique to refine needs in terms of functional consequences [START_REF] Gutman | Laddering theory-analysis and interpretation[END_REF][START_REF] Ivan | Fuzzy verification of service value networks[END_REF]. E.g. a complex need (N1) such as Assuring Business Continuity can be expressed in terms of: Data available in case of emergency (FC1), Application available 24/7 (FC2) and Regulatory compliance (FC3).
  - Offering: determination of what functional consequences can be offered by suppliers [23, 22, 21]. E.g. a backup service (S1) can offer FCs such as Data available in case of emergency (FCA), Redundancy (FCB) and Regulatory compliance (FCC), among others.
  - Matching: match customer-desired consequences with supplier-offered consequences [23, 22, 21]. E.g. the customer-desired FC1 can be matched with the supplier-offered FCA, and FC3 with FCC.
  - Bundling: finding multi-supplier service bundles that satisfy the customer need [START_REF] Ivan | Fuzzy verification of service value networks[END_REF][START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF]. Bundles may partly satisfy the need, may overlap, or may precisely satisfy the need. E.g. since S1 cannot provide all the customer-desired FCs, an extra service such as remote desktop (S2) offering FC2 can be combined with S1 to generate a solution bundle.
  - Linking: finding additional services needed by the suppliers that provide the service bundle [START_REF] Ivan | Fuzzy verification of service value networks[END_REF][START_REF] Razo-Zapata | Service value networks for competency-driven educational services: A case study[END_REF][START_REF] Gordijn | Generating service valuewebs by hierarchical configuration: An ipr case[END_REF]. E.g. a bundle composed of S1 and S2 might need to resolve dependencies for S2, such as a versioning service that provides the updated O.S. to S2.
- Verify: analysis of the provided, missing and non-required functional consequences, and rating of the importance of these consequences using a fuzzy inference system [START_REF] Ivan | Fuzzy verification of service value networks[END_REF]. E.g. an SVN providing FC1, FC2 and FC3 will fit the customer-desired consequences better (and will have a higher score) than any SVN providing only FC1 and FC2, or only FC2 and FC3.
- Critique: in case the configuration task is unsuccessful, identification of the source of failure [START_REF] Balakrishnan Chandrasekaran | Design problem solving: A task analysis[END_REF][START_REF] Ivan | Fuzzy verification of service value networks[END_REF]. E.g. after composing an SVN offering FC1, FC2 and FC3, the customer might realize that FC3 is not relevant for him. In this case the customer can indicate that he would like to get alternative SVNs offering only FC1 and FC2.
- Modify: modify the service network of the service bundle based on the results of the critique step, cf. [START_REF] Balakrishnan Chandrasekaran | Design problem solving: A task analysis[END_REF][START_REF] Ivan | Fuzzy verification of service value networks[END_REF]. E.g. based on the output, new SVNs can be composed to better fit the customer-desired consequences.
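A rough, non-fuzzy approximation of the Propose and Verify steps is sketched below: candidate bundles are enumerated from the supplier-offered services and scored by how well their consequences cover the customer-desired ones. The scoring function is a simple stand-in of our own; the actual approach uses a fuzzy inference system, and the Linking, Critique and Modify steps are not shown.

```python
from itertools import combinations
from typing import Dict, Set, Tuple

# Supplier-offered services and the functional consequences (FCs) they provide.
SERVICES: Dict[str, Set[str]] = {
    "S1: backup service": {"FC1 data available in emergency", "FC3 regulatory compliance"},
    "S2: remote desktop": {"FC2 application available 24/7"},
    "S3: archiving":      {"FC3 regulatory compliance", "FC4 long-term retention"},
}
DESIRED: Set[str] = {"FC1 data available in emergency",
                     "FC2 application available 24/7",
                     "FC3 regulatory compliance"}

def score(bundle: Tuple[str, ...]) -> float:
    offered = set().union(*(SERVICES[s] for s in bundle))
    provided     = len(DESIRED & offered) / len(DESIRED)     # coverage of desired FCs
    non_required = len(offered - DESIRED) / max(len(offered), 1)
    return provided - 0.25 * non_required                    # crude stand-in for the fuzzy rating

candidates = [b for r in range(1, len(SERVICES) + 1) for b in combinations(SERVICES, r)]
best = max(candidates, key=score)
print(best, round(score(best), 2))   # -> ('S1: backup service', 'S2: remote desktop') 1.0
```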
Vision on composition of service value networks
Our long term vision can be characterized as follows:
- A multi-perspective view on the composition and operation of service value networks. For instance, a business value perspective, a business process perspective, and an IT perspective may be relevant. - Integration of the aforementioned perspectives (e.g. cf. [START_REF] Pijpers | Using conceptual models to explore business-ict alignment in networked value constellations case studies from the dutch aviation industry, spanish electricity industry and dutch telecom industry[END_REF]). These perspectives together provide a blueprint of the SVN at hand. - Various ways of composing SVNs, for instance hierarchical composition [START_REF] Kinderen | Reasoning about customer needs in multi-supplier ict service bundles using decision models[END_REF][START_REF] Ivan | Fuzzy verification of service value networks[END_REF] with one party that executes the composition, in contrast to self-organizing composition, in which participants themselves configure a SVN. - Operationalization of the SVN in terms of processes and supporting IT. In some cases, IT can be dominant, as is for instance the case for the virtual ISP example. - Reconfiguration of the SVN. In some cases it is necessary to reconfigure the SVN based on quality monitoring, disappearing actors, etc.
Although the issues described above might seem only applicable to our vision, areas such as Service-oriented Enterprise Architecture also deal with them by aiming at transparently merging business services (commercial services), software services (web services), platform services and infrastructure services (IT architecture) (see e.g. [START_REF] Wegmann | Business-IT Alignment with SEAM for Enterprise Architecture[END_REF]).
Research challenges in service value networks
Terminologies for customer and supplier ontologies may differ
Theme. Ontology.
Description of the challenge. The current e 3 service has two important assumptions. First, it is assumed that the customer and supplier ontology are linked to each other via a single consequence construct. However, multiple (e.g. more detailed) customer consequences may map onto one supplier consequence, or vice versa. Second, it is assumed that the customer and the suppliers all use the same terminology for stating the consequences. This challenge supposes, first, that the links between the customer and supplier can involve more complex constructs than is the case right now (currently just one concept: the consequence). Second, the challenge includes the idea that - given certain constructs to express what is needed and offered - the customer and suppliers can do so using different terminology.
Example. With respect to the first assumption, the virtual ISP example may suppose a global customer consequence 'being online' that maps onto a supplier consequence 'email' + 'internet access'. Considering the second assumption, in the virtual ISP example, the desired customer consequence can be 'communicate via text', whereas the stated supplier consequence can be 'electronic mail'.
Foreseen solution direction. Concerning the consequence as matching construct, the goal is to allow for the composition of consequences into more complex consequence constructs. E.g. various kinds of relationships between consequences can be identified. For instance, in [START_REF] Kinderen | Reasoning about customer needs in multi-supplier ict service bundles using decision models[END_REF], a consequence can depend on other consequences, and can be in a core/enhancing bundling and an optional bundling relationship with other consequences. This can be extended with composition relationships.
With respect to the use of different terminologies, in [START_REF] Pedrinaci | Toward the next wave of services: Linked services for the web of data[END_REF][START_REF] Bizer | Linked Data -The Story So Far[END_REF] a solution is proposed to match various functionalities, expressed in different terminologies, in the context of web services. Perhaps this kind of solution is also of use for commercial services, which are expressed by different terminologies.
The notion of consequence is too high-level a construct
Theme. Ontology.
Description of the challenge. Currently, the e 3 service ontology matches customer needs with supplier service offerings via the notion of consequence. In [START_REF] Kinderen | Reasoning about customer needs in multi-supplier ict service bundles using decision models[END_REF], a distinction is made between functional consequences and quality consequences. However, a more detailed structuring of the notion of consequence can be useful. It is for instance possible to distinguish various quality consequences, such as timely provisioning of the service, stability of the service supplier, etc.
Example. In the virtual ISP example, it is possible that one supplier offers Internet access and another supplier offers VoIP (via the offered Internet access). In such a case, it is important that the Internet access has sufficient quality for the VoIP service. In this context, quality can be stated by the bandwidth and latency of the network connection, which should be sufficient to carry the VoIP connection.
Foreseen solution direction. An ontology of both functional consequences and quality consequences should be made. Functional consequences are highly domain-dependent, but for quality consequences, theories on software quality can be of use, as well as SERVQUAL [START_REF] Parasuraman | A conceptual model of service quality and its implicationt[END_REF], a theory on quality properties of commercial services. Finally, the Unified Service Description Language (USDL) 2 [5] may be a source.
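To illustrate how quality consequences could be made checkable, the fragment below compares what an Internet access service offers against what a VoIP service requires. The attribute names and the numeric thresholds are invented for illustration only; a real ontology would define such quality dimensions explicitly (e.g. along SERVQUAL or USDL lines).

```python
from dataclasses import dataclass

@dataclass
class QualityConsequence:
    bandwidth_kbps: float      # offered (or required) sustained bandwidth
    latency_ms: float          # offered (or maximum tolerated) one-way latency

def satisfies(offered: QualityConsequence, required: QualityConsequence) -> bool:
    # The offered connection must provide at least the required bandwidth
    # and stay within the maximum tolerated latency.
    return (offered.bandwidth_kbps >= required.bandwidth_kbps
            and offered.latency_ms <= required.latency_ms)

internet_access = QualityConsequence(bandwidth_kbps=2048, latency_ms=40)
voip_needs      = QualityConsequence(bandwidth_kbps=100,  latency_ms=150)  # illustrative values

print(satisfies(internet_access, voip_needs))   # True: the access service can carry VoIP
```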
Matching of customer needs with supplier service offerings is broker-based
Theme. Reasoning.
Description of the challenge. Different approaches can be followed to match customer needs with supplier offerings. In this paper, we distinguish hierarchical matching and self-organizing matching. In the case of hierarchical matching, there is a party (e.g. a broker) that controls and executes the matching process. Suppliers are simply told by the broker to provide their services in a bundle. In the current work ([START_REF] Ivan | Fuzzy verification of service value networks[END_REF]) matching is done via a broker. The position of the party who performs the matching is powerful from a business perspective, since such a party determines which actors provide which services. Other matching models can be distinguished, for instance self-organizing models, in which actors collaborate and negotiate about the service bundle to be provided to the customer and there is no central coordinator.
Example. In the virtual ISP example, the current e 3 service implementation would have a specific party (the broker) performing the matching process. This process includes eliciting customer needs, finding the appropriate service bundles and assigning specific services in the bundle to individual suppliers.
Foreseen solution direction. Hierarchical matching is currently partly supported by e 3 service , with an intermediate top-level party performing the matching process.
The current process can be extended by supporting multiple matching parties who are organized in a matching hierarchy. Additionally, self-organizing matching should be supported (as this is an entirely different business model), e.g. via gossiping protocols, which avoid central components such as a broker (see e.g. [START_REF] Datta | Autonomous gossiping: A self-organizing epidemic algorithm for selective information dissemination in wireless mobile ad-hoc networks[END_REF] for gossiping in computer networks).
Restricted knowledge used for need and consequence elicitation
Theme. Reasoning.
Description of the challenge. The current implementation of e 3 service supposes business-to-consumer (B2C) and business-to-business (B2B) relationships. B2C interaction plays a role while executing customer need and consequence elicitation, based on the customer need and service catalogues. B2B relationships play a role during the linking process: if a supplier offers a service to the customer, it is possible that the supplier itself requires services from other suppliers. This is referred to as linking. It is possible that customer-to-customer (C2C) interaction may also play a role during the customer need and consequence elicitation process. For instance, a service value network with the associated consequences may be built that closely resembles the service value network (and consequences) generated for another customer.
Example. Suppose that a particular customer uses a bundle of Internet access + email (inbox and SMTP) + VoIP, and is satisfied with the bundle. Via a recommender system, this customer may publish his/her experiences with the service bundle at hand. The service value web configuration components may use information about this published bundle as an example for other service bundles.
Foreseen solution direction. Customer-to-customer recommendation systems may be used as an input to create a recommender system that registers customers' scores on particular consequences. These scores can then be used in the customer need and consequence elicitation process.
Implementation of e 3 service by web services
Theme. Software tool support.
Description of the challenge. The software implementation of e 3 service is currently Java- and RDF-based. It is possible to think of the software as a set of web services and associated processes that perform the composition of the SVN. Moreover, these web services may be offered (and requested) by multiple suppliers and the customer, so that the composition becomes a distributed task.
Example. In the virtual ISP example, each enterprise that potentially wants to participate in a SVN can offer a set of web services. These web services allow the enterprise to participate in the composition process.
Foreseen solution direction. We foresee the use of web-service standards, such as SOAP and WSDL, to build a configurator that can run in as decentralized a manner as possible (meaning: at the customer and supplier sites). Moreover, a self-organizing implementation obviously should support a fully decentralized architecture.
Conclusion
In this paper, we have introduced a number of research challenges with respect to commercial service value networks in general and the e 3 service ontology in particular. By no means is the list of challenges complete. The first challenge is to allow a more complex conceptualisation of service characteristics, as well as the use of different terminology by the customer and the suppliers of services. Another research challenge is to develop a more detailed ontology for functional and quality consequences. Currently, e 3 service uses a brokerage approach for matching; a different approach to be investigated is the self-organizing approach. Furthermore, a research challenge is how to use customer-to-customer interactions in the process of eliciting customer needs and consequences.
Finally, a research challenge is how to implement the current e 3 service framework in terms of web services.
see http://www.bls.gov/fls/flscomparelf.htm table 7, visited June 21st, 2012
http://www.internet-of-services.com/index.php?id=570&L=0, visited June 21st, 2012
Acknowledgments. The research leading to these results has received funding from the NWO/Jacquard project VALUE-IT no 630.001.205.
https://inria.hal.science/hal-01484400/file/978-3-642-34549-4_10_Chapter.pdf
Michaël Petit
Christophe Feltus
email: christophe.feltus@tudor.lu
François Vernadat
email: francois.vernadat@eca.europa.eu
Enterprise Architecture Enhanced with Responsibility to Manage Access Rights -Case Study in an EU Institution
Keywords: Access rights management, Business/IT alignment, Enterprise architecture, Responsibility, Case study
An innovative approach is proposed for aligning the different layers of the enterprise architecture of a European institution. The main objective of the alignment is the definition and assignment of the access rights needed by employees according to business specifications. This alignment is realized by considering the responsibility and the accountabilities (doing, deciding and advising) of these employees regarding business tasks. Therefore, the responsibility (modeled in a responsibility metamodel) is integrated with the enterprise architecture metamodel using a structured method. The approach is illustrated and validated with a dedicated case study dealing with the definition of access rights assigned to employees involved in the user account provisioning and management processes.
Introduction
Access rights management is the process encompassing the definition, deployment and maintenance of the access rights required by employees to get access to the resources they need to perform the activities assigned to them. This process is central to the field of information security because it impacts most of the functions of the information systems, such as the configuration of firewalls, access to file servers and/or the authorization to perform software operations. Furthermore, the management of access rights is complex because it involves many employee profiles, from secretaries to top managers, and concerns all the company layers, from the business to the technical ones. On the one hand, access rights to IT components must be defined based on functional requirements (defining who can or must use which functionality) and, on the other hand, based on governance needs (defining which responsibility exists at the business level). The functional requirements advocate that, to perform an activity, the employee must hold the proper access rights. The governance needs are those defined by governance standards and norms and those aiming at improving the quality and the accuracy of these access rights [START_REF] Feltus | Enhancement of CIMOSA with Responsibility Concept to Conform to Principles of Corporate Governance of IT[END_REF].
Practically, one can observe [START_REF] Feltus | Strengthening employee's responsibility to enhance governance of IT: COBIT RACI chart case study[END_REF] that the existing access control models [START_REF] Clark | A comparison of commercial and military computer security policies. Security and Privacy[END_REF][START_REF] Covington | Securing context-aware applications using environment roles[END_REF][START_REF] Ferraiolo | Proposed nist standard for role-based access control[END_REF][START_REF] Karp | From abac to zbac: The evolution of access control models[END_REF][START_REF] Covington | A contextual attribute-based access control model. On the Move to Meaningful Internet Systems[END_REF][START_REF] Lang | A exible attribute based access control method for grid computing[END_REF] and rights engineering methods [START_REF] Crook | Modelling access policies using roles inrequirements engineering[END_REF][START_REF] He | A framework for privacy-enhanced access control analysis in requirements engineering[END_REF][START_REF] Neumann | A scenario-driven role engineering process for functional rbac roles[END_REF] do not make it possible to correctly fulfill these needs, mostly because they are handled at the technical layer by isolated processes, which are defined and deployed by the IT department or by an isolated company unit that, generally, does not consider their management according to the governance needs. To address this problem, the paper proposes an approach based on the employees' responsibilities, which are identified and modeled by considering these governance needs. On the one hand, the modeling of the responsibility concept makes it possible to consider several dimensions of the links that associate an employee with the activities he/she has to perform. On the other hand, the integration of responsibility in a business/IT alignment method for the engineering of access rights makes it possible to engineer and deploy the rights strictly necessary for the employees, thereby avoiding too permissive (and possibly harmful) access rights.
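As a first intuition of what access rights derived from responsibilities could look like, the toy fragment below maps an employee's accountabilities (doing, deciding, advising) on business tasks to permissions on the resources those tasks use. The mapping table, the permission names and the task-to-resource link are purely illustrative assumptions of ours and are not the metamodel defined later in the paper.

```python
from typing import Dict, List, Set, Tuple

# Accountability of an employee on a business task: 'doing', 'deciding' or 'advising'.
RESPONSIBILITIES: List[Tuple[str, str, str]] = [
    ("Alice", "Create user account", "doing"),
    ("Bob",   "Create user account", "deciding"),
    ("Carol", "Create user account", "advising"),
]
TASK_RESOURCES: Dict[str, str] = {"Create user account": "identity management application"}

# Illustrative policy: which permissions each accountability justifies on the task's resource.
PERMISSIONS_BY_ACCOUNTABILITY: Dict[str, Set[str]] = {
    "doing":    {"execute", "read"},
    "deciding": {"approve", "read"},
    "advising": {"read"},
}

def derive_access_rights() -> Dict[str, Set[str]]:
    rights: Dict[str, Set[str]] = {}
    for employee, task, accountability in RESPONSIBILITIES:
        resource = TASK_RESOURCES[task]
        perms = {f"{p}:{resource}" for p in PERMISSIONS_BY_ACCOUNTABILITY[accountability]}
        rights.setdefault(employee, set()).update(perms)
    return rights

for who, perms in derive_access_rights().items():
    print(who, sorted(perms))   # only the rights strictly needed by each responsibility
```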
Enterprise architecture frameworks (EAFs) can be used to model the interrelations between the different abstraction layers of a company (e.g. the business, the application and the technical layers) according to different aspects such as the behavior, the information or the static structure [START_REF] Lankhorst | and the ArchiMate team[END_REF]. These models provide views that are understandable by all stakeholders and support decision making by highlighting potential impacts on the whole enterprise. For instance, the enterprise architecture models can be used to understand the impact of a new business service integrated in the business layer on the technical layer and, consequently, enable analysis of the required server capacity. Conversely, the failure of a server has an impact on one or more applications and therefore on business services. The enterprise architecture models thus support analysis of the impact of various events or decisions and, as such, the improvement of alignment. For supporting the alignment between the enterprise layers, the EAFs have undergone major improvements during the first decade of the 2000s and some significant frameworks have been developed such as ArchiMate [START_REF] Lankhorst | and the ArchiMate team[END_REF], the Zachman framework [START_REF] Zachman | The Zachman Framework For Enterprise Architecture: Primer for Enterprise Engineering and Manufacturing By[END_REF] or TOGAF [START_REF]TOGAF (The Open Group Architecture Framework)[END_REF]. Even if the advantages of EAFs no longer need to be demonstrated, the high abstraction level of the modeled concepts and of the links between these concepts sometimes makes it difficult to use the EAFs to perform, verify or justify concrete alignments. In particular, EAFs do not make it possible to engineer precisely the access rights provided to the employees at the application layer based on the specifications from the business layer.
The paper proposes a contribution to help solve the problem of aligning access rights with business responsibility originating from governance requirements. The solution extends a particular EAF promoted by the European Commission and used at the European Court of Auditors (ECA) with concepts for representing responsibility at the business level. This extension is obtained by integrating the ECA EA metamodel with the responsibility metamodel of our previously developed Responsibility Modeling Language [START_REF] Feltus | Strengthening employee's responsibility to enhance governance of IT: COBIT RACI chart case study[END_REF][START_REF] Feltus | Enhancement of Business IT Alignment by Including Responsibility Components in RBAC, 5 th Busital workshop[END_REF]. The foreseen advantage of integrating both is the enhancement of the alignment among the concepts from the business, application and technical perspectives (see Sect. 3). Ultimately, this alignment will support the definition of the access rights to be provisioned to employees, based on their responsibilities. The applicability of the improved metamodel is demonstrated through a case study performed in a real setting.
The paper is structured as follows. In the next section, the responsibility metamodel is introduced. In Section 3, the ECA EA metamodel is presented and, in Section 4, both are integrated. In Section 5, a case study related to the user provisioning and user account management processes is presented. Finally, in Section 6, some conclusions are provided.
Modeling responsibility
The elaboration of the responsibility metamodel (Fig. 1) has been performed on the basis of a literature review. As explained in previous papers [START_REF] Feltus | Strengthening employee's responsibility to enhance governance of IT: COBIT RACI chart case study[END_REF][START_REF] Feltus | Enhancement of Business IT Alignment by Including Responsibility Components in RBAC, 5 th Busital workshop[END_REF], it was first analyzed how responsibility is dealt with in information technology professional frameworks, in the field of requirements engineering and role engineering, and in the field of access right and access control models [START_REF] Feltus | Enhancement of Business IT Alignment by Including Responsibility Components in RBAC, 5 th Busital workshop[END_REF]. This literature review was then completed with an analysis of the state of the art on responsibility in the field of Human Sciences. The responsibility metamodel and its most meaningful concepts have been defined in previous works of the authors [START_REF] Feltus | ReMoLa: Responsibility Model Language to Align Access Rights with Business Process Requirements[END_REF]. The most significant concepts for access rights management are the following. The responsibility is composed of all accountabilities related to one single business task and, in order to be honored, requires rights (the resources provided by the company to the employee, among which the access rights to information) and capabilities (the qualities, skills or resources intrinsic to the employee). The accountability represents the obligation related to what has to be done concerning a business task, together with the justification of its realization towards someone else, under threat of sanction(s) [START_REF] Feltus | ReMoLa: Responsibility Model Language to Align Access Rights with Business Process Requirements[END_REF]. Three types of accountability can be defined: the accountability of doing, which concerns the act of realizing a business task; the accountability of advising, which concerns the act of providing consultancy to allow the realization of the task; and the accountability of deciding, which concerns the act of directing, making decisions and providing authorization regarding a business task. An employee is assigned to one or more responsibilities, which may additionally be gathered in business role(s).
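To make the structure of these concepts concrete, the following sketch encodes the responsibility metamodel as it is described here. It is an illustrative reading only: the class and attribute names are ours and are not part of the metamodel specification; Python is used purely as a convenient notation.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class AccountabilityKind(Enum):
    DOING = "doing"        # realizing the business task
    ADVISING = "advising"  # providing consultancy for the realization of the task
    DECIDING = "deciding"  # directing, deciding and authorizing regarding the task


@dataclass
class BusinessTask:
    name: str


@dataclass
class Accountability:
    kind: AccountabilityKind
    task: BusinessTask        # each accountability concerns a single business task
    accountable_towards: str  # to whom the justification is owed
    sanction: str = ""        # the sanction backing the obligation


@dataclass
class Right:
    description: str          # e.g. an access right to information


@dataclass
class Capability:
    description: str          # quality, skill or resource intrinsic to the employee


@dataclass
class Responsibility:
    accountabilities: List[Accountability]             # all related to one single task
    rights: List[Right] = field(default_factory=list)
    capabilities: List[Capability] = field(default_factory=list)


@dataclass
class BusinessRole:
    name: str
    responsibilities: List[Responsibility] = field(default_factory=list)


@dataclass
class Employee:
    name: str
    responsibilities: List[Responsibility] = field(default_factory=list)
    roles: List[BusinessRole] = field(default_factory=list)
```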
ECA EA metamodel
To support the management of its information systems (IS), the European Commission has developed a dedicated architecture framework named CEAF 2 , which has been deployed in several other European institutions, notably the European Court of Auditors (ECA). The particularity of the CEAF is that it is business and IT oriented and provides a framework for the business entities in relation with IT usage and the supporting infrastructure. Considering the business as being at the heart of the framework allows continual business/IT alignment. In addition to its four perspectives, namely "business", "functional", "application" and "data", the CEAF also contains a set of architecture standards that gather methods, vocabulary and rules to comply with. One such rule, at the business layer, is for instance that the IT department of the ECA (DIT), responsible for the management of information technology, needs to understand the business activities in order to automate them. The DIT has defined its own enterprise architecture metamodel, the ECA EA metamodel, based on the CEAF (see Fig. 2). This ECA EA metamodel is formalized using an entity-relationship model and is made operational using the Corporate Modeler Suite 3 . It is made of the same four vertical layers as the CEAF, each representing a perspective in the architecture, i.e.:
• The business layer, formalizing the main business processes of the organization (process map and process flows in terms of activities).
• The functional layer, defining the views needed to describe the business processes in relation with business functions and services.
• The application layer, describing the IT applications or ISs and the data exchanges between them.
• The technical layer, describing the IT infrastructure in terms of servers, computers, network devices, security devices, and so forth.
Each layer includes a set of generic objects relevant for the layer and may contain different types of views. Each view is based on one diagram template (Fig. 2). The concepts which are relevant in the context of this paper (i.e. those to be integrated with the responsibility metamodel) are described in the next section.
Integrated ECA EA-responsibility metamodel
In this section, the integration of the ECA EA metamodel with the responsibility metamodel is presented. The method proposed by [START_REF] Petit | Some methodological clues for defining a unified enterprise modelling language[END_REF] was used for integrating the metamodels. The three steps of the method are (1) preparation for integration, (2) investigation and definition of the correspondences and (3) integration of both metamodels.
Preparation for integration
Preparing the integration first involves selecting the subset of concepts from the metamodels that is relevant for the integration. Secondly, a common language for representing both metamodels is selected.
1) Subset of concepts concerned by the integration
This activity of selecting the appropriate subset of concepts considered for the integration has been added to the method of [START_REF] Petit | Some methodological clues for defining a unified enterprise modelling language[END_REF]; it is required in order to focus on the concepts from the metamodels that are meaningful for the assignment of accountabilities regarding business tasks to the employees and for the definition of the rights and capabilities required to that end. The subset of concepts concerned by the integration, in the ECA EA metamodel of Fig. 2, includes:
• The concept of role. This concept is used, according to the ECA EA metamodel documentation, to represent the notion of an entity executing a task of a process. It is associated to the concept of task that it realizes and to the concept of organization to which it belongs.
• The concept of task. This concept is used to describe how the activities are performed. A task is achieved by a single actor (not represented in the ECA EA metamodel), is performed continuously and cannot be interrupted. The task is associated to the concept of role which realizes it, to the concept of activity that it belongs to and to the concept of function that it uses.
• The concept of function. This concept allows an IS to be broken down into functional blocks and functionality items within functional domains. A functional block is defined by the business concepts that it manages on behalf of the IS, combining the functions (functions related to business objects) and the production rules of the data that it communicates. It is associated to the concept of task, to the IS (the application) that implements it and to the entity that it accesses in a CRUD mode (Create, Read, Update and Delete).
• The concept of entity. This concept represents the business data items conveyed by the IS or handled by an application. In the latter case, it refers to information data, meaning that the physical data model implemented in systems/databases is not described. The entity is accessed by the function, is associated to flows, is defined by attributes and relationships and is stored in a datastore.
• The concept of application. This concept represents a software component that contributes to a service for a dedicated business line or for a particular system. Regarding its relation with other concepts: the application is used by the application service, is made of one or more other application(s), uses a technology, sends and receives flow items and implements functions.
In the responsibility metamodel (see Sect. 2), the following concepts defined in [START_REF] Feltus | ReMoLa: Responsibility Model Language to Align Access Rights with Business Process Requirements[END_REF] are kept: responsibility, business role, business task, right, capability, accountability and employee.
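As a companion to the responsibility sketch above, the fragment below encodes this retained ECA EA subset. Again this is only an illustrative reading: the names and the CRUD encoding are ours, and relations not needed later (activity, organization, flow, datastore, application service) are deliberately left out.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set


@dataclass
class Entity:
    """Business data item conveyed by the IS or handled by an application."""
    name: str


@dataclass
class Application:
    """Software component contributing to a service; implements functions."""
    name: str


@dataclass
class Function:
    """Functionality item of an IS; accesses an entity in CRUD mode."""
    name: str
    accesses: Set[str] = field(default_factory=set)  # subset of {"C", "R", "U", "D"}
    entity: Optional[Entity] = None
    implemented_by: Optional[Application] = None


@dataclass
class Task:
    """Describes how an activity is performed; uses functions."""
    name: str
    uses: List[Function] = field(default_factory=list)


@dataclass
class Role:
    """Entity executing tasks of a process, belonging to the organization."""
    name: str
    realizes: List[Task] = field(default_factory=list)
```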
2) Selection of a common representation language
For the integration step, UML is used because it is precise enough for this purpose, standard and commonly used. As a consequence, the ECA EA metamodel, formalized using an entity-relationship model, has been translated into a UML class diagram (Fig. 2).
Investigation and definition of the correspondences
In [START_REF] Petit | Some methodological clues for defining a unified enterprise modelling language[END_REF], the author explains that this second step consists in analyzing the correspondences between classes of the two metamodels. These correspondences exist if correspondences among pairs of classes exist and if correspondences between instances of these classes taken pair-wise can be generalized. The correspondences can be identified by analyzing the semantic definitions of the classes and can be validated on instances in models created by instantiating both metamodels for different case studies. Based on the definitions of concepts and on the authors' experience with the case study presented in Sect. 5, three correspondence cases between the concepts of the ECA EA metamodel and the responsibility metamodel have been identified:
• Role from the ECA EA metamodel and business role from the responsibility metamodel: the concept of role in the ECA EA metamodel is represented in the business architecture, is an element that belongs to the organization and realizes business tasks. Hence, it reflects a business role rather than an application role and corresponds, as a result, to the business role of the responsibility metamodel (cf. application role / Role Based Access Control [START_REF] Feltus | Enhancement of Business IT Alignment by Including Responsibility Components in RBAC, 5 th Busital workshop[END_REF]).
• Entity from the ECA EA metamodel and information from the responsibility metamodel: the concept of entity in the ECA EA metamodel is equivalent to the concept of information from the responsibility metamodel. Instances of both concepts are accessed by a human or by an application component, and specific access rights are necessary to access them.
• Task from the ECA EA metamodel and business task from the responsibility metamodel: the concept of task in the ECA EA metamodel and the concept of business task from the responsibility metamodel have the same meaning. The task from the ECA EA metamodel composes the business architecture and corresponds to a task performed on the business side. According to the definition of the ECA concept, it can be noticed that a task is performed by a single actor. This is a constraint that does not exist in the responsibility metamodel and that needs to be considered at the integration step.
Integration of metamodels
The third step defined in [START_REF] Petit | Some methodological clues for defining a unified enterprise modelling language[END_REF] corresponds to the integration of both metamodels. During the analysis of the correspondences between the metamodel concepts, some minor divergences have been observed. In order to consider that a sufficient correspondence exists between the elements despite these divergences, and to take them into account during this third step, the divergences are analyzed in depth and the correspondence rules are formalized so as to obtain a well-defined and precise integration.
Consequently, to construct the integrated metamodel that enriches the ECA EA metamodel with the responsibility metamodel, a set of integration rules has been defined: (1) when a correspondence exists between one concept from the ECA EA metamodel and one concept from the responsibility metamodel, the name of the concept from the ECA EA metamodel is preserved; (2) when a concept of the responsibility metamodel has no corresponding concept in the ECA EA metamodel, this concept is added to the integrated metamodel and the name from the responsibility metamodel is used; (3) when a correspondence exists but with conflicts between the definitions of the concepts, the concepts are integrated, the name of the concept from the ECA EA metamodel is preserved and, additionally, integration constraints to be respected when using the integrated metamodel are included; finally, (4) when concepts exist in both metamodels but in different forms, the integration choices are motivated case by case. In the sequel, correspondences between classes are considered first, and then correspondences between associations between classes.
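A toy rendering of these naming rules is given below. The correspondence tables are a simplified stand-in for the analysis of this section, not its full content, and the concept names are only the ones discussed in this paper.

```python
ECA_CONCEPTS = {"role", "task", "function", "entity", "application"}
RESP_CONCEPTS = {"business role", "business task", "information", "responsibility",
                 "accountability", "right", "capability", "employee"}

# Rule 1: exact correspondence -> the ECA EA name is kept.
EXACT = {"business role": "role", "information": "entity"}
# Rule 3: correspondence under constraints -> ECA EA name kept, constraint recorded.
CONSTRAINED = {"business task": ("task",
               "a task may carry several accountabilities and responsibilities")}


def integrate(eca, resp):
    integrated, constraints = set(eca), []
    for concept in resp:
        if concept in EXACT:
            continue                                   # Rule 1: ECA EA name already present
        elif concept in CONSTRAINED:
            constraints.append(CONSTRAINED[concept])   # Rule 3
        else:
            integrated.add(concept)                    # Rule 2: only in the responsibility metamodel
    return integrated, constraints                     # Rule 4 is handled case by case (not shown)


print(integrate(ECA_CONCEPTS, RESP_CONCEPTS))
```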
1) UML Classes integration
a) Classes that correspond exactly
The role from the ECA EA metamodel and the business role from the responsibility metamodel exactly match. The entity from the ECA EA metamodel and the information from the responsibility metamodel also exactly match.
b) Classes that only exist in one metamodel
Employee, responsibility, right (and the type of rights giving access to information), capability and accountability only exist in the responsibility metamodel. Function only exists in the ECA EA metamodel.
c) Classes that correspond under constraints
The business task from the responsibility metamodel and the task from the ECA EA metamodel correspond partially. In the ECA EA metamodel, a task is performed by a single actor. The ECA EA metamodel description does not define the granularity level of a business task and, for instance, does not state whether "doing a task", "advising for the performance of a task" and "making decisions during the realization of a task" are considered as three tasks or as a single one. In the first case, three actors may be assigned separately to each of the three propositions, whereas, in the latter case, only one actor is assigned to it. In the responsibility metamodel, many employees may be assigned to many responsibilities regarding a business task. In practice, this is often what happens for responsibility, for instance in courts during trials. Therefore, in the integrated metamodel, it is considered that a task may be concerned by more than one accountability, these accountabilities composing responsibilities assigned to one or more employees. For instance, let us consider the task of deploying a new software component on the ECA network. There is a first responsibility to effectively deploy the solution. This responsibility is assigned to an IT system administrator who is accountable towards the manager of his unit. This means that he must justify the realization (or absence thereof) of the deployment and that he may be sanctioned positively/negatively by the unit manager. The latter, concerning this deployment, is responsible for making the right decisions, for instance deciding the best period of the day for the deployment, giving the go/no go for production after performing tests, and so forth. This responsibility is directly handled by the unit manager, who must justify his decisions and is sanctioned accordingly by his own superior, for instance the department manager, and so forth. This illustration shows how many responsibilities may be related to the same task but assigned to various employees or roles.
d) Classes that exist differently in both metamodels
The concept of access right from the responsibility metamodel and the concept of access mode from the ECA EA metamodel are represented differently. The access right is a type of right in the responsibility metamodel which semantically corresponds to an access mode in the ECA EA metamodel. In the ECA EA metamodel, the entity is accessed by the concept of function that, additionally, is associated to a task and to an application of the IS that implements it. As a result, the access right is already considered in the ECA EA metamodel, but it is directly associated to the concept of task through the intermediary of the function. In the integrated metamodel, the concept of function is preserved, as it is interesting for connecting concepts from the business architecture, the application architecture and the data architecture. However, to restrict the usage of a function to what is strictly necessary, the function is no longer considered as associated to a task, but as required by a responsibility and necessary for an accountability. As such, an employee with the accountability of doing a task gets the right to use a certain function, an employee with the accountability of deciding about the execution of a task gets the right to use another function, and so forth. For example, to record an invoice, a bookkeeper requires the use of the function "encode new invoice". This function is associated to a write access to the invoicing data.
Additionally, the financial controller who controls the invoice requires the use of the "control invoice" function, which is associated to a read access to the same invoicing data.
2) UML associations integration
a) UML associations from the responsibility metamodel that complete or replace, in the integrated metamodel, the UML associations from the ECA EA metamodel
The direct UML association between a role and a task in the ECA EA metamodel is replaced by a composition of associations: "a business role is a gathering of responsibilities, themselves made of a set of accountabilities concerning a single business task". This composition is more precise and is therefore retained. The UML association between the task and the function it uses in the ECA EA metamodel is replaced by two UML associations: "an accountability concerning a single business task requires right(s)" and "one type of right is the right to use a function".
b) UML associations from the responsibility metamodel that do not exist in the ECA EA metamodel
The following associations are present only in the responsibility metamodel and are simply included in the integrated metamodel: "a responsibility requires capabilities", "a responsibility requires rights", "an employee is assigned to one or more responsibility(ies) and to one or more business role(s)", "a capability is necessary for a business task" and "a right is necessary for a business task".
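The toy fragment below sketches the effect of this design choice: the access a person ends up with is resolved through the functions their accountabilities entitle them to use, rather than being attached to the task itself. The function names and the invoicing data echo the bookkeeper/controller illustration above, but the mapping, the CRUD letters and the use of "deciding" for the controller are invented for the sketch.

```python
FUNCTIONS = {
    "encode new invoice": {"application": "invoicing application",
                           "entity": "invoicing data", "access": {"C", "U"}},
    "control invoice":    {"application": "invoicing application",
                           "entity": "invoicing data", "access": {"R"}},
}

# accountability (task, kind) -> functions whose use is granted as a right
ACCOUNTABILITY_RIGHTS = {
    ("record invoice", "doing"):    ["encode new invoice"],
    ("record invoice", "deciding"): ["control invoice"],
}


def effective_access(accountability):
    """Resolve an accountability into concrete (function, application, entity, CRUD) rights."""
    rights = []
    for fn in ACCOUNTABILITY_RIGHTS.get(accountability, []):
        spec = FUNCTIONS[fn]
        rights.append((fn, spec["application"], spec["entity"], sorted(spec["access"])))
    return rights


print(effective_access(("record invoice", "doing")))
# [('encode new invoice', 'invoicing application', 'invoicing data', ['C', 'U'])]
```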
The metamodel resulting from the integration is shown in Fig. 3.
Case study
This section reports on the exploitation of the integrated metamodel developed in the previous section on a real-world case study from a European institution, in order to validate its applicability and its contribution to the engineering of more accurate access rights. The integrated metamodel was applied for the management of the access rights provided to employees involved in the user provisioning and user account management processes. The case study has been performed over fourteen months, from January 2011 to February 2012. During this period, twelve meetings were organized with the DIT managers of the institution and with the access right administrator to model and assess the processes and to elaborate and assign a set of thirteen responsibilities.
Process description
The user provisioning process is about providing, adapting or removing the access rights of a user depending on whether he/she is a newcomer arriving at the Court, an employee or an external staff member whose status or job changes, or someone who is temporarily or definitively leaving the Court. The status of an employee or external staff member changes when, for instance, his/her job category, department or name changes or when the end date of his/her contract is modified. The management of the users' identities and access rights is an area in which the DIT is heavily involved. Indeed, since each employee of the ECA needs different access rights on the various ISs, these access rights must be accurately provided according to the user profile.
To manage these rights, the DIT has acquired the Oracle Identity Management (OIM) tool. This tool is central to the identity and user accounts management activity and, as illustrated by Fig. 4, is connected, on the one hand, to the applications that provision the user profiles (COMREF and eAdmin, a tool to manage administrative data such as office numbers) and, on the other hand, to the user directories that provision access rights rules (eDir, Active Directory (AD), Lotus Notes (LN), and so forth). COMREF is the central human resource database of the European Commission used by the HR management tool Sysper2, the Human Resource Management solution of the European Commission that supports personnel recruitment, career management, organization charts, time management, etc. The main COMREF database is located in the EC data center and contains a set of officials' and employees' information items such as the type of contract, occupation, grade, marital status, date of birth, place of work, department, career history and so forth. This information is synchronized every day with the COMREF_ECA datastore (a dedicated mirror in Luxembourg of the COMREF database for the officials and employees of the ECA) and with the OIM tool. In parallel, additional information is also uploaded in the OIM tool for the subset of data relative to ECA workers (employees or external staff), directly from the ECA, e.g. the office number, the entry ID card, the phone numbers, the telephone PIN code, and so forth. This information is also synchronized daily with the central COMREF database.
At the business layer, processes have been defined to support the activities of the employees who manage the system (such as the system administrators) or use it (such as the secretaries who fill in the data related to the PIN codes or phone numbers). The case study focuses on one of these processes, the user provisioning and user account management process. This process aims at defining an ordered set of tasks to manage the request, establishment, issue, suspension, modification or closure of user accounts and, accordingly, to provide the employees with a set of user privileges to access IT resources. More specifically, the case study focuses on the evolution of this process, due to some recent enhancement of the automation of the provisioning loop between the COMREF database and OIM, and on the new definition of the responsibilities of the employees involved in this process.
Definition and assignment of the responsibilities
A sequence of four steps is applied to model the responsibilities of the employees involved in the upgraded user provisioning and user accounts management process.
1) Identification of business tasks
The business tasks are defined by instantiating the concept of task from the integrated metamodel (Fig. 3). In this step, the tasks for which responsibilities have to be defined are identified; tasks that are performed by an application component, and for which defining a responsibility is inappropriate according to the definition of responsibility in Sect. 2, are not considered. After the provisioning process enhancement, six tasks remain: "Release Note d'information 7 ", "Complete Sysper2 data entry", "Assign an office number using eAdmin", "Assign a phone number and a PIN code", "Enter phone number and PIN code in OIM" and "Perform auto provisioning and daily reconciliation".
2) Identification of the accountabilities
The accountability, as explained in Sect. 2, defines which obligation(s) compose(s) a responsibility for a business task and which justification is expected. In the ECA EA-responsibility metamodel, this concept of accountability has been preserved since it is important to distinguish what the accountabilities of the ECA employees really are regarding the business tasks. In this step, for each of the tasks, the existing accountabilities are reviewed for each of the responsibilities. Three of them have been retained: the obligation to "Do", which composes the responsibility of performing the task; the obligation to "Decide about", which composes the responsibility of being accountable for the performance of a task; and the obligation to "Advise", which composes the responsibility of giving advice for the performance of the task. For example, three types of accountability concern the task "Assign a phone number and a PIN code" and the task "Assign an office number using eAdmin". Three examples explained later in the text are provided in Tables 1, 2 and 3.
3) Identification of the rights and capabilities
The rights and capabilities are elements required by a responsibility and necessary to achieve accountabilities (Fig. 1). Both concepts have naturally been introduced in the integrated metamodel of Fig. 3. In this step, it is analyzed, accountability by accountability, which capabilities and which rights are necessary to realize the accountability. In the integrated ECA EA-responsibility metamodel, the access right (which is a type of right) is no longer directly associated to the realization of an action involving an information item (e.g. read a file), but is a right to use a function that realizes both an action (e.g. CRUD) regarding an entity and the use of the application that manipulates this entity. For instance, the Responsibility OIM 7 (Table 1) assigned to Barbara Smith requires using the function that realizes Read-Write access in eAdmin.
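To illustrate how such a responsibility is instantiated and then turned into rights to provision, the fragment below expresses Responsibility OIM 7 as data. The structure is ours; only the values come from Table 1, and the helper function is a hypothetical illustration rather than part of the OIM tooling.

```python
RESPONSIBILITY_OIM_7 = {
    "task": "Assign an office number using eAdmin",
    "accountability": "doing",
    "employee": "Barbara Smith",
    "accountable_towards": "Reynald Zimmermann",
    "role": "Logistic administrator",
    "rights": ["use function: Read-Write access in eAdmin"],
    "capabilities": ["eAdmin manipulation training"],
}


def rights_to_provision(responsibilities):
    """Collect the function-usage rights to provision for a set of responsibilities."""
    return sorted({right for resp in responsibilities for right in resp["rights"]})


print(rights_to_provision([RESPONSIBILITY_OIM_7]))
# ['use function: Read-Write access in eAdmin']
```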
Once the responsibilities have been modeled, they can be assigned to employees, considering their role in the organization. As shown in Fig. 3, a responsibility may be assigned directly to an employee or to a role.
4) Assignment of the responsibilities to the employees
In the case study, some responsibilities are directly assigned to employees and others are assigned to roles. For instance, the Responsibility OIM 1 (Table 2) is made of the accountability to do the task "Release Note d'information". This responsibility is assigned to the role Human Resources Directorate/RCD (recruitment career development), whereas the Responsibility OIM 10 (Table 3) is made of the accountability to verify the task "Enter Phone number and PIN code in OIM" and is assigned directly to the employee Francis Carambino.
Case study analysis
The instantiation of the responsibilities, after the mapping of the responsibility metamodel with the ECA EA metamodel, yields a set of thirteen responsibilities, from which the following results can be observed.
1) Better definition of accountabilities of employees regarding the tasks
Before the case study was performed, the description of the process according to the sole ECA EA metamodel only provided a list of the roles responsible for performing the tasks. As a result, this description was not accurate enough to know which employees perform which tasks, and which other employees decide about them, give advice, and so forth. For instance, some employees did not appear in the process description although they were involved in it. This was for instance the case of the IAM 8 Service Manager. The description of the process according to the integrated metamodel gives a clear view of all the accountabilities and their assignments to the employees.
2) Explicit formalization of capabilities required by employees to meet their accountabilities
Before the case study, the description of the process did not address the employee capabilities necessary to perform the accountabilities. Employees were assigned to responsibilities without knowing beforehand whether they were capable of assuming them. The description of the process according to the integrated metamodel clearly highlights the capabilities necessary to perform the tasks. For instance, to "Complete Sysper2 data entry", the employee needs both Sysper2 and SQL training and, if someone else is assigned to this responsibility, the same training is required.
3) Explicit formalization of the rights and access rights required by the employees to meet their accountabilities
Another difference in the process description after the case study is that the rights, and more specifically the access rights, needed to perform an accountability are clearly enumerated. For instance, to "Complete Sysper2 data entry", it is necessary to have the access right to Read-Write and Modify all Sysper2 functions and the right to use another system called RETO 9 .
4) Possibility to associate tasks to responsibilities or to roles
The final improvement is the possibility to assign a task either to a role or to a responsibility rather than directly to an employee. This possibility offers more flexibility and reduces the risk of providing access rights to employees who do not need them. As an example, all employees with the role of Human Resources Directorate/RCD are assigned to the responsibility to "Release Note d'information", whereas only one employee advises about the assignment of offices. Some other concepts of the responsibility metamodel have not yet been introduced in the integrated metamodel and have not been illustrated in the case study. Indeed, as explained in Section 2, checking the employee's commitment during the assignment of a responsibility or a role was not in the scope of this case study. However, some other cases at the ECA have shown that commitment influences the way employees accept their responsibilities. For instance, in 2010, the ECA bought a highly sophisticated tool to support problem management. During the deployment of the tool in production, the employees were not informed about their new responsibilities related to the usage of the tool. As a result, they did not commit to these responsibilities and the tool has not been used properly or up to expectations. The same problem occurred at a later stage when a decision was made to use a tool to manage the CMDB 10 .
Conclusions
The paper has presented a method to improve the alignment between the different layers of an enterprise architecture metamodel and, thereby, to enhance the management of the access rights provided to employees based on their accountabilities. This method is based on the integration of an enterprise architecture framework with a responsibility metamodel. The integration of both metamodels has been illustrated using the three-step approach proposed by [START_REF] Petit | Some methodological clues for defining a unified enterprise modelling language[END_REF] and has been applied to the ECA EA metamodel, an EAF of a European institution. A validation has been realized on a real case study related to the user provisioning and user account management processes. The objectives of this case study were to validate (1) the applicability of the integrated metamodel and (2) the engineering of more accurate access rights compared to the solutions reviewed in [START_REF] Feltus | ReMoLa: Responsibility Model Language to Align Access Rights with Business Process Requirements[END_REF]. The validation has been performed in four phases. First, the accountabilities of the employees regarding the tasks of the process have been defined. Next, the capabilities required to perform these accountabilities have been formalized. Thirdly, the required rights and access rights have been formalized. Finally, the employees have been associated to responsibilities or to roles. The output of these phases was a set of thirteen responsibilities. The validation shows that using the combination of the ECA EA metamodel and the responsibility metamodel brings benefits compared to using the ECA EA metamodel only. Additionally, compared to the other approaches, the method offers further possibilities and advantages, including a more precise definition of the accountabilities of employees regarding tasks, an explicit formalization of the rights and capabilities required by the employees to perform the accountabilities (traceability between accountabilities and rights), and formal associations of employees to responsibilities or to business roles. The approach has also been validated, in parallel, with other processes from the healthcare sector; the results are available in [START_REF] Feltus | Enhancing the ArchiMate ® Standard with a Responsibility Modeling Language for Access Rights Management[END_REF].
Fig. 1. The responsibility metamodel UML diagram.
Fig. 2. ECA EA metamodel UML diagram.
Fig. 3. The responsibility metamodel integrated with the ECA EA metamodel.
Fig. 4. Overview of the ECA OIM architecture.
Table 1. Responsibility OIM 7.
Task: Assign an office number using eAdmin
Accountability: Doing
Employee: Barbara Smith
Accountable towards: Reynald Zimmermann
Backup: Antonio Sanchis
Role: Logistic administrator
Backup Role: Logistic Head of Unit
Right: Read-Write access in eAdmin
Capability: eAdmin manipulation training
Table 2. Responsibility OIM 1.
Task: Release "Note d'Information"
Accountability: Doing
Employee: All
Accountable towards: Gerald Hadwen
Role: Human Resources Directorate/RCD
Backup Role: RCD Unit Manager
Right: Read HR workflow, Read Information Note template and Use editing tool
Capability: Ability to edit official documents and HR training
Table 3. Responsibility OIM 10.
Task: Enter phone number and PIN code in OIM
Accountability: Deciding
Employee: Francis Carambino
Accountable towards: Marco Jonhson
Backup: Philippe Melvine
Role: OIM Administrator
Backup Role: IAM Service Manager
Right: Read-Write access to OIM tool-Phone number application and Read-Write access to OIM tool-PIN code application
Capability: Computer sciences education, two years experience in OIM administration
The Enterprise Engineering Team (EE-Team) is a collaboration between Public Research Centre Henri Tudor, Radboud University Nijmegen and HAN University of Applied Sciences. www.ee-team.eu
CEAF means European Commission Enterprise Architecture Framework.
Modeler Suite from CaseWise (http://www.casewise.com/products/modeler)
In English: Information note
Identity and Access Management
RETO (Reservation TOol) is a personal identification number booking tool common to all institutions
Configuration Management Database, in accordance with ITIL
Acknowledgements
This work has been partially sponsored by the Fonds National de la Recherche Luxembourg, www.fnr.lu, via the PEARL programme | 42,593 | [
"837175",
"986246",
"17748"
] | [
"364917",
"364917",
"371421",
"452132",
"487813"
] |
01484401 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484401/file/978-3-642-34549-4_12_Chapter.pdf | Dirk Van Der Linden
email: dirk.vanderlinden@tudor.lu
Stijn Hoppenbrouwers
email: stijn.hoppenbrouwers@han.nl
Challenges of Identifying Communities with Shared Semantics in Enterprise Modeling
Keywords: enterprise modeling, conceptual understanding, personal semantics, community identification, semantics clustering
In this paper we discuss the use and challenges of identifying communities with shared semantics in Enterprise Modeling. People tend to understand modeling meta-concepts (i.e., a modeling language's constructs or types) in a certain way and can be grouped by this understanding. Having an insight into the typical communities and their composition (e.g., what kind of people constitute a semantic community) would make it easier to predict how a conceptual modeler with a certain background will generally understand the meta-concepts he uses, which is useful for, e.g., validating model semantics and improving the efficiency of the modeling process itself. We demonstrate the use of psychometric data from two studies involving experienced (enterprise) modeling practitioners and computing science students to find such communities, discuss the challenge that arises in finding common real-world factors shared between their members to identify them by, and conclude that the common (often implicit) grouping properties, such as similar background, focus and modeling language, are not supported by the empirical data.
Introduction
The modeling of an enterprise typically comprises the modeling of many aspects (e.g., processes, resources, rules), which themselves are typically represented in a specialized modeling language or method (e.g., BPMN [START_REF]Object Management Group: Business process model and notation (bpmn) ftf beta 1 for version 2[END_REF], e3Value [START_REF] Gordijn | e-service design using i* and e3value modeling[END_REF], RBAC [START_REF] Ferrariolo | Role-based access control (rbac): Features and motivations[END_REF]). Most of these languages share similar meta-concepts (e.g., processes, resources, restrictions 5 ). However, from language to language (and modeler to modeler) the way in which these meta-concepts are typically used (i.e., their intended semantics) can differ. For example, one modeler might typically intend restrictions to be deontic in nature (i.e., open guidelines that ought to be the case), while a different modeler might typically consider them as alethic conditions (i.e., rules that are strict logical necessities). They could also differ in whether they typically interpret results as being material or immaterial 'things'. If one is to integrate or link such models (i.e., the integrative modeling step in enterprise modeling [START_REF] Lankhorst | Enterprise architecture modelling-the issue of integration[END_REF][START_REF] Kuehn | Enterprise Model Integration[END_REF][START_REF] Vernadat | Enterprise modeling and integration (EMI): Current status and research perspectives[END_REF][START_REF] Opdahl | Interoperable language and model management using the UEML approach[END_REF]) and ensure the consistency and completeness of the involved semantics, it is necessary to be aware of the exact way in which such a meta-concept was used by the modeler. If this is not explicitly taken into account, problems could arise from, e.g., treating superficially similar concepts as being the same or eroding the nuanced view from specific models when they are combined and made (internally) consistent.
To deal more effectively with such semantic issues it is necessary to have some insight into the "mental models" of the modeler. It is important to gain such insight because people generally do not think in the semantics of a given modeling language, but in the semantics of their own natural language [START_REF] Sowa | The Role of Logic and Ontology in Language and Reasoning[END_REF]. Furthermore, some modeling languages do not have an official, agreed-upon specification of their semantics [START_REF] Ayala | A comparative analysis of i*-based agent-oriented modeling languages[END_REF] and if they do, there is no guarantee that their semantics are complete or consistent [START_REF] Breu | Towards a formalization of the Unified Modeling Language[END_REF][START_REF] Nuffel | Enhancing the formal foundations of bpmn by enterprise ontology[END_REF][START_REF] Wilke | UML is still inconsistent! How to improve OCL Constraints in the UML 2.3 Superstructure[END_REF], let alone that users might deliberately or unconsciously ignore the official semantics and invent their own [START_REF] Henderson-Sellers | UML -the Good, the Bad or the Ugly? Perspectives from a panel of experts[END_REF]. Understanding the intended semantics of a given model thus can not come only from knowledge of the language and its semantics, but requires us to spend time understanding the modeler who created the model.
However, one cannot realistically be expected to look into each individual modeler's semantic idiosyncrasies. Instead, a generalized view on how people with a certain background typically understand the common meta-concepts could be used to infer, to some degree of certainty, the outline of their conceptual understanding. Such (stereo)types of modelers could be found by identifying communities of modelers that share similar semantic tendencies for given concepts and analyzing whether they have any shared properties that allow us to treat them as one. As language itself is inherently the language of community [START_REF] Perelman | The New Rhetoric: A Treatise on Argumentation[END_REF] (regardless of whether that community is bound by geography, biology, shared practices and techniques [START_REF] Wenger | Communities of practice: The organizational frontier[END_REF] or simply speech and natural language [START_REF] Gumperz | The speech community[END_REF][START_REF] Hoppenbrouwers | Freezing language : conceptualisation processes across ICT-supported organisations[END_REF]), it is safe to assume that there are communities which share a typical way of using (and understanding) modeling language concepts. This is not to say that such communities would be completely homogeneous in their semantics, but merely that they contain enough overlap to be treated as belonging together during a process which integrates models originating from their members without expecting strong inconsistencies to arise in the final product.
Finding such communities based on, for example, empirical data is not a difficult matter in itself. However, going from simply finding communities to understanding them and generalizing them (i.e., to be able to predict on basis of empirical data or prior experience that communities of people which share certain properties will typically have certain semantics) is the difficult step. To do so it is necessary to find identifiers -properties that are shared between the members of a community. These identifiers (e.g., dominant modeling language, focus on specific aspects) are needed to be able to postulate that a given modeler, with a given degree of certainty, belongs to some community and thus likely shares this community's typical understanding of a concept.
In workshop sessions held with companies and practitioners from the Agile Service Development (ASD) 6 project who are involved in different kinds of (collaborative) domain modeling (e.g., enterprise modeling, knowledge engineering, systems analysis) we have found that there are a number of common identifiers modelers are typically (and often implicitly) grouped by. That is, on the basis of these properties they are often assigned to collaborate on some joint domain modeling task. These properties are, for example, a similar background or education, a focus on which aspects to model (e.g., processes, goals), the sector in which they do so (e.g., government, health care, telecommunications), and the modeling languages used. It thus seems that, in practice, it is assumed that those who share a background or use similar modeling languages and methods will model alike.
While the wider context of our work is to build towards a theory of how people understand typical modeling meta-concepts (which can aid enterprise modelers with creating integrated models), this paper will focus first on testing the above assumption. To do so we hypothesize that these commonly used properties (e.g., sector, focus, used modeling language) should be reflected in communities that share a similar semantic understanding of common modeling meta-concepts. To test this we will investigate the personal semantics of practitioners and students alike, group them by shared semantics and investigate whether they share these, or indeed any, properties. If this is found to be so, it could mean that it is possible to predict, to a certain degree, what (range of) understanding a modeler has for a given concept.
In this stage of our empirical work we have enough data from two of our studies into the conceptual understanding of the common meta-concepts amongst practitioners and students (cf. [START_REF] Van Der Linden | Initial results from a study on personal semantics of conceptual modeling languages[END_REF] for some initial results) to have found several communities that share a similar understanding of conceptual modeling meta-concepts. However, we have begun to realize the difficulties inherent in properly identifying them. The rest of this paper is structured as follows. In Section 2 we discuss the used data and how we acquired it. In Section 3 we demonstrate how this kind of data can be analyzed to find communities, discuss the difficulties in identifying common properties amongst their members and reflect on the hypothesis. Finally, in Section 4 we conclude and discuss our future work.
Methods and Used Data Samples
The data used in this paper originates from two studies using semantic differentials into the personal semantics participants have for a number of meta-concepts common to modeling languages and methods used in Enterprise Modeling. The Semantic Differential [START_REF] Osgood | The Measurement of Meaning[END_REF] is a psychometric method that can be used to investigate what connotative meanings apply to an investigated concept, e.g., whether an investigated concept is typically considered good or bad, intuitive or difficult. It is widely used in information systems research and there are IS-specific guidelines in order to ensure quality of its results [START_REF] Verhagen | A framework for developing semantic differentials in is research: Assessing the meaning of electronic marketplace quality (emq)[END_REF]. We use semantic differentials to investigate the attitude participants have towards actors, events, goals, processes, resources, restrictions and results and to what degree they can be considered natural, human, composed, necessary, material, intentional and vague things. These concepts and dimensions originate from our earlier work on categorization of modeling language constructs [START_REF] Van Der Linden | Towards an investigation of the conceptual landscape of enterprise architecture[END_REF]. The resulting data is in the form of a matrix with numeric scores for each concept-dimension combination (e.g., whether an actor is a natural thing, whether a result is a vague thing). Each concept-dimension combination has a score ranging from 2 to -2, denoting respectively agreement and disagreement that the dimension 'fits' with their understanding. A more detailed overview of the way we apply this method is given in [START_REF] Van Der Linden | Beyond terminologies: Using psychometrics to validate shared ontologies[END_REF].
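For readers unfamiliar with the technique, the fragment below shows what the raw outcome of such a semantic differential looks like for a single participant: one score per concept-dimension pair on the -2..2 scale. The numbers are invented for illustration; only the concepts, dimensions and scale come from the study description.

```python
import pandas as pd

concepts = ["actor", "event", "goal", "process", "resource", "restriction", "result"]
dimensions = ["natural", "human", "composed", "necessary", "material", "intentional", "vague"]

# One participant's answers: rows are concepts, columns are dimensions, values in [-2, 2].
participant_scores = pd.DataFrame(
    [[ 1,  2,  0,  1, -1,  2, -2],
     [ 0, -1,  1,  1, -1,  0,  1],
     [ 1,  1,  1,  2, -2,  2,  0],
     [ 0,  0,  2,  1, -1,  1, -1],
     [ 0, -1,  1,  1,  1, -1, -1],
     [-1,  1,  1,  1, -2,  1,  0],
     [ 0,  0,  1,  1,  0,  1, -1]],
    index=concepts, columns=dimensions)

# For the analyses below, each participant becomes one 49-dimensional feature vector
# (or a 7-dimensional one when a single concept is analyzed in isolation).
print(participant_scores.loc["restriction"])            # the scores for one concept
print(participant_scores.to_numpy().flatten().shape)    # (49,)
```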
The practitioner data sample (n=12) results from a study which was carried out in two internationally operating companies focused on supporting clients with (re)design of organizations and enterprises. The investigated practitioners all had several years of experience in applying conceptual modeling techniques. We inquired into the modeling languages and methods they use, what sector(s) they operate in, what they model, and what kind of people they mostly interact with. The student data sample (n=19) results from an ongoing longitudinal study into the (evolution of) understanding computing and information systems science students have of modeling concepts. This study started when the students began their studies and had little to no experience. We inquired into their educational (and where applicable professional) background, knowledge of modeling or programming languages and methods, interests and career plans in order to see whether these could be used as identifying factors for a community.
To find communities of people that share semantics (i.e., score similarly for a given concept) we analyzed the results using Repeated Bisection clustering and Principal Component Analysis (PCA). The PCA results and their visualization (see Figs. 1 and 2) demonstrate (roughly) the degree to which people share a (semantically) similar understanding of the investigated concepts (for the given dimensions) and can thus be grouped together.
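A rough sketch of such an analysis pipeline is given below. It uses scikit-learn for convenience, with a single k-means bisection step standing in for the repeated bisection clustering used in the study; the random scores, the number of components and the cluster count are placeholders rather than the actual study parameters.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
scores = rng.integers(-2, 3, size=(12, 7))   # 12 participants x 7 dimensions, for one concept

coords = PCA(n_components=2).fit_transform(scores)   # the two axes plotted in Figs. 1 and 2
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)  # one bisection

for participant, (x, y), community in zip(range(1, 13), coords, labels):
    print(f"participant {participant:2d}: ({x:+.2f}, {y:+.2f}) -> community {community}")
```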
General Results & Discussion
Most importantly, the results support the idea that people can be clustered based on their personal semantics. The PCA data proved to be a more useful resource for investigating the clusters and general semantic distance than the (automated) clustering itself, as we found it was hard to estimate a priori parameters like optimal cluster size and similarity cutoffs. As shown in Figs. 1 and 2, there are easily detectable clusters (i.e., communities) for most of the investigated concepts, albeit varying in their internal size and variance. The figures show the results for the participants' understanding of goals, processes, resources and restrictions, with some of the discussed participants highlighted; the closer two participants are on both axes, the more similar (the quantification of) their semantics is. Colored boxes and circles are used to highlight some interesting results that will be discussed in more detail.
While there are clusters of people that share a semantic understanding among students and practitioners alike, the two groups differ in the degree to which larger clusters can be found. Internal variance is generally greater for students (i.e., the semantics are more 'spread out'). This may be explained by the greater amount of neutral attitudes practitioners display towards most of the dimensions (i.e., the lack of strongly polarized attitudes), causing a lower spread of measurable semantics. Such neutral attitudes might be a reflection of the necessity to be able to effectively interact with stakeholders who hold different viewpoints. Nonetheless, they are still easily divided into communities based on their semantic differences. To demonstrate, we will discuss some of the clusters we found for the understanding practitioners and students have of goals, processes, resources and restrictions.
For the students, there are several potentially interesting communities to look at. Participants 4 & 8 differ strongly for several concepts (e.g., their strong differentiation on two components for resources, and for processes and restrictions), but they have an almost exactly similar understanding of goals. One would expect that some kind of property shared between them might be used to identify other participants that cluster together for goals, but not necessarily share other understandings. Participants 3, 6 & 19 also cluster together closely for one concept resources -but differ on their understanding of the other investigated concepts. As such, if (some) experience in the form of having used specific programming and modeling languages is correlated to their conceptual understanding, one would expect to find some reflection of that in the clusterings of these students.
However, when we add the information we have about the participants (see Tables 1 and2) to these clusters , we run into some problems. It is often the case that communities do not share (many) pertinent properties, or when they do, there are other communities with the same properties that are far removed from them in terms of their conceptual understanding. Take for instance participants 2, 7 & 10 (highlighted with a gray oval) from the practitioner data sample. While they share some properties, (e.g., operating in the same sector, having some amount of focus on processes, and interacting with domain experts), when we look at other communities it is not as simple to use this combination of properties to uniquely identify them. For instance, participants 3 & 8 (highlighted with a black rectangle) cluster together closely in their own right, but do share some overlapping properties (both operate in the government sector). Thus, merely looking at the sector a modeler operates in cannot be enough to identify them. Looking at the combination of sector and focus is not enough as well, as under these conditions participant 8 and 10 should be grouped together because they both have a focus on rules. When we finally look at the combination of sector, focus and interaction we have a bit more chance of uniquely identifying communities, although there are still counter-examples. Participant 9 (highlighted with a gray rectangle), for example, shares all the properties with participants 2, 7 & 10, but is conceptually far removed from all others. In general the dataset shows this trend, providing both examples and counterexamples for most of these property combinations, making it generally very difficult, if not flat-out impossible to to identify communities.
We face the same challenge in the student data sample, although there it is even more pronounced, almost at the individual level. There are participants that share the same properties while having wildly varying conceptual understandings. There seems to be some differentiation depending on whether participants have prior experience, but even then this single property does not have enough discriminatory power. Take for example participants 4 & 8 (highlighted with a black rectangle) and participants 3, 6 & 19 (highlighted with a gray oval). Both these communities cluster closely together for a specific concept, but then differ on other concepts. One could expect this to be due to a small number of properties differing between them, which is the case, as there is consistently a participant with some prior experience in programming and scripting languages amongst them. However, if this property really is the differentiating factor, one would expect that on the other concepts the participants with prior experience (4 & 6) would be further removed from the other participants than the ones without experience are, which is simply not the case. It thus seems rather difficult to link these properties to the communities and their structure.
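One way of making this difficulty measurable (offered here purely as an illustration, not as the analysis performed in the paper) is to compare the grouping induced by a candidate property with the clusters found in component space and compute a chance-corrected agreement score; both label lists below are invented.

```python
# Hypothetical check of whether a single property explains the semantic clusters.
from sklearn.metrics import adjusted_rand_score

semantic_clusters = [1, 1, 2, 3, 2, 3, 1, 3, 2, 1]   # cluster labels derived from component space
has_experience    = [0, 0, 0, 1, 0, 1, 0, 0, 0, 0]   # invented property labels per participant

score = adjusted_rand_score(semantic_clusters, has_experience)
print(f"adjusted Rand index: {score:.2f}")  # values near 0 mean the property does not explain the clusters
```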
This challenge could be explained by a number of things. First and foremost would be a simple lack in the number of properties (or in their granularity, as might be the case in the student data sample) by which to identify communities; it is also possible that the investigated concepts were not at the right abstraction level (i.e., either too specific or too vague), or that the investigated concepts were simply not the concepts people use to model. The simplest explanation is that the properties we attempt to identify communities by are not the right (i.e., properly discriminating) ones. It is possible (especially for the student data sample) that some of the properties are not necessarily wrong, but that they are not discriminative enough. For example, knowing what modeling languages someone uses could be described in more detail, because a language could have multiple versions in use, and it is possible (indeed quite likely) that a language as-used is not the same as the 'official' language. However, this line of reasoning is problematic for two reasons: first, these are the properties that practitioners use to (naively) group modelers together; second, there is no clear-cut way to identify reasonable other properties that are correlated with modeling practice. If these properties are not useful, we would have to reject the hypothesis on the grounds of them being a 'bad fit' for grouping people. Other conceivable properties include reflections of the cultural background of modelers; however, these are less likely to be of influence in our specific case, as the Enterprise Modelers we investigate are all set in a Western European context and there is little cultural diversity in this sense.
Another explanation could be that the meta-concepts we chose are not at the right abstraction level (i.e., concept width), meaning that they are either too vague or too specific. For example, some modelers could typically think at a near-instantiation level while others think in vaguer terms. If concepts are very specific one would actually expect to find differences much faster (as the distance between people's conceptual understandings can be expected to be larger), which thus makes it easier to find communities. If they are (too) vague, though, people would not differ much because there are not enough properties to differ on in the first place. However, the way we set up our observations rules out the vagueness possibility, as participants were given a semantic priming task before the semantic differential task for each concept. What we investigated was thus their most typical specific understanding of a concept. For this reason it is unlikely that the abstraction level of the concepts was the cause of the challenge of identifying the communities.
Finally, the most obvious explanation could be a flaw in our preliminary work, namely that we did not select the right concepts, irrespective of their abstraction level. Considering that the concepts were derived from an analysis of conceptual modeling languages and methods used for many aspects of enterprises, and that there simply does not seem to be a way to do without most of them, we find it very unlikely that this is the case. The unlikely option that what we investigated was not actually the modeling concept, but something else entirely (e.g., someone considering their favorite Hollywood actors rather than a conceptual modeling interpretation of actor), can also be dismissed, as the priming task in our observation rules out this possibility. It thus seems far more plausible that these potential issues did not contribute to the challenge we face, and we should move towards accepting that identifying communities of modelers based on the investigated properties might not be feasible.
While we had hoped that these observations would have yielded a positive result for the hypothesis, the lack of support we have shown means that a theory predicting how modelers understand the key concepts they use, and thus what the additional 'implicit' semantics of a model could be (as alluded to earlier), is likely not feasible. Nonetheless, the observations do help to systematically clarify that these different personal understandings exist, can be measured, and might be correlated with communication and modeling breakdowns due to unawareness of linguistic prejudice. Eventually, in terms of Gregor's [START_REF] Gregor | The nature of theory in information systems[END_REF] types of theories in information systems, this information can be used by enterprise modelers and researchers alike to build design theories supporting model integration in enterprise modeling by pointing out potentially sensitive aspects of models' semantics.
If we merely wanted to discount the possibility that these properties are good ways to identify communities that share a semantic understanding of some concepts, we would be done. But there is more at issue here, as these properties are being used in practice to identify communities and group people together. Thus, given these findings we have to reject the hypothesis as stated in our introduction, while as of yet not being able to replace it with anything but a fair warning and a call for more understanding: do not just assume (conceptual) modelers will model alike just because they have been using the same languages, come from the same background or work in the same area.
To summarize, we have shown that the often implicit assumption that people have strongly comparable semantics for the common modeling meta-concepts if they share expertise in certain sectors, modeling focus and used languages cannot be backed up by our empirical investigation. While not an exhaustive disproof of the hypothesis by any means, it casts enough doubt on it that it would be prudent for Enterprise Modelers to be more careful and double-check their assumptions when modeling together with, or using models from, other practitioners.
Conclusion and Future Work
We have shown a way to discover communities that share semantics of conceptual modeling meta-concepts through analysis of psychometric data, and discussed the difficulties in identifying them through properties shared between their members. On the basis of this we have rejected the hypothesis that modelers with certain shared properties (such as used languages, background, focus, etc.) can be easily grouped together and expected to share a similar understanding of the common conceptual modeling meta-concepts.
Our future work involves looking at the used properties in more detail (i.e., what exactly a used language constitutes) and a more detailed comparison of the results of practitioners and students in terms of response polarity and community distribution. Furthermore, we will investigate whether the specific words that a community typically uses to refer to its concepts are correlated with that community's shared understanding.
Fig. 1. Principal components found in the data of concept-specific understandings for practitioners. The visualizations represent (roughly) the distance between the understandings individual participants have. The further away two participants are on both axes (i.e., horizontally and vertically different coordinates), the more different their conceptual understanding has been measured to be. Shown are the distances between participants for their understanding of goals, processes, resources and restrictions, with some discussed participants highlighted. Colored boxes and circles are used to highlight some interesting results that are discussed in more detail in the text.
Fig. 2. Principal components found in the data of concept-specific understandings for students. The visualizations represent (roughly) the distance between the understandings individual participants have. The further away two participants are on both axes (i.e., horizontally and vertically different coordinates), the more different their conceptual understanding has been measured to be. Shown are the distances between participants for their understanding of goals, processes, resources and restrictions, with some discussed participants highlighted.
Table 1. Comparison of some practitioners based on investigated properties. The proprietary language is an in-house language used by one of the involved companies.

No. | Used languages | Sector | Focus | Interacts with
3 | Proprietary | Financial, Government | Knowledge rules, processes, data | Analysts, modelers
8 | UML, OWL, RDF, Mindmap, RuleSpeak, Proprietary | Government, Healthcare | Rules | Business professionals, policymakers, lawyers
2 | Proprietary | Government | Knowledge systems, processes | Managers, domain experts
7 | Proprietary, UML, Java | Government, spatial planning | Business processes, process structure | Domain experts, IT specialists
10 | Proprietary, xml, xslt | Government, finance | Processes, rules, object definitions for systems | Domain experts, Java developers
Table 2. Comparison of some students based on investigated properties. Profiles are standardized packages of coursework students took during secondary education: Nature denotes natural sciences, Technology a focus on physics, and Health a focus on biology.

No. | Study | Profile | Prior experience
4 | Computing Science | Nature & Technology & Health | Some programming and scripting experience
8 | Computing Science | Nature & Technology | None
3 | Information Systems | Nature & Technology | None
6 | Computing Science | Nature & Technology | Programming experience
19 | Information Systems | Nature & Health | None
The ASD project (www.novay.nl/okb/projects/agile-service-development/7628) was a collaborative research initiative focused on methods, techniques and tools for the agile development of business services. The ASD project consortium consisted of Be Informed, BiZZdesign, Everest, IBM, O&i, PGGM, RuleManagement Group, Voogd & Voogd, CRP Henri Tudor, Radboud University Nijmegen, University Twente, Utrecht University & Utrecht University of Applied Science, TNO and Novay.
Acknowledgments. This work has been partially sponsored by the Fonds National de la Recherche Luxembourg (www.fnr.lu), via the PEARL programme.
The Enterprise Engineering Team (EE-Team) | 30,548 | [ "1002484" ] | [ "371421", "300856", "452132", "348023", "300856", "452132" ] |
01484404 | en | [ "info" ] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484404/file/978-3-642-34549-4_8_Chapter.pdf | Eystein Mathisen
email: eystein.mathisen@uin.no
John Krogstie
email: krogstie@idi.ntnu.no
Modeling of Processes and Decisions in Healthcare -State of the Art and Research Directions
In order to deliver efficient and effective decision support technologies within healthcare, it is important to understand and describe decision making in medical diagnosis, treatment and administrative processes. This paper outlines how information can be synthesized, interpreted and used during decision making in dynamic healthcare environments. We intend to develop a set of modeling constructs that describe the decision requirements forming the basis for adequate situation awareness in clinical processes. We propose that a separate decision perspective will 1) enhance the shared understanding of the decision context among clinical staff, and 2) provide a better understanding of how we can design information system support for complex cognitive tasks in dynamic work environments.
Introduction
The clinical and administrative processes in today's healthcare environments are becoming increasingly complex and intertwined and the provision of clinical care involves a complex series of physical and cognitive activities. A multitude of stakeholders and healthcare providers with the need for rapid decision-making, communication and coordination, together with the steadily growing amount of medical information, all contribute to the view of healthcare as a complex cognitive work domain.
The healthcare environment can also be characterized as a very dynamic work environment, in which clinicians rapidly switch between work activities and tasks. The process is partially planned, but at the same time driven by events and interrupts [START_REF] Clancy | Applications of complex systems theory in nursing education, research, and practice[END_REF][START_REF] Dahl | Context in care--requirements for mobile context-aware patient charts[END_REF].
To be able to cope with the dynamism and complexity in their environments, many organizations have been forced to restructure their operations and integrate complex business processes across functional units and across organizational boundaries [START_REF] Fawcett | Process integration for competitive success: Benchmarking barriers and bridges[END_REF]. This has in many cases led to the adoption of process-oriented approaches and enterprise modeling for the management of organizational operations. Process modeling is used within organizations as a method to increase the focus on and knowledge of organizational processes, and functions as a key instrument to organize activities and to improve the understanding of their interrelationships [START_REF] Recker | Business process modeling : a comparative analysis[END_REF]. Today, there is a large number of modeling languages with associated notations, as we will discuss in more detail in section 3.
Recent work within the healthcare domain has studied how one can best adopt process orientation and process-oriented information systems in order to provide effective and efficient solutions for healthcare processes, exemplified by the concepts of patient care, clinical pathways or patient trajectories [START_REF] Lenz | IT Support for Healthcare Processes -Premises, Challenges[END_REF]. The adoption of process orientation in the healthcare sector addresses the quality of the outcomes of care processes (e.g. clinical outcomes and patient satisfaction) as well as improvements in operational efficiency [START_REF] Fryk | A Modern Process Perspective, Process Mapping and Simulation in Health Care[END_REF].
In this context, it is important to note that research has shown that performance differences between organizations operating in dynamic and complex environments are related to how people decide and act [START_REF] Bourgeois | Strategic Decision Processes in High Velocity Environments: Four Cases in the Microcomputer Industry[END_REF]. Hence, the focus of this research relates to how clinical decision-makers adapt to dynamic and complex healthcare environments and how information is synthesized, interpreted and used during decision-making in these contexts. The concept of decision-making is not a well-researched phenomenon in relation to the mapping and modeling of healthcare processes. It is argued here that the complexity of organizational decision making in general (see e.g. [START_REF] Langley | Opening up Decision Making: The View from the Black Stool[END_REF]) is not reflected in the various modeling languages and methods that are currently available, even though decision making is an inherent and important part of every non-trivial organizational process. Thus, we want to investigate how decision-making expertise can be expressed in enterprise models describing healthcare processes.
The organization of the paper is as follows: Section 2 describes and discusses some of the most prevalent challenges within healthcare. Section 3 presents the theoretical background for the project, with focus on (process) modeling and situation awareness as a prerequisite for decision making, followed by a presentation of decision making theories and process modeling in healthcare. Section 4 gives an overview of the proposed research directions for this area while section 5 provides closing remarks.
Challenges in the Healthcare Domain
The healthcare domain is a typical risky, complex, uncertain and time-pressured work environment. Healthcare workers experience many interruptions and disruptions during a shift. Resource constraints with regard to medical equipment/facilities and staff availability, qualifications, shift and rank (organizational hierarchy) are commonplace. Clinical decisions made under these circumstances can have severe consequences. Demands for care can vary widely due to the fact that every patient is unique. This uniqueness implies that the patient's condition, diagnosis and the subsequent treatment processes are highly situation-specific. Work is performed on patients whose illnesses and responses to medical treatment can be highly unpredictable. Medical care is largely oriented towards cognitive work like planning, problem solving and decision making. In addition, the many practical activities that are needed to perform medical care, often including the use of advanced technology, require cognitive work as well. Thus, the needs of individual patients depend on the synchronization of clinical staff, medical equipment and tools as well as facilities (e.g. operating rooms). The management of procedures for a set of operating rooms or an intensive care unit must be planned, and the associated resources and activities require coordination [START_REF] Nemeth | The context for improving healthcare team communication[END_REF]. Planning, problem solving and decision making involve the assessment of resource availability, resource allocation, the projection of future events, and assessment of the best courses of action. According to Miller et al. [START_REF] Miller | Care Coordination in Intensive Care Units: Communicating Across Information Spaces[END_REF], members of a healthcare team must coordinate the acquisition, integration and interpretation of patient and team-related information to make appropriate patient care decisions.
Clinicians face two types of data processing challenges in decision-making situations:
1. Deciding on medical acts: what to do with the patient.
2. Deciding on coordination acts: which patient to work on next.
Knowing what has been going on in the clinical process enables clinicians to adapt their plans and coordinate their work with that of others. In addition to patient data, these decisions are informed by data about what other personnel are doing and which resources (rooms and equipment) are in use.
From the above discussion, we argue that communication and collaboration for informed decision making leading to coordinated action are among the most prevalent challenges experienced within healthcare. Lack of adequate team communication and care coordination is often mentioned among the major reasons for the occurrence of adverse events in healthcare [START_REF] Miller | Care Coordination in Intensive Care Units: Communicating Across Information Spaces[END_REF][START_REF] Reader | Communication skills and error in the intensive care unit[END_REF]. According to Morrow et al. [START_REF] Morrow | Reducing and Mitigating Human Error in Medicine[END_REF], errors and adverse events in medical care are related to four broad areas of medical activities: medical device use, medication use, team collaboration, and diagnostic/decision support. In [START_REF] Eisenberg | The social construction of healthcare teams, in Improving Healthcare Team Communication[END_REF], Eisenberg discusses communication and coordination challenges related to healthcare teams and points out the following requirements for these teams:
• Building shared situational awareness, contributing to the development of shared mental models.
• Continuously refreshing and updating the medical team's understanding of the changing context with new information.
• Ensuring that team members adopt a notion of team accountability, enabling them to relate their work to the success of the team.
In section 3 we will look more closely at the theoretical underpinnings of the proposed research, starting with an overview of perspectives to process modeling.
Theoretical Background
Perspectives to Process Modeling
A process is a collection of related, structured tasks that produce a specific service or product to address a certain goal for some actors. Process modeling has been performed in connection with IT and organizational development at least since the 1970s. The archetypical way to look at processes is as a transformation, according to an IPO (input-process-output) approach. Early process modeling languages had this as a basic approach [START_REF] Gane | Structured Systems Analysis: Tools and Techniques[END_REF], but as process modeling has been integrated with other types of conceptual modeling, variants have appeared. Process modeling is usually done in some organizational setting. One can look upon an organization and its information system abstractly as being in a state (the current state, often represented as a descriptive 'as-is' model) that is to be evolved to some future wanted state (often represented as a prescriptive 'to-be' model). These states are often modeled, and the state of the organization is perceived (differently) by different persons through these models. Different usage areas of conceptual models are described in [START_REF] Krogstie | Model-Based Development and Evolution of Information Systems: A Quality Approach[END_REF][START_REF] Nysetvold | Assessing Business Process Modeling Languages Using a Generic Quality Framework[END_REF]:
1. Human sense-making: The descriptive model of the current state can be useful for people to make sense of and learn about the current perceived situation.
2. Communication between people in the organization: Models can have an important role in human communication. Thus, in addition to supporting the sense-making process for the individual, descriptive and prescriptive models can act as a common framework supporting communication between people.
3. Computer-assisted analysis: This is used to gain knowledge about the organization through simulation or deduction, often by comparing a model of the current state and a model of a future, potentially better state.
4. Quality assurance, ensuring e.g. that the organization acts according to a certified process developed for instance as part of an ISO-certification process.
5. Model deployment and activation: To integrate the model of the future state in an information system directly. Models can be activated in three ways: (a) Through people, where the system offers no active support. (b) Automatically, for instance as an automated workflow system. (c) Interactively, where the computer and the users co-operate [START_REF] Krogstie | Interactive Models for Supporting Networked Organisations[END_REF].
6. To be a prescriptive model used to guide a traditional system development project, without being directly activated.
Modeling languages can be divided into classes according to the core phenomena classes (concepts) that are represented and focused on in the language. This has been called the perspective of the language [START_REF] Krogstie | Model-Based Development and Evolution of Information Systems: A Quality Approach[END_REF][START_REF] Lillehagen | Active Knowledge Modeling of Enterprises[END_REF]. Languages in different perspectives might overlap in what they express, but emphasize different concepts, as described below. A classic distinction regarding modeling perspectives is between the structural, functional, and behavioral perspectives [19]. Through other work, such as [START_REF] Curtis | Process modeling[END_REF], [START_REF] Mili | Business process modeling languages: Sorting through the alphabet soup[END_REF], F3 [START_REF] Bubenko | Facilitating fuzzy to formal requirements modeling[END_REF], NATURE [START_REF] Jarke | Theories underlying requirements engineering: an overview of NATURE at Genesis[END_REF], [START_REF] Krogstie | Conceptual Modelling in Information Systems Engineering[END_REF][START_REF] Zachman | A framework for information systems architecture[END_REF], additional perspectives have been identified, including object, goal, actor, communicational, and topological. The perspectives thus identified for conceptual modeling are:
Behavioral perspective: Languages following this perspective go back to the early sixties, with the introduction of Petri-nets [START_REF] Petri | Kommunikation mit Automaten[END_REF]. In most languages with a behavioral perspective the main phenomena are 'states' and 'transitions' between 'states'. State transitions are triggered by 'events' [START_REF] Davis | A comparison of techniques for the specification of external system behavior[END_REF].
Functional perspective: The main phenomenon class in the functional perspective is 'transformation': a transformation is defined as an activity which, based on a set of phenomena, transforms them into another set of phenomena.
Structural perspective: Approaches within the structural perspective concentrate on describing the static structure of a system. The main construct of such languages is the 'entity'.
Goal and Rule perspective: Goal-oriented modeling focuses on 'goals' and 'rules'. A rule is something which influences the actions of a set of actors. In the early nineties, one started to model so-called rule hierarchies, linking goals and rules at different abstraction levels.
Object-oriented perspective: The basic phenomena of object-oriented modeling languages are those found in most object-oriented programming languages: 'objects' with a unique id and a local state that can only be manipulated by calling methods of the object. The process of the object is the trace of the events during the existence of the object. A set of objects that share the same definitions of attributes and operations composes an object class.
Communication perspective: The work within this perspective is based on language/action theory from philosophical linguistics [START_REF] Winograd | Understanding Computers and Cognition: A New Foundation for Design[END_REF]. The basic assumption of language/action theory is that persons cooperate within work processes through their conversations and through mutual commitments taken within them.
Actor and role perspective: The main phenomena of modeling languages within this perspective are 'actor' and 'role'. The background for modeling in this perspective comes from organizational science, work on programming languages, and work on intelligent agents in artificial intelligence.
Topological perspective: This perspective relates to the topological ordering between the different concepts. The best background for conceptualization of these aspects comes from the cartography and CSCW fields, differentiating between space and place [START_REF] Dourish | Re-space-ing place: "place" and "space" ten years on[END_REF][START_REF] Harrison | Re-place-ing space: the roles of place and space in collaborative systems[END_REF]. 'Space' describes geometrical arrangements that might structure, constrain, and enable certain forms of movement and interaction; 'place' denotes the ways in which settings acquire recognizable and persistent social meaning through interaction.
Situation and context awareness
A clinician's situation awareness is the key feature for the success of the decision process in medical decision-making. In general, decision makers in complex domains must do more than simply perceive the state of their environment in order to have good situation awareness. They must understand the integrated meaning of what they perceive in light of their goals. Situation awareness incorporates an operator's understanding of the situation as a whole, which forms the basis for decision-making. The integrated picture of the current situation may be matched to prototypical situations in memory, each prototypical situation corresponding to a 'correct' action or decision.
Figure 1 shows the model of situation awareness in decision making and action in dynamic environments. Situation awareness (SA) is composed of two parts: situation and awareness. Pew [START_REF] Pew | The state of Situation Awareness measurement: heading toward the next century[END_REF] defines a 'situation' as "a set of environmental conditions and system states with which the participant is interacting that can be characterized uniquely by a set of information, knowledge and response options." The second part ('awareness') refers primarily to the cognitive process that produces this awareness. Some definitions put more emphasis on this process than on the situation itself. For example, Endsley [START_REF] Endsley | Toward a Theory of Situation Awareness in Dynamic Systems[END_REF] defines SA as "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future". The model in Figure 1 defines three levels of situation awareness. The first level is perception, which refers to the perception of critical cues in the environment. Examples of relevant cues in a clinical setting are patient vital signs, lab results and other team members' current activities. The second level (comprehension) involves an understanding of what the integrated cues mean in relation to the clinician's goals. Here, a physician or a team of medical experts will combine information about past medical history, the current illness(es) and treatments to try to understand the significance of data about the patient's condition. The third level is related to projection, i.e. understanding what will happen with the patient in the future. Using the understanding of the current situation, a clinician or a healthcare team can for instance predict a patient's response to a particular treatment process [START_REF] Wright | Building shared situation awareness in healthcare settings[END_REF].
According to Endsley and Garland [START_REF] Endsley | Situation awareness: analysis and measurement[END_REF], situation awareness is in part formed by the availability of information. This information can be obtained from various sources such as sensory information from the environment, visual/auditory displays, decision aids and support systems, extra-and intra-team communication and team member background knowledge and experience. These information sources will have different levels of reliability giving rise to different levels of confidence in various information sources. Information is primarily aimed at: 1) reducing uncertainty in decision-making and 2) interpretation and sense making in relation to the current situation. Hence, situation awareness is derived from a combination of the environment, the system's displays and other people (team members) as integrated and interpreted by the individual.
In the context of figure 1, mental models help (or block) a person or a team in determining what information is important to attend to, as well as helping to form expectations. Without a 'correct' mental model it would be difficult to obtain satisfactory situation awareness. Processing novel cues without a good mental model strains the working memory and makes achieving SA much harder and much more prone to error. Mental models provide default information (expected characteristics of elements) that helps form higher levels of SA even when needed data is missing or incomplete. Mental models affect the way we handle decisions and actions under uncertainty.
Furthermore, any model of information behavior must indicate something about different stakeholders' information needs and sources. SA is a vital component of the decision making process regardless of the dynamics of the environment within which the decisions are made. SA shapes the mental model of the decision maker and as such influences the perceived choice alternatives and their outcomes. Although Endsley's work on situation awareness originated within the military and aviation domains, there has been increasing interest from other areas of research. Within the field of medical decision making research, Patel et al. [START_REF] Patel | Emerging paradigms of cognition in medical decisionmaking[END_REF] pointed out the limitations of the classical paradigm of decision research and called for more research on medical decision-making as it occurs in natural settings. Drawing on the concepts of naturalistic decision making and situation awareness, Patel et al. [START_REF] Patel | Emerging paradigms of cognition in medical decisionmaking[END_REF] argue that this will enable us to better understand decision processes in general, develop strategies for coping with suboptimal conditions, develop expertise in decision-making, as well as obtain a better understanding of how decision-support technologies can successfully mediate decision processes within medical decision making. Examples of research efforts covering situation awareness and decision making within healthcare can be found within anesthesiology [START_REF] Gaba | Situation Awareness in Anesthesiology[END_REF], primary care [START_REF] Singh | Exploring situational awareness in diagnostic errors in primary care[END_REF], surgical decision making [START_REF] Jalote-Parmar | Situation awareness in medical visualization to support surgical decision making[END_REF], critical decision making during health/medical emergencies [START_REF] Paturas | Establishing a Framework for Synchronizing Critical Decision Making with Information Analysis During a Health/Medical Emergency[END_REF] and within evidence-based medical practices in general [START_REF] Falzer | Cognitive schema and naturalistic decision making in evidence-based practices[END_REF]. Decision making theories will be further elaborated in section 3.3.
Returning to the model of situation awareness presented in figure 1, we notice that there are two factors that constrain practitioners in any complex and dynamic work domain [START_REF] Morrow | Reducing and Mitigating Human Error in Medicine[END_REF]: 1) the task/system factors and 2) the individual/team cognitive factors. Task/system factors focus on the characteristics of the work environment and the cognitive demands imposed on the practitioners operating in the domain under consideration. According to Vicente [START_REF] Vicente | Cognitive Work Analysis : Toward Safe, Productive, and Healthy Computer-Based Work[END_REF], this is called the ecological approach and is influenced by the physical and social reality. The cognitive factors, addressed by what is called the cognitivist approach, concern how the mental models, problem solving strategies, decision making and preferences of the practitioners are influenced by the constraints of the work domain.
In the next section we will discuss the main features of decision making, thus covering the cognitivist perspective. In section 4 we also look closer at how enterprise or process models can be used to describe the task environment (i.e. the ecology).
Theories of clinical decision making -from decision-analytic to intuitive decision models
According to the cognitivist perspective, the level of situation awareness obtained is, among other factors, influenced by the practitioner's goals, expectations, mental model (problem understanding), and training. With reference to Endsley's model in fig. 1, we see that decision making is directly influenced by a person's or a team's situation awareness.
The decision making process can be described in more than one way. A classic description of decision making relates the concept to the result of a gradual process, the decision process, performed by an actor: the decision maker. The philosopher Churchman puts it this way: the manager is the man who decides among alternative choices; he must decide which choice he believes will lead to a certain desired objective or set of objectives [START_REF] Churchman | Challenge to Reason[END_REF]. The decision-making process is described with various action steps and features from one definition to another. Typical steps are the generation of solution alternatives, evaluation of the impact/consequences of options and choice of solutions based on evaluation results and given criteria [START_REF] Ellingsen | Decision making and information. Conjoined twins?[END_REF]. Mintzberg et al. [START_REF] Mintzberg | The Structure of "Unstructured" Decision Processes[END_REF] have identified three central phases in a general decision making process: 1) identification, 2) development and 3) selection, each described by a set of supporting 'routines' and by dynamic factors explaining the relationship between the central phases and the supporting routines. The identification phase consists of the decision recognition and diagnosis routines, while the development phase consists of the search and design routines. Finally, the selection phase is a highly iterative process that consists of the screening, evaluation-choice and authorization routines. In a similar manner, Power [START_REF] Power | Decision Support Systems: Concepts and Resources for Managers[END_REF] defines a decision process as consisting of seven stages or steps: 1) defining the problem, 2) deciding who should decide, 3) collecting information, 4) identifying and evaluating alternatives, 5) deciding, 6) implementing and 7) follow-up assessment. In an attempt to improve decision support in requirements engineering, Alenljung and Persson [START_REF] Alenljung | Portraying the practice of decision-making in requirements engineering: a case of large scale bespoke development[END_REF] combine Mintzberg's and Power's staged decision process models. Mosier and Fischer [START_REF] Mosier | Judgment and Decision Making by Individuals and Teams: Issues, Models, and Applications[END_REF] discuss decision making in terms of both front-end judgment processes and back-end decision processes. The front-end processes involve handling and evaluating the importance of cues and information, formulating a diagnosis, or assessing the situation. According to Mosier and Fischer, the back-end processes involve retrieving a course of action, weighing different options, or mentally simulating a possible response. This is illustrated in figure 2.
Fig. 2. Components of the decision making process (adapted from [47]): front-end judgment processes and back-end decision processes.
The decision making process is often categorized into rational/analytical and naturalistic/intuitive decision making [START_REF] Roy | Decision-making models[END_REF]. This distinction refers to two broad categories of decision-making modes that are not mutually exclusive. This implies that any given decision process in reality consists of analytical as well as intuitive elements. Kushniruk [START_REF] Kushniruk | Analysis of complex decision-making processes in health care: cognitive approaches to health informatics[END_REF] argues that the cognitive processes taking place during clinical decision making can be located along a cognitive continuum, which ranges between intuition and rational analysis. Models of rational-analytical decision-making can be divided into two different approaches, the normative and the descriptive approach. Classical normative economic theory assumes complete rationality during decision-making processes, using axiomatic models of uncertainty and risk (e.g. probability theory or Bayesian theory) and utility (including multi-attribute utility theory), as illustrated by Expected Utility Theory [50] and Subjective Expected Utility [START_REF] Savage | Foundations of Statistics[END_REF]. Here, the rationally best course of action is selected among all available possibilities in order to maximize returns. Theories of rational choice represent, however, an unrealistic model of how decision makers act in real-world settings. It has been shown that there is a substantial non-rational element in people's thinking and behavior, along with practical limits to human rationality. These factors are evident in several descriptive theories, exemplified by Prospect Theory [START_REF] Kahneman | Prospect Theory: An analysis of decision under risk[END_REF], Regret Theory [START_REF] Loomes | Regret Theory: An Alternative Theory of Rational Choice Under Uncertainty[END_REF] as well as Simon's theory of bounded rationality [START_REF] Simon | Rational decision making in business organizations[END_REF]. According to Simon, the limits of human rationality are imposed by the complexity of the world, the incompleteness of human knowledge, the inconsistencies of individual preferences and belief, the conflicts of value among people and groups of people, and the inadequacy of the amount of information people can process/compute. The limits to rationality are not static, but depend on the organizational context in which the decision-making process takes place. In order to cope with bounded rationality, clinical decision makers rely on cognitive short-cutting mechanisms or strategies, called heuristics, which allow the clinician to make decisions when facing poor or incomplete information. There are, however, disadvantages related to the use of heuristics. In some circumstances heuristics lead to systematic errors called biases [START_REF] Gorini | An overview on cognitive aspects implicated in medical decisions[END_REF] that influence the process of medical decision making in a way that can lead to undesirable effects on the quality of care.
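To make the contrast between the normative and descriptive views concrete, the following sketch computes, for two invented treatment options, an expected utility and a prospect-theory-style value in which losses weigh more heavily than gains. It is a toy Python illustration with assumed outcome values and the commonly cited parameter estimates (alpha = 0.88, loss-aversion lambda = 2.25); probability weighting is omitted for brevity.

```python
# Toy comparison of a normative (expected utility) and a descriptive (prospect-like) valuation.
def expected_utility(outcomes):                     # outcomes: list of (probability, value)
    return sum(p * x for p, x in outcomes)

def prospect_value(outcomes, alpha=0.88, lam=2.25):
    def v(x):                                       # gains: x**alpha, losses: -lam * (-x)**alpha
        return x ** alpha if x >= 0 else -lam * (-x) ** alpha
    return sum(p * v(x) for p, x in outcomes)

risky_treatment = [(0.80, 40), (0.20, -30)]         # invented outcome values
safe_treatment  = [(1.00, 20)]

print(expected_utility(risky_treatment), expected_utility(safe_treatment))  # 26.0 vs 20.0
print(prospect_value(risky_treatment), prospect_value(safe_treatment))      # loss aversion can reverse the ranking
```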
At the other end of the cognitive continuum proposed by Kushniruk [START_REF] Kushniruk | Analysis of complex decision-making processes in health care: cognitive approaches to health informatics[END_REF], one finds naturalistic or intuitive decision making models. Since the 1980s, a considerable amount of research has been conducted on how people make decisions in real-world complex settings (see for example [START_REF] Klein | Naturalistic decision making[END_REF]). One of the most important features of naturalistic decision-making is the explicit attempt to understand how people handle complex tasks and environments. According to Zsambok [START_REF] Zsambok | Naturalistic Decision Making (Expertise: Research & Applications[END_REF], naturalistic decision making can be defined as "how experienced people working as individuals or groups in dynamic, uncertain, and often fast-paced environments, identify and assess their situation, make decisions, and take actions whose consequences are meaningful to them and to the larger organization in which they operate". Different decision models that are based on the principles of naturalistic decision making are Recognition-primed Decision Model [START_REF] Klein | Naturalistic decision making[END_REF][START_REF] Zsambok | Naturalistic Decision Making (Expertise: Research & Applications[END_REF], Image theory [START_REF] Beach | The Psychology of Decision Making: People in Organizations[END_REF], the Scenario model [START_REF] Beach | The Psychology of Decision Making: People in Organizations[END_REF] and Argument-driven models [START_REF] Lipshitz | Decision making as argument-driven action[END_REF]. Details of these models will not be discussed further in this paper.
Research in healthcare decision making has largely been occupied with the 'decision event', i.e. a particular point in time when a decision maker considers different alternatives and chooses a possible course of action. Apart from the naturalistic decision making field, Kushniruk [START_REF] Kushniruk | Analysis of complex decision-making processes in health care: cognitive approaches to health informatics[END_REF] and Patel et al. [START_REF] Patel | Emerging paradigms of cognition in medical decisionmaking[END_REF] have proposed a greater focus on medical problem solving, i.e. the processes that precede the decision event. In essence, this argument is in line with Endsley's model of situation awareness.
Turning our attention to the environmental perspective in Endsley's SA model, we will in the next section discuss the modeling of healthcare processes and workflows.
Process modeling within healthcare
Process modeling in healthcare has previously been applied in the analysis and optimization of pathways, during requirements elicitation for clinical information systems and for general process quality improvement [START_REF] Lenz | IT Support for Healthcare Processes -Premises, Challenges[END_REF][START_REF] Becker | Health Care Processes -A Case Study in Business Process Management[END_REF][START_REF] Ramudhin | A Framework for the Modelling, Analysis and Optimization of Pathways in Healthcare[END_REF][START_REF] Staccini | Modelling health care processes for eliciting user requirements: a way to link a quality paradigm and clinical information system design[END_REF][START_REF] Petersen | Patient Care across Health Care Institutions: An Enterprise Modelling Approach[END_REF]. Other approaches, mainly from the human-factors field, have used process models as a tool for building shared understanding within teams (e.g. [START_REF] Fiore | Process mapping and shared cognition: Teamwork and the development of shared problem models, in Team cognition: Understanding the factors that drive process and performance2004[END_REF]). The adoption of traditional process modeling in healthcare is challenging in many respects. The challenges can, among other factors, be attributed to [START_REF] Miller | Care Coordination in Intensive Care Units: Communicating Across Information Spaces[END_REF][START_REF] Ramudhin | A Framework for the Modelling, Analysis and Optimization of Pathways in Healthcare[END_REF]:
• Interrupt and event driven work, creating the need for dynamic decision making and problem solving.
• Processes that span multiple medical disciplines, involving complex sets of medical procedures.
• Different types of, and often individualized, treatments.
• A large number of possible and unpredictable patient care pathways.
• Many inputs (resources and people) that can be used in different places.
• Frequent changes in technology, clinical procedures and reorganizations.
In addition, there are different levels of interacting processes in healthcare, as in other organizational domains. Lenz et al. [START_REF] Lenz | IT Support for Healthcare Processes -Premises, Challenges[END_REF] made a distinction between site-specific and site-independent organizational processes (e.g. medical order entry, patient discharge or result reporting) and medical treatment processes (e.g. diagnosis or specific therapeutic procedures). These distinctions are shown in Table 1. In a similar manner, Miller et al. [START_REF] Miller | Care Coordination in Intensive Care Units: Communicating Across Information Spaces[END_REF] identified four nested hierarchical levels of decision making, including 1) unit resource coordination, 2) care coordination, 3) patient care planning and 4) patient care delivery. They conclude that care coordination and decision making involve two distinct 'information spaces': one associated with the coordination of resources (levels 1 & 2 above) and one with the coordination and administration of patient care (levels 3 & 4) ([10], p. 157). These levels are not independent. Miller et al. found a strong association between patient-related goals and team coordination goals, and called for more research regarding the modeling of information flows and conceptual transitions (i.e. coordination activities) across information spaces. In the remainder of this section we will present a few examples of process modeling efforts related to healthcare settings. This is not a comprehensive review, but serves to illustrate the type of research that has been done in the area.
Fiore et al. suggest that process modeling can be used as a problem-solving tool for cross-functional teams. They argue that process modeling efforts can lead to the construction of a shared understanding of a given problem [START_REF] Fiore | Process mapping and shared cognition: Teamwork and the development of shared problem models, in Team cognition: Understanding the factors that drive process and performance2004[END_REF]. Here, the modeling process in itself enables team members to improve a limited understanding of the business process in question. In a similar manner, Aguilar-Savén claims that business process modeling enables a common understanding and analysis of a business process and argues that a process model can provide a comprehensive understanding of a process [START_REF] Aguilar-Savén | Business process modelling: Review and framework[END_REF].
In an attempt to investigate how process models can be used to build shared understanding within healthcare teams, Jun et al. identified eight distinct modeling methods and evaluated how healthcare workers perceived the usability and utility of different process modeling notations [START_REF] Jun | Health Care Process Modelling: Which Method When?[END_REF]. Among the modeling methods evaluated were traditional flowcharts, data flow diagrams, communication diagrams, swim-lane activity diagrams and state transition diagrams. The study, which included three different cases in a real-world hospital setting, concluded that healthcare workers considered the usability and utility of traditional flowcharts better than those of other diagram types. However, the complexity within the healthcare domain indicated that the use of a combination of several diagrams was necessary.
Rojo et al. applied BPMN when describing the anatomic pathology sub-processes in a real-world hospital setting [START_REF] Rojo | Implementation of the Business Process Modelling Notation (BPMN) in the modelling of anatomic pathology processes[END_REF]. They formed a multidisciplinary modeling team consisting of software engineers, health care personnel and administrative staff. The project was carried out in six stages: informative meetings, training, process selection, definition of work method, process description and process modeling. They concluded that the modeling effort resulted in an understandable model that easily could be communicated between several stakeholders.
Addressing the problem of aligning healthcare information systems to healthcare processes, Lenz et al. developed a methodology and a tool (Mapdoc) used to model clinical processes [START_REF] Lenz | Towards a continuous evolution and adaptation of information systems in healthcare[END_REF]. A modified version of UML's Activity Diagram was used to support interdisciplinary communication and cultivate a shared understanding of relevant problems and concerns. Here, the focus was to describe the organizational context of the IT application. They found process modeling to be particularly useful in projects where organizational redesign was among the goals.
Ramudhin et al. observed that modeling efforts within healthcare often involved the combination of multiple modeling methods or additions to existing methodology [START_REF] Ramudhin | A Framework for the Modelling, Analysis and Optimization of Pathways in Healthcare[END_REF]. They proposed an approach that involved the development of a new modeling framework customized for the healthcare domain, called medBPM. One novel feature of the framework was that all relevant aspects of a process were presented in one single view. The medBPM framework was tested in a pilot project in a US hospital. Preliminary results were encouraging with regard to the framework's ability to describe both "as-is" (descriptive) and "to-be" (prescriptive) processes.
In a recent paper, Fareedi et al. identified roles, tasks, competences and goals related to the ward round process in a healthcare unit [START_REF] Ali Fareedi | Modelling of the Ward Round Process in a Healthcare Unit[END_REF]. They used a formal approach to implement the modeling results in the form of an ontology using OWL and the Protégé ontology editor. The overall aim was to improve the effectiveness of information systems use in healthcare by using the model to represent the information needs of clinical staff. Another point made by the authors was the formal ontology's direct applicability in improving the information flow in the ward round process. An ontological approach was also taken by Fox et al.: in the CREDO project the aim was to apply ontological task and goal modeling in order to support complex treatment plans and care pathways [ ].
A common feature of all the languages used in these research efforts is that they presuppose a rational decision maker following relatively simple if-then-else or case-switch structures leading to a choice between one of several known alternatives. Here, the decision process itself is embedded in the upstream activities/tasks preceding the decision point. The decision point then simply acts as the point in time when a commitment to action is made. This is unproblematic for trivial, structured decision episodes, but falls short of describing the factors influencing an unstructured problem/decision situation like the ones encountered within complex and dynamic healthcare processes.
Research Directions
The objective of our research is to use different conceptualizations and models of situation awareness in combination with models of clinical decision making as a "theoretical lens" for capturing and describing the decision requirements (i.e. knowledge/expertise, goals, resources, and information, communication and coordination needs) related to the perception, comprehension and projection of a situation leading up to a critical decision. The aim is to investigate how we can model these requirements as extensions to conventional process modeling languages (e.g. BPMN), possibly in the form of a discrete decision perspective [START_REF] Curtis | Process modeling[END_REF]. The GRAI Grid formalism, as described for instance in [START_REF] Lillehagen | Active Knowledge Modeling of Enterprises[END_REF][START_REF] Ravat | Collaborative Decision Making: Perspectives and Challenges[END_REF], is of particular interest to investigate further, as it focuses on the decisional aspects of the management of systems. The GRAI grid defines decision centres (points where decisions are made) as well as the informational relationships among these decision points.
In our work, the following preliminary research questions have been identified:
• What can the main research results within clinical decision making and situation awareness tell us about how experts adapt to complexity and dynamism, and synthesize and interpret information in context, for the purpose of decision making in dynamic work environments?
• How can we model the concepts of "situation" and "context" in complex and dynamic healthcare processes characterized by high levels of coordination, communication and information needs?
• Will the use of a separate decision perspective in a process model enhance the knowledge building process [START_REF] Fiore | Towards an understanding of macrocognition in teams: developing and defining complex collaborative processes and products[END_REF] and the shared understanding of the decision context among a set of stakeholders?
• Will the use of a separate decision perspective in process models lead to a better understanding of how we can design information system support for decision-making tasks in dynamic work environments?
To address these areas one needs to design and evaluate a set of modeling constructs that makes it possible to represent aspects of coordination, communication and decision making in clinical processes. This involves identifying relevant case(s) from a healthcare work environment and collecting data, using participant observation and interviews of subjects in their natural work settings, that can be used as a basis for further research work. The development of the modeling constructs can be done using the principles from design science described for instance by Hevner et al. [START_REF] Hevner | Design Science in Information Systems Research[END_REF] and March et al. [START_REF] March | Design and Natural Science Research on Information Technology[END_REF]. Hevner et al. [START_REF] Hevner | Design Science in Information Systems Research[END_REF] define design science as an attempt to create artifacts that serve human purposes, as opposed to the natural and social sciences, which try to understand reality as it is. We intend to develop a set of modeling constructs (i.e. design artifacts) that can describe the decision requirements that form the basis for adequate situation awareness in complex and dynamic healthcare processes. By developing a decision view, it is possible to envision process models that communicate a decision-centric view in addition to the traditional activity-, role- or information-centred views. From the previous discussion on situation awareness and decision making models, we intend to define what conceptual elements should be included in the decision view. Taking into consideration Endsley's model of situation awareness, the concept of a situation is central, along with what constitutes timely, relevant information attuned to the decision maker's current (but probably changing) goals. A number of criteria have been defined to characterize and assess the quality of enterprise and process models and modeling languages (see for instance [START_REF] Krogstie | Model-Based Development and Evolution of Information Systems: A Quality Approach[END_REF]). Hence, the model constructs developed in relation to the previously mentioned decision view must be evaluated with respect to a set of modeling language quality criteria.
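As a purely illustrative sketch of what such a decision view might contain (all class and attribute names below are our assumptions, not a finished set of constructs), the following Python fragment combines Endsley's situation elements with a GRAI-style decision centre attached to a process activity:

```python
# Illustrative, hypothetical constructs for a decision view on top of a process model.
from dataclasses import dataclass, field

@dataclass
class ContextElement:                # a cue that must be perceived (SA level 1)
    name: str
    source: str                      # e.g. "patient monitor", "lab system", "team member"

@dataclass
class Situation:                     # the comprehended and projected state (SA levels 2-3)
    description: str
    cues: list[ContextElement] = field(default_factory=list)

@dataclass
class DecisionPoint:                 # a GRAI-like decision centre attached to a process activity
    activity: str
    goal: str
    situation: Situation
    required_roles: list[str] = field(default_factory=list)
    information_needs: list[str] = field(default_factory=list)

triage = DecisionPoint(
    activity="Assign patient to operating room",
    goal="Minimise waiting time without compromising safety",
    situation=Situation("OR occupancy and patient acuity",
                        [ContextElement("vital signs", "patient monitor"),
                         ContextElement("OR schedule", "planning system")]),
    required_roles=["anaesthetist", "OR coordinator"],
    information_needs=["current OR occupancy", "staff availability"],
)
print(triage.goal)
```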
Conclusion
In this paper we have discussed the state of the art in modeling of processes and decisions within health care. The paper relates three strands of research: 1) healthcare process modeling, 2) situation awareness and decision-making theories, and 3) decision support technologies, with the overall aim of improving decision quality within healthcare.
Studying the dynamic decision-making process under complex conditions can lead us to a better understanding of the communication, coordination and information needs of healthcare personnel operating in dynamic and challenging environments. In addition, we propose that the ability to express these insights as one of several modeling perspectives of healthcare process models could prove useful for capturing the requirements that must be imposed on information systems support in dynamic work environments.
Fig. 1. Situation awareness (from [31], p. 35)
Table 1. Categorization of healthcare processes ([START_REF] Lenz | IT Support for Healthcare Processes -Premises, Challenges[END_REF])

                         | Organizational processes        | Patient treatment processes
Site-independent         | Generic process patterns        | Clinical guidelines
Site-specific adaptation | Organization-specific workflows | Medical pathways
http://www.w3.org/TR/owl-features/
http://protege.stanford.edu | 48,508 | [
"1003542",
"977578"
] | [
"487817",
"50794",
"50794"
] |
01484405 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484405/file/978-3-642-34549-4_9_Chapter.pdf | Janis Stirna
Jānis Grabis
email: grabis@rtu.lv
Martin Henkel
email: martinh@dsv.su.se
Jelena Zdravkovic
email: jelenaz@dsv.su.se
Capability Driven Development -an Approach to Support Evolving Organizations
Keywords: Enterprise modeling, capabilities, capability driven development, model driven development
The need for organizations to operate in changing environments is addressed by proposing an approach that integrates organizational development with information system (IS) development, taking into account changes in the application context of the solution - Capability Driven Development (CDD). A meta-model for representing business and IS designs, consisting of goals, key performance indicators, capabilities, context and capability delivery patterns, is proposed. The use of the meta-model is exemplified by a case from the energy efficiency domain. A number of issues related to the use of the CDD approach, namely the capability delivery application, the CDD methodology, and tool support, are also discussed.
Introduction
In order to improve alignment between business and information technology, information system (IS) developers continuously strive to increase the level of abstraction of development artifacts. A key focus area is making the IS designs more accessible to business stakeholders to articulate their business needs more efficiently. These developments include object-orientation, component based development, business process modeling, enterprise modeling (EM) and software services design. These techniques are mainly aimed at capturing relatively stable, core properties of business problems and at representing functional aspects of the IS [START_REF] Wesenberg | Enterprise Modeling in an Agile World[END_REF]. However, the prevalence and volatility of the Internet shifts the problem-solving focus to capturing instantaneous business opportunities [START_REF]Cloud Computing: Forecasting Change[END_REF] and increases the importance of nonfunctional aspects. Furthermore, the context of use for modern IS is not always predictable at the time of design; instead, an IS should have the capability to support different contexts. Hence, we should consider the context of use and under which circumstances the IS, in congruence with the business system, can provide the needed business capability. Thus, a system's capability is determined not only during design time but also at run-time, when the system's ability to handle changes in contexts is put to the test. The following anecdotal evidence can be used to illustrate the importance of capabilities. A small British bakery was growing successfully and decided to promote its business by offering its cupcakes at a discount via the collective buying website Groupon. As a result it had to bake 102 000 cupcakes and suffered losses comparable to its yearly profit. The bakery did not have mechanisms in place to manage the unforeseen and dramatic surge in demand - it did not have the capability of baking 102 000 cupcakes, nor mechanisms for foreseeing the consequences. Another example is a mobile telecommunications company offering telephone services over its network, similar in all respects to traditional fixed-line providers. Such a service consists of the same home telephone, with an additional box between the telephone and the wall. However, unlike ordinary fixed-line telephony, it cannot connect to emergency services (112) in the event of a power outage. In this case the provided capability is unstable in a changing context.
A capability-driven approach to development should be able to elevate all such issues and to produce solutions that fit the actual application context.
From the business perspective, we define a capability as being the ability to continuously deliver a certain business value in dynamically changing circumstances. Software applications (and their execution environments) are an integral part of capabilities. This means that it is important to tailor these applications with regard to functionality, usability, reliability and other factors required by users operating in varying contexts. That puts pressure on software development and delivery methods. The software development industry has responded by elaborating Model Driven Development (MDD) methods and by adopting standardized design and delivery approaches such as service-oriented architecture and cloud computing. However, there are a number of major challenges when it comes to making use of MDD to address business capabilities:
§ The gap between business requirements and current MDD techniques. Model driven approaches and tools still operate with artifacts defined on a relatively low abstraction level.
§ Inability to model execution contexts. In complex and dynamically changing business environments, modeling just a service providing business functionality in a very limited context of execution is not sufficient.
§ High cost for developing applications that work in different contexts. Software developers, especially SMEs, have difficulties marketing their software globally because of the effort it takes to adhere to localization requirements and constraints in the context of where the software will be used.
§ Limited support for modeling changes in non-functional requirements. Model driven approaches focus on functional aspects at a given time point, rather than representing the evolution of both functional and non-functional system requirements over time.
§ Limited support for "plasticity" in applications. The current context-aware and front-end adaptation systems focus mainly on technical aspects (e.g., location awareness and using different devices) rather than on business context awareness.
§ Limited platform usage. There is limited modeling support for defining the ability of the IS to make use of new platforms, such as cloud computing platforms. Cloud computing is a technology driven phenomenon, and there is little guidance for development of cloud based business applications.
We propose to support the development of capabilities by using EM techniques as a starting point of the development process, and to use model-based patterns to describe how the software application can adapt to changes in the execution context. Our vision is to apply enterprise models representing enterprise capabilities to create executable software with built-in contextualization patterns, thus leading to Capability Driven Development (CDD).
The objective of this paper is to present the capability meta-model, to discuss its feasibility by using an example case, and to outline a number of open development issues related to practical adoption of the CDD approach.
The research approach taken in this paper is conceptual and argumentative. Concepts used in enterprise modeling, context representation and service specification are combined together to establish the capability meta-model. Preliminary validation and demonstration of the CDD approach is performed using an example of designing a decision support system for optimizing energy flows in a building. Application of the meta-model is outlined by analyzing its role in development of capability delivery applications. The CDD methodology is proposed following the principles of agile, iterative and real-time software development methodologies.
The remainder of the paper is organized as follows. Section 2 presents related work. In section 3 requirements for CDD are discussed. Section 4 presents the CDD meta-model. It is applied to an example case in section 5. Section 6 discusses aspects of development methodology need for the CDD approach. The paper ends with some concluding remarks in section 7.
Related Work
In the strategic management discipline, a company's resources and capabilities have long been seen as the primary source of profitability and competitive advantage - [START_REF] Barney | Firm Resources and Sustained Competitive Advantage[END_REF] has united them into what has become known as the resource-based view of the company. Accordingly, Michael Porter's value chain identifies top-level activities with the capabilities needed to accomplish them [START_REF] Porter | Competitive Advantage: Creating and Sustaining Superior Performance[END_REF]. In Strategy Maps and Balanced Scorecards, Kaplan and Norton also analyze capabilities through the company's perspectives, e.g. financial, customers', and others [START_REF] Kaplan | Strategy Maps: Converting Intangible Assets into Tangible Outcomes[END_REF]. Following this, in the research within Business-IT alignment, there have been attempts to consider resources and capabilities as the core components in enterprise models, more specifically, in business value models [START_REF] Osterwlader | Modeling value propositions in e-Business[END_REF][START_REF] Kinderen | Reasoning about customer needs in multi-supplier ICT service bundles using decision models[END_REF]. However, in none of these works are capabilities formally linked to IS models. In the SOA reference architecture [START_REF] Oasis | Reference Architecture Foundation for Service Oriented Architecture Version 1.0[END_REF] capability has been described as a business functionality that, through a service, delivers a well-defined user need. However, in the specification, not much attention is given to the modeling of capability, nor is it linked to software services. In Web Service research, capability is considered purely on the technical level, through service level agreements and policy specifications [START_REF] Papazoglou | Design Methodology for Web Services and Business Processes[END_REF].
In order to reduce development time, to improve software quality, and to increase development flexibility, MDD has established itself as one of the most promising software development approaches. However, [START_REF] Asadi | MDA-Based Methodologies: An Analytical Survey[END_REF] show that the widely practiced MDD specialization - Model Driven Architecture (MDA) [START_REF] Kleppe | MDA Explained[END_REF] - and the methodologies that follow it mainly assume requirements as given a priori. [START_REF] Loniewski | A Systematic Review of the Use of Requirements Engineering Techniques in Model-Driven Development[END_REF] and [START_REF] Yue | A systematic review of transformation approaches between user requirements and analysis models[END_REF] indicate that MDA starts with system analysis models. They also survey various methods for integrating requirements into an overall model-driven framework, but do not address the issue of requirements origination. There is limited evidence of MDA providing the promised benefits [START_REF] Mohagheghi | Where Is the Proof? -A Review of Experiences from Applying MDE in Industry[END_REF]. Complexity of tools, their methodological weaknesses, and the low abstraction level of development artifacts are among the main areas of improvement for MDD tools [START_REF] Henkel | Pondering on the Key Functionality of Model Driven Development Tools: the Case of Mendix[END_REF].
Business modeling and Enterprise Modeling (EM) [START_REF]Perspectives on Business Modelling: Understanding and Changing Organisations[END_REF] have been used for business development and early requirements elicitation for many years, but a smooth (nearly automated) transition to software development has not been achieved due to the immaturity of the existing approaches and the lack of tools. Enterprise-wide models are also found in [17], where the enterprise architecture of ArchiMate is extended with an intentional aspect capturing the goals and requirements for creating an enterprise system. A comparable solution is developed in [START_REF] Pastor | Linking Goal-Oriented Requirements and Model-Driven Development[END_REF], where a generic process is presented for linking i* and the OO-Method as two representatives of Goal-Oriented Requirements Engineering (GORE) and MDD, respectively. In [START_REF] Zikra | Bringing Enterprise Modeling Closer to Model-Driven Development[END_REF] a recent analysis of the current state in this area is presented, together with a proposed meta-model for integrating EM with MDD.
Model driven approaches also show promise for the development of cloud-based applications, which has been extensively discussed at the 1st International Conference on Cloud Computing and Service Sciences, cf. [START_REF] Esparza-Peidro | Towards the next generation of model driven cloud platforms[END_REF][START_REF] Hamdaqa | A reference model for developing cloud applications[END_REF]. However, these investigations are currently at the conceptual level and are aimed at demonstrating the potential of MDD for cloud computing. A number of European research projects, e.g. REMICS and SLA@SOI, have been defined in this area.
Methods for capturing context in applications and services have achieved high level of maturity and they provide a basis for application of context information in software development and execution. [START_REF] Vale | COMODE: A framework for the development of contextaware applications in the context of MDE[END_REF] describe MDD for context-aware applications, where the context model is bound to a business model, encompassing information about user's location, time, profile, etc. Context awareness has been extensively explored for Web Services, both methods and architectures, as reported in [START_REF]Enabling Context-Aware Web Services: Methods, Architectures, and Technologies[END_REF]. It is also studied in relation to workflow adaptation [START_REF] Smanchat | A survey on context-aware workflow adaptations[END_REF]. Lately, [START_REF] Hervas | A Context Model based on Ontological Languages; a proposal for Information Visualisation[END_REF] has suggested a formal context model, compounded by ontologies describing users, devices, environment and services. In [START_REF] Liptchinsky | A Novel Approach to Modeling Context-Aware and Social Collaboration Processes[END_REF] an extension to State charts to capture context dependent variability in processes has been proposed.
Non-functional aspects of service-oriented applications are controlled using QoS data and SLA. Dynamic binding and service selection methods allow replacing underperforming services in run-time [START_REF] Comuzzi | A framework for QoS-based Web service contracting[END_REF]. However, QoS and SLA focus only on a limited number of technical performance criteria with little regard to business value of these criteria.
In summary, there are a number of contributions addressing the problem of adjusting the IS depending on the context; however, the business capability concept is not explicitly addressed in these context-oriented development approaches.
Requirements for Capability Driven Development
In this section we discuss a number of requirements motivating the need for CDD.
Currently the business situation in which the IS will be used is predetermined at design time. At run-time, only adaptations that are within the scope of the planned situation can usually be made. But in the emerging business contexts we need rapid response to changes in the business context and development of new capabilities, which also requires run-time configuration and adjustment of applications. In this respect a capability modeling meta-model linking business designs with application contexts and IS components is needed.
Designing capabilities is a task that combines both business and IS knowledge. Hence both domains need to be integrated in such a way that allows establishing IS support for the business capabilities.
Current EM and business development approaches have grown from the principle that a single business model is owned by a single company. In spite of distributed value chains and virtual organizations [START_REF] Davidow | The Virtual Corporation: Structuring and Revitalizing the Corporation for the 21st Century[END_REF] this way of designing organizations and their IS still prevails. The CDD approach would aim to support co-development and co-existence of several business models by providing "connection points" between business models based on goals and business capabilities.
Most of the current MDD approaches are only efficient at generating relatively simple data processing applications (e.g. form-driven). They do not support, for example, complex calculations, advanced user interfaces, or scalability of the application in the cloud. CDD should bring the state of the art further by supporting the modeling of the application execution context; this includes modeling the ability to switch service providers and platforms. Furthermore, the capability approach would also allow deploying more adequate security measures, by designing overall security approaches at design-time and then customizing them at deployment and run-time.
Foundation for Capability Driven Development
The capability meta-model presented in this section provides the theoretical and methodological foundation for CDD. The meta-model is developed on the basis of industrial requirements and related research on capabilities. An initial version of the meta-model is given in Figure 1. The meta-model has three main sections:
§ Enterprise and capability modeling. This focuses on developing organizational designs that can be configured according to the context-dependent capabilities in which they will be used, i.e. it captures a set of generic solutions applicable in many different business situations.
§ Capability delivery context modeling. This represents the situational context under which the solutions should be applied, including indicators for measuring the context properties.
§ Capability delivery patterns, representing reusable solutions for reaching business goals under different situational contexts. The context defined for the capability should match the context in which the pattern is applicable.
Enterprise and Capability Modeling
This part covers modeling of business goals, key performance indicators (KPI), and business processes needed to accomplish the goals. We also specify resources required to perform processes. The associations between these modeling components are based on the meta-model of EM approach EKD [29]. The concept of capability extends this meta-model towards being suitable for CDD.
Capability expresses an ability to reach a certain business objective within the range of certain contexts by applying a certain solution. Capability essentially links together business goals with patterns by providing contexts in which certain patterns (i.e. business solutions) should be applicable.
Each capability supports or is motivated by one business goal. In principle, business goals can be seen as internal means for designing and managing the organization, and capabilities as offerings to external customers. A capability requires or is supported by specific business processes, provided by specific roles, and it needs certain resources and IS components. The distinguishing characteristic of a capability is that it is designed to be provided in a specific context. The desired goal fulfillment levels can be defined by using a set of goal fulfillment indicators - Goal KPIs.
Context Modeling
The context is any information that can be used to characterize the situation in which the capability can be provided. It describes circumstances, i.e. the context situation, such as geographical location, platforms and devices used, as well as business conditions and environment. These circumstances are defined by different context types. The context situation represents the current context status. Each capability delivery pattern is valid for a specific set of context situations, as defined by the pattern validity space. The context KPIs are associated with a specific capability delivery pattern. They represent context measurements which are of vital importance for the capability delivery. The context KPIs are used to monitor whether the pattern chosen for capability delivery is still valid for the current context situation. If the pattern is not valid, then capability delivery should be dynamically adjusted by applying a different pattern or reconfiguring the existing pattern (i.e., changing the delivery process, reassigning resources, etc.). Technically, the context information is captured using a context platform in a standardized format (e.g. XCoA). Context values change according to the situation. The context determines how a capability is delivered, which is represented by a pattern.
Capability Delivery Pattern
A pattern is used to: "describe a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem in such a way that you can use this solution a million times over, without ever doing it the same way twice" [START_REF] Alexander | A pattern language[END_REF]. This principle of describing a reusable solution to a recurrent problem in a given context has been adopted in various domains such as software engineering, information system analysis and design [START_REF] Gamma | Design Patterns: Elements of Reusable Object-Oriented Software Architecture[END_REF] as well as organizational design.
Organizational patterns have proven to be a useful way for the purpose of documenting, representing, and sharing best practices in various domains (c.f. [START_REF] Niwe | Organizational Patterns for B2B Environments-Validation and Comparison[END_REF]).
In the CDD approach we amalgamate the principle of reuse and execution of software patterns with the principle of sharing best practices of organizational patterns. Hence, capability delivery patterns are generic and abstract design proposals that can be easily adapted, reused, and executed. Patterns will represent reusable solutions in terms of business process, resources, roles and supporting IT components (e.g. code fragments, web service definitions) for delivering a specific type of capability in a given context. In this regard the capability delivery patterns extend the work on task patterns performed in the MAPPER project [START_REF] Sandkuhl | Evaluation of Task Pattern Use in Web-based Collaborative Engineering[END_REF].
Each pattern describes how a certain capability is to be met within a certain context and what resources, processes, roles and IS components are needed. In order to provide a fit between required resources and available resources, KPIs for monitoring capability delivery quality are defined in accordance with the organization's goals. KPIs measure whether currently available resources are sufficient in the current context. In order to resolve resource availability conflicts, conflict resolution rules are provided.
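To make the meta-model concepts easier to relate to implementation artifacts, the sketch below renders the core notions of Fig. 1 (goal, KPI, context, capability and capability delivery pattern) as plain Python data classes. The class and attribute names are illustrative assumptions of ours and are not part of the published meta-model, which is defined at the enterprise-modeling level (EKD/UML), not as a code-level API.

```python
# Illustrative (non-normative) rendering of the capability meta-model of Fig. 1.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class KPI:
    name: str
    target: float                 # desired value or threshold
    unit: str = ""


@dataclass
class Goal:
    name: str
    kpis: List[KPI] = field(default_factory=list)           # Goal KPIs


# A context situation is the current value of every relevant context type.
ContextSituation = Dict[str, str]


@dataclass
class CapabilityDeliveryPattern:
    name: str
    process_variants: List[str]                              # e.g. "Manual data entry"
    validity_space: Dict[str, List[str]]                     # context type -> admissible values
    context_kpis: List[KPI] = field(default_factory=list)

    def is_valid_for(self, situation: ContextSituation) -> bool:
        # The pattern is applicable when every constrained context type of its
        # validity space has an admissible value in the current situation.
        return all(situation.get(ctx_type) in values
                   for ctx_type, values in self.validity_space.items())


@dataclass
class Capability:
    name: str
    goal: Goal                                               # supports exactly one business goal
    patterns: List[CapabilityDeliveryPattern] = field(default_factory=list)
```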
Example Case
To exemplify the proposed approach we model a case of a building operator aiming to run its buildings efficiently and in an environmentally sustainable manner. The case is inspired by the FP7 project EnRiMa - "Energy Efficiency and Risk Management in Public Buildings" (proj. no. 260041). The objective of the EnRiMa project is to develop a decision support system (DSS) for optimizing energy flows in a building. In this paper we envision how this service will be used after the DSS becomes operational. The challenge that the capability driven approach should address is the need to operate different buildings (e.g. new, old, carbon neutral) in different market conditions (e.g. fixed energy prices, flexible prices), with different energy technologies (e.g. energy storage, photovoltaic (PV)), and with different ICT technologies (e.g. smart sensors, advanced ICT infrastructure, closed ICT infrastructure, remote monitoring, no substantial ICT support). The EnRiMa DSS aims to provide building-specific optimization by using customized energy models describing the energy flows for each building. The optimization can be based on using building data from the on-site building management systems, for example giving the current temperature and ventilation air flow. The project also aims to provide a DSS that can be installed on-site or via deployment in the cloud.
Fig. 2. A generic goal model for a building operator
Enterprise Modeling
The top goal is refined into a number of sub-goals, each linked to one or several KPIs. This is a simplification; in real life there are more sub-goals and KPIs to consider than Figure 2 shows. In this particular case the decomposition of the top goal into the five sub-goals should be seen in conjunction with the KPIs, i.e. the building operator wants to achieve all of the sub-goals, but since that is not possible for each particular building, the owner defines specific KPIs to be used for the optimization tasks.
In summary, KPIs are used when designing the capabilities to set the level of goal fulfillment that is expected from the capabilities. In the capability driven approach presented here we use indicators to define the different levels of goal fulfillment that we can expect.
Processes are central for coordinating the resources that are needed for a capability. In this case there are processes that are executed once, e.g. for the initial configuration of the system, and then re-executed when the context changes. We here include four basic processes:
Energy audit and configuration process. As a part of improving the energy efficiency of a building there is a need to perform an energy audit and to configure the decision support system with general information on the building. The energy audit will result in a model of the building's energy flows, for example to determine how much of the electricity goes to heating, and to determine the efficiency level of the technical equipment (such as boilers). Besides the energy flow there is also a need to configure the system with information about the glass area of the building, hours of operation and so on. Depending on the desired capability the process can take a number of variants, ranging from simple estimation to full-scale audits. Note that if the context changes, for example if the installed energy technology in the building changes, there is a need to repeat the configuration. We here define two variants of this process: Template based - using generic building data to estimate energy flows; Full energy audit - doing a complete energy flow analysis, leading to a detailed model of the building.
ICT infrastructure integration process. To continuously optimize the energy efficiency of a building there is a need to monitor the building's behavior via its installed building management system. For example, by monitoring the temperature changes the cooling system can be optimized to not compensate for small temperature fluctuations. This process can take several variants, depending on the context in the form of the building management system's ability to integrate with external systems. In this case we define two variants: Manual data entry - data entered manually; Integration - data fetched directly from the building management system. The actual integration process depends on which building management system is installed (e.g. the Siemens Desigo system).
Deployment process. Depending on the access needs, the decision support system can be executed at the building site, at a remote location, or on a cloud platform provided by an external provider. Process variants: On-site, External, Cloud provider.
Energy efficiency monitoring and optimization process. This process is at the core of delivering the capability, i.e. monitoring, analyzing and optimizing the energy flows is what can lead to a lower energy consumption. A very basic variant, addressing a simple context, is to just monitor for failures in one of the building systems. A more advanced variant, catering to highly automated buildings, is to perform a daily, automated analysis to change the behavior of the installed building technologies. Process variants: Passive monitoring - monitoring for failures; Active optimization - performing pro-active optimizations based on detailed estimations. Depending on the context, the variants of these processes can be activated; this will be described in the next section.
Context Modeling
The DSS can be deployed to a wide range of contexts. To exemplify the varying conditions we here describe two simplified context types:
Older building, low ICT monitoring - where the building has a low degree of ICT integration abilities, and the overall desire of the building owner is to monitor the building's energy usage and minimize costs.
Modern building, high ICT infrastructure - where integration with the building system is possible, a building model allowing continuous optimizations is possible, and the building owner wants to balance CO2 emissions and cost minimization.
Each of these context types can be addressed by capabilities (see Figures 3 and 4) that guide the selection of the right processes or process variants; this will be further described in the section on patterns. The examples here present the enterprise models at design time. To detect a context change at run-time, we define a set of context KPIs. These allow us to monitor the goal fulfillment at run-time by comparing the measurable situational properties. For example, the context KPI "Energy consumption 200 kWh/m2" should be compared with the actual energy consumption (see Figure 3).
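As an illustration of how such context KPIs and validity spaces could be evaluated at run-time, the sketch below (reusing the illustrative classes sketched at the end of Sec. 4) checks whether the active capability delivery pattern is still valid for the measured context situation and which context KPIs are violated. The concrete names, values and the 200 kWh/m2 threshold are examples only, not EnRiMa project data.

```python
# Illustrative run-time check of pattern validity and context KPIs.

def select_pattern(capability, situation):
    """Return the first capability delivery pattern valid for the situation."""
    for pattern in capability.patterns:
        if pattern.is_valid_for(situation):
            return pattern
    raise LookupError("no pattern is valid for the current context situation")


def violated_context_kpis(pattern, measurements):
    """Names of context KPIs whose measured value exceeds the target,
    e.g. an energy consumption above 200 kWh/m2 per year."""
    return [kpi.name for kpi in pattern.context_kpis
            if measurements.get(kpi.name, 0.0) > kpi.target]


# Example usage with invented data:
# situation = {"ICT integration": "closed system", "building age": "old"}
# if violated_context_kpis(active, {"Energy consumption": 240.0}) \
#         or not active.is_valid_for(situation):
#     active = select_pattern(capability, situation)   # switch delivery pattern
```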
Discussion
In this section we will discuss issues pertinent to usage of CDD, namely capability delivery application (CDA), CDD methodology, and tool support.
Capability Delivery Application
A company requesting a particular capability represents it using the concepts of CDD meta-model. The main principle of CDD is that, in comparison to traditional development methods, the software design part is supported by improving both the analysis side and the implementation side. From the analysis side, the capability representation is enriched and architectural decisions are simplified by using patterns. From the implementation side, the detailed design complexity is reduced by relying on, for example, traditional web-services or cloud-based services. The resulting CDA is a composite application based on external services.
Figure 5 shows three conceptual layers of the CDA: (1) Enterprise Modeling layer; (2) design layer; and (3) execution layer. The EM layer is responsible for high level of representation of required capabilities. The design layer is responsible for composing meta-capabilities from capability patterns, which is achieved by coupling patterns with executable services. The execution layer is responsible for execution of the capability delivery application and its adjustment to the changing context.
The requested capability is modeled using the EM techniques and according to the capability meta-model as described in this paper. The patterns are analyzed in order to identify atomic capabilities that can be delivered by internal or external services by using a set of service selection methods. These service selection methods are based on existing service selection methods [START_REF] Chen | A method for context-aware web services selection[END_REF]. Availability of internal services is identified by matching the capability definition against the enterprise architecture, and a set of the matching rules will have to be elaborated.
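A minimal sketch of the kind of matching rule alluded to here is given below: a candidate service is retained if it covers the required operations of an atomic capability and stays within the context-dependent non-functional bounds, with internal services preferred over external ones. The rule set and the data layout are our own assumptions, not part of the referenced service selection methods.

```python
# Illustrative capability-to-service matching; the candidate layout is assumed:
# {'name', 'internal': bool, 'operations': set, 'qos': {'latency_ms': ..., 'cost': ...}}

def service_matches(required_ops, nf_bounds, service):
    if not required_ops <= service["operations"]:
        return False
    return all(service["qos"].get(metric, float("inf")) <= bound
               for metric, bound in nf_bounds.items())


def select_services(required_ops, nf_bounds, candidates):
    """Keep matching candidates, preferring internal services over external ones."""
    hits = [s for s in candidates if service_matches(required_ops, nf_bounds, s)]
    return sorted(hits, key=lambda s: 0 if s.get("internal") else 1)
```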
Fig. 5. Layered view of capability delivery application
A process composition language is used to orchestrate services selected for delivering the requested capability. The process composition model includes multiple process execution variants [START_REF] Lu | On managing business processes variants[END_REF]. The capabilities are delivered with different front-ends, which are modelled using an extended user interface modelling language. The external services used in the CDA should be able to deliver the requested performance in the defined context. The necessary service functionality and non-functional requirements corresponding to the context definition are transformed into a service provisioning blueprint [START_REF] Nguyen | Blueprint Template Support for Engineering Cloud-Based Services[END_REF], which is used as a starting point for binding capability delivery models with executable components and their deployment environment. The service provisioning blueprint also includes KPIs to be used for monitoring the capability delivery. We envision that the CDA is deployed together with its simulation model and run-time adjustment algorithms based on goal and context KPIs. The key task of these algorithms is to enact the appropriate process execution variant in response to a context change.
Business capabilities could also be delivered using traditional service-oriented and composite applications. However, the envisioned CDA better suits the requirements of CDD by providing integration with enterprise models and built-in algorithms for dynamic application adjustment in response to a changing execution context.
The Process of Capability Driven Development
An overview of the envisioned CDD process is shown in Figure 6. It includes three main capability delivery cycles: 1) development of the capability delivery application; 2) execution of the capability delivery application; and 3) capability refinement and pattern updating. These three cycles address the core requirements of CDD by starting development with enterprise-level organizational and IS models, adjusting the capability delivery during the application run-time, and establishing and updating capability delivery patterns. CDD should also encompass run-time adjustment algorithms because the capability is delivered in a changing context, where both business factors (e.g., the current business situation (growth, decline), priorities, personnel availability) and technical factors (e.g., location, device, workload) matter. Once the CDA is developed and deployed, it is continuously monitored and adjusted according to the changing context. Monitoring is performed using KPIs included in the system during development, and adjustment is made using algorithms provided by the CDD methodology.
Tool support is also important for CDD. EM is a part of CDD, and for this purpose a modeling tool is needed. It should mainly address the design phase because at run-time the tools provided by the target platform will be used.
We are currently planning to develop an open source Eclipse-based tool for CDD and will use the Eclipse EMF plug-in and other relevant plug-ins as the development foundation. Models are built on the basis of extensions of modeling languages such as EKD, UML and executable BPMN 2.0.
Concluding Remarks and Future Work
We have proposed an approach that integrates organizational development with IS development taking into account changes in the application context of the solution - Capability Driven Development. We have presented a meta-model for representing business designs and exemplified it by a case from the energy efficiency domain. This is, in essence, research in progress, and hence we have also discussed a number of issues for future work related to the use of the CDD approach, namely the capability delivery application, the CDD methodology, and tool support.
The two important challenges to be addressed are the availability of patterns and the implementation of algorithms for dynamic adjustment of the CDA. In order to ensure pattern availability, an infrastructure and methods for life-cycle management of patterns are required. In some cases, incentives for sharing patterns among companies can be devised. That is particularly promising in the field of energy efficiency. There could be a large number of different adjustment algorithms. Their elaboration and implementation should follow a set of general, open principles for incorporating algorithms developed by third parties.
The main future directions are a thorough validation of the capability meta-model and the formulation of rules for matching required capabilities to existing or envisioned enterprise resources represented in the form of enterprise models and architectures.
Fig. 1. The initial capability meta-model
Fig. 6. Capability Driven Development methodology
Table 1. Example of two context patterns, each making use of process variants.

Capability delivery pattern contains:   | Capability: Old building, low ICT | Capability: Modern building, high ICT
ICT infrastructure integration process  | Pattern: Manual data entry        | Pattern: Integrate with Siemens Desigo
Energy audit and configuration process  | Pattern: Template based audit     | Pattern: Run full energy audit
To support development of CDA, a CDD methodology is needed. It is based on agile and model driven IS development principles and consists of the CDD development process, a language for representing capabilities according to the CDD meta-model, as well as modeling tools. The main principles of the CDD methodology should be:
§ Use of enterprise models understandable to business stakeholders,
§ Support for heterogeneous development environment as opposed to a single vendor platform,
§ Equal importance of both design-time and run-time activities with clear focus on different development artifacts,
§ Rapid development of applications specific to a business challenge,
§ Search for the most economically and technically advantageous solution.
The patterns shown here omit details such as forces and usage guidelines, e.g. explaining how to apply and use the processes and/or executable services. In a real life case they should be developed and included in the pattern body.
Capability Delivery Patterns
The EnRiMa DSS will be used to balance various, often contradictory, operator goals, e.g. to lower the energy costs in buildings and to reduce CO2 emissions. Each building however is different, and thus the context of execution for the system will vary. Therefore we design a set of process variants. The role of capability delivery patterns is to capture and represent which process variants should be used in which contexts to deliver which capabilities. For example, if a building has a Siemens Desigo building management system, then a pattern describes how to integrate it with the EnRiMa DSS and which executable components (e.g. web services) should be used. If the building has a closed system, then manual data input should be used instead. Table 1 shows two capabilities and their relation to variants of the energy audit and of the integration with the building's existing ICT systems. Moreover, we identify the context KPIs that can be of use when monitoring the process execution. | 38,875 | [
"977607",
"1002486",
"1003544",
"942421"
] | [
"300563",
"302733",
"300563",
"300563"
] |
01353135 | en | [
"info"
] | 2024/03/04 23:41:48 | 2012 | https://hal.science/hal-01353135/file/Liris-5889.pdf | Fernando De
Katherine Breeden
Blue Noise through Optimal Transport
CR Categories: I.4.1 [Image Processing and Computer Vision]: Digitization and Image Capture - Sampling
Keywords: Blue noise, power diagram, capacity constraints
We present a fast, scalable algorithm to generate high-quality blue noise point distributions of arbitrary density functions. At its core is a novel formulation of the recently-introduced concept of capacity-constrained Voronoi tessellation as an optimal transport problem. This insight leads to a continuous formulation able to enforce the capacity constraints exactly, unlike previous work. We exploit the variational nature of this formulation to design an efficient optimization technique of point distributions via constrained minimization in the space of power diagrams. Our mathematical, algorithmic, and practical contributions lead to high-quality blue noise point sets with improved spectral and spatial properties.
Introduction
Coined by [START_REF] Ulichney | Digital Halftoning[END_REF], the term blue noise refers to an even, isotropic, yet unstructured distribution of points. Blue noise was first recognized as crucial in dithering of images since it captures the intensity of an image through its local point density, without introducing artificial structures of its own. It rapidly became prevalent in various scientific fields, especially in computer graphics, where its isotropic properties lead to high-quality sampling of multidimensional signals, and its absence of structure prevents aliasing. It has even been argued that its visual efficacy (used to some extent in stippling and pointillism) is linked to the presence of a blue-noise arrangement of photoreceptors in the retina [START_REF] Yellott | Spectral consequences of photoreceptor sampling in the rhesus retina[END_REF]].
Previous Work
Over the years, a variety of research efforts targeting both the characteristics and the generation of blue noise distributions have been conducted in graphics. Arguably the oldest approach to algorithmically generate point distributions with a good balance between density control and spatial irregularity is through error diffusion [START_REF] Floyd | An adaptive algorithm for spatial grey scale[END_REF][START_REF] Ulichney | Digital Halftoning[END_REF], which is particularly well adapted to low-level hardware implementation in printers. Concurrently, a keen interest in uniform, regularity-free distributions appeared in computer rendering in the context of anti-aliasing [START_REF] Crow | The aliasing problem in computer-generated shaded images[END_REF]. [START_REF] Cook | Stochastic sampling in computer graphics[END_REF] proposed the first dart-throwing algorithm to create Poisson disk distributions, for which no two points are closer together than a certain threshold. Considerable efforts followed to modify and improve this original algorithm [START_REF] Mitchell | Generating antialiased images at low sampling densities[END_REF][START_REF] Mccool | Hierarchical Poisson disk sampling distributions[END_REF][START_REF] Jones | Efficient generation of Poisson-disk sampling patterns[END_REF][START_REF] Bridson | Fast Poisson disk sampling in arbitrary dimensions[END_REF][START_REF] Gamito | Accurate multidimensional Poisson-disk sampling[END_REF]]. Today's best Poisson disc algorithms are very efficient and versatile [START_REF] Dunbar | A spatial data structure for fast Poisson-disk sample generation[END_REF][START_REF] Ebeida | Efficient maximal Poisson-disk sampling[END_REF], even running on GPUs [START_REF] Wei | Parallel Poisson disk sampling[END_REF][START_REF] Bowers | Parallel Poisson disk sampling with spectrum analysis on surfaces[END_REF][START_REF] Xiang | Parallel and accurate Poisson disk sampling on arbitrary surfaces[END_REF]. Fast generation of irregular low-discrepancy sequences has also been proposed [START_REF] Niederreiter | Random Number Generation and Quasi-Monte-Carlo Methods[END_REF][START_REF] Lemieux | Fast capacity constrained Voronoi tessellation[END_REF]]; however, these methods based on the radical-inverse function rarely generate high-quality blue noise.
Figure 1: Memorial. Our variational approach allows sampling of arbitrary functions (e.g., a high-dynamic range image courtesy of P. Debevec), producing high-quality, detail-capturing blue noise point distributions without spurious regular patterns (100K points, 498 s).
In an effort to allow fast blue noise generation, the idea of using patterns computed offline was raised in [Dippé and Wold 1985]. To remove potential aliasing artifacts due to repeated patterns, [START_REF] Cohen | Wang tiles for image and texture generation[END_REF] recommended the use of non-periodic Wang tiles, which subsequently led to improved hierarchical sampling [START_REF] Kopf | Recursive Wang tiles for real-time blue noise[END_REF]] and a series of other tile-based alternatives [START_REF] Ostromoukhov | Fast hierarchical importance sampling with blue noise properties[END_REF]Lagae and Dutré 2006;[START_REF] Ostromoukhov | Sampling with polyominoes[END_REF]. However, all precalculated structures used in this family of approaches rely on the offline generation of high-quality blue noise.
Consequently, a number of researchers focused on developing methods to compute point sets with high-quality blue noise properties, typically by evenly distributing points over a domain via Lloyd-based iterations [McCool and Fiume 1992;[START_REF] Deussen | Floating points: A method for computing stipple drawings[END_REF][START_REF] Secord | Weighted Voronoi stippling[END_REF][START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF][START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF][START_REF] Chen | Variational blue noise sampling[END_REF], electro-static forces [START_REF] Schmaltz | Electrostatic halftoning[END_REF], statistical-mechanics interacting Gaussian particle models [START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF]], or farthest-point optimization [Schlömer et al. 2011]. These iterative methods consistently generate much improved point distributions, albeit at sometimes excessive computational complexity.
Finally, recent efforts have provided tools to analyze point sets using spatial/spectral [Lagae and Dutré 2008;Schlömer and Deussen 2011] and differential [START_REF] Wei | Differential domain analysis for non-uniform sampling[END_REF] methods. Extensions to anisotropic [Li et al. 2010b;[START_REF] Xu | Blue noise sampling of surfaces[END_REF], non-uniform [START_REF] Wei | Differential domain analysis for non-uniform sampling[END_REF], multiclass [START_REF] Wei | Multi-class blue noise sampling[END_REF]], and general spectrum sampling [START_REF] Zhou | Point sampling with general noise spectrum[END_REF]] have also been recently introduced.
Motivation and Rationale
Despite typically being slower, optimization methods based on iterative displacements of points have consistently been proven superior to other blue noise generation techniques. With the exception of [START_REF] Schmaltz | Electrostatic halftoning[END_REF][START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF], these iterative approaches rely on Voronoi diagrams and Lloyd's relaxations [START_REF] Lloyd | Least squares quantization in PCM[END_REF]]. To our knowledge, the use of Lloyd's algorithm for blue noise sampling was first advocated in [START_REF] Mccool | Hierarchical Poisson disk sampling distributions[END_REF] to distribute points by minimizing the root mean square (RMS) error of the quantization of a probability distribution. However, the authors noticed that a "somewhat suboptimal solution" was desirable to avoid periodic distribution: Lloyd's algorithm run to convergence tends to generate regular regions with point or curve defects, creating visual artifacts. Hence, a limited number of iterations was used in practice until [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF] proposed the use of a Capacity-Constrained Voronoi Tessellation (CCVT), a rather drastic change in which a constraint of equi-area partitioning is added to algorithmically ensure that each point conveys equal visual importance. However, this original approach and its various improvements rely on a discretization of the capacities, and thus suffer from a quadratic complexity, rendering even GPU implementations [Li et al. 2010a] unable to gracefully scale up to large point sets. Two variants were recently proposed to improve performance, both providing an approximation of CCVT by penalizing the area variance of either Voronoi cells [START_REF] Chen | Variational blue noise sampling[END_REF] or Delaunay triangles [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF]].
Contributions
In this paper, we show that CCVT can be formulated as a constrained optimal transport problem. This insight leads to a continuous formulation able to enforce the capacity constraints exactly, unlike related work. The variational nature of our formulation is also amenable to a fast, scalable, and reliable numerical treatment. Our resulting algorithm will be shown, through spectral analysis and comparisons, to generate high-grade blue noise distributions. Key differences from previous methods include:
• a reformulation of CCVT as a continuous constrained minimization based on optimal transport, as opposed to the discretized approximation suggested in [START_REF] Balzer | Capacity-constrained Voronoi diagrams in finite spaces[END_REF];
• an optimization procedure over the space of power diagrams that satisfies the capacity constraints up to numerical precision, as opposed to an approximate capacity enforcement in the space of Delaunay triangulations [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF] or Voronoi diagrams [START_REF] Chen | Variational blue noise sampling[END_REF]];
• a regularity-breaking procedure to prevent local aliasing artifacts that occur in previous approaches.
Redefining Blue Noise through Optimal Transport
Before presenting our algorithm for point set generation, we spell out our definition of blue noise as a constrained transport problem. We consider an arbitrary domain D over which a piecewise-continuous positive field ρ (e.g., intensity of an image) is defined.
Background
Two crucial geometric notions will be needed. We briefly review them next for completeness.
Optimal Transport. The optimal transport problem, dating back to Gaspard Monge [START_REF] Villani | Optimal Transport: Old and New[END_REF]], amounts to determining the optimal way to move a pile of sand to a hole of the same volume - where "optimal" means that the integral of the distances by which the sand is moved (one infinitesimal unit of volume at a time) is minimized.
The minimum "cost" of moving the piled-up sand to the hole, i.e., the amount of sand that needs to be moved times the Lp distance it has to be moved, is called the p-Wasserstein metric. The 2-Wasserstein metric, using the L2 norm, is most common, and is often referred to as the earth mover's distance. Optimal transport has recently been of interest in many scientific fields; see [START_REF] Mullen | HOT: Hodge Optimized Triangulations[END_REF][START_REF] Bonneel | Displacement interpolation using Lagrangian mass transport[END_REF][START_REF] De Goes | An optimal transport approach to robust reconstruction and simplification of 2d shapes[END_REF]
Power Diagram. Given a point set $X=\{x_i\}$ with associated weights $W=\{w_i\}$, the power cell of $x_i$ is defined as
$$V_i^w = \{\, x \in D \;|\; \|x - x_i\|^2 - w_i \le \|x - x_j\|^2 - w_j, \;\forall j \,\}.$$
The power diagram of $(X, W)$ is the cell complex formed by the power cells $V_i^w$. Note that when the weights are all equal, the power diagram coincides with the Voronoi diagram of X; power diagrams and their associated dual (called regular triangulations) thus generalize the usual Voronoi/Delaunay duality.
Blue Noise as a Constrained Transport Problem
Sampling a density function ρ(x) consists of picking a few representative points xi that capture ρ well. This is, in essence, the halftoning process that a black-and-white printer or a monochrome pointillist painter uses to represent an image. In order to formally characterize a blue noise distribution of points, we see sampling as the process of aggregating n disjoint regions Vi (forming a partition V of the domain D) into n points xi: if ρ is seen as a density of ink over D, sampling consists in coalescing this distribution of ink into n Dirac functions (i.e., ink dots).
We can now revisit the definition of blue noise sampling through the following requirements:
A. Uniform Sampling: all point samples should equally contribute to capturing the field ρ. Consequently, their associated regions Vi must all represent the same amount m of ink:
$$m_i = \int_{V_i} \rho(x)\, dx \;\equiv\; m.$$
B. Optimal Transport: the total cost of transporting ink from the distribution ρ to the finite point set X should be minimized, thus representing the most effective aggregation. This ink transport cost for an arbitrary partition V is given as
$$E(X, \mathcal{V}) = \sum_i \int_{V_i} \rho(x)\, \|x - x_i\|^2\, dx,$$
i.e., as the sum per region of the integral of all displacements of the local ink distribution ρ to its associated ink dot.
C. Local Irregularity: the point set should be void of visual artifacts such as Moiré patterns and other aliasing effects; that is, it should be free of local spatial regularity.
Note that the first requirement implies that the resulting local point density will be proportional to ρ as often required in importance sampling. The second requirement favors isotropic distribution of points since such partitions minimize the transport cost. The final requirement prevents regular or hexagonal grid patterns from emerging. Together, these three requirements provide a density-adapted, isotropic, yet unstructured distribution of points, capturing the essence of blue noise as a constrained transport problem.
Power Diagrams vs. Voronoi Diagrams
While the cost E may resemble the well-known CVT energy [START_REF] Du | Centroidal Voronoi Tessellations: Applications and algorithms[END_REF], the reader will notice that it is more general, as the cells $V_i$ are not restricted to be Voronoi. In fact, [START_REF] Aurenhammer | Minkowski-type theorems and least-squares clustering[END_REF] proved that capacity constrained partitions (requirement A) that minimize the cost E (requirement B) for a given point set are power diagrams. So, instead of searching through the entire space of possible partitions, we can rather restrict partitions $\mathcal{V}$ to be power diagrams, that is, $V_i \equiv V_i^w$. Within this subspace of partitions, the cost functional E coincides with the 0-HOT2,2 energy of [START_REF] Mullen | HOT: Hodge Optimized Triangulations[END_REF]] (i.e., the power diagram version of the CVT energy). This difference is crucial: while methods restricting their search to Delaunay meshes [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF] or Voronoi diagrams [START_REF] Chen | Variational blue noise sampling[END_REF]] can only approximate the constraints in requirement A, this power diagram formulation has the additional variables (weights) necessary to allow exact constraint enforcement, thus capturing sharp features much more clearly than previous methods (see Sec. 5).
In fact, all of our results exhibit uneven weights as demonstrated in Fig. 2, reinforcing the importance of power vs. Voronoi diagrams.
Variational Formulation
Leveraging the fact that requirements A and B can only be enforced for power diagrams, we describe next our variational characterization of blue noise distributions of weighted point sets (X, W ). Requirement C will be enforced algorithmically, as discussed in Sec. 4.6, by detecting regularity and locally jittering the point set to guide our optimization towards non-regular distributions.
Functional Extremization
We can now properly formulate our constrained minimization to enforce requirements A and B.
Lagrangian formulation.
A common approach to deal with a constrained minimization is to use Lagrange multipliers Λ={λi}i=1...n to enforce the n constraints (one per point) induced by requirement A. The resulting optimization procedure can be stated as:
Extremize $\; E(X, W) + \sum_i \lambda_i\, (m_i - m)$
with respect to xi, wi, and λi, where the functional E is now clearly labeled with the point set and its weights as input (since we know that only power diagrams can optimize the constrained transport energy), and mi is the amount of ink in the region V w i :
$E(X, W) = \sum_i \int_{V_i^w} \rho(x)\, \| x - x_i \|^2\, dx, \qquad m_i = \int_{V_i^w} \rho(x)\, dx. \qquad (1)$
Simpler formulation. The Lagrangian multipliers add undue complexity: they contribute an additional n variables to the optimization. Instead, one can extremize a simpler function F depending only on the weighted point set: we show in the appendix that the extremization above is equivalent to finding a stationary point of the following scalar functional:
$F(X, W) = E(X, W) - \sum_i w_i\, (m_i - m). \qquad (2)$
With n fewer variables to deal with, we will show in Sec. 4 that blue noise generation can be efficiently achieved.
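To make these definitions concrete, the quantities of Eqs. 1-2 can be approximated on a pixel grid: every pixel center is assigned to the weighted point minimizing ‖x − x_i‖² − w_i, and capacities, per-cell transport costs and F follow by summation. The sketch below (in Python, with function names of our own choosing) is only an illustrative discretization, not the exact polygon-clipping integration described in Sec. 4.4.

```python
import numpy as np

def power_cells(px, X, w):
    """Assign each pixel center to the weighted point minimizing ||x - x_i||^2 - w_i."""
    # px: (P,2) pixel centers, X: (n,2) points, w: (n,) weights
    d2 = ((px[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances, shape (P,n)
    return np.argmin(d2 - w[None, :], axis=1)               # power-diagram labels, shape (P,)

def functional_F(px, rho, X, w):
    """Discrete estimate of the capacities m_i, transport costs E_i and F (Eq. 2)."""
    n = len(X)
    labels = power_cells(px, X, w)
    m_i, E_i = np.zeros(n), np.zeros(n)
    for i in range(n):
        sel = labels == i
        m_i[i] = rho[sel].sum()
        E_i[i] = (rho[sel] * ((px[sel] - X[i]) ** 2).sum(-1)).sum()
    m = rho.sum() / n                        # target capacity per cell (requirement A)
    F = E_i.sum() - np.dot(w, m_i - m)       # Eq. 2
    return F, m_i, E_i

# toy usage: 64x64 unit square, constant density, 16 random points, zero weights
g = (np.arange(64) + 0.5) / 64
px = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)
rho = np.ones(len(px)) / len(px)
X, w = np.random.rand(16, 2), np.zeros(16)
print(functional_F(px, rho, X, w)[0])
```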
Functional Properties
The closed-form expression of our functional allows us not only to justify the Lloyd-based algorithmic approaches previously used in [START_REF] Balzer | Capacity-constrained Voronoi diagrams in finite spaces[END_REF][START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF]Li et al. 2010a], but also to derive better numerical methods to find blue noise point sets by exploiting a few key properties.
Concavity in the weights W: For a fixed set of points X, the Hessian of our functional w.r.t. weights is the negated weighted Laplacian operator as shown in the appendix. Consequently, extremizing F is actually a maximization with respect to all wi's. This is an important insight that will lead us to an efficient numerical approach comparable in speed to recent approximate CCVT methods [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF][START_REF] Chen | Variational blue noise sampling[END_REF]], but much faster than the quadratic scheme used in [START_REF] Balzer | Capacity-constrained Voronoi diagrams in finite spaces[END_REF][START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF]Li et al. 2010a].
Gradient in the positions X: Now for a fixed set of weights W, our functional is the 0-HOT2,2 energy of [START_REF] Mullen | HOT: Hodge Optimized Triangulations[END_REF]] (i.e., the power diagram version E of the CVT energy), with one extra term due to the constraints. Several numerical methods can be used to minimize this functional. Note that, surprisingly, the functional gradient w.r.t. positions turns out to be simply
$\nabla_{x_i} F = 2\, m_i\, (x_i - b_i), \quad \text{with} \quad b_i = \frac{1}{m_i} \int_{V_i^w} x\, \rho(x)\, dx, \qquad (3)$
because the boundary term of the Reynolds' transport theorem cancels out the gradients of the constraint terms (see appendix). Extremizing F thus implies that we are looking for a "centroidal power diagram", as xi and its associated weighted barycenter bi have to match to ensure a zero gradient.
Discussion
We now discuss the key differences between our transport-based formulation and previous CCVT methods.
Discrete vs. Continuous Formulation. The initial CCVT method and its improvements [START_REF] Balzer | Capacity-constrained Voronoi diagrams in finite spaces[END_REF][START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF]Li et al. 2010a] adopted a discrete formulation in which the density function ρ is represented by a finite set of samples, with the number of samples being "orders of magnitude" larger than the number n of points. Blue noise point sets are then generated via repeated energydecreasing swaps between adjacent clusters, without an explicit use of weights. This discrete setup has several numerical drawbacks. First, while samples can be thought of as quadrature points for capacity evaluation, their use causes accuracy issues: in essence, using samples amounts to quantizing capacities; consequently, the transport part of the CCVT formulation is not strictly minimized. Second, the computational cost induced by the amount of swaps required to reach convergence is quadratic in the number of samples-and thus impractical beyond a few thousand points. Instead, we provided a continuous functional whose extremization formally encodes the concept behind the original CCVT method [START_REF] Balzer | Capacity-constrained Voronoi diagrams in finite spaces[END_REF].
The functional F in Eq. 2 was previously introduced in [START_REF] Aurenhammer | Minkowski-type theorems and least-squares clustering[END_REF]] purely as a way to enforce capacity constraints for a fixed point set; here we extend F as a function of weights wi and positions xi, and the closed-form gradient and Hessian we explicitly derived will permit, in the next section, the development of a fast numerical treatment to generate high-quality blue noise distributions in a scalable fashion, independently of the sampling size of the density function.
Approximate vs. Exact Constraints. Attempts at dealing with CCVT through continuous optimization have also been investigated by sacrificing exact enforcement of capacity constraints. In [START_REF] Balzer | Voronoi treemaps for the visualization of software metrics[END_REF][START_REF] Balzer | Capacity-constrained Voronoi diagrams in continuous spaces[END_REF]], for instance, a point-by-point iterative approach is used to minimize the capacity variance of Voronoi cells to best fit the capacity constraints; [START_REF] Chen | Variational blue noise sampling[END_REF] recommend adding the capacity variance as a penalty term to the CVT energy instead; [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF] take a dual approach by minimizing capacity variance on Delaunay triangles instead of Voronoi cells. These different variants all mix the requirements of good spatial distribution and capacity constraints into a single minimization, leading to an over-constrained formulation. Minima of their functionals thus always represent a tradeoff between capacity enforcement and isotropic spatial distribution. Instead, our formulation allows exact capacity constraints by controlling the power diagram through the addition of a weight per vertex: we can now optimize distribution quality while constraining capacity, resulting in high quality blue noise sampling of arbitrary density field (see quadratic ramp in Fig. 10 for a comparison with recent methods).
Numerical Optimization
We now delve into the numerical methods and algorithmic details we use to efficiently generate blue noise point distribution based on our variational formulation.
Overall Strategy
We proceed with point set generation by computing a critical point of the functional F defined in Eq. 2: we extremize the functional F by repeatedly performing a minimization step over positions followed by a projection step over weights to enforce constraints. The power diagram of the weighted point set is updated at each step via the CGAL library [2010]. While this alternating procedure is typical for non-linear multivariable problems, we will benefit from several properties of the functional as already alluded to in Sec. 3:
• enforcing the capacity constraints for a fixed set of point positions is a concave maximization;
• minimizing F for a fixed set of weights is akin to the minimization of the CVT energy, for which fast methods exist;
• staying clear of regular patterns is enforced algorithmically through a simple local regularity detection and removal.
These three factors conspire to result in a fast and scalable generation of high-quality blue noise point sets as we discuss next.
Constraint Enforcement
For a given set of points X, we noted in Sec. 3.1 that finding the set of weights Wopt to enforce that all capacities are equal is a concave maximization. Fast iterative methods can thus be applied to keep computational complexity to a minimum.
Since the Hessian of F(X, W) is equal to the negated weighted Laplacian ∆w,ρ (see appendix), Newton iterations are particularly appropriate to find the optimal set of weights Wopt. At each iteration, we thus solve the sparse (Poisson) linear system:
$\Delta_{w,\rho}\, \delta = \big( m - m_1,\; m - m_2,\; \dots,\; m - m_n \big)^t, \qquad (4)$
where the righthand side of the equation is equal to the current gradient of F w.r.t. weights. A standard line search with Armijo condition [START_REF] Nocedal | Numerical optimization[END_REF] is then performed to adapt the step size along the vector δ before updating the vector W of current weights. Given that the Hessian is sparse and symmetric, many linear solvers can be used to efficiently solve the linear system used in each Newton iteration; in our implementation, we use the sparse QR factorization method in [START_REF] Davis | Algorithm 915, SuiteSparseQR: Multifrontal multithreaded rank-revealing sparse QR factorization[END_REF]]. Typically, it only takes 3 to 5 such iterations to bring the residual of our constraints to within an accuracy of 10^{-12}.
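A possible shape for this weight update is sketched below. The functions `capacities(X, w)` and `weighted_laplacian(X, w)` are placeholders for the power-diagram computations of Sec. 4.4 and the appendix; a dense least-squares solve stands in for the sparse QR solver cited above, and the step is damped by a simple residual check rather than the exact Armijo test on F, so this only illustrates the structure of the iteration.

```python
import numpy as np

def enforce_capacities(X, w, m, capacities, weighted_laplacian, tol=1e-12, max_iter=20):
    """Damped Newton iterations on the weights for a fixed point set X (Sec. 4.2, Eq. 4).
    `capacities` and `weighted_laplacian` are assumed to be provided by the geometric code."""
    for _ in range(max_iter):
        m_i = capacities(X, w)
        grad = m - m_i                                # gradient of F w.r.t. weights (appendix)
        if np.linalg.norm(grad, np.inf) <= tol:
            break
        L = weighted_laplacian(X, w)                  # Hessian of F is -Delta_{w,rho}
        # lstsq copes with the constant null space of the Laplacian (weights are defined up to a constant)
        delta = np.linalg.lstsq(L, grad, rcond=None)[0]
        step = 1.0                                    # halve the step until the residual decreases
        while step > 1e-8 and np.linalg.norm(m - capacities(X, w + step * delta)) >= np.linalg.norm(grad):
            step *= 0.5
        w = w + step * delta
    return w
```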
Transport Minimization
For a fixed set of weights W , we can move the locations of the n points in order to improve the cost of ink transport F(X, W ). Previous CCVT-based methods [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF]Li et al. 2010a] used Lloyd's algorithm as the method of choice for their discrete optimization. In our continuous optimization context, we have more options. A Lloyd update where positions xi are moved to the barycenter bi of their associated weighted cell V w i can also be used to reliably decrease the transport cost: indeed, we prove in the appendix that the gradient of F(X, W ) is a natural extension of the gradient of the regular CVT energy. However, Lloyd's algorithm is a special case of a gradient descent that is known to suffer from linear convergence [START_REF] Du | Centroidal Voronoi Tessellations: Applications and algorithms[END_REF]. We improve the convergence rate through line search, again using adaptive timestep gradient descent with Armijo conditions as proposed in [START_REF] Mullen | HOT: Hodge Optimized Triangulations[END_REF]]. Note that quasi-Newton iterations as proposed in [START_REF] Liu | On Centroidal Voronoi Tessellation -energy smoothness and fast computation[END_REF] for the CVT energy are not well suited in our context: alternating weight and position optimizations renders the approximation of the Hessian matrix from previous gradients inaccurate, ruining the expected quadratic convergence.
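One possible implementation of this position update is sketched below, using the gradient of Eq. 3 with Armijo backtracking; `moments(X, w)` is a placeholder returning the per-cell capacities m_i, barycenters b_i and transport costs E_i (e.g., the pixel-grid sketch given earlier). Note that a per-point step of 1/(2 m_i) would recover the plain Lloyd update x_i ← b_i.

```python
import numpy as np

def descend_positions(X, w, m, moments, max_iter=50, c1=1e-4, tol=1e-6):
    """Adaptive-step gradient descent on positions for fixed weights (Sec. 4.3)."""
    for _ in range(max_iter):
        m_i, b_i, E_i = moments(X, w)
        F = E_i.sum() - np.dot(w, m_i - m)            # Eq. 2 for the current configuration
        grad = 2.0 * m_i[:, None] * (X - b_i)         # Eq. 3
        gnorm2 = (grad ** 2).sum()
        if np.sqrt(gnorm2) <= tol:
            break
        step = 1.0                                    # Armijo backtracking line search
        while step > 1e-8:
            m_n, b_n, E_n = moments(X - step * grad, w)
            if E_n.sum() - np.dot(w, m_n - m) <= F - c1 * step * gnorm2:
                break
            step *= 0.5
        X = X - step * grad
    return X
```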
Density Integration
Integrations required by our formulation can be easily handled through quadrature. However, poor quadrature choices may impair the convergence rate of our constraint enforcement. Given that blue noise sampling is most often performed on a rectangular greyscale image, we design a simple and exact procedure to compute integrals of the density field ρ inside each cell, as it is relatively inexpensive. Assuming that ρ is given as a strictly-positive piecewise constant field, we first compute the value m used in our capacity constraints by simply summing the density values times the area of each constant region (pixels, typically), divided by n. We then perform integration within each V w i in order to obtain the mass mi, the barycenter bi, and the individual transport cost for each V w i . We proceed in three steps. First, we rasterize the edges of the power diagram and find intersections between the image pixels and each edge. Next we perform a scan-line traversal of the image and construct pixel-cell intersections. Integrated densities, barycenters, and transport costs per cell are then accumulated through simple integration within each pixel-cell intersection where the density is constant. Note that our integration differs from previous similar treatments (e.g., [START_REF] Secord | Weighted Voronoi stippling[END_REF]Lecot and Lévy 2006]) as we provide robust and exact computation not only for cell capacities, but also for their barycenters and transport costs, thus avoiding the need for parameter tweaking required in quadrature approximations.
Boundary Treatment
While some of the results we present use a periodic domain (see Sec. 5), most sampling applications involve a bounded domain D, often given as a convex polygon (as in the case of a simple image). Dealing with boundaries in our approach is straightforward. First, boundary power cells are clipped by D before computing their cell barycenters bi and capacities mi. Second, the coefficients of the weighted Laplacian ∆ w,ρ are computed through the ratio of (possibly clipped) dual edge lengths and primal edge lengths, as proposed in [START_REF] Mullen | HOT: Hodge Optimized Triangulations[END_REF]]. Thence, the presence of boundaries adds only limited code and computational complexity and it does not affect the convergence rates of any of the steps described above. Note that other boundary treatments could be designed as well, using mirroring or other typical boundary conditions if needed.
Detecting & Breaking Regularities
The numerical procedure described so far solely targets requirements A and B, and as such, nothing preempts regularity. In fact, hexagonal lattices are solutions to our extremization problem in the specific case of constant density and a toroidal domain, and these solutions correspond to "deep" extrema of our functional, as the cost of ink transport E reaches a global minimum on such regular packings of points. Instead, we algorithmically seek "shallow" extrema to prevent regularity (see inset).
For capacity-constrained configurations, local regularities are easily detected by evaluating the individual terms E_i measuring the transport cost within each region V_i^w: we assign a regularity score r_i per point as the local absolute deviation of E_i, i.e.,
$r_i = \frac{1}{|\Omega_i|} \sum_{j \in \Omega_i} | E_i - E_j |,$
where Ω_i is the one-ring of x_i in the regular triangulation of (X, W). We then refer to the region around a point x_i as aliased if r_i < τ, where the threshold τ = 0.25 m² in all our experiments. When aliased, a point and its immediate neighbors are jittered by a Gaussian noise with a spatial variance of 1.0/ρ(x_i) and maximum magnitude √m to break symmetries as recommended in [START_REF] Lucarini | Symmetry-break in Voronoi tessellations[END_REF]]. To prevent a potential return to the same crystalline configuration during subsequent optimization steps, we further relocate 1% of the aliased points to introduce defects. Since our numerical approach relies on a line search with Armijo rule (seeking local extrema), starting the optimization from this stochastically scrambled configuration will fall back to a nearby, shallower extremum, hence removing regularity as demonstrated in Fig. 5.
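The detection-and-jittering step could be organized as in the sketch below; the one-rings `neighbors[i]` and a density lookup `rho_at(x)` are assumed to be available from the triangulation code, and the additional relocation of 1% of the aliased points is omitted for brevity.

```python
import numpy as np

def jitter_aliased(X, E_i, neighbors, rho_at, m, tau_factor=0.25, rng=None):
    """Compute the regularity scores r_i and jitter aliased points and their neighbors (Sec. 4.6)."""
    rng = rng if rng is not None else np.random.default_rng()
    X = np.array(X, dtype=float)
    tau = tau_factor * m ** 2                          # threshold tau = 0.25 m^2
    aliased = [i for i, omega in enumerate(neighbors)
               if np.mean([abs(E_i[i] - E_i[j]) for j in omega]) < tau]
    to_jitter = set(aliased)
    for i in aliased:
        to_jitter.update(neighbors[i])                 # jitter the immediate neighbors too
    for i in to_jitter:
        noise = rng.normal(0.0, np.sqrt(1.0 / rho_at(X[i])), size=X[i].shape)
        cap = np.sqrt(m)                               # maximum magnitude sqrt(m)
        if np.linalg.norm(noise) > cap:
            noise *= cap / np.linalg.norm(noise)
        X[i] = X[i] + noise
    return X, aliased
```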
It is worth pointing out that all CVT-based methods (including the existing CCVT schemes) may result in point distributions with local regular patterns. While a few approaches avoided regularity by stopping optimization before convergence, we instead prevent regularity by making sure we stop at shallow minima. This τ -based shallowness criterion can be seen as an alternative to the temperature parameter proposed in [START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF]], where the level of excitation of a statistical particle model controls the randomness on the formation of point distributions. Our simple approach is numerically robust and efficient: in practice, we observed that the proposed regularity breaking routine takes place at most once in each example test, independently of the value of τ .
Optimization Schedule
We follow a simple optimization schedule to make the generation process automatic and efficient for arbitrary inputs. We start with a random distribution of points conforming to ρ (better initialization strategies could be used, of course). We then proceed by systematically alternating optimization of weights (to enforce constraints, Sec. 4.2) and positions (to minimize transport cost, Sec. 4.3). Weight optimization is initialized with zero weights, and iterated until ‖∇_W F‖ ≤ 0.1 m (the capacity m is used to properly adapt the convergence threshold to the number of points n and the density ρ). For positions, we optimize our functional until ‖∇_X F‖ ≤ 0.1 √(n m³) (again, scaling is chosen here to account for density and number of points). We found that performing Lloyd steps until the gradient norm is below 0.2 √(n m³) saves computation (it typically requires 5 iterations); only then do we revert to a full-blown adaptive timestep gradient descent until convergence (taking typically 10 iterations). Once an extremum of F is found, we apply the regularity detecting-and-breaking procedure presented in Sec. 4.6, and, if an aliased point was found and jittered, we start our optimization again. This simple schedule (see pseudocode in Fig. 6) was used as is on all our results.
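The schedule can be summarized as in the following sketch, where every geometric routine (weight optimization, Lloyd step, gradient descent, regularity fix) is left as a black-box callable; the thresholds follow the text above.

```python
import numpy as np

def blue_noise_schedule(X, m, optimize_weights, lloyd_step, gradient_descent,
                        grad_x_norm, fix_regularity, max_rounds=1000):
    """Alternate weight and position optimization until a shallow extremum is reached (Sec. 4.7)."""
    n = len(X)
    w = np.zeros(n)                                    # weights initialized to zero
    eps_x = 0.1 * np.sqrt(n * m ** 3)                  # position-gradient threshold from the text
    for _ in range(max_rounds):
        w = optimize_weights(X, w)                     # enforce capacities (Sec. 4.2)
        if grad_x_norm(X, w) > 2.0 * eps_x:
            X = lloyd_step(X, w)                       # cheap Lloyd steps first
        elif grad_x_norm(X, w) > eps_x:
            X = gradient_descent(X, w)                 # then full line-search descent (Sec. 4.3)
        else:
            X, aliased = fix_regularity(X, w)          # detect and break regularity (Sec. 4.6)
            if not aliased:
                return X, w                            # shallow extremum: done
    return X, w
```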
Results
We ran our algorithm on a variety of inputs: from constant density (Fig. 8) to photos (Fig. 1, 3, and 4) and computer-generated images (Fig. 2 and 10), without any need for parameter tuning. Various illustrations based on zoneplates, regularity, and spectral analysis are used throughout the paper to allow easy evaluation of our results and to demonstrate how they compare to previous work.
Spectral Properties. The special case of blue noise point distribution for a constant density in a periodic domain has been the subject of countless studies. It is generally accepted that such a point distribution must have a characteristic blue-noise profile for the radial component of its Fourier spectra, as well as low angular anisotropy [START_REF] Ulichney | Digital Halftoning[END_REF]]. This profile should exhibit no low frequencies (since the density is constant), a high peak around the average distance between adjacent points, along with a flat curve end to guarantee white noise (i.e., no distinguishable features) in the high frequency range. Fig. 8 demonstrates that we improve upon the results of all previous CCVT-related methods, and fare arguably better than alternative methods such as [START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF]]; in particular, we systematically (i.e., not just on average over several distributions, but for every single run) get flat spectrum in low and high frequencies, while keeping high peaks at the characteristic frequency. Note also that the method of [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF] appears to slowly converge to our results when the ratio m/n (using their notation) goes to infinity with, evidently, much larger timings (Fig. 7).
Spatial Properties. We also provide evaluations of the spatial properties of our results. Fig. 8 shows two insightful visualizations of the typical spatial arrangement of our point distributions, side by side with results of previous state-of-the-art methods. The second row shows the gaps between white discs centered on sampling points with a diameter equal to the mean distance between two points; notice the uniformity of gap distribution in our result. The third row compares the number of neighbors for the Voronoi region of each site; as pointed out in [START_REF] Balzer | Capacity-constrained Voronoi diagrams in continuous spaces[END_REF]], the enforcement of the capacity constraints favors heterogeneous valences, with fewer noticeable regular regions. Finally, the minimum distance among all points normalized by the radius of a disc in a hexagonal tiling is a measure of distribution quality, known as the normalized Poisson disk radius, and recommended to be in the range [0.65, 0.85] by [Lagae and Dutré 2008]. In all our constant density blue noise examples, the normalized radius is in the range [0.71, 0.76].
Quadratic Ramp. Another common evaluation of blue noise sampling is to generate a point set for an intensity ramp, and count the number of points for each quarter of the ramp. Fig. 10 compares the point sets generated by our technique vs. state-of-the-art methods [START_REF] Balzer | Capacity-constrained Voronoi diagrams in continuous spaces[END_REF][START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF][START_REF] Chen | Variational blue noise sampling[END_REF]. While all the methods recover approximately the right count of points per quarter, our result presents a noticeably less noisy, yet unstructured distribution of points.
Zoneplates. We also provide zoneplates in Fig. 8 for the function sin(x² + y²). Each zoneplate image was created via 32x32 copies of
a 1024-point blue noise patch, followed by a Mitchell reconstruction filter to generate a 1024x1024 image with an average of one point per pixel as suggested in [Lagae and Dutré 2006]. Observe the presence of a second noise ring in previous methods, as opposed to the anti-aliased reconstruction achieved by our method.
Complexity. Previous CCVT methods analyzed the (worst-case) time complexity of a single iteration of their optimization approach. One iteration of our algorithm involves the construction of a 2D power diagram, costing O(n log n). It also involves the enforcement of the capacity constraints via a concave maximization w.r.t. the weights via a step-adaptive Newton method; the time complexity of this maximization is of the order of a single Newton step since the convergence rate is quadratic (see [START_REF] Nocedal | Numerical optimization[END_REF] for a more detailed proof), and therefore incurs the linear cost of solving a sparse (Poisson) linear system. For N-pixel images and n points, the total complexity of our algorithm thus becomes O(n log n + N), with the extra term corresponding to the cost of locating the pixels within each power cell through scan-line traversal. This is significantly better than the discrete versions of CCVT which were either O(n² + nN log(N/n)) [START_REF] Balzer | Capacity-constrained Voronoi diagrams in continuous spaces[END_REF] or O(n² + nN) [Li et al. 2010a] and of the same order as the CCVT approximations in [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF][START_REF] Chen | Variational blue noise sampling[END_REF]]. However, we cannot match the efficiency of the multi-scale statistical particle model introduced in [START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF]], which scales linearly with the number of points and produces results arguably comparable with the best current methods of blue noise generation.
points of the luminance of a high dynamic range 512x768 image (see supplemental material), an order of magnitude more complex than the largest results demonstrated by CCVT-based methods. Note that we purposely developed a code robust to any input and any points-to-pixels ratio. However, code profiling revealed that about 40% of computation time was spent on the exact integration described in Sec. 4.4; depending on the targeted application, performance could thus be easily improved through quadrature [Lecot and Lévy 2006] and/or input image resampling if needed.
Stopping Criteria. As discussed in Sec. 4.7, we terminate optimization when ‖∇F‖ < ε, i.e., the first order condition for identifying a locally optimal solution to a critical point search [Nocedal and
Wright 1999]. Recent optimization-based blue noise methods [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF][START_REF] Chen | Variational blue noise sampling[END_REF], on the other hand, have used the decrease of the objective function per iteration as their stopping criterion. However, a small decrease in the functional does not imply convergence, since a change of functional value depends both on the functional landscape and the step size chosen in each iteration. Favoring guaranteed high quality vs. improved timing, we prefer adopting the first order optimality condition as our termination criterion for robust generation of blue noise distributions. Despite this purposely stringent convergence criterion, the performance of our method is similar to [START_REF] Chen | Variational blue noise sampling[END_REF] with their recommended termination based on functional decrease, but twice as fast if the method of [START_REF] Chen | Variational blue noise sampling[END_REF]] is modified to use a stricter termination criterion based on the norm of the functional gradient. [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF] advocate a fixed number of iterations, which, again, does not imply either convergence or high-quality results. Our timings and theirs are, however, similar for the type of examples the authors used in their paper. See Fig. 9 for a summary of the timings of our algorithm compared to the CCVT-based methods of [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF][START_REF] Chen | Variational blue noise sampling[END_REF] for the generation of blue noise sampling of a constant density field.
Future Work
We note that our numerical treatment is ripe for GPU implementations as each element (from power diagram construction to line search) is known to be parallelizable. The scalability of our approach should also make blue noise generation over non-flat surfaces and 3D volumes practical since our formulation and numerical approach generalizes to these cases without modification. Blue noise meshing is thus an obvious avenue to explore and evaluate for numerical benefits. On the theoretical side it would be interesting to seek a fully variational definition of blue noise that incorporates requirements A, B and C altogether. Generating anisotropic and multiclass sampling would also be desirable, as well as extending our regularity-breaking procedure to other CVT-based methods. Finally, the intriguing connection between HOT meshes [START_REF] Mullen | HOT: Hodge Optimized Triangulations[END_REF]] and our definition of blue noise (which makes the Hodge-star for 0-forms not just diagonal, but constant) may deserve further exploration.
Acknowledgements. We wish to thank the authors of [START_REF] Chen | Variational blue noise sampling[END_REF][START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF] for providing data for comparisons, and Christian Lessig for proof-reading. FdG, KB, and MD acknowledge the valuable support of NSF grants DGE-1147470 and CCF-1011944 throughout this project.
Appendix: Functional Properties and Derivatives
In this appendix, we provide closed-form expressions for the first and second derivatives of the functional F defined in Eq. 2.
Notation: We denote by e_ij the regular edge between two adjacent points x_i and x_j, and by e*_ij the dual edge separating the partition regions V_i^w and V_j^w. (Remember that x ∈ e*_ij iff ‖x − x_i‖² − w_i = ‖x − x_j‖² − w_j.) We also refer to the average value of the field ρ over e*_ij as ρ_ij, and to the one-ring of x_i in the regular triangulation of (X, W) as Ω_i.
Reynolds transport theorem: The derivatives of F are most directly found by Reynolds theorem, which states that the rate of change of the integral of a scalar function f within a volume V is equal to the volume integral of the change of f, plus the boundary integral of the rate at which f flows through the boundary ∂V of outward unit normal n; i.e., in terse notation:
$\nabla \int_V f(x)\, dV = \int_V \nabla f(x)\, dV + \int_{\partial V} f(x)\, (\nabla x \cdot n)\, dA.$
W.r.t. weights: Since the regions V_i^w partition the domain D, the sum of all capacities is constant; hence,
$\nabla_{w_i} m_i + \sum_{j \in \Omega_i} \nabla_{w_i} m_j = 0.$
Moreover, Reynolds theorem applied to the capacities yields
$\nabla_{w_i} m_j = -\frac{\rho_{ij}}{2}\, \frac{|e^*_{ij}|}{|e_{ij}|}.$
Next, by using both Reynolds theorem and the equality of power distances along dual edges, one obtains
$\nabla_{w_i} E(X, W) = \sum_{j \in \Omega_i} (w_j - w_i)\, (\nabla_{w_i} m_j),$
$\nabla_{w_i} \sum_j w_j (m_j - m) = m_i - m + \sum_{j \in \Omega_i} (w_j - w_i)\, (\nabla_{w_i} m_j).$
Therefore, the gradient simplifies to $\nabla_{w_i} F(X, W) = m - m_i.$
Combining the results above yields that the Hessian of F with respect to weights is simply a negated weighted Laplacian operator:
$\nabla^2_W F(X, W) = -\Delta_{w,\rho} \quad \text{with} \quad \big(\Delta_{w,\rho}\big)_{ij} = -\frac{\rho_{ij}}{2}\, \frac{|e^*_{ij}|}{|e_{ij}|}.$
For fixed points, F is thus a concave function in weights and there is a unique solution Wopt for any prescribed capacity constraints.
W.r.t. position: We first note that $\nabla_{x_i} m_i + \sum_{j \in \Omega_i} \nabla_{x_i} m_j = 0$ as in the weight case. Using the definition of the weighted barycenter b_i (Eq. 3), Reynolds theorem then yields
$\nabla_{x_i} E(X, W) = 2\, m_i (x_i - b_i) + \sum_{j \in \Omega_i} (w_j - w_i)\, (\nabla_{x_i} m_j),$
$\nabla_{x_i} \sum_j w_j (m_j - m) = \sum_{j \in \Omega_i} (w_j - w_i)\, (\nabla_{x_i} m_j).$
Therefore: $\nabla_{x_i} F(X, W) = 2\, m_i (x_i - b_i).$
Equivalence of Optimizations:
The constrained minimization with Lagrangian multipliers (Eq. 1) is equivalent to extremizing the functional F (Eq. 2). Indeed, observe that any solution of the Lagrangian formulation is a stationary point of the functional F, since we just derived that a null gradient implies that mi = m (constraints are met) and xi = bi (centroidal power diagram). Another way to understand this equivalence is to observe that the gradient with respect to weights of the Lagrangian formulation is ∆ w,ρ (W + Λ); hence, extremization induces that W = -Λ + constant, and the Lagrange multipliers can be directly replaced by the (negated) weights.
Figure 2: Fractal. Optimal transport based blue noise sampling of a Julia set image (20K points). Colors of dots indicate (normalized) weight values, ranging from -30% to 188% of the average squared edge length in the regular triangulation. The histogram of the weights is also shown on top of the color ramp.

Figure 3: Zebra. Since our approach accurately captures variations of density, we can blue-noise sample images containing both fuzzy and sharp edges (160K-pixel original image (top right) courtesy of Frédo Durand). 40K points, generated in 159 seconds.

Figure 4: Stippling. Test from [START_REF] Secord | Weighted Voronoi stippling[END_REF]] (20K points). While [START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF]] does not capture density gradients very cleanly (see close-ups), our result is similar to CCVT [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF]] on this example, at a fraction of the CPU time. Comparative data courtesy of the authors.

Figure 5: Breaking Regularity. Optimization of F with a strict convergence threshold (‖∇_X F‖ ≤ 10^-5) can produce regularity (left), as revealed by a valence-colored visualization (top) and the distribution of local transport costs E_i (bottom). After jittering and relocating aliased regions (middle, colored cells), further optimization brings the point set to a shallower (i.e., less regular) configuration (right) as confirmed by valences and transport costs.

Figure 6: Pseudocode of the blue noise algorithm (input: domain D, density ρ, and number of points n; output: n points satisfying blue noise requirements A, B, and C).

Figure 8: Comparisons. Different blue noise algorithms are analyzed for the case of constant density over a periodic domain. Columns, left to right: CVT [START_REF] Du | Centroidal Voronoi Tessellations: Applications and algorithms[END_REF] stopped at α = 0.75, CCDT [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF], CapCVT [START_REF] Chen | Variational blue noise sampling[END_REF], [START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF]] with T=1/2, CCVT [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF], and our algorithm. Top row: distributions of 1024 points. Second row: gaps between white discs centered on sampling points, over black background; notice the uniformity of gap distribution in the two rightmost point sets. Third row: coloring based on number of neighbors for the Voronoi region of each site. Fourth row: 1024x1024 zoneplates for the function sin(x² + y²) (see Sec. 5 or [Lagae and Dutré 2006] for details). Fifth row: mean periodograms for 10 independent point sets (except for [Fattal 2011] for which only 5 point sets were available). Sixth row: radial power spectra; note the pronounced peak in our result, without any increase of regularity. Last row: anisotropy in dB ([Ulichney 1987], p. 56). Data/code for [Fattal 2011] and [Balzer et al. 2009] courtesy of the authors.

Figure 7: Discrete vs. Continuous CCVT. Our timings as a function of the number of points exhibit a typical n log n behavior, systematically better than the n² of [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF]]; yet, our radial spectra (inset, showing averages over 10 runs with 1024 points) even outperform the fine 1024-sample CCVT results. (Here, CCVT-X stands for X "points-per-site" as in [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF]].)

Figure 9: Performance. Our method (in grey) performs well despite a stringent convergence criterion (‖∇F‖ < 0.1 √(n m³)). The method of [START_REF] Chen | Variational blue noise sampling[END_REF]] (in green) behaves similarly when using a loose stopping criterion based on the functional decrease per iteration, but becomes twice slower (in blue) if the termination is based on the norm of the functional gradient to guarantee local optimality. The code released by [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF]] (in orange) also exhibits comparable performance by terminating the optimization not based on convergence, but after a fixed number of iterations.

Figure 10: Ramp. Blue noise sampling of a quadratic density function with 1000 points. The percentages in each quarter indicate ink density in the image, and point density in the examples. Observe that our method returns the best matching of the reference percentages, while still presenting an even and unstructured distribution. Comparative data courtesy of the authors.
for graphics applications.
Power Diagrams. From a point set X = {x_i}_{i=1...n} a natural partition of a domain D can be obtained by assigning every location in D to its nearest point x_i ∈ X. The region V_i assigned to point x_i is known as its Voronoi region, and the set of all these regions forms a partition called the Voronoi diagram. While this geometric structure (and its dual, the Delaunay triangulation of the point set) has found countless applications, power diagrams offer an even more general way to partition a domain based on a point set. They involve the notion of a weighted point set, defined as a pair (X, W) = {(x_1, w_1), ..., (x_n, w_n)}, where X is a set of points and W = {w_i}_{i∈1...n} are real numbers called weights. The power distance from a position x to a weighted point (x_i, w_i) is defined as ‖x − x_i‖² − w_i, where ‖.‖ indicates the Euclidean distance. Using this definition, with each x_i we associate a power cell (also called weighted Voronoi region)
| 54,552 | [
"7709",
"752254"
] | [
"74355",
"21398",
"413089",
"74355"
] |
01484447 | en | [
"info"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01484447/file/SIIE.pdf | Keywords: speech recognition, deep neural network, acoustic modeling
This paper addresses the topic of deep neural networks (DNN). Recently, DNN have become a flagship of artificial intelligence. Deep learning has surpassed state-of-the-art results in many domains: image recognition, speech recognition, language modelling, parsing, information retrieval, speech synthesis, translation, autonomous cars, gaming, etc. DNN have the ability to discover and learn the complex structure of very large data sets. Moreover, DNN have a great capability of generalization. More specifically, this paper focuses on speech recognition with DNN. We present an overview of different architectures and training procedures for DNN-based models. In the framework of broadcast news transcription, our DNN-based system decreases the word error rate dramatically compared to a classical system.
I. INTRODUCTION
More and more information appears on the Internet each day, and more and more information is requested by users. This information can be textual, audio or video, and represents multimedia information. About 300 hours of multimedia are uploaded per minute [START_REF] Lee | Spoken Content Retrieval -Beyond Cascading Speech Recognition with Text Retrieval[END_REF]. It becomes difficult for companies to view, analyze, and mine the huge amount of multimedia data on the Web. In these multimedia sources, audio data represents a very important part. Spoken content retrieval consists in "machine listening" of the data and extraction of information. Search engines like Google, Yahoo, etc. perform information extraction from text data very successfully and give a response very quickly. For example, if the user wants to get information about "Obama", a list of relevant textual documents will be returned by Google within a few seconds. In contrast, information retrieval from audio documents is much more difficult: it consists of "machine listening" of the audio data and detecting the instants at which the keywords of the query occur in the audio documents, for example, to find all audio documents speaking about "Obama".
Not only individual users, but also a wide range of companies and organizations are interested in these types of applications. Many business companies are interested in knowing what is said about them and about their competitors on broadcast news or on TV. In the same way, a powerful indexing system of audio data would benefit archives. Well organized historical archives can be rich in terms of cultural value and can be used by researchers or the general public.
This work was funded by the ContNomina project supported by the French National Research Agency (ANR) under contract ANR-12-BS02-0009.
All authors are with the Université de Lorraine, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54506, France, Inria, Villers-lès-Nancy, F-54600, France, CNRS, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54506, France (e-mail: fohr@loria.fr, illina@loria.fr, mella@loria.fr).
The classical approach for spoken content retrieval from audio documents is speech recognition followed by text retrieval [START_REF] Larson | Spoken Content Retrieval: A Survey of Techniques and Technologies[END_REF]. In this approach, the audio document is transcribed automatically using a speech recognition engine, and the transcribed text is then used for information retrieval or opinion mining. The speech recognition step is crucial, because errors occurring during this step will propagate into the following step.
In this article, we will present the new paradigm used for speech recognition: Deep Neural Networks (DNN). This new methodology for automatic learning from examples achieves better accuracy compared to classical methods. In section II, we briefly present automatic speech recognition. Section III gives an introduction to deep neural networks. Our speech recognition system and an experimental evaluation are described in section IV.
II. AUTOMATIC SPEECH RECOGNITION
An automatic speech recognition system requires three main sources of knowledge: an acoustic model, a phonetic lexicon and a language model [START_REF] Deng | Machine Learning Paradigms for Speech Recognition[END_REF]. Acoustic model characterizes the sounds of the language, mainly the phonemes and extra sounds (pauses, breathing, background noise, etc.). The phonetic lexicon contains the words that can be recognized by the system with their possible pronunciations. Language model provides knowledge about the word sequences that can be uttered. In the state-of-the-art approaches, statistical acoustic and language models, and to some extent lexicons, are estimated using huge audio and text corpora.
Automatic speech recognition consists in determining the best sequence of words Ŵ that maximizes the likelihood:
$\hat{W} = \arg\max_{W} P(X|W)\, P(W) \qquad (1)$
where P(X|W), known as the acoustic probability, is the probability of the audio signal X given the word sequence W. This probability is computed using the acoustic model. P(W), known as the language probability, is the a priori probability of the word sequence, computed using the language model.
A. Acoustic modeling
Acoustic modeling is mainly based on Hidden Markov Models (HMM). An HMM is a statistical model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states [START_REF] Rabiner | A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition[END_REF]. An HMM is a finite state automaton with N states, composed of three components: {A, B, Π}. A is the transition probability matrix (a_ij is the transition probability from state i to state j). Π is the prior probability vector (π_i is the prior probability of state i), and B is the emission probability vector (b_j(x) is the probability of emitting observation x in state j).
In speech recognition, the main advantage of using HMM is its ability to take into account the dynamic aspects of the speech. When a person speaks quickly or slowly, the model can correctly recognize the speech thanks to the self-loop on the states.
To model the sounds of a language (phones), a three-state HMM is commonly chosen (cf. Fig. 1). These states capture the beginning, central and ending parts of a phone. In order to capture the coarticulation effects, triphone models (a phone in a specific context of previous and following phones) are preferred to context-independent phone models.
Until 2012, emission probabilities were represented by a mixture of multivariate Gaussian probability distribution functions modeled as:
$b_j(x) = \sum_{m=1}^{M} c_{jm}\, \mathcal{N}(x; \mu_{jm}, \Sigma_{jm}) \qquad (2)$
The parameters of Gaussian distributions are estimated using the Baum-Welch algorithm.
A tutorial on HMM can be found in [START_REF] Rabiner | A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition[END_REF]. These models were successful and achieved best results until 2012.
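As an illustration, the emission density of Eq. 2 is usually evaluated in the log domain; the sketch below assumes diagonal covariance matrices (a common choice in speech recognition), whereas Eq. 2 itself allows full covariances.

```python
import numpy as np

def log_gmm_density(x, weights, means, variances):
    """log b_j(x) for a Gaussian mixture with diagonal covariances (cf. Eq. 2)."""
    x = np.asarray(x, dtype=float)
    d = x.shape[0]
    log_terms = []
    for c, mu, var in zip(weights, means, variances):
        ll = (np.log(c)
              - 0.5 * d * np.log(2.0 * np.pi)
              - 0.5 * np.sum(np.log(var))
              - 0.5 * np.sum((x - mu) ** 2 / var))     # log of c * N(x; mu, diag(var))
        log_terms.append(ll)
    log_terms = np.array(log_terms)
    top = log_terms.max()                              # log-sum-exp for numerical stability
    return top + np.log(np.exp(log_terms - top).sum())

# toy 2-component mixture over 3-dimensional feature vectors
print(log_gmm_density(np.zeros(3), [0.4, 0.6], [np.zeros(3), np.ones(3)], [np.ones(3), 2 * np.ones(3)]))
```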
B. Language modeling
Historically, the most common approach for language modeling is based on the statistical n-gram model. An n-gram model gives the probability of a word w_i given the n-1 previous words:
$P(w_i \mid w_{i-n+1}, \ldots, w_{i-1}).$
These probabilities are estimated on a huge text corpus. To avoid a zero probability for unseen word sequences, smoothing methods are applied, the best known smoothing method being Kneser-Ney [START_REF] Kneser | Improved Backing-off for m-gram Language Modeling[END_REF].
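The counting principle behind n-gram estimation can be illustrated with the toy sketch below, which builds an add-k smoothed bigram model; this is a deliberately simplified stand-in for the Kneser-Ney smoothing used in real systems.

```python
from collections import Counter

def train_bigram_lm(sentences, k=0.1):
    """Add-k smoothed bigram model: P(w_i | w_{i-1}) = (c(w_{i-1}, w_i) + k) / (c(w_{i-1}) + k*V)."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for s in sentences:
        words = ["<s>"] + s.split() + ["</s>"]
        vocab.update(words)
        unigrams.update(words[:-1])                    # counts of the histories
        bigrams.update(zip(words[:-1], words[1:]))     # counts of the word pairs
    V = len(vocab)
    return lambda prev, word: (bigrams[(prev, word)] + k) / (unigrams[prev] + k * V)

p = train_bigram_lm(["the cat sat", "the cat ran", "a dog ran"])
print(p("the", "cat"), p("the", "dog"))                # seen vs. unseen bigram
```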
C. Search for the best sentence
The optimal computation of the sentence to recognize is not tractable because the search space is too large. Therefore, heuristics are applied to find a good solution. The usual way is to perform the recognition in two steps:
• The aim of this first step is to remove words that have a low probability of belonging to the sentence to recognize. A word lattice is constructed using beam search. This word lattice contains the best word hypotheses. Each hypothesis consists of words, their acoustic probabilities, language model probabilities and the time boundaries of the words.
• The second step consists in browsing the lattice using additional knowledge to generate the best hypothesis.
Usually, the performance of automatic speech recognition is evaluated in terms of Word Error Rate (WER), i.e. the number of errors (insertions, deletions and substitutions) divided by the number of words in the test corpus.
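WER is computed with the standard edit-distance dynamic program, as in the following illustrative sketch.

```python
def word_error_rate(reference, hypothesis):
    """(substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    R, H = len(ref), len(hyp)
    d = [[0] * (H + 1) for _ in range(R + 1)]
    for i in range(R + 1):
        d[i][0] = i                                    # i deletions
    for j in range(H + 1):
        d[0][j] = j                                    # j insertions
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[R][H] / max(R, 1)

print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))   # 2 errors / 6 words
```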
III. DEEP NEURAL NETWORKS
In 2012, an image recognition system based on Deep Neural Networks (DNN) won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [START_REF] Krizhevsky | ImageNet Classification with Deep Convolutional Neural Networks[END_REF]. Then, DNN were successfully introduced in different domains to solve a wide range of problems: speech recognition [START_REF] Xiong | Achieving Human Parity in Conversational Speech Recognition[END_REF], speech understanding, parsing, translation [START_REF] Macherey | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation[END_REF], autonomous cars [START_REF] Bojarski | End to End Learning for Self-Driving Cars[END_REF], etc. [START_REF] Deng | A Tutorial Survey of Architectures, Algorithms and Applications for Deep Learning[END_REF]. Now, DNN are very popular in different domains because they allow achieving a high level of abstraction of large data sets using a deep graph with linear and non-linear transformations. DNN can be viewed as universal approximators. DNN obtained spectacular results and their training is now possible thanks to the use of GPGPU (General-Purpose Computing on Graphics Processing Units).
A. Introduction
Deep Neural Networks are composed of neurons that are interconnected. The neurons are organized into layers. The first layer is the input layer, corresponding to the data features. The last layer is the output layer, which provides the output probabilities of classes or labels (classification task).
The output y of a neuron is computed as a non-linear function of the weighted sum of its inputs. Each neuron input x_i can be either the input data, if the neuron belongs to the first layer, or the output of another neuron. An example of a single neuron and its connections is given in Figure 2.
A DNN is defined by three types of parameters [11]:
• The interconnection pattern between the different layers of neurons;
• The training process for updating the weights w_i of the interconnections;
• The activation function f that converts a neuron's weighted input to its output activation (cf. equation in Fig. 2).
A widely used choice is a non-linear function applied to the weighted sum of the inputs. Using only linear activation functions, neural networks can separate only linearly separable classes. Therefore, non-linear activation functions are essential for real data. Figure 3 shows some classical non-linear functions: sigmoid, hyperbolic tangent (tanh), RELU (Rectified Linear Units), and maxout.
Theoretically, the gradient should be computed using the whole training corpus. However, the convergence is very slow because the weights are updated only once per epoch. One solution to this problem is Stochastic Gradient Descent (SGD). It consists in computing the gradient on a small set of training samples (called a mini-batch) and in updating the weights after each mini-batch. This speeds up the training process.
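The sketch below illustrates these two ingredients: the classical non-linearities of Fig. 3 (maxout, which takes the maximum of several linear pieces, is omitted) and one epoch of mini-batch SGD, where `grad_fn` is a placeholder for the backpropagation step.

```python
import numpy as np

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
def relu(z):    return np.maximum(0.0, z)
# np.tanh is the hyperbolic tangent activation

def neuron(x, w, b, f=relu):
    """Single neuron: non-linearity applied to the weighted sum of its inputs (Fig. 2)."""
    return f(np.dot(w, x) + b)

def sgd_epoch(W, b, X, y, grad_fn, lr=0.01, batch_size=32, rng=None):
    """One epoch of mini-batch SGD; `grad_fn(W, b, Xb, yb)` returns the gradients (dW, db)."""
    rng = rng if rng is not None else np.random.default_rng()
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        dW, db = grad_fn(W, b, X[idx], y[idx])
        W -= lr * dW                                   # weights updated after each mini-batch
        b -= lr * db
    return W, b
```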
During training, it may happen that the network learns features or correlations that are specific to the training data rather than generalizing to the test data. This phenomenon is called overfitting. One solution is to use a development set that should be as close as possible to the test data. On this development set, the recognition error is calculated at each epoch of the training. When the error begins to increase, the training is stopped. This process is called early stopping. Another solution to avoid overfitting is regularization. It consists in adding a constraint to the error function to restrict the search space of the weights. For instance, the sum of the absolute values of the weights can be added to the error function [START_REF] Goodfellow | Deep Learning[END_REF]. One more solution to avoid overfitting is dropout [START_REF] Srivastava | Dropout: A Simple Way to Prevent Neural Networks from[END_REF]. The idea is to randomly "remove" some neurons during training. This prevents neurons from co-adapting too much and performs model averaging.
C. Different DNN architectures
There are different types of DNN regarding the architecture [START_REF] Lecun | Deep Learning[END_REF]:
• MultiLayer Perceptron (MLP): each neuron of a layer is connected with all neurons of the previous layer (feedforward and unidirectional).
• Recurrent Neural Network (RNN): when it models a sequence of inputs (time sequence), the network can use information computed at the previous time (t-1) while computing the output for time t. Fig. 4 shows an example of a RNN for language modeling: the hidden layer h(t-1) computed for the word t-1 is used as input for processing the word t [START_REF] Mikolov | Statistical Language Models based on Neural Networks[END_REF].
• Long Short-Term Memory (LSTM) is a special type of RNN. The problem with RNN is the fact that the gradient vanishes, and the memory of past events decreases. Sepp Hochreiter and Jürgen Schmidhuber [START_REF] Hochreiter | Long Short-Term Memory[END_REF] proposed a new recurrent model that has the capacity to recall past events. They introduced two concepts: memory cells and gates. These gates determine when the input is significant enough to remember or forget the value, and when it outputs a value. Fig. 5 displays the structure of an LSTM.
• Convolutional Neural Network (CNN) is a special case of Feedforward Neural Network. The layers consist of filters whose parameters are learned (cf. Fig. 6; a small convolution and pooling sketch is given after the figure caption below). One advantage of this kind of architecture is the sharing of parameters, so there are fewer parameters to estimate. In the case of image recognition, each filter detects a simple feature (like a vertical line, a contour line, etc.). In deeper layers, the features are more complex (cf. Fig. 7). Frequently, a pooling layer is used. This layer performs a non-linear downsampling: max pooling (cf. Fig. 8) computes maximum values on sub-regions. The idea is to reduce the size of the data for the following layers. An example of a state-of-the-art acoustic model using CNN is given in Fig. 9.
The main advantage of RNN and LSTM is their ability to take into account the temporal evolution of the input features. These models are widely used for natural language processing. The strong point of CNN is translation invariance, i.e. the ability to discover structural patterns regardless of their position. For acoustic modelling, all these structures can be exploited.
Fig. 6. Example of a convolution with the filter
[1 0 1]
[0 1 0]
[1 0 1]
The original image is in green, the filter applied on the bottom right of the image is in orange and the convolution result is in pink.
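The convolution of Fig. 6 and the max pooling of Fig. 8 can be illustrated in a few lines; as in most toolkits, the "convolution" is implemented as a cross-correlation, and the 3x3 filter is the one shown in Fig. 6.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution (cross-correlation) of a single-channel image with one filter."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

def max_pool_2x2(fmap):
    """2x2 max pooling with stride 2: keep the maximum value of each 2x2 block (Fig. 8)."""
    H, W = fmap.shape
    fmap = fmap[:H - H % 2, :W - W % 2]
    return fmap.reshape(fmap.shape[0] // 2, 2, fmap.shape[1] // 2, 2).max(axis=(1, 3))

kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]])                         # the filter of Fig. 6
image = np.random.randint(0, 2, size=(6, 6))
print(max_pool_2x2(conv2d_valid(image, kernel)))
```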
A difficult DNN issue is the choice of the hyperparameters: number of hidden layers, number of neurons per layer, choice of non-linear functions, choice of learning rate adaptation function. Often, some hyperparameters are adjusted experimentally (trial and error), because they depend on the task, the size of the database and data sparsity.
D. DNN-based acoustic model
As said previously, for acoustic modeling, HMM with 3 left-to-right states are used to model each phone or contextual phone (triphone). Typically, there are several thousand of HMM states in a speech recognition system.
In a DNN-based acoustic model, contextual phone HMMs are kept but all the Gaussian mixtures of the HMM states (equation 2) are replaced by a DNN. Therefore, a DNN-based acoustic model computes the observation probability b_j(x) of each HMM phone state given the acoustic signal using a DNN [START_REF] Hinton | Deep Neural Networks for Acoustic Modeling in Speech Recognition[END_REF]. The input of the DNN is the acoustic parameters at time t. The DNN outputs correspond to all HMM states, one output neuron per HMM state.
In order to take into account contextual effects, the acoustic vectors from a time window centered on time t (for instance from time t-5 to t+5) are put together.
To train the DNN acoustic model, the alignment of the training data is necessary: for each frame, the corresponding HMM state that generated this frame should be known. This alignment of the training data is performed using a classical GMM-HMM model.
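Building the DNN input from the acoustic frames amounts to splicing each frame with its neighbors, as in the sketch below; repeating the first and last frames at the edges is one common padding choice (the exact padding used in the system is not specified here).

```python
import numpy as np

def splice_frames(features, left=5, right=5):
    """Concatenate each frame with its context from t-5 to t+5; `features` has shape (T, D)."""
    T, D = features.shape
    padded = np.vstack([np.repeat(features[:1], left, axis=0),
                        features,
                        np.repeat(features[-1:], right, axis=0)])
    window = left + right + 1
    return np.stack([padded[t:t + window].reshape(-1) for t in range(T)])

# e.g. 40-dimensional acoustic features over 100 frames -> 100 spliced vectors of 11*40 = 440 values
print(splice_frames(np.random.randn(100, 40)).shape)    # (100, 440)
```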
E. Language model using DNN
A drawback of classical N-gram language models (LM) is their weak ability of generalization: if a sequence of words was not observed during training, the N-gram model will give a poor probability estimation. To address this issue, one solution is to move to a continuous space representation. Neural networks are efficient for carrying out such a projection. To take into account the temporal structure of language (word sequences), RNN have been largely studied. The best NN-based language models use LSTM and RNN [23][24].
IV. KATS (KALDI BASED TRANSCRIPTION SYSTEM)
In this section we present the KATS speech recognition system developed in our speech group. This system is built using Kaldi speech recognition toolkit, freely available under the Apache License. Our KATS system can use GMM-based and DNN-based acoustic models.
A. Corpus
The training and test data were extracted from the radio broadcast news corpus created in the framework of the ESTER project [START_REF] Povey | The Kaldi Speech Recognition Toolkit[END_REF]. This corpus contains 300 hours of manually transcribed shows from French-speaking radio stations (France Inter, Radio France International and TVME Morocco). Around 250 h were recorded in studio and 50h on telephone. 11 shows corresponding to 4 hours of speech (42000 words) were used for evaluation.
B. Segmentation
The first step of our KATS system consists in segmentation and diarization. This module splits and classifies the audio signal into homogeneous segments: non-speech segments (music and silence), telephone speech and studio speech. For this, we used the toolkit developed by LIUM [START_REF] Rouvier | An Open-source State-of-the-art Toolbox for Broadcast News Diarization[END_REF]. We processed separately telephone speech and studio speech in order to estimate two sets of acoustic models; studio models and telephone models.
C. Parametrization
The speech signal is sampled at 16 kHz. For analysis, 25 ms frames are used, with a frame shift of 10 ms. 13 MFCC were calculated for each frame completed by the 13 delta and 13 delta-delta coefficients leading to a 39-dimension observation vector. In all experiments presented in this paper, we used MCR (Mean Cepstral Removal).
D. Acoustic models
In order to compare GMM-HMM and DNN-HMM acoustic models, we used the same HMM models with 4048 senones. The only difference is the computation of the emission probability (b_j(x) of equation 2): for GMM-HMM it is a mixture of Gaussians, while for DNN-HMM it is a deep neural network. The language model and lexicon stay the same. For the GMM-HMM acoustic models, we used 100k Gaussians. For the DNN, the input of the network is the concatenation of 11 frames (from t-5 to t+5) of 40 parameters. The network is an MLP with 6 hidden layers of 2048 neurons per layer (cf. Fig. 10). The output layer has 4048 neurons (corresponding to the 4048 senones). The total number of parameters in the DNN-HMM is about 30 million.
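A minimal PyTorch sketch of an MLP with these dimensions is given below; the actual KATS model was trained with Kaldi, and the choice of sigmoid activations here is an assumption. Counting the weights (440x2048, five 2048x2048 layers, 2048x4048) plus biases gives roughly 30 million parameters, consistent with the figure above.

```python
import torch.nn as nn

layers, dim_in = [], 11 * 40          # 11 stacked frames of 40 parameters
for _ in range(6):                    # 6 hidden layers of 2048 neurons
    layers += [nn.Linear(dim_in, 2048), nn.Sigmoid()]
    dim_in = 2048
layers += [nn.Linear(2048, 4048)]     # one output neuron per senone
dnn = nn.Sequential(*layers)

n_params = sum(p.numel() for p in dnn.parameters())
print(f"{n_params / 1e6:.1f} M parameters")   # ~30.2 M
```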
E. Language models and lexicon
Language models were trained on large text corpora: newspapers (Le Monde, L'Humanité), newswire (Gigaword), the manual transcriptions of the training corpus, and web data. The total size was 1.8 billion words. The n-gram language model is a linear combination of LMs trained on each text corpus. In all experiments presented in this paper, only a 2-gram model is used, with 40 million bigrams and a lexicon containing 96k words and 200k pronunciations.
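As a hedged sketch of the linear combination mentioned above (the component corpora, weights and function names below are illustrative, not those of the actual system), interpolating bigram models amounts to a weighted sum of their probabilities:

```python
def interpolated_bigram(prob_fns, weights):
    """prob_fns: list of functions p_i(word, history) -> probability,
    one per source corpus; weights: interpolation coefficients summing to 1,
    typically tuned on held-out data."""
    def p(word, history):
        return sum(w * f(word, history) for w, f in zip(weights, prob_fns))
    return p

# Example: P(w | h) = 0.5*P_news(w|h) + 0.3*P_gigaword(w|h) + 0.2*P_web(w|h)
```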
F. Recognition results
Recognition results in terms of word error rate (WER) for the 11 shows are presented in Table 1. The confidence interval of these results is about +/-0.4%. Two systems are compared. These systems use the same lexicon and the same language models but differ by their acoustic models (GMM-HMM versus DNN-HMM), so the comparison is fair. For all shows, the DNN-based system outperforms the GMM-based system. The WER difference is 5.3% absolute and 24% relative, and the improvement is statistically significant. The large difference in performance between the two systems suggests that DNN-based acoustic models achieve better classification and generalize better.
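For clarity, the absolute and relative reductions quoted above follow directly from the averages in Table 1 (a simple check, not additional experimental data):

```python
wer_gmm, wer_dnn = 22.4, 17.1           # average WER (%) from Table 1
absolute = wer_gmm - wer_dnn            # 5.3 points absolute
relative = 100 * absolute / wer_gmm     # ~23.7 %, reported as 24 % relative
print(absolute, round(relative, 1))
```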
V. CONCLUSION
Since 2012, deep learning has shown excellent results in many domains: image recognition, speech recognition, language modelling, parsing, information retrieval, speech synthesis, translation, autonomous cars, gaming, etc. In this article, we presented deep neural networks for speech recognition: different architectures and training procedures for acoustic and language models were reviewed. Using our speech recognition system, we compared GMM and DNN acoustic models. In the framework of broadcast news transcription, we showed that the DNN-HMM acoustic model decreases the word error rate dramatically compared to the classical GMM-HMM acoustic model (a statistically significant 24% relative improvement).
The DNN technology is now mature enough to be integrated into products. Nowadays, the main commercial recognition systems (Microsoft Cortana, Apple Siri, Google Now and Amazon Alexa) are based on DNNs.
Fig. 1. HMM with 3 states, left-to-right topology and self-loops, commonly used in speech recognition.
Fig. 2. Example of one neuron and its connections.
Fig. 3. Sigmoid, ReLU, hyperbolic tangent and maxout non-linear functions.
Fig. 4. Example of a RNN.
Fig. 5. Example of LSTM with three gates (input gate, forget gate, output gate) and a memory cell (from [19]).
Fig. 7. Feature visualization of a convolutional network trained on ImageNet, from Zeiler and Fergus [20].
Fig. 8. Max pooling with a 2x2 filter (from www.wildml.com).
Fig. 9. The very deep convolutional system proposed by IBM for acoustic modeling: 10 CNN, 4 pooling and 3 fully connected (FC MLP) layers (from [22]).
Fig. 10. Architecture of the DNN used in the KATS system.
Table 1. Word Error Rate (%) for the 11 shows obtained using the GMM-HMM and DNN-HMM KATS systems.
Show # words GMM-HMM DNN-HMM
20070707_rfi (France) 5473 23.6 16.5
20070710_rfi (France) 3020 22.7 17.4
20070710_france_inter 3891 16.7 12.1
20070711_france_inter 3745 19.3 14.4
20070712_france_inter 3749 23.6 16.6
20070715_tvme (Morocco) 2663 32.5 26.5
20070716_france_inter 3757 20.7 17.0
20070716_tvme (Morocco) 2453 22.8 17.0
20070717_tvme (Morocco) 2646 25.1 20.1
20070718_tvme (Morocco) 2466 20.2 15.8
20070723_france_inter 8045 22.4 17.4
Average 41908 22.4 17.1
ACKNOWLEDGMENT
This work was funded by the ContNomina project supported by the French National Research Agency (ANR) under contract ANR-12-BS02-0009. | 23,286 | [
"15652",
"15902",
"15663"
] | [
"420403",
"420403",
"420403"
] |
01484479 | en | [
"shs"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01484479/file/FINAL%20ENGLISH%20VERSION%20Simondon-matrixTotal.pdf | Rémi Jardat
email: r.jardat@istec.fr
Strategy Matrixes as Technical Objects: Using the Simondonian concepts of concretization, milieu, allagmatic principles and transindividuality in a business strategy context
Keywords: Simondon, strategy matrix, transindividual. Subject classification codes: xxx?
Strategy matrixes as technical objects: Using the Simondonian concepts of concretization, milieu, and transindividuality in a business strategy context.
Introduction
To date, most management research papers drawing on the work of Gilbert Simondon have relied chiefly on a single text, On the Mode of Existence of Technical Objects (1958), which was, in fact, the philosopher's second and ancillary doctoral thesis. To get to the core of Simondon's thinking we must turn to a seminal and far more challenging work, his first and main thesis: Individuation in the Light of the Notions of Form and Information. Simondon never succeeded in having it published in his lifetime, due to serious health problems that persisted until his death in 1989. His full thesis, as well as other equally important philosophical tracts, was not published until quite recently (Simondon 2005). In the meantime, the great originality and wealth of his thinking became a source of inspiration for a small circle of thinkers of no less stature than Edgar Morin and Bruno Latour (Barthélémy, 2014, 180-186). But only Stiegler (1994, 1996, 2001, 2004, 2006) has truly delved into Simondon's ideas headlong, while openly declaring himself a Simondonian. More recently, his focus has been squarely on proletarianization seen through a pattern of alienation, as witnessed in relations between man and the technical object (or technological system), described by Simondon in his secondary thesis (1958, 328-329, 337). In relying not only on Simondon's secondary thesis, but also on notions developed in his main thesis, including some fundamental schemas and concepts found there, this paper seeks to make a novel contribution to management research by taking a broader approach to Simondon than has been the case with studies undertaken so far.
The empirical data used in this paper consist of archival materials that allow us to trace the emergence in France of a field of knowledge that underlies strategic management studies. In the modernizing upheaval of the post-war years, strategic management tools appeared there amid the rise of a "field," as defined by Bourdieu (1996: 61), that was fraught with disputes about legitimacy; and it exhibited "metastable" inner and outer surroundings, or a milieu, as defined by Simondon (2005: 16, 32-33), all of which proved conducive to the crystallization of new ways of thinking. In what was a largely local, intellectual breeding ground (albeit one open to certain outside influences), a debate of ideas eventually gave rise to what Michel Foucault termed a savoir (1969: 238) in terms of organizational strategy materials; namely, a set of objects, modes of expression, concepts and theoretical choices shaped according to their own, specific body of rules.
In keeping with Michel Foucault's "archaeological" approach, which is driven by data collection, we reviewed a set of post-war academic, specialist and institutional literature up until 1975, when the field of strategic management studies seems to have become relatively stable, or at least had attained metastability, as far as its rationale and discursive rules were concerned.
The author conducted an initial Foucauldian analysis of this material, as part of an extensive research study, whose results remain unpublished and have not yet been translated into English. For our present purposes, we have returned to that material, recognizing it as an important trace back to a metastable system of knowledge in a structuration stage, where management-related technical objects were being created.
(1) Using those archives, this paper focuses on analyzing the technicity and degree of individuation behind strategic matrixes, while looking at how they originated.
Hence, we have tested and validated the relevancy of evaluating an abstract, cognitive management-related object by reference to a scale that Simondon had developed for concrete technical objects. We also show that the "concretization" and "technicity" categories have retained their relevancy for studying the production of new managerial matrixes in a variety of cultural contexts.
(2) Our findings call for an initial, theoretical discussion concerning the notion of technical culture. Specifically, we shall see how the Simondonian notion of transindividualism makes it possible to address factors governing the emergence and transmission of these objects.
(3) In a second discussion, on epistemological issues, Foucauldian archeology and Simondonian allagmatic principles will be contrasted in terms of how they open up new insights or tensions regarding the strategic matrix. Such an exercise is possible because the genesis of a management-related technical object brings into play, simultaneously, principles of both operation and structure. It also offers management a valuable glimpse into the realm that is occupied by what Simondon calls the essence of technicity (Simondon, 1958, 214).
Genesis and concretization of strategy matrixes in the French industrial and institutional milieu
Can strategy matrixes be studied as technical objects and, if so, to what extent do Simondonian concepts help explain their success, manner of use, and limitations? In addressing that question, we shall examine in (1.1) how Simondon uses the notions of technicity and the technical individual in relation to material objects, which allows these concepts to be applied to abstract technological objects. Then, in (1.2), using an archive of strategy-related knowledge defined according to certain, specific parameters, we shall examine technicity and degree of individuation in the context of strategy matrixes. Lastly, in (1.3) we will try to determine the extent to which matrixes do or do not develop their own technological milieus, as they are transmitted across most every continent and cultural context.
The Simondonian notion of technicity: ontology, the individual and milieu.
Simondon defines three stages underpinning the ontological status of technology by introducing the differences between "technical (or technological) elements," "technical individuals," and "technical totalities" or "ensembles." The isolated, individual technological object is comprised of technological elements, or components; and, for the purposes of broad-scale fabrication and applications, the object must be brought together with a variety of other technological objects and integrated into a vast installation, or ensemble.
Figure 1: Different versions of a technological object, the adze, by Leroi-Gourhan (1971 [1943], p. 187).
The mature technological object, as described by Simondon, would appear to correspond to adze n° 343.
To illustrate this point, while drawing on a related study undertaken by André Leroi-Gourhan [1943] (1971, 184-189), let us consider how Simondon looks at the process of development of the seemingly simple woodworking tool, the adze (Simondon, 1958, 89-89):
The technological elements of the adze consist not only of its various physical parts (the blade and the shaft) but also the convergence of the totality of each of its functions as a tool: "a cutting edge," "a heavy, flat part that runs from the socket to the cutting edge," and "a cutting edge that is more strongly steel-plated than any other part." The adze is a technical individual because it is a unity of constituent elements that are brought together to generate a productive "resonance," as each part is held in place by, and supports, the other, to ensure correct functioning and offer resistance against the forces of wear and tear:
"The molecular chains in the metal have a certain orientation which varies according to location, as is the case with wood whose fibers are so positioned as to give the greatest solidness and elasticity, especially in the middle sections between the cutting edge and the heavy flat section extending from the socket to the cutting edge; this area close to the cutting edge becomes elastically deformed during woodworking because it acts as a wedge and lever on the piece of wood in the lifting process."
It is as if this tool "as a whole were made of a plurality of differently functioning zones soldered together."
The adze-as-technical-object is totally inconceivable, and could not have been manufactured efficiently, had it not been for the technical ensemble that gave it its shape and was ultimately transmitted across time:
The tool is not made of matter and form only; it is made of technical elements developed in accordance with a certain scheme of functioning and assembled as a stable structure by the manufacturing process. The tool retains in itself the result of the functioning of a technical ensemble. The production of a good adze requires the technical ensemble of the foundry, the forge, and the quench hardening process.
That three-stage ontology of element-individual-ensemble is behind what Simondon terms the technicity of the object, and this is what makes it possible to generalize the concept beyond material objects alone: it is made of "technical elements developed according to a certain scheme of functioning and assembled as a stable structure by the manufacturing process." Technical objects exhibit "qualities that relate neither to pure matter nor to pure form, but that belong to the intermediary level of schemes" (Simondon, 1958, p. 92). Technicity has much more to do with the relational than the material: that is, the technological object is nothing more than an ensemble of relationships between technical elements, as expressed in thought, that are established, implemented then repeatedly reintroduced, re-activated. And the ensemble drives its design, manufacture, use and maintenance.
For Simondon, technicity is a rich and complex notion: there are degrees of technicity, and through the process he dubs concretization, an object evolves and becomes increasingly technical. As Simondon uses the term, it is not at all to be taken in direct opposition with the notion of abstraction. Concretization of a technical object occurs through a series of improvements, which can sometimes be progressive and incremental, or sometimes even brutal, as an object condenses each of the various functions into a tighter and tighter cohesion, using fewer constitutive elements, holding out the possibility of enhanced performance, greater structural integrity, better reliability and optimal productivity of manufacture: "with each structural element performing several functions instead of just one" (Simondon, 1958, p. 37). Under concretization, as each technical element grows in sophistication, another process, called individuation, ensures, simultaneously, that the technical object becomes indivisible and autonomous in the technical field. In the case of cathode tubes, for example, "the successive precisions and reinforcements incorporated into this system serve to counter any unforeseen drawbacks that arise during its use and transform improvements into stable features" (pp. 36-37). In that light, "modern electronic tubes" can be seen as more individualized objects, because they are more concretized than the "primitive triode" (ibid.).
In the world of technical objects, the different degrees of technicity reflect a more general Simondonian ontology, which introduces several stages of individuation (Simondon, 2005). For Simondon, an individual, that is, any given entity, is never really complete but is constantly engaged in a process of becoming. In the Simondonian ontology, the question of being or not being an individual is a side issue. It is more salient to ask whether an entity is or is not engaged in the process of individuation, whereby a technical object, for example, tends to become more and more of an individual. In that perspective, there is no stable, static, individual/non-individual dichotomy but, rather, successive degrees of individuation or dis-individuation, with the death of a human being, and the attendant decomposition, being a prime illustration.
Moreover, technical objects cannot undergo further individuation without progressively co-creating their associated environment, or milieu. The milieu is a part of the technical object's (and also the human being's) surroundings, and, whenever it is sufficiently close to the object, it contributes to its creation, potentially to the point of modifying its basic attributes, while also providing the object with the resources needed for its proper functioning. That is a singular notion insofar as it challenges the entity vs. environment duality traditionally invoked in management science or the inside versus outside dichotomy found in modern life sciences. Indeed, just as living beings have their own interior milieu where the workings of their vital mechanisms depend on an extra-cellular fluid environment (not unlike the saltwater sea environment that harbored the first single-celled organisms), technical individuals develop in conjunction with their environment, which is both within and without. It is in those exact terms that Simondon explains the technical object of the 1950s (1958, p. 70):
The associated milieu is the mediator of the relationship between manufactured technical elements and natural elements within which the technical being functions. That is not unlike the ensemble created by oil and water in motion within the Guimbal turbine and around it.
That idea is of paramount importance in understanding the triadic concept of the technical element, the technical individual and the technical ensemble. An individual can be identified by the unity of its associated milieu. That is, on the individual level, technical elements "do not have an associated milieu" (Simondon, 1958, p. 80), whereas technical ensembles do not depend on any one, single milieu: "We can identify a technical individual when the associated milieu exists as prerequisite of its functioning, and we can identify an ensemble when the opposite is the case." (Simondon, 1958, p. 75).
"
The living being is an individual that brings with it its associated milieu" (Simondon, 1958, p. 71)
The technicity of strategy matrixes: an overview of their genesis based on archives obtained from the field in post-war France.
The archives that we have compiled consist of the entire set of strategy-related literature published in France from 1945 to 1976. For the sake of completeness, relevant texts and periodicals from the national library, the Bibliothèque Nationale de France (BNF), have also been included. The list of the 200 archives, selected from among thousands of brochures, periodicals and other texts, as well as their content, has been kept available for scientific reference. Our selection was guided by the idea that "strategists" are those who attempt to describe management-strategy-related practices or theory for market actors who are not directly affected by the strategy itself (usually academics, journalists, business gurus and popularizers of business strategy, etc.). In that period, well before the three leading strategy consulting firms (BCG, ADL, McKinsey) appeared on the scene in France and introduced highly elaborate strategy tools, there was a relatively rich variety of local, strategically-focused firms producing their own matrixes. After looking at documents focusing specifically on corporate strategy, we drew up the following list, which we have presented in chronological order (each matrix is covered by a separate monographic study, shown in Appendixes 1 through 6):
"Sadoc"s Table of Ongoing Adaptation to Market Changes" [START_REF] Octave | Stratégies type de développement de l"entreprise[END_REF]See Appendix 1.
The "Panther/Elephant" Matrix [START_REF] Charmont | La Panthèreoul"elephant[END_REF], See Appendix 2 A French translation of the "Ansoff Product/Market Growth Matrix" for Concentric Diversification (1970). See Appendix 3.
"Morphological Boxes" and"Morphological Territories" [START_REF] Dupont | Prévision à long termeetstratégie[END_REF]. See Appendix 4. The"Houssiaux Matrix"for analyzing relations between multinational business enterprises and national governments (Houssiaux, 1970).See Appendix 5.
The "Bijon matrix" analyzing the link between firms" profitability and market share [START_REF] Bijon | Recherched"unetypologie des entreprises face au développement[END_REF].See Appendix 6.
Out of all of these strategic analysis tools, only the American models, which have since been translated into French (Ansoff, 1970), have stood the test of time and continue to serve as key reference guides for strategy professionals and instructors alike (see Appendix 7). Unfortunately, all purely French-designed models have fallen into obscurity, even though the range of strategic options they offer is as broad as those found in both contemporary and subsequent American tools (which is the case, most notably, with the Sadoc/Gélinier matrix (1963)). One can only wonder if cultural bias played a role in this turn of events, where American culture's soft power eventually swept away everything in its path.
Of course, that would be a far too simplistic explanation. More intriguing, and more pertinent to our current discussion, is the role played by technical culture, under the definition put forward by Simondon (1958). All of these matrixes (i) share most of their technical elements in common and (ii) can be classified into technical ensembles with considerable overlap. Yet (iii) they differ greatly in terms of their degrees of concretization and the intensity of the role played by their milieu. To paraphrase sociologist Bruno Latour, a matrix "puts down the entire world on a flimsy sheet of paper" (2006, 57).
(i) Technical elements common to most matrix ensembles.
All of the matrixes under study give a composite picture of market position in relation to a relatively high number of strategic choices (16 under the Houssiaux matrix), using a two-dimensional chart that facilitates memorization and ranking. The technical elements of a matrix are thus extremely simple: two axes, with segmentation variables, yielding a two-dimensional segmentation result. It should be noted, however, that, depending on the matrixes, some elements are more specialized and sophisticated than others. Whereas the morphological boxes (Appendix 4) and the panther/elephant matrix (Appendix 2) have axes that are fully segmented, alternating between different types of strategic parameters, others use graduated scales: identifying products at a greater or lesser remove from the company's current line of activity, in the case of the Ansoff matrix (Appendix 3); degree of centralization of state regulatory policies, in the case of the Houssiaux matrix (Appendix 5); graduation of one of the two axes (phase of the product life cycle) in the case of Gélinier's "Table of Adaptation to Market Changes" (Appendix 1).
These same basic elements continue to appear in subsequent or newer strategy planning models, as a simple bibliographical search reveals. Entering a query into the EBSCO Business Source Premier database, using the key words "strategy" and "matrix" and the Boolean search operators "and" and "or," yielded 70 articles recently published by academic journals. Of those, 14 display strategic choice matrixes, some of which deal with "marketing strategy" but use a similar matrix structure. And it should be noted that some recent matrixes are "skewed": the polarity of one of the axes is reversed in relation to the orthogonal axis, making it difficult to track a diagonal gradient (e.g. Azmi, 2008). But they are notable less for the elements they contain than for their arrangement. However that may be, these matrixes have been analyzed and included in the list annexed in Appendix 7, and illustrated in Appendix 8.
(ii) Technical ensembles that are found in virtually every matrix and fall within the scope of a metastable discourse specific to post-war France
The axis of each matrix typically included a range of economic or institutional factors necessary to build a strategic knowledge base, well before the three major, classic Anglo-American models were introduced, as we showed in a previous paper on these archives (Author). Previously, after collecting archival literature and analyzing it in relation to Michel Foucault's archeological approach, we examined the institutional conditions that made it possible for the object of knowledge (the firm's strategic choice) to emerge and be identified. In addition, we explored how it became possible to classify various strategic choices by categories. That discursive and institutional ensemble that gave rise to the "strategic choice" as object of knowledge is, in our view, the technical ensemble in which the object "strategy matrix" has been created.
It must be borne in mind that a special set of circumstances prevailed in postwar France, as the State and business enterprises vied to become the sole, fixed point of reference for stakeholders when making commercial and economic decisions. At issue was the question of who should ultimately hold sway over management thinking: the State [START_REF] Pierre | Stratégieséconomiques, etudes théoriqueset applications aux entreprises[END_REF][START_REF] Macaux | Les concentrations d"entreprises, débats qui ontsuivi la conference de M. de termont[END_REF], trade unions (Pouderoux, p. 242), employers and business managers (Termont, 1955;[START_REF] Macaux | Les concentrations d"entreprises, débats qui ontsuivi la conference de M. de termont[END_REF]Demonque, 1966) or consultants [START_REF] Octave | La function de contrôle des programmesd[END_REF]. The country's industrial economics, largely imported from Anglo-American models, offered at least a discursive answer to the question, well before the writings of Michael Porter appeared between 1970 and 1980. Notably, the French translation of Edith Penrose's The Theory of the Growth of the Firm, in 1961, offered valuable insight into the interplay between growth, profitability, firm size and the industrial sector, which underlay the "standard strategy scenarios" (e.g. [START_REF] Octave | Stratégies type de développement de l"entreprise[END_REF]) established by French strategists. Penrose's work also helped resolve some of the above-mentioned controversies that had embroiled French institutions, regarding who would serve as the point of reference and undertake decision-making initiatives.
The formulation of standard strategy scenarios also gave new life to information and reporting plans as well as the tenor of economic debate, by refocusing them on the now-legitimate pursuit of corporate success.
In sum, despite cultural particularities, the intensifying competitive pressure on business enterprises, coupled with the ability to collect sectoral economic data, created a metastable setting such that the technical object "matrix" gained utility and could be produced locally by different authors (albeit with certain variations), although technical elements were virtually, if not entirely, identical.
(iii) Sharply contrasting degrees of concretization.
The study of matrixes that emerged in the French discursive sphere during the postwar period highlighted several functions or operations (see Appendix monographs n°1 to n°6), which converged toward legitimizing the choices made by executive management and reinforcing its ability to use arguments to exhibit its mastery of the complex and changing reality of the organizational environment:
Compressing function, insofar as matrixes offer the corporate manager several succinct criteria for decision-making and control, making it possible to reduce the number of profit-generating/growth factors at the corporate level, keeping only those that appear to be relevant;
Linking function, because matrixes render the changing situation of the business enterprise more comprehensible through their invariant laws, which allows simplicity to guide complex choices;
Totalizing function, offering the company director the assurance that linking could apply to a seamless, boundless world.
However, as was the case for the material technical objects studied by Simondon, the capacity to deliver these properties in abstract technical objects, such as strategy matrixes, was bound to generate unexpected or "unlooked-for side effects" (Simondon, 1958, p. 40). And there are, indeed, a number of underlying tensions between each of the three principal properties of the strategic object:
The tension between linking and totalizing: the orientation toward totalizing leads to embracing an overly broad picture of reality, as variables are so numerous and disparate that linking them becomes impossible. As a result, it is difficult to understand how an elaborated Houssiaux matrix (Appendix 5), for example, can serve as a guide for management in arbitrating differences and establishing priorities between the overseas subsidiaries of a multinational firm positioned in various boxes of the matrix.
The tension between linking and compressing: this occurs as the ideal of finding the "one best way" of taking a course of action, on the one hand, is pitted against the ideal of the firm that can remain flexible and open to active participation by stakeholders. Typological matrixes such as the panther/elephant matrix [START_REF] Charmont | La Panthère ou l'elephant[END_REF] are cruel reminders of that difficulty.
These tensions can be resolved more or less successfully depending on the matrixes employed. Indeed, according to how they are arranged, the technical elements of certain matrixes can serve multi-functional and complementary purposes, including acting as sources of mediation, which are aimed at alleviating these tensions:
Categorization, mediation between compressing and linking.
The intersection between two segmented axes brings about a coincidence between two company-related typologies, and a kind of natural correspondence seems to develop between them. Typically, in the panther/elephant matrix (Appendix 2) or in the Houssiaux matrix (Appendix 5), that categorization generates an action-reaction type linking schema. That is, these matrixes gave rise to a series of connections between certain types of courses of action adopted by rival firms and the (state-controlled) contextual settings, on the one hand, and strategic counter-measures or responses, on the other.
Hierarchical ordering, or mediation between linking and totalizing. For centuries, science has striven to decipher and manipulate nature by trial and error. The classic examples for engineering specialists are Taylor expansions, which make it possible to approximate most functions as closely as one wishes by simply adding a series of terms with increasing exponents (x, x^2, x^3, etc.); a formula is sketched after this list. This approach, which involves making repeated adjustments until the formula converges toward a fixed point, relies on applying a hierarchy: the coefficient is determined for x, where the degree of x is one, then two, then three, etc., until the desired level of precision is reached. In a quite similar fashion, a hierarchy can be applied to approximate as closely as desired a reality that is never entirely attainable. None of those mechanisms used to perform partial totalizations would work, were it not possible to prioritize the descriptive parameters from the most important to the least important. The ranking of criteria must be seen then, to a lesser degree, as a form of totalizing that opens a zone of reconciliation in which linking and totalizing can co-exist. Matrixes whose axes are not only segmented according to A, B, C, etc., but also graduated between a pole that minimizes the value of a parameter and another that maximizes it (e.g. a product that is more or less different from the firm's current line of activity in the Ansoff matrix [Appendix 3], or a State that is more or less authoritative in the Houssiaux matrix) apply such a hierarchical ordering.
Interpolation, or mediation between totalizing and compressing. Graduated axes, particularly when they present a continuum of options, like the horizontal axis of the Ansoff matrix (1970), use a linear interpolation, i.e. showing intermediate categories at finer and finer intervals along the matrix's generating axis. By offering a spectrum of options, ranging between two extremes, it is possible to play very locally with economies of scale, when greater precision is desired.
When that logic is taken to the extreme, a continuous gradient appears in the matrix, for example in the Morphological Territories or in the "concentric diversifications" cell of the Ansoff matrix.
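To make the hierarchical-approximation analogy invoked above explicit, it can be written as the standard truncated Taylor expansion (a textbook formula, not taken from the archival sources):

f(x) \approx a_0 + a_1 x + a_2 x^2 + \dots + a_N x^N, \quad a_n = \frac{f^{(n)}(0)}{n!}

Each additional, higher-order term refines the approximation without overturning the contribution of the lower-order ones, just as each lower-ranked criterion on a graduated axis refines, without contradicting, the picture given by the higher-ranked criteria.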
All three means of "tension-dampening" can be achieved at the same time through a single technical configuration, which we see in both the Ansoff matrix and the Houssiaux matrix, as well as in the matrixes created in the 1970s (BCG, ADL and McKinsey) and even in more recent examples (e.g. the "Entrepreneurial Strategy Matrix" [Sonfield & Lussier, 1997; Sonfield et al., 2001], and the "Ethnographic Strategic Matrix" [Paramo [START_REF] Morales | Ethnomarketing, the cultural dimension of marketing[END_REF]). There is a diagonal line that is clearly implied though not expressed explicitly in the matrix, emerging from the milieu sector of the matrix, as in the case of the Ansoff model (1970), which serves as a first example (see Appendix 3). Indeed, through the diagonal gradient, these matrixes present all strategic options in successive strata, whether in continuous, discrete, or cumulative series. However, stratifying data can involve arranging it in hierarchical order (between a lower and higher graded status) as well as dividing it into categories (because there are different ranges or strata of data). It can also entail interpolation, because intermediate levels or grades of sample data can be represented spatially in the form of a radial (stratified) graph (polar visibility graph) or a rectilinearly layered (stratified) drawing (see figures below), making it possible to create a multi-scale visualization, so that a viewer can zoom in for a more detailed view.
[Table: functions performed simultaneously by each technical element of the stage III and stage IV matrixes, marked with an X per function. Stage III, Ansoff matrix (Appendix 3): horizontal axis, vertical axis, and the milieu demarcated by the axes. Stage IV, Bijon matrix (Appendix 5): horizontal axis, vertical axis, diagonal axis and milieu.]
By arranging these matrixes according to the number of functions performed simultaneously in each matrix element, and according to the associated milieu, we can identify the different degrees of concretization that steadily intensify as we move from the morphological box toward the Ansoff matrix. Although the Bijon matrix closely resembles the latter, in functional terms, it boasts an additional technical element (the diagonal axis), which tends to dilute the functions carried out by the two orthogonal axes and confuse the reader regarding their milieu of interaction. Thus, it cannot be seen as an advancement compared to the Ansoff matrix, but is, instead, a regression. Because it includes an additional axis to show a diagonal gradient, the Bijon matrix is closer to the ideal-type model of the "primitive and abstract" technical object where "each structure has a clearly defined function, and, generally, a single one" (Simondon, 1958: 41). The Ansoff matrix, whose finely graduated and polarized axes act in synergy with a milieu that follows the reader's natural points of orientation (left-right, up-down) and is thereby enough to suggest a concentric gradient of diversification, belongs to a more "concrete" stage. Importantly, it meets the criterion laid out by Simondon (1958: 41) whereby "a function can be performed by a number of synergistically associated structures," whereas, through the corresponding milieu that is established, "each structure performs ... essential and positive functions that are integrated into the functioning of the ensemble" (ibid.: 41). Lastly, the Ansoff matrix also exhibits this same type of refinement, which ultimately "reduces the residual antagonisms that subsist between the functions" (ibid.: 46). The finer and finer graduations on each axis give rise to (1) hierarchical ordering and (2) interpolation, which reduce the residual incompatibilities that exist between the functions, namely, (1) between linking and totalizing, and (2) between totalizing and compressing.
This chart is not intended to recount the history of matrix models (especially since no chronological order is given) but, rather, to identify their technical genesis, which, in the interest of simplicity, has been broken down into different stages of concretization:
stage (I), marked by the emergence of the box, using the example of the morphological box and the panther/elephant matrix; stage (II), which saw the introduction of the incomplete-gradient matrix, illustrated by the Gélinier/Sadoc matrix and the morphological territories; stage (III), marked by the diagonal-gradient matrix making full use of the stratification properties of the axes and cross-axes milieu, illustrated by the Ansoff matrix; and stage (IV), which witnessed the hyper-specialized matrix. The latter stage reflects a kind of hypertely, which Simondon described in these words (1958: 61):
The evolution of technical objects can move towards some form of hypertely, resulting in a technical object becoming overly specialized and ill-adapted to even the slightest change in operating or manufacturing needs and requirements;
(iv) Tracing the genesis of the strategy matrix through a process of individuation.
Our archival research, which was limited to literature produced in France between the years 1960 and 1971, showed that different iterations of the same technical object (the strategy matrix) emerged over the years, while presenting highly variable degrees of concretization. By classifying the objects according to their degree of concretization, we obtain a picture of the technical genesis of the matrix. The fact that only the most technologically evolved version, the type III matrix, is still in use in its original form suggests that the genesis of the strategy matrix was a form of technical progress.
However that may be, given the extremely local nature of the findings, their validity and relevancy remain problematic.
Cross-cultural validity and relevancy of the technical genesis of strategy matrixes
We looked at a recent worldwide sample of strategy matrixes produced in academia (see the chart in Appendix 7 and the figures in Appendix 8). Such matrixes continue to be produced throughout the world and, in reviewing them, it is easy to immediately identify four common stages of individuation of the object-matrix (see the "technical status" column in the chart). A review of the sample prompts the following remarks:
(1) The "strategy matrix" technical object is used in a wide variety of geographical locations and specialized contexts (the Anglo-American and Hispanic worlds, and in India, etc.), but there is no technical stage that seems to be country- or context-specific. Although the production of strategy matrixes can be associated with a certain "technical culture," it transcends the expected cultural divisions.
(2) Stage III, the diagonal-gradient matrix, is the most widely used and reproduced, which gives credence to the idea that it is the most "concrete" stage, offering the most flexibility, and represents an advance in relation to stages I and II.
(3) The long-standing production of Stage I matrixes is noteworthy. This phenomenon was described and explained by Simondon himself with regard to "material" technical objects (Simondon, 1958).
(4) Each stage IV hyper-specialized matrix exhibits a singular and distinct architecture, without having benefitted from any substantial re-use or generalization. Here, the Simondonian concept of hypertely seems to apply to these matrixes as a whole.
We did not, however, find any studies that attempted, as we have here, to give careful consideration to why and how a strategy matrix came to be designed, let alone attempt to explain how choices were made regarding its structure, its components and their interactions, synergies and/or incompatibilities. Although strategy matrixes appear to have thrived over a long period of time, no "mechanology" (Simondon, 1958: 81) of them as technical objects seems to have been developed or been taken into account by the authors. It would seem that the question of outlining a framework for describing and teaching about the technical culture of matrixes remains to be investigated (Simondon, 1958: 288).
2. Theoretical discussion: the transindividual and laying the ground for a technical culture to come.
Simondon's argument that the essence of the technical object resides in the scheme of the system in which it exists and not in its matter or form (1958, p. 6) opens the way for two complementary avenues of research for management sciences. The first consists of developing an approach for examining all kinds of abstract management tools as technical objects. That is what we just illustrated in considering the strategy matrix and its various iterations. In the invention of a strategy matrix, the generation of a diagonal and dynamic milieu offers a clear illustration of the Simondonian process where (Simondon, 1958, p. 71):
The unity of the future associated milieu (where cause-and-effect relationships will be deployed to allow the new technical object to function) is represented, acted out, like a role that is not based on any real character, but is played out by the schemes of the creative imagination.
The only thing that separates this case from Simondon's studies on mechanical and electronic devices of his day is that, here, the "schemes of the creative imagination" and the scheme of the technical object both exist in a cognitive state.
A second possibility presented by this conception of the technical-object-as-a-scheme resides in the opportunity to develop a technical culture.
It goes without saying that an encyclopedic, manual-like overview cannot begin to provide a real understanding of management tools in all their strengths and limitations. Conversely, reading pointed case studies or sharing professional experiences (even those put down in text form) cannot give sufficient insight for fully understanding the importance of choosing the right technical tool from among the wide range of existing models, let alone from among those that remain to be created. It would seem that the "general technology or mechanology" that Simondon had hoped for (1958, p. 58) holds out the possibility of providing management sciences with novel responses to this question. We shall now attempt to advance that argument while relying on the findings of our study of the "strategy matrix" as technical object.
Developing a true technical culture through strategy matrixes would, in our view, accomplish the ideal described by Simondon (1958, p. 335):
Above the social community of work, and beyond the interindividual relationship that does not arise from the actual performance of an activity, there is a mental and practical universe where technicity emerges, where human beings communicate through what they invent.
This presupposes, above all, that, between the specialist, the instructor and the student, "the technical object be taken for what it is at essence, that is, the technical object as invented, thought out, willed, and assumed by a human subject" (ibid.). Insofar as the essence of technicity resides in the concretization of a scheme of individuation, developing a technical culture of management hinges more on the transmission of the genesis of management tools than on the transmission of their history alone. Specifically, transmitting a technical culture of the strategy-matrix object would entail explicating the synergetic functioning of its components and the degree of technicity involved in each of its different iterations, so that the student or the manager is able to invent or perfect his own matrixes, while remaining fully aware of the specific cognitive effects he wishes to impart with this tool and each of its variants. In that way, a relationship can be formed with the technical object by "creating a fruitful connection between the inventive and organizing capacities of several subjects" (ibid., p. 342). With matrixes, that would mean teaching learners and users to create a more or less successful synergetic interaction between the functions of compressing, totalizing, linking and stratification, as defined above (§1.2). Simondon (1958, p. 335) defines the relationship that develops between inventors, users and humans as "transindividual" whenever a technical object is "appreciated and known for what is at its essence." Some might rightfully wonder whether the usual educational approach to strategy matrixes reflects such a transindividual relationship or whether, to the contrary, it fails to encourage sufficient consideration of the importance of the symbolic machines of management, which risks turning future managers into "proletarized workers," to borrow the expression coined by [START_REF] Stiegler | MécréanceetDiscrédit T1[END_REF]. We know only too well that it is indeed possible to work as a proletariat while still being a "manipulator of symbols," in the words of Robert Reich (1992).
Simondon posited the notion of transindividualism because, in his thinking, a human, like all living beings, is never definitively an individual: "the individual is neither complete nor substantial in itself" (2005, p. 216). Ever engaged in a necessarily incomplete process of individuation, he has at his core a "reservoir of becoming" and remains a pre-individual (2005: 167). That enables a part of himself, which is identical to other humans, to fuse with a superior individuated entity. Here, Simondon is describing two separate things (2005: 167). On the one hand, there is an affective dimension, which we will not address here, but there is also a cognitive dimension, which consists of scheme systems of thought. Put in more contemporary terms, while taking into account the rise of the cognitive sciences, it can be said that universality and our subconscious cognitive faculties represent the pre-individual reservoir of each human being, whereas the universal understanding of technical schemes among highly dissimilar people is a decidedly transindividual act, which occurs when human beings, who are quite unalike, activate the same mental operations in a similar way. The Simondonian transindividual can, in this way, be seen as a core notion of a universal technical culture, which cuts across ethnic cultures and corporate cultures alike, provided that an understanding of the genesis or lineage of technology (notably, managerial techniques) is sufficiently developed. Hopefully, the reader of these lines is only too well aware that it is that very type of transindividual relationship that has begun to develop, here, between himself or herself and the creators of strategy matrixes.
3. Epistemological discussion: archeology and allagmatic operations
In previous studies (author), we showed how the four above-mentioned operations (compressing, linking, totalizing and stratification) could be read through the lens of Michel Foucault's rules governing discursive formations (1969) and could be extended well beyond matrix models to cover all of the concepts generated by French strategists in the 1960s, encompassing a wide variety of technical elements. Performing such an "Archeology of Strategy-related Knowledge" revealed that the strategy-related data that we collected was stratified, via cognitive tools such as matrixes, according to institutional positions adopted by executive management. In other words, the epistemological stratification of strategy-related data reproduced the hierarchical stratification of the firm. The utopia of an all-powerful, all-knowing executive management was, in a certain manner, created by the very structure of the strategic concepts (Author).
The obvious limitations of an archeological approach are that it merely allows us to identify constants in the structure and the structuration of concepts, and uncover blind spots in concept formation. It must be remembered, too, that archeology is archeo-logy, which means it focuses on particular historical moments, seeking to regroup conceptual tools under the same banner without classifying them in relation to each other, or on the basis of their lineage or forms of succession. In contrast, by viewing cognitive tools not only as concepts but as technical objects, after Simondon's example, it is possible to identify their genesis and make a cross comparison according to their degree of technicity. The Simondonian notion of concretization is intended, here, to complement the Foucauldian rules of discursive formation (Foucault, 1969).
That idea can be developed even further. We need merely consider that Foucauldian archeology, far from being solely a form of structuralism (which Foucault repeatedly denied, to no avail), constitutes what Gilbert Simondon termed an allagmatic operation or a "science of operations." The operation-based dimension of Foucauldian archeology becomes clear, for example, through the scheme systems that Foucault suggested be employed to identify the rules of concept formation in any given corpus of knowledge (although that must be seen as only a preliminary step). Below, we present an outline of Foucault's procedures of intervention in relation to discursive statements (1969: 78):
Foucault's procedures of intervention (1969) | Operations performed through the "strategy matrix" considered as a technical object
Techniques of rewriting | Redistribution of a model into a two-dimensional type model
Methods of transcribing according to a more or less formalized and artificial language | Assigning a name to each strategy (e.g. "concentric diversification")
Methods of translating quantitative statements into qualitative formulations, and vice-versa | Place categories on a continuum on each axis
Methods of expanding approximations of statements and refining their exactitude | Make a transition from discrete categories to continuous gradients
The way in which the domain of validity of statements is delimited, again through a process of expansion and refinement |
The way in which a type of statement is transferred from one field of application to another |
The methods of systematizing propositions that already exist, insofar as they were previously formulated, but in a separate state | Include pre-designated strategies (organic growth, diversification) in an over-arching system as potential outcomes or courses of action
Methods of rearranging statements that are already related or linked to each other but have been recombined or redistributed into a new system | Stratify the scope of possibilities within the block or partition of the matrix denoted as milieu
Table 2: Foucault's procedures of intervention and operations performed by type III strategy matrixes.
By their very wording, these "rules of concept formation" reveal exactly how they operate. "Transcribing," "translating," "redistributing" (or "rearranging"), etc. are as much cognitive operations as discursive practices. Even if Foucauldian archeology does not draw explicitly on Simondon's terminology, it undeniably establishes a nexus between operation and structure. Written by Michel Foucault as a work of theory, but also as a defense and illustration of the approach he had adopted in his previous works (Foucault, 1961, 1962, 1966) at the height of the structuralist vogue, The Archeology of Knowledge seems almost to be out to confirm Simondon's assertion that "a science of operations cannot be achieved unless the science of structures senses, from within, the limits of its own domain" (2005: 531). Simondon uses the term "allagmatic" to describe the "theory of operations" (2005, p. 529). Our study of matrixes seeks to illustrate the intimate links that bind operation and structure, but in light of the conceptual groundwork laid out by Simondon. In Simondon's view, an operation "precedes and leads up to the appearance of a structure or modifies it" (ibid., p. 529). He provides a simple illustration by describing the gesture made by a surveyor who traces a line parallel to a straight line through a point lying outside that straight line. The surveyor's act is structured on "the parallelism that exists between a straight line in relation to another straight line," whereas the operation behind that act is "the gesture through which he performs the tracing without really taking much notice of what it is he is tracing." The important thing here is that the operation, the "gesture," has its own schema of how it is to be carried out. Indeed, to trace a straight line, a whole series of turns of the wrist and movements of the arm, for example, are called into play. The operation entailed in tracing a straight line requires adopting an array of angular positions, in contrast to the parallel lines themselves that will result from the act of tracing. The scheme of the operation (a variation of angles) is thus by no means an exact reflection of the scheme of the structure (strict alignment) needed to carry out the operation itself. Similarly, it can be said that the operations performed by a matrix (dynamic stratification, oriented gradient) do not reflect the static, symmetric, and isotropic schema that underlies the structural framework of each matrix box. The applicability of these concepts to strategy matrixes is obvious. Executive management is ever confronted by concerns that are syncretic, super-saturated and contradictory, and there is a constant need to refine and summarize strategy-related data and link it intelligibly. The inventor of a strategy matrix crystalizes this field of tensions into a two-dimensional structure that aims to classify, rank, interpolate and stratify it, while offering a metastable solution to any incompatibilities and conflicting expectations.
The "type III strategy matrix," as a technological individual, performs such conversions between operation and structure. Management teams that compile strategy-related data and input it into the matrix blocks modulate the data. The result of that operation is, if successful, a syncretic strategic vision. Here, the matrix has played a role that Simondon calls "form-signal." As for the management researcher, he also engages in a type of conversion action. For him, these conversion actions are neither modulation nor demodulation but "analogy," in the full sense of the term as used by Simondon. Modulation and demodulation link operation and structure, whereas analogy links two operations with each other. This is why Simondon calls an analogy an "équivalence transopératoire" (ibid., p. 531). Specifically, when the researcher or the instructor explicates the genesis of the strategy-matrix-as-technical-object, he or she creates a useful link between the inventor's crystallization of the matrix, on the one hand, and the crystallization that consists in the reader's understanding of that very same schema, thanks to the information storage and schematizing machine that is his brain, on the other hand. That process is made possible by the fact that we share the same faculties of intelligence, which are a part of our common transindividuality. In Simondon's words, "It is human understanding of, and knowledge about, the same operative schemas that human thought transfers" (2005: 533). In an analogical operation, Simondonian epistemology is superimposed onto the ontology. And let us end with a salient quote from the Simondonian philosopher Jean-Hugues Barthélémy (2014: 27):
In contemplating all things in terms of their genesis, human thought participates in the construction of its own thinking, instead of confronting it directly, because "understanding" genesis is itself still a genesis followed by understanding.
Conclusion
This paper seeks, first and foremost, to make a unique theoretical contribution to management science: we have developed a transcultural theory on the essence of strategy matrixes and their technological genesis. We have also sought to draw attention to significant methodological issues by testing and validating a study of cognitive management tools, principally by drawing parallels with Simondonian concepts regarding electronic and mechanical technical objects from the 1950s. In addition, our contribution may be seen as having a number of implications for epistemology: we have highlighted the important structurationalist, as opposed to structuralist, workings behind Foucauldian archeology. By studying the rules of concept formation that apply to management science, seen as a field of knowledge, we have sought to examine strategic management tools and concepts through an allagmatic perspective, viewing them as technical objects. Lastly, our research can have interesting repercussions for education, for we have outlined an educational approach to examining the technological culture of management based upon building a link between the transindividual and those who create management systems.
The controversies that have arisen pitting individualism against holism, universalism against culturalism, the structure against dynamism, and being against nothingness, are a reflection of the great, perplexing difficulties that continue to haunt Western thought.
With Simondon, the notion of genesis is given pride of place, mainly because it alone "presupposes the unity containing plurality" (2005: 266), and is seen as a solver of aporia. The fact that a human being is engaged in a continuous genesis of itself is also a fundamental principle behind Simondon's concept of the transindividual. The allagmatic (2005: 429), which seeks to grasp the relationship between operations and
Figure 5. Gélinier/Sadoc's matrix.
The underlying idea behind this matrix is that a firm whose product x competition outcome is unfavorable (typically illustrated by Gélinier in box A4, showing a product in the decline stage in a context of intense competition) must change its product focus toward a mix that is more favorable (the arrow drawn by Gélinier points to box C2, indicating a product in the growth stage on a niche market). The product adaptation process is "ongoing" whenever the firm engages in a variety of business activities, where certain ones, as indicated in the upper right-hand section of the matrix, will have to undergo adaptation. The need to implement business changes came as part of a national industrial restructuring effort in the postwar period, after the CECA and, later, the Common Market raised the possibility of "converting marginal businesses."
This chart can be viewed as a combination of two technical elements (graduated axes) located within a milieu (demarcated by space on the sheet of paper) that allows them to interact. Each segmented axis projects into the space all available options (i.e. all products or competitive situations falling within one of the types of pre-defined categories). It should be noted that the products axis is not only segmented but graduated as well, since the order of the segments reflects the law of the changing market reality, in contrast with the axis depicting competitive situations. At the same time, the space occupied by the matrix portrays 25 types of strategic situations, reducing the memorizing effort required to interpret the axes and their graduation. Hence, the matrix performs both a totalizing and compressing function. However, there is no clear, explicit method for linking the elements that explain the overall logic of adapting to market changes: financial synergy, the cash flow rationale, and the technology trajectory. In this pioneering technical object, which closely resembles the matrix designed by Arthur D. Little, the underlying portfolio assumptions are confined to a risk minimization strategy, at best. Structurally, there is no clear means of locating the milieu or zone of interaction between the two matrix axes; that is, there is no diagonal line created by the interaction of the different characteristics on each matrix axis.
Appendix 2: The "Panther/Elephant" matrix
A new management approach is beginning to appear on the horizon and is poised to challenge if not surpass the traditional "best management practices" spirit. For it is becoming increasingly clear that the quality of business management is no longer enough to guarantee success, as managers find themselves faced by an emerging breed of "flexible, fearless, but highly successful and visionary entrepreneurs."
Claude Charmont has proposed a model relying on all of these assumptions, giving it a form that represents one of the first and most highly original uses of strategy matrixes.
It classifies firms according to their business outlook within a two-dimensional array (a "square matrix"), with the first variable representing the degree of "best-practice spirit," and the second measuring the degree of "entrepreneurial spirit" (the memorization of 4 quadrants is reduced to the memorization of two axes). No diagonal effect is produced by combining axes, and there appears to be no means of circulating within the two-dimensional space, so that the matrix does not generate its own milieu.
[Figure legend, Ansoff diversification matrix: "New Type"; "Conglomerate (Heterogeneous) Diversification"; "(1) Related marketing efforts/systems and technology"; "Normal reading orientation (in the direction of the slope of the diagonal line)"; "Lines at an iso-distance from the firm's current situation"]
… into a continuum of options defined by their distance from the current situation, portrayed as concentric circles dubbed "contiguous zones."
From a functional point of view, the Ansoff matrix can be considered simply as a condensation, into a single object, of elements that appear in the morphological box and morphological territory model. Taking two more ungainly tools and combining them into a single, more "concretized" tool that is technically more sophisticated is analogous to the laboratory machines whose fit is not yet optimal, as described by Simondon to illustrate the pre-individual stages that mark the genesis of a technical object.
Likewise, depending on whether or not the firm's growth (tr) exceeds its financial capacity (te), it will position itself to the left or to the right of the median line (te/tr = 1):
Translation:
[Figure labels: The firm loses its financial equilibrium / The firm improves its cash flow]
The most favorable situation for the firm is that of "industry leader," shown in the quadrant te > tr > tm.
That situation can deteriorate in either of two directions, each of which is linked to a specific type of management error: a) a "myopic view of the environment," in which a firm that is growing slower than the market experiences a dramatic loss in its growth capacity (a scenario depicted in the area below the main diagonal), and b) "disregard of financial imperatives," where a growth crisis also places the firm in a difficult financial situation. In this approach, the path taken by a firm can be seen in the model (Bijon did not create the model used for this paper). Although there are considerable differences in the parameters at play, as well as in the underlying commercial and economic factors, the outcomes obtained from using these models are likely to be scarcely different.
[Figure legend (Bijon model): Most favourable situation; Deterioration due to adopting a myopic view of the environment; Deterioration of financial situation due to disregard of financial imperatives]
Table 3. Technical stages of contemporary matrixes
Matrix | Axes | Stage | Comment
The four strategic alternatives [START_REF] Tavana | Euclid: Strategic Alternative Assessment Matrix[END_REF] | Total Category x Brand | IV | Although the axes have their own gradient, the "jigsaw" segmentation of the bidimensional space has a central pole, in discrepancy with the angular position of the poles of the axes. Two gradients compete with each other and blur each other.
The strategy reference point matrix [START_REF] Fiegenbaum | Strategic Reference Point Theory[END_REF] | Time x Internal-External x Inputs-Outputs | IV | A three-dimensional matrix drawn in rough perspective. This destroys the milieu of the matrix. The object has lost its individuality.
The tension between the compressing and the totalizing: is it possible to give a condensed overview of corporate courses of action and financial performance and, at the same time, describe the totality of strategy factors? That is what explains the continual oscillation between highly reductive 4-box matrixes and 9- or 16-box models intended to describe reality in more detail.
Figure 2. Two types of stratification graphs
Simondon posits concepts defining the relationship between structure, operation and the individual. Referring to paradigms found in the field of physical chemistry and information theory (ibid., p. 536), he defines modulation and demodulation as two possible ways of linking an operation and a structure (ibid.: 531). "Modulation is the act of bringing together operation and structure into an active ensemble called the modulator," and the act of demodulation is the exact opposite: separation. Each individual is, for Simondon, "a domain of reciprocal convertibility of operation into structure and structure into operation," i.e. "the milieu of the allagmatic act" (p. 535). An individual can inhabit two possible states. The first is the so-called "syncretic" state of the individual engaged in the process of individuation, where operation and structure are still fused and indistinguishable; and the lack of distinguishability is the nature of his metastable situation: "the individual is fraught with tension, oversaturation, incompatibility" (p. 535). That same individual sometimes enters another, so-called "analytic" state, in which structure and operation exist correlatively, and the individual becomes individuated.
Figure 3. Conversion actions between "operation" and "structure," based on Simondon's concepts (2005: 535-536).
[Caption fragment] … with oriented and graduated orthogonal axes, and the two-dimensional stratified milieu that emerges.
Fig. 6. The Panther/Elephant matrix.
Fig. 7. The Ansoff diversification matrix (below: a re-transcription in English of the French document).
Figure 8. Stratified milieu within the Ansoff diversification matrix.
Figure 1. The firm's growth rate compared to the market growth rate.
Fig. 14. The second segmentation within the Bijon matrix.
Fig. 16. Strategic trends awkwardly suggested by the Bijon matrix.
Table 1. The Strategy Matrix as Technical Object, viewed by degrees of intensifying concretization and stages of development
A translation of the Charmont matrix is shown below:
"Best management practices" spirit \ Entrepreneurial spirit | Weak | Strong
Strong | 3. Conservative, well-managed firms | 4. Firms enjoying fast-growing diversification but selective in exploring new avenues to profits
Weak | 1. Bureaucratic and conservative firms | 2. Dynamic, forward-moving firms characterized by a high number of failed ventures
This relationship is not inconsistent with realistic metaphysics. Although Simondon did not advocate substantialism, he adhered to the philosophy of a "realism of relationships" (Barthélémy, 2008: 18-34).
These visual depictions follow the example of Simondon's technical Atlas, which was used to support his arguments (Simondon, 1958 [START_REF] Gilbert | [END_REF]).
structure, opens the way for resolving other incompatibilities. We hope that, in elaborating these topics in the context of specific management objects, our findings will incite the academic community to someday devise a true technical culture of management. And although that day may prove to be a long way off, we can only hope that Simondon's wish, expressed in 1958, will ultimately be realized (p. 298):
Through the generalization of the fundamental 'schemas', a 'technic of all techniques' could be developed: just as pure sciences have sets of precepts and rules to be followed, we might imagine creating a pure technology or a general technology.
1. "The living being is an individual who carries within himself his associated milieu" (Simondon, 1958, p. 71) List of references (Auteur) (Auteur) Azmi, FezaTabassum (2008). Organizational Learning: Crafting a Strategic Framework. ICFAI Journal of Business Strategy. Jun2008, Vol. 5 Issue 2, p58-70. Banerjee, Saikat (2008). Strategic Brand-Culture Fit: A conceptual framework for brand management. Journal of Brand Management. May2008, Vol. 15 Issue 5, p312-321. Barthélémy, Jean-Hugues (2008). Simondonoul'encyclopédismegénétique. Paris: Presses Universitaires de France. Barthélémy, Jean-Hugues (2014). Simondon. Paris: Les Belles Lettres.
Appendix 1: The "Sadoc/Gélinier matrix"
Gélinier (1963, pp. 158-169) designed a matrix portraying the correlation between certain types of situations and appropriate strategic responses, containing 8-variable values. It makes cross-tabulations between variables, but only for variables 1 and 2, through "Sedoc's Table of Ongoing …"
Appendix 3. Ansoff's diversification matrix
In a work that has been translated into French, Ansoff (1970, chap. 7) lays out his thoughts on diversification strategies, using a matrix that exhibits a high degree of technicity. In Prévision à long terme et stratégie, Christophe [START_REF] Dupont | Prévision à long terme et stratégie[END_REF] attempts to establish a link between technology planning and strategic management. He presents two analytical tools that seem to have played a primordial role in the genesis of matrixes: the "morphological box" and "morphological territory."
[Figure axis labels: "New Products"; "New …"]
The "morphological box" is a technology forecasting and planning tool that is still used, to this day, in France (Godet, 1997), for all kinds of forward-looking studies.
Every possible configuration is represented by an n-tuple [Pij], with a combination of values using a set of descriptive parameters indicating possible future scenarios or situations (following the example given in Dupont's book, we have shown variables in sextuples). Some parameters have fewer possible alternatives than others, and "prohibited" scenarios are indicated with an "X."
Fig. 9. The morphological boxes (lines: descriptive parameters; columns: options)
The author then introduces the notion of the difference, or distance, between the possible scenarios (in the same way that the distance between vectors is calculated in mathematics), which leads to the definition of "morphological territories," that is, concentric zones in which future situations are shown at a further and further remove from the current situation.
[Chart title: Type of strategy adopted depending on disparities between industrial policies]
It should be noted that interpreting the policy recommendation is relatively straightforward: the multinational firm should adopt a less-integrated business model as the degree of divergence between the national policies rises.
[Chart headings: "Strategies in light of disparities in industrial policies"; "List of situations"]
J. Houssiaux's chart is canonical but is not a "matrix" strictly speaking. In a true matrix, two different parameters are represented in each matrix cell in order to show a unique and unrepeated combination of values. Here, to the contrary, the model repeats a value ("severity" of state policies), placing it in two different blocks on either side of the main diagonal line. That explains why the chart is perfectly symmetrical, forming a rectangle that has been cut into two congruent triangles, with the same value in both the upper and lower halves. A single triangle, using either the upper or lower half of the chart, would have sufficed for presenting all of the information shown here. And so, not only is this not the most optimal use of space, it illustrates a very poor use of the compressing effect.
Nonetheless, the graduated axes of the matrix generate a diagonal slope. Similarly, the fact that the full range of possible industrial policy options is covered by each axis performs a good totalizing function.
Appendix 6: The Bijon matrix
The author laid out a theory of making the right strategic choice, based on the perceived growth potential of the firm and its markets, respectively. The model shows a "two-dimensional space" divided into six sectors, and requires a minimum of mathematical proficiency if it is to be used to good effect.
To construct this type of "matrix," the author defines three values, the third of which proves more difficult to express as a testable value than the first two:
"The market growth rate (tm)" If the firm is highly diversified "a different approach may have to be adopted, separately, for each of the firm"s business units" (p. 224)
"The "reasonable" growth rate (te) is the highest growth rate that the firm can allow itself to achieve without making a structural change to its balance sheet "
It is a "function of its cash-flow, its ability to negotiate borrowings on financial markets, and make sensible income-producing investments" (p. 224).
The firm's position (or the position of a diversified firm's business unit) is shown on these three parameters in a plane (te/tr) x (te/tm).
Depending on whether or not the firm grows faster than the market where it operates, it will occupy one or the other side of the diagonal on this plane:
[Figure labels: The firm increases its market share / The firm's market share decreases]
Let us look, first, at the differences:
As regards the calculated values, the BCG matrix cross-tabulates industry growth rate factors and their relative market shares. These values may appear to be constructed solely from data visible from outside the firm/industry, independently of its financial structure, management, etc., whereas the Bijon matrix cross-tabulates growth rates in terms of the value (te), which is clearly a variable dependent on the firm's balance sheet structure.
The other differences (division of the matrix into 4 parts instead of 6, no express requirement to use a portfolio with the Bijon matrix) are minor by comparison to the difference mentioned above.
There are also a number of important points that the models share in common:
For one thing, the planned or forecast values for both models are very similar. Indeed, in both cases, they create a diagonal effect within the matrix that naturally draws the eyes in the direction of its slope, to view the path taken by the firm.
In addition, when examining the commercial and economic laws underlying both models, we recognize an even closer similarity. On the one hand, the BCG matrix enjoys economic relevancy only because it bears out the law of the stages of industrial maturity, which itself is founded on an interpretation of the "experience curve": the more an industrial sector matures, the more a dominant market position in that particular sector is required in order to generate a cash flow from that sector. On the other hand, the Bijon model has a predictive value only if the "reasonable growth rate" value (te) is constantly updated, insofar as it measures a firm's capacity to supply capital that it has not applied toward its own growth. Although there is no explicit law of maturity justifying this model, the presence of the value (te) ensures that it is, in fact, taken into account, in the event that it indeed proves valid. The Bijon model rests on weaker assumptions than those inherent to the BCG model, and reveals itself to be more general in scope. It could be said, then, that the primary difference between the two tools is their difference in presentation: while the BCG matrix takes into account the firm's financial resources only implicitly, through the law requiring that a balanced portfolio be maintained, which guides the manner in which its results are interpreted, the Bijon matrix displays its internal features explicitly in the matrix coordinates. In contrast, the need "not to lag behind when entering the market" is, in the case of the …
(Table 3, additional row) [START_REF] Spencer | An analysis of the product-process matrix and repetitive manufacturing[END_REF] | Product structure x Process | III | The diagonal gradient is extremely explicit | 72,263 | [
"12868"
] | [
"57129"
] |
01484503 | en | [
"info",
"scco"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01484503/file/NER2017_DiscreteMotorImagery.pdf | Sébastien Rimbert
Cecilia Lindig-León
Mariia Fedotenkova
Laurent Bougrain
Modulation of beta power in EEG during discrete and continuous motor imageries
In most Brain-Computer Interfaces (BCI) experimental paradigms based on Motor Imageries (MI), subjects perform continuous motor imagery (CMI), i.e. a repetitive and prolonged intention of movement, for a few seconds. To improve efficiency such as detecting faster a motor imagery, the purpose of this study is to show the difference between a discrete motor imagery (DMI), i.e. a single short MI, and a CMI. The results of experiment involving 13 healthy subjects suggest that a DMI generates a robust post-MI event-related synchronization (ERS). Moreover event-related desynchronization (ERD) produced by DMI seems less variable in certain cases compared to a CMI.
I. INTRODUCTION
Motor imagery (MI) is the ability to imagine performing a movement without executing it [START_REF] Avanzino | Motor imagery influences the execution of repetitive finger opposition movements[END_REF]. MI has two different components, namely the visual-motor imagery and the kinesthetic motor imagery (KMI) [START_REF] Neuper | Imagery of motor actions: Differential effects of kinesthetic and visual-motor mode of imagery in single-trial {EEG}[END_REF]. KMI generates an event-related desynchronization (ERD) and an event-relatedsynchronization (ERS) in the contralateral sensorimotor area, which is similar to the one observed during the preparation of a real movement (RM) [START_REF] Pfurtscheller | Event-related eeg/meg synchronization and desynchronization: basic principles[END_REF]. Compared to a resting state, before a motor imagery, firstly there is a gradual decrease of power in the beta band [START_REF] Kilavik | The ups and downs of beta oscillations in sensorimotor cortex[END_REF](16)(17)(18)(19)(20)(21)(22)(23)(24)(25)(26)(27)(28)(29)(30) of the electroencephalographic signal, called ERD. Secondly, a minimal power level is maintained during the movement. Finally, from 300 to 500 milliseconds after the end of the motor imagery, there is an increase of power called ERS or post-movement beta rebound with a duration of about one second.
Emergence of ERD and ERS patterns during and after a MI has been intensively studied in the Brain-Computer Interface (BCI) domain [START_REF] Jonathan Wolpaw | Brain-Computer Interfaces: Principles and Practice[END_REF] in order to define detectable commands for the system. Hence, a better understanding of these processes could allow for the design to better interfaces between the brain and a computer system. Additionally, they could also play a major role where MI are involved such as rehabilitation for stroke patients [START_REF] Butler | Mental practice with motor imagery: evidence for motor recovery and cortical reorganization after stroke[END_REF] or monitoring consciousness during general anesthesia [START_REF] Blokland | Decoding motor responses from the eeg during altered states of consciousness induced by propofol[END_REF].
Currently, most of the paradigms based on MIs require the subject to perform the imagined movement several times for a predefined duration. In this study, such a task is commonly referred to as a continuous motor imagery (CMI). However, first the duration of the experiment is long, second a succession of flexions and extensions generates an overlapping of ERD and ERS patterns making the signal less detectable. *This work has been supported by the Inria project BCI LIFT 1 Neurosys team, Inria, Villers-lès-Nancy, F-54600, France 2 Artificial Intelligence and Complex Systems, Université de Lorraine, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54506 3 Neurosys team CNRS, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54506
In fact, one simple short MI, referred in this article as a discrete motor imagery (DMI), could be more useful for two reasons. Firstly, a DMI could be used to combat fatigue and boredom for BCI-users improving ERD and ERS production [START_REF] Ahn | Performance variation in motor imagery braincomputer interface: a brief review[END_REF]. Secondly, the ERD and ERS generated by the DMI could be detectable at a higher quality and more rapidly compared to a CMI. This was found in a previous study that established a relationship between the duration of MI and the quality of the ERS extracted and showed that a brief MI (i.e. 2 seconds MI) could be more efficient then a sustained MI [START_REF] Thomas | Investigating brief motor imagery for an erd/ers based bci[END_REF]. Our main hypothesis is that a DMI generates robust ERD and ERS patterns which could be detectable by a BCIsystem. To analyze and compare the modulation of beta band activity during a RM, a DMI and a CMI, we computed timefrequency maps, the topographic maps and ERD/ERS%.
II. MATERIAL AND METHODS
A. Participants
13 right-handed healthy volunteer subjects took part in this experiment (7 men and 6 women, from 19 to 43 years old). They had no medical history which could have influenced the task. All subjects gave their agreement and signed an information consent form approved by the ethical INRIA committee before participating.
1) Real movement: The first task consisted of an isometric flexion of the right index finger on a computer mouse. A low frequency beep indicated when the subject had to execute the task.
2) Discrete imagined movement: The second task was a DMI of the previous real movement.
3) Continuous imagined movement: The third task was a CMI during four seconds of the real movement of the first task. More precisely, the subject imagined several (around four) flexions and extensions of the right index finger. This way, the DMI differed from the CMI by the repetition of the imagined movement. The number of imagined flexions was fixed (4 MIs). For this task, two beeps, respectively with low and high frequencies, separated by a four second delay, indicated the beginning and the end of the CMI.
B. Protocol
Each of the three tasks introduced in section II corresponds to a session. The subjects completed three sessions during the same day. All sessions were split into several runs. Breaks of a few minutes were planned between sessions and between runs to avoid fatigue. At the beginning of each run, the subject was told to relax for 30 seconds. Condition 1 corresponded to RMs was split into 2 runs of 50 trials.
Conditions 2 and 3 corresponded to discrete and continuous imagined movements, respectively, was split into 4 runs of 25 trials. Thus, 100 trials were performed by subjects for each task. Each experiment began with condition 1 as session 1. Conditions 2 and 3 were randomized to avoid possible bias cause by fatigue, gel drying or another confounding factor.For conditions 1 and 2, the timing scheme of a trial was the same: one low frequency beep indicated the start followed by a rest period of 12 seconds. For condition 3, a low frequency beep indicated the start of the MI to do during 4 seconds, followed by a rest period of 8 seconds. The end of the MI is announced by a high frequency beep (Fig. 1). C. Electrophysiological data EEG signals were recorded through the OpenViBE [START_REF] Renard | Openvibe: An open-source software platform to design, test and use brain-computer interfaces in real and virtual environments[END_REF] platform with a commercial REFA amplifier developed by TMS International. The EEG cap was fitted with 9 passive electrodes re-referenced with respect to the common average reference across all channels over the extended international 10-20 system positions. The selected electrodes are FC3, C3, CP3, FCz, Fz, CPz, FC4, C4, Skin-electrode impedances were kept below 5 kΩ.
D. EEG data analysis
We performed time-frequency analysis using spectrogram method(Fig. 2). The spectrogram is a squared magnitude of the short-time Fourier transform. As the analysis window in the method of spectrogram we used Gaussian window with α = 2.5 [START_REF] Harris | On the use of windows for harmonic analysis with the discrete Fourier transform[END_REF] with overlap by one time point between the subsequent segments. The length of the window was chosen such as to give the frequency resolution ∆f = 1 Hz.
To evaluate more precisely this modulation we computed the ERD/ERS% using the "band power method" [START_REF] Pfurtscheller | Event-related eeg/meg synchronization and desynchronization: basic principles[END_REF] with a matlab code. First, the EEG signal is filtered between 15-30 Hz (beta band) for all subjects using a 4th-order Butterworth band-pass filter. Then, the signal is squared for each trial and averaged over trials. Then it is smoothed using a 250millisecond sliding window with a 100 ms shifting step. Finally, the averaged power computed for each window was subtracted and then divided by the averaged power of a baseline corresponding to 2 seconds before each trial.
In addition, we computed the topographic maps of the ERD/ERS% modulations for all subjects (see Fig. 3).
III. RESULTS
A. Electrophysiological results
To verify if a DMI generates ERD and ERS patterns which could be detectable by a CMI, we studied the following three features: (i) the time-frequency analysis for the electrode C3, (ii) the relative beta power for the electrode C3 and (iii) the topographic map built from the 9 selected electrodes. Electrode C3 is suitable for monitoring right hand motor activity. A grand average was calculated over the 13 subjects. We used a Friedman's test to analyze whether ERS were significantly and respectively different during the three conditions. Because participants were asked to close her eyes, the alpha band was disturbed (confirmed by the time-frequency analysis) and not considered for this study. Consequently values corresponding to the desynchronization appears smaller because they were only analyzed in the beta band. For this reason, section III is mainly focused on the ERS.
1) Real movement: Fig. 2.A illustrates a strong synchronization in the 17-20 Hz band appearing 2 seconds after the start beep and confirmed the activty in the beta band. The ERD/ERS% averages (Fig. 2.D) indicate that one second after the cue, the power in the beta band increases by around 80%, reaches its maximum and returns to the baseline 4 seconds after. The evolution from ERD to ERS is rapid (less than one second) and should be linked to the type of movement realized by the subjects. Interestingly, each subject (except Subject 13) has a same ERD/ERS% profile (i.e. a strong beta rebound) after the real movement. Subject 13 has no beta rebound after the movement but has a stronger ERD, it is particularly true for the other conditions. The grand average topographic map (Fig. 3) shows that the ERS is more important on the area of the electrode C3. However, the ERS is also present around other electrodes, as well as the ipsilateral one.
2) Discrete motor imagery: Fig. 2.B shows a strong modulation in the 16-22 Hz band starting 2 seconds after the start beep. The ERS post-MI reaches 28% which is less stronger compare to the other tasks (Fig. 2.E). Some subjects (S1, S2, S5, S6, S10) have a stronger robust ERS produced by DMI while others have no beta rebound. This confirms that a DMI could be used in BCI domain. The lack of beta rebound (S3, S4, S11) could be caused to the difficulty of the DMI task. Indeed, post-experiment questionnaires showed that some subjects had difficulties in performing this task. The grand average (Fig. 3) shows desynchronization around 5% over the C3 area. One second later, the beta rebound appears, and is more present around the C3 area.
3) Continuous motor imagery: During the CMI, the subjects imagined several movements in a time window of 4 seconds. Fig. 2.C show a global decrease of activity during the CMI and stronger modulation in 16-21 Hz after the MI. The results of the grand average showed a low desynchronization during this time window. It is interesting to note that some subjects (S2, S10) have no desynchronization during the CMI task and could have a negative effect on the classification phase. Other subjects (S6, S1, S7) have a different profile which shows that a first ERS is reached one second after the beginning of the CMI, then the power increases and decreases again, being modulated during 3 seconds. Indeed, this ERD can be considered as the concatenation of several ERDs and ERSs due to the realization of several MIs. Indeed, for some subjects (S1, S6 or S9) the first ERD (23%) is reached during the first second after the MI. The topographic map shows that during the first second after the start beep, an ERD is lightly visible, but there is difficulty to identify a synchronization or a desynchronization. Understanding of individual ERD and ERS profiles between subjects for the CMI task is crucial to improve the classification phase in a BCI.
4) Comparison between RM, DMI and CMI: We observe that the ERS is stronger for a real movement. In fact, the beta rebound is 60% larger for a RM than for a MI. Although the ERS is stronger during a DMI than a CMI for some subjects (S2 and S6), this result is not statistically significant according to the Friedman test. The ERS of the CMI is stronger than the ERS of a DMI in average. For both DMI and CMI, the ERD is stronger and lasts longer than for the real movement. For some subjects (S1, S6 and S10) ERD produced during the CMI is more variable and seems to be the result of a succession of ERD and ERS generated by several MI.
IV. DISCUSSION
The subjects carried out voluntary movements, DMI and CMI of an isometric flexion of the right hand index finger. Results show that the power of the beta rhythm is modulated during the three tasks. The comparison between ERSs suggests that subjects on average have a stronger ERS during a CMI than a DMI. However, this is not the case for all subjects.
A. EEG system
It is well established that a large number of electrodes allows to have a good estimation of the global average potential of the whole head [START_REF] Dien | Issues in the application of the average reference: review, critiques, and recommendations[END_REF]. Although we are focused on specific electrodes, our results were similar by using method of the derivation, which corresponded to the literature. We choosed to study C3 without derivation because we are interested to designing a mnimal system to detect ERD and ERS during general anesthesia conditions.
B. ERD/ERS modulation during real movements
The results are coherent with previous studies describing ERD/ERS% modulations during motor actions.The weakness of the ERD can be linked to the instruction that was focused more on the precision than the speed of the movement [START_REF] Pastötter | Oscillatory correlates of controlled speed-accuracy tradeoff in a response-conflict task[END_REF].
C. ERS modulation during motor imageries
The results show that the beta rebound is lower after a DMI or a CMI than after a real movement, which has been already been demonstrated previously [START_REF] Schnitzler | Involvement of primary motor cortex in motor imagery: a neuromagnetic study[END_REF]. However, the novelty is the beta rebound is stronger on average after a CMI than DMI for a few subjects.
D. ERD modulation during continuous motor imagery
When the subjects performed the CMI, the ERD was highly variable during the first 4 seconds. For some subjects, our hypothesis is there are some intern-ERD and intern-ERS into this period. The difficulty is that the CMI involve several MI, that are not synchronized across trials, unlike the DMI which starts and ends at roughly the same time for each trial, due to the cue. Normally, for continuous real movement, the ERD was sustained during the execution of this movement [START_REF] Erbil | Changes in the alpha and beta amplitudes of the central eeg during the onset, continuation, and offset of longduration repetitive hand movements[END_REF]. However, in our data it is possible to detect several ERDs during the 4 seconds of CMI where the subject performed 3 or 4 MIs. This assumes that the ERD and ERS components overlap in time when we perform a CMI. Several studies already illustrate the concept of overlap of various functional processes constituting the beta components during RMs [START_REF] Kilavik | The ups and downs of beta oscillations in sensorimotor cortex[END_REF]. This could explain why the ERD during a CMI could be less detectable and more varied than the ERD during a DMI. To validate this hypothesis, we plan to design a new study to explore how two fastsuccessive movements (or MIs) can affect the signal in the beta frequency band.
V. CONCLUSIONS This article examined the modulation of beta power in EEG during a real movement, a discrete motor imagery (DMI) and a continuous motor imagery (CMI). We showed that during a real voluntary movement corresponding to an isometric flexion of the right hand index finger a low ERD appeared, and was followed by a rapid and powerful ERS. Subsequently, we showed that the ERD and ERS components were still modulated by both a DMI and a CMI. The ERS is present in both cases and shows that a DMI could be used in BCI domain. In future work, a classification based on the beta rebound of a DMI and a CMI will be done to complete this study and confirm future impact of DMI task in BCIdomain to save time and avoid fatigue.
Fig. 1 .
1 Fig. 1. Timing schemes of a trial for each task: Real Movement (RM, top); Discrete Motor Imagery (DMI, middle); Continuous Motor Imagery (CMI, bottom). The DMI and CMI sessions are randomized.
Fig. 2 .
2 Fig. 2. Left side: time-frequency grand average (n = 13) analysis for the RM (A), the DMI (B), the CMI (C) for electrode C 3 . A red color corresponds to strong modulations in the band of interest. Right side: grand average ERD/ERS% curves (in black, GA) estimated for the RM (D), the DMI (E), the CMI (F) within the beta band (15-30 Hz) for electrode C 3 . The average for each subject is also presented.
Fig. 3 .
3 Fig. 3. Topographic map of ERD/ERS% (grand average, n=13) in the 15-30 Hz beta band during Real Movement (top), Discrete Motor Imagery (middle) and Continuous Motor Imagery (bottom). The red color corresponds to a strong ERS (+50%) and a blue one to a strong ERD (-40%). The green line indicates when the start beep sounds and the purple line indicates when the end beep sounds to stop the CMI. On this extrapolated map only recorded electrode will be considered (FC3, C3, CP3, FCz, Fz, CPz, FC4, C4, CP4). | 18,589 | [
"774179",
"1062"
] | [
"213693",
"213693",
"213693",
"413289",
"213693"
] |
01484574 | en | [
"info",
"scco"
] | 2024/03/04 23:41:48 | 2017 | https://inria.hal.science/hal-01484574/file/chi-teegi-interactivity.pdf | Jérémy Frey
email: jeremy.frey@inria.fr
Renaud Gervais
email: renaud.gervais@inria.fr
Thibault Lainé
email: thibault.laine@inria.fr
Maxime Duluc
email: maxime.duluc@inria.fr
Hugo Germain
email: hugo.germain@inria.fr
Stéphanie Fleck
email: stephanie.fleck@univ-lorraine.fr
Fabien Lotte
email: fabien.lotte@inria.fr
Martin Hachet
email: martin.hachet@inria.fr
Scientific Outreach with Teegi, a Tangible EEG Interface to Talk about Neurotechnologies
Keywords: Tangible Interaction, EEG, BCI, Scientific Outreach ACM Classification H.5.1 [Multimedia Information Systems]: Artificial, augmented, and virtual realities, H.5.2 [User Interfaces]: Interaction styles, H.1.2 [User/Machine Systems]: Human information processing, I.2.6 [Learning]: Knowledge acquisition
Teegi is an anthropomorphic and tangible avatar exposing a users' brain activity in real time. It is connected to a device sensing the brain by means of electroencephalography (EEG). Teegi moves its hands and feet and closes its eyes along with the person being monitored. It also displays on its scalp the associated EEG signals, thanks to a semi-spherical display made of LEDs. Attendees can interact directly with Teegi -e.g. move its limbs -to discover by themselves the underlying brain processes. Teegi can be used for scientific outreach to introduce neurotechnologies in general and brain-computer interfaces (BCI) in particular.
Introduction
Teegi (Figure 1) is a Tangible ElectroEncephaloGraphy (EEG) Interface that enables novice users to get to know more about something as complex as neuronal activity, in an easy, engaging and informative way. Indeed, EEG measures the brain activity under the form of electrical currents, through a set of electrodes placed on the scalp and connected to an amplifier (Figure 2). EEG is widely used in medicine for diagnostic purposes and is also increasingly explored in the field of Brain-Computer Interfaces (BCI). BCIs enable a user to send input commands to interactive systems without any physical motor activities or to monitor brain states [START_REF] Pfurtscheller | Motor imagery and direct brain-computer communication[END_REF][START_REF] Frey | Framework for electroencephalography-based evaluation of user experience[END_REF]. For instance, a BCI can enable a user to move a cursor to the left or right of a computer screen by imagining left or right hand movements respectively. BCI is an emerging research area in Human-Computer Interaction (HCI) that offers new opportunities. Yet, these emerging technologies feed into fears and dreams in the general public ("telepathy", "telekinesis", "mind-control", ...). Many fantasies are linked to a misunderstanding of the strengths and weaknesses of such new technologies. Moreover, BCI design is highly multidisciplinary, involving computer science, signal processing, cognitive neuroscience and psychology, among others. As such, fully understanding and using BCI can be difficult.
In order to mitigate the misconceptions surrounding EEG and BCI, we introduced Teegi in [START_REF] Frey | Teegi: Tangible EEG Interface[END_REF], as a new system based on a unique combination of spatial augmented reality, tangible interaction and real-time neurotechnologies. With Teegi, a user can visualize and analyze his or her own brain activity in real-time, on a tangible character that can be easily manipulated, and with which it is possible to interact. Since this first design, we switched from projection-based and 3D tracking technologies to a LEDs-based semi-spherical display (Figure 3). All the electronics are now embedded. This way, Teegi became self-contained and can be easily deployed outside the lab. We also added servomotors to Teegi, so that he can move and be moved. This way, we can more intuitively describe how hands and feet movements are linked to specific brain areas and EEG patterns. Our first exploratory studies in the lab shown that interacting with Teegi seemed to be easy, motivating, reliable and informative. Since then, we confirmed that Teegi is a relevant training and scientific outreach tool for the general public. Teegi as a "puppet" -an anthropomorphic augmented avatar -proved to be a good solution in the field to break the ice with the public and explain complex phenomena to people from all horizons, from children to educated adults. We tested Teegi across continents and cultures during scientific fairs before thousands of attendees, in India as well as in France.
Description of the system
The installation is composed of three elements: the EEG system that records brain signals from the scalp, a computer that processes those signals, and the puppet Teegi, with which attendees interact.
EEG signals can be acquired from various amplifiers, from medical grade equipment to off-the-shelf devices. The choice of system mainly depends on the brain states that one wants to describe through Teegi. For instance, our installation focuses on the brain areas involved in motor activity, hence we require electrodes over the parietal zone. We use Brain Products' LiveAmp1 and Neuroelectrics' Enobio2 systems. The former has 32 gel-based electrodes, which give more accurate readings but are more tedious to setup. The Enobio has 20 "dry" electrodes, making it easier to switch the person whose brain activity is being monitored, but it is more prone to artifacts -e.g. if the person is not sitting. Both those systems are mobile and wireless.
The readings are sent to a computer. Those signals are acquired and processed by OpenViBE 3 , an open-source software dedicated to BCIs. OpenViBE acts as an abstraction layer between the amplifiers and Teegi, sending processed EEG signals through wifi to Teegi -for more technical details about the signal processing, see [START_REF] Frey | Teegi: Tangible EEG Interface[END_REF].
Teegi is 3D printed, designed to be both attractive and to hold the various electronic components. It embeds a Raspberry Pi 3 and NiMH batteries (autonomy of approximately 2 hours). A Python script on the Raspberry Pi handles the 402 LEDs (Adafruit NeoPixel) covering the "head", which are connected to its GPIO pins. For a smoother display, the light of the LEDs is diffused by a 3-mm-thick cap made of acrylic glass. Two 8-by-8 white LED matrices picture the eyes. The script also commands the servomotors placed in the hands and feet, four Dynamixel XL320.
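To give an idea of the software side, a stripped-down display loop on the Pi could look like the following. This is only an illustrative sketch, not the actual Teegi script: the UDP port, the JSON message format and the color mapping are hypothetical assumptions, and the servomotor control is omitted.

```python
import json
import socket

import board      # CircuitPython-style libraries available on the Raspberry Pi
import neopixel

N_LEDS = 402
pixels = neopixel.NeoPixel(board.D18, N_LEDS, brightness=0.3, auto_write=False)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5005))          # port chosen arbitrarily for this sketch

while True:
    data, _ = sock.recvfrom(4096)
    # hypothetical payload: {"led_values": [v0, v1, ...]} with v in [-1, 1],
    # negative values shown as ERD (blue), positive values as ERS (red)
    values = json.loads(data.decode())["led_values"]
    for i, v in enumerate(values[:N_LEDS]):
        level = int(min(abs(v), 1.0) * 255)
        pixels[i] = (level, 0, 0) if v >= 0 else (0, 0, level)
    pixels.show()
```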
Scenario
Teegi possesses two operating modes: avatar and puppet. As an avatar, it uses the EEG system and directly translates the brain states being recorded into movements and brain activity display. As a puppet, the EEG is not used and one could interact freely with Teegi (move its limbs, close its eyes with a trigger), as a way to discover which brain regions are involved in specific motor activities or in vision.
Typically, a demonstration of Teegi starts by letting the audience play with the puppet mode. When one closes Teegi's eyes, one notices that the display changes at the "back" of the head. We then explain that the occipital area holds the primary visual cortex. When one moves the left hand, a region situated on the right part of Teegi's scalp is illuminated. When the right hand is moved it is the opposite: LEDs situated on the left turn blue or red. We take this opportunity to explain that the body is contralaterally controlled; the right hemisphere controls the left part of the body and vice versa. Depending on the nature of the attendees, we can go further and explain the phenomenon of desynchronization that takes place within the motor cortex when there is a movement, and the synchronization that occurs between neurons when it ends.
With a few intuitive interactions, Teegi is a good mediator for explaining basic neuroscience. When used as an avatar, the LED display and Teegi's servomotors are linked to the EEG system; for practical reasons, one of the demonstrators wears the EEG cap. We demonstrate that when the EEG user closes her eyes, Teegi closes his. Moreover, Teegi's hands and feet move according to the corresponding motor activity (real or imagined) detected in the EEG signal. During the whole activity, Teegi's brain areas are illuminated according to the real-time EEG readings.
Audience and Relevance
The demonstration is suitable for any audience: students, researchers, naive or expert in BCI. We would like to meet with our HCI peers to discuss the utility of tangible avatars that are linked to one's physiology. We believe that such interfaces, promoting self-investigation and anchored in reality, are a good example of how the field could contribute to education (e.g. [START_REF] Ms Horn | Comparing the use of tangible and graphical programming languages for informal science education[END_REF]), especially when it comes to rather abstract information. Teegi could also foster discussions about the pitfalls of BCI; for example, it is difficult to avoid artifacts and perform accurate brain measures.
Overall, Teegi aims at deciphering complex phenomena as well as raising awareness about neurotechnologies. Besides scientific outreach, in the future we will explore how Teegi could be used to better learn BCIs and, in medical settings, how it could help to facilitate stroke rehabilitation.
Figure 1: Teegi displays brain activity in real time by means of electroencephalography. It can be used to explain to novices or to children how the brain works.
Figure 2: An electroencephalography (EEG) cap.
Figure 3: Teegi possesses a semi-spherical display composed of 402 LEDs (left) which is covered by a layer of acrylic glass (right).
http://www.brainproducts.com/
http://www.neuroelectrics.com/
http://openvibe.inria.fr/
Acknowledgments
We want to thank Jérémy Laviole and Jelena Mladenović for their help and support during this project. | 9,880 | [
"562",
"962497",
"1003562",
"1003563",
"1003564",
"946542",
"4180",
"18101"
] | [
"179935",
"487838",
"179935",
"179935",
"179935",
"179935",
"234713",
"179935",
"3102"
] |
01484636 | en | [
"info",
"scco"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01484636/file/NER2017_ImprovingClassificationWithBetaRebound%20%281%29.pdf | Sébastien Rimbert
Cecilia Lindig-León
Laurent Bougrain
Profiling BCI users based on contralateral activity to improve kinesthetic motor imagery detection
Kinesthetic motor imagery (KMI) tasks induce brain oscillations over specific regions of the primary motor cortex within the hemisphere contralateral to the body part involved in the process. This activity can be measured through the analysis of electroencephalographic (EEG) recordings and is particularly interesting for Brain-Computer Interface (BCI) applications. The most common approach for classification consists of analyzing the signal during the course of the motor task within a frequency range including the alpha band, in an attempt to detect the Event-Related Desynchronization (ERD) characteristic of the physiological phenomenon. However, to discriminate right-hand KMI and left-hand KMI, this scheme can lead to poor results on subjects for which the lateralization is not significant enough. To solve this problem, we propose that the signal be analyzed at the end of the motor imagery within a higher frequency range, which contains the Event-Related Synchronization (ERS). This study found that 6 out of 15 subjects have a higher classification rate after the KMI than during the KMI, due to a higher lateralization during this period. Thus, for this population we can obtain a significant improvement of 13% in classification by taking into account the users' lateralization profiles.
I. INTRODUCTION
Brain-Computer Interfaces (BCI) allow users to interact with a system using brain activity modulations, mainly in electroencephalographic (EEG) signals [START_REF]Brain-Computer Interfaces: Principles and Practice[END_REF]. One major interaction mode is based on the detection of modulations of sensorimotor rhythms during a kinesthetic motor imagery (KMI), i.e., the ability to imagine performing a movement without executing it [START_REF] Guillot | Brain activity during visual versus kinesthetic imagery: an FMRI study[END_REF], [START_REF] Neuper | Imagery of motor actions: Differential effects of kinesthetic and visualmotor mode of imagery in single-trial EEG[END_REF]. More precisely, modulations of the alpha (7-13 Hz) and beta (15-25 Hz) rhythms can be observed by measuring Event-Related Desynchronization (ERD) or Synchronization (ERS). In particular, before and during an imagined movement, there is a gradual decrease of power, mainly in the alpha band. Furthermore, after the end of the motor imagery, in the beta band, there is an increase of power called ERS or post-movement beta rebound [START_REF] Pfurtscheller | Event-related EEG/MEG synchronization and desynchronization: basic principles[END_REF].
A KMI generates activity over specific regions of the primary motor cortex within the hemisphere contralateral to the body part used in the process [START_REF] Pfurtscheller | Functional brain imaging based on ERD/ERS[END_REF]. Some BCIs are based on this contralateral activation to differentiate the cerebral activity generated by a right-hand KMI from that of a left-hand KMI [START_REF] Qin | ICA and Committee Machine-Based Algorithm for Cursor Control in a BCI System[END_REF]. Usually, the modulation corresponding to a user interaction is scanned in specific frequency bands such as Alpha, Beta or Alpha+Beta. This activity is mainly observed during the KMI in the 8-30 Hz band, which merges the alpha and beta bands, or after the KMI in the beta band [START_REF] Hashimoto | EEG-based classification of imaginary left and right foot movements using beta rebound[END_REF].
*This work has been supported by the Inria project BCI LIFT. 1 Neurosys team, Inria, Villers-lès-Nancy, F-54600, France. 2 Artificial Intelligence and Complex Systems, Université de Lorraine, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54506. 3 Neurosys team, CNRS, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54506.
Detection rates for these two KMI tasks vary with subjects and could be improved. Indeed, between 15% and 30% of the users are considered as BCI-illiterate and cannot control a BCI [START_REF] Allison | Could Anyone Use a BCI[END_REF]. In this article, we suggest that some of the so-called BCI-illiterate subjects have poor performance due to poor lateralization during the KMI task. Several studies showed activity only in the contralateral area [START_REF] Pfurtscheller | Motor imagery and direct braincomputer communication[END_REF] for a KMI, but other studies showed that ERD and ERS are also in the ipsilateral area [START_REF] Fok | An eeg-based brain computer interface for rehabilitation and restoration of hand control following stroke using ipsilateral cortical physiology[END_REF] and could be a problem for BCI classification.
According to our knowledge, no studies compare the classifier accuracy based on signals observed during the KMI versus after the KMI. In this article, we hypothesize the possibility to define specific profile of BCI users based on the contralateral activity of the ERD and the ERS. We define three BCI profiles based on accuracy: users with good accuracy i) during the KMI in the Alpha band, ii) during the KMI in the Alpha+Beta bands and iii) after the KMI in the Beta band. We also show that the accuracy is linked to the absence or presence of a contralateral activity during the observed periods.
II. MATERIAL AND METHODS
A. Participants
Fifteen right-handed healthy volunteer subjects took part in this experiment (11 men and 4 women, 19 to 43 years old). They had no medical history which could have influenced the task. All experiments were carried out with the consent agreement (approved by the ethical committee of INRIA) of each participant and following the statements of the WMA Declaration of Helsinki on ethical principles for medical research involving human subjects [START_REF] Medical | World medical association declaration of Helsinki: ethical principles for medical research involving human subjects[END_REF].
B. Electrophysiological data
EEG signals were recorded by the OpenViBE [START_REF] Renard | Openvibe: An open-source software platform to design, test and use brain-computer interfaces in real and virtual environments[END_REF] platform from fifteen right-handed healthy subjects at 256 Hz using a commercial REFA amplifier developed by TMS International. The EEG cap was fitted with 26 passive electrodes, namely Fp1; Fpz; Fp2; Fz; FC5; FC3; FC1; FCz; FC2; FC4; FC6; C5; C3; C1; Cz; C2; C4; C6; CP5; CP3; CP1; CPz; CP2; CP4; CP6 and Pz, re-referenced with respect to the common average reference across all channels and placed using the international 10-20 system positions to cover the primary sensorimotor cortex.
C. Protocol
Subjects were asked to perform two different kinesthetic motor imageries to imagine the feeling of the movement (left hand and right hand). They were seated in a comfortable chair with the arms at their sides in front of a computer screen showing the cue indicated the task to perform. The whole session consisted of 4 runs containing 10 trials per task for a total of 40 trials per class.
Two panels were simultaneously displayed on the screen, which were associated from left to right, to the left hand and right hand. Each trial was randomly presented and lasted for 12 seconds, starting at second 0 with a cross at the center of each panel and an overlaid arrow indicating for the next 6 seconds the task to be performed.
D. Common Spatial Pattern
We used the Common Spatial Pattern (CSP) algorithm to extract motor imagery features from EEG signals;
this generated a series of spatial filters that were applied to decompose the multi-channel data into a set of uncorrelated components [START_REF] Blankertz | Optimizing spatial filters for robust EEG single-trial analysis [revealing tricks of the trade[END_REF]. These filters aim to extract components that simultaneously maximize the variance of one class while minimizing the variance of the other. This algorithm was applied to all conditions: the three frequency bands (Alpha, Beta and Alpha+Beta) during the ERD (0-6 s) and ERS (6-12 s) time windows (Figure 2).
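To make this feature-extraction step concrete, the following is a minimal sketch of a standard CSP implementation. It is not the exact code used in this study; the function and variable names are ours, and it assumes that the trials have already been band-pass filtered and cut to the window of interest (0-6 s or 6-12 s):

    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_a, trials_b, n_pairs=3):
        # trials_*: arrays of shape (n_trials, n_channels, n_samples)
        def mean_cov(trials):
            covs = [x @ x.T / np.trace(x @ x.T) for x in trials]  # normalized spatial covariances
            return np.mean(covs, axis=0)
        ca, cb = mean_cov(trials_a), mean_cov(trials_b)
        # generalized eigenvalue problem: Ca w = lambda (Ca + Cb) w
        eigvals, eigvecs = eigh(ca, ca + cb)
        w = eigvecs[:, np.argsort(eigvals)]
        # keep filters from both ends: maximal variance for one class, minimal for the other
        idx = list(range(n_pairs)) + list(range(-n_pairs, 0))
        return w[:, idx].T

    def csp_features(trials, filters):
        # log-variance of the spatially filtered trials, one feature per filter
        feats = []
        for x in trials:
            var = np.var(filters @ x, axis=1)
            feats.append(np.log(var / var.sum()))
        return np.array(feats)

Such log-variance features are the ones that would typically be fed to the Linear Discriminant Analysis classifier mentioned in Figure 2, with one classifier trained per (frequency band, time window) condition.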
E. ERD/ERS patterns
To evaluate more precisely the modulations appearing during the two time windows, we computed the ERD/ERS% using the "band power method" [START_REF] Pfurtscheller | Event-related EEG/MEG synchronization and desynchronization: basic principles[END_REF] with a Matlab code. First, the EEG signal was filtered in one of the three frequency bands (7-13 Hz, Alpha band; 15-25 Hz, Beta band; 8-30 Hz, Alpha+Beta band) for all subjects using a 4th-order Butterworth band-pass filter. Then, the signal was squared for each trial, averaged over trials, and smoothed using a 250-ms sliding window with a 100-ms shifting step. Finally, the averaged power of a baseline, corresponding to a 2-s window before each trial, was subtracted from the averaged power computed for each window, and the difference was divided by the baseline power. This value was multiplied by 100 to obtain percentages. The process can be summarized by the following equation:
ERD/ERS% = \frac{\overline{x^2} - \overline{BL^2}}{\overline{BL^2}} \times 100 , (1)
where \overline{x^2} is the average of the squared signal over all trials and samples of the studied window, \overline{BL^2} is the mean of the squared baseline segment taken before the corresponding trial, and ERD/ERS% is the percentage of the oscillatory power estimated for each step of the sliding window. This is done for each channel separately.
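As an illustration, the band power method of Eq. (1) can be sketched as follows. This is a Python transcription rather than the original Matlab code; the sampling rate, window and step sizes follow the values given above, and the epochs are assumed to contain the 2-s baseline followed by the 12-s trial:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def erd_ers_percent(trials, band, fs=256, baseline_dur=2.0, win=0.25, step=0.10):
        # trials: array (n_trials, n_samples) for one electrode, baseline followed by trial
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, trials, axis=1)       # 4th-order Butterworth band-pass
        power = np.mean(filtered ** 2, axis=0)          # square, then average over trials
        n_base = int(baseline_dur * fs)
        bl = power[:n_base].mean()                      # mean baseline power (2 s before the trial)
        w, s = int(win * fs), int(step * fs)            # 250-ms window, 100-ms step
        starts = range(n_base, power.size - w + 1, s)
        return np.array([(power[i:i + w].mean() - bl) / bl * 100 for i in starts])

Negative values of the returned time course correspond to an ERD and positive values to an ERS, computed channel by channel before building the topographic maps.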
ERD and ERS are difficult to observe from the raw EEG signal. Indeed, an EEG signal expresses the combination of activities from several neuronal sources. One of the most effective and accurate techniques used to extract such events is averaging [START_REF] Quiroga | Single-trial event-related potentials with wavelet denoising[END_REF]. We used this technique to represent the modulation of power of the Alpha, Beta and Alpha+Beta rhythms for the two KMI tasks.
III. RESULTS
A. Three BCI user profiles
Table 2 shows the best accuracy obtained for each subject on the discrimination of left-hand and right-hand KMIs according to the three profiles defined in Section I. Thus, 6 subjects have a higher accuracy in the Beta band after the KMI, 3 subjects have a higher accuracy in the Alpha band during the KMI and 6 subjects have a higher accuracy in the Alpha+Beta band during the KMI. The best averaged accuracy over subjects was obtained considering modulations during the KMI (in the Alpha or Alpha+Beta bands). However, looking at the individual performances, we can see that 6 subjects were better considering the Beta band after the KMI. For this population we obtain a significant improvement of 13% in classification when considering the activity after the KMI rather than during the KMI. Using the best profile for each subject improves the averaged accuracy by 6%.
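The profile assignment itself is a simple arg-max over the evaluated conditions. The sketch below is illustrative only (the variable names are hypothetical, not taken from the study); it shows how the per-subject profiles and the averaged gain reported above can be derived from an accuracy table such as the one in Figure 2:

    import numpy as np

    # rows = subjects, columns = the six (band, window) conditions of Figure 2
    CONDITIONS = [("Alpha", "during"), ("Beta", "during"), ("Alpha+Beta", "during"),
                  ("Alpha", "after"), ("Beta", "after"), ("Alpha+Beta", "after")]

    def assign_profiles(acc):
        # acc: array (n_subjects, 6) of classification accuracies
        return [CONDITIONS[i] for i in np.argmax(acc, axis=1)]

    def gain_from_profiling(acc):
        # improvement of each subject's best condition over the best single fixed condition
        return acc.max(axis=1).mean() - acc.mean(axis=0).max()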
B. Classification rate and contralateral ERD/ERS activity
Subjects with a higher accuracy in the Beta band after the KMI (Profile 2) have a strong contralateral ERS during this period and a bilateral desynchronization during the KMI in the Alpha and Alpha+Beta bands (see Subject 2, Fig. 4). This result is confirmed by the grand average map (Fig. 3), which also shows an ipsilateral ERD after the KMI. Finally, the bilateral ERD during the KMI, together with the contralateral ERS and ipsilateral ERD after the KMI, could explain the high accuracy for these subjects. To validate our hypothesis, we show that the contralateral activity of Subject 2 differs more between the two KMI tasks in the post-KMI period in the Beta band (Fig. 5).
Conversely, subjects with a higher accuracy in the Alpha and Alpha+Beta bands during the KMI (Profiles 1 and 3) have a strong contralateral ERD during the task (Fig. 3 and Fig. 4). After the KMI, in the three frequency bands, they have no contralateral ERS and no Beta rebound on the motor cortex (see Subject 10, Fig. 4). Figure 6 shows that the contralateral activity of Subject 10 differs more between the two KMI tasks during the KMI period in the Alpha band. Fig. 6. Box plots of the power spectrum for Subject 10 (Profile 1) within the Alpha band and the Beta band over electrodes C3 and C4 for right-hand and left-hand KMIs. It can be noticed that there is a higher difference between the contralateral activity during the KMI period in the Alpha band.
IV. DISCUSSION
Subjects carried out left-hand KMIs and right-hand KMIs. Results show that 6 out of 15 subjects had a higher classification accuracy based on the post-KMI period in the Beta band. This higher accuracy is due to a stronger lateralization of ERD and ERS during this period.
Our study shows results which could allow the design of an adaptive BCI based on the contralateral activity on the motor cortex. The importance of BCI user profiles, especially for patients with severe motor impairments, has already been established by other studies [START_REF] Hohne | Motor imagery for severly motor-impaired patients: Evidence for brain-computer interfacing as superior control solution[END_REF]. Moreover, it appears that the contralateral activity can change markedly depending on the choice of the frequency band [START_REF] Ang | Filter bank common spatial pattern algorithm on BCI competition IV datasets 2a and 2b[END_REF], [START_REF] Duprès | Supervision of timefrequency features selection in EEG signals by a human expert for brain-computer interfacing based on motor imagery[END_REF]. This is why, if we aim to design an adaptive BCI based on the specific contralateral activity of the motor cortex, it is necessary to merge these two methods.
More subjects are needed to refine these BCI user profiles. However, we investigated other KMIs (not detailed in this article), especially combined KMIs (i.e., right-hand and left-hand KMI together versus right-hand KMI), and it appears that some subjects keep the same BCI profile.
V. CONCLUSIONS
In this article, we analyzed classification accuracies to discriminate right-hand and left-hand kinesthetic motor imageries. More specifically, we distinguished two periods (i.e., during the KMI and after the KMI) for three frequency bands (Alpha, Beta and Alpha+Beta). We defined three BCI profiles based on the accuracy of 15 subjects: users with a good accuracy i) during the KMI in the Alpha band, ii) during the KMI in the Alpha+Beta band and iii) after the KMI in the Beta band. This work showed that 6 out of 15 subjects had a higher classification accuracy after the KMI in the Beta band, due to a contralateral ERS activity on the motor cortex. Finally, taking into account the user's lateralization profile, we obtained a significant improvement of 13% in classification for these subjects. This study shows that users with a low accuracy when analyzing the EEG signals during the KMI cannot be considered BCI-illiterate. Thus, in future work, an automatic method to profile BCI users will be developed, allowing the design of an adaptive BCI based on the best period to observe a contralateral activity on the motor cortex.
Fig. 1 .
1 Fig. 1. Time scheme for the 2-class setup: left-hand KMI and right-hand KMI. Each trial was randomly presented and lasted for 12 seconds. During the first 6 seconds, users were asked to perform the motor imagery indicated by the task cue. The body part to use was indicated by arrows: an arrow pointing to the left on the left panel for a left-hand KMI, an arrow pointing to the right on the right panel for a right-hand KMI. After 6 s, the task cue disappeared and the crosses remained for the next 6 seconds, indicating the pause period before the next trial started.
Fig. 2 .
2 Fig. 2. Accuracy results obtained by a Linear Discriminant Analysis (LDA) using the CSP algorithm for feature extraction on the 2 classes (left-hand KMI and right-hand KMI) for 15 subjects. The classification method was applied on three frequency bands (Alpha, Beta and Alpha+Beta) on the ERD time window (0-6s) and on the ERS time window (6-12s).
Fig. 3 .
3 Fig. 3. Topographic maps of ERD/ERS% in three frequency bands (Alpha: 7-13 Hz; Beta: 15-25 Hz; Alpha+Beta: 8-30 Hz) for the two KMI tasks (left-hand and right-hand). Profile 1 represents the grand average for Subjects 10, 13 and 14, who have better performance during the ERD phase (0-6 seconds) in the Alpha band. Profile 2 represents the grand average for Subjects 2, 4, 5, 6, 7 and 12, who have better performance during the ERS phase (6-12 seconds) in the Beta band. Profile 3 represents the grand average for Subjects 1, 3, 8, 9, 11 and 15, who have better performance during the ERD phase (0-6 seconds) in the Alpha+Beta band. The red color corresponds to a strong ERS and the blue one to a strong ERD.
Fig. 4 .
4 Fig. 4. Topographic map of ERD/ERS% in three frequency bands (Alpha:7-13 Hz; Beta:15-25 Hz; Alpha+Beta:8-30 Hz) for two KMI tasks (left hand and right hand). Subject 10 is representative of Profile 1. Subject 2 is representative of Profile 2. Subject 11 is representative of Profile 3. The red color corresponds to a strong ERS and a blue one to a strong ERD.
Fig. 5 .
5 Fig. 5. Box plots of the power spectrum for Subject 2 (Profile 2) within the Alpha band and the Beta band over electrodes C3 and C4 for right-hand and left-hand KMIs. It can be noticed that there is a higher difference between the contralateral activity during the post-KMI period in the Beta band. | 18,910 | [
"1062"
] | [
"213693",
"213693",
"213693"
] |
01484673 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484673/file/978-3-642-36611-6_10_Chapter.pdf | Björn Johansson
email: bjorn.johansson@ics.lu.se
Feedback in the ERP Value-Chain: What Influence has Thoughts about Competitive Advantage
Keywords: Competitive Advantage, Enterprise Resource Planning (ERP), ERP Development, Resource-Based View, Value-Chain
Different opinions exist about whether an organization gains a competitive advantage (CA) from an enterprise resource planning (ERP) system. This paper, however, addresses another angle of the much-reported competitive advantage discussion. The basic question concerns how thoughts about gaining competitive advantage from customizing ERPs influence feedback in ERP development. ERP development is described as having three stakeholders: an ERP vendor, an ERP partner or re-seller, and the ERP end-user or client. The question asked is: What influence has thoughts about receiving competitive advantage on the feedback related to requirements in ERP development? From a set of theoretical propositions, eight scenarios are proposed. These scenarios are then illustrated with interviews with stakeholders in ERP development. From this initial research, evidence for six of the eight scenarios was uncovered. The main conclusion is that thoughts about competitive advantage seem to influence the feedback, but not really in the way that was initially assumed. Instead of having, as was assumed, a restrictive view of providing feedback, stakeholders seem to be more interested in having a working feedback loop in the ERP value-chain, making the parties in a specific value-chain more interested in competing with parties in other ERP value-chains.
Introduction
Competitive Advantage (CA) and how organizations gain CA from Information and Communication Technologies (ICTs) are subjects that have been discussed extensively. Different opinions exist on the question as to whether ICTs enable organizations to gain CA. Some proponents, such as Carr [START_REF] Carr | IT Doesn't Matter[END_REF], claim that the technology is irrelevant since it can be treated as a commodity. Others, such as Tapscott [START_REF] Tapscott | The engine that drives success[END_REF], argue for its importance, while still other writers say it depends on how the technology is used and that it is how business processes are managed that is primary for gaining CA [START_REF] Smith | IT doesn't matter -business processes do: a critical analysis of Nicholas Carr's I.T[END_REF]. However, in reviewing the academic literature there seems to be a common understanding that it is not the technology as such that eventually provides organizations with CA but how the technology is managed and used [START_REF] Mata | Information technology and sustained competitive advantage: A resource-based analysis[END_REF].
However, in this paper another perspective of CA in relation to Enterprise Resource Planning systems (ERPs) is discussed, and that is how the ERP value-chain stakeholders' interests in maintaining or improving their CA may influence feedback related to requirements of ERPs. When distinguishing between the stakeholders in the ERP value-chain and their relative positions, the subject becomes more complex. The research builds on a set of propositions suggesting what gives stakeholders in the ERP value-chain their CA. The propositions are then presented as win-lose scenarios that are discussed using preliminary findings from an empirical study.
The principle question addressed in this paper is: What influence has thoughts about receiving competitive advantage on the feedback related to requirements in ERP development?
The rest of the paper is organized as follows: The next section defines ERPs and describes the ERP value-chain and its stakeholders. Section 3 then defines CA and describes ERPs and CA from the resource-based view of the firm perspective. This is followed by a presentation of the propositions and a table suggesting CA scenarios in relation to the different stakeholders in the ERP value-chain. The penultimate section presents eight scenarios together with some preliminary findings from our own as well as extant studies. Finally, some concluding remarks in addition to directions for future research are presented.
ERPs, the ERP Value-Chain and its Stakeholders
ERPs are often defined as standardized packaged software designed with the aim of integrating the internal value chain with an organization's external value chain through business process integration [START_REF] Lengnick-Hall | The role of social and intellectual capital in achieving competitive advantage through enterprise resource planning (ERP) systems[END_REF][START_REF] Rolland | Bridging the Gap Between Organisational Needs and ERP Functionality[END_REF], as well as providing the entire organization with common master data [START_REF] Hedman | ERP systems impact on organizations[END_REF]. Wier et al. [START_REF] Wier | Enterprise resource planning systems and non-financial performance incentives: The joint impact on corporate performance[END_REF] argue that ERPs aim at integrating business processes and ICT into a synchronized suite of procedures, applications and metrics which transcend organizational boundaries. Kumar and van Hillegersberg [START_REF] Kumar | ERP experiences and evolution[END_REF] claim that ERPs that originated in the manufacturing industry were the first generation of ERPs. Development of these first generation ERPs was an inside-out process proceeding from standard inventory control (IC) packages, to material requirements planning (MRP), material resource planning (MRP II) and then eventually expanding it to a software package to support the entire organization (second generation ERPs). This evolved software package is sometimes described as the next generation ERP and labeled as ERP II which, according to Møller [START_REF] Møller | ERP II: a conceptual framework for next-generation enterprise systems[END_REF], could be described as the next generation enterprise systems (ESs).
This evolution has increased the complexity not only of usage, but also in the development of ERPs. The complexity comes from the fact that ERPs are systems that are supposed to integrate the organization (both inter-organizationally as well as intra-organizationally) and its business processes into one package [START_REF] Koch | ERP-systemer: erfaringer, ressourcer, forandringer[END_REF]. It can be assumed that ERPs as well as how organizations use ERPs have evolved significantly from a focus on manufacturing to include service organizations [START_REF] Botta-Genoulaz | An investigation into the use of ERP systems in the service sector[END_REF]. These changes have created a renewed interest in developing and selling ERPs. Thus, the ERP market is a market that is in flux. This impacts not only the level of stakeholder involvement in an ERP value-chain [START_REF] Ifinedo | ERP systems success: an empirical analysis of how two organizational stakeholder groups prioritize and evaluate relevant measures[END_REF][START_REF] Somers | A taxonomy of players and activities across the ERP project life cycle[END_REF], but also how these different stakeholders gain CA from developing, selling, or using ERPs. It is clear that a user organization no longer achieves CA just by implementing an ERP [START_REF] Karimi | The Impact of ERP Implementation on Business Process Outcomes: A Factor-Based Study[END_REF][START_REF] Kocakulah | Enterprise Resource Planning (ERP): managing the paradigm shift for success[END_REF]. Fosser et al., [START_REF] Fosser | ERP Systems and competitive advantage: Some initial results[END_REF] present evidence that supports this and at the same time show that for some organizations there is a need to implement an ERP system for at least achieving competitive parity. They also claim that the way the configuration and implementation is accomplished can enhance the possibility to gain CA from an ERP system, but an inability to exploit the ERP system can bring a competitive disadvantage. This is in line with the assumption from the resource-based view that it is utilization of resources that makes organizations competitive and just implementing ERPs provides little, if any, CA [START_REF] Mata | Information technology and sustained competitive advantage: A resource-based analysis[END_REF]. One reason for this could be that the number of organizations that have implemented ERPs has exploded. Shehab et al. [START_REF] Shehab | Enterprise resource planning: An integrative review[END_REF] claim that the price of entry for running a business is to implement an ERP, and they even suggest that it can be a competitive disadvantage if you do not have an ERP system. Beard and Sumner [START_REF] Beard | Seeking strategic advantage in the post-net era: viewing ERP systems from the resource-based perspective[END_REF] argue that through reduction of costs or by increasing organizations revenue, ERPs may not directly provide organizations with CA. Instead, they suggest that advantages could be largely described as value-adding through an increase of information, faster processing, more timely and accurate transactions, and better decision-making.
In contrast to the above analysis, development of ERPs is described as a value-chain consisting of different stakeholders, as shown in Figure 1. The value-chain differs between business models; however, it can be claimed that the presented value-chain is commonly used in the ERP market. The presented value-chain can be seen as an ERP business model that has at least three different stakeholders: ERP software vendors, ERP resellers/distributors, and ERP end-user organizations (or ERP customers). It can be said that all stakeholders in the value-chain, to some extent, develop the ERP further. However, what is clear is that feedback related to requirements from users is of importance for future development. The software vendors develop the core of the system that they then "sell" to their partners, which act as resellers or distributors of the specific ERP. These partners quite often make changes to the system or develop what could be labeled as add-ons to the ERP core. These changes or add-ons are then implemented in order to customize the ERP for a specific customer. In some cases the customer develops the ERP system further, either by configuration or customization. At this stage of the value-chain it can be argued that the "original" ERP system could have changed dramatically from its basic design. This ERP development value-chain may result in the ERP software vendors not having as close a connection to the end-user as they would choose, and they do not always understand what functionalities are added to the end-users' specific ERP systems. Therefore, feedback in the ERP value-chain is essential for future development. The stakeholders in the ERP value-chain have different roles; accordingly, they have different views of CA gained from ERPs. One way of describing this is to use a concept from the resource-based view: core competence [START_REF] Javidan | Core competence: What does it mean in practice?[END_REF]. Developing ERPs is normally the ERP software vendor's core competence. The ERP resellers/distributors' core competence should also be closely related to ERPs, but it is unclear whether development should be their core competency. Their core competences could or should be marketing and implementing ERPs. However, this probably varies between ERP resellers/distributors; for some, development of add-ons could constitute one of their core competences. When it comes to end-user organizations, it can be said that ERP development definitely is not their core competence. However, they are involved in the ERP development value-chain, since it is crucial for an organization to have alignment between its business processes and supporting technology. To discuss this further, ERPs and CA are examined from the resource-based view of the firm in the next section.
ERP and Competitive Advantage seen from the Resource-Based View
Whether an organization (the customer in figure 1) gains CA from software applications depends, according to Mata et al. [START_REF] Mata | Information technology and sustained competitive advantage: A resource-based analysis[END_REF], as well as Kalling [START_REF] Kalling | Gaining competitive advantage through information technology: a resource-based approach to the creation and employment of strategic IT resources[END_REF], on how these resources are managed. The conclusion Mata et al. [START_REF] Mata | Information technology and sustained competitive advantage: A resource-based analysis[END_REF] draw is that among attributes related to software applications -capital requirements, proprietary technology, technical skills, and managerial software applications skills -it is only the managerial software application skills that can provide sustainability of CA. Barney [START_REF] Barney | Firm resources and sustained competitive advantage[END_REF] concludes that sources of sustained CA are and must be focused on heterogeneity and immobility of resources. This conclusion builds on the assumption that if a resource is evenly distributed across competing organizations and if the resource is highly mobile, the resource cannot produce a sustained competitive advantage as described in the VRIO framework (Table 1).
The VRIO framework aims at identifying resources with the potential to provide sustained competitive advantage by asking whether a resource or capability is valuable, rare, costly to imitate, and exploited by the organization. If all of these questions are answered affirmatively, the specific resource has the potential to deliver sustained competitive advantage to the organization. However, to do so, it has to be efficiently and effectively organized. Barney [23] describes this as exploiting the resource. If the organization is a first-mover, in the sense that it is the first organization that uses this type of resource in that specific way, it can quite easily gain competitive advantage, but this advantage may be only temporary. How long the competitive advantage lasts is a question of how hard it is for others to imitate the usage of that resource. This means that the question of how resources are exploited by the organization is the main factor when it comes to whether the competitive advantage becomes sustainable or not. The framework (Table 1), which employs Barney's [START_REF] Barney | Firm resources and sustained competitive advantage[END_REF] notions about CA and ICT in general, has been used extensively [START_REF] Lengnick-Hall | The role of social and intellectual capital in achieving competitive advantage through enterprise resource planning (ERP) systems[END_REF][START_REF] Beard | Seeking strategic advantage in the post-net era: viewing ERP systems from the resource-based perspective[END_REF][START_REF] Kalling | Gaining competitive advantage through information technology: a resource-based approach to the creation and employment of strategic IT resources[END_REF][START_REF] Fosser | Organisations and vanilla software: What do we know about ERP systems and competitive advantage?[END_REF]. What the conducted research implies is that CA can be difficult, but not impossible, to achieve if the resource is difficult to reproduce (e.g. the role of history, causal ambiguity and social complexity). Fosser et al. [START_REF] Fosser | Organisations and vanilla software: What do we know about ERP systems and competitive advantage?[END_REF] conclude that the real value of the resource is not the ICT in itself, but the way the managers exploit it, which is in line with the resource-based view of the firm and the value, rareness, imitability and organization (VRIO) framework.
Quinn and Hilmer [START_REF] Quinn | Strategic Outsourcing[END_REF] argue that organizations can increase their CA by concentrating on resources which provide unique value for their customers. There are many different definitions of CA; however, a basic definition is that the organization achieves above normal economic performance. If this situation is maintained, the CA is deemed to be sustained. Based on the discussion above and the statement made by Quinn and Hilmer [START_REF] Quinn | Strategic Outsourcing[END_REF], Table 2 suggests what the outcome of CA could be and how it could potentially be gained by the different stakeholders in the ERP development value-chain, including the end-user. There are some conflicts between attributes for gaining CA, such as developing competitively priced software with high flexibility, and developing software that is easy to customize while at the same time achieving CA by developing exclusive add-ons.
If the organization is a first mover in the sense that it is the first organization that uses this type of resource in a specific way, it can quite easily gain CA, but it will probably only be temporary. The length of time that the CA lasts depends on how hard or expensive it is for others to imitate the usage of that resource. This means that the question of how resources are exploited by the organization is the main factor when it comes to whether the CA becomes sustainable or not.
Levina and Ross [START_REF] Levina | From the vendor's perspective: Exploring the value proposition in information technology outsourcing(1)(2)[END_REF] describe the value proposition in outsourcing from a vendor's perspective. They claim that the value derived from vendors is based on their ability to develop complementary core competencies. From an ERP perspective, it can be suggested that vendors, as well as distributors (Figure 1), provide value by delivering complementary core competencies to their customers. The evolution of ERPs has made these resources easier to imitate. However, a major barrier to imitation is the cost of implementation [START_REF] Robey | Learning to Implement Enterprise Systems: An Exploratory Study of the Dialectics of Change[END_REF][START_REF] Davenport | Holistic management of mega-package change: The case of SAP[END_REF]. The resource-based view claims that a resource has to be rare, or heterogeneously distributed, to provide CA. In the case of ERPs, this kind of resource is not rare. There are a lot of possibilities for organizations to implement different ERPs, and the evolution of ICT has made it feasible for more organizations to implement ERPs by decreasing the costs of using ERPs. However, as described by Barney [23] and Shehab et al. [START_REF] Shehab | Enterprise resource planning: An integrative review[END_REF], failure to implement an ERP can also lead to an organization suffering competitive disadvantages.
The CA from ERPs would probably be negated by duplication as well as by substitution. If, for instance, the ERP resellers sold their add-ons to the ERP software vendor, the duplication of that add-on would be quicker and the CA that the ERP reseller previously had would be gradually eroded. However, if they kept the add-on as "their" unique solution, other ERP resellers or ERP software vendors would probably find a substitute to the add-on or develop their own. This implies a conflict between vendors and resellers when it comes to CA and the development of "better" ERPs. This can be explained by realizing that ERP resellers/distributors often develop add-ons which have a specific functionality for solving a particular problem for their customer. This can be seen as one way of customization, where resellers/distributors use their domain knowledge about the customers' industry in addition to their knowledge about the specific customer. This, in effect, allows resellers to increase their CA and earn abnormal returns. Another way is for resellers to sell the add-on to other resellers resulting in the resellers decreasing their CA in the long run. It is probable that resellers who sell their add-on solutions to other resellers would see it as not influencing their CA since they sell the add-on to customers already using the same ERP system and this would not make ERP end-user organizations change resellers. However, the question remains whether the same would apply if the resellers sold the add-on to the software vendor. The answer would depend on the incentives that the resellers had for doing that. If the add-ons were to be implemented in the basic software, the possibility of selling the add-on to client organizations, as well as to other resellers, would disappear.
Beard and Sumner [START_REF] Beard | Seeking strategic advantage in the post-net era: viewing ERP systems from the resource-based perspective[END_REF] investigate whether a common systems approach for implementing ERPs can provide a CA. The focus of their research was to investigate what happens when a variety of firms within the same industry adopt the same system and employ almost identical business processes. Their conclusion is that it seems that ERPs are increasingly a requirement for staying competitive (i.e. competitive parity), and that ERPs can yield at most a temporary CA. From this it can be suggested that ERP end-user organizations want a "cheap" system that they can use to improve their business processes, thereby making a difference compared with other organizations in the same industry. But, since ERPs encourage organizations to implement standardized business processes (so-called "best practice" Wagner and Newell, [START_REF] Wagner | Best' For Whom?: The Tension Between 'Best Practice' ERP Packages And Diverse Epistemic Cultures In A University Context[END_REF]), organizations get locked in by the usage of the system and then, depending on whether they are a first mover or not, they receive only a temporary CA. This implies that ERP end-user organizations often implement an ERP with the objective of having a "unique" ERP system. But does the ERP customer want a unique ERP system? If the customer believes they have a unique business model, it is likely they would want a unique ERP system. However, they also want a system with high interoperability internally, as well as one compatible with external organizations' systems. It is likely that end-user organizations have a need for a system that is not the same as their competitors'. This is congruent with the ERP resellers/distributors. They receive their CA by offering their customers the knowledge of how to customize an ERP using industries' best practices and, at the same time, how to implement functionality that makes the ERP system uniquely different from their competitors' systems. Based on this discussion, the next section presents some propositions on how thoughts about achieving CA from the uniqueness of an ERP system influence feedback of requirements in the ERP value-chain.
Propositions on how Competitive Advantage thoughts influence requirements feedback
Proposition 1: Both resellers and end-users (encouraged by resellers) in the ERP value-chain see customization as a way of achieving Competitive Advantage (CA). This could result in resistance to providing software vendors with the information necessary for them to develop ERPs further in the direction of standardization and thereby decreasing the resellers' need to customize the system.
Kalling [START_REF] Kalling | Gaining competitive advantage through information technology: a resource-based approach to the creation and employment of strategic IT resources[END_REF] suggested that the literature on resource protection focuses, to a large extent, on imitation, trade and substitution. He proposed that development of a resource can also be seen as a protection of the resource. Referring to Liebeskind [START_REF] Liebeskind | Knowledge, strategy, and the theory of the firm[END_REF], Kalling posited that the ability to protect and retain resources arises from the fact that resources are asymmetrically distributed among competitors. The problem, according to Kalling, is how to protect more intangible resources such as knowledge. Relating this to ERPs, it follows that knowledge about a specific usage situation of an ERP would be hard to protect by legal means, such as contracts. Another way of protecting resources is, as described by Kalling, to "protect by development." This means that an organization protects existing resources by developing resources in a way that flexibility is increased by adjusting and managing present resources. In the ERP case this could be described as customizing existing ERPs, thereby sustaining CA gained from using the ERP system. Kalling describes this as a way of increasing a time advantage. From the different ERP stakeholders' perspectives, it could be argued that both protection by development, as well as trying to increase the time advantage, influences the direction in which ERPs are developed.
Proposition 2: The conflict between the different parties in the ERP value-chain, and how they believe they will gain CA, influences the feedback in the ERP value-chain. This tends to increase the cost of both development and maintenance of ERP systems.
The discussion and propositions so far suggest that decision-makers in organizations, and their beliefs regarding how to gain and sustain CA by customization of ERPs, are a major hindrance to the development of future ERPs. This emanates from the assumption that organizations (end-users and resellers) protect the customizations they have made. The reason why they do so is based on their belief that they will sustain a CA gained by developing, selling or using customized ERPs. However, returning to Table 2 and the suggestion as to what it is that constitutes CA for the different stakeholders, it can be concluded that there are some generic influencing factors. The conflicting goals of the three parties in the ERP value-chain increase complexity in the market place. From a resource-based perspective, first mover advantage could be seen as something that influences all stakeholders and their possibility to gain and, to some extent, sustain CA. The same could also be said about speed of implementation. The main suggestion is that even if the role of history, causal ambiguity and social complexity influences the organizations' possibility to gain CA, the management skills that the organizations have are crucial.
When looking at what improves the market share of the three different stakeholders in the ERP value-chain, it can be proposed that there are no direct conflicts amongst stakeholders. The reason is that they all have different markets and different customers; therefore they do not compete directly with one another. In reality, they have each other as customers and/or providers, as described in Figure 1. It is suggested that further development of ERPs carried out by vendors could result in a higher degree of selling directly to end-customers, or in other ways of delivering ERPs to end-customers, so that the partners are driven to insolvency and replaced by, for instance, application service provision (ASP) [START_REF] Bryson | Designing effective incentive-oriented contracts for application service provider hosting of ERP systems[END_REF][START_REF] Johansson | Deciding on Using Application Service Provision in SMEs[END_REF], software as a service (SaaS) [START_REF] Jacobs | Enterprise software as service: On line services are changing the nature of software[END_REF] or open source [START_REF] Johansson | ERP systems and open source: an initial review and some implications for SMEs[END_REF][START_REF] Johansson | Diffusion of Open Source ERP Systems Development: How Users Are Involved, in Governance and Sustainability in Information Systems[END_REF]. The first step in this direction would probably be signaled if the add-ons that partners currently deliver to end-customers are implemented in the core product. From this it can be concluded that there is a potential conflict between the different parties in the value-chain when it comes to how different stakeholders gain CA and how that influences future ERP development.
ERP software vendors become competitive if they utilize their resources to develop ERPs that are attractive to the market. ERP resellers/distributors thus need to utilize their resources to become attractive partners when implementing ERPs. Furthermore, ERP end-users need to use the ERP system so that it supports their businesses. In other words, it is how end-user organizations employ the ERP that is of importance, and it could be that having a unique ERP system (Table 1) is not as important as has previously been believed. In other words, while customization is in the interests of the resellers this may not be the case for the end users.
Millman [START_REF] Millman | What did you get from ERP, and what can you get?[END_REF] posits that ERPs are the most expensive but least value-derived implementation of ICT support. The reason for this, according to Millman, is that a lot of ERPs functionality is either not used or is implemented in the wrong way. That it is wrongly implemented results from ERPs being customized to fit the business processes, instead of changing the process so that it fits the ERP [START_REF] Millman | What did you get from ERP, and what can you get?[END_REF]. However, according to Light [START_REF] Light | Going beyond "misfit" as a reason for ERP package customisation[END_REF], there are more reasons for customization than just the need for achieving a functionality fit between the ERP and the organization's business processes. He believes that from the vendor's perspective, customizations might be seen as fuelling the development process. From an end-user' perspective, Light describes customization as a value-added process that increases the system's acceptability and efficiency [START_REF] Light | Going beyond "misfit" as a reason for ERP package customisation[END_REF]. He further reasons that customization might occur as a form of resistance or protection against implementation of a business process that could be described as "best practices." One reason why end-user organizations get involved in ERP development is that they want to adjust their ERPs so that they support their core competences.
Proposition 3: End-users of ERPs and their basic assumption about how they receive CA are encouraged by resellers of ERPs. Resellers want to sustain their CA by suggesting and delivering high levels of ERP customization.
The main conclusion so far can be formulated as follows: Highly customized ERPs deliver better opportunities for CA for the resellers in the ERP value-chain, while they decrease the opportunity for both ERP software vendors and ERP end-user organizations to attain CA.
To discuss this further, in the next section we propose various scenarios supported by some early empirical data.
Scenarios describing ERP related Competitive Advantage
In this section eight possible scenarios on how thoughts about receiving competitive advantage from a customized ERP system could be described from a CA perspective is presented. The description is based on semi-structured interviews done with an ERP vendor, ERP reseller consultants and ERP customers and recently published studies in two Norwegian companies presented by Fosser et al,. [START_REF] Fosser | ERP Systems and competitive advantage: Some initial results[END_REF][START_REF] Fosser | Organisations and vanilla software: What do we know about ERP systems and competitive advantage?[END_REF]. The interviews with the ERP vendor and the ERP reseller consultants were part of an on-going research project investigating requirements management. The project aimed at gaining knowledge on what factors that influence future development of ERPs. In total there were 11 interviews conducted with different executives at a major ERP vendor organization and three interviews conducted with ERP consultants at a reseller organization. The reseller organization implements and supports different ERP systems, and one of their "products" is the ERP system that is developed by the ERP vendor. The interviews with ERP customers comes from the study done by Fosser et al., [START_REF] Fosser | ERP Systems and competitive advantage: Some initial results[END_REF][START_REF] Fosser | Organisations and vanilla software: What do we know about ERP systems and competitive advantage?[END_REF] (in total 19 interviews) which were part of a research project that aimed at understanding competitive advantage in an ERP context. Citations from interviews done in these different studies are used to illustrate findings and flesh out the content of table 3. Lose Lose Lose Scenario A: It can be said that this is probably the situation that all stakeholders in a business relationship ideally want. However, to have a win-win-win situation in an ERP development value-chain is not straightforward. From the vendors' perspective it means that they should develop an ERP system that is both so generic that the reseller could sell it to a lot of different clients to generate revenue from licenses and at the same time be so specific that the end users could gain a CA from the usage of the standardized system. However, if the vendor manages to develop such a generic form of ERP it is likely that end user would demand an extensive customization effort. The result could then be that the re-seller could sell a lot of consultancy hours for adjusting the software to the business processes in the client's organization. A quotation from an ERP consultant at an ERP reseller organization describes a situation when the feedback loop worked as a win-win-win situation. The ERP consultant said: "Before the ERP vendor merged with a bigger ERP vendor we had a close relationship that actually made it possible to have requests from a specific customer implemented in the system. Now we don't know who to talk with and even if we get a contact with them (the vendor) they are not really interested". He (the ERP consultant) continues with stating that: "We developed a very interesting add-on for a customer, that we then tried to get implemented in the base system but it was impossible. So, we started to sell this add-on to other ERP resellers (of the same system). We did so because we think it will benefit us in the long run if customers feel that the system is interesting -In that way we will probably increase our market".
If this continues for some time, it probably ends with a situation as in Scenario E. Scenario E is the situation in which the vendor loses and the re-seller and clients win. We see this as a possibility if the re-sellers spend much of their time with clients developing ERP systems that offer CA and generate large numbers of consultancy hours, but at the cost of not marketing the base ERP system to new clients. Our early data gathering suggests this scenario is common among the stakeholders. One example supporting this situation is the following statement from an executive at the ERP vendor (the same ERP vendor that was mentioned above by the consultant at the ERP reseller).
The executive at the ERP vendor said that: "We don't have enough knowledge about how the system is used and what the user of the system actually wants to have. This makes that future development of the system is extremely hard and it is a fact that there are problems with requirements management in ERP development" Director of Program Management.
Comparing the citation from the consultant with the one from the vendor, there seems to be a contradiction. The consultant finds it hard to provide feedback, while the vendor experiences a lack of feedback. From the CA perspective this is hard to explain; however, what can be said is that this specific consultant sees an opportunity to increase its CA by providing feedback to the vendor. The reason why it does not happen is probably related to a lack of resources at the vendor or the lack of a clear relationship between the parties. One way for the vendor to deal with this is to establish a closer relationship with some ERP resellers -for instance through a relationship program giving some benefits to resellers that have a close relationship with the vendor. However, it demands that they, for instance, follow a specific process for implementation of the ERP.
This could then result in the situation described in scenario B, in which both the vendor and the re-seller have a win-win situation while the client has a disadvantaged position especially if they do not customize the software to the extent whereby they gain CA. The following quotations from ERP customers describe this situation. "An ERP system is something you just need to do business today. But the way we have implemented it and configured it has given us a competitive advantage." Assistant Director of Logistics.
"I believe that it is mostly a system you need to have. But an ERP system can be utilized to achieve a competitive advantage, if you are skillful." Senior Consultant.
"It keeps us on the same level as our competitors. We are focusing on quality products. That is our competitive advantage. An ERP system cannot help us with that". The Quality Manager.
"I don't think we have got any competitive advantage. All our competitors are running such a system, so it is just something we need to have. It is actually a competitive disadvantage because we have not managed to get as far as the others, with the system." Managing Director.
All these citations describe the situation in which customers see ERP implementation as a necessity to avoid competitive disadvantage. To some extent it can be said that they understand customization as something you do to gain CA, which implies that they are all interested in what other customers do, and that could be seen as something that hinders feedback, resulting in the scenario B situation. Another reason why the situation could result in scenario B is that, as has been shown, if clients customize to a high extent, the long-term maintenance costs of the ERP system become so great that the benefits are lost. The following statement from a developer at the ERP vendor supports scenario B.
"It is clearly seen that when a customer implement the ERP system for the first time they customize a lot. When they then upgrade with a new version the extensive customization is much less and when they upgrade with version 3 and/or 4 they hardly don't do any customization. The reason is must likely that they have discovered that customization cost a lot at the same time as they have discovered that they are not that unique that they thought when implementing the first version" Program Manager A.
In the long run this could also result in scenario F. Scenario F describes the situation where the vendor starts to lose market share because clients have problems achieving CA resulting in a bad reputation for the ERP product. The situation of less customization and less demand on add-ons could also result in scenario C. In scenario C, we see a vendor by-passing the reseller and working directly with the client enabling them both to gain a CA. This is somewhat supported by an executive at the ERP vendor, who says: "However, there will probably be a day when the partners not are needed -at least for doing adjustments of ERPs. This is not a problem since the rules of the game always change. And there will still be a need for partners. The partners see themselves as … they understand the customer's problem." Program Manager B.
Scenario D is an interesting scenario since it is only the vendor that shows a winning position. It could be explained by the fact that if the vendor manages to develop a generic ERP system and thereby gains a more or less monopoly status, it will have the possibility to sell many licenses. It also reflects the situation in which the vendor does not seem to be dependent on feedback from customers in the development of the ERP. A quotation from an ERP customer describes this clearly: "I try to exploit the available tools in SAP without investing money in new functionality. There are a lot of possibilities in the ERP systems, e.g. HR, which we are working with to utilize our resources more efficiently." Director of Finance.
It could also be that the client needs to buy and implement the ERP since it is more or less a necessity to implement an ERP to obtain competitive parity. This means that ERP end-users use the ERP as standardized software and do not feel that providing feedback to the vendor is of importance.
Scenario G is probably a situation that the vendor would not allow to continue. However, from the perspective of an ERP customer, one motive for restricting the feedback could be justified by this citation: "We have a unique configuration of the system that fits our organization and this gives us a competitive advantage. The IS department is very important in this context." Assistant Director of Logistics. Another citation suggests that providing feedback could instead be a way of gaining competitive advantage: "I actually hold lectures about how we do things in our organization. I tell others about the big things, but I think it is the small things that make us good. All the small things are not possible to copy. I think it is a strength that we have a rumor for being good at ERP and data warehouse. It gives [us] a good image. Though, we are exposed to head hunters from other organizations." Director of IS.
The empirical data so far did not provide any evidence for scenario G or scenario H. Regarding scenario H, it can be stated that, from a "prisoner's dilemma game" perspective [START_REF] Tullock | Adam Smith and the Prisoners' Dilemma[END_REF], it could happen that all lose; however, research on the prisoner's dilemma game makes it clear that if the "game" is repeated, the involved parties will start to cooperate [START_REF] Tullock | Adam Smith and the Prisoners' Dilemma[END_REF]. This means that it can more or less be assumed that, in the ERP value-chain case, the stakeholders will in the long run work in the direction of scenario A. This also means, to some extent, that none of the scenarios (B, D, F and H) in which clients lose will be sustainable in the long run.
Concluding remark and future research
Using an innovative value-chain analysis considering the ERP vendor, reseller and client, we developed eight scenarios to examine our research question: "What influence has thoughts about receiving competitive advantage on the feedback related to requirements in ERP development?" From the preliminary empirical research, evidence to support six of the eight scenarios was found. As the other two were the least likely to occur, the findings encourage further systematic research in the future to flesh out the findings and to look particularly at ERP acquisitions in a variety of settings. As ERP systems are ubiquitous in modern corporations, it is vital that managers consider the value such systems offer in the long term. Furthermore, the analysis offers a more in-depth understanding of the dynamics of the ERP development value-chain, its complexity and its impact on competitive advantage for the different stakeholders.
However, returning to the question of how CA thoughts influence feedback in ERP development, it can be stated that they seem to influence the feedback, but not really in the way that was initially assumed. Instead of having, as was assumed, a restrictive view of providing feedback, stakeholders seem to be more interested in having a working feedback loop in the ERP value-chain, making the parties in a specific value-chain more interested in competing with parties in other ERP value-chains.
For the future, it will be interesting also to try to reveal the patterns that emerge in the value chain and investigate which scenarios are more sustainable in the long-term and how clients can position themselves more effectively to improve their competitive advantage.
Figure 1
1 Figure 1 Stakeholders in the ERP value-chain
Table 1
1 The VRIO framework[START_REF] Barney | Gaining and sustaining competitive advantage[END_REF]
Is a resource or capability…
Valuable? | Rare? | Costly to Imitate? | Exploited by Organisation? | Competitive Implications | Economic Performance
No | --- | --- | No | Competitive Disadvantage | Below Normal
Yes | No | --- | | Competitive Parity | Normal
Yes | Yes | No | | Temporary Competitive Advantage | Above Normal
Yes | Yes | Yes | Yes | Sustained Competitive Advantage | Above Normal
Table 2
2 ERP value-chain stakeholders and competitive advantage
Stakeholder | Outcome of Competitive Advantage | Gained through
ERP Software Vendor | High level of market share in the ERP market (e.g. the number of software licenses sold) | Competitively priced software; highly flexible software; ease of implementing the software; ease of customizing the software
ERP Resellers/distributor | High level of market share in the ERP consultancy market (e.g. consultancy hours delivered) | Knowledge about the customer's business; high level of competence in development of add-ons that are seen as attractive by the ERP end-user organization; high level of competence at customization
ERP end-user organization | High level of market share in the customer-specific market (e.g. products or services sold; rising market share; lower costs) | Being competitive in its own market; implementing an ERP system that supports its business processes; implementing an ERP system that is difficult for competitors to reproduce
Table 3
3 Scenarios describing win or lose relationship
Scenario Vendor Re-Seller Client (end user)
A Win Win Win
B Win Win Lose
C Win Lose Win
D Win Lose Lose
E Lose Win Win
F Lose Win Lose
G Lose Lose Win
H Lose Lose Lose
"1001319"
] | [
"344927"
] |
01484675 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484675/file/978-3-642-36611-6_12_Chapter.pdf | Rogerio Atem De Carvalho
Björn Johansson
email: bjorn.johansson@ics.lu.se
Towards More Flexible Enterprise Information Systems
Keywords: Enterprise Information Systems, Domain Specific Languages, Design Patterns, Statechart Diagrams, Natural Language Processing
The aim of this paper is to present the software development techniques used to build the EIS Patterns development framework, which is a testbed for a series of techniques that aim at giving more flexibility to EIS in general. Some of these techniques are customizations or extensions of practices created by the agile software development movement, while others represent new proposals. This paper also aims at helping to promote more discussion around EIS development questions, since most research papers in the EIS area focus on deployment, IT, or business-related issues, leaving the discussion on development techniques poorly covered.
Introduction
In Information Systems, flexibility can be understood as the quality of a given system to be adaptable in a cost- and effort-effective and efficient way. Although it is usual to hear from Enterprise Information Systems (EIS) vendors that their systems are highly flexible, practice has shown that customizing this type of system is still a costly task, mainly because they are still based on relatively old software development practices and tools. In this context, the EIS Patterns framework1 is a research project which aims at providing a testbed for a series of relatively recent techniques nurtured in the Agile methods communities and ported to the EIS arena.
The idea of suggesting and testing new ways of developing EIS was born from accumulated research and experience with more traditional methods, such as Model Driven Development (MDD), on top of the open source ERP5 system [START_REF] Smets-Solanes | ERP5: A Next-Generation, Open-Source ERP Architecture[END_REF]. ERP5 represents a fully featured and complex EIS core, making it hard to test the ideas presented here in their pure form; thus, it was decided to develop a simpler framework to serve as a proof of concept of the proposed techniques.
This paper is organized as follows: the next topic summarizes the series of papers that forms the timeline of research done on top of ERP5; following this, the proposed techniques are presented, and finally some conclusions and possible directions are listed.
Background
In order to understand this proposal, it is necessary to know the basis from where it was developed, which is formed by a series of approaches developed on top of ERP5. Following the dominant tendency of the past decade, which was using MDD, the first approach towards a formalization of a deployment process for ERP5 was to develop a high-level modeling architecture and a set of reference models [START_REF] Campos | Modeling Architecture and Reference Models for the ERP5 Project[END_REF], as well as the core of a development process [START_REF] Carvalho | A Development Process Proposal for the ERP5 System[END_REF]. This process evolved to the point of providing a complete set of integrated activities, covering the different abstraction levels involved by supplying, according to the Geram [START_REF]IFIP -IFAC GERAM: Generalized Enterprise Reference Architecture and Methodology, IFIP -IFAC Task Force on Architectures for Enterprise Integration[END_REF] framework, workflows for Enterprise, Requirements, Analysis, Design, and Implementation tasks [START_REF] Monnerat | Enterprise Systems Modeling: the ERP5 Development Process[END_REF].
Since programming is the task that provides the "real" asset in EIS development, which is the source code that reflects the business requirements, programming activities must also be covered. Therefore, "ERP5: Designing for Maximum Adaptability" [START_REF] Carvalho | ERP5: Designing for Maximum Adaptability[END_REF] presents how to develop on top of ERP5's document-centric approach, while "Using Design Patterns for Creating Highly Flexible EIS" [START_REF] Carvalho | Using Design Patterns for Creating Highly Flexible Enterprise Information Systems[END_REF] presents the specific design patterns used to derive concepts from the system's core. Complementarily, "Development Support Tools for ERP" [START_REF] Carvalho | Development Support Tools for Enterprise Resource Planning[END_REF] presents two comprehensive sets of ERP5's development support tools: (i) product-related tools that support code creation, testing, configuration, and change management, and (ii) process-related tools that support project management and team collaboration activities. Finally, "ERP System Implementation from the Ground up: The ERP5 Development Process and Tools" [START_REF] Carvalho | ERP System Implementation from the Ground up: The ERP5 Development Process and Tools[END_REF] presents the whole picture of developing on top of ERP5, locating the usage of the tools in each development workflow and defining its domain-specific development environment (DSDE).
Although it was possible to develop a comprehensive MDD-based development process for the ERP5 framework, the research and development team responsible for proposing this process developed, at the same time, an Enterprise Content Management solution [START_REF] Carvalho | An Enterprise Content Management Solution Based on Open Source[END_REF] and experimented with Agile techniques for both managing the project and constructing the software. Porting this experimentation to the EIS development arena led to the customization of a series of agile techniques, as presented in "Agile Software Development for Customizing ERPs" [START_REF] Carvalho | Agile Software Development for Customizing ERPs[END_REF].
The work on top of ERP5 provided a strong background, in both research and practice, sufficient to identify the types of relatively new software development techniques that could be used in other EIS development projects. Moreover, this exploration of a real-world, complex system showed that further advances could be obtained by going deeper into some of the techniques used, as well as by applying them in a lighter framework, where experimental results could be obtained quickly.
Enters EIS Patterns
EIS Patterns is a simple framework focused on testing new techniques for developing flexible EIS. It was conceived with Lego sets in mind: very basic building blocks that can be combined to form different business entities. Therefore, it was built around three very abstract concepts, each one with three subclasses, representing two "opposite" derived concepts and an aggregator of the first two, forming the structure presented in Fig. 1.
-Transformation: a movement inside a node; in other words, the source and destination are the same node. It represents the transformation of a resource by machine or human work, such as drilling a metal plate or writing a report.
-Transportation: is a movement of resources between two nodes, for example, moving a component from one workstation to another, sending an order from the supplier to the customer.
-Process: a collective of transformations and/or transportations, in other words, a business process.
Besides the obvious "is a" and "is composed by" relationships presented in the ontology in Fig. 1, a chain of relationships denotes how business processes are implemented: "a Process coordinates Node(s) to perform Operation(s) that operate on Work Item(s)". The semantic meaning of this chain is that process objects control under which conditions node objects perform operations in order to transform or transport resources. This leads to another special relationship, "a Movement encapsulates an Operation", which means that a movement object will encapsulate the execution of an operation. In practical terms, an operation is the abstract description of a production operation, which is implemented by one or more node objects' methods. When this operation is triggered by a process object, it defers the actual execution to a pre-configured node object's method, and this execution is logged by a movement object, which stores all parameters, the date and time, and the results of the execution. Therefore, an operation is an abstract concept which can be configured to defer execution to different methods, from different objects, in accordance with the intents of a specific business process instance. In other words, a business process abstraction keeps its logic, while specific results can be obtained by configuration.
Although this execution deference can appear to be complex, it is a powerful mechanism that allows a given business process model to be implemented in different ways, according to different modeling-time or even runtime contexts. In other words, the same process logic can be implemented in different ways, for different applications, thus leveraging the power of reuse.
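A minimal Python sketch of this deference mechanism is shown below. The class and method names (Operation, Movement, a Person node with a create_quotation method) are illustrative assumptions rather than the framework's actual API.

import datetime

class Movement:
    """Logs one execution of an operation: parameters, timestamp and result."""
    def __init__(self, operation, node, params, result):
        self.operation = operation
        self.node = node
        self.params = params
        self.result = result
        self.when = datetime.datetime.now()

class Operation:
    """Abstract description of a production operation; it is configured, per
    business process, with the node object and method that do the real work."""
    def __init__(self, name):
        self.name = name
        self._node = None
        self._method_name = None

    def configure(self, node, method_name):
        self._node = node
        self._method_name = method_name

    def execute(self, **params):
        # Defer the actual execution to the configured node method...
        result = getattr(self._node, self._method_name)(**params)
        # ...and log it as a movement so the process can be audited.
        return Movement(self, self._node, params, result)

class Person:
    """A Node subtype; here it knows how to prepare a quotation."""
    def __init__(self, name):
        self.name = name

    def create_quotation(self, customer, items):
        return {"customer": customer, "items": items, "prepared_by": self.name}

# The same abstract operation can be wired to different nodes and methods,
# depending on how a specific business process instance is configured.
quote = Operation("prepare quotation")
quote.configure(Person("Alice"), "create_quotation")
movement = quote.execute(customer="ACME", items=["service A"])
print(movement.result, movement.when)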
It is important to note that in this environment, Processes control the active elements, the Nodes, which in turn operate on top of the passive ones, the Resources. In programming terms, this means that processes are configurable, nodes are extended, and resources are typically "data bag" classes. Therefore, extending the nodes for complying with new business requirements becomes the next point where flexibility must take place.
Using Decorators to Create a Dynamic System
Usually, class behavior is extended by creating subclasses; however, this basic technique can lead to complex, hard to maintain, and, even worse, hard-coded class hierarchies. One of the solutions to avoid this is to use the Decorator design pattern [START_REF] Gamma | Design Patterns -Elements of Reusable Object-Oriented Software[END_REF], taking into account the following matters:
-While subclassing adds behavior to all instances of the original class, decorating can provide new behavior, at runtime, for individual objects. At runtime means that decoration is a "pay-as-you-go" approach to adding responsibilities.
-Using decorators allows mix-and-matching of responsibilities.
-Decorator classes are free to add operations for specific functionalities.
-Using decorators facilitates system configuration; however, it is typically necessary to deal with lots of small objects.
Hence, by using decorators it is possible, during a business process realization, to assign and/or remove different responsibilities to and from node objects -in accordance with the process logic -providing two main benefits: (i) the same object, with the same identifier, is used during the whole business process, so there is no need to create different objects of different classes, and (ii) given (i), auditing is facilitated, since it is not necessary to follow different objects; instead, the decoration of the same object is logged. Moreover, it is possible to follow the same object during its whole life-cycle, including through different business processes: after an object is created and validated -meaning that it reflects a real-world business entity -it will keep its identity forever 2 .
An important remark is that decorators must keep a set of rules of association, which is responsible for allowing or prohibiting objects to be assigned to new responsibilities. If a given object respects the rules of association of a given decorator, it can be decorated by it. At this point, defining a flexible way of ensuring contracts between decorators and decorated objects is of interest.
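The sketch below illustrates this runtime decoration idea together with a rule of association; the EmployeeDecorator/ManagerDecorator names and the rule_of_association hook are assumptions made for illustration, not the framework's real classes.

class Node:
    """A plain business entity; responsibilities are attached at runtime."""
    def __init__(self, name):
        self.name = name
        self.decorators = []

class NodeDecorator:
    """Base decorator: wraps a node only if its rule of association holds."""
    def __init__(self, decorated):
        if not self.rule_of_association(decorated):
            raise ValueError(f"{decorated.name} cannot take this responsibility")
        self.decorated = decorated
        decorated.decorators.append(type(self))

    @staticmethod
    def rule_of_association(node):
        return True  # no precondition by default

class EmployeeDecorator(NodeDecorator):
    def work(self, hours):
        return f"{self.decorated.name} worked {hours}h"

class ManagerDecorator(NodeDecorator):
    @staticmethod
    def rule_of_association(node):
        # A manager must already hold the basic employee responsibility.
        return EmployeeDecorator in node.decorators

    def approve(self, document):
        return f"{self.decorated.name} approved {document}"

# The same object (same identity) accumulates responsibilities over time.
alice = Node("Alice")
employee = EmployeeDecorator(alice)
manager = ManagerDecorator(alice)   # allowed: Alice is already an employee
print(employee.work(8), manager.approve("quotation #42"))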
Should-dsl: a language for contract checking
Although Should-dsl was originally created as a domain specific language for checking expectations in automated tests [START_REF] Tavares | A tool stack for implementing Behavior-Driven Development in Python Language[END_REF], in the EIS Patterns framework it is also used to provide highly readable contract verifiers, such as:
associated |should| be_decorated_by(EmployeeDecorator)
In the case above the rule is self-explanatory: "the associated object should be decorated by the Employee Decorator", meaning that for someone to get a manager's skills, he or she should have the basic employee's skills first. Besides being human readable, these rules are queryable: for a given decorator it is possible to obtain its rules, as well as the symmetric case: for a given node object, it is possible to identify which decorators it can use. Query results, together with the analysis of textual requirements using Natural Language Processing, are used to help configure applications built on top of the framework.
Using Natural Language Processing to Find Candidate Decorators
It is also possible to parse textual requirements, find the significant terms, and use them to query the decorators' documentation, so the framework can suggest possible decorators to be used in accordance with the requirements. Decorators' methods that represent business operations -the components of business processes -are specially tagged, making it possible to query their documentation as well as obtain their category. Categories are used to classify these operations; for instance, it is possible to have categories such as "financial", "logistics", "manufacturing" and so on. In that way, the framework can suggest, from its base of decorators, candidates that match the users' requirements.
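A simple keyword-matching sketch of this suggestion mechanism is shown below. The tagging scheme (a business_operation attribute carrying a category) and the example decorators are assumptions used only for illustration; a real implementation could rely on a proper NLP toolkit for tokenization and stemming.

import re

def business_operation(category):
    """Tag a decorator method as a business operation of a given category."""
    def tag(method):
        method.business_operation = category
        return method
    return tag

class QuotationDecorator:
    @business_operation("financial")
    def prepare_quotation(self, customer):
        """Prepare a price quotation for a customer."""

class ShippingDecorator:
    @business_operation("logistics")
    def schedule_delivery(self, order):
        """Schedule the delivery of an order to the customer address."""

def suggest_decorators(requirement, decorators):
    """Rank decorators by how many requirement terms appear in the
    docstrings of their tagged business operations."""
    terms = set(re.findall(r"\w+", requirement.lower()))
    ranking = []
    for dec in decorators:
        hits = 0
        for attr in vars(dec).values():
            if callable(attr) and hasattr(attr, "business_operation"):
                doc = (attr.__doc__ or "").lower()
                hits += len(terms & set(re.findall(r"\w+", doc)))
        ranking.append((hits, dec.__name__))
    return sorted(ranking, reverse=True)

print(suggest_decorators("the sales person sends a quotation to the customer",
                         [QuotationDecorator, ShippingDecorator]))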
A Domain-Specific and Ubiquitous Language for Modeling Business Process
The ontology presented in Fig. 1, although simple, is abstract enough to represent entities involved in any business process. Moreover, by appropriately using a statechart diagram, it is possible to use a single model to describe a business process, define active entities, as well as to simulate the process.
In order to better describe this proposal, Fig. 2 shows a simple quotation process. Taking into account that a class diagram was used to represent the structural part of the business process 3 , by explicitly declaring the objects responsible for the transitions it is possible to identify the active elements of the process, all of the Person type: sales_rep, verifier, approver, and contractor; as well as how they collaborate to perform the business process, by attaching the appropriate method calls. Additionally, in some states, a method is declared with the "/do" tag, to indicate that a simulation can be run when the process enters these states.
To run these state machine models, Yakindu (www.yakindu.org) could be used. By adapting the statechart execution engine, it is possible to run the model while making external calls to automated tests, giving the user the view of the live system running, as proposed by Carvalho et al. [START_REF] Carvalho | Business Language Driven Development: Joining Business Process Models to Automated Tests[END_REF].
Fig. 2. A simple quotation process using the proposed concepts.
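As a rough illustration of how such a statechart could be executed outside a dedicated tool, the sketch below encodes the quotation process as a transition table. The actor roles (sales_rep, verifier, approver, contractor) follow Fig. 2, while the state and event names are assumptions.

# (state, actor, event) -> next state; a tiny interpreter for the statechart.
TRANSITIONS = {
    ("draft", "sales_rep", "submit"): "under_verification",
    ("under_verification", "verifier", "verify"): "under_approval",
    ("under_approval", "approver", "approve"): "approved",
    ("approved", "contractor", "sign"): "contracted",
}

def run(events, state="draft"):
    for actor, event in events:
        key = (state, actor, event)
        if key not in TRANSITIONS:
            raise ValueError(f"{actor} cannot '{event}' while in state '{state}'")
        state = TRANSITIONS[key]
        print(f"{actor} -> {event}: now in '{state}'")
    return state

run([("sales_rep", "submit"), ("verifier", "verify"),
     ("approver", "approve"), ("contractor", "sign")])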
An Inoculable Workflow Engine
Workflow engines provide the basis for the computational realization of business processes. Basically, there are two types of workflow engines: (i) those associated with application development platforms and (ii) those implemented as software libraries.
EIS Patterns uses Extreme Fluidity (xFluidity), a variation of the type (ii) workflow engine, developed as part of the framework. xFluidity is an inoculable (and expellable) engine that can be injected into any Python object, turning it workflow-aware. Symmetrically, it can be expelled from the object, turning the object back to its initial structure when necessary. It was developed in this way because type (i) engines force you to use a given environment to develop your applications, while type (ii) engines force you to use specific objects to implement workflows, most of the time creating a mix of application-specific code and workflow-specific statements. With xFluidity it is possible to define a template workflow and insert the code necessary to make it run inside the business objects, while keeping the programming style, standards, naming conventions, and patterns of the development team. In EIS Patterns, xFluidity is used to configure Process objects, making them behave as business process templates.
Currently xFluidity is a state-based machine; however, it could be implemented using other notations, such as Petri Nets. In that case, no changes would be necessary in the inoculated objects, given that these objects do not need to know which notation is in use: they simply follow the template.
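The xFluidity API itself is not reproduced here; the sketch below only illustrates, with hypothetical names, what "inoculating" and "expelling" workflow behavior into a plain Python object could look like.

def inoculate(obj, transitions, initial):
    """Inject state-machine behavior into an arbitrary object."""
    obj._wf_transitions = transitions
    obj._wf_state = initial
    def fire(self, event):
        self._wf_state = self._wf_transitions[(self._wf_state, event)]
        return self._wf_state
    obj.fire = fire.__get__(obj)      # bind as a method on this instance only
    return obj

def expel(obj):
    """Remove the injected behavior, restoring the object's original shape."""
    for name in ("_wf_transitions", "_wf_state", "fire"):
        if name in obj.__dict__:
            del obj.__dict__[name]
    return obj

class QuotationProcess:               # an ordinary domain class, workflow-unaware
    def __init__(self, customer):
        self.customer = customer

process = inoculate(QuotationProcess("ACME"),
                    {("draft", "submit"): "submitted",
                     ("submitted", "approve"): "approved"},
                    initial="draft")
process.fire("submit")
print(process._wf_state)              # 'submitted'
expel(process)                        # back to a plain QuotationProcess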
Conclusions and Further Directions
This paper briefly presents a series of techniques that can be applied to make EIS more flexible, including the use of dynamic languages 4 . Although the EIS Patterns framework is a work in progress, it is developed on top of research and practical experience obtained during the development of the ERP5 framework.
This experience led to the use of an abstract core to represent all concepts, while providing flexibility through the use of the Decorator pattern. On top of this technique, Natural Language Processing (NLP) and automated contract checking are used to improve reuse even more and, as a side effect, enhance system documentation, given that developers are forced to provide code documentation as well as to define association contracts through should-dsl, which is a formal way of defining the requirements for the use of decorators to expand the functionality of Node objects.
The integrated use of an inoculable workflow engine, a domain-specific and ubiquitous language, and should-dsl to check association contracts, is innovative and provides more expressiveness to the models and the source code, by the use of a single language for all abstraction levels, which reduces the occurrence of translation errors through these levels. This is an important point: more expressive code facilitates change and reuse, thus increasing flexibility.
Further improvements include the development of a workflow engine based on BPMN, in order to make the proposal more adherent to current tendencies, and provide advances on the use of NLP algorithms to ease identification and reuse of concepts.
Fig. 1. Ontology representing the EIS Patterns core. Fig. 1 is interpreted as follows:
Resource: anything that is used for production.
-Material: product, component, tool, document, raw material etc.
-Operation: human operation and machine operation, as well as their derivatives.
-Kit: a collective of material and/or immaterial resources, e.g. bundled services and components for manufacturing.
Node: an active business entity that transforms resources.
-Person: employee, supplier's contact person, drill operator etc.
-Machine: hardware, software, drill machine, bank account etc.
-Organization: a collective of machines and/or persons, such as a manufacturing cell, department, company, government.
Movement: a movement of a Resource between two Nodes.
Initially discussed at the EIS Development blog through a series of posts entitled EIS Patterns, starting in December
(http://eis-development.blogspot.com).
A more complete discussion on using decorators, with examples, can be found at http://eisdevelopment.blogspot.com.br/2011/03/enterprise-information-systems-patterns_09.html
For a discussion on this see http://eis-development.blogspot.com.br/2010/09/is-java-betterchoice-for-developing.html | 19,289 | [
"1003572",
"1001319"
] | [
"487851",
"487852",
"344927"
] |
01484676 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484676/file/978-3-642-36611-6_13_Chapter.pdf | D Mansilla
email: tina.dmansilla@educ.ar
Pollo - Cattaneo
P Britos
García -Martínez
A Proposal of a Process Model for Requirements Elicitation in Information Mining Projects
Keywords: Process, elicitation, information mining projects, requirements
A problem addressed by an information mining project is transforming existing business information of an organization into useful knowledge for decision making. Thus, the traditional software development process for requirements elicitation cannot be used to acquire required information for information mining process. In this context, a process of requirements gathering for information mining projects is presented, emphasizing the following phases: conceptualization, business definition and information mining process identification.
Introduction
Traditional Software Engineering offers tools and processes for software requirements elicitation which are used for creating automated information systems. Requirements are referred to as a formal specification of what needs to be developed. They are descriptions of the system behaviour [START_REF] Sommerville | Requirements Engineering: A Good Practice Guide[END_REF].
Software development projects usually begin by obtaining an understanding of the business domain and the rules that govern it. Understanding business domains helps to identify requirements at the business level and at the product level [START_REF] Lauesen | Software Requirements. Styles and Techniques[END_REF], which define the product to be built considering the context where it will be used. Models such as Context Diagrams, Data Flow Diagrams and others are used to graphically represent the business process under study and are used as validation tools for these business processes. A functional analyst is oriented to gather data about the inputs and outputs of the software product to be developed and how that information is transformed by the software system.
Unlike software development projects, the problem addressed by information mining projects is to transform the existing information of an organization into useful knowledge for decision making, using analytical tools [START_REF] Pollo-Cattaneo | Proceso de Educción de Requisitos en Proyectos de Explotación de Información[END_REF]. Models for requirements elicitation and project management, by focusing on the software product to be developed, cannot be used to acquire the information required for information mining processes. In this context, it is necessary to transform the existing experience in the use of requirements elicitation tools in the software development domain into knowledge that can be used to build models used in business intelligence projects and in information mining processes [START_REF] Pollo-Cattaneo | Ingeniería de Proyectos de Explotación de Información[END_REF] [5] [6].
This work describes the problem (section 2) and presents a proposal for a process model for requirements elicitation in information mining projects (section 3), emphasizing three phases: Conceptualization (section 3.1), Business Definition (section 3.2) and Information Mining Process Identification (section 3.4). Then, a case study is presented (section 4), and conclusions and future lines of work are proposed (section 5).
State of current practice
Currently, several disciplines have been standardized in order to incorporate best practices learned from experience and from new discoveries.
The discipline of project management, for example, generated a body of knowledge where the different process areas of project management are defined. Software engineering specifies different software development methodologies, like the software requirements development process [START_REF] Sommerville | Requirements Engineering: A Good Practice Guide[END_REF]. On the other hand, related to information mining projects, there are some methodologies for developing information mining systems, such as DM [START_REF] Garcia-Martinez | Information Mining Processes Based on Intelligent Systems[END_REF], P3TQ [START_REF] Pyle | Business Modeling and Business Intelligence[END_REF], and SEMMA [START_REF]SAS Enterprise Miner: SEMMA[END_REF].
In the field of information mining there is not a unique process for managing projects [START_REF] Pollo-Cattaneo | Metodología para Especificación de Requisitos en Proyectos de Explotación de Información[END_REF]. However, there are several approaches that attempt to integrate the knowledge acquired in traditional software development projects, like the Kimball Lifecycle [START_REF] Kimball | The Data Warehouse Lifecycle Toolkit[END_REF] and a project management framework for medium and small organizations [START_REF] Vanrell | Modelo de Proceso de Operación para Proyectos de Explotación de Información[END_REF]. In [START_REF] Britos | Requirements Elicitation in Data Mining for Business Intelligence Projects[END_REF] an operative approach regarding information mining project execution is proposed, but it does not detail which elicitation techniques can be used in a project.
The problem found is that the previously mentioned approaches emphasize work methodologies associated with information mining projects and do not adapt traditional software engineering requirements elicitation techniques. In this situation, it is necessary to understand the activities that should be performed and which traditional elicitation techniques can be adapted for use in information mining projects.
Proposed Elicitation Requirement Process Model
The proposed process defines a set of high-level activities that must be performed as a part of the business understanding stage presented in the CRISP-DM methodology, and it can be used in the business requirements definition stage of the Kimball Lifecycle. This process breaks down the problem of requirements elicitation in information mining projects into several phases, which transform the knowledge acquired in the earlier stage. Figure 1 shows the strategic phases of an information mining project, focusing on the proposed requirements elicitation activities. The project management layer deals with the coordination of the different activities needed to achieve the objectives. Defining the activities in this layer is beyond the scope of this work. This work identifies activities related to the process exposed in [START_REF] Kimball | The Data Warehouse Lifecycle Toolkit[END_REF] and can be used as a guide for the activities to be performed in an information mining project.
Business Conceptualization Phase.
The Business Conceptualization phase is the phase of the elicitation process used by the analyst to understand the language used by the organization and the specific words used by the business. Table 1 summarizes the inputs and outputs of the Business Conceptualization phase. Interviews with business users identify the information-related problems that the organization has. The first activity is to identify a list of people that will be interviewed. This is done as part of the business process gathering activity.
In these interviews, information related to business processes is collected and modeled in use cases. A business process is defined as the process of using the business on behalf of a customer and how different events in the system occur, allowing the customer to start, execute and complete the business process [START_REF] Jacobson | The Object Advantage. Business Process Reengineering with Object Technology[END_REF]. The Business Analyst should collect the specific words used in business processes in order to obtain both a description of the different tasks performed in each function and the terminology used in each use case.
The Use Case modeling task uses information acquired during business data gathering and, as a last activity of this phase, will generate these models.
Business Definition Phase
This phase defines the business in terms of concepts, vocabulary and information repositories. The objective is to document the concepts related to the business processes gathered in the Business Conceptualization Phase and discover their relationships with other terms or concepts. A dictionary is the proposed tool to define these terms. The structure of a concept can be defined as shown in table 3. Once the dictionary is completed, the analyst begins to analyze the various repositories of information in the organization. It is also important to determine the information volume, as this data can be used to select the information mining processes applicable to the project. The acquired information is used to build a map, or a model, that shows the relationship between the business use cases, business concepts and information repositories. This triple relationship can be used as the starting point of any information mining technique.
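A minimal sketch of one concept dictionary entry, following the structure of Table 3, is shown below; the field names mirror the table, while the Python representation itself is an assumption, and the example values come from the real estate case in section 4.

from dataclasses import dataclass, field

@dataclass
class Concept:
    """One entry of the business concept dictionary (cf. Table 3)."""
    name: str                                           # Concept: term to be defined
    definition: str                                     # Description of the concept meaning
    data_structure: list = field(default_factory=list)  # data contained in the concept
    relationships: list = field(default_factory=list)   # related concepts
    processes: list = field(default_factory=list)       # processes that use this concept

selling_customer = Concept(
    name="Selling Customer",
    definition="A person who offers a property for sale",
    data_structure=["Name and Last Name", "Contact Information"],
    relationships=["Property", "Property Appraisal"],
    processes=["Sell a Property"],
)
print(selling_customer)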
Identification of Information Mining Process Phase
The objective of the phase is to define which information mining process can be used to solve the identified problems in the business process. There are several processes that can be used [START_REF] Pollo-Cattaneo | Ingeniería de Procesos de Explotación de Información[END_REF], for instance:
─ Discovery of behavior rules (DBR)
─ Discovery of groups (DOG)
─ Attribute Weighting of interdependence (AWI)
─ Discovery of membership group rules (DMG)
─ Weighting rules of behavior or Membership Groups (WMG)
This phase does not require any previous input, so activities can be performed in parallel with activities related to Business Conceptualization. Table 4 shows the inputs and outputs of this phase.
Analysis of the problems in the list has to be done. This analysis can be done using the model known as "Language Extended Lexicon (LEL)" [17][18] and can be used as a foundation of the work performed in this phase: breaking down the problem into several symbols presented in the LEL model. This model shows 4 general types of symbol, subject, object, verb and state.
To define useful information mining processes, a decision table is proposed. The table analyses the LEL structures, the concepts identified in the Business Conceptualization phase, the existing information repositories and the problems to be solved. All the information is analyzed together and, according to this analysis, an information mining process is selected as the best option for the project. Table 5 shows the conditions and rules identified as foundations in this work. An important remark is that subject discovery refers to concepts or subjects that have not been identified as part of the business domain. The objective of the table is to be able to decide, through the analysis of the information gathered about the business, which information mining process can be applied to the project. Another important remark is that this decision table will incorporate new knowledge and new rules, with the aim of improving the selection criteria. With more projects used as input and more experience acquired in these projects, the rules proposed in the table can be adjusted to obtain a better selection.
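A simplified sketch of how the decision table could be encoded is shown below. It covers only the two conditions visible in Table 5, so rules with identical condition values return several candidate techniques, and the rule set is meant to be refined as more projects are analyzed; the function and names are illustrative assumptions.

# Conditions: (verb associates subjects and objects?, factor analysis needed
# to obtain a group of subjects or objects?) -> candidate techniques.
RULES = {
    (True, False): ["DBR"],          # discovery of behavior rules
    (False, True): ["DOG", "AWI"],   # discovery of groups / attribute weighting
    (True, True): ["DMG", "WMG"],    # membership group rules / their weighting
}

def select_processes(associates_subjects_and_objects, needs_factor_analysis):
    """Return the candidate information mining processes for a problem."""
    return RULES.get((associates_subjects_and_objects, needs_factor_analysis), [])

# Real-estate case: "offer a property" associates a subject (customer) and an
# object (property) and needs no factor analysis -> behavior rules discovery.
print(select_processes(True, False))   # ['DBR']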
The Information Mining Process Identification phase is the last phase of the process. The following tasks will depend upon the project management process and the tasks defined for the project.
Proof of concept
A case study is presented next to prove the proposed model.
Business Description
A real estate agency works mainly with residential properties in the suburban area. It is led by two owners, partners in equal shares. This real estate agency publishes its portfolio in different media, mostly local real estate magazines. Published properties are in the mid-range of value, up to three hundred thousand dollars. It has only one store, where all the employees work. The following business roles are covered: a real estate agent, a salesman, administrative collaborators and several consultants.
Process Execution
The first step of the process consists of two activities: identifying the project stakeholders and setting the list of people to be interviewed. In this case, with the little business information that we have, we can identify three stakeholders: the two owners and the real estate agent.
The second step is to set up the interviews of the stakeholders and gather information related to the business in the study. The following paragraph describes information obtained in the interview.
This agency focuses on leasing and selling real estate. Any person can offer a house for sale. If a house is for sale, the real estate agent will estimate the best offer for the property being sold. When a person is interested in buying a home, they complete a form with their contact details and the characteristics that the property must meet. If there are any properties that meet the requested criteria, they are presented to the customer. The real estate agency considers as clients those who have offered a home for sale or have already begun the process of buying a home on offer, and considers as interested customers those persons who are consulting on the offered properties or are looking for properties to buy. If an interested customer agrees on the purchase of a property, he or she becomes a customer of the agency and the process of buying the property begins. The customer contact information and the property details are stored in an Excel file.
In this case, we can identify the following Business Use Cases:
─ Sell a property Use Case, action of a person selling a property. ─ Buy a property Use Case, action of a person buying a property. ─ Show a property managed by the real estate agency Use Case, reflects the action of showing a real estate available for sale to interested parties.
For the Business Definition Phase, the business concept dictionary is created. From gathered information, the concepts shown in table 6 can be identified. Identified concepts are analyzed in order to find relationships between themselves. A class model can show the basic relationships between identified concepts in the case.
From the gathered business information, a problem found is that the real estate agent wants to know, when a property is offered for sale, which customers could be interested in buying it. Following this identification, a LEL analysis is done for each problem on the list. In this case, the analysis finds the symbols presented in table 7.
Idea
-The action of showing a property to a customer.
-A Customer state achieved when a property meets his or her requirements.
Impact
-The property must satisfy the customer requirements.
-The property is shown to the interested party.
With the obtained LEL analysis, the information repositories and the defined business concepts, the information mining process to apply in the project can be determined. The decision table presented in section 3.3 is used, checking the conditions against the gathered information. The result of this analysis states that the project can apply the process of discovery of behavior rules (DBR).
Conclusion
This work presents a proposal of a process model for requirements elicitation in information mining projects, and shows how to adapt existing elicitation techniques to these projects. The process breaks down into three phases: in the first phase the business is analyzed (Conceptualization phase); later, a business model is built and defined to understand its scope and the information it manages (Business Definition phase); and finally, we use the business problems found and the information repositories that store business data as input for a decision table to establish which information mining technique can be applied to a specific information mining project (Identification of an Information Mining Process).
As a future line of work, several cases are being identified to support the empirical case proposed, emphasizing the validation of the decision table presented in section 3.3.
Fig. 1. Information Mining Process phases.
Table 1. Business Conceptualization phase inputs and outputs.

Phase | Task | Input Product | Input Representation | Transformation technique | Output Product | Output Representation
Business Understanding | Project Definition | Project KickOff | - | Project Sponsors Analysis | List of users to be interviewed | List of users to be interviewed template
Business Conceptualization | Business Process data gathering | List of users to be interviewed | List of users to be interviewed template | Interviews, Workshops | Gathered Information | Information gathering template
Business Conceptualization | Business Model Building | Gathered Information | Information gathering template | Analysis of information gathered | Use Case Model | Use Case Model template
2 Table 2 shows inputs and outputs of this phase Business definition phase inputs and outputs
Project Managment
Business Data Data Modeling Evaluation Deployment
Understanding Understanding Preparation
Business Conceptualization Business Definition Identification of Information
Mining Process
Table 3. Concept Structure

Structure element | Description
Concept | Term to be defined
Definition | Description of the concept meaning
Data structure | Description of the data structures contained in the concept
Relationships | A list of relationships with other concepts
Processes | A list of processes that use this concept
Table 4. Inputs and Outputs of the Identification of Information Mining Process Phase

Phase | Task | Input Product | Input Representation | Transformation technique | Output Product | Output Representation
Identification of Information Mining Process | Identify Business Problems | Use Case Model | Use Case Documentation Template | Analysis | Problem List | Problem List Template
Identification of Information Mining Process | Select an information mining process | Problem List, Concept Dictionary | Problem List Template, Dictionary Template | LEL Analysis | An information mining process to be applied | -
Table 5. Information mining process selection decision table

Condition | R01 | R02 | R03 | R04 | R05
The action represented by a verb. Associates subjects and objects? | Yes | No | No | Yes | Yes
Is analysis of factors required to obtain a group of subjects or objects? | No | Yes | Yes | Yes | Yes
Actions
The technique to be applied is: | DBR | DOG | AWL | DMG | WMG
Table 6. Identified Business Concepts

Selling Customer: A person who offers a property for sale
Structure: Name and Last Name; Contact Information
Relationships: Property; Property Appraisal
Business Process: Sell a Property

Property Appraisal: Appraisal of a property for sale
Structure: Appraisal value (Number); Property ID; Transaction Currency
Relationships: Property; Customer
Business Processes: Sell a Property; Offer a Property
Table 7. Real Estate agency problem-related symbols.

Property [Object]
Idea: It is the object that the real estate agency sells; it has its own attributes.
Impact: It is sold to a Customer.

Customer [Subject]
Idea: A person interested in buying a property; a person who is selling a property.
Impact: Fills a form with buying criteria.

To Offer a property [Verb]
Interested [Status]
"1003573",
"1003574",
"1003575",
"992693"
] | [
"346011",
"346011",
"300134",
"487856",
"487857"
] |
01484679 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484679/file/978-3-642-36611-6_16_Chapter.pdf | Per Svejvig
Torben Storgaard
Charles Møller
email: charles@business.aau.dk
Hype or Reality: Will Enterprise Systems as a Service become an Organizing Vision for Enterprise Cloud Computing in Denmark?
Keywords: Cloud computing, software as a service (SaaS), enterprise systems, organizing vision, institutional theory
Cloud computing is at "the peak of inflated expectations" on the Gartner Hype Cycle from 2010. Service models constitute a layer in the cloud computing model, and Software as a Service (SaaS) is one of the important service models. Software as a Service provides complete business applications delivered over the web; more specifically, when delivering enterprise systems (ES) applications such as ERP, CRM and others, we can further categorize the model as an Enterprise Systems as a Service (ESaaS) model. However, it is said that ESaaS is one of the last frontiers for cloud computing due to security risks, downtime and other factors. The hype about cloud computing and ESaaS made us speculate about our local context, Denmark: what is the current situation and how might ESaaS develop? We ask the question: Will ESaaS become an organizing vision in Denmark? We used empirical data from a database with more than 1150 Danish organizations using ES, informal contacts with vendors, etc. The result of our study is very surprising, as none of the organizations in the database apply ESaaS, although recent information from vendors indicates more than 50 ESaaS implementations in Denmark. We discuss the distance between the community discourse and the current status of real ESaaS implementations.
Introduction
Cloud computing is on everybody's lips today and is promoted as a silver bullet for solving several of the past problems with IT by offering pay per use, rapid elasticity, on-demand self-service, simple scalable services and (perhaps) multi-tenancy [START_REF] Wohl | Cloud Computing[END_REF]. Cloud computing is furthermore marketed as a cost saving strategy appealing well to the post-financial-crisis situation of many organizations with cloud's "Opex over Capex story and ability to buy small and, if it works, to go big" [START_REF] Schultz | Enterprise Cloud Services: the agenda[END_REF]. Cloud computing has even been named by Gartner "as the number one priority for CIOs in 2011" [START_REF] Golden | Cloud CIO: 5 Key Pieces of Rollout Advice[END_REF]. In addition, Gartner positions cloud computing at the "peak of inflated expectations" on the Gartner Hype Cycle, predicting 2 to 5 years to mainstream adoption [START_REF] Fenn | Hype Cycle for Emerging Technologies[END_REF].
Service models constitute a layer in the cloud computing model, and Software as a Service (SaaS) is one of the important types of service models. Software as a Service provides complete business applications delivered over the web; more specifically, when delivering enterprise systems (ES) applications such as ERP, CRM and others, we can further categorize the model as an Enterprise Systems as a Service (ESaaS) model. In this paper we use ESaaS interchangeably with SaaS but also as a more specific concept. As cloud computing is still an evolving paradigm, its definitions, use cases, underlying technologies, issues, risks, and benefits can be refined [START_REF] Mell | The NIST Definition of Cloud Computing[END_REF].
Software as a Service (SaaS) embraces cloud applications for social networks, office suites, CRM, video processing etc. One example is Salesforce.com, a business productivity application (CRM), which relies completely on the SaaS model [START_REF] Voorsluys | Introduction to Cloud Computing[END_REF] consisting of Sales Cloud, Service Cloud and Chatter Collaboration Cloud [START_REF]The leader in customer relationship management (CRM) & cloud computing[END_REF] residing on "[Salesforce.com] servers, allowing customers to customize and access applications on demand" [START_REF] Voorsluys | Introduction to Cloud Computing[END_REF].
However, enterprise-wide system applications, and especially ERP, have been considered the last frontier for SaaS, where companies have put forward the following reasons preventing them from considering ESaaS (in prioritized sequence): (1) ERP is too basic and strategic to the running of our business, (2) security concerns, (3) ability to control our own upgrade process, (4) downtime risk, (5) greater on-premise functionality, (6) need for heavy customizations, and finally (7) already invested in IT resources and don't want to reduce staff [START_REF]SaaS ERP: Trends and Observations[END_REF]. A very recent example shows the potential problem with cloud and ESaaS, where Amazon had an outage of their cloud services lasting for several days and affecting a large number of customers [START_REF] Thibodeau | Amazon Outage Sparks Frustration, Doubts About Cloud[END_REF].
Despite these resisting factors, there seems to be a big jump in ESaaS interest, with 39% of respondents willing to consider ESaaS according to Aberdeen's 2010 ERP survey, which is a 61% increase in willingness from their 2009 to their 2010 survey [START_REF] Subramanian | Big Jump in SaaS ERP Interest[END_REF]; this is furthermore supported by a very recent report from Panorama Consulting Group [START_REF][END_REF] stating the adoption rate of ESaaS to be 17%.
The adoption pattern of ESaaS varies in at least two dimensions: company size and application category. Small companies are more likely to adopt SaaS, followed by mid-size organizations [START_REF]SaaS ERP: Trends and Observations[END_REF], which might be explained by large companies having a more complex and comprehensive information infrastructure [as defined in 12] compared to small and mid-size companies. CRM applications are more frequent than ERP applications [START_REF]SaaS ERP: Trends and Observations[END_REF], where a possible explanation can be the perception of ERP as too basic and strategic to run the business in an ESaaS model.
Most recently, the Walldorf-based German ERP vendor SAP has launched an on-demand ERP (SaaS) solution, SAP Business ByDesign, that can be seen as a prototype ESaaS model [START_REF] Sap | SAP Business ByDesign[END_REF]. SAP Business ByDesign is a fully integrated on-demand Enterprise Resource Planning (ERP) and business management software solution for small and medium sized enterprises (SME). It is a complete Software as a Service (SaaS) offering for 10-25 users available on most major markets. However, real cases are actually hard to locate.
Enterprise Systems as a Service -Global and Local Context
Cloud computing appears to have emerged very recently as a subject of substantial industrial and academic interest, though its meaning, scope and fit with respect to other paradigms are hotly debated. For some researchers, Clouds are a natural evolution towards the full commercialization of Grid systems, while for others they may be dismissed as a mere rebranding of existing pay-per-use or pay-as-you-go technologies [START_REF] Antonopoulos | Cloud Computing: Principles, Systems and Applications[END_REF].
Cloud computing is a very broad concept and an umbrella term for refined on demand services delivered by the cloud [START_REF] Voorsluys | Introduction to Cloud Computing[END_REF]. The multiplicity in understanding of the term is probably fostered by the "beyond amazing hype level" [START_REF] Wohl | Cloud Computing[END_REF] underlining the peak in Gartner's Hype Cycle [START_REF] Fenn | Hype Cycle for Emerging Technologies[END_REF]. Many stakeholders (vendors, analysts etc.) jump on the bandwagon inflating the term and "if everything is a cloud, then it gets very hard to see anything" [START_REF] Wohl | Cloud Computing[END_REF], so we need to be very explicit about using the term. We follow the US National Institute of Standards and Technology (NIST) definition [START_REF] Mell | The NIST Definition of Cloud Computing[END_REF]:
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models as illustrated in figure 1 below [START_REF] Williams | A quick start guide to cloud computing: moving your business into the cloud[END_REF].
The notion of the "cloud" as a technical concept is used as a metaphor for the internet and was in the past used to represent the telephone network as an abstraction of the underlying infrastructure [START_REF] Baan | Business Operations Improvement, The New Paradigm in Enterprise IT[END_REF]. There are different deployment models for cloud computing, such as private clouds operated solely for an organization, community clouds shared by several organizations, public clouds, and hybrid clouds as a composition of two or more clouds (private, community or public) [START_REF] Beck | Agile Software Development Manifesto[END_REF]. The term virtual private cloud has also entered the scene, analogous to VPN. There is a controversy about whether private clouds (virtual or not) really are cloud computing [START_REF] Wohl | Cloud Computing[END_REF].
Cloud computing can be divided into three layers namely [START_REF] Voorsluys | Introduction to Cloud Computing[END_REF]: (1) Infrastructure as a Service (IaaS), (2) Platform as a Service (PaaS) and (3) Software as a Service (SaaS). The focus in this paper is on the enterprise systems as a service (ESaaS) where "SaaS is simply software that is delivered from a server in a remote location to your desktop, and is used online" [START_REF] Wohl | Software as a Service (SaaS)[END_REF]. ESaaS usage is expected to expand in 2011 [START_REF] O'neill | Cloud Computing in 2011: 3 Trends Changing Business Adoption[END_REF].
The air is also charged with cloud computing and SaaS in Denmark. Many of the issues discussed in this paper apply to the local Danish context, but there are also additional points to mention. First, Denmark has a lot of small and medium sized organizations (SMEs), which are expected to be more willing to adopt ESaaS [START_REF]SaaS ERP: Trends and Observations[END_REF].
Second, Local Government Denmark (LGDK) (an interest group and member authority of Danish municipalities) tried to implement a driving license booking system based on Microsoft's Azure PaaS, but ran into technical and juridical problems. The technical problems were related to the payment module, logon and data extraction from the cloud-based solution [START_REF] Elkaer | Derfor gik kommunernes cloud-forsøg i vasken[END_REF]. The legal issue was more serious, as LGDK (and the municipalities) was accused by the Danish Data Protection Agency of breaking the act on processing of personal data, especially regarding the location of data [START_REF] Elkaer | Datatilsynet farer i flaesket på KL over cloud-flop[END_REF]. LGDK decided to withdraw the cloud solution and replace it with an on-premise solution with the comment that "cloud computing is definitely more difficult, and harder, than what is mentioned in the booklets" [START_REF] Elkaer | Derfor gik kommunernes cloud-forsøg i vasken[END_REF].
Finally, the CIO of "The LEGO Group", a well-known Danish global enterprise within toy manufacturing, stated in the news media that "cloud is mostly hot air". Cloud can only deliver a small fraction of the services that LEGO needs and cannot replace their "customized SAP, Microsoft, Oracle and ATG [e-commerce] platforms with end to end business process support". LEGO is using the cloud for specific point solutions such as "spam and virus filtering", "credit card clearing" and load-testing of applications, but "[t]o put our enterprise-platform on the public cloud is Utopia" [START_REF] Nielsen | LEGO: Skyen er mest varm luft[END_REF].
This section has described the global and local context for ESaaS and both contexts will probably influence Danish organizations and their willingness to adopt these solutions. In the next section we will look into a theoretical framing of the cloud computing impact on the enterprise systems in Denmark.
IS Innovations as Organizing Visions
An Organizing Vision (OV) can be considered a collective, cognitive view of how new technologies enable success in information systems innovation. This model is used to analyze ESaaS in Denmark. Swanson and Ramiller [START_REF] Swanson | The Organizing Vision in Information Systems Innovation[END_REF] take institutional theory into IS research and propose the concept of the organizing vision in IS innovation, which they define as "a focal community idea for the application of information technology in organizations" [START_REF] Swanson | The Organizing Vision in Information Systems Innovation[END_REF]. Earlier research has argued that early adoption of a technological innovation is based on rational choice while later adoption is institutionalized. However, Swanson and Ramiller suggest that institutional processes are engaged from the beginning. Interorganizational communities create and employ organizing visions of IS innovations. Examples are CASE tools, e-commerce, client server [START_REF] Ramiller | Organizing Visions for Information Technology and the Information Systems Executive Response[END_REF] and Application Service Providers (ASP) [START_REF] Currie | The organizing vision of application service provision: a process-oriented analysis[END_REF], comparable to management fads like BPR, TQM and quality circles [START_REF] Currie | The organizing vision of application service provision: a process-oriented analysis[END_REF][START_REF] Abrahamson | Management Fashion: Lifecycles, Triggers, and Collective Learning Processes[END_REF]. The organizing vision is important for early and later adoption and diffusion. The vision supports interpretation (to make sense of the innovation), legitimation (to establish the underlying rationale) and mobilization (to activate, motivate and structure the material realization of the innovation) [START_REF] Swanson | The Organizing Vision in Information Systems Innovation[END_REF][START_REF] Currie | The organizing vision of application service provision: a process-oriented analysis[END_REF].
The OV model presents different institutional forces such as community discourse, community structure and commerce and business problematic, which are used in the analysis of ESaaS in Denmark.
Research Methodology
The research process started in early 2011, when we applied different data collection methods: (1) queries into the HNCO database of ERP and CRM systems, (2) informal dialogue with ES vendors, and finally (3) a literature search on cloud computing and SaaS (research and practitioner oriented). The second author is employed at Herbert Nathan & Co (HNCO), a Danish management consulting company within the area of ERP, which maintains a database of the top 1000 companies in Denmark and their usage of enterprise systems. However, we did not find any customers in the database using ESaaS, which was surprising. We repeated our study in spring 2012 and surprisingly got the same result as one year ago. ES as a Service is apparently not used by the top 1000 companies in Denmark. However, informal talks with vendors indicate that there might be about 50 references in Denmark, but we have only been able to confirm a small number of these claimed references.
Analysis
The table below shows the analysis concerning ESaaS as an organizing vision (adapted from Figure 3):
Institutional forces Global Context Local Context
Community discourse
Cloud computing has been named by Gartner "as the number one priority for CIOs in 2011" [START_REF] Golden | Cloud CIO: 5 Key Pieces of Rollout Advice[END_REF]
Gartner positions cloud computing at the "peak of inflated expectations" on the Gartner Hype Cycle, predicting 2 to 5 years to mainstream adoption [START_REF] Fenn | Hype Cycle for Emerging Technologies[END_REF]
The Aberdeen survey and the Panorama Consulting Group report show a big jump in interest in ESaaS / SaaS [START_REF] Subramanian | Big Jump in SaaS ERP Interest[END_REF][START_REF][END_REF]
Amazon had an outage of their cloud services lasting for several days and affecting a large number of customers [START_REF] Thibodeau | Amazon Outage Sparks Frustration, Doubts About Cloud[END_REF]. This case received very much press coverage, and it would be natural to expect it to have a negative impact on the perception of cloud computing
The global discourse is part of the local Danish discourse, but local stories also shape the local context.
Denmark has a lot of small and medium sized organizations (SMEs), which are expected to be more willing to adopt ESaaS [START_REF]SaaS ERP: Trends and Observations[END_REF]. That might fertilize the ground for faster adoption of ESaaS.
Local Government Denmark (LGDK) tried to implement a driving license booking system based on Microsoft's Azure PaaS, but ran into technical and juridical problems [START_REF] Elkaer | Derfor gik kommunernes cloud-forsøg i vasken[END_REF].
The CIO of "The LEGO Group" stated in the news media that "cloud is mostly hot air"; cloud can only deliver a small fraction of the services that LEGO needs.
For small and medium sized companies the business conditions might be different, and the arguments in favor of cloud computing and ESaaS / SaaS might be more prevalent
Table 1 Analysis of ESaaS as an organizing vision
Table 1 above shows the conditions for ESaaS to become an organizing vision although it would be too early to claim it is an organizing vision especially because the link to practice appears to be uncertain. Our knowledge about the 50 implementations in Denmark is very limited and we do not know the status of the implementations (pilots, just started, normal operation, abandoned etc.).
Discussions
First of all, the research indicates that the organizing vision of ESaaS in Denmark is perhaps at too preliminary a stage to make sense of. The evidence is scarce or inaccessible, which indicates that the idea is either non-existent or in an immature state. Given the vast amount of interest, we assume that the concept is either immature or that the ideas will emerge under a different heading that we have not been able to identify.
In any case, we can use the idea of the organizing vision as a normative model for the evolution of the cloud computing concept in an enterprise systems context. This is comparable to Gartner's hype cycle: after the initial peak of inflated expectations we will gradually move into the slope of enlightenment. The organizing vision could be a normative model for making sense of the developments. But only future research will tell.
As a final comment on the organizing vision of ESaaS, the following quote from Larry Ellison, the CEO of Oracle, from September 2008 sums up the experience:
The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do. I can't think of anything that isn't cloud computing with all of these announcements. The computer industry is the only industry that is more fashion-driven than women's fashion. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?
Conclusion
This paper has sought to further our understanding of cloud computing and SaaS, with a special focus on ESaaS. We described the global and local context for cloud computing and ESaaS / SaaS. We furthermore presented institutional theory extended by the work of Swanson and Ramiller on their concept of organizing visions. We asked the question: Will ESaaS become an organizing vision in Denmark? The paper can give some initial and indicative answers to this question: the community discourse supports ESaaS as an organizing vision, but the current status of real ESaaS implementations is uncertain.
The paper has only been able to scratch the surface and to give some initial thoughts about ESaaS in the local context. However, it sets the stage for longer-term research challenges regarding ESaaS. First, an obvious extension of this paper is to study the Danish market in much more detail by interviewing the actors in the community structure, especially ESaaS customers. Second, comparative studies between countries would also be interesting: does an organizing vision such as ESaaS diffuse similarly or differently across countries, and what shapes its diffusion and adoption? Finally, the theoretical framework by Swanson and Ramiller is appealing for studying the adoption and diffusion of technology, possibly extended by this paper's approach with the global and local context. | 20,923 | [
"990471",
"1003581",
"1003582"
] | [
"19908",
"487863",
"300821"
] |
01484684 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484684/file/978-3-642-36611-6_20_Chapter.pdf | Christian Leyh
email: christian.leyh@tu-dresden.de
Lars Crenze
ERP System Implementations vs. IT Projects: Comparison of Critical Success Factors
Keywords: ERP systems, IT projects, implementation, critical success factors, CSF, literature review, comparison
Introduction
Today's enterprises are faced with the globalization of markets and fast changes in the economy. In order to be able to cope with these conditions, the use of information and communication systems as well as technology is almost mandatory. Specifically, the adoption of enterprise resource planning (ERP) systems as standardized systems that encompass the actions of whole enterprises has become an important factor in today's business. Therefore, during the last few decades, ERP system software represented one of the fastest growing segments in the software market; indeed, these systems are one of the most important recent developments within information technology [START_REF] Deep | Investigating factors affecting ERP selection in the made-to-order SME sector[END_REF], [START_REF] Koh | Change and uncertainty in SME manufacturing environments using ERP[END_REF].
The demand for ERP applications has increased for several reasons, including competitive pressure to become a low-cost producer, expectations of revenue growth, and the desire to re-engineer the business to respond to market challenges. A properly selected and implemented ERP system offers several benefits, such as considerable reductions in inventory costs, raw material costs, lead time for customers, production time, and production costs [START_REF] Somers | The impact of critical success factors across the stages of enterprise resource planning implementations[END_REF]. The strong demand for ERP applications resulted in a highly fragmented ERP market and a great diffusion of ERP systems throughout enterprises of nearly every industry and every size [START_REF] Winkelmann | Experiences while selecting, adapting and implementing ERP systems in SMEs: a case study[END_REF], [START_REF] Winkelmann | Teaching ERP systems: A multi-perspective view on the ERP system market[END_REF]. This multitude of software manufacturers, vendors, and systems implies that enterprises that use or want to use ERP systems must strive to find the "right" software as well as to be aware of the factors that influence the success of the implementation project. Remembering these so-called critical success factors (CSFs) is of high importance whenever a new system is to be adopted and implemented or a running system needs to be upgraded or replaced. Errors during the selection, implementation, or maintenance of ERP systems, incorrect implementation approaches, and ERP systems that do not fit the requirements of the enterprise can all cause financial disadvantages or disasters, perhaps even leading to insolvencies. Several examples of such negative scenarios can be found in the literature (e.g. [START_REF] Barker | ERP implementation failure: A case study[END_REF], [START_REF] Hsu | Avoiding ERP pitfalls[END_REF]).
However, it is not only errors in implementing ERP systems that can have a negative impact on enterprises; errors within other IT projects (e.g., implementations of BI, CRM or SCM systems) can be damaging as well. Due to the fast-growing and constantly changing technology landscape, it is especially necessary for enterprises to at least keep in touch with the latest technologies. For example, buzzwords like "cloud computing" or "Software as a Service (SaaS)" appear frequently in management magazines. Therefore, to cope with implementations of these and other systems, it is mandatory for enterprises to be aware of the CSFs for these IT projects as well.
In order to identify the factors that affect ERP system implementations or IT projects, several case studies, surveys, and even some literature reviews have already been conducted by various researchers. However, a comparison of the factors affecting ERP implementation success with those affecting IT project success has only rarely been done. Being aware of the differences between the CSFs for ERP and IT projects is important for enterprises, so that they can ensure they have, or acquire, the "right" employees (project leader, project team members, etc.) with adequate know-how and experience.
To gain insight into the different factors affecting ERP implementation and IT project success, we performed a CSF comparison. We conducted two literature reviews, more specifically, systematic reviews of articles in different databases and among several international conference proceedings. This also served to update the existing reviews by including current literature.
The CSFs reported in this paper were derived from 185 papers dealing with ERP systems and from 56 papers referring to factors affecting IT projects' success. The frequency of the occurrence of each CSF was counted. The aggregated results of these reviews as well as the comparison of the reviews will be presented in this paper.
Therefore, the paper is structured as follows: within the next section, our literature review methodology is outlined in order to render our reviews reproducible. The third section deals with the results of the literature reviews and the comparison of the reviews. We will point out the factors that are the most important and those that seem to have little influence on the success of ERP implementations and IT projects. Finally, the paper concludes with a summary of the results as well as a critical appraisal of the conducted literature reviews.
Research Methodology -Literature Review
Both literature reviews to identify the aforementioned CSFs were performed via several steps, similar to the approach suggested by Webster & Watson [START_REF] Webster | Analyzing the past, preparing the future: Writing a literature review[END_REF]. In general, they were systematic reviews based on several databases that provide access to various IS journals. For the ERP system CSFs, we performed an additional search in the proceedings of several IS conferences. During the review of the ERP papers we identified 185 papers with relevant information concerning CSFs within five databases and among proceedings of five international IS conferences. However the overall procedure for the ERP system review will not be part of this paper. It is described in detail in [START_REF] Leyh | Critical success factors for ERP system implementation projects: A literature review[END_REF], [START_REF] Leyh | Critical success factors for ERP system selection, implementation and postimplementation[END_REF].
The steps of the IT projects' CSF review procedure are presented below. These steps are similar to the ERP CSF review [START_REF] Leyh | Critical success factors for ERP system implementation projects: A literature review[END_REF], [START_REF] Leyh | Critical success factors for ERP system selection, implementation and postimplementation[END_REF]. An overview of the steps is given in Figure 1. However, due to our experience during the first review (duplicates, relevant papers per database and/or proceedings), we reduced the number of databases and did not perform a review among conference proceedings. Step 1: The first step involved defining the sources for the literature review. For this approach, as mentioned, due to our earlier experience in the review procedure, two databases were identified -"Academic Search Complete" and "Business Source Complete." The first contains academic literature and publications of several academically taught subjects with specific focus on humanities and social sciences. The second covers more practical topics. It contains publications in the English language from 10,000 business and economic magazines and other sources.
Step 2: Within this step, we had to define the search terms for the systematic review. Keywords selected for this search were primarily derived from the keywords supplied and used by the authors of some of the relevant articles identified in a preliminary literature review. It must be mentioned that the search term "CSF" was not used within the Academic Search Complete database since this term is also predominantly used in medical publications and journals. As a second restriction, we excluded the term "ERP" from the search procedure in the Business Source Complete database to focus on IT projects other than ERP projects. However, this restriction could not be used within the first database due to missing functionality.
Step 3: During this step, we performed the initial search according to steps 1 and 2 and afterwards eliminated duplicates. Once the duplicates were eliminated, 507 articles remained.
Step 4: The next step included the identification of irrelevant papers. During the initial search, we did not apply any restrictions besides the ones mentioned above. The search was not limited to the research field of IS; therefore, papers from other research fields were included in the results as well. These papers had to be excluded. This was accomplished by reviewing the abstracts of the papers and, if necessary, by looking into the papers' contents. In total, this approach yielded 242 papers that were potentially relevant to the field of CSFs for IT projects.
Step 5: The fifth and final step consisted of a detailed analysis of the remaining 242 papers and the identification of the CSFs. Therefore, the content of all 242 papers was reviewed in depth for the purpose of categorizing the identified success factors. Emphasis was placed not only on the wording of these factors but also on their meaning. After this step, 56 relevant papers that suggested, discussed, or mentioned CSFs remained. The results of the analysis of these 56 papers are described in the following section. A list of these papers will not be part of this article but it can be requested from the first author.
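To make the screening and coding procedure of Steps 1-5 more concrete, the following Python sketch outlines one possible way of scripting the mechanical parts of such a review (deduplication, relevance screening and CSF frequency counting). It is only an illustrative assumption of how this could be automated; the field names, the toy records and the relevance test are invented for the example and are not part of the actual review tooling.

from collections import Counter

def deduplicate(papers):
    # Step 3: drop duplicate hits returned by both databases, keyed by title
    seen, unique = set(), []
    for paper in papers:
        key = paper["title"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(paper)
    return unique

def screen(papers, is_relevant):
    # Step 4: keep only papers judged relevant to IT-project CSFs
    return [p for p in papers if is_relevant(p)]

def count_csfs(coded_papers):
    # Step 5: count in how many papers each coded CSF appears
    counts = Counter()
    for paper in coded_papers:
        counts.update(set(paper["csfs"]))  # one count per paper, even if a CSF recurs
    return counts

# Toy example (invented records):
papers = [
    {"title": "Paper A", "csfs": ["Project management", "Top management support"]},
    {"title": "Paper A", "csfs": ["Project management", "Top management support"]},  # duplicate hit
    {"title": "Paper B", "csfs": ["Project management", "User involvement"]},
]
print(count_csfs(screen(deduplicate(papers), lambda p: True)))
# Counter({'Project management': 2, 'Top management support': 1, 'User involvement': 1})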
Results of the Literature Review -Critical Success Factors Identified
The goal of the performed reviews was to gain an in-depth understanding of the different CSFs already identified by other researchers. As stated previously, 185 papers that referred to CSFs of ERP implementation projects were identified, as were 56 papers referring to CSFs of IT projects. The identified papers consist of those that present single or multiple case studies, survey results, literature reviews, or CSFs conceptually derived from the chosen literature. They were reviewed again in depth in order to determine the various concepts associated with CSFs. For each paper, the CSFs were captured along with the publication year, the type of data collection used, and the companies (i.e., the number and size) from which the CSFs were derived.
To provide a comprehensive understanding of the different CSFs and their concepts, we described the ERP implementation CSFs in [START_REF] Leyh | Critical success factors for ERP system implementation projects: A literature review[END_REF] and [START_REF] Leyh | Critical success factors for ERP system selection, implementation and postimplementation[END_REF]. There, the detailed definitions of the ERP implementation CSFs can be found. Since most of those CSFs can be matched with CSFs of IT projects (as shown later) we will not describe them within this paper.
Critical Success Factors for ERP System Implementations
Overall, 31 factors influencing ERP system implementation success were identified (as described in [START_REF] Leyh | Critical success factors for ERP system implementation projects: A literature review[END_REF], [START_REF] Leyh | Critical success factors for ERP system selection, implementation and postimplementation[END_REF]). In most previous literature reviews, the CSFs were grouped with less attention to detail; therefore, a lower number of CSFs was used (e.g., [START_REF] Somers | The impact of critical success factors across the stages of enterprise resource planning implementations[END_REF], [START_REF] Loh | Critical elements for a successful enterprise resource planning implementation in small-and medium-sized enterprises[END_REF], [START_REF] Finney | ERP implementation: A compilation and analysis of critical success factors[END_REF]). However, we took a different approach in our review. For the 31 factors, we used a larger number of categories than other researchers, as we expected the resulting distribution to be more insightful. If broader definitions for some CSFs are needed at a later time, further aggregation of the categories is still possible.
All 185 papers were published between the years 1998 and 2010. Table 1 shows the distribution of the papers based on publication year. Most of the papers were published between 2004 and 2009. Starting in 2004, about 20 papers on CSFs were published each year. Therefore, a review every two or three years would be reasonable in order to update the results of previously performed literature reviews. The identified CSFs and each factor's total number of occurrences in the reviewed papers are shown in the Appendix in Table 4. Top management support and involvement, Project management, and User training are the three most-named factors, with each being mentioned in 100 or more articles.
Regarding the data collection method, we must note that the papers we analyzed for CSFs were distributed as follows: single or multiple case studies -95, surveys -55, and literature reviews or articles in which CSFs are derived from chosen literature -35.
Critical Success Factors for IT Projects
In the second literature review, 24 factors were identified referring to the success of IT projects. Again, we used a larger number of categories and did not aggregate many of the factors, since we had good experience with this approach during our first CSF review. All 56 papers were published between the years 1982 and 2011. Table 2 shows the distribution of the papers based on publication year. Most of the papers were published between 2004 and 2011. It must be stated that some of the papers are older than 15 years; however, we included these papers in the review as well. Table 3 shows the results of our review, i.e., the identified CSFs and each factor's total number of occurrences in the reviewed papers. Project management and Top management support are the two most-often-named factors, with each being mentioned in some 30 or more articles. These factors are followed by Solution fit, Organizational structure, and Resource management, all mentioned in nearly half of the analyzed articles. As shown in Table 3, due to the smaller number of relevant papers, the differentiation between the separate CSFs is not as clear as with the ERP CSFs; most differ by only small numbers. Regarding the data collection method, in this review the papers we analyzed for IT projects' CSFs were distributed as follows: single or multiple case studies -16, surveys -27, and literature reviews or articles where CSFs are derived from chosen literature -13.
Comparison of the Critical Success Factors
As mentioned earlier, we identified 31 CSFs dealing with the success of ERP system implementations and 24 factors affecting IT projects' success. The factors are titled according to the naming used most often in the literature. Therefore, we had to deal with different terms in both reviews. However, most of the CSFs (despite their different naming) can be found on both sides. Here, Table 4 in the Appendix provides an overview of the CSF matching.
As shown, there are nine CSFs that occur only in the review of the ERP literature; these factors specifically affect only ERP implementation projects. However, most of these nine factors are not cited very often, so they seem to be less important than other CSFs mentioned in both reviews. Nevertheless, two of these nine -Business process reengineering (BPR) and ERP system configuration -are in the top 10. Since an ERP implementation has a large impact on an enterprise and its organizational structures, BPR is important for adapting the enterprise to appropriately fit the ERP system. On the other hand, it is also important to implement the right modules and functionalities of an ERP system and configure them so they fit the way the enterprise conducts business. As not all IT projects have as large an impact on an organization as ERP implementations do, their configuration (or the BPR of the organization's structure) is a less important factor for success.
Within the review of the IT project literature, two factors -Resource management and Working conditions -have no match in the ERP implementation CSF list; the first of these lands in the top five of this review and seems to be an important factor for IT projects' success.
Comparing the top five, it can be seen that the two most-often-cited factors are the same in both reviews (see Table 3 and Table 4). These top two are followed by different factors in each review. However, it can be stated that project management and the involvement and support of top management are important for every IT project and ERP implementation. Solution fit (rank #3) and Organizational fit of the ERP system (rank #8), which are matched, are both important factors, but are even more important for IT projects. This is also supported by Organizational structure: this factor is #4 for IT projects but only #27 for ERP implementation. For IT projects, a fitting structure within the enterprise is important, since BPR (as mentioned above) is not a factor for those projects. For ERP implementations, the "right" organizational structure is less important, since BPR is done during almost every ERP implementation project and, therefore, the structure is changed to fit the ERP system.
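As an illustration of the matching exercise summarized in Table 4, the short Python sketch below shows one simple way of reconciling differently named factors from the two reviews via a manually maintained synonym map. The map entries and factor lists are merely examples chosen for this sketch; they do not reproduce the complete matching of the paper.

SYNONYMS = {
    "top management support and involvement": "top management support",
    "organizational fit of the erp system": "solution fit",
    "available resources": "budget / available resources",
}

def normalize(name):
    name = name.strip().lower()
    return SYNONYMS.get(name, name)

def match_factors(erp_csfs, it_csfs):
    # Return factors present in both reviews and those unique to each one
    erp = {normalize(f) for f in erp_csfs}
    it = {normalize(f) for f in it_csfs}
    return erp & it, erp - it, it - erp

matched, erp_only, it_only = match_factors(
    ["Top management support and involvement", "Business process reengineering"],
    ["Top management support", "Resource management"],
)
print(matched)   # {'top management support'}
print(erp_only)  # {'business process reengineering'}
print(it_only)   # {'resource management'}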
Conclusion and Limitations
The aim of our study was to gain insight into the research field of CSFs for ERP implementations and for IT projects and to compare those CSFs. Research on the fields of ERP system implementations and IT projects and their CSFs is a valuable step toward enhancing an organization's chances for implementation success [START_REF] Finney | ERP implementation: A compilation and analysis of critical success factors[END_REF]. Our study reveals that several papers, i.e., case studies, surveys, and literature reviews, focus on CSFs. All in all, we identified 185 relevant papers for CSFs dealing with ERP system implementations. From these existing studies, we derived 31 different CSFs. The following are the top three CSFs that were identified: Top management support and involvement, Project management, and User training. For factors affecting IT projects' success, we identified 56 relevant papers citing 24 different CSFs. Here, Project management, Top management support, and Solution fit are the top three CSFs.
As shown in Table 1 and Table 2, most of the papers in both reviews were published after 2004. Within the ERP paper review, in particular, about 20 or more CSF papers have been published each year since 2004. Thus, one conclusion is that new literature reviews on the CSFs of ERP systems, and even on the CSFs for IT projects, should be completed every two or three years in order to update the results.
Due to quickly evolving technology, it becomes more and more important for companies to stay up to date and to at least keep in touch with the latest developments. This is also important for small and medium-sized enterprises (SMEs). In the ERP market especially, which became saturated in the large-company segment at the beginning of this century, many ERP manufacturers have shifted their focus to the SME segment due to its low ERP penetration rates. Therefore, large market potential awaits any ERP manufacturers addressing these markets. This can be transferred to other software and IT solutions as well. To cooperate with larger enterprises with highly developed IT infrastructure, SMEs need to improve their IT systems and infrastructure as well. Therefore, CSF research should also focus on SMEs due to the remarkable differences between large-scale companies and SMEs. ERP implementation projects and IT projects must be adapted to the specific needs of SMEs. Also, the importance of certain CSFs might differ depending on the size of the organization. Thus, we have concluded that an explicit focus on CSFs for SMEs is necessary in future research.
Regarding our literature reviews, a few limitations must be mentioned as well. We are aware that we cannot be certain that we have identified all relevant papers published in journals and conferences, since we made a specific selection of five databases and five international conferences, and set even more restrictions while conducting the IT projects' review. Therefore, journals not included in our databases and the proceedings of other conferences might also provide relevant articles. Another limitation is the coding of the CSFs. We tried to reduce subjectivity by formulating coding rules and by discussing the coding of the CSFs among several independent researchers. Nevertheless, other researchers may code the CSFs in other ways.
Figure 1. Progress of the IT projects literature review
Table 1. Paper distribution of ERP papers

Year    2010  2009  2008  2007  2006  2005  2004
Papers     6    29    23    23    25    18    23
Year    2003  2002  2001  2000  1999  1998
Papers    11    12     5     6     3     1
Table 2. Paper distribution of IT project papers

Year    2011  2010  2009  2008  2007  2006  2005  2004  2003
Papers     4     5     6     6     3    10     4     5     1
Year    2002  2001  1998  1995  1993  1987  1983  1982
Papers     2     1     1     2     2     2     1     1
Table 3. IT projects' CSFs in rank order based on frequency of appearance in the analyzed literature

Factor                            Instances    Factor                                        Instances
Project management                       31    Commitment and motivation of the employees          17
Top management support                   30    Implementation approach                              17
Organizational structure                 26    Communication                                        15
Solution fit                             26    Strategy fit                                         15
Resources management                     25    Change management                                    14
User involvement                         24    Team organization                                    14
Knowledge & experience                   23    Corporate environment                                10
Budget / available resources             20    Monitoring                                           10
Stakeholder management                   19    Project scope                                        10
Leadership                               18    Risk management                                       8
User training                            18    Corporate culture                                     6
Working conditions                       18    Legacy systems and IT structure                       3
Appendix | 22,564 | [
"1003471"
] | [
"96520",
"96520"
] |
01484685 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484685/file/978-3-642-36611-6_21_Chapter.pdf | Anjali Ramburn1
email: anjali.ramburngopaul@uct.ac.za
Lisa Seymour1
email: lisa.seymour@uct.ac.za
Avinaash Gopaul1
Understanding the Role of Knowledge Management during the ERP Implementation Lifecycle: Preliminary Research Findings Relevant to Emerging Economies
Keywords: Knowledge Management, ERP Implementation, ERP Implementation Phase, Emerging Economy
This work-in-progress paper presents a preliminary analysis of the challenges of knowledge management (KM) experienced in the ERP implementation phase. The paper is an integral part of ongoing research focusing on the role of KM during the ERP implementation lifecycle in both large and medium organizations in South Africa. One of the key research objectives is to investigate the core challenges of KM in large and medium organizations in South Africa. A review of the existing literature reveals a lack of comprehensive KM research across the different ERP implementation phases, particularly in emerging economies. Initial findings include lack of process, technical and project knowledge as key challenges. Other concerns include poor understanding of the need for change, lack of contextualization and lack of management support. This paper closes some of the identified research gaps in this area and should benefit large organizations in the South African economy.
Introduction
Background and Context
Organizations are continuously facing challenges, causing them to rethink and adapt their strategies, structures, goals, processes and technologies in order to remain competitive [START_REF] Bhatti | Critical Success Factors for the Implementation of Enterprise Resource Planning (ERP): Empirical Validation. 2nd International Conference on Innovation in Information Technology[END_REF], [START_REF] Holland | A critical success factors model for ERP implementation[END_REF]. Many large organizations are now dependent on ERP systems for their daily operations, and an increasing number of organizations are investing in ERP systems in South Africa. There have been many implementations in the South African public sector, such as the SAP implementations at the City of Cape Town and the Tshwane Metropolitan Council. The implementation process is, however, described as costly, complex and risky, with firms often unable to derive the benefits of these systems despite huge investments. Half of all ERP implementations fail to meet the adopting organizations' expectations [START_REF] Jasperson | Conceptualization of Post-Adoptive Behaviours Associated with Information Technology Enabled Work Systems[END_REF]. This has been attributed to the disruptive and threatening nature of ERP implementations [START_REF] Zorn | The Emotionality of Information and Communication Technology Implementation[END_REF], [START_REF] Robey | Learning to Implement Enterprise Systems: An Exploratory Study of the Dialectics of Change[END_REF]. This process can, however, be less challenging and more effective through the proper use of knowledge management (KM) throughout the ERP lifecycle phases. Managing ERP systems knowledge has been identified as a critical success factor and as a key driver of ERP success [START_REF] Leknes | The role of knowledge management in ERP implementation: a case study in Aker Kvaerner[END_REF]. An ERP implementation is a dynamic continuous improvement process, and "a key methodology supporting ERP continuous improvement would be knowledge management" [START_REF] Mcginnis | Incorporating of Knowledge Management into ERP continuous improvement: A research framework[END_REF].
Research Problem, Objective and Scope
There has been very little work conducted to date that assesses the practices and techniques employed to effectively explain the impact of KM in the ERP systems lifecycle [START_REF] Parry | The importance of knowledge management for ERP systems[END_REF], [START_REF] Sedera | Knowledge Management for ERP success[END_REF]. Current research in the context of KM focuses mostly on knowledge sharing and integration challenges during the actual ERP adoption process, offering only a static perspective of KM and ERP implementation [START_REF] Suraweera | Dynamics of Knowledge Leverage in ERP Implementation[END_REF], [START_REF] Gable | The enterprise system lifecycle: through a knowledge management lens[END_REF], [START_REF] Markus | Towards a Theory of Knowledge Reuse: Types of Knowledge Reuse Situations and Factors in Reuse Success[END_REF]. A number of organizations see the ERP GO Live as the end of the cycle, and very little emphasis has been given to the post implementation phases.
This research seeks to explore the ERP implementation life cycle from a KM perspective within a South African context and aims at providing a comprehensive understanding of the role of KM practices during the ERP implementation lifecycle. One of the key objectives is to investigate the KM challenges faced by organizations while implementing ERP systems. This paper therefore presents the findings of KM challenges experienced during the implementation phase of an ERP system. It should be noted that the results, discussed in this paper, are an interpretation of the initial findings which is still under review. This analysis will be further developed and elaborated in the subsequent research phases.
Literature Review
Enterprise Resource Planning Systems
An ERP system can be defined as "an information system that enables the integration of transaction-based data and business processes within and across functional areas in an enterprise" [START_REF] Parry | The importance of knowledge management for ERP systems[END_REF]. Some of the key enterprise functions that ERP systems support include supply chain management, inventory control, sales, manufacturing scheduling, customer relationship management, financial and cost management and human resources [START_REF] Sedera | Knowledge Management for ERP success[END_REF], [START_REF] Soffer | ERP modeling: a comprehensive approach[END_REF]. Despite the cost intensive, lengthy and risky process, the rate of implementation of ERP systems has increased over the years. Most of the large multinational organizations have already adopted ERPs as their de facto standard with the aim of increasing productivity, efficiency and organizational competitiveness [START_REF] Pan | Knowledge Integration as a key problem in an ERP Implementation[END_REF].
Role and Challenges of Knowledge Management
KM is defined as an ongoing process where knowledge is created, shared, transferred to those who need it, and made available for future use in the organization [START_REF] Chan | Knowledge management for implementing ERP in SMEs[END_REF]. Effective use of KM in ERP implementation has the potential to improve organizational efficiency during the ERP implementation process [START_REF] Leknes | The role of knowledge management in ERP implementation: a case study in Aker Kvaerner[END_REF]. Successful transfer of knowledge between the different ERP implementation stakeholders, such as the client, implementation partner and vendor, is important for the successful implementation of an ERP system.
Use of KM activities during the ERP implementation phase ensures reduced implementation costs, improved user satisfaction as well as strategic and competitive business advantages through effective product and process innovation during use of ERP [START_REF] Sedera | Knowledge Management for ERP success[END_REF]. Organizations should therefore be aware of and identify the knowledge requirement for any implementation. However, a number of challenges hindering the proper diffusion of KM activities during the ERP implementation phase have been highlighted. The following potential knowledge barriers have been identified by [START_REF] Pan | Knowledge Integration as a key problem in an ERP Implementation[END_REF].
Knowledge Is Embedded in Complex Organizational Processes. ERP systems' capabilities and functionalities span different departments and involve many internal and external users, leading to a diversity of interests and competencies in specific knowledge areas. A key challenge is to overcome any conflicting interests in order to integrate knowledge and promote standardization and transparency.
Knowledge Is Embedded in Legacy Systems. Users are reluctant to use the new system, constantly comparing its capabilities to those of the legacy systems. This is a prevalent mindset which needs to be anticipated, and [START_REF] Pan | Knowledge Integration as a key problem in an ERP Implementation[END_REF] suggest making the ERP system look outwardly similar to the legacy system through customization. This can be achieved by "integrating knowledge through mapping of the information, processes, and routines of the legacy system into the ERP systems with the use of conversion templates" [START_REF] Pan | Knowledge Integration as a key problem in an ERP Implementation[END_REF].
Knowledge Is Embedded in Externally Based Processes. ERP systems link external systems to internal ones; as a result, external knowledge from suppliers and consultants needs to be integrated into the system. This can be a tedious process, and the implementation team needs to ensure that essential knowledge is integrated from the initial implementation phases onwards through personal and working relationships.
Gaps in the Literature
The literature review indicates that most of the studies performed in the context of KM and ERP implementation offer a one dimensional static view of the actual ERP adoption phases without emphasizing the overall dynamic nature of ERP systems. Furthermore, previous studies have failed to provide a holistic view of the challenges, importance, different dimensions and best practices of KM during the whole ERP implementation cycle.
Research Method
Research Paradigm and Approach
This research employs an interpretive epistemology, which is ideal as the study focuses on theory building, where the ERP implementation challenges faced by organizations are explored from a knowledge perspective [START_REF] Walsham | Interpretive case studies in IS research: Nature and method[END_REF]. A qualitative research method is deemed suitable as opposed to a quantitative one, as qualitative research emphasizes non-positivist, non-linear and cyclical forms of research, allowing the researcher to gain new insights into the research area through each iteration, aiming to provide a better understanding of the social world [START_REF] Leedy | Practical research: planning and design[END_REF], [START_REF] Strauss | Basics of Qualitative Research: Grounded Theory Procedure and Techniques[END_REF].
Grounded theory seems particularly applicable in the current context, as there has been no exhaustive analysis of the barriers, dimensions and role of KM focusing on the whole ERP implementation lifecycle in organizations. Grounded theory as used in this research is an "inductive, theory-discovering methodology that allows the researcher to develop a theoretical account of the general features of a topic, while simultaneously grounding the account in empirical observations of data" [START_REF] Glaser | The Discovery of Grounded Theory: Strategies for Qualitative Research[END_REF], [START_REF] Orlikowski | CASE tools as organisational change: Investigating incremental and radical changes in systems development[END_REF].
Semi-structured interviews targeting different ERP implementation stakeholders are being conducted in an organization currently in its ERP implementation phase. The aim is to interview as many participants as possible until theoretical saturation is achieved. Approval for this research has been obtained from the University of Cape Town's ethics committee. Participants have been asked to sign a voluntary participant consent form, and their anonymity has been assured.
All the interviews have been recorded and transcribed. Iterative analysis of the collected data has enabled the researcher to understand and investigate the main research problems posed. The transcripts of the interviews have been read a number of times to identify, conceptualise, and categorise emerging themes.
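Purely as an illustration of this iterative coding step, the following Python sketch shows how coded transcript segments might be grouped under emerging theme categories. The codes, categories and quote used here are hypothetical placeholders and do not come from the actual study data.

from collections import defaultdict

CODE_TO_CATEGORY = {
    "trainer lacks SAP skills": "Trainer's lack of technical knowledge",
    "unclear project milestones": "Lack of project knowledge",
    "examples not localized": "Poor contextualization of knowledge",
}

def categorize(coded_segments):
    # Group coded transcript segments under their (emerging) theme categories
    themes = defaultdict(list)
    for segment, code in coded_segments:
        themes[CODE_TO_CATEGORY.get(code, "Uncategorized")].append(segment)
    return dict(themes)

segments = [("'They did not know the system themselves'", "trainer lacks SAP skills")]
print(categorize(segments))
# {"Trainer's lack of technical knowledge": ["'They did not know the system themselves'"]}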
Case Description
This section provides a brief overview of the case organization. Founded in 1923, the company has a number of branches throughout South Africa, employing over 39 000 people. The organization is currently rolling out the SAP Project Portfolio Management module throughout its different branches across the country. The project is currently in the implementation stage, and organization-wide initial training involving the employees has already been conducted. The interviews were carried out in one of the organization's divisions in Cape Town, and purposive sampling was used to select the interviewees. All the chosen participants had been through the training and were impacted by the SAP implementation process.
Preliminary Findings
Preliminary research findings indicate several challenges with regards to KM in the ERP implementation phase. Most of the barriers identified were either directly or indirectly related to the inadequacies and inefficiencies of knowledge transfer. The section below provides a comprehensive account of the major challenges that have been identified.
Knowledge Management Challenges
Trainer's Lack of Process Knowledge. Interviewees mentioned that the training provided was inadequate in various ways. The trainers were not knowledgeable enough; they lacked key SAP skills and did not understand the process from the users' perspective. Since none of the trainers had any experience as end users of the system, there were some inconsistencies in their understanding of the new system from a user perspective. Ownership of roles and tasks was not clearly defined. The trainers also lacked the expertise to engage with the different problems that surfaced during the training, and there was no clarification of the information and process flow between the different departments and individuals as per their role definitions.
"However what makes it difficult is that the trainers do not work with the project. They do not know the process entirely and are not aware of what is happening in the background, they only collect data."
Trainer's Lack of Technical Knowledge. The technical knowledge and qualification of the trainers were put into question. The trainers were the admin support technicians who are experts in the current system the interviewees use but did not have enough expertise to deal with the upcoming ERP system. "I think they did not know the system themselves, I had been in training with them for the current program we use and they were totally 100% clued up. You could have asked them anything, they had the answers."
Interviewees' Lack of Technical Knowledge. Interviewees also struggled with the use and understanding of the ERP system. They found the user interface and navigation far more complex than those of their existing system. As a result, they were overcome with frustration and did not see the importance of the training. "I have not used the system before, so I do not understand it. We struggled with the complexity of the system. The number of steps we had to do made it worse. No one understood what and why we were doing most of the steps."
Lack of Knowledge on Need for Change. The interviewees did not understand the benefits of using SAP from a strategic perspective. They questioned the implementation of the new system, as they felt their previous system could do everything they needed it to. They had never felt the need for a new system.
Lack of Project knowledge. Interviewees were unaware of the clear project objectives, milestones and deployment activities. The interviewees did not have any information regarding the status of the project activities. They were only aware of the fact that they had to be trained in SAP as this would be their new system in the future but did not exactly know by when they were required to start using the system. Some of them believed they were not near the implementation stage, and the training was only a pilot activity to test whether they were ready for implementation. However, others hoped that the implementation had been cancelled due to the number of problems experienced in the training sessions.
Poor Project Configuration Knowledge. Another key concern voiced related to the complexity of the ERP system as opposed to the existing system the participants are using. They have been working with the current system for a number of years and believed it operated in the most logical way, the same way their minds work. The ERP system, on the other hand, was perceived as complex; the number of steps required to perform a task seemed to have increased drastically. This may be attributed to the lack of system configuration knowledge, which could have been essential in substantially decreasing the number of steps required to perform a particular task.
Lack of Knowledge on Management Initiatives. The interviewees felt they did not have to use or understand the system until they got the 'go ahead' from top and middle management. Interviews indicated that top and middle management had not supported the initiative as yet. Interviewees had received no information or communication on planning, adoption and deployment of the new system from management; hence they showed no commitment towards using the new system.
Lack of Knowledge on Change Management Initiatives. Managing change is arguably one of the primary concerns of ERP implementation. The analysis shows the lack of importance attributed to this area. The lack of proper communication channels and planning, coupled with the absence of change management initiatives, resulted in employees' confusion, instability and resistance, as shown by the quote below. "We should not have used SAP at all, they should scrap it…If someone new came and asked me whether they should go for the training, I would tell them, try your best to get out of it."
Knowledge Dump (Information Overload). Information overload was another identified challenge. The training included people from different departments who are associated with different aspects of the process. As a result, the trainers covered various tasks related to various processes in one training session, as opposed to focusing on the specific processes that the interviewees understood and were involved with. The participants got confused with regard to their role definition and the ownership of the different activities, and the trainers were unable to clear this confusion. This caused a certain level of panic amongst the group; subsequently they lost interest in the training and regarded it as an unproductive process.
Poor Contextualization of Knowledge. Another concern raised was with reference to the lack of customization of the training materials and exercises used, resulting in a poor focus on the local context. Interviewees could not relate to the training examples given, as they were based on the process flow from a different suburb. Interviewees said each suburb has its own way of operating and has unique terms and terminologies. The fact that the examples used came from Johannesburg and not from Cape Town made it harder for the interviewees to understand the overall process. "The examples they used were from Joburg, so they work in a different way to us. The examples should have been customised to how we work in order for us to better understand the process."
Conclusions and Implications
This paper reports on the preliminary findings based on the implementation activities of an ERP system in a large engineering company in Cape Town. The findings of this study show a number of intra-organizational barriers to efficient knowledge transfer. Inadequate training, lack of technical and project knowledge, lack of management support and change management initiatives have been cited as the major KM challenges. Other fundamental KM challenges include process knowledge, customization and contextualization of knowledge. Seemingly, in a large organization with multiple branches throughout South Africa, understanding the process, contextualization and customization of the training content from the users' perspective is a key aspect to consider during an ERP implementation process.
This research is still ongoing and the subsequent research phases focus on providing a holistic view of the role, different dimensions and best practices of KM during the entire ERP implementation cycle. Upon completion, this research will be of immediate benefit to both academics and practitioners.
From an academic perspective, this study will explore the whole ERP implementation lifecycle from a KM perspective, hence contributing to the existing body of knowledge in this area by attempting to offer a better explanation of the existing theories and frameworks. Since there has not been any study that looked at the entire lifecycle of ERP implementation through a KM perspective in South Africa, this research is unique in nature and is expected to break some new ground in South Africa, aiming to provide an advancement of knowledge in this particular field. Through a practical lens, this research should be of immediate benefit to large and medium organizations. The results of this study can also be useful and applicable to international companies with global user bases.
| 21,464 | [
"1003587",
"1003468"
] | [
"303907",
"303907",
"303907"
] |
01484689 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484689/file/978-3-642-36611-6_25_Chapter.pdf | Nuno Ferreira
email: nuno.ferreira@i2s.pt
Nuno Santos
email: nuno.santos@ccg.pt
Pedro Soares
email: psoares@ccg.pt
Ricardo J Machado
Dragan Gašević
email: dgasevic@acm.org
Transition from Process-to Product-level Perspective for Business Software
Keywords: Enterprise logical architecture, Information System Requirement Analysis, Design, Model Derivation
When there are insufficient inputs for a product-level approach to requirements elicitation, a process-level perspective is an alternative way of achieving the intended base requirements. We define a V+V process approach that supports the creation of the intended requirements, beginning with a process-level perspective and evolving to a product-level perspective through successive model derivation, with the purpose of creating context for the implementation teams. The requirements are expressed through models, namely logical architectural models and stereotyped sequence diagrams. These models, along with the entire approach, are validated using the architecture validation method ARID.
Introduction
A typical business software development project is coordinated so that the resulting product properly aligns with the business model intended by the leading stakeholders. The business model normally allows for eliciting the requirements by providing the product's required needs. In situations where organizations focused on software development are not capable of properly eliciting requirements for the software product, due to insufficient stakeholder inputs or some uncertainty in defining a proper business model, a process-level requirements elicitation is an alternative approach. The process-level requirements assure that organization's business needs are fulfilled. However, it is absolutely necessary to assure that product-level (IT-related) requirements are perfectly aligned with process-level requirements, and hence, are aligned with the organization's business requirements.
One of the possible representations of an information system is its logical architecture [START_REF] Castro | Towards requirements-driven information systems engineering: the Tropos project[END_REF], resulting from a process of transforming business-level and technological-level decisions and requirements into a representation (model). It is necessary to promote an alignment between the logical architecture and other supporting models, like organizational configurations, products, processes, or behaviors. A logical architecture can be considered a view of a system composed of a set of problem-specific abstractions supporting functional requirements [START_REF] Azevedo | Refinement of Software Product Line Architectures through Recursive Modeling Techniques In[END_REF].
In order to properly support technological requirements that comply with the organization's business requirements, we present in this paper an approach composed of two V-Models [START_REF] Haskins | Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities[END_REF], the V+V process. The requirements are expressed through logical architectural models and stereotyped sequence diagrams [START_REF] Machado | Requirements Validation: Execution of UML Models with CPN Tools[END_REF] from both a process-level and a product-level perspective. The first execution of the V-Model acts in the analysis phase and regards a process-level perspective. The second execution of the V-Model regards a product-level perspective and enables the transition from analysis to design through the execution of the product-level 4SRS method [START_REF] Machado | Transformation of UML Models for Service-Oriented Software Architectures[END_REF]. Our approach assures proper compliance between the process-level and the product-level requirements through a set of transition steps between the two perspectives.
This paper is structured as follows: section 2 presents the V+V process; section 3 describes the method assessment through ARID; in section 4 we present an overview of the process-to product-level transition; in section 5 we compare our approach with other related works; and in section 6 we present the conclusions.
A V+V Process Approach for Information System's Design
At a macro-process level, the development of information systems can be regarded as a cascaded lifecycle, if we consider typical and simplified phases: analysis, design and implementation. We encompass our first V-Model (at process-level) within the analysis phase and the second V-Model (at product-level) in the transition between the analysis and the design. One of the outputs of any of our V-Models is the logical architectural model for the intended system. This diagram is considered a design artifact but the design itself is not restricted to that artifact. We have to execute a V+V process to gather enough information in the form of models (logical architectural model, B-type sequence diagrams and others) to deliver, to the implementation teams, the correct specifications for product realization.
Regarding the first V-Model, we note that it is executed from a process-level perspective. How the term process is applied in this approach can lead to inappropriate interpretations. Since the term process has different meanings depending on the context, in our process-level approach we acknowledge that: (1) real-world activities of a business software production process are the context for the problem under analysis; (2) in relation to a software model context [START_REF] Conradi | Process Modelling Languages. Software Process: Principles, Methodology, and Technology[END_REF], a software process is composed of a set of activities related to software development, maintenance, project management and quality assurance. To define the scope of our work, and in line with the above acknowledgments, we characterize our process-level perspective as: (1) being related to real-world activities (including business); (2) when related to software, encompassing the activities of the typical software development lifecycle. Our process-level approach is characterized by using refinement (as one kind of functional decomposition) and integration of system models. Activities and their interfaces in a process can be structured or arranged in a process architecture [START_REF] Browning | Modeling impacts of process architecture on cost and schedule risk in product development[END_REF].
Our V-Model approach (inspired by the "Vee" process model [START_REF] Haskins | Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities[END_REF]) suggests a roadmap for product design based on business needs elicited in an early analysis phase. The approach requires the identification of business needs; then, by successive artifact derivation, it is possible to transition from a business-level perspective to an IT-level perspective while at the same time aligning the requirements with the derived IT artifacts. Additionally, inside the analysis phase, this approach assures the transition from the business needs to the requirements elicitation.
In this section, we present our approach, based on successive and specific artifacts generation. In the first V-Model (at the process-level), we use Organizational Configurations (OC) [START_REF] Evan | Toward a theory of inter-organizational relations[END_REF], A-type and B-type sequence diagrams [START_REF] Machado | Requirements Validation: Execution of UML Models with CPN Tools[END_REF], (business) Use Case models (UCs) and a process-level logical architectural model. The generated artifacts and the alignment between the business needs and the context for product design can be inscribed into this first V-Model.
The presented approach encompasses two V-Models, hereafter referred to as the V+V process and depicted in Fig. 1. The first V deals with the process-level perspective, and its vertex is supported by the process-level 4SRS method detailed in [START_REF] Ferreira | Derivation of Process-Oriented Logical Architectures: An Elicitation Approach for Cloud Design[END_REF]. The process-level 4SRS method execution results in the creation of a validated architectural model, which creates context for the product-level requirements elicitation and uncovers hidden requirements for the intended product design. The purpose of the first execution of the V-Model is to elicit requirements at a high business level to create context for product design; it can be considered a business elicitation method (like the Business Modeling discipline of RUP).
Fig. 1. The V+V process approach
The second execution of the V-Model is done from a product-level perspective, and its vertex is supported by the product-level 4SRS method detailed in [START_REF] Machado | Transformation of UML Models for Service-Oriented Software Architectures[END_REF]. The product-level V-Model gathers information from the context for product design (CPD) in order to create a new model referred to as Mashed UCs. Using the information present in the Mashed UCs model, we create A-type sequence diagrams, detailed in [START_REF] Machado | Requirements Validation: Execution of UML Models with CPN Tools[END_REF]. These diagrams are input for the creation of (software) Use Case Models that have associated textual descriptions of the requirements for the intended system. Using the 4SRS method in the vertex, we derive those requirements into a Logical Architectural model. Using a process identical to the one used in the process-level V-Model, we create B-type sequence diagrams and assess the Logical Architectural Model.
The V-Model representation provides a balanced process representation and, simultaneously, ensures that each step is verified before moving into the next. As seen in Fig. 1, the artifacts are generated based on the rationale and information contained in previously defined artifacts, i.e., A-type diagrams are based on OCs, the (business) use case model is based on A-type sequence diagrams, the logical architecture is based on the (business) use case model, and B-type sequence diagrams comply with the logical architecture. The V-Model also assures validation of artifacts based on previously modeled artifacts (e.g., besides the logical architecture, B-type sequence diagrams are validated by A-type sequence diagrams). The aim of this manuscript is not to detail the inner execution of the V-Model, nor the rules that enable the transition from the process- to the product-level, but rather to present the overall V+V process within the macro-process of information systems development.
In both V-Models, the assessment is made using an adaptation of ARID (presented in the next section), checking whether the architectural elements present in the Logical Architectural Model are covered by the scenarios depicted in the B-type sequence diagrams.
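A minimal sketch of this coverage check is given below in Python, assuming the architectural elements and the participants of each B-type scenario are available as simple identifiers. The data structures and element names are invented for illustration and are not taken from the 4SRS or ARID tooling.

def uncovered_elements(architecture_elements, b_type_scenarios):
    # Return architectural elements not referenced by any B-type scenario
    covered = set()
    for scenario in b_type_scenarios:
        covered.update(scenario["participants"])
    return set(architecture_elements) - covered

architecture = {"AE1.1.d", "AE2.3.i", "AE4.2.c"}  # hypothetical element identifiers
scenarios = [{"name": "seed scenario 1", "participants": {"AE1.1.d", "AE2.3.i"}}]
print(uncovered_elements(architecture, scenarios))
# {'AE4.2.c'} -> the architecture is not fully covered, so a new iteration may be needed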
The first V produces a process-level logical architecture (which can be considered the information system logical architecture); the second V produces a product-level logical architecture (which can be considered the business software logical architecture). Also, for each of the V-Models, on the descending (left) side of the V, models created in succession represent the refinement of requirements and the creation of system specifications. On the ascending (right) side of the V, models represent the integration of the discovered logical parts and their involvement in a cross-side validation effort, contributing to the inner validation required for macro-process evolution.
V-Model Process Assessment with ARID
In both V-Model executions, the assessments that result from comparing A- and B-type sequence diagrams produce Issues documents. These documents are one of the outputs of the Active Reviews for Intermediate Designs (ARID) method [START_REF] Clements | Active Reviews for Intermediate Designs[END_REF][START_REF] Clements | Evaluating software architectures: methods and case studies[END_REF] used to assess each V-Model execution. The ARID method is a combination of the Architecture Tradeoff Analysis Method (ATAM) [START_REF] Clements | Evaluating software architectures: methods and case studies[END_REF] with Active Design Review (ADR) [START_REF] Clements | Evaluating software architectures: methods and case studies[END_REF]. In turn, ATAM can be seen as an improved version of the Software Architecture Analysis Method (SAAM) [START_REF] Clements | Evaluating software architectures: methods and case studies[END_REF]. These methods conduct reviews of architectural decisions, namely of the quality attribute requirements and the degree to which specific quality goals are aligned with and satisfied by the architecture. The ADR method targets architectures under development, performing evaluations on parts of the global architecture. Those features made ARID our method of choice for evaluating the in-progress logical architecture and for helping to determine the need for further refinements, improvements, or revisions before assuming that the architecture is ready to be delivered to the teams responsible for implementation. This delivery is called context for product implementation (CPI).
Fig. 2. Assessment of the V+V execution using ARID
In Fig. 2, we present the simplified interactions between the ARID-related models in the V+V process. In this figure, we can see the macro-process associated with both V-Models, the transition from one to the other (later detailed) and the ARID models that support the assessment of the V+V execution.
The Project Charter contains information that is necessary for the ongoing project and relates to project management terminology and content [START_REF]Project Management Institute: A Guide to the Project Management Body of Knowledge (PMBOK® Guide)[END_REF]. This document encompasses information regarding the project requirements in terms of human and material resources, skills, training, context for the project and stakeholder identification, among other aspects. It explicitly contains the principles and policies of the intended practice with people from different perspectives in the project (analysis, design, implementation, etc.). It also provides a common agreement to refer to, if necessary, during project execution.
The Materials document contains the information necessary for creating a presentation of the project. It comprises collected seed scenarios based on OCs (or Mashed UCs), A-type sequence diagrams and (business or software) Use Case Models. Parts of the Logical Architectural model are also incorporated in the presentation delivered to the stakeholders (including the software engineers responsible for implementation). The purpose of this presentation is to inform the team about the logical architecture, to propose the seed scenarios for discussion, and to support the creation of the B-type sequence diagrams based on the presented information.
The Issues document contains information regarding the evaluation of the presented logical architecture. If the logical architecture is positively assessed, we can assume that consensus has been reached to proceed with the macro-process. If not, the Issues document makes it possible to promote a new iteration of the corresponding V-Model execution in order to adjust the previously resulting logical architecture and make the corrections necessary to comply with the seed scenarios. The main causes for this adjustment are: (1) bad decisions made in the corresponding 4SRS method execution; (2) B-type sequence diagrams not complying with all the A-type sequence diagrams; (3) the created B-type sequence diagrams not covering the entire logical architecture; (4) the need to explicitly place a design decision in the logical architectural model, usually done by using a common architectural pattern and injecting the necessary information into the use case textual descriptions that are input for the 4SRS.
The adjustment of the logical architectural model (by iterating the same V-Model) suggests the construction of a new use case model or, in the case of a new scenario, the construction of new A-type sequence diagrams. The new use case model captures user requirements of the revised system under design. At the same time, through the application of the 4SRS method, it is possible to derive the corresponding logical architectural model.
Our application of common architectural patterns includes business, analysis, architectural and design patterns as defined in [START_REF] Azevedo | Systematic Use of Software Development Patterns through a Multilevel and Multistage Classification[END_REF]. By applying them as early as possible in the development (in early analysis and design), it is possible to incorporate business requirements into the logical architectural model and at the same time assure that the resulting model is aligned with the organization's needs and also complies with the established non-functional requirements. The design patterns are used when there is a need to detail or refine parts of the logical architecture and, by themselves, to promote a new iteration of the V-Model.
In the second V, after being positively assessed by the ARID method, the business software logical architectural model is considered a final design artifact that must be divided into products (applications) for later implementation by the software teams.
Process- to Product-level Transition
As stated before, a process-level V-Model can be executed for business requirements elicitation purposes, followed by a product-level V-Model for defining the software functional requirements. The V+V process is useful for both kinds of stakeholders, organizations and technicians, but it is necessary to assure that both perspectives properly reflect the same system. In order to assure an aligned transition between the process- and product-level perspectives in the V+V process, we propose the execution of a set of transition steps required to create the Mashed UC model referred to in Fig. 1 and in Fig. 2. The details of the transition rules are the subject of future publications.
As in [START_REF] Azevedo | Refinement of Software Product Line Architectures through Recursive Modeling Techniques In[END_REF][START_REF] Machado | Refinement of Software Architectures by Recursive Model Transformations[END_REF], we propose recursive executions of the 4SRS with the purpose of deriving a new logical architecture. The transition steps are structured as follows (a simplified sketch of these steps is given below): (1) Architecture Partitioning, where the Process-level Architectural Elements (AEpc's) under analysis are classified by their computation execution context with the purpose of defining software boundaries to be transformed into Product-level (software) Use Cases (UCpt's); (2) Use Case Transformation, where AEpc's are transformed into software use cases and actors that represent the system under analysis through a set of transition patterns that must be applied as rules; (3) Original Actors Inclusion, where the original actors that were related to the use cases from which the architectural elements of the process-level perspective were derived (in the first V execution) must be included in the representation; (4) a redundancy analysis, where the model is analyzed for redundancies; and (5) Gap Filling, where any requirement that is intended to be part of the design but is not yet represented is added, in the form of use cases.
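The following Python sketch gives a simplified, purely illustrative view of steps (1) to (3); the data structures, field names and step functions are assumptions made for illustration and do not reproduce the actual transition rules, which are the subject of future publications.

```python
# Simplified, illustrative sketch of the process- to product-level transition steps.
# Data structures and names are assumptions; redundancy analysis (step 4) and gap
# filling (step 5) are left as manual activities here.

def partition_architecture(aepcs):
    """Step 1: group process-level architectural elements (AEpc) by execution context."""
    partitions = {}
    for aepc in aepcs:
        partitions.setdefault(aepc["execution_context"], []).append(aepc)
    return partitions

def transform_to_use_cases(partition):
    """Step 2: transform each AEpc of a partition into a product-level use case (UCpt)."""
    return [{"ucpt": f"UC-{ae['id']}", "source_aepc": ae["id"]} for ae in partition]

def include_original_actors(use_cases, actor_map):
    """Step 3: re-attach the original actors related to the originating elements."""
    for use_case in use_cases:
        use_case["actors"] = actor_map.get(use_case["source_aepc"], [])
    return use_cases

def build_mashed_uc_models(aepcs, actor_map):
    """Chain steps 1-3 to obtain one Mashed UC model per partition."""
    models = {}
    for context, partition in partition_architecture(aepcs).items():
        models[context] = include_original_actors(transform_to_use_cases(partition), actor_map)
    return models
```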
By defining these transition steps, we assure that product-level (software) use cases (UCpt) are aligned with the architectural elements from the process-level logical architectural model (AEpc); i.e., software use case diagrams reflect the needs of the information system logical architecture. The application of these transition rules to all the partitions of an information system logical architecture gives rise to a set of Mashed UC models.
Comparison with Related Work
An important view considered in our approach is the architecture itself. What is an architecture? The literature offers a plethora of definitions, but most agree that an architecture concerns both structure and behavior, is described at a level of abstraction that regards only significant decisions, may conform to an architectural style, is influenced by its stakeholders and by the environment in which it is intended to be instantiated, and encompasses decisions based on some rationale or method.
It is acknowledged in software engineering that a complete system architecture cannot be represented using a single perspective [START_REF] Sungwon | Designing logical architectures of software systems[END_REF][START_REF] Kruchten | The 4+1 View Model of Architecture[END_REF]. Using multiple viewpoints, like logical diagrams, sequence diagrams or other artifacts, contributes to a better representation of the system and, as a consequence, to a better understanding of the system. Our stereotyped usage of sequence diagrams adds more representational value to the specific model than, for instance, the one presented in Kruchten's 4+1 perspective [START_REF] Kruchten | The 4+1 View Model of Architecture[END_REF]. This kind of representation also enables testing sequences of system actions that are meaningful at the software architecture level [START_REF] Bertolino | An explorative journey from architectural tests definition down to code tests execution[END_REF]. Additionally, the use of such stereotyped sequence diagrams in the first stage of the analysis phase (user requirements modeling and validation) provides a friendlier perspective to most stakeholders, making it easier for them to establish a direct correspondence between what they initially stated as functional requirements and what the model already describes.
Conclusions and Outlook
We presented an approach to create context for business software implementation teams in settings where requirements cannot be properly elicited. Our approach is based on successive model construction and recursive derivation of logical architectures, and makes use of model derivation for creating use cases based on high-level representations of desired system interactions. The approach assures that validation tasks are performed continuously along the modeling process. It allows for validating: (1) the final software solution against the initially expressed business requirements; (2) the B-type sequence diagrams against the A-type sequence diagrams; (3) the logical architectures, by traversing them with B-type sequence diagrams. These validation tasks, specific to the V-Model, are the subject of a future publication.
It is a common fact that domain-specific needs, namely business needs, are a fast-changing concern that must be tackled. Process-level architectures must be structured so that potentially changing domain-specific needs remain local in the architecture representation. Our proposed V+V process encompasses the derivation of a logical architecture representation that is aligned with domain-specific needs, and any change made to those domain-specific needs is reflected in the logical architectural model through successive derivation of the supporting models (OCs, A- and B-type sequence diagrams, and use cases). Additionally, traceability between those models is built in by construction and intrinsically integrated in our V+V process.
Acknowledgments
This work has been supported by project ISOFIN (QREN 2010/013837). | 23,545 | [ "1002459", "1002460", "1002461", "991637", "1002440" ] | [ "486560", "486561", "486561", "300854", "486532" ] |
01484690 | en | [ "shs", "info" ] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484690/file/978-3-642-36611-6_3_Chapter.pdf | Julian Faasen
Lisa F Seymour
email: lisa.seymour@uct.ac.za
Joachim Schuler
email: joachim.schuler@hs-pforzheim.de
SaaS ERP adoption intent: Explaining the South African SME perspective
Keywords: Software as a Service, Cloud computing, Enterprise Resource Planning, SaaS ERP, South African SME, Information Systems adoption
This interpretive research study explores the intention to adopt SaaS ERP software within South African SMEs. Semi-structured interviews with participants from different industry sectors were performed and seven multidimensional factors emerged that explain the current reluctance to adopt. While improved IT reliability and perceived cost reduction were seen as benefits, they were dominated by other concerns. Reluctance to adopt was attributed to systems performance and availability risk; sunk cost and satisfaction with existing systems; data security risk; loss of control and lack of vendor trust; and finally functionality fit and customization limitations. The findings provide new insights into the slow SaaS ERP adoption in South Africa and provide empirically supported data to guide future research efforts. The findings can also be used by SaaS vendors to address perceived shortcomings of SaaS ERP software.
Introduction
Small and medium enterprises (SMEs) are major players in every economy and make a significant contribution to employment and Gross Domestic Product (GDP) [START_REF] Seethamraju | Adoption of ERPs in a medium-sized enterprise-A case study[END_REF]. In the past, many organizations were focused on local markets, but have been forced to respond to competition on a global level as well [START_REF] Shehab | Enterprise resource planning: An integrative review[END_REF]. The role of the SME in developing countries such as South Africa is considered critical in terms of poverty alleviation, employment creation and international competitiveness [START_REF] Berry | The Economics of Small, Medium and Micro Enterprises in South Africa, Trade and Industrial Policy Strategies[END_REF]. However, resource limitations have made it difficult for many smaller organizations to enter new markets and compete against their larger counterparts. Thus SMEs in all countries are forced to seek innovative ways to become more efficient and competitive within a marketplace rife with uncertainty. Adoption of Information Systems (IS) is viewed as a way for SMEs to become more competitive and to drive business benefits such as cost reduction, improved profitability, enhanced customer service, new market growth opportunities and more efficient operating relationships with trading partners [START_REF] Premkumar | A meta-analysis of research on information technology implementation in small business[END_REF]. Many organizations have adopted Enterprise Resource Planning (ERP) software in an attempt to achieve such benefits.
ERP software facilitates the integration of cross-functional business processes in order to improve operational efficiencies and business performance. If used correctly, ERP software can drive bottom-line results and enhance competitive advantage. Whilst most large organizations world-wide have managed to acquire ERP software [START_REF] Klaus | What is ERP? Information Systems Frontiers[END_REF], it has been reported that many SMEs have been unwilling to adopt ERP software due to the high cost and risk involved [START_REF] Buonanno | Factors affecting ERP system adoption: A comparative analysis between SMEs and large companies[END_REF]. However, an alternative to on-premise enterprise software has been made possible with the advent of the Software as a Service (SaaS) model.
SaaS as a subset of cloud computing involves the delivery of web-based software applications via the internet. SaaS is essentially an outsourcing arrangement, where enterprise software is hosted on a SaaS vendor's infrastructure and rented by customers at a fraction of the cost compared with traditional on-premise solutions. Customers access the software using an internet browser and benefit through lower upfront capital requirements [START_REF] Feuerlicht | SOA: Trends and directions[END_REF], faster deployment time [START_REF] Benlian | A transaction cost theoretical analysis of software-as-a-service (SAAS)-based sourcing in SMBs and enterprises[END_REF]; [START_REF] Deyo | Software as a service (SaaS): A look at the migration of applications to the web[END_REF], improved elasticity [START_REF] Xin | Software-as-a Service Model: Elaborating Client-Side Adoption Factors[END_REF], flexible monthly installments [START_REF] Armbrust | A view of cloud computing[END_REF] and more predictable IT budgeting [START_REF] Benlian | A transaction cost theoretical analysis of software-as-a-service (SAAS)-based sourcing in SMBs and enterprises[END_REF]; [START_REF] Hai | SaaS and integration best practices[END_REF]. Countering these benefits are concerns around software reliability, data security [START_REF] Hai | SaaS and integration best practices[END_REF]; [START_REF] Heart | Who Is out there? Exploring Trust in the Remote-Hosting Vendor Community[END_REF]; [START_REF] Kern | Application service provision: Risk assessment and mitigation[END_REF] and long-term cost savings [START_REF] Hestermann | Magic quadrant for midmarket and tier 2oriented ERP for product-centric companies[END_REF]. Customization limitations [START_REF] Chong | Architecture strategies for catching the long tail[END_REF] and integration challenges [START_REF] Xin | Software-as-a Service Model: Elaborating Client-Side Adoption Factors[END_REF] are considered major concerns relating to SaaS offerings. Furthermore, concerns relating to data security and systems availability have raised questions as to the feasibility of SaaS for hosting mission-critical software.
Despite the perceived drawbacks of SaaS, Gartner suggests that SaaS ERP solutions are attracting growing interest in the marketplace [START_REF] Hestermann | Magic quadrant for ERP for Product-Centric Midmarket Companies[END_REF]. Traditional ERP vendors such as SAP have begun expanding their product ranges to include SaaS-based offerings. The success of Salesforce's SaaS CRM solution provides further evidence that the SaaS model is capable of delivering key business functionality. However, the adoption of SaaS ERP software has been reported as slow [START_REF] Hestermann | Magic quadrant for ERP for Product-Centric Midmarket Companies[END_REF] and appears to be confined to developed countries. Despite the plethora of online content promoting the benefits of SaaS ERP software, there is a lack of empirical research available that explains the slow rate of adoption. Thus, the purpose of this study is to gain an understanding of the reluctance to adopt SaaS ERP software within South African SMEs. This research is considered important as SaaS is a rapidly growing phenomenon with widespread interest in the marketplace. Furthermore, this study aims to narrow the research gap by contributing towards much-needed empirical research into SaaS ERP adoption.
Literature Review
A number of pure-play SaaS vendors as well as traditional ERP providers are offering ERP software via the SaaS model. Krigsman [START_REF] Krigsman | The 2011 focus experts' guide to enterprise resource planning[END_REF] summarized the major SaaS ERP vendors and offerings and found that many are offering the six major core modules: Financial Management, Human Resources Management, Project Management, Manufacturing, Service Operations Management and Supply Chain Management.
However, according to Aberdeen Group, only nine SaaS vendors actually offered pure SaaS ERP software and services [START_REF] Wailgum | SaaS ERP Has Buzz, But Who Are the Real Players[END_REF]. A Forrsights survey found that 15% of survey participants were planning adoption of SaaS ERP by 2013 [START_REF] Kisker | ERP Grows Into The Cloud: Reflections From SuiteWorld[END_REF]. However, two-thirds of those firms were planning to complement their existing on-premise ERP software with a SaaS offering. Only 5% of survey participants planned to replace most/all of their on-premise ERP systems within 2 years (from the time of their survey). These findings provide evidence of the slow rate of SaaS ERP adoption. It should also be noted that popular SaaS ERP vendors such as Netsuite and Epicor were not yet providing SaaS ERP products in South Africa during the time of this study in 2011. Given the scarcity of SaaS ERP literature, a literature review of the factors potentially influencing this slow adoption was performed based on prior studies relating to on-premise ERP adoption, IS adoption, SaaS, ASP and IS outsourcing (Figure 1). The major factors identified are structured according to the Technology-Organization-Environment (TOE) framework [START_REF] Tornatzky | The process of technological innovation[END_REF]. For reasons of parsimony, only those factors that were confirmed by our results are discussed in the results section of this paper.
Research Method
The primary research question was to identify why South African SMEs are reluctant to consider the adoption of SaaS ERP. Given the lack of available research, an inductive, interpretive and exploratory approach was deemed appropriate. The study also contained deductive elements as past research was used to generate an initial model. Walsham [START_REF] Walsham | Interpretive case studies in IS research: Nature and method[END_REF] posits that past theory in interpretive research is useful as a means of creating a sensible theoretical basis for informing the initial empirical work. To reduce the risk of relying too heavily on theory, a significant degree of openness to the research data was maintained through continual reassessment of initial assumptions [START_REF] Walsham | Interpretive case studies in IS research: Nature and method[END_REF].
Non-probability purposive sampling [START_REF] Saunders | Research methods of business students[END_REF] was used to identify suitable organizations to interview, and ethics approval from the University was obtained prior to commencing data collection. The sample frame consisted of South African SMEs with between 50 and 200 employees [START_REF]Small Business Act 102[END_REF]. One participating organization had 250 employees and was included due to difficulties finding appropriate interview candidates. SMEs in different industry segments were targeted to increase representation. Furthermore, SMEs that operated within traditional ERP-focussed industries (e.g. manufacturing, logistics, distribution, warehousing and financial services) were considered to improve the relevance of research findings. The majority of participants interviewed were key decision makers, so as to accurately reflect the intention to adopt SaaS ERP software within their respective organizations. Table 1 provides a summary of company and participant demographics. Data was collected using semi-structured interviews with questions which were initially guided by a priori themes extracted from the literature review. However, the researcher practised flexibility by showing a willingness to deviate from the initial research questions in order to explore new avenues [START_REF] Myers | The qualitative interview in IS research: Examining the craft[END_REF].
Data analysis was conducted using the general inductive approach, where research findings emerged from the significant themes in the raw research data [START_REF] Thomas | A general inductive approach for analyzing qualitative evaluation data[END_REF]. To enhance the quality of the analysis, member checking, thick descriptions, code-recode and audit trail strategies [START_REF] Anfara | Qualitative analysis on stage: Making the research process more public[END_REF] were employed.
During interviews, it was apparent that the term "ERP" was sometimes used to represent functionality provided by a number of disparate systems. Thus the term ERP was interpreted in terms of how the participants' companies used their business software collectively to fulfil the role of ERP software. Table 2 below provides an overview of the software landscape for each of the companies interviewed. Companies used a combination of off-the-shelf, bespoke, vertical ERP or modular ERP applications.
In this study, intention to adopt SaaS ERP software is defined as the degree to which the organization (SME) considers replacing all or most of its on-premise enterprise software with SaaS ERP software. SaaS ERP was defined as web-based ERP software that is hosted by SaaS ERP vendors and delivered to customers via the internet. The initial engagement with participants focussed primarily on multi-tenant SaaS ERP offerings, implying that a single instance of the ERP software would be shared with other companies. At the time of this study SaaS ERP was not easily available from vendors in South Africa. Irrespective of the availability, none of the companies interviewed had an intention of adopting SaaS ERP software in the future. However, one participant suggested a positive intention towards adoption of SaaS applications: "Microsoft CRM is available on the SaaS model...that's the way companies are going and we are seriously considering going that way" (Participant B). His company was in the process of planning a trial of SaaS CRM software. However, Participant B's organization was also in the process of implementing on-premise ERP software. The findings are inconsistent with global Gartner and Forrsights surveys which reported a willingness and intention to adopt SaaS ERP software within small and mid-sized organizations [START_REF]SaaS ERP: Trends and observations[END_REF]; [START_REF] Kisker | ERP Grows Into The Cloud: Reflections From SuiteWorld[END_REF].
The main objective of this research was to explore the factors that impacted the reluctance to consider SaaS ERP software adoption within South African SMEs. The following 7 themes emerged and are discussed in the following sections:
1. Perceived cost reduction (driver)
2. Sunk cost and Satisfaction with existing system (inhibitor)
3. Systems performance and availability risk (inhibitor)
4. Improved IT reliability (driver)
5. Data security risk (inhibitor)
6. Loss of control and Vendor trust (inhibitor)
7. Functionality Fit and Customization Limitations (inhibitor)
Perceived cost reduction
In line with the literature, cost reductions were envisaged in terms of initial hardware and infrastructure [START_REF] Kaplan | SaaS survey shows new model becoming mainstream[END_REF]; [START_REF] Torbacki | SaaS-direction of technology development in ERP/MRP systems[END_REF]; [START_REF] Xin | Software-as-a Service Model: Elaborating Client-Side Adoption Factors[END_REF] and were perceived as having a positive effect on intention to adopt SaaS ERP. However, participants also referred to the high cost of maintaining their on-premise ERP applications and potential long-term operational cost savings with SaaS ERP. "..it's the ongoing running costs, support and maintenance, that makes a difference" (Participant B). However, these high costs were often justified in terms of the value that their on-premise systems provided: "...if it's considered important then cost is very much a side issue" (Participant D).
Sunk cost and Satisfaction with existing systems
The intention to adopt SaaS ERP was negatively affected by sunk costs and satisfaction with existing systems. This was the second most dominant theme. Sunk cost represents the irrecoverable costs incurred during the acquisition and evolution of the companies' existing IT systems.
"...if you're company that's got a sunk cost in ERP...the hardware and staff and training them up...what is the benefit of moving across to a SaaS model?" (A1).
"...if we were starting today with a clean slate, with not having a server room full of hardware, then definitely...SaaS would be a good idea" (D) Satisfaction with existing systems relates to the perception of participants that their existing enterprise software was fit for purpose.
"...whenever you've got a system in place that ticks 90% of your boxes and it's reliable...why change, what are we going to gain, will the gain be worth the pain and effort and the cost of changing" (A1). The effect of sunk costs towards SaaS ERP adoption intent could not be verified within academic literature but is consistent with the 2009 Aberdeen Group survey, where organizations showed reluctance towards adoption due to past investment in IT [START_REF]SaaS ERP: Trends and observations[END_REF]. Both sub-themes were also related to a lack of perceived benefits towards changing to alternatives such as SaaS ERP.
"
…you're constantly investing in the current system and you're depreciating those costs over three, five, years. So… if you've got those sunk costs…even if you could save 30% you'd have to weigh it up around the investment" (A1).
This is in agreement with research which states that organizations adopt technology innovations only if they consider the technology to be capable of addressing a perceived performance gap or exploiting a business opportunity [START_REF] Premkumar | Adoption of new information technologies in rural small businesses[END_REF].
System performance and availability risk
Concerns over systems performance and availability risk were the dominant reason for the reluctance to adopt SaaS ERP and were commented on by all participants. These concerns were primarily related to bandwidth constraints in South Africa. More specifically, bandwidth cost, internet latency limitations and bandwidth reliability (uptime) were considered factors which impacted the performance and availability of SaaS ERP solutions, thus impacting adoption intent. These findings are in line with literature which suggests that systems performance and availability concerns have a negative impact on ASP adoption [START_REF] Lee | Determinants of success for application service provider: An empirical test in small businesses[END_REF] and SaaS adoption [START_REF] Benlian | Drivers of SaaS-Adoption-An empirical study of different application types[END_REF].
"The cheapest, I suppose is the ADSL, with 4MB lines, but they tend to fall over, cables get stolen" (Participant D).
"They can't guarantee you no downtime, but I mean there are so many factors locally that they've got no control of. You know, you have a parastatal running the bulk of our bandwidth system" (E) Systems performance and availability was associated with the risk of losing access to mission-critical systems and the resulting impact on business operations. Although bandwidth has become cheaper and more reliable in South Africa over the past decade, organizations and SaaS vendors are still faced with a number of challenges in addressing the risks associated with performance and availability of SaaS ERP software.
Improved IT Reliability
Most participants felt that SaaS ERP would be beneficial as a means of providing them with improved reliability of their core business software due to sophisticated platform technology, regular software updates, more effective backups and better systems redundancy. These sub-themes were considered major benefits of SaaS ERP software for the SMEs interviewed. The perceived benefits of redundancy, backups and software updates were expressed as follows:
"I think it will be a safer option ...if they've got more expensive infrastructure with redundancy built in" (C1).
"...the other advantage is in terms of backing up and protecting of data…at least that becomes somebody else's responsibility" (E). "...it's probably more often updated...because it's been shared across a range of customers; it has to really be perfect all the time" (A1).
The benefit of improved IT reliability becomes more evident when one considers that many SMEs lack the skills and resources required to manage their on-premise enterprise systems effectively [START_REF] Kamhawi | Enterprise resource-planning systems adoption in Bahrain: Motives, benefits, and barriers[END_REF]; [START_REF] Ramdani | SMEs & IS innovations adoption: A review and assessment of previous research[END_REF], thus making on-demand sourcing models such as SaaS more attractive: "...having ERP software in-house that you maintain…does come with huge human resource constraint's." and "I'm not in the business of managing ERP systems, I'm in the business of book publishing and distribution...SaaS ERP makes all the sense in the world...you focus on just using it for your business rather than you run the product as well" (A1).
Data Security Risk
Data security concerns were the fourth most dominant explanation and were related to concerns around the security and confidentiality of business information hosted on SaaS vendor infrastructure. Senior management provided the majority of responses. Data security concerns related to external hacking, risks from inside the SaaS vendor environment and from other clients sharing the infrastructure.
"...somebody somewhere at some level has got to have access to all of that information and it's a very off-putting factor for us" (E). "they've got a large number of other clients accessing the same servers" (D)
This confirms data security risk as one of the major inhibitors of SaaS ERP adoption [START_REF] Buonanno | Factors affecting ERP system adoption: A comparative analysis between SMEs and large companies[END_REF], [START_REF] Hai | SaaS and integration best practices[END_REF], [START_REF] Heart | Who Is out there? Exploring Trust in the Remote-Hosting Vendor Community[END_REF]; [START_REF] Xin | Software-as-a Service Model: Elaborating Client-Side Adoption Factors[END_REF]. Issues relating to vendor control over privileged access and segregation of data between SaaS tenants [START_REF] Brodkin | Gartner: Seven cloud-computing security risks[END_REF] appear to be strong concerns. Whilst SaaS vendors claim that their solutions are more secure, SaaS is generally considered suitable for applications with low data security and privacy concerns [START_REF] Benlian | Drivers of SaaS-Adoption-An empirical study of different application types[END_REF].
Ensuring that sufficient data security mechanisms are in place is also critical in terms of regulatory compliance when moving applications into the cloud [START_REF] Armbrust | A view of cloud computing[END_REF]. South African organizations would also need to consider the new Protection of Personal Information Act.
Loss of Control and Lack of Vendor Trust
A number of participants associated SaaS ERP with a loss of control over their software and hardware components. They also raised concerns around trusting vendors with their mission-critical software solutions. This was the 3rd most dominant theme, with the majority of responses coming from senior management: "...if they decide to do maintenance...there's nothing we can do about it...you don't have a choice" (C2).
"...they sort of cut corners and then you end up getting almost a specific-to-SLA type of service" (A2). "Obviously the disadvantage is the fact that you are putting a lot of trust in another company and you've got to be sure that they are going to deliver because your entire business now is running on the quality of their staff, their turnaround times" (A1).
Participants felt that being reliant on vendors introduced risk that may affect the performance, availability and security of their mission-critical applications. This is related to literature suggesting that organizations prefer in-house systems due to the risk of losing control over mission-critical applications [START_REF] Benlian | Drivers of SaaS-Adoption-An empirical study of different application types[END_REF]. The linkage between lack of vendor trust and two other themes, systems performance and availability risk and data security risk, is consistent with Heart's [START_REF] Heart | Who Is out there? Exploring Trust in the Remote-Hosting Vendor Community[END_REF] findings.
In this study, systems performance and availability risk was primarily related to bandwidth constraints (cost, internet latency and reliability). Thus, in the context of this study, the vendor trust aspect largely concerns trusting SaaS vendors to ensure data security and ISPs to ensure internet connectivity uptime.
Functionality Fit and Customization Limitations
Functionality fit refers to the degree to which ERP software matches the organization's functionality requirements. This was the least dominant concern, with three participants raising concerns around the lack of flexibility of SaaS ERP software and the ability to customize it: "...it's got enhanced modules like book production....it gets quite complex, so that's for instance one of the modules that's quite niche that you don't get in typical ERP...I think if you were starting from scratch and you had nothing, the benefit would be that if we put (current ERP software) in, the product and the people who put it in for you understand the industry whereas...but would there be anyone within SAP or Oracle who really understands the book industry?" (A).
"I think the disadvantages are flexibility...most of them won't allow too much of customization" (B).
"They do have a certain amount of configurability in the program...but when it comes down to the actual software application, they (ERP vendor) say this is what you get...and if you want to change, that's fine but then we'll make the change available to everybody...so you lose your competitive advantage" (D). Functionality fit is considered an important factor which effects on-premise ERP software adoption [START_REF] Buonanno | Factors affecting ERP system adoption: A comparative analysis between SMEs and large companies[END_REF] [START_REF] Markus | The enterprise systems experience-from adoption to success[END_REF]. There are a limited number of vendors providing pure SaaS ERP software services [START_REF] Ramdani | SMEs & IS innovations adoption: A review and assessment of previous research[END_REF] and SaaS ERP vendors are providing core ERP modules that cater for a wider market segment [START_REF] Krigsman | The 2011 focus experts' guide to enterprise resource planning[END_REF]. However, niche organizations that require highly specific functionality may find SaaS ERP software unsuitable, since the SaaS ERP business process logic may not fit their organization's functionality requirements.
Customization of ERP software is viewed as a means of accommodating the lack of functionality fit between the ERP software and the organization's functionality requirements; however, customization is limited within multi-tenancy SaaS ERP software [START_REF] Xin | Software-as-a Service Model: Elaborating Client-Side Adoption Factors[END_REF]; [START_REF] Chong | Architecture strategies for catching the long tail[END_REF].
Organizations could adopt SaaS ERP to fulfil standard functionality (accounting, warehousing, etc.) whilst retaining in-house bespoke software to deliver the specific functionality required, but then integration complexity could become an issue. Various integration options are available for SaaS users. Platform as a service (PaaS) solutions provided by SalesForce.com (using Force.com and AppExchange) provide organizations with opportunities for purchasing 3rd party plugins that address integration needs [START_REF] Deyo | Software as a service (SaaS): A look at the migration of applications to the web[END_REF]. However, changes to the SaaS software (e.g. software upgrades or customization) could break 3rd party interfaces [START_REF] Hai | SaaS and integration best practices[END_REF]. Alternatively, organizations can make use of the standard web application programming interfaces (APIs) provided by the SaaS solution providers [START_REF] Chong | Architecture strategies for catching the long tail[END_REF]; [START_REF] Hai | SaaS and integration best practices[END_REF]. This enables SaaS vendors to continuously provide updates to functionality without breaking existing integrations [START_REF] Hai | SaaS and integration best practices[END_REF]. However, these integration solutions have raised concerns around data security since multiple customers are transacting via the same web APIs [START_REF] Sun | Software as a service: An integration perspective[END_REF].
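As an illustration of the web-API integration style mentioned above, the short sketch below shows a hypothetical call from an in-house system to a versioned SaaS ERP endpoint; the URL, token and payload fields are invented for illustration and do not correspond to any specific vendor's API.

```python
# Hypothetical example of integrating an in-house system with a SaaS ERP via a
# versioned web API; the endpoint, token and payload fields are invented.
import json
import urllib.request

SAAS_API = "https://erp.example-saas-vendor.com/api/v1"  # versioned endpoint (assumption)
API_TOKEN = "replace-with-tenant-token"

def push_invoice(invoice):
    """Send an invoice created in a bespoke in-house system to the SaaS ERP."""
    request = urllib.request.Request(
        f"{SAAS_API}/invoices",
        data=json.dumps(invoice).encode("utf-8"),
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Pinning the integration to /api/v1 illustrates how a stable, versioned API can let a
# vendor update the underlying software without breaking existing integrations.
```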
The purpose of this research was to investigate the reluctance of South African SMEs to consider the SaaS ERP business model. The following seven themes emerged, ordered from most to least significant, based on the participants' perceptions, personal experience and organizational context (Figure 2).
1. Systems performance and availability risk (inhibitor)
2. Sunk cost and Satisfaction with existing system (inhibitor)
3. Loss of control and Vendor trust (inhibitor)
4. Data security risk (inhibitor)
5. Improved IT reliability (driver)
6. Perceived cost reduction (driver)
7. Functionality Fit and Customization Limitations (inhibitor)
Reluctance to adopt SaaS ERP was predominantly attributed to system performance and availability risk; data security risk; and loss of control and lack of vendor trust. Furthermore, loss of control and lack of vendor trust was found to increase the risks associated with systems performance and availability and the risks associated with data security. Thus organizations believed that in-house systems afforded them more control over their mission-critical software. The presence of sunk costs appeared to negatively affect their perceptions towards the degree of cost reduction gains on offer with SaaS ERP software. Satisfaction with existing systems was associated with a lack of perceived benefits towards SaaS ERP software (why should we change when our current systems work?).
There was an acknowledgement that the SaaS ERP model would provide improved IT reliability but it also would come with reduced functionality fit and customization limitations.
Lack of control and vendor trust concerns dominate in the South African environment, and this is exacerbated by high risks of unavailability attributed to the poor network infrastructure of the country. Concerns regarding cable theft were even reported. The findings in this study are not necessarily representative of all organizations in South Africa. Moreover, given the lack of SaaS ERP vendor presence in South Africa, it is reasonable to assume that South African organizations lack sufficient awareness of SaaS ERP software capabilities, which may have introduced a significant degree of bias.
By providing empirically supported research into SaaS ERP adoption, this research has attempted to narrow the research gap and to provide a basis for the development of future knowledge and theory. SaaS vendors in particular may be able to benefit through comparing these findings with their own surveys and establishing new and innovative ways to address the inhibitors of SaaS ERP adoption intent.
These research findings suggest similarities between the satisfaction with existing systems factor and the diffusion of innovations (DOI) model construct "relative advantage". Other data segments (not included within this paper) also suggest a possible relationship with two other DOI constructs "observability" and "trialability". Therefore the use of DOI theory for future research into SaaS ERP adoption might improve understanding.
Fig. 1. Model derived from the broad literature.
Fig. 2. An explanation of SME reluctance to adopt SaaS ERP. Negative effects are indicated by a negative sign (-) and positive effects by a positive sign (+).
Table 1. Company and Participant Demographics.
Company code | Participant code | Position | Experience | Industry | Employees
A | A1 | Digital Director | 10 years + | Book publishing & distribution | 250
A | A2 | IT Operations Manager | 17 years | Book publishing & distribution | 250
B | B | Head of IT | 20 years + | Financial Services | 120
C | C1 | Chief Operating Officer | 20 years + | Specialized Health Services | 50
C | C2 | IT Consultant | 7 years + | Specialized Health Services | 50
D | D | Financial Director | 20 years + | Freight Logistics Provider | 200
E | E | Managing Director | 20 years + | Medical Distribution | 137
Table 2. Software landscape for companies interviewed.
Current Software Landscape | A | B | C | D | E
Using industry-specific ERP software | Yes | No | No | No | No
Using component-based ERP software | No | No | Yes | Yes | Yes
Using off-the-shelf software | Yes | Yes | Yes | Yes | Yes
Using Bespoke (customized) software | Yes | Yes | Yes | Yes | No
Implementation of ERP software in progress | No | Yes | No | No | No
| 32,816 | [ "1003468", "1003469" ] | [ "303907", "303907", "487694" ] |
01484691 | en | [ "shs", "info" ] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484691/file/978-3-642-36611-6_5_Chapter.pdf | P Pytel
email: ppytel@gmail.com
P Britos
email: paobritos@gmail.com
R García-Martínez
A Proposal of Effort Estimation Method for Information Mining Projects Oriented to SMEs
Keywords: Effort Estimation method, Information Mining, Small and Mediumsized Enterprises, Project Planning, Software Engineering
At the beginning of every project, software projects need a prediction of the cost and effort, together with the associated quantity of resources. Information Mining projects are not an exception to this requirement, particularly when they are required by Small and Medium-sized Enterprises (SMEs). An existing estimation method for Information Mining projects is not reliable for small-sized projects because it tends to overestimate the effort. Therefore, considering the characteristics of these projects developed with the CRISP-DM methodology, an estimation method oriented to SMEs is proposed in this paper. First, the main features of SMEs' projects are described and applied as cost drivers of the new method with the corresponding formula. Then the method is validated by comparing its results to those of the existing estimation method using real SME projects. As a result, it can be seen that the proposed method produces a more accurate estimation than the existing estimation method for small-sized projects.
Introduction
Information Mining consists of the extraction of non-trivial knowledge which is located (implicitly) in the available data from different information sources [START_REF] Schiefer | Process Information Factory: A Data Management Approach for Enhancing Business Process Intelligence[END_REF]. That knowledge is previously unknown and can be useful for some decision-making process [START_REF] Stefanovic | Supply Chain Business Intelligence Model[END_REF]. Normally, for an expert, the data itself is not the most relevant part, but rather the knowledge included in its relations, fluctuations and dependencies. An Information Mining process can be defined as a set of logically related tasks that are executed to obtain [START_REF] Curtis | Process Modelling[END_REF], from a set of information with a degree of value to the organization, another set of information with a greater degree of value than the initial one [START_REF] Ferreira | Integration of Business Processes with Autonomous Information Systems: A Case Study in Government Services[END_REF]. Once the problem and the customer's necessities are identified, the Information Mining Engineer selects the Information Mining Processes to be executed. Each Information Mining Process has several Data Mining Techniques that may be chosen to carry out the job [START_REF] Garcia-Martinez | Information Mining Processes Based on Intelligent Systems[END_REF]. Thus, it can be said that Data Mining is associated with the technology (i.e. algorithms from the Machine Learning field) while Information Mining is related to the processes and methodologies needed to complete the project successfully. In other words, while Data Mining is more related to the development tasks, Information Mining is closer to Software Engineering activities [START_REF] García-Martínez | Towards an Information Mining Engineering[END_REF]. However, not all the models and methodologies available in Software Engineering can be applied to Information Mining projects because they do not handle the same practical aspects [START_REF] Rodríguez | Estimación Empírica de Carga de Trabajo en Proyectos de Explotación de Información[END_REF]. Therefore, specific models, methodologies, techniques and tools need to be created and validated in order to aid Information Mining practitioners in carrying out a project.
As with every Software project, Information Mining projects begin with a set of activities referred to as project planning. This requires predicting the effort, together with the necessary resources and associated cost. Nevertheless, the usual effort estimation methods applied in Conventional Software Development projects cannot be used for Information Mining projects because the characteristics considered are different. For example, COCOMO II [START_REF] Boehm | Software Cost Estimation with COCOMO II[END_REF], one of the most widely used estimation methods for Conventional Software projects, uses the quantity of source code lines as a parameter. This is not useful for estimating an Information Mining project because the data mining algorithms are already available in commercial tools, so it is not necessary to develop software. Estimation methods for Information Mining projects should use more representative characteristics, such as the quantity of data sources, the level of integration within the data and the type of problem to be solved. In that respect, only one specific analytical estimation method for Information Mining projects has been found after a documentary search. This method, called Data Mining Cost Model (or DMCoMo), is defined in [START_REF] Marbán | A cost model to estimate the effort of data mining projects (DMCoMo)[END_REF]. However, a statistical analysis of DMCoMo performed in [START_REF] Pytel | Estudio del Modelo Paramétrico DMCoMo de Estimación de Proyectos de Explotación de Información[END_REF] has found that this method tends to overestimate the effort, principally in the small-sized projects that are usually required by Small and Medium-sized Enterprises [START_REF] García-Martínez | Ingeniería de Proyectos de Explotación de Información para PYMES[END_REF].
In this context, the objective of this paper is to propose a new effort estimation method for Information Mining projects that considers the features of Small and Medium-sized Enterprises (SMEs). First, the DMCoMo estimation method is described (section 2), and the main characteristics of SMEs' projects are identified (section 3). Then an estimation method oriented to SMEs is proposed (section 4) and its results are compared to those of the DMCoMo method using real project data (section 5). Finally, the main conclusions and future research work are presented (section 6).
DMCoMo Estimation Method
Analytical estimation methods (such as COCOMO) are constructed by applying regression methods to the available historical data in order to obtain mathematical relationships between the variables (also called cost drivers); these relationships are formalized through mathematical formulas which are used to calculate the estimated effort. DMCoMo [START_REF] Marbán | A cost model to estimate the effort of data mining projects (DMCoMo)[END_REF] defines a set of 23 cost drivers to perform the cost estimation, which are associated with the main characteristics of Information Mining projects. These cost drivers are classified in six categories, as specified in [START_REF] Marbán | A cost model to estimate the effort of data mining projects (DMCoMo)[END_REF]. Once the values of the cost drivers are defined, they are introduced in the mathematical formulas provided by the method. DMCoMo has two formulas which have been defined by linear regression with the information of 40 real projects of different business types (such as marketing, meteorological projects and medical projects). The first formula uses all 23 cost drivers as variables (formula named MM23) and it should be used when the project is well defined, while the second formula only uses 8 cost drivers (MM8) and it should be used when the project is partially defined. As a result of introducing the values in the corresponding formula, the quantity of men x month (MM) is calculated. But, as has been pointed out by the authors, the behaviour of DMCoMo in projects outside the 90 to 185 men x month range is unknown. A statistical analysis of its behaviour performed in [START_REF] Pytel | Estudio del Modelo Paramétrico DMCoMo de Estimación de Proyectos de Explotación de Información[END_REF] shows that DMCoMo always tends to overestimate the effort (i.e. all project estimations are always bigger than 60 men x month). Therefore, DMCoMo could be used in medium and big-sized projects but it is not useful for small-sized projects. As these are the projects normally required by Small and Medium-sized Enterprises, a new estimation method for Information Mining projects is proposed considering the characteristics of small-sized projects.
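Since the regression coefficients of MM23 and MM8 are defined by the DMCoMo authors and are not reproduced here, the following sketch only illustrates the general linear form in which such a model is evaluated; the coefficient values shown are placeholders, not the actual DMCoMo coefficients.

```python
# Illustrative evaluation of a DMCoMo-style linear cost model in Python.
# The coefficient values below are placeholders, not the published MM8/MM23 coefficients.

def estimate_effort_mm(cost_driver_values, coefficients, intercept):
    """Effort (men x month) = intercept + sum(coefficient_i * cost_driver_i)."""
    if len(cost_driver_values) != len(coefficients):
        raise ValueError("one coefficient is required per cost driver")
    return intercept + sum(c * x for c, x in zip(coefficients, cost_driver_values))

# Example with an 8-driver (MM8-like) model and invented numbers:
driver_ratings = [3, 2, 4, 1, 2, 3, 2, 1]        # ordinal ratings of 8 cost drivers
placeholder_coefficients = [2.5, 1.8, 3.1, 0.9, 1.2, 2.0, 1.5, 0.7]
placeholder_intercept = 40.0
print(estimate_effort_mm(driver_ratings, placeholder_coefficients, placeholder_intercept))
```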
SMEs' Information Mining Projects
According to the Organization for Economic Cooperation and Development (OECD) Small and Medium-sized Enterprises (SMEs) and Entrepreneurship Outlook report [START_REF]Organization for Economic Cooperation and Development: OECD SME and Entrepreneurship Outlook[END_REF]: "SMEs constitute the dominant form of business organization in all countries world-wide, accounting for over 95 % and up to 99 % of the business population depending on country". However, although the importance of SMEs is well known, there is no universal criterion to characterise them. Depending on the country and region, there are different quantitative and qualitative parameters used to recognize a company as an SME. For instance, in Latin America each country has a different definition [START_REF] Álvarez | Manual de la Micro[END_REF]: while Argentina considers as SME all independent companies that have an annual turnover lower than USD 20,000 (a maximum amount in U.S. dollars that depends on the company's activities), Brazil includes all companies with 500 employees or less. On the other hand, the European Union defines as SMEs all companies with 250 employees or less, assets lower than USD 60,000 and gross sales lower than USD 70,000 per year. In that respect, the International Organization for Standardization (ISO) has recognized the necessity to specify a software engineering standard for SMEs and is thus working on the ISO/IEC 29110 standard "Lifecycle profiles for Very Small Entities" [START_REF]International Organization for Standardization: ISO/IEC DTR 29110-1 Software Engineering -Lifecycle Profiles for Very Small Entities (VSEs) -Part 1: Overview[END_REF]. The term 'Very Small Entity' (VSE) was defined by the ISO/IEC JTC1/SC7 Working Group 24 [START_REF] Laporte | Developing International Standards for VSEs[END_REF] as being "an entity (enterprise, organization, department or project) having up to 25 people".
From these definitions (and our experience), in this paper an Information Mining project for SMEs is demarcated as a project performed at a company of 250 employees or less (at one or several locations) where the high-level managers (usually the company's owners) need non-trivial knowledge extracted from the available databases to solve a specific business problem with no special risks at play. As the company's employees usually do not have the necessary experience, the project is performed by contracted outsourced consultants. From our experience, the project team can be restricted to up to 25 people (including both the outsourced consultants and the involved company staff) with a maximum project duration of one year.
The initial tasks of an Information Mining project are similar to those of a Conventional Software Development project. The consultants need to elicit both the necessities and desires of the stakeholders, and also the characteristics of the available data sources within the organization (i.e. existing data repositories). Although the outsourced consultants must have a minimum of knowledge and experience in developing Information Mining projects, they may or may not have experience in similar projects in the same business type, which could facilitate the tasks of understanding the organization and its related data. As the data repositories are often not properly documented, the organization's experts should be interviewed. However, experts are normally scarce and reluctant to get involved in the elicitation sessions. Thus, the willingness of the personnel and the supervisors is required to identify the correct characteristics of the organization and the data repositories. As the project duration is quite short and the structure of the organization is centralized, it is considered that the elicited requirements will not change.
On the other hand, the Information and Communication Technology (ICT) infrastructure of SMEs is analysed. In [START_REF] Ríos | El Pequeño Empresario en ALC, las[END_REF] it is indicated that more than 70% of Latin America's SMEs have an ICT infrastructure, but only 37% have automated services and/or proprietary software. Normally, commercial off-the-shelf software is used (such as spreadsheet managers and document editors) to register the management and operational information. The data repositories are not large (from our experience, less than one million records) but are implemented in different formats and technologies. Therefore, the data formatting, data cleaning and data integration tasks will require considerable effort if there are no software tools available to perform them, because ad-hoc software must be developed to implement these tasks.
Proposed Effort Estimation Method Oriented to SMEs
For specifying the effort estimation method oriented to SMEs, first the cost drivers used to characterize an SME project are defined (section 4.1) and then the corresponding formula is presented (section 4.2). This formula has been obtained by regression using real project information. From the 44 real information mining projects available, 77% have been used for obtaining the proposed method's formula (section 4.2) and 23% for the validation of the proposed method (section 5). This means that 34 real projects have been used for obtaining the formula and 10 projects for the validation.
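The fitting step can be illustrated with a short sketch: an ordinary multivariate linear regression over the calibration projects, with the remaining projects held out for validation. The file name, the column names and the use of pandas/scikit-learn are assumptions of this illustration, not artefacts of the collected data set.

```python
# Sketch of the calibration step, assuming the 44 projects are stored in a CSV file
# with one column per cost driver and a column with the real effort (names are
# illustrative assumptions). The first 34 rows are used for fitting, the last 10
# are held out for the validation reported in section 5.
import pandas as pd
from sklearn.linear_model import LinearRegression

COST_DRIVERS = ["OBTY", "LECO", "AREP", "QTUM", "QTUA", "KLDS", "KEXT", "TOOL"]

projects = pd.read_csv("information_mining_projects.csv")   # hypothetical file, 44 rows
calibration, validation = projects.iloc[:34], projects.iloc[34:]

model = LinearRegression()
model.fit(calibration[COST_DRIVERS], calibration["REAL_EFFORT"])

# The fitted coefficients and intercept play the role of the weights in the PEM formula.
print(dict(zip(COST_DRIVERS, model.coef_.round(2))))
print(round(model.intercept_, 2))
```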
These real Information Mining projects have been collected by researchers from the Information Systems Research Group of the National University of Lanus (GISI-DDPyT-UNLa), the Information System Methodologies Research Group of the Technological National University at Buenos Aires (GEMIS-FRBA-UTN), and the Information Mining Research Group of the National University of Rio Negro at El Bolson (SAEB-UNRN). It should be noted that all these projects had been performed applying the CRISP-DM methodology [START_REF] Chapman | CRISP-DM 1.0 Step by step BI guide Edited by SPSS[END_REF]. Therefore, the proposed estimation method can be considered reliable only for Information Mining projects developed with this methodology.
Cost Drivers
Considering the characteristics of Information Mining projects for SMEs indicated in section 3, eight cost drivers are specified. Only a few cost drivers have been identified in this version because, as explained in [START_REF] Chen | Finding the right data for software cost modeling[END_REF], when an effort estimation method is created, the non-significant data should be ignored. As a result, the model is prevented from being too complex (and therefore impractical), the irrelevant and co-dependent variables are removed, and the noise is also reduced. The cost drivers have been selected based on the most critical tasks of the CRISP-DM methodology [START_REF] Chapman | CRISP-DM 1.0 Step by step BI guide Edited by SPSS[END_REF]: in [START_REF] Domingos | 10 challenging problems in data mining research[END_REF] it is indicated that building the data mining models and finding patterns is now quite simple, but 90% of the effort lies in the data pre-processing (i.e. the "Data Preparation" tasks performed in phase III of CRISP-DM). From our experience, the other critical tasks are related to the "Business Understanding" phase (i.e. the "understanding of the business' background" and "identifying the project success" tasks). The proposed cost drivers are grouped into three categories as follows:
Cost drivers related to the project:
• Information Mining objective type (OBTY)
This cost driver analyses the objective of the Information Mining project and therefore the type of process to be applied, based on the definition performed in [START_REF] Garcia-Martinez | Information Mining Processes Based on Intelligent Systems[END_REF]. The allowed values for this cost driver are indicated in table 2.
Table 2. Values of OBTY cost driver
Value  Description
1      It is desired to identify the rules that characterize the behaviour or the description of an already known class.
2      It is desired to identify a partition of the available data without having a previously known classification.
3      It is desired to identify the rules that characterize the data partitions without a previous known classification.
4      It is desired to identify the attributes that have a greater frequency of incidence over the behaviour or the description of an already known class.
5      It is desired to identify the attributes that have a greater frequency of incidence over a previously unknown class.
• Level of collaboration from the organization (LECO)
The level of collaboration from the members of the organization is analysed by reviewing whether the high-level management (i.e. usually the SME's owners), the middle-level management (supervisors and department heads) and the operational personnel are willing to help the consultants to understand the business and the related data (especially in the first phases of the project). If the Information Mining project has been contracted, it is assumed that at least the high-level management supports it. The possible values for this cost factor are shown in table 3.
Table 3. Values of LECO cost driver
Value  Description
1      Both managers and the organization's personnel are willing to collaborate on the project.
2      Only the managers are willing to collaborate on the project while the rest of the company's personnel is indifferent to the project.
3      Only the high-level managers are willing to collaborate on the project while the middle-level manager and the rest of the company's personnel is indifferent to the project.
4      Only the high-level managers are willing to collaborate on the project while the middle-level manager is not willing to collaborate.
Cost Drivers related to the available data:
• Quantity and type of the available data repositories (AREP)
The data repositories to be used in the Information Mining process are analysed (including data base management systems, spread-sheets and documents among others). In this case, both the quantity of data repositories (public or private from the company) and the implementation technology are studied. In this stage, it is not necessary to know the quantity of tables in each repository because their integration within a repository is relatively simple as it can be performed with a query statement. However, depending on the technology, the complexity of the data integration tasks could vary. The following criteria can be used:
- If all the data repositories are implemented with the same technology, then the repositories are compatible for integration.
- If the data can be exported into a common format, then the repositories can be considered as compatible for integration because the data integration tasks will be performed using the exported data.
- On the other hand, if there are non-digital repositories (i.e. written paper), then the technology should not be considered compatible for the integration. However, the estimation method is not able to predict the time required to perform the digitalization because it could vary with many factors (such as quantity of papers, length, format and diversity among others).
The possible values for this cost factor are shown in table 4.
• Total quantity of available tuples in main table (QTUM)
This variable ponders the approximate quantity of tuples (records) available in the main table to be used when applying data mining techniques. The possible values for this cost factor are shown in table 5.
• Total quantity of available tuples in auxiliary tables (QTUA)
This variable ponders the approximate quantity of tuples (records) available in the auxiliary tables (if any) used to add additional information to the main table (such as a table used to determine the product characteristics associated to the product ID of the sales main table). Normally, these auxiliary tables include fewer records than the main table. The possible values for this cost factor are shown in table 6.
• Knowledge level about the data sources (KLDS)
The knowledge level about the data sources considers whether the data repositories and their tables are properly documented; in other words, whether a document exists that defines the technology in which each repository is implemented, the characteristics of the tables' fields, and how the data is created, modified and/or deleted. When this document is not available, it is necessary to hold meetings with experts (usually in charge of the data administration and maintenance) so that they can explain the data sources. As a result, the required project effort increases depending on the collaboration of these experts with the consultants.
The possible values for this cost factor are shown in table 7.
Table 7. Values of KLDS cost driver
Value  Description
1      All the data tables and repositories are properly documented.
2      More than 50% of the data tables and repositories are documented and there are available experts to explain the data sources.
3      Less than 50% of the data tables and repositories are documented but there are available experts to explain the data sources.
4      The data tables and repositories are not documented but there are available experts to explain the data sources.
5      The data tables and repositories are not documented, and the available experts are not willing to explain the data sources.
6      The data tables and repositories are not documented and there are not available experts to explain the data sources.
Cost drivers related to the available resources:
• Knowledge and experience level of the information mining team (KEXT) This cost driver studies the ability of the outsourced consultants that will carry out the project. Both the knowledge and experience of the team in similar previous projects are analysed by considering the similarity of the business type, the data to be used and the expected goals. It is assumed that when there is greater similarity, the effort should be lower. Otherwise, the effort should be increased. The possible values for this cost factor are shown in table 8.
• Functionality and usability of available tools (TOOL)
This cost driver analyses the characteristics of the information mining tools to be utilized in the project and their implemented functionalities. Both the data preparation functions and the data mining techniques are reviewed.
The possible values for this cost factor are shown in table 9.
Table 8. Values of KEXT cost driver
Value  Description
1      The information mining team has worked with similar data in similar business types to obtain the same objectives.
2      The information mining team has worked with different data in similar business types to obtain the same objectives.
3      The information mining team has worked with similar data in other business types to obtain the same objectives.
4      The information mining team has worked with different data in other business types to obtain the same objectives.
5      The information mining team has worked with different data in other business types to obtain other objectives.
Table 9. Values of TOOL cost driver
Value  Description
1      The tool includes functions for data formatting and integration (allowing the importation of more than one data table) and data mining techniques.
2      The tool includes functions for data formatting and data mining techniques, and it allows importing more than one data table independently.
3      The tool includes functions for data formatting and data mining techniques, and it allows importing only one data table at a time.
4      The tool includes only functions for data mining techniques, and it allows importing more than one data table independently.
5      The tool includes only functions for data mining techniques, and it allows importing only one data table at a time.
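As a side illustration (not part of the proposed method itself), the allowed ranges defined in tables 2 to 9 can be encoded in a small data structure so that a project characterization can be checked before estimating. The dictionary layout and function name are assumptions, and the example values are those that table 12 later assigns to validation project P5.

```python
# Minimal sketch: the value ranges of tables 2 to 9 as a dictionary, plus a check
# that a project characterization only uses allowed values.
ALLOWED_VALUES = {
    "OBTY": range(1, 6),  # table 2: 1..5
    "LECO": range(1, 5),  # table 3: 1..4
    "AREP": range(1, 6),  # table 4: 1..5
    "QTUM": range(1, 7),  # table 5: 1..6
    "QTUA": range(1, 5),  # table 6: 1..4
    "KLDS": range(1, 7),  # table 7: 1..6
    "KEXT": range(1, 6),  # table 8: 1..5
    "TOOL": range(1, 6),  # table 9: 1..5
}

def validate_project(project: dict) -> None:
    """Raise ValueError if a cost driver is missing or outside its allowed range."""
    for driver, allowed in ALLOWED_VALUES.items():
        if project.get(driver) not in allowed:
            raise ValueError(f"{driver} must be one of {list(allowed)}")

# Example: the characterization that table 12 assigns to validation project P5.
validate_project({"OBTY": 3, "LECO": 2, "AREP": 2, "QTUM": 5,
                  "QTUA": 2, "KLDS": 3, "KEXT": 1, "TOOL": 5})
```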
Estimation Formula
Once the values of the cost drivers have been specified, they were used to characterize 34 information mining projects with their real effort, collected by co-researchers as indicated before. A multivariate linear regression method [START_REF] Weisberg | Applied Linear Regression[END_REF] has been applied to obtain a linear equation of the form used by the COCOMO family of methods [START_REF] Boehm | Software Cost Estimation with COCOMO II[END_REF]. As a result, the following formula is obtained:
PEM = 0.80 OBTY + 1.10 LECO - 1.20 AREP - 0.30 QTUM - 0.70 QTUA + 1.80 KLDS - 0.90 KEXT + 1.86 TOOL - 3.30
where PEM is the effort estimated by the proposed method for SMEs (in men x month), and the cost drivers are: information mining objective type (OBTY), level of collaboration from the organization (LECO), quantity and type of the available data repositories (AREP), total quantity of available tuples in the main table (QTUM) and in auxiliary tables (QTUA), knowledge level about the data sources (KLDS), knowledge and experience level of the information mining team (KEXT), and functionality and usability of available tools (TOOL). The values for each cost driver are defined in tables 2 to 9 of section 4.1.
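A direct transcription of this formula might look as follows; the function name and the weights dictionary are merely one possible way of writing it down, and the example reuses the cost driver values that table 12 later assigns to validation project P1.

```python
# Transcription of the PEM formula above (weights and intercept copied verbatim);
# the function name and dictionary layout are illustrative choices.
WEIGHTS = {"OBTY": 0.80, "LECO": 1.10, "AREP": -1.20, "QTUM": -0.30,
           "QTUA": -0.70, "KLDS": 1.80, "KEXT": -0.90, "TOOL": 1.86}
INTERCEPT = -3.30

def estimate_effort(project: dict) -> float:
    """Return the estimated effort PEM in men x month."""
    return round(sum(weight * project[driver] for driver, weight in WEIGHTS.items())
                 + INTERCEPT, 2)

# Cost driver values that table 12 assigns to validation project P1:
p1 = {"OBTY": 1, "LECO": 1, "AREP": 3, "QTUM": 3,
      "QTUA": 1, "KLDS": 3, "KEXT": 2, "TOOL": 3}
print(estimate_effort(p1))  # 2.58, the PEM value reported for P1 in tables 12 and 13
```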
Validation of the Proposed Estimation Method
In order to validate the estimation method defined in section 4, the data of 10 other collected information mining projects is used to compare the estimations of the proposed method with both the real effort and the effort estimated by the DMCoMo method. A brief description of these projects with their applied effort (in men x months) is shown in table 10.
Table 10. Data of the information mining projects used for the validation
P1. The business objective is classifying the different types of cars and reviewing the acceptance of the clients, and detecting the characteristics of the most accepted car. The process of discovering behaviour rules is used. Real effort: 2.41 men x months.
P2. As there is no big increment in the middle segment, the company wants to gain market share by attracting new customers. In order to achieve that, it is required to determine the necessities of that niche market. The process of discovering behaviour rules is used. Real effort: 7.00 men x months.
P3. The high management of a company has decided to enhance and expand its market presence by launching a new product. The new concept will be proclaimed as a new production unit aimed at creating more jobs, more sales and therefore more revenue. The processes of discovering behaviour rules and weighting of attributes are used. Real effort: 1.64 men x months.
P4. It is necessary to identify the customer behaviour in order to understand which type of customer is more inclined to buy any package of products. The desired objective is increasing the level of acceptance and sales of product packages. The process of discovering behaviour rules is used. Real effort: 3.65 men x months.
P5. The objectives of the project are performing a personalized marketing campaign to the clients, and locating the ads in the most optimal places (i.e. the places with most CTR). The process of discovering group-membership rules is used. Real effort: 9.35 men x months.
P6. Perform an analysis of the causes why babies have some diseases when they are born, considering the economic, social and educational level, and also the age of the mother. The processes of discovering behaviour rules and weighting of attributes are used. Real effort: 11.63 men x months.
P7. The help desk sector of a governmental organization employs a software system to register each received phone call. As a result, it is possible to identify a repair request, a change or a malfunction of any computer in order to assign a technician who will solve the problem. The process of discovering group-membership rules is used. Real effort: 6.73 men x months.
P8. The objective is improving the image of the company to the customers by having a better distribution service. This means finding the internal and external factors of the company that affect the delay of the orders to be delivered to customers. The process of discovering group-membership rules is used. Real effort: 5.40 men x months.
P9. The purpose is achieving the best global technologies, the ownership of independent intellectual property rights, and the creation of an internationally famous brand among the world-class global automotive market. The processes of discovering group-membership rules and weighting of the attributes are used. Real effort: 8.38 men x months.
P10. It has been decided to identify the key attributes that produce good quality wines. Once these attributes are detected, they should improve the lesser quality wines. The processes of discovering behaviour rules and weighting of attributes are used. Real effort: 1.56 men x months.
Using the collected project data, the values of the DMCoMo cost drivers are defined in order to calculate the corresponding estimated effort. Both the formula that uses 8 cost factors (MM8 column) and the formula that uses the 23 cost factors (MM23 column) are applied, obtaining the values shown in table 11.
Cost driver  P1  P2  P3  P4  P5  P6  P7  P8  P9  P10
NTAB          1   0   0   3   1   0   1   2   0   0
NTUP          1   1   1   5   3   1   1   0   1   1
NATR          7   1   1   5   3   1   1   3   1   1
DISP          1   1   1   2   2   1   1   4   1   1
PNUL          0   1   1   2   1   2   2   1   1   1
DMOD          1   4   1   2   5   1   1   0   1   0
DEXT          1   0   0   1   2   2   2   0   2   2
NMOD          1   2   2   3   3   2   2   1   2   2
TMOD          0   1   1   3   1   1   3   1   4   4
MTUP          1   1   1   3   3   1   1   0   1   1
MATR          1   2   2   3   3   2   2   3   2   2
MTEC          1   1   1   5   3   1   4   1   1   4
NFUN          3   1   1   3   2   1   1   0   1   2
SCOM          1   1   1   0   1   1   1   1   1   1
TOOL          1   1   1   1   1   1   1   1   1   1
COMP          3   5   0   2   1   0   1   4   3   2
NFOR          3   3   3   3   1   1   3   2   3   1
NDEP          4   2   2   1   4   1   3   0   4   2
DOCU          5   2   3   2   2   2   2   2   5   5
SITE          3   1   1   0   2   1   3   0   3   3
KDAT          4   3   3   2   4   1   2   1   3   2
ADIR          1   1   1   4   2   1   2   6   1   1
MFAM          3   5   5   3   1   5   3   0   4   4
Similarly, the same procedure is performed to calculate the effort by applying the formula specified in section 4.2 for the proposed estimation method oriented to SMEs (PEM column), as shown in table 12.
Finally, in table 13 the estimated efforts are compared with the real effort of each project (REf column). The efforts calculated by the DMCoMo method (MM8 and MM23 columns) and by the proposed method for SMEs (PEM column) are indicated with their corresponding error (i.e. the difference between the real effort and the value calculated by each method). Also, the relative error for the estimation of the proposed method is shown (calculated as the error divided by the real effort). This comparison is reflected in a boxplot graph (figure 1) where the behaviour of the real and calculated efforts is shown by indicating the minimum and maximum values (thin line), standard deviation range (thick line) and average value (marker). When analysing the results of the DMCoMo method from table 13, it can be seen that the average error is very big (approximately 86 men x months for both formulas) with an error standard deviation of about ± 20 men x months. DMCoMo always tends to overestimate the effort of the project (i.e. the error values are always negative) with a ratio greater than 590% (the smallest difference corresponds to project #6). This behaviour can also be seen graphically in figure 1. In addition, all estimated values are bigger than 60 men x months, which is the maximum threshold value previously identified for SMEs projects. From these results, the conclusions of [START_REF] Marbán | A cost model to estimate the effort of data mining projects (DMCoMo)[END_REF] are confirmed: the DMCoMo estimation method is not recommended to predict the effort of small-sized information mining projects.
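As an illustrative recomputation (not taken from the paper), the two observations above can be checked directly against the REf, MM8 and MM23 values of table 13; the variable names are assumptions.

```python
# Illustrative recomputation of the two observations above from table 13
# (values copied from the table; variable names are assumptions).
real = [2.41, 7.00, 1.64, 3.65, 9.35, 11.63, 6.73, 5.40, 8.38, 1.56]
mm8  = [84.23, 67.16, 67.16, 118.99, 110.92, 80.27, 96.02, 116.87, 97.63, 105.32]
mm23 = [94.88, 51.84, 68.07, 111.47, 122.52, 81.36, 92.49, 89.68, 98.74, 103.13]

print(all(estimate > 60 for estimate in mm8 + mm23))  # True: every DMCoMo estimate exceeds 60 men x months

overestimation = [(m - r) / r for r, m in zip(real, mm8)]
print(f"{min(overestimation):.0%}")  # about 590%, the smallest relative overestimation (project #6)
```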
On the other hand, when the results of the proposed method for SMEs are analysed, it can be seen that the average error is approximately 1.46 men x months with an error standard deviation of approximately ± 2 men x months. In order to study the behaviour of the proposed method against the real effort, a new boxplot graph is presented in figure 2. From this second boxplot graph, it seems that the proposed method tends to slightly underestimate the real effort. There are similar minimum values (i.e. 1.56 men x months for the real effort and 1.08 men x months for the proposed method), maximum values (i.e. 11.63 men x months for REf and 9.80 for PEM), and averages (i.e. 5.77 and 4.51 men x months respectively). Finally, if the real and estimated efforts of each project are compared using a bar graph (figure 3), it can be seen that the estimations of the proposed method are not completely accurate (see the sketch after this list):
─ Projects #1, #3, #5, #8 and #9 have estimated efforts with an absolute error smaller than one men x month and a relative error lower than 10%.
─ Projects #2 and #10 have an estimated effort smaller than the real one with a relative error lower than 35%. In this case, the average error is about 0.74 men x months with a maximum error of one men x month (project #2).
─ At last, projects #4, #6 and #7 have an estimated effort with a relative error greater than 35% (but lower than 60%). In this case, the maximum error is nearly 7 men x months (project #6) and the average error is 3.81 men x months.
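As a minimal sketch (again not from the paper), the accuracy figures of table 13 can be recomputed from the REf and PEM columns; here the reported average error and error variance are interpreted as the mean and the sample variance of the absolute errors, which reproduces the published values.

```python
# Recomputing the accuracy figures of table 13 from the REf and PEM columns
# (average error read as mean absolute error, error variance as its sample variance).
from statistics import mean, variance

real = [2.41, 7.00, 1.64, 3.65, 9.35, 11.63, 6.73, 5.40, 8.38, 1.56]
pem  = [2.58, 6.00, 1.48, 1.68, 9.80, 5.10, 3.78, 4.88, 8.70, 1.08]

abs_errors = [abs(r - p) for r, p in zip(real, pem)]
relative_errors = [(r - p) / r for r, p in zip(real, pem)]

print(round(mean(abs_errors), 2))       # close to the 1.46 men x months average error in table 13
print(round(variance(abs_errors), 2))   # close to the 3.98 error variance in table 13
print([f"{e:.1%}" for e in relative_errors])  # per-project relative errors
```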
Conclusions
Software projects need to predict the cost and effort, with the associated quantity of resources, at the beginning of every project. The prediction of the required effort to perform an Information Mining project is necessary for Small and Medium-sized Enterprises (SMEs). Considering the characteristics of these projects developed with the CRISP-DM methodology, an estimation method oriented to SMEs has been proposed, defining eight cost drivers and a formula.
From the validation of the proposed method, it has been seen that the proposed method produces a more accurate estimation than the DMCoMo method for small-sized projects. However, even though the overall behaviour of the proposed method is similar to the real project behaviour, it tends to produce a slight underestimation (the average error is smaller than 1.5 men x months). It can be highlighted that 50% of the estimations have a relative error smaller than 10%, and 20% have a relative error between 11% and 35%. For the rest of the estimations, the relative error is smaller than 57%. Nevertheless, in all cases the absolute error is smaller than 7 men x months. These errors could be due to the existence of other factors affecting the project effort which have not been considered in this version of the estimation method.
As future research work, the identified issues will be studied in order to provide a more accurate version of the estimation method oriented to SMEs by studying the dependency between the cost drivers and then adding new cost drivers or redefining the existing ones. Another possible approach is modifying the existing equation formula by using an exponential regression with more collected real project data.
---
Fig. 1. Boxplot graph comparing the behaviour of the Real Effort with the efforts calculated by DMCoMo and by the proposed estimation method for SMEs
Fig. 2. Boxplot graph comparing the behaviour of the Real Effort with the effort calculated by the proposed estimation method for SMEs
Fig. 3. Bar graph comparing for each project the Real Effort (REf) and the effort calculated by the proposed estimation method for SMEs (PEM)
Table 1. Cost Drivers used by DMCoMo
Source Data: Number of Tables (NTAB) - Number of Tuples (NTUP) - Number of Table Attributes (NATR) - Data Dispersion (DISP) - Nulls Percentage (PNUL) - Data Model Availability (DMOD) - External Data Level (DEXT)
Data Models: Number of Data Models (NMOD) - Types of Data Model (TMOD) - Number of Tuples for each Data Model (MTUP) - Number and Type of Attributes for each Data Model (MATR) - Techniques Availability for each Data Model (MTEC)
Development Platform: Number and Type of Data Sources (NFUN) - Distance and Communication Form (SCOM)
Techniques and Tools: Tools Availability (TOOL) - Compatibility Level between Tools and Other Software (COMP) - Training Level of Tool Users (NFOR)
Project: Number of Involved Departments (NDEP) - Documentation (DOCU) - Multisite Development (SITE)
Project Staff: Problem Type Familiarity (MFAM) - Data Knowledge (KDAT) - Directive Attitude (ADIR)
Table 4. Values of AREP cost driver
Value  Description
1      Only 1 available data repository.
2      Between 2 and 5 data repositories with compatible technology for integration.
3      Between 2 and 5 data repositories with non-compatible technology for integration.
4      More than 5 data repositories with compatible technology for integration.
5      More than 5 data repositories with non-compatible technology for integration.
Table 5. Values of QTUM cost driver
Value Description
1 Up to 100 tuples from main table.
2 Between 101 and 1,000 tuples from main table.
3 Between 1,001 and 20,000 tuples from main table.
4 Between 20,001 and 80,000 tuples from main table.
5 Between 80,001 and 5,000,000 tuples from main table.
6 More than 5,000,000 tuples from main table.
Table 6. Values of QTUA cost driver
Value Description
1 No auxiliary tables used.
2 Up to 1,000 tuples from auxiliary tables.
3 Between 1,001 and 50,000 tuples from auxiliary tables.
4 More than 50,000 tuples from auxiliary tables.
Table 11. Effort calculated by the DMCoMo method
#                   P1     P2     P3     P4      P5      P6     P7     P8      P9     P10
MM8 (men x month)   84.23  67.16  67.16  118.99  110.92  80.27  96.02  116.87  97.63  105.32
MM23 (men x month)  94.88  51.84  68.07  111.47  122.52  81.36  92.49  89.68   98.74  103.13
Table 12. Effort calculated by the proposed estimation method oriented to SMEs
# OBTY LECO AREP QTUM QTUA KLDS KEXT TOOL PEM (men x month)
P1 1 1 3 3 1 3 2 3 2,58
P2 1 1 1 3 1 3 5 5 6,00
P3 4 1 1 3 3 2 5 3 1,48
P4 1 4 3 5 1 1 2 3 1,68
P5 3 2 2 5 2 3 1 5 9,80
P6 4 1 1 2 1 1 5 5 5,10
P7 3 2 1 4 1 1 2 3 3,78
P8 1 4 1 3 2 1 1 3 4,88
P9 5 1 1 3 3 3 4 5 8,70
P10 4 1 2 2 1 1 4 3 1,08
Table 13. Comparison of the calculated efforts (in men x month) by DMCoMo (MM8, MM23) and by the proposed method (PEM)
# REf MM8 REf -MM8 MM23 REf -MM23 PEM REf -PEM Relative Error
P1 2.41 84.23 -81.82 94.88 -92.47 2,58 -0.17 -7.2%
P2 7.00 67.16 -60.16 51.84 -44.84 6,00 1.00 14.3%
P3 1.64 67.16 -65.52 68.07 -66.43 1,48 0.16 9.8%
P4 3.65 118.99 -115.34 111.47 -107.82 1,68 1.97 54.0%
P5 9.35 110.92 -101.57 122.52 -113.17 9,80 -0.45 -4.8%
P6 11.63 80.27 -68.65 81.36 -69.73 5,10 6.53 56.1%
P7 6.73 96.02 -89.29 92.49 -85.76 3,78 2.95 43.8%
P8 5.40 116.87 -111.47 89.68 -84.28 4,88 0.52 9.6%
P9 8.38 97.63 -89.26 98.74 -90.36 8,70 -0.33 -3.9%
P10 1.56 105.32 -103.75 103.13 -101.56 1,08 0.48 30.9%
Average Error 88.68 85.64 1.46
Error Variance 380.28 428.99 3.98
Pytel, P., Britos, P., García-Martínez, R.
The real projects data used for regression is available at: http://tinyurl.com/bm93wol
Acknowledgements
The research reported in this paper has been partially funded by research project grants 33A105 and 33B102 of National University of Lanus, by research project grants 40B133 and 40B065 of National University of Rio Negro, and by research project grant EIUTIBA11211 of Technological National University at Buenos Aires.
Also, the authors wish to thank to the researchers that provided the examples of real SMEs Information Mining Projects used in this paper. | 39,486 | [
"1003594",
"1003595",
"992693"
] | [
"300134",
"487857",
"346011",
"487856",
"487857"
] |
01484694 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484694/file/978-3-642-36611-6_8_Chapter.pdf | Wipawee Uppatumwichian
email: wipawee.uppatumwichian@ics.lu.se
Understanding the ERP system use in budgeting
Keywords: structuration theory, budgeting, ERP system, IS use
This paper investigates enterprise resource planning (ERP) system use in budgeting in order to explain how and why ERP systems are used or not used in budgeting practices. Budgeting is considered as a social phenomenon which requires flexibility for decision-making and integration for management controls. The analysis at the activity level, guided by the concept of 'conflict' in structuration theory (ST), suggests that ERP systems impede flexibility in decision-making. However, the systems have the potential to facilitate integration in management controls. The analysis at the structural level, guided by the concept of 'contradiction' in ST, concludes that the ERP systems are not widely used in budgeting. This is because the systems support the integration function alone while budgeting assumes both roles. This paper offers an explanation of ERP system non-use from a utilitarian perspective. Additionally, it calls for solutions to improve ERP use especially for the integration function.
Introduction
The advance in information system (IS) technologies has promised many improved benefits to organisations [START_REF] Davenport | Putting the enterprise into the enterprise system[END_REF][START_REF] Shang | Assessing and managing the benefits of enterprise systems: the business manager's perspective[END_REF]. However such improvements are often hindered by unwillingness to accept new IS technologies [START_REF] Davis | Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology[END_REF][START_REF] Granlund | Moderate impact of ERPS on management accounting: a lag or permanent outcome?[END_REF]. This results in IS technology nonuse [START_REF] Walsham | Cross-Cultural Software Production and Use: A Structurational Analysis[END_REF] and/or workaround [START_REF] Taylor | Understanding Information Technology Usage: A Test of Competing Models[END_REF][START_REF] Boudreau | Enacting integrated information technology: A human agency perspective[END_REF] and, inevitably moderate business benefits. For this reason, a traditional IS use research has been well-established in the discipline [START_REF] Pedersen | Modifying adoption research for mobile interent service adoption: Cross-disciplinary interactions In[END_REF] to investigate how and why users use or not use certain IS technologies.
In the field of accounting information system (AIS), previous research has indicated that there is a limited amount of research as well as understanding on the use of enterprise resource planning (ERP) systems to support management accounting practices [START_REF] Scapens | ERP systems and management accounting change: opportunities or impacts? A research note[END_REF][START_REF] Granlund | Extending AIS research to management accounting and control issues: A research note[END_REF][START_REF] Elbashir | The role of organisational absorptive capacity in strategic use of business intelligence to support integrated management control systems[END_REF][START_REF] Grabski | A review of ERP research: A future agenda for accounting information systems[END_REF]. Up to now, the available research results conclude that most organisations have not yet embraced the powerful capacity of the ERP systems to support the management accounting function [START_REF] Granlund | Moderate impact of ERPS on management accounting: a lag or permanent outcome?[END_REF][START_REF] Dechow | Enterprise resource planning systems, management control and the quest for integration[END_REF][START_REF] Quattrone | A 'time-space odyssey': management control systems in two multinational organisations[END_REF]. Many studies have reported a consistent limited ERP use in management accounting function using data from many countries across the globe such as Egypt [START_REF] Jack | Enterprise Resource Planning and a contest to limit the role of management accountants: A strong structuration perspective[END_REF], Australia [START_REF] Booth | The impacts of enterprise resource planning systems on accounting practice -The Australian experience[END_REF], Finland [START_REF] Granlund | Moderate impact of ERPS on management accounting: a lag or permanent outcome?[END_REF][START_REF] Hyvönen | Management accounting and information systems: ERP versus BoB[END_REF][START_REF] Kallunki | Impact of enterprise resource planning systems on management control systems and firm performance[END_REF][START_REF] Chapman | Information system integration, enabling control and performance[END_REF] and Denmark [START_REF] Rom | Enterprise resource planning systems, strategic enterprise management systems and management accounting: A Danish study[END_REF]. Several researchers have in particular called for more research contributions on the ERP system use in management accounting context, and especially on how the systems might be used to support the two key functions in manegement accounting: decision-making and management control functions [START_REF] Granlund | Extending AIS research to management accounting and control issues: A research note[END_REF][START_REF] Grabski | A review of ERP research: A future agenda for accounting information systems[END_REF][START_REF] Rom | Management accounting and integrated information systems: A literature review[END_REF]. This paper responds to that call by uncovering the ERP systems use in budgeting. In relation to other management accounting activities, budgeting is considered to be the most suitable social phenomenon under investigation. This is because budgeting is a longstanding control procedure [START_REF] Davila | Management control systems in early-stage startup companies[END_REF] which continues to soar in popularity among modern organisations [START_REF] Libby | Beyond budgeting or budgeting reconsidered? A survey of North-American budgeting practice[END_REF]. 
In addition, it assumes the dual roles of decision-making and management control [START_REF] Abernethy | The role of budgets in organizations facing strategic change: an exploratory study[END_REF].
Budgeting is considered as a process undertaken to achieve a quantitative statement for a defined time period [START_REF] Covaleski | Budgeting reserach: Three theoretical perspectives and criteria for selective integration In[END_REF]. A budget cycle can be said to cover activities such as (1) budget construction, (2) consolidation, (3) monitoring and (4) reporting. The levers of control (LOC) framework [START_REF] Simons | How New Top Managers Use Control Systems as Levers of Strategic Renewal[END_REF] suggests that budgeting can be used interactively for decision-making and diagnostically for management control. This is in line with modern budgeting literature [START_REF] Abernethy | The role of budgets in organizations facing strategic change: an exploratory study[END_REF][START_REF] Frow | Continuous budgeting: Reconciling budget flexibility with budgetary control[END_REF] whose interpretation is that budgeting assumes the dual roles. However, the degree of combination between these two roles varies according to management's judgements in specific situations [START_REF] Simons | How New Top Managers Use Control Systems as Levers of Strategic Renewal[END_REF]. This dual role requires budgeting to be more flexible for decision-making yet integrative for management control [START_REF] Uppatumwichian | Analysing Flexibility and Integration needs in budgeting IS technologies In[END_REF].
Given the research gaps addressed and the flexible yet integrative roles of budgeting, this paper seeks to uncover how the ERP systems are used in budgeting as well as to explain why the ERP systems are used or not used in budgeting.
This paper proceeds as follows. The next section reviews the ERP system use literature with regard to the integration and flexibility domains. Section 3 discusses the concepts of conflict and contradiction in structuration theory (ST), which is the main theory used. After that, section 4 deliberates on the research method and the case companies included in this study. Subsequently, section 5 proceeds to the data analysis based on the conflict and contradiction concepts in ST in order to explain how and why ERP systems are used or not used in budgeting. Section 6 ends this paper with conclusions and research implications.
The ERP literature review on flexibility and integration
This section reviews the ERP literature based on the integration and flexibility domains, as it has been previously suggested that budgeting possesses these dual roles. It starts out with a brief discussion of what the ERP system is and its relation to accounting. It then proceeds to discuss the incompatible conclusions in the literature about how the ERP system can be used to promote flexibility and integration.
The ERP system, in essence, is an integrated cross-functional system containing many selectable software modules which span to support numerous business functions that a typical organisation might have such as accounting and finance, human resources, and sales and distributions [START_REF] Grabski | A review of ERP research: A future agenda for accounting information systems[END_REF]. The system can be considered as a reference model which segments organisations into diverse yet related functions through a centralised database [START_REF] Kallinikos | Deconstructing information packages: Organizational and behavioural implications of ERP systems[END_REF]. The ERP system mandates a rigid business model which enforces underlying data structure, process model as well as organisational structure [START_REF] Kumar | ERP expiriences and evolution[END_REF] in order to achieve an ultimate integration between business operation and IS technology [START_REF] Dechow | Management Control of the Complex Organization: Relationships between Management Accounting and Information Technology In[END_REF].
The ERP system has become a main research interest within the IS discipline, as well as within its sister discipline of AIS research, since the inception of the system in the early 1990s [START_REF] Granlund | Moderate impact of ERPS on management accounting: a lag or permanent outcome?[END_REF][START_REF] Grabski | A review of ERP research: A future agenda for accounting information systems[END_REF]. Indeed, it can be said that AIS gave rise to the modern ERP system, because accounting is one of the early business operations in which IS technology was employed to hasten the process [START_REF] Granlund | Introduction: problematizing the relationship between management control and information technology[END_REF]. A research finding [START_REF] Dechow | Enterprise resource planning systems, management control and the quest for integration[END_REF] posits that the ERP systems require implementing organisations to set up the systems according to either 'accounting' or 'logistic' modes, which forms a different control locus in organisations. Such an indication strongly supports the prevailing relationship that accounting has with the modern ERP system.
In relation to the flexibility domain, research to date has provided a contradictory conclusion on the relationship between the ERP system and flexibility. One research stream considers the ERP system to impose a stabilising effect on organisations because of the lack of flexibility in relation to changing business conditions [START_REF] Boudreau | Enacting integrated information technology: A human agency perspective[END_REF][START_REF] Booth | The impacts of enterprise resource planning systems on accounting practice -The Australian experience[END_REF][START_REF] Hyvönen | Management accounting and information systems: ERP versus BoB[END_REF][START_REF] Rom | Enterprise resource planning systems, strategic enterprise management systems and management accounting: A Danish study[END_REF][START_REF] Light | ERP and best of breed: a comparative analysis[END_REF][START_REF] Soh | Cultural fits and misfits: Is ERP a universal solution?[END_REF]. Akkermans et al. [START_REF] Akkermans | The impact of ERP on supply chain management: Exploratory findings from a European Delphi study[END_REF], for example, report that leading IT executives perceive the ERP system as a hindrance to strategic business initiatives. The ERP system is said to have low system flexibility which does not correspond to the changing networking organisation mode. This line of research concludes that a lack of flexibility in the ERP system can pose a direct risk to organisations because the ERP system reference model is not suitable to business processes [START_REF] Soh | Cultural fits and misfits: Is ERP a universal solution?[END_REF][START_REF] Strong | Understanding organization--Enterprise system fit: A path to theorizing the information technology artifact[END_REF]. In addition, the lack of flexibility results in two possible lines of action from users: (1) actions in the form of inaction, that is, a passive resistance not to use the ERP systems [START_REF] Walsham | Cross-Cultural Software Production and Use: A Structurational Analysis[END_REF], or (2) actions to reinvent the systems or a workaround [START_REF] Boudreau | Enacting integrated information technology: A human agency perspective[END_REF]. The other stream of research maintains that ERP system implementation improves flexibility in organisations [START_REF] Shang | Assessing and managing the benefits of enterprise systems: the business manager's perspective[END_REF][START_REF] Brazel | The Effect of ERP System Implementations on the Management of Earnings and Earnings Release Dates[END_REF][START_REF] Spathis | Enterprise systems implementation and accounting benefits[END_REF][START_REF] Cadili | On the interpretative flexibility of hosted ERP systems[END_REF]. Shang and Seddon [START_REF] Shang | Assessing and managing the benefits of enterprise systems: the business manager's perspective[END_REF], for example, propose that the ERP system contributes to increased flexibility in organisational strategies. This is because a modular IT infrastructure in the ERP system allows organisations to cherry-pick modules which support their current business initiatives. In the same line, Brazel and Dang [START_REF] Brazel | The Effect of ERP System Implementations on the Management of Earnings and Earnings Release Dates[END_REF] posit that ERP implementation allows more organisational flexibility to generate financial reports.
Cadili and Whitley [START_REF] Cadili | On the interpretative flexibility of hosted ERP systems[END_REF] support this view to a certain extent, as they assert that the flexibility of an ERP system tends to decrease as the system grows in size and complexity.
With regard to the integration domain, a similar contradictory conclusion on the role of ERP to integration is presented in the literature. One stream of research posits that the reference model embedded in the ERP system [START_REF] Kallinikos | Deconstructing information packages: Organizational and behavioural implications of ERP systems[END_REF], which enforces a strict data definition across organisational units through a single database, enables integration and control [START_REF] Shang | Assessing and managing the benefits of enterprise systems: the business manager's perspective[END_REF][START_REF] Quattrone | A 'time-space odyssey': management control systems in two multinational organisations[END_REF][START_REF] Chapman | Information system integration, enabling control and performance[END_REF][START_REF] Brazel | The Effect of ERP System Implementations on the Management of Earnings and Earnings Release Dates[END_REF][START_REF] Spathis | Enterprise systems implementation and accounting benefits[END_REF]. Some of the benefits mentioned in the literature after an ERP implementation are: reporting capability [START_REF] Brazel | The Effect of ERP System Implementations on the Management of Earnings and Earnings Release Dates[END_REF], information quality [START_REF] Häkkinen | Life after ERP implementation: Long-term development of user perceptions of system success in an after-sales environment[END_REF], decision-making [START_REF] Spathis | Enterprise systems implementation and accounting benefits[END_REF] and strategic alliance [START_REF] Shang | Assessing and managing the benefits of enterprise systems: the business manager's perspective[END_REF]. Another stream of research tends to put a serious criticism toward the view that ERP implementation will enable organisational integration. Quattrone and Hopper [START_REF] Quattrone | What is IT?: SAP, accounting, and visibility in a multinational organisation[END_REF], for example, argue that the ERP system is at best a belief that activities can be integrated by making transactions visible and homogenous. Dechow and Mouritsen [START_REF] Dechow | Enterprise resource planning systems, management control and the quest for integration[END_REF] explicitly support this view by indicating that: "[The] ERP systems do not define what integration is and how it is to be developed". They argue that it is not possible to manage integration around the ERP systems, or any other IS systems. Regularly, any other means of integration but IS is more fruitful for organisational integration and control, such as a lunch room observation. In many cases, it is argued that integration can only be achieved through a willingness to throw away some data and integrate less information [START_REF] Dechow | Management Control of the Complex Organization: Relationships between Management Accounting and Information Technology In[END_REF].
Theoretical background
A review of IS use research [START_REF] Pedersen | Modifying adoption research for mobile interent service adoption: Cross-disciplinary interactions In[END_REF] has indicated that there are three main explanatory views which are widely used to explain IS use. First, the utilitarian view holds that users are rational in their choice of system use. This stream of research often employs the technology acceptance model [START_REF] Davis | Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology[END_REF] or media richness theory [START_REF] Daft | Organizational Information Requirements, Media Richness and Structural Design[END_REF] to explain system use. Second, the social influence view deems that social mechanisms are of importance in enforcing system use in particular social contexts [START_REF] Fishbein | Belief, attitude, intention and behaviour: An introduction to theory and research[END_REF]. Third and last, the contingency view [START_REF] Drazin | Alternative Forms of Fit in Contingency Theory[END_REF] explains that people decide to use or not to use systems through personal characteristics and situational factors. Factors such as behavioural control [START_REF] Taylor | Understanding Information Technology Usage: A Test of Competing Models[END_REF], as well as skills and recipient attributes [START_REF] Treviño | Making Connections: Complementary Influences on Communication Media Choices, Attitudes, and Use[END_REF], serve as explanations for system use/non-use.
Being aware of these theoretical alternatives in the literature, the author chooses to approach this research through the lens of ST. The author is convinced that the theory has the potential to uncover ERP use based on the utilitarian view. ST is appealing to the ERP system use study because the flexible yet integrative roles of budgeting fit into the contradiction discussion in social sciences research. It has been discussed that most modern theories along with social practices represent contradictions in themselves [START_REF] Robey | Accounting for the Contradictory Organizational Consequences of Information Technology: Theoretical Directions and Methodological Implications[END_REF]. Anthony Giddens, the founder of ST, explicitly supports the aforementioned argument. He writes: "don't look for the functions social practices fulfil, look for the contradiction they embody!" [START_REF] Giddens | Central problems in social theory[END_REF].
The heart of ST is an attempt to treat human actions and social structures as a duality rather than a dualism. To achieve this, Giddens bridges the two opposing philosophical views of functionalism and interpretivism. Functionalism holds that social structures are independent of human actions. Interpretivism, on the contrary, holds that social structures exist only in human minds. It is maintained that structures exist as human actors apply them. They are the medium and outcome of human interactions. ST is appealing to IS research because of its vast potential to uncover the interplay of people with technology [START_REF] Poole | Structuration theory in information systems research: Methods and controversies In[END_REF][START_REF] Walsham | Information systems strategy formation and implementation: The case of a central government agency[END_REF].
This paper focuses particularly on one element of ST, which is the concept of conflict and contradiction. According to Walsham [START_REF] Walsham | Cross-Cultural Software Production and Use: A Structurational Analysis[END_REF], this concept is largely ignored in the literature as well as in the IS research. Giddens defines contradiction as "an opposition or disjunction of structural principles of social systems, where those principles operate in terms of each other but at the same time contravene one another" [START_REF] Giddens | Central problems in social theory[END_REF]. To supplement contradiction which occurs at the structural level, he conceptualises conflict, which is claimed to occur at the level of social practice. In his own words, conflict is a "struggle between actors or collectives expressed as definite social practices" [START_REF] Giddens | Central problems in social theory[END_REF]. Based on the original writing, Walsham [START_REF] Walsham | Cross-Cultural Software Production and Use: A Structurational Analysis[END_REF] interprets conflicts as the real activity and contradiction as the potential basis for conflict which arises from structural contradictions.
This theorising has immediate application to the study of ERP systems use in budgeting. It is deemed that the flexibility (in decision-making) and integration (in management control) inherent in budgeting are the real activities that face business controllers in their daily operations with budgeting. Meanwhile, ERP systems and budgeting are treated as two different social structures [START_REF] Orlikowski | The Duality of Technology: Rethinking the Concept of Technology in Organizations[END_REF] which form the potential basis for conflict due to the clash between these structures. The next section discusses the research method and the case organisations involved in this study.
Research method and case description
This study employs an interpretative case study method according to Walsham [START_REF] Walsham | Interpretive Case Studies in IS Research: Nature and Method[END_REF]. The primary research design is a multiple case study [START_REF] Eisenhardt | Theory building from cases: Opportunities and challenges[END_REF] in which the researcher investigates a single phenomenon [START_REF] Gerring | What is a case study and what is it good for?[END_REF], namely the use of ERP systems in budgeting. This research design is based on rich empirical data [START_REF] Eisenhardt | Theory building from cases: Opportunities and challenges[END_REF][START_REF] Eisenhardt | Building Theories from Case Study Research[END_REF]; therefore it tends to generate a better explanation in response to the initial research aim to describe and explain ERP system use in budgeting.
Eleven for-profit organisations from Thailand are included in this study. To be eligible for the study, these organisations meet the following three criteria. First they have installed and used an ERP system for finance and accounting functions for at least two years to ensure system maturity [START_REF] Nicolaou | Firm Performance Effects in Relation to the Implementation and Use of Enterprise Resource Planning Systems[END_REF]. Second, they employ budgeting as the main management accounting control. Third they are listed on a stock exchange to ensure size and internal control consistency due to stock market regulations [START_REF] Grabski | A review of ERP research: A future agenda for accounting information systems[END_REF].
This research is designed with triangulation in mind [START_REF] Miles | Qualitative data analysis : an expanded sourcebook[END_REF] in order to improve the validity of the research findings. Based on Denzin [START_REF] Denzin | The Reserach Act: A Theoretical Introduction to Sociological Methods[END_REF]'s triangulation typologies, methodological triangulation is applied in this study. Interviews, which are the primary data collection method, are conducted with twenty-one business controllers in eleven profit organisations in Thailand in autumn 2011. These interviews are conducted at the interviewees' locations; therefore, data from several other sources such as internal documentation and system demonstrations are available to the researcher for the methodological triangulation purpose. The interviews follow a semi-structured format and last for approximately one to two hours on average. Interview participants are business controllers who are directly responsible for budgeting as well as IS technologies within their organisations; they include, for example, the chief financial controller (CFO), accounting vice president, planning vice president, accounting policy vice president, management accounting manager, business analyst, and business intelligence manager. Appendix 1 provides an excerpt of the interview guide. All interview participants have been working for their current organisations for a considerable amount of time, ranging between two and twenty years; therefore it is deemed that they are knowledgeable about the subject under investigation. All interviews are recorded, transcribed and analysed in the Nvivo8 data analysis software. Coding is performed following the inductive coding technique [START_REF] Miles | Qualitative data analysis : an expanded sourcebook[END_REF] using a simple two-level scheme: an open-ended general etic coding followed by a more specific emic coding, in order to allow maximum interweaving within the data analysis. Appendix 2 provides an example of the coding process performed in this research.
With regard to the case companies, the organisations selected represent core industries of Thailand such as the energy industry (Cases A-C), the food industry (Cases D-G) and the automobile industry (Cases H and I). The energy group is the backbone of Thailand's energy production chain, which accounts for more than half of the country's energy demands. The food industry group includes business units of global food companies and Thai food conglomerates which export foods worldwide. The automobile industry group is directly involved in the production and distribution chains of the world's leading automobile brands. For the two remaining cases, Case J is a Thai business unit of a global household electronic appliance company. Case K is a Thai hospitality conglomerate which operates numerous five-star hotels and luxury serviced apartments throughout the Asia Pacific region. In terms of IS technologies, all of these companies employ both ERP systems and spreadsheets (SSs) for budgeting functions. However, some also have access to BI applications. Some companies employ off-the-shelf BI solutions for budgeting purposes, such as the Cognos BI systems. Nevertheless, some companies choose to develop their own BI systems in collaboration with IS/IT consultants. This type of in-house BI is referred to as "own BI". Table 1 provides a clear description of each case organisation. The next section presents the data analysis obtained from these organisations.
Analysis
The analysis is presented based on the theoretical section presented earlier. It starts with the 'conflict' between (1) the ERP system and flexibility and (2) the ERP system and integration at the level of the four budgeting activities. These two sections aim to explain how the ERP systems are used or not used in budgeting. Later on, the paper proceeds to discuss the 'contradiction' between the ERP system and budgeting at a structural level in order to suggest why the ERP systems are used or not used to support budgeting activities.
Conflict at the activity level: ERP system and flexibility
Flexibility, defined as business controllers' discretion to use IS technologies for budget-related decision-making [START_REF] Ahrens | Accounting for flexibility and efficiency: A field study of management control systems in a restaurant chain[END_REF], is needed throughout the budgeting process.
Based on a normal budgeting cycle, there are two important activities in relation to the flexibility definition: (1) budget construction, and (2) budget reporting. These two activities require business controllers to construct a data model on an IS technology which takes into account the complex environmental conditions [START_REF] Frow | Continuous budgeting: Reconciling budget flexibility with budgetary control[END_REF][START_REF] Chenhall | Management control systems design within its organizational context: findings from contingency-based research and directions for the future[END_REF] to determine the best possible alternatives.
In the first activity of budget construction, this process requires a high level of flexibility because budgets are typically constructed in response to specific activities and conditions presented in each business unit. The ERP system is not called upon for budget construction in any case company because of the following two reasons: (1) the technology is developed in a generic manner such that it cannot be used to support any specific budgeting process. The Vice President Information Technology in Case I mentions: "SAP [ERP] is too generic1 for budgeting. […] They [SAP ERP developers] have to develop something that perfectly fits with the nature of the business, but I know it is not easy to do because they have to deal with massive accounting codes and a complicated chart of accounts". This suggestion is similar to the reason indicated by the Financial Planning Manager in Case F who explains that her attempt to use an ERP system for budgeting was not successful because "SAP [ERP] has a limitation when it comes to revenue handling. It cannot handle any complicated revenue structure". (2) The technology is not flexible enough to accommodate changes in business conditions which are the keys to forecasting future business operations. The Central Accounting Manager in Case G suggests that the ERP system limits what business controllers can do with their budgeting procedures in connection with volatile environments. She explicitly mentions that: "our [budgeting] requirements change all the time. The ERP system is fixed; you get what the system is configured for. It is almost impossible to alter the system. Our Excel [spreadsheets] can do a lot more than the ERP system. For example, our ERP system does not contain competitor information. In Excel, I can just create another column and put it in".
In the second activity of budget reporting, all cases run basic financial accounting reports from the ERP systems, and then they further edit the reports to fit their managerial requirements and variance analysis in spreadsheets. The practice is also similar in Cases A, B and E, where the ERP systems are utilised for budget monitoring (see more discussion in the next section). For example, the Corporate Accounting Manager in Case D indicates how the ERP system is not flexible for reporting and how he works around it: "When I need to run a report from the ERP system, I have to run many reports then I mix them all in Excel [spreadsheets] to get exactly what I want". The Business Intelligence Manager in Case K comments on why she sees that the ERP system is not flexible enough for variance analysis: "It is quite hard to analyse budgeting information in the ERP system. It is hard to make any sense out of it because everything is too standardised".
In summary, the empirical data suggests the ERP systems are not used to support the flexibility domain in budgeting since that there is a clear conflict between the ERP system and the flexibility required in budgeting activities. The ERP systems put limitations on what business controllers can or cannot do with regard to flexibility in budgeting. For example business controller cannot perform complicated business forecasting which is necessary for budget construction on the ERP system. This conflict is clearly addressed by the Financial Planning Manager in Case F who states: "The SAP [ERP] functions are not flexible enough [for budgeting] but it is quite good for [financial] accounting".
Conflict at the activity level: ERP system and integration
Integration, defined as the adoption of IS technologies to standardise data definitions and structures across data sources [START_REF] Goodhue | The impact of data Integration on the costs and benefits of information systems[END_REF], is needed for budget control. Based on a normal budgeting cycle, there are two important activities in relation to the definition of integration: (1) budget consolidation, and (2) budget monitoring. Various departmental budgets are consolidated together at an organisational level, which is subsequently used for comparison with actual operating results generated from financial accounting for monitoring purposes.
In the first activity of budget consolidation, none of the case companies is reported to be using the ERP system for this function. The majority of budgets are constructed and consolidated outside the main ERP system, typically in spreadsheets (except Case B, which uses a mixture of spreadsheets and BI). The CFO in Case H gives an overview of the company budgeting process: "We do budgeting and business planning processes on Excel [spreadsheets]. It is not only us that do it like this. All of the six [Southeast Asian] regional companies also follow this practice. Every company has to submit budgets on spreadsheets to the regional headquarters. The budget consolidation is also completed on spreadsheets". Regardless of a company's choice to bypass the ERP system for budget consolidation, all the case companies are able to use their ERP systems to prepare and consolidate financial statements for a financial accounting purpose at a specific company level, but not necessarily at a group level. These financial accounting statements will be used to support the second activity of budget monitoring.
In the second activity of budget monitoring, three case companies (Cases A, B and E) report that they use their ERP systems for budget monitoring purposes. The Planning Vice President in Case B mentions: "SAP [ERP] is more like a place which we put budgeting numbers into. We use it to control budgets. We prepare budgets outside the system but we put the final budget numbers into it for a controlling purpose so that we can track budget spending in relation to the purchasing function in SAP [ERP]". A similar use of the ERP systems is presented in Cases A and E, where budgets are loaded into SAP ERP Controlling (CO) and Project System (PS) modules for budget spending monitoring. Note that only the final budget numbers (after budget consolidation in spreadsheets) are loaded into the ERP system for a control purpose alone. The ERP system does not play a part in any budget construction processes in these three cases, as it is mentioned in the previous section that budget construction is entirely achieved outside the main ERP system.
In conclusion, the empirical data suggests that the ERP systems are not widely used to support the integration domain in budgeting. However the empirical data suggests that the ERP systems have the potential to support budget integration as it has been shown earlier that all case companies use the ERP system to prepare financial statements and some cases use the ERP systems to monitor budget spending/achievement. Regardless of the potential that the ERP systems offer, these companies have not widely used the ERP systems to support budgeting practice.
Companies have yet to realise this hidden potential of the ERP system [START_REF] Kallunki | Impact of enterprise resource planning systems on management control systems and firm performance[END_REF] to integrate currently separated financial accounting (e.g. financial statement preparation) and management accounting (e.g. budgeting) practices.
Contradiction at the structural level
Based on the discussions at the two activity levels presented in earlier sections, this section builds on the concept of contradiction in ST to explain how and why the ERP systems are used or not used in budgeting. Budgeting as a social practice is deemed to operate in terms of flexibility and integration, while at the same time these contravene each other. It has been shown earlier that the four main budgeting activities in a typical budgeting cycle (budget construction, budget consolidation, budget monitoring and budget reporting) belong equally to both the integration and flexibility domains. With regards to the four budgeting activities, it has been shown that the four main budgeting remain outside the main ERP systems with an exception of the budget monitoring activity alone. In this activity, a minority of case companies use the ERP systems to support this work function. It is also been noted that the ERP systems have the potential to consolidate budgeting information but it seems that companies have not yet decided to utilise this capability offered in the systems.
Explanations based on the ulitarian view through the conflict and contradiction concept in ST deem that the ERP systems are not used in the budgeting activities because the systems only have the capabilities to support the integration function alone. Compared with budgeting practice which needs flexibility in decision-making as well as integration in management control, the ERP systems are obviously not suitable to support budgeting. Figure 1 shows the overall discussion about the contradiction between the ERP systems and budgeting at a structural level. It explains the shifts in the roles of budgeting activities from flexibility in activity one, budget construction, to integration in activity two, budget consolidation, and so on. It also elaborates how the ERP systems can have the potential to support some particular activities (such as budget consolidation and budget monitoring) but not the others.
Flexibility
ERP systems
Other IS technologies Other IS technologies So why do the ERP systems support the integration but not the flexibility in budgeting? Despite all the endlessly fancy claims made by numerous ERP vendors, the basic assumptions of the ERP system are a reference model which enforces underlying data, business process and organisational structure. The procedures described by the system must be strictly adhered to throughout organisational task executions [START_REF] Kallinikos | Deconstructing information packages: Organizational and behavioural implications of ERP systems[END_REF]. Therefore it is hard or even impossible to alter the systems to change in response to new business requirements or circumstances because such change is contradictory to the most basic principle of the systems.
So how can we readdress the limitations of ERP systems to support the flexibility needs in budgeting? As Figure 1 explains, other types of IS technologies such as spreadsheets and business intelligence (BI) must be called upon to support the activities that the ERP systems cannot accommodate [START_REF] Hyvönen | A virtual integration-The management control system in a multinational enterprise[END_REF]. These technologies are built and designed from different assumptions from those of the ERP systems; therefore they can accommodate the flexibility in budgeting. These systems can be combined to support strategic moves made by top management according to the indication from the LOC framework [START_REF] Simons | How New Top Managers Use Control Systems as Levers of Strategic Renewal[END_REF].
Conclusions and implications
This paper investigates how and why the ERP systems are used or not used in budgeting. It builds from the concepts of conflict and contradiction in ST, which is based on the ulitarian view of IS technology use perspective. Budgeting is treated as a social practice which portrays the two consecutive but contradictory roles of flexibility and integration. Using empirical data from eleven case companies in Thailand, the analysis at the activity level reveals that the ERP systems are not used to support the flexibility domain in budgeting because the systems impede business controllers to perform flexibility-related activities in budgeting, namely budget construction and budget reporting. The analysis on the integration-related budgeting function reveals that the ERP are not widely used to support the activities either. However it strongly suggests the system capability to support the integration function in budgeting as the systems are widely used to generate financial reports along with the evidence that some case companies are using the ERP systems for budget monitoring purpose. The analysis at the structuration level concludes why the ERP systems are not widely used to support budgeting. It is deemed that there is a contradictory relationship between the ERP systems and budgeting because the systems operate only in terms of integration, while the budgeting process assumes both roles. For this reason, other types of IS technologies such as spreadsheets and BI are called upon to accommodate tasks that cannot be supported in the main ERP systems.
This research result concurs with previous research conclusion that the ERP systems may post a flexibility issue to organisations because the systems cannot be tailored or changed in respond to business conditions or user requirements [START_REF] Booth | The impacts of enterprise resource planning systems on accounting practice -The Australian experience[END_REF][START_REF] Rom | Enterprise resource planning systems, strategic enterprise management systems and management accounting: A Danish study[END_REF][START_REF] Soh | Cultural fits and misfits: Is ERP a universal solution?[END_REF][START_REF] Akkermans | The impact of ERP on supply chain management: Exploratory findings from a European Delphi study[END_REF]. Hence it does not support research findings which conclude the ERP systems promote flexibility in organisations [START_REF] Brazel | The Effect of ERP System Implementations on the Management of Earnings and Earnings Release Dates[END_REF]. In addition, it corresponds to previous findings which indicate that the ERP systems may assist integration in organisations [START_REF] Shang | Assessing and managing the benefits of enterprise systems: the business manager's perspective[END_REF][START_REF] Quattrone | A 'time-space odyssey': management control systems in two multinational organisations[END_REF]. At least, the ERP systems can support a company-wide data integration which is significant in accounting and management control but not necessary a companywide business process integration [START_REF] Dechow | Enterprise resource planning systems, management control and the quest for integration[END_REF][START_REF] Dechow | Management Control of the Complex Organization: Relationships between Management Accounting and Information Technology In[END_REF].
The use of the ulitarian view to generate explanations for ERP system use/non-use is still somewhat limited. There are many aspects that the ulitarian view cannot capture. For example, the ulitarian view cannot provide an explanation as to why the ERP systems are not widely used to support the budget integration functions despite the system capabilities for financial consolidations and budget monitoring. This suggests that other views, such as the social view as well as the contingency view suggested in prior literature, are necessary in explaining the ERP system use/non-use. Therefore future IS use research should employ theories and insights from many perspectives to gain insights into the IS use/non-use phenomena.
The results presented in this study should be interpreted with a careful attention. Case study, by definition, makes no claims to be typical. The nature of case study is based upon studies of small, idiosyncratic and predominantly non-numerical sample set, therefore there is no way to establish the probability that the data can be generalised to the larger population. On the contrary, the hallmark of case study approach lies in theory-building [START_REF] Eisenhardt | Theory building from cases: Opportunities and challenges[END_REF] which can be transposed beyond the original sites of study.
The research offers two new insights to the IS research community. First, it explains the ERP system limited use explanation in budgeting from an ulitarian perspective. It holds that the ERP systems have the potential to support only half of the budgeting activities. Explicitly, the systems can support the integration in management control but not the flexibility in decision-making. Second, it shows that business controllers recognise such limitations imposed by the ERP systems and that they choose to rely on other IS technologies especially spreadsheets to accomplish their budgeting tasks. Spreadsheets use is problematic in itself, issues such as spreadsheets errors and frauds are well-documented in the literature. Therefore academia should look for solutions to improve professionally designed IS technologies (e.g., the ERP system or the BI) use in organisations and reduce spreadsheets reliance in budgeting as well as in other business activities.
For practitioners, this research warns them to make informed decisions about IT/IS investments. ERP vendors often persuade prospective buyers to think that their systems are multipurpose. This research shows at least one of the many business functions in which the ERP systems do not excel. Thus any further IT/IS investments must be made with a serious consideration to the business function that needs support, as well the overall business strategies guiding the entire organisation.
Figure 1
1 Figure 1 Contradiction between budgeting and ERP system
Table 1 .
1 Case company description
Case Main Activities Owner ERP SSs BI
A Power plant Thai SAP Yes Magnitude
B Oil and Petrochemical Thai SAP Yes Cognos
C Oil refinery Thai SAP Yes -
D Frozen food processor Thai SAP Yes -
E Drinks and dairy products Foreign SAP Yes Magnitude
F Drinks Foreign SAP Yes Own BI
G Agricultural products Thai BPCS Yes -
H Truck Foreign SAP Yes -
I Automobile parts Thai SAP Yes Own BI
J Electronic appliances Foreign JDE Yes Own BI
K Hotels and apartments Thai Oracle Yes IDeaS
This italic shown in the original interview text represents the author's intention to emphasize certain information in the original interview text. This practice is used throughout the paper.
Appendix 1: Interview guide
How do you describe your business unit information? What IS technologies are used in relation to budgeting procedure? What are the budgeting procedures in your organisation? What are the characteristics of pre-budget information gathering and analysis? How does your business organisation prepare a budget? How does your business organisation consolidate budget(s)? How does your business organisation monitor budgets? How does your business organisation prepare budget-related reports? How does your organisation direct strategic management? How does your organisation control normative management? From what I understand I think SAP is developing an industrial product line but the budgeting function is very small so they think that it might not worth an investment. First I think that is why they brought in the BI. Second, I think budgeting is something for business students. So they have to develop something that perfectly fits with the nature of the business, but I know it is not easy to do because they have to deal with massive accounting codes and a complicated chart of accounts.
Consequences | 48,946 | [
"1003597"
] | [
"344927"
] |
01484775 | en | [
"spi",
"math"
] | 2024/03/04 23:41:48 | 2018 | https://hal.science/hal-01484775/file/Surprisepressureversion3versionTCAndromarch7.pdf | Thomas Carraro
email: thomas.carraro@iwr.uni-heidelberg.de
Eduard Marušić-Paloka
Andro Mikelić
email: mikelic@univ-lyon1.fr
Effective pressure boundary condition for the filtration through porous medium via homogenization
Keywords: homogenization, stationary Navier-Stokes equations, stress boundary conditions, effective tangential velocity jump, porous media
We present homogenization of the viscous incompressible porous media flows under stress boundary conditions at the outer boundary. In addition to Darcy's law describing filtration in the interior of the porous medium, we derive rigorously the effective pressure boundary condition at the outer boundary. It is a linear combination of the outside pressure and the applied shear stress. We use the two-scale convergence in the sense of boundary layers, introduced by Allaire and Conca [SIAM J. Math. Anal., 29 (1997), pp. 343-379] to obtain the boundary layer structure next to the outer boundary. The approach allows establishing the strong L 2 -convergence of the velocity corrector and identifica-
Introduction
The porous media flows are of interest in a wide range of engineering disciplines including environmental and geological applications, flows through filters etc. They take place in a material which consists of a solid skeleton and billions of interconnected fluid filled pores. The flows are characterised by large spatial and temporal scales. The complex geometry makes direct computing of the flows, and also reactions, deformations and other phenomena, practically impossible. In the applications, the mesoscopic modeling is privileged and one search for effective models where the information on the geometry is kept in the coefficients and which are valid everywhere. The technique which allows replacing the physical models posed at the microstructure level by equations valid globally, is called upscaling. Its mathematical variant, which gives also the rigorous relationship between the upscaled and the microscopic models is the homogenization technique.
It has been applied to a number of porous media problems, starting from the seminal work of Tartar [START_REF] Tartar | Convergence of the homogenization process[END_REF] and the monograph [START_REF] Sanchez-Palencia | Non-homogeneous media and vibration theory[END_REF]. Many subjects are reviewed in the book [START_REF] Hornung | Homogenization and Porous Media[END_REF]. See also the references therein.
Frequently, one has processes on multiple domains and model-coupling approaches are needed. Absence of the statistical homogeneity does not allow direct use of the homogenization techniques. Examples of situations where the presence of an interface breaks the statistical homogeneity are
• the flow of a viscous fluid over a porous bed,
• the forced infiltration into a porous medium.
2
The tangential flow of an unconfined fluid over a porous bed is described by the law of Beavers and Joseph [START_REF] Beavers | Boundary conditions at a naturally permeable wall[END_REF] and it was rigorously derived in [START_REF] Jäger | On the interface boundary condition of Beavers, Joseph, and Saffman[END_REF] and [START_REF] Marciniak-Czochra | Effective pressure interface law for transport phenomena between an unconfined fluid and a porous medium using homogenization[END_REF] using a combination of the homogenization and boundary layer techniques. The forced injection problem was introduced in [START_REF] Levy | On boundary conditions for fluid flow in porous media[END_REF] and the interface conditions were rigorously established and justified in [START_REF] Carraro | Effective interface conditions for the forced infiltration of a viscous fluid into a porous medium using homogenization[END_REF].
A particular class of the above problems is derivation of the homogenized external boundary conditions for the porous media flows. In the case of the zero velocity at the external boundary of the porous medium, one would impose zero normal component of the Darcy velocity as the homogenized boundary condition. The behavior of the velocity and pressure field close to the flat external boundary, with such boundary condition, has been studied in [START_REF] Jäger | On the Flow Conditions at the Boundary Between a Porous Medium and an Impervious Solid[END_REF], using the technique from [START_REF] Jäger | On the boundary condition on the contact interface between a porous medium and a free fluid[END_REF]. The error estimate in 2D, for an arbitrary geometry has been established in [START_REF] Marušić-Paloka | An Error Estimate for Correctors in the Homogenization of the Stokes and Navier-Stokes Equations in a Porous Medium[END_REF].
The case of the velocity boundary conditions could be considered as "intuitively" obvious. Other class of problems arises when we have a contact of the porous medium with another fluid flow and the normal contact force is given at the boundary. It describes the physical situation when the upper boundary of the porous medium in exposed to the atmospheric pressure and wind (see e.g. [START_REF] Coceal | Canopy model of mean winds through urban areas[END_REF]). Or, more generally, when the fluid that we study is in contact with another given fluid. Assuming that the motion in porous medium is slow enough that the interface Σ between two fluids can be seen as immobile. Intuitively, it is expected that the homogenized pressure will take the prescribed value at the boundary.
In this article we study the homogenization of the stationary Navier-Stokes equations with the given normal contact force at the external boundary and we will find out that the result is more rich than expected.
Setting of the problem
We start by defining the geometry. Let and d be two positive constants.
Let Ω = (0, ) × (-d, 0) ⊂ R 2 be a rectangle. We denote the upper boundary by
Σ = {(x 1 , 0) ∈ R 2 ; x 1 ∈ (0, ) } .
The bottom of the domain is denoted by
Γ = {(x 1 , -d) ∈ R 2 ; x 1 ∈ (0, ) } .
We set Γ = ∂Ω\Σ . Let A ⊂⊂ R 2 be a smooth domain such that A ⊂ (0, 1) 2 ≡ Y . The unit pore is Y * = Y \A. Now we choose the small parameter ε 1 such that ε = /m, with m ∈ N and define
T ε = {k ∈ Z 2 ; ε(k + A) ⊂ Ω } , Y * ε,k = ε(k + Y * ) , A ε k = ε (k + A).
The fluid part of the porous medium is now
Ω ε = Ω\ k∈Tε ε (k + A). Finally, B ε = k∈Tε ε (k + A)
is the solid part of the porous medium and its boundary is
S ε = ∂B ε . Σ Γ B ε ' i A • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • •
On Σ we prescribe the normal stress and Γ is an impermeable boundary. In the dimensionless form, the Stokes problem that we study reads
-µ ∆u ε + ∇p ε = F , div u ε = 0 in Ω ε , (1)
T(u ε , p ε ) e 2 = H = (P, Q) on Σ, u ε = 0 on S ε ∪ Γ, (2)
(u ε , p ε ) is -periodic in x 1 . (3)
Here T(v, q) denotes the stress tensor and D v the rate of strain tensor
T(v, q) = -2µ Dv + q I , Dv = 1 2 ∇v + (∇v) t
and µ is a positive constant.
Assumption 1. We suppose ∂A ∈ C 3 , F ∈ C 1 (Ω) 2 and P = P (x 1 ), Q = Q(x 1 ) being elements of C 1 per [0, ].
For the existence, uniqueness and regularity of solutions to Stokes problem (1)-
, under Assumption 1, we refer e.g. to [START_REF] Boyer | Mathematical Tools for the Study of the Incompressible Navier-Stokes Equations and Related Models[END_REF], Sec. 4.7.
Furthermore, we consider the full stationary incompressible Navier-Stokes system
-µ ∆u 1,ε + (u 1,ε ∇)u 1,ε + ∇p 1,ε = F , div u 1,ε = 0 in Ω ε (4) T(u 1,ε , p 1,ε ) e 2 = H = (P, Q) on Σ, u 1,ε = 0 on S ε ∪ Γ (5) (u 1,ε , p 1,ε ) is -periodic in x 1 . (6)
Existence of a solution for problem (4)-( 6) is discussed in Sec. 5.
Our goal is to study behavior of solutions to (1)-( 3) and ( 4)-( 6) in the limit when the small parameter ε → 0.
The main result
Our goal is to describe the effective behavior of the fluid flow in the above described situation. The filtration in the bulk is expected to be described by Darcy's law and we are looking for the effective boundary condition on the upper boundary Σ. To do so, we apply various homogenization techniques, such as two-scale convergence ( [START_REF] Nguetseng | A general convergence result for a functional related to the theory of homogenization[END_REF] , [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]) and the two-scale convergence for boundary layers ( [START_REF] Allaire | Boundary Layers in the Homogenization of a Spectral Problem in Fluid-Solid Structures[END_REF]). We prove the following result:
Theorem 1. Let us suppose Assumption 1 and let (u ε , p ε ) be the solution of problem ( 1)-(3).
90
Then there exists an extension of p ε to the whole Ω, denoted again by the same symbol, such that
p ε → p 0 strongly in L 2 (Ω), ( 7
)
where p 0 is the solution of problem div K(∇p 0 -F) = 0 in Ω, ( 8
)
p 0 is -periodic in x 1 , n • K (∇p 0 -F) = 0 on Γ, (9)
p 0 = C π P + Q on Σ, (10)
with K the permeability tensor, defined by (83), and C π the boundary layer pressure stabilisation constant given by (41).
Next, let (w, π) be the solution of the boundary layer problem (36)-(38).
Then, after extending u ε and w by zero to the perforations, we have
95 u ε (x) -ε P (x 1 ) w(x/ε) ε 2 V weakly in L 2 (Ω), (11)
u ε (x) ε 2 P (x 1 ) G * w 1 (y) dy δ Σ e 1 + V weak* in M(Ω), (12)
u ε -ε P (x 1 ) w x ε ε 2 - 2 k=1 w k x ε F k - ∂p 0 ∂x k → 0 strongly in L 2 (Ω), ( 13
)
where G * is the boundary layer internal interface fluid/solid given by ( 28), V satisfies the Darcy law An analogous result holds for the homogenization of the stationary Navier-Stokes equations ( 4)-( 6)
V = K(F -∇p 0 ) , M (
Theorem 2. Under the assumptions on the geometry and the data from Theorem (1), there exist solutions (u 1,ε , p 1,ε ) of problem ( 4)-( 6) such that convergences ( 7), ( 11)-( 13) take place.
Proof of Theorem 1
The proof is divided in several steps. First we derive the a priori estimates.
Then we pass to the two-scale limit for boundary layers, in order to determine the local behavior of the solution in vicinity of the boundary. Once it is achieved, we subtract the boundary layer corrector from the original solution and use the classical two-scale convergence to prove that the residual converges towards the limit that satisfies the Darcy law. At the end we prove the strong convergences.
Step one: A priori estimates
We first recall that in Ω ε Poincaré and trace constants depend on ε in the following way
|φ| L 2 (Ωε) ≤ C ε |∇φ| L 2 (Ωε) ( 14
)
|φ| L 2 (Σ) ≤ C √ ε |∇φ| L 2 (Ωε) , ∀ φ ∈ H 1 (Ω ε ) , φ = 0 on S ε (15)
We also recall that the norms |Dv| L 2 (Ωε) and |∇v| L 2 (Ωε) are equivalent, due to the Korn's inequality, which is independent of ε (see e.g. [START_REF] Boyer | Mathematical Tools for the Study of the Incompressible Navier-Stokes Equations and Related Models[END_REF]).
Here and in the sequel we assume that u ε is extended by zero to the whole Ω. In order to extend the pressure p ε we need Tartar's construction from his seminal paper [START_REF] Tartar | Convergence of the homogenization process[END_REF]. It relies on the related construction of the restriction operator, acting from the whole domain Ω to the pore space Ω ε . In our setting we deal with the functional spaces
X 2 = {z ∈ H 1 (Ω) 2 ; z = 0 for x 2 = -d } X ε 2 = {z ∈ X 2 ; z = 0 on S ε } .
Then, after [START_REF] Tartar | Convergence of the homogenization process[END_REF] and the detailed review in [START_REF] Allaire | One-Phase Newtonian Flow[END_REF], there exists a continuous restric-
120 tion operator R ε ∈ L(X 2 , X ε 2 ), such that div (R ε z) = div z + k∈Tε 1 |Y * ε,k | χ ε,k A ε k div z dx, ∀ z ∈ X 2 , |R ε z| L 2 (Ωε) ≤ C (ε |∇z| L 2 (Ω) + |z| L 2 (Ω) ) , ∀ z ∈ X 2 , |∇R ε z| L 2 (Ωε) ≤ C ε (ε |∇z| L 2 (Ω) + |z| L 2 (Ω) ) , ∀ z ∈ X 2 ,
where χ ε,k denotes the characteristic function of the set Y * ε,k , k ∈ T ε . Through a duality argument, it gives an extension of the pressure gradient and it was found in [START_REF] Lipton | Darcy's law for slow viscous flow past a trationary array of bubbles[END_REF] that the pressure extension p is given by the explicit formula
pε = p ε in Ω ε 1 |Y * ε,k | Y * ε,k p ε dx in Y * ε,k for each k ∈ T ε . (16)
For details we refer to [START_REF] Allaire | One-Phase Newtonian Flow[END_REF]. In addition, a direct computation yields
Ωε p ε div (R ε z) dx = Ω pε dx div z dx , ∀ z ∈ X 2 . (17)
Both the velocity and the pressure extensions are, for simplicity, denoted by the same symbols as the original functions (u ε , p ε ).
It is straightforward to see that:
Lemma 1. Let (u ε , p ε ) be the solution to problem (1), [START_REF] Allaire | One-Phase Newtonian Flow[END_REF]. Then there exists 125 some constant C > 0, independent of ε, such that
|∇u ε | L 2 (Ω) ≤ C √ ε ( 18
)
|u ε | L 2 (Ω) ≤ C ε 3/2 ( 19
)
|p ε | L 2 (Ω) ≤ C √ ε . (20)
Proof. We start from the variational formulation of problem (1), ( 2)
µ Ωε Du ε : Dv dx = Σ H • v dS + Ωε F • v dx, ∀ v ∈ V (Ω ε ) , (21)
V (Ω ε ) = {v ∈ H 1 (Ω ε ) 2 ; div v = 0, v = 0 on S ε ∪ Γ, v is -periodic in x 1 }
Using u ε as the test function and applying ( 14)-( 15) yield
µ Ωε |Du ε | 2 dx = Σ H • u ε dS + Ωε F • u ε dx ≤ C √ ε|Du ε | L 2 (Ωε) .
Now ( 14) implies ( 18) and ( 19). Since we have extended the pressure to the solid part of Ω, using Tartar's construction, ( 18) and ( 17) imply
|p ε | L 2 (Ω)/R = sup g∈L 2 (Ω)/R Ω p ε g dx |g| L 2 (Ω)/R = sup z∈X2 Ωε p ε div (R ε z) dx |z| H 1 (Ω) 2 ≤ C ε |∇u ε | L 2 (Ω) ,
giving the pressure estimate (20).
Step two: Two-scale convergence for boundary layers
We recall the definition and some basic compactness results for two-scale convergence for boundary layers due to Allaire and Conca [START_REF] Allaire | Boundary Layers in the Homogenization of a Spectral Problem in Fluid-Solid Structures[END_REF]. In the sequel, if the index y is added to the differential operators D y , ∇ y , div y , then the derivatives are taken with respect to the fast variables y 1 , y 2 instead of x 1 , x 2 .
Let G = (0, 1) × ( -∞ , 0) be an infinite band. The bounded sequence (φ ε ) ε>0 ⊂ L 2 (Ω) is said to two-scale converge in the sense of the boundary layers if there
exists φ 0 (x 1 , y) ∈ L 2 (Σ × G) such that 1 ε Ω φ ε (x) ψ x 1 , x ε dx → Σ G φ 0 (x 1 , y) ψ(x 1 , y) dx 1 dy , (22)
for all smooth functions ψ(x 1 , y) defined in Σ × G, with bounded support, such
that y 1 → ψ(x 1 , y 1 , y 2 ) is 1-periodic.
We need the following functional space
D 1 = {ψ ∈ C ∞ (G) ; ψ is 1 -periodic in y 1
and compactly supported in
y 2 ∈ (-∞, 0]} Now D 1 # (G) is the closure of D 1 in the norm |ψ| D 1 # (G) = |∇ψ| L 2 (G) .
It should be noticed that such functions do not necessarily vanish as y 2 → -∞. For that kind of convergence we have the following compactness result from [START_REF] Allaire | Boundary Layers in the Homogenization of a Spectral Problem in Fluid-Solid Structures[END_REF]:
Theorem 3. 1. Let us suppose 1 √ ε |φ ε | L 2 (Ω) ≤ C . ( 23
)
Then there exists φ 0 ∈ L 2 (Σ × G) and a subsequence, denoted by the same indices, such that φ ε → φ 0 two-scale in the sense of boundary layers.
2. Let us suppose
1 √ ε |φ ε | L 2 (Ω) + ε |∇φ ε | L 2 (Ω) ≤ C. ( 25
)
Then there exists
φ 0 ∈ L 2 (Σ; D 1 # (G)
) and a subsequence, denoted by the same indices, such that φ ε → φ 0 two-scale in the sense of boundary layers [START_REF] Nguetseng | A general convergence result for a functional related to the theory of homogenization[END_REF] ε ∇φ ε → ∇ y φ 0 two-scale in the sense of boundary layers . (
Using the a priori estimates, we now undertake our first passing to the limit.
Before we start we define
C = -∞ j=0 (j e 2 + ∂A ) , M = -∞ j=0 (j e 2 + A) , G * = G\ -∞ j=0 (j e 2 + A ). ( 28
)
We introduce the space D 1 #0 (G * ) defined similarly as D 1 # (G) but on G * and such that its elements have zero trace on C. Thus, we take
D 1 = {ψ ∈ C ∞ (G * ) ; ψ| C = 0 , ψ is 1 -periodic in y 1 ,
and compactly supported in y 2 ∈ (-∞, 0]} .
Then D 1 #0 (G * ) is its closure in the norm |ψ| D 1 #0 (G * ) = |∇ψ| L 2 (G * )
. Those functions do vanish as y 2 → -∞ due to the zero trace on C that prevents them to 135 tend to a constant.
Lemma 2. Let (v 0 , q 0 ) ∈ L 2 (Σ; D 1 #0 (G * )) × L 2 (Σ; L 2 loc (G *
)) be given by the boundary layer problem
-µ∆ y v 0 + ∇ y q 0 = 0, div y v 0 = 0 in G * , (29)
-2µ D y v 0 + q 0 I e 2 = H for y 2 = 0 , v 0 = 0 on C, (30)
(v 0 , q 0 ) is 1-periodic in y 1 , v 0 → 0 as y 2 → -∞ . (31)
Then
1 ε u ε → v 0 two-scale in the sense of boundary layers (32)
∇u ε → ∇ y v 0 two-scale in the sense of boundary layers .
Proof. The a priori estimates ( 19) and ( 18) and the compactness theorem 3 imply the existence of some
v 0 ∈ L 2 (Σ; D 1 #0 (G * )) such that v 0 = 0 on M and 1 ε u ε → v 0 two-scale in the sense of boundary layers ( 34
)
∇u ε → ∇ y v 0 two-scale in the sense of boundary layers . ( 35
)
Now we take the test function
z ε (x) = z x 1 , x ε ∈ D 1 #0 (G * ) 2 such that div y z = 0 and z(x 1 , • ) = 0 in M in the variational formulation for (1), (2) 2µ ε Ωε εD u ε ε : εD z ε dx - Ωε p ε div z ε dx = Σ H • z ε dS + Ωε F • z ε dx. Since ∂z ε ∂x j = ε -1 ∂z ∂y j + δ 1j ∂z ∂x 1
we get on the limit 2µ
Σ G D y v 0 (x 1 , y) : D y z(x 1 , y) dy dx 1 = Σ H• 1 0 z(x 1 , y 1 , 0)dy 1 dx 1 .
Furthermore, since div u ε = 0 it easily follows that div y v 0 = 0. Thus there exists q 0 ∈ L 2 (Σ; L 2 loc (G * )) such that (v 0 , q 0 ) satisfy ( 29)-(31).
The boundary layer corrector (v 0 , q 0 ) can be decomposed as v 0 = P (x 1 ) w(y) 145 and q 0 = P (x 1 ) π(y) + Q(x 1 ) , where
-µ∆ y w + ∇ y π = 0 , div y w = 0 in G * , (36)
(-2µ D y w + π I) e 2 = e 1 for y 2 = 0 , w = 0 on C, ( 37
) (w, π) is 1-periodic in y 1 , w → 0 as y 2 → -∞ . (38)
Problem (36), (38) is of the boundary layer type. Existence of the solution and exponential decay can be proved as in [START_REF] Jäger | On the boundary condition on the contact interface between a porous medium and a free fluid[END_REF]. We have Theorem 4. Problem (36), (38) has a unique solution (w, π)
∈ D 1 #0 (G * ) × L 2
loc (G * ). Furthermore, there exists a constant C π such that
150 |e α |y2| ( π -C π ) | L 2 (G * ) ≤ C (39) |e α |y2| w | L 2 (G * ) + |e α |y2| ∇ w | L 2 (G * ) ≤ C . ( 40
)
for some constants C, α > 0 .
In the sense of (39) we write Using (40) yields
C π = lim y2→-∞ π(y) . (41
1 0 w 2 (y 1 , y 2 ) dy 1 = 0, ∀y 2 ≤ 0. ( 42
) (42) implies G * w 2 dy = 0. ( 43
)
Remark 3. Integrating (37) with respect to y 1 yields
e 1 = 1 0 -µ ∂w 1 ∂y 2 e 1 + ∂w 2 ∂y 1 e 1 + 2 ∂w 2 ∂y 2 e 2 + π e 2 (y 1 , 0) dy 1 .
Equating the second components gives
0 = 1 0 -2 µ ∂w 2 ∂y 2 + π (y 1 , 0) dy 1 = 1 0 2 µ ∂w 1 ∂y 1 + π (y 1 , 0) dy 1 = = 1 0 π(y 1 , 0) dy 1 .
If we test (36) with w k and (80) by w and combine, we get
C π = K -1 22 1 0 w 2 1 (y 1 , 0) dy 1 + 1 0 -2µ ∂w k ∂y 2 + π k e 2 (y 1 , 0) w(y 1 , 0) dy 1 .
Finally, we denote
J = {y 2 ∈] -∞, 0] ; (y 1 , y 2 ) ∈ M , y 1 ∈]0, 1[ }. Denoting m A = min{y 2 ∈ [0, 1] ; (y 1 , y 2 ) ∈ A } , M A = max{y 2 ∈ [0, 1] : (y 1 , y 2 ) ∈ A } .
The set J is then a union of disjoint intervals We now know the behavior of (u ε , p ε ) in vicinity of Σ. To get additional information of the behavior far from the boundary we deduce the boundary layer corrector from (u ε , p ε ) and define
J 0 = ] 0, m A [ , J i =]i -1 + M A , i + m a [ , i = 1,
U ε (x) = u ε (x) -ε P (x 1 ) w(x/ε) , P ε (x) = p ε (x) -[P (x 1 ) π(x/ε) + Q(x 1 )] .
The stress tensor T(v, q) = 2µ Dv -q I for such approximation satisfies
T(U ε , P ε ) = T(u ε , p ε ) -P (x 1 ) (2µD y w -πI) - -2µε dP dx 1 w 1 w 2 /2 w 2 /2 0 = T(u ε , p ε )- - P (x 1 ) 2µ ∂w1 ∂y1 -π + 2µε dP dx1 w 1 -Q µ P (x 1 ) ∂w1 ∂y2 + ∂w2 ∂y1 + ε dP dx1 w 2 µ P (x 1 ) ∂w1 ∂y2 + ∂w2 ∂y1 + ε dP dx1 w 2 P (x 1 ) 2µ ∂w2 ∂y2 -π -Q(x 1 )
By direct computation we get
-div T(U ε , P ε ) = f ε , (44)
f ε ≡ F + µε d 2 P dx 2 1 (w + w 1 e 1 ) + dP dx 1 2µ ∂w ∂y 1 -πe 1 + µ∇ y w 1 - dQ dx 1 e 1 , ( 45
) div U ε = -ε dP dx 1 w 1 in Ω ε , (46)
U ε = 0 on S ε , U ε = -ε P (x 1 ) w(x/ε) on Γ, (47)
(-2µ D U ε + P ε I) e 2 = 0 on Σ . ( 48
)
We want to derive appropriate a priori estimates for (U ε , P ε ). However, according to (46), the divergence of U ε is still too large for our purpose. Thus we need to compute the additional divergence corrector.
Lemma 3. There exists Φ ∈ H 2 (G * ) 2 such that div y Φ = w 1 in G * , (49)
Φ is 1-periodic in y 1 , Φ = 0 on C , Φ(y 1 , 0) = Ce 2 , (50)
e γ|y2| Φ ∈ L 2 (G * ) 4 and |Φ(y 1 , y 2 )| ≤ Ce -γ|y2| , for some γ > 0. (51)
Proof. We follow [START_REF] Jäger | On the boundary condition on the contact interface between a porous medium and a free fluid[END_REF] and search for Φ in the form
Φ = ∇ y ψ + curl y h = ∂ψ ∂y 1 - ∂h ∂y 2 , ∂ψ ∂y 2 + ∂h ∂y 1 .
The function ψ solves Again, assuming that U ε is extended by zero to the pores B ε we extend P ε using the formula ( 16) to prove:
170 -∆ y ψ = w 1 (y) in G * , ∂ψ ∂n = 0 on C, (52)
∂ψ ∂y 2 = d 0 = const. for y 2 = 0, ψ is 1-periodic in y 1 , (53)
Lemma 4. |∇U ε | L 2 (Ω) ≤ C ε (55) |U ε | L 2 (Ω) ≤ C ε 2 (56)
|P ε | L 2 (Ω) ≤ C . ( 57
)
Proof. It is straightforward to see that for the right-hand side, we have
|f ε | L 2 (Ω) ≤ C .
Furthermore
f ε = F -( dP dx 1 C π + dQ dx 1 ) e 1 + g ε , with |g ε | L 2 (Ω) = O( √ ε).
The idea is to test the system (44) with
Ũε = U ε + ε 2 dP dx 1 (x 1 ) Φ x ε , ( 58
)
where Φ is constructed in lemma 3. By the construction
div Ũε = ε 2 d 2 P dx 2 1 Φ ε 1 with Φ ε (x) = Φ(x/ε) . Thus |div Ũε | L 2 (Ω) ≤ C ε 5/2 .
The weak form of (44) reads
2µ Ωε D U ε : D z dx - Ωε P ε div z dx = Ωε f ε z dx , ∀ z ∈ X ε 2 (59) so that Ωε P ε div z dx ≤ C ( |D U ε | L 2 (Ωε) + ε )| z| H 1 (Ωε) , ∀ z ∈ X ε 2 . ( 60
)
Next we use identity [START_REF] Jäger | On the Flow Conditions at the Boundary Between a Porous Medium and an Impervious Solid[END_REF] to obtain the estimate
Ω P ε div z dx = Ωε P ε div (R ε z) dx ≤ C ε ( |D U ε | L 2 (Ωε) +ε ) |z| H 1 (Ω) , (61)
∀ z ∈ X 2 . Since div : X 2 → L 2 (Ω) is a surjective continuous operator, (61) yields | P ε | L 2 (Ω) ≤ C ( ε -1 |D U ε | L 2 (Ωε) + 1 ) . ( 62
)
Now we take z = Ũε as a test function in (59). To be precise, we observe that Ũε is not exactly in X ε 2 since it is not equal to zero for x 2 = -d. But, that value is exponentially small, of order e -γ/ε , so it can be easily corrected by lifting its boundary value by a negligibly small function. Thus, slightly abusing the notation, we consider it as an element of X ε 2 . Then, due to the (58)
Ωε P ε div Ũε dx = ε 2 Ωε P ε d 2 P dx 2 1 Φ ε 1 dx ≤ C ε |D U ε | L 2 (Ωε) + C ε 2 . (63)
Consequently, we get (55)-(57) .
At this point we use the classical two-scale convergence (see e.g. [START_REF] Nguetseng | A general convergence result for a functional related to the theory of homogenization[END_REF], [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]).
For readers' convenience we recall basic definitions and compactness results.
Let Y = [0, 1] 2 and let C ∞ # (Y ) be the set of all C ∞ functions defined on Y and periodic with period 1. We say that a sequence (v ε ) ε>0 , from L 2 (Ω), twoscale converges to a function
v 0 ∈ L 2 (Ω) if lim ε→0 Ω v ε (x) ψ x, x ε dx → Ω Y v 0 (x, y) ψ(x, y)dx dy , for any ψ ∈ C ∞ 0 (Ω; C ∞ # (Y )
). For such convergence we have the following compactness result from [START_REF] Allaire | Homogenization and two-scale convergence[END_REF] and [START_REF] Nguetseng | A general convergence result for a functional related to the theory of homogenization[END_REF] that we shall need in the sequel Theorem 5.
• Let (v ε ) ε>0 be a bounded sequence in L 2 (Ω). Then we can extract a subsequence that two-scale converges to some v 0 ∈ L 2 (Ω × Y ).
• Let (v ε ) ε>0 be a sequence in H 1 (Ω) such that v ε and ε ∇v ε are bounded in L 2 (Ω). Then, there exists a function v 0 ∈ L 2 (Ω; H 1 # (Y )) and a subsequence for which
v ε → v 0 in two-scales, (64)
ε ∇v ε → ∇ y v 0 in two-scales. ( 65
)
Lemma 5. Let (U ε , P ε ) be the solution of the residual problem ( 46)-( 48). Then
ε -2 U ε → U 0 in two-scales, (66)
ε -1 ∇U ε → ∇ y U 0 in two-scales, (67)
P ε → P 0 in two-scales, (68)
where
(U 0 , P 0 , Q 0 ) ∈ L 2 (Ω; H 1 # (Y * )) × H 1 (Ω) × L 2 (Ω; L 2 (Y * )/R) is the solu- tion of the two-scale problem -µ ∆ y U 0 + ∇ y Q 0 + ∇ x P 0 = F -( dQ dx 1 + C π dP dx 1 ) e 1 in Y * × Ω, ( 69
)
div y U 0 = 0 in Y * × Ω, (70)
U 0 = 0 on S × Ω, (U 0 , Q 0 ) is 1 -periodic in y, (71)
div x Y U 0 dy = 0 in Ω, Y U 0 dy • n = 0 on Γ, P 0 = 0 on Σ. ( 72
)
Proof. Using the estimates (55)-(57) we get that there exist
U 0 ∈ L 2 (Ω; H 1 # (Y )) and P 0 ∈ L 2 (Ω × Y ) such that ε -2 U ε → U 0 in two-scales, ε -1 ∇U ε → ∇ y U 0 in two-scales, P ε → P 0 two-scale.
It follows directly that U 0 (x, y) = 0 for y ∈ A.
First, for ψ(x, y) ∈ C ∞ (Y × Ω), periodic in y, such that ψ = 0 for y ∈ A 0 ← Ω dP dx 1 (x 1 ) w 1 x, x ε ψ x, x ε dx = ε -1 Ω div U ε ψ x, x ε dx = - Ω ε ∇ x ψ x, x ε + ∇ y ψ x, x ε • U ε (x) ε 2 dx → (73) → Ω Y U 0 • ∇ y ψ(x, y) dy dx ⇒ div y U 0 = 0 .
We then test equations ( 44)-( 48) with
m ε (x) = m x, x ε , where m ∈ H 1 (Ω; H 1 # (Y )), m = 0 for y ∈ M . 195 0 ← ε Ω f ε m ε dx = 2µ Ω D U ε (x) D y m x, x ε + εD x m x, x ε dx - Ω P ε (x) εdiv x m(x, x/ε) + div y m(x, x/ε) dx → - Ω Y P 0 (x, y) div y m(x, y) dy dx. (74)
Thus ∇ y P 0 = 0 implying P 0 = P 0 (x) .
Next we test system (44)-(48) with Z ε (x) = Z x, x ε , where Z ∈ H 1 (Ω; H 1 # (Y )), such that div y Z = 0 and Z = 0 for y ∈ A. It yields
Ω [F - dP (x 1 )C π + Q(x 1 ) dx 1 e 1 ] Y Z dy ← Ω f ε Z ε = - Ω P ε (x)div x Z(x, x/ε) dx + 2µ ε Ω D U ε (x) D y Z(x, x ε ) + εD x Z(x, x ε ) dx → (75) → 2µ Ω Y D y U 0 (x, y) D y Z(x, y) dy dx - Ω Y P 0 (x) div x Z(x, y) dy dx .
We conclude that ∇ x P 0 ∈ L 2 (Ω) and (U 0 , P 0 ) satisfies equations ( 69)-( 71).
200
The effective filtration velocity boundary conditions are determined by picking a smooth test-function ψ ∈ C ∞ (Ω), periodic in x 1 , ψ = 0 on Σ, and testing
div Ũε = ε 2 dP dx 1 Φ ε 1 18
with it. It gives
- Ωε dP dx 1 (x 1 ) Φ 1 x ε ψ(x) dx = ε -2 Ωε div Ũε (x) ψ(x) dx = = - Ωε ε -2 Ũε (x) • ∇ψ(x) dx - 0 Ũ ε 2 (x 1 , -d) ψ(x 1 , -d) dx 1 . ( 76
)
The last integral on the right hand side is negligible due to the exponential decay of w and Φ. The first integral on the right hand side, due to (66), converges and, due to the construction of Ũε ,
205 lim ε→0 Ω ε -2 Ũε (x) • ∇ψ(x) dx = lim ε→0 Ω ε -2 U ε (x) • ∇ψ(x) dx = = Ω Y U 0 (x, y) dy • ∇ψ(x) dx .
For the left-hand side in (76) we get
Ω d 2 P dx 2 1 (x 1 ) Φ 1 x ε ψ(x) dx ≤ C √ ε .
Thus
Ω Y U 0 dy • ∇ψ dx = 0 meaning that div x Y U 0 dy = 0 in Ω , Y U 0 dy • n = 0 on Γ.
We still need to determine the boundary condition for P 0 on Σ.
Let b be a smooth function defined on Ω×Y , such that div y b = 0 and b = 0 on Γ and b = 0 for y ∈ A . We now use b ε (x) = b(x, x/ε) as a test function in 210 (44)-( 48). We obtain
Ω f ε • b ε dx = 2µ Ω D U ε D x b • , • ε + ε -1 D y b • , • ε dx - ( 77
) Ω P ε div x b • , • ε dx → 2µ Ω Y D y U 0 D y b dydx - Ω P 0 div x Y b dy dx.
As for the left-hand side, we have
Ω f ε • b ε dx → Ω [F - d(P (x 1 )C π + Q(x 1 )) dx 1 e 1 ] ( Y b dy) dx so that 2µ Ω Y D y U 0 D y b dydx - Ω P 0 div x Y b dy dx = Ω Y b [F - d(P (x 1 )C π + Q(x 1 )) dx 1 e 1 ] dydx.
Using ( 69)-( 72) yields
Ω P 0 div Y b dy dx = - Ω ∇P 0 • Y b dy dx. It implies 2µ Σ Y
b • e 2 dy P 0 dx = 0 and, finally, P 0 = 0 on Σ.
Proving uniqueness of a weak solution for problem (69)-( 72) is straightforward.
Step four: Strong convergence 215
We start by proving the strong convergence for the pressure. We follow the approach from [START_REF] Sanchez-Palencia | Non-homogeneous media and vibration theory[END_REF]. Let {z ε } ε>0 be a sequence in X 2 such that z ε z 0 weakly in H 1 (Ω) .
Then we have
Ω P ε div z ε dx - Ω P 0 div z dx = Ω P ε div (z ε -z) dx + Ω ( P ε -P 0 ) div z dx.
For two integrals on the right-hand side we have lim ε→0 Ω ( P ε -P 0 ) div z dx = 0 and
Ω P ε div (z ε -z) dx = Ωε P ε div R ε (z ε -z) dx = 2µ Ωε D U ε ε εD(R ε (z ε -z) ) dx → 0 as ε → 0 .
Using surjectivity of the operator div : X 2 → L 2 (Ω) we conclude that P ε → P 0 strongly in L 2 (Ω).
Next we prove the strong convergence for the velocity. We define
U 0,ε (x) = 2 k=1 w k (x/ε) F k (x) - ∂ ∂x k P 0 (x) + C π P (x 1 ) + Q(x 1 )
.
Then for the L 2 -norms we have
Ωε U ε ε 2 -U 0,ε 2 dx ≤ C 2µ ε 2 Ωε D U ε ε 2 -U 0,ε 2 dx = = C 2µ ε -2 Ωε | D U ε | 2 dx + 2µ ε 2 Ωε | D U 0,ε | 2 dx - -4µ Ωε D U ε ε ε D U 0,ε dx .
Using the smoothness of U 0 we get, as ε → 0
220 (i) ε 2 Ωε | D U 0,ε | 2 dx = Ωε |D y U 0,ε | 2 dx + O(ε) → Ω×Y * |D y U 0 | 2 dx dy . (ii) 2µ Ω×Y * | D y U 0 | 2 dx = Ω (F - d(P (x 1 )C π + Q(x 1 )) dx 1 e 1 ) Y * U 0 dydx . (iii) 2µε -2 Ωε |D U ε | 2 dx = 2µε -2 Ωε D U ε D Ũε dx + O( √ ε) . (iv) 2µε -2 Ωε D U ε D Ũε dx -ε -2 Ωε P ε div Ũε dx = Ωε (F - d(P (x 1 )C π + Q(x 1 )) dx 1 e 1 ) U ε ε 2 dx + O( √ ε). (v) ε -2 Ωε P ε div Ũε dx = Ωε P ε d 2 P dx 2 1 Φ ε dx → 0 . (vi) (iii), (iv) and (v) ⇒ 2µ ε -2 Ωε |D U ε | 2 dx → Ω [ F - d(P (x 1 )C π + Q(x 1 )) dx 1 e 1 ] Y * U 0 dydx. (vii) Ωε D U ε ε ε D U 0,ε dx → Ω×Y * | D y U 0 | 2 dxdy. Thus lim ε→0 Ωε U ε ε 2 -U 0,ε 2 dx = 0 .
Step five: Weak* convergence of the boundary layer corrector
To prove convergence [START_REF] Coceal | Canopy model of mean winds through urban areas[END_REF] we need to show that
ε -1 P (x 1 ) w(x/ε) P (x 1 ) ( G * w(y) dy)δ Σ weak* in M(Ω) .
Thus we take the test function z ∈ C(Ω) 2 and, using the exponential decay of w, we get
230 1 ε Ω P (x 1 ) w x ε z(x) dx = 1 ε 0 P (x 1 ) 0 ε log ε w x ε z(x) dx 2 dx 1 + O(ε) = = 0 P (x 1 ) z(x 1 , 0) 0 -∞ w x 1 ε , y 2 dy 2 dx 1 + O(ε | log ε|) .
Using the well known property of the mean of a periodic function (see e.g. [START_REF] Sanchez-Palencia | Non-homogeneous media and vibration theory[END_REF])
yields lim ε→0 0 P (x 1 ) z(x 1 , 0) 0 -∞ w x 1 ε , y 2 dy 2 dx 1 = = 0 P (x 1 ) z(x 1 , 0) 0 -∞ 1 0 w(y) dy 1 dy 2 dx 1 = = 0 P (x 1 ) z(x 1 , 0) G * w(y) dy dx 1 = G * w(y) dy P (x 1 ) δ Σ | z . 4.6.
Step six: Separation of scales and the end of the proof of Theorem 1
We can separate the variables in ( 69)-( 72) by setting
U 0 (x, y) = 2 k=1 w k (y) F k (x) - ∂ ∂x k (Q(x 1 ) + C π P (x 1 ) + P 0 (x) ) , (78)
Q 0 (x, y) = 2 k=1 π k (y) F k (x) - ∂ ∂x k (Q(x 1 ) + C π P (x 1 ) + P 0 (x) ) , (79)
with
235 -µ∆w k + ∇π k = e k , div w k = 0 in Y * , (80)
w k = 0 on S, (w k , π k ) is 1 -periodic. (81)
Inserting the separation of scales formulas (78)-( 79) into (69)-(72) yields
div K [ F -∇ (P 0 + C π P + Q) ] = 0 in Ω, P 0 = 0 on Σ, P 0 is -periodic in x 1 , n • K [ F -∇ (P 0 + C π P + Q) ] = 0 on Γ. . (82)
Here
K = [K ij ] = [ Y w i j dy] (83)
stands for the positive definite and symmetric permeability tensor. System (82) a well-posed mixed boundary value problem for a linear elliptic equation for P 0 .
Nevertheless, it is important to note that P 0 is not the limit or homogenized pressure since
p ε (x) = P ε (x) + π x ε P (x 1 ) + Q(x 1 ) . Obviously p ε p 0 ≡ P 0 + C π P + Q .
This ends the proof of theorem 1 since the limit pressure is p 0 and it satisfies the boundary value problem ( 8)- [START_REF] Carraro | Effective interface conditions for the forced infiltration of a viscous fluid into a porous medium using homogenization[END_REF].
Proof of Theorem 2
We start by proving that problem (4)-( 6) admits at least one solution satisfying estimates ( 18)- [START_REF] Jäger | Asymptotic Analysis of the Laminar Viscous Flow Over a Porous Bed[END_REF].
240
It is well known that in the case of the stress boundary conditions, the inertia term poses difficulties and existence results for the stationary Navier-Stokes system can be obtained only under conditions on data and/or the Reynolds number (see e.g. [START_REF] Conca | The Stokes and Navier-Stokes equations with boundary conditions involving the pressure[END_REF]). Presence of many small solid obstacles in the porous media flows corresponds to a small Reynolds number, expressed through the 245 presence of ε in Poincaré's and trace estimates ( 14) and [START_REF] Helmig | Model coupling for multiphase flow in porous media[END_REF].
In order to estimate the inertia term we need fractional order Sobolev spaces.
we recall that
H 1/2 (Ω) 2 = {z ∈ L 2 (Ω) 2 | Ez ∈ H 1/2 (R 2 ) 2 }, where E : H 1 (Ω) 2 → H 1 (R 2 ) 2
is the classical Sobolev extension map. It is defined on the spaces H α (Ω), α ∈ (0, 1) through interpolation (see [START_REF] Constantin | Navier-Stokes equations[END_REF], Chapter 6).
Next, after [START_REF] Constantin | Navier-Stokes equations[END_REF], Chapter 6, one has
Ωε (u 1,ε ∇)u 1,ε • v dx ≤ C|u 1,ε | H 1/2 (Ω) 2 |∇u 1,ε | L 2 (Ω) 2 |v| H 1/2 (Ω) 2 , ∀v ∈ V (Ω ε ). (84)
Using ( 14) in (84) yields
Ωε (u 1,ε ∇)u 1,ε • u 1,ε dx ≤ Cε|∇u 1,ε | 3 L 2 (Ω) 2 . ( 85
)
Now it is enough to have an a priori estimate for the H 1 -norm. With such 250 estimate the standard procedure would give existence of a solution. It consists of defining a finite dimensional Galerkin approximation and using the a priori estimate and Brouwer's theorem to show that it admits a solution satisfying a uniform H 1 -a priori estimate. Finally, we let the number of degrees of freedom in the Galerkin approximation tend to infinity and obtain a solution through 255 the elementary compactness. For more details we refer to the textbook of Evans [START_REF] Evans | Partial Differential Equations: Second Edition[END_REF], subsection 9.1.
We recall that the variational form of ( 4)-( 6) is
L ε u 1,ε , v = 2µ Ωε Du 1,ε : Dv dx + Ωε (u 1,ε ∇)u 1,ε • v dx- - Ωε F • v dx - Σ H • v dS = 0, ∀v ∈ V (Ω ε ). (86)
Then, for ε ≤ ε 0 ,
L ε u 1,ε , u 1,ε ≥ 2µ|Du 1,ε | 2 L 2 (Ωε) 4 -Cε|Du 1,ε | 3 L 2 (Ωε) 4 -C √ ε|Du 1,ε | L 2 (Ωε) 4 ≥ ≥ C 1 ε 2 > 0, if |Du 1,ε | L 2 (Ωε) 4 = 1 √ ε . ( 87
)
As a direct consequence of (87), Brouwer's theorem implies existence of at least one solution for the N dimensional Galerkin approximation corresponding to (86) (see [START_REF] Evans | Partial Differential Equations: Second Edition[END_REF], subsection 9.1). After passing to the limit N → +∞, we obtain existence of at least one solution u ε for problem (86), such that |Du
1,ε | 2 L 2 (Ωε) 4 ≤ C √ ε|Du 1,ε | 2 L 2 (Ωε) 4 + C √ ε|Du 1,ε | L 2 (Ωε) 4 ,
implying estimates ( 18)-( 20).
Now we have
Ωε (u 1,ε ∇)u 1,ε • v ≤ Cε|∇u 1,ε | 2 L 2 (Ω) 2 |∇v| L 2 (Ω) 2 ≤ Cε 2 |∇v| L 2 (Ω) 2 , ∀v ∈ V (Ω ε ) (88)
and we conclude that in the calculations from subsections 4.2-4.4 the inertia term does not play any role. Hence it does not contribute to the homogenized 260 problem either. This observation concludes the proof of Theorem 2.
Numerical confirmation of the effective model
In this section we use a direct computation of the boundary layer corrector (36-38) and the microscopic problem (1-3) to numerically confirm the estimate (39)
|π -C π | L 2 (G * ) = O( √ )
and the strong convergence of the effective pressure [START_REF] Boyer | Mathematical Tools for the Study of the Incompressible Navier-Stokes Equations and Related Models[END_REF]. For the pressure we find out
|p -p 0 | L 2 (Ω) = O( √ ),
which is consistent with the corrector type results from [START_REF] Jäger | On the boundary condition on the contact interface between a porous medium and a free fluid[END_REF].
Confirmation of boundary layer estimate
We start with estimate (39). For this we need to compute the value C π gives an accurate approximation. Furthermore, the cut-off boundary layer is computed by the finite element method. Thus, we compute C h π,cut , where the superscript h indicates the Galerkin approximation, and we have to assure that the discretization error |C π,cut -C h π,cut | is small enough.
For the numerical approximation we first introduce the cut-off domain
G * l := G\ -l j=0
(j e 2 + A)
and then consider the following cut-off boundary layer problem Problem 1 (Cut-off boundary layer problem). Find w and π, both 1-periodic in y 1 , such that it holds in the interior
-µ∆ y w l + ∇ y π l = 0 in G * l , (89)
∇ • w l = 0 in G * l , (90)
and on the boundaries (-2µD y w l + π l I) = e 1 for y 2 = 0, (91)
w l = 0 on C (92) w l,2 = ∂w l,1 ∂y 2 = 0 on Γ l , (93)
where Γ l = (0, 1) × l is the lower boundary of the cut-off domain.
The inclusions are defined as in Figure 1. The solid domain A is (a) circular in the isotropic case with radius r = 0.25 and center (0.5, 0.5), see Problem (89)-( 93) is approximated by the finite element method (FEM) using a Taylor-Hood element [START_REF] Taylor | A numerical solution of the Navier-Stokes equations using the finite element technique[END_REF] with bi-quadratic elements for the velocity and bilinear for the pressure. Since the inclusions are curvilinear we use a quadratic description of the finite element boundaries (iso-parametric finite elements). The stabilized pressure value of the boundary layer is defined in our computations as C h π,cut := π l,h (y 1 , l), i.e. it is the pressure value at the lower boundary of G * l . To define the value C h π,cut we have performed a test with increasing l to obtain the minimal length l of the cut-off domain for which the pressure value reaches convergence (up to machine precision). A shorter domain would introduce a numerical error and a longer domain would increase the computational costs without adding more accuracy.
In Table 1 the values of π l,h (y 1 , l) for increasing number of inclusions l are reported. It can be observed that one inclusion is enough to get the exact value C π = 0 for the circular inclusions. In case of elliptical inclusions the pressure is stabilized for l ≥ 7 and the effect of the cut-off domain can be seen only for smaller domains. Figure 2 shows a visualization of the boundary layer pressure Therefore for the convergence study of the effective pressure, we consider as exact value for ellipses C π = 0.2161642.
After computing the constant $C^h_{\pi,\mathrm{cut}}$ we proceed with the confirmation of the estimate (39) and plot in Figure 3 the convergence curves. We confirm the expected convergence rates
$$|\pi - C_\pi|_{L^1(G^*)} = O(\varepsilon) \qquad \text{and} \qquad |\pi - C_\pi|_{L^2(G^*)} = O(\sqrt{\varepsilon}).$$
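The rates shown in Figure 3 can be checked directly from pairs of (parameter, error) values by a least-squares fit of the logarithm of the error against the logarithm of the parameter; the slope is the observed convergence order. A minimal sketch follows, with synthetic error data standing in for the measured norms (the actual values are only displayed in the figure and are not reproduced here).

```python
import numpy as np

def observed_order(eps, err):
    """Least-squares slope of log(err) vs. log(eps), i.e. p in err ~ C * eps**p."""
    p, _ = np.polyfit(np.log(eps), np.log(err), 1)
    return p

eps = np.array([0.1, 0.05, 0.025, 0.0125])
err_L1 = 0.8 * eps             # synthetic data decaying like O(eps)
err_L2 = 0.5 * np.sqrt(eps)    # synthetic data decaying like O(sqrt(eps))

print(observed_order(eps, err_L1))   # close to 1.0
print(observed_order(eps, err_L2))   # close to 0.5
```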
Confirmation of effective pressure values
The next step is the confirmation of the estimate [Boyer]. For a stress tensor defined by the constant contact stress (P, Q) and a right-hand side which depends only on $x_2$ we have the analytical exact solution for the effective pressure
$$p_0(x_2) = C_\pi P + Q - \int_{x_2}^{0} f_2(z)\,dz - \frac{K_{12}}{K_{22}} \int_{x_2}^{0} f_1(z)\,dz. \qquad (94)$$
To compute it we need the values $K_{12}$ and $K_{22}$ of the permeability tensor. These are defined as follows, with the 1-periodic solution $w^i_c$ (i = 1, 2) of the i-th cell problem
$$K_{ij} := \int_{Y^*} w^i_{c,j}\,dx,$$
$$-\Delta w^i_c + \nabla \pi^i_c = e_i \ \text{in } Y^*, \qquad \nabla\cdot w^i_c = 0 \ \text{in } Y^*, \qquad w^i_c = 0 \ \text{on } \partial A,$$
where $Y^*$ is the unit pore domain of the cell problem with the corresponding inclusion A. The inclusions are defined as in our previous work [Carraro et al.]. They correspond to one cell of problem (89)-(93) and they are shown on Figure 1.
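Once the cell solutions $w^i_c$ are available, the entries $K_{ij}$ are just integrals of the velocity components over the fluid part of the unit cell. The sketch below is a post-processing illustration only, not the Stokes cell solver: it assumes the velocity component has been sampled at the centers of a uniform grid over the unit cell, with a boolean mask marking the solid inclusion, and approximates the integral by the midpoint rule.

```python
import numpy as np

def permeability_entry(w_ij, fluid_mask, cell_area=1.0):
    """Midpoint-rule approximation of K_ij = integral over Y* of w^i_{c,j}.

    w_ij       : j-th component of the i-th cell solution, sampled at the
                 centers of an n-by-n grid covering the unit cell
    fluid_mask : boolean array, True on the fluid part Y*, False on A
    """
    n_cells = w_ij.size                 # total number of grid cells
    dx = cell_area / n_cells            # area of one grid cell
    return float(np.sum(w_ij[fluid_mask]) * dx)

# Tiny synthetic example: a 4x4 grid with a "solid" block in the middle.
w = np.full((4, 4), 0.01)               # placeholder velocity values
mask = np.ones((4, 4), dtype=bool)
mask[1:3, 1:3] = False                  # pretend inclusion
print(permeability_entry(w, mask))      # 12 fluid cells * 0.01 * (1/16) = 0.0075
```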
Therefore, we use the values of the permeability tensor computed therein and reported in Table 2. We use the extension $p^\varepsilon_h$ (16) for the microscopic pressure, where the subscript denotes the finite element approximation of the microscopic problem obtained with Taylor-Hood elements, as for the cut-off boundary layer. With the expression of the effective pressure and the extension pressure we compute the convergence estimates. For the test case we use the values (P, Q)
for the normal component of the stress tensor and f(x) for the right-hand side, needed in formula (94), as reported in Table 3. The results with the expected convergence rates are depicted in Figure 4. Finally, Figures 5 and 6 show the velocity components, the velocity magnitude and the pressure in the microscopic problem for circles and ellipses. To simplify the visualization these figures show a microscopic problem with nine inclusions, so that the boundary layer is clearly visible.
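For completeness, formula (94) as reconstructed above is straightforward to evaluate numerically once the permeability ratio and the data (P, Q, f) are fixed. The sketch below uses scipy quadrature; all numerical values (P, Q, the forcing f and the ratio K12/K22, as well as the sign and limit conventions) are placeholders chosen for illustration, since the actual entries of Tables 2 and 3 are not reproduced here.

```python
from scipy.integrate import quad

def p0(x2, P, Q, f1, f2, K12_over_K22, C_pi):
    """Effective pressure of formula (94) at depth x2 <= 0."""
    I2, _ = quad(f2, x2, 0.0)            # integral of f2 from x2 to 0
    I1, _ = quad(f1, x2, 0.0)            # integral of f1 from x2 to 0
    return C_pi * P + Q - I2 - K12_over_K22 * I1

# Placeholder data (the values actually used are listed in Tables 2 and 3):
P, Q = 1.0, 0.5
f1 = lambda z: 0.0
f2 = lambda z: 1.0                       # constant vertical forcing
K12_over_K22 = 0.2
C_pi = 0.2161642                         # value obtained above for the ellipses

for x2 in (0.0, -0.25, -0.5, -1.0):
    print(x2, p0(x2, P, Q, f1, f2, K12_over_K22, C_pi))
```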
Conclusion
The novelty of the result is in the boundary condition on Σ. The value of the Darcy pressure on the upper boundary Σ is now prescribed and its value depends not only on the given applied pressure force Q but also on the shear stress P. Thus, in the interior of the domain, the velocity is plain Darcean, while in the vicinity of the upper boundary, a boundary layer term $\varepsilon P(x_1)\, w(x/\varepsilon)$ dominates.
The result can be used for the development of model-coupling strategies, see [Helmig] and [Mosthaf].
$\mathcal{M}(\Omega)$ denotes the set of Radon measures on $\Omega$ and $\delta_\Sigma$ is the Dirac measure concentrated on $\Sigma$, i.e. $\langle \delta_\Sigma , \psi \rangle = \int_\Sigma \psi(x_1, 0)\, dx_1$.

Remark 1. If $\partial A \in C^3$ then the regularity theory for the Stokes operator applies and (39), (40) hold pointwise. For more details on the regularity see e.g. [Boyer].

Remark 2. Let the solution $w$ to system (36)-(38) be extended by zero to $M$, and let $b > a > 0$ be arbitrary constants. Then we have
$$\int \operatorname{div} w \, dy = \int_0^1 w_2(y_1, b)\, dy_1 - \int_0^1 w_2(y_1, a)\, dy_1 .$$

Remark 3. It is easy to see that the mapping $t \mapsto \int_0^1 \pi(y_1, t)\, dy_1$ is constant on each of the intervals $J_i$, $i = 0, 1, 2, \ldots$. If those constants are denoted by $c_i$, then $c_0 = 0$ and $\lim_{i\to\infty} c_i = C_\pi$.

Remark 4. Let us suppose that the boundary layer geometry has the mirror symmetry with respect to the axis $\{y_1 = 1/2\}$. Then $w_2$ and $\pi$ are uneven functions with respect to the axis and $C_\pi = 0$. In particular, this result applies to the case of circular inclusions.

Figure 1: The solid inclusion A: (a) circular, (b) elliptical.
Figure 2: Visualization of boundary layer pressure and cut-off domain.
Figure 3: Confirmation of convergence for the boundary layer problem.
Figure 4: Confirmation of convergence for the microscopic problem.
Figure 5: Visualization of the microscopic velocity and pressure with elliptical inclusions (panels: (a) $u_1$, (b) $u_2$, velocity magnitude, pressure).
Figure 6: Visualization of the microscopic velocity and pressure with circular inclusions.

Table 1: Stabilization of $C_\pi$ in the cut-off domain with increasing number of inclusions.
Table 2: Values of the permeability tensor components.
Table 3: Values used for the computations ($\mu$, $P(x_1)$, $Q(x_1)$, $f_1(x)$, $f_2(x)$, $C_\pi$ for ellipses and circles).
The work of T.C. was supported by the German Research Council (DFG) through project "Multiscale modeling and numerical simulations of Lithium ion battery electrodes using real microstructures" (CA 633/2-1). The work of E.M.P. was supported in part by the grant of the Croatian Science Foundation No. 3955, Mathematical modelling and numerical simulations of processes in thin and porous domains. The research of A.M. was supported in part by the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program "Investissements d'Avenir" (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR).
| 45,393 | ["1003612", "2599"] | ["231495", "444777", "521754"] |
01007328 | en | ["spi"] | 2024/03/04 23:41:48 | 2010 | https://hal.science/hal-01007328/file/GSFA.pdf |
Dr B Girault
Dr A S Schneider
Prof E Arzt
Dr C P Frick
J Schmauch
INM K.-P Schmitt
Christof Schwenk
Strength Effects in Micropillars of a Dispersion Strengthened Superalloy**
By Baptiste Girault * , Andreas S. Schneider, Carl P. Frick and Eduard Arzt In order to realize the full potential of emerging microand nanotechnologies, investigations have been carried out to understand the mechanical behavior of materials as their internal microstructural constraints or their external size is reduced to sub-micron dimensions. [1,2] Focused ion beam (FIB) manufactured pillar compression techniques have been used to investigate size-dependent mechanical properties at this scale on a variety of samples, including single-crystalline, [3][4][5][6][7][8][9][10] nanocrystalline, [11] precipitatestrengthened, [12,13] and nanoporous [14,15] metals. Tests revealed that single-crystal metals exhibit strong size effects in plastic deformation, suggesting that the mechanical strength of the metal is related to the smallest dimension of the tested sample. Among the various explanations that have been pointed out to account for such a mechanical behavior, one prevailing theory developed by Greer and Nix [6] invokes ''dislocation starvation.'' It assumes that dislocations leave the pillar via the surface before dislocation multiplication occurs. To accommodate the induced deformation new dislocations have to be nucleated, which requires high stresses. [6,16] This theory has been partially substantiated by direct in situ transmission electron microscope (TEM) observations of FIB manufactured pillars which demonstrate a clear decrease in mobile dislocations with increasing deformation, a result ascribed to a progressive exhaustion of dislocation sources. [17] Another origin of size-dependent strengthening may lie in the constraints on active dislocation sources exerted by the external surface, i.e., source-controlled mechanisms. [18][19][20] A clear understanding of the mechanisms responsible for the size effects in plastic deformation is still missing and other origins of strength modification with size remains somewhat controversial. [17] Unlike pure metals, pillars with an internal size parameter smaller than the pillar diameter would be expected to exhibit no size effect, reflecting the behavior of bulk material. This was demonstrated for nanocrystalline [11] and nanoporous Au. [14,15] Nickel-titanium pillars with semi-coherent precipitates approximately 10 nm in size and spacing also exhibited no size dependence, although results are difficult to interpret due to the concurrent martensitic phase transformation. [13] Conversely, precipitate strengthened superalloy pillars were reported to show size-dependent behavior, a result left largely unexplained. [12,21] Therefore, a strong need exists to further explore the influence of internal size parameters on the mechanical properties of small-scale single crystals, to better understand the associated mechanisms responsible for the size effect.
The present paper investigates the uniaxial compression behavior of highly alloyed, focused ion beam (FIB) manufactured micropillars, ranging from 200 up to 4000 nm in diameter. The material used was the Ni-based oxide-dispersion strengthened (ODS) alloy Inconel MA6000. Stress-strain curves show a change in slip behavior comparable to those observed in pure fcc metals. Contrary to pure Ni pillar experiments, high critical resolved shear stress (CRSS) values were found independent of pillar diameter. This suggests that the deformation behavior is primarily controlled by the internal obstacle spacing, overwhelming any pillar-size-dependent mechanisms such as dislocation source action or starvation.
The research presented here investigates the mechanical behavior of single-crystalline micropillars made of a dispersion strengthened metal with a small internal size scale: the oxide-dispersion strengthened (ODS) Inconel MA6000, 1 which is a highly strengthened Ni-based superalloy produced by means of mechanical alloying. This high-energy ball milling process produces a uniform dispersion of refractory particles (Y2O3) in a complex alloy matrix, and is followed by thermo-mechanical and heat treatments (hot-extrusion and hot-rolling) to obtain a large grained microstructure (in the millimeter range). MA6000 has a nominal composition of Ni-15Cr-4.5Al-2.5Ti-2Mo-4W-2Ta-0.15Zr-0.01B-0.05C-1.1Y2O3, in wt%. Previous studies carried out on bulk MA6000 showed that its strength is due to the oxide dispersoids and to coherent precipitates of globular-shaped γ′-(Ni3Al/Ti) particles, which are formed during the heat treatment. Depending on the studies, the average sizes in these two particle populations are about 20-30 and 275-300 nm, respectively. [22][Singer][Reppich][Heilmaier] TEM investigations of our sample revealed a dense distribution of oxide particles with diameter and spacing well below 100 nm; however, no indications of γ′-precipitates were found (Fig. 1(a)). Thus, in contrast to a recent study on nanocrystalline pillars, [11] the tested specimens have no internal grain boundaries, which would impede the dislocations from leaving the sample, but have a characteristic length scale smaller than the pillar diameter.
Experimental
Bulk MA6000 was mechanically and chemically polished. The polishing process and testing were carried out in a plane allowing access to elongated grains of several millimeters in size. Pillar manufacturing, testing, and analysis were similar to the study by Frick et al. [26]. Micro- and nanopillars with diameters ranging from 200 to 4000 nm and a length-to-diameter aspect ratio of approximately 3:1 were machined with a FIB FEI Nova 600 NanoLab DualBeam™. All pillars were FIB manufactured within the same grain (Fig. 1(b)) in order to avoid any crystallographic orientation changes that could activate different slip systems. To minimize any FIB-related damage, a decreasing ionic current intensity from 0.3 nA down to 10 pA was used as appropriate with decreasing pillar diameters [27]. The pillars were subsequently compressed in load-control mode by an MTS XP nanoindenter system equipped with a conical diamond indenter with a flat 10 µm diameter tip under ambient conditions. Loading rates varied between 1 and 250 µN s⁻¹ depending on pillar diameter in order to obtain equal stress rates of 20 MPa s⁻¹.
The pillar diameter, measured at the top of the column, was used to calculate the engineering stress. It is important to mention that the pillars had a slight taper of approximately 2.7° on average, with a standard deviation of 0.5°. Hence, stress as defined in this study represents an upper bound to the stress experienced by the sample during testing.
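As a quick consistency check of the loading conditions described above, the engineering stress follows from the load and the top diameter as σ = 4F/(πd²), and the load rate needed for a fixed stress rate scales with d². The short sketch below illustrates this relation; the printed numbers are only indicative and reproduce the order of magnitude of the quoted loading-rate range.

```python
import math

def eng_stress(load_N, diameter_m):
    """Engineering stress (Pa) from load and top diameter: sigma = 4F / (pi d^2)."""
    return 4.0 * load_N / (math.pi * diameter_m ** 2)

def load_rate_for_stress_rate(stress_rate_Pa_s, diameter_m):
    """Load rate (N/s) giving the requested stress rate on a pillar of given diameter."""
    return stress_rate_Pa_s * math.pi * diameter_m ** 2 / 4.0

stress_rate = 20e6                 # 20 MPa/s, as used in the experiments
for d in (200e-9, 4000e-9):        # smallest and largest pillar diameters
    print(d, load_rate_for_stress_rate(stress_rate, d) * 1e6, "uN/s")
# roughly 0.6 uN/s for the 200 nm pillars and 250 uN/s for the 4000 nm pillars
```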
Figure 1(c) and (d) shows representative post-compression scanning electron microscope (SEM) micrographs of 304 and 1970 nm diameter pillars. Pillars with diameters above 1000 nm retained their cylindrical shape and showed multiple slip steps along their length; in some cases, barreling was observed. Samples below this approximate size tended to show localized deformation at the top with fewer, concentrated slip steps, which have been observed in previous studies, e.g., see Ref. [28]. Independent of pillar size, multiple slip was observed. High-magnification pictures of the sidewalls showed fewer slip steps in the vicinity of particles, emphasizing that particles act as efficient dislocation obstacles.
Electron backscattered diffraction (EBSD) measurements showed that the pillars were cut in a grain with the ⟨110⟩ crystallographic orientation aligned normal to the sample surface. Among the 12 different possible slip systems in fcc crystals, only four present a non-zero Schmid factor equal to 0.41. The slip bands were oriented at approximately 34° with regard to the pillar axis, nearly matching the expected 35.3° angle of the {111}⟨110⟩ slip system for a ⟨110⟩ oriented fcc crystal.
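The quoted Schmid factor can be reproduced with a few lines from the crystallography alone. The sketch below takes a ⟨110⟩ loading axis with a (111)[10-1] slip system, one representative of the four equivalent systems; it returns the Schmid factor (0.41) and the angle between the loading axis and the {111} plane normal, which is the 35.3° figure cited in the text. It is meant only as a numerical check, not as part of the analysis procedure of the paper.

```python
import numpy as np

def schmid_factor(axis, plane_normal, slip_dir):
    """Schmid factor and the axis-to-plane-normal angle (degrees)."""
    a = np.asarray(axis, float)
    n = np.asarray(plane_normal, float)
    s = np.asarray(slip_dir, float)
    cos_phi = abs(a @ n) / (np.linalg.norm(a) * np.linalg.norm(n))
    cos_lam = abs(a @ s) / (np.linalg.norm(a) * np.linalg.norm(s))
    return cos_phi * cos_lam, np.degrees(np.arccos(cos_phi))

axis = [1, 1, 0]                      # loading / pillar axis
m, angle = schmid_factor(axis, plane_normal=[1, 1, 1], slip_dir=[1, 0, -1])
print(round(m, 3), round(angle, 1))   # 0.408 and 35.3
```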
1 Inconel MA6000 is a trademark of the Inco Alloys International, Inc., Huntington, WV.
Results and Discussion
Typical engineering stress-strain curves are shown in Figure 2. The features of the stress-strain curves changed with decreasing pillar diameter. Larger pillars displayed a stress-strain curve with strain hardening similar to bulk material. Below approximately 2000 nm, staircase-like stress-strain curves with plastic strain bursts separated by elastic loading segments were observed. This has been demonstrated in previous single-crystalline micropillar studies, where strain bursts were related to dislocation avalanches. [10,26] For pillars even smaller than 1000 nm in diameter, the staircase-like shape under 4% strain is followed by large bursts over several percent strain, which gave the appearance of strain softening. The large bursts are consistent with SEM observations showing highly localized deformation on a few glide planes for pillars with diameters below 1000 nm. This behavior suggests that, for small pillar diameters, the dispersoid particles no longer promote homogeneous deformation, as they do in bulk alloys. The pillars hence exhibit a size effect in the slip behavior.
By contrast, the flow stresses are comparable for all pillar diameters and do not exhibit a size effect (Fig. 2). This is highlighted in Figure 3, where the flow stress measured at 3% strain is plotted as a function of pillar diameter, and compared with previous results on pure Ni micropillars [4,26]. Whereas the pure Ni exhibits the frequently reported size effect, our data are independent of pillar diameter and lie close to the bulk value (critical resolved shear stress (CRSS) of about 500 MPa [Singer]). Best power-law fits gave a relationship between flow stress σ and diameter d of σ ∝ d^-0.65 and d^-0.62 for [111] and [269] Ni, respectively; for MA6000, the exponent is -0.04 ± 0.02, a value close to zero.
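The exponents quoted above come from best power-law fits of flow stress against diameter, that is, straight-line fits in log-log coordinates. A minimal sketch of such a fit is given below; the data points are placeholders, not the measured values, and only illustrate how a size-independent material returns an exponent near zero while strongly size-dependent data return an exponent near -0.65.

```python
import numpy as np

def power_law_exponent(diameters, stresses):
    """Fit stress = C * d**n in log-log space and return the exponent n."""
    n, _ = np.polyfit(np.log(diameters), np.log(stresses), 1)
    return n

d = np.array([200., 500., 1000., 2000., 4000.])      # diameters in nm
flat = np.array([510., 495., 505., 500., 498.])      # MPa, no size effect
scaling = 5.0e3 * d ** -0.65                          # MPa, strong size effect

print(round(power_law_exponent(d, flat), 2))      # close to 0.0
print(round(power_law_exponent(d, scaling), 2))   # -0.65
```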
In contrast to the study on a superalloy containing only coherent precipitates, [12] this study clearly shows that incoherent particles can give rise to an internal size parameter, which is dominant over any pillar-size effect in the entire size range. The oxide particle spacing in our study is below 100 nm, which is much smaller than the pillar diameters. [22][Singer][Reppich][Heilmaier] It is notable that the extrapolated MA6000 strength values and the pure Ni data in Figure 3 intersect at a pillar diameter of about 150 nm, close to the oxide particle spacing. The smallest pillars still contain about 10, the largest about 40 000 oxide particles. In the latter case, continuous stress-strain curves as in bulk are expected due to averaging effects; in the smaller pillars, stochastic effects would explain the staircase-like behavior.
The absence of the size effect in single-crystalline MA6000 implies that neither the starvation theory nor source-controlled mechanisms may be applicable. The high density of internal obstacles is likely to prevent dislocations from exiting excessively through the surface; and the small obstacle spacing, compared to the pillar diameter, will make source operation insensitive to surface effects. As a result, the flow stress will be determined by the interactions of dislocations and obstacles, as in bulk alloys. Size effects might, however, be expected for pillar diameters below the oxide particle spacing, i.e., 100 nm, but are beyond the scope of the present study.
Conclusions
In summary, compression tests were carried out on single-crystal pillars of an ODS-Ni superalloy (MA6000). The following conclusions were drawn: i) As in pure fcc metals, the superalloy pillars undergo a change in slip behavior. Pillars thinner than 2000 nm showed staircase-like stress-strain curves. The localized strain bursts suggest that the non-shearable particles no longer manage to homogenize slip as in bulk alloys. ii) Contrary to single-crystal studies on pure metals, no dependence of yield stress on sample size was measured.
A high constant strength was found, which is comparable
to the highest flow stress value published for pure Ni pillars (with a diameter of 150 nm). iii) These results suggest that size-dependent mechanisms such as dislocation starvation or source exhaustion are not operative in a dispersion strengthened alloy. Instead, the strong internal hardening dominates over any specimen size effect.
Fig. 1. TEM plane view of MA6000 microstructure (a) and SEM images of (b) location of pillar series (white circles) with regard to grain boundaries (white dotted lines); (c) and (d) show deformed pillars with diameters of 304 and 1970 nm, respectively. Pictures were taken at a 52° tilt angle relative to the surface normal.
Fig. 2. Representative compressive stress-strain behavior for MA6000 pillars of various diameters ranging from approximately 200 to 4000 nm.
Fig. 3. Logarithmic plot of the critical resolved shear stress (CRSS) at 3% strain for all [111] MA6000 pillars tested. The error bars correspond to the standard deviation of six tests on different pillars presenting similar diameters. For comparison, 0.2% offset compressive stresses are shown for pure [269] Ni [5] and 3% offset values for [111] Ni [Singer]. The solid lines represent best power-law fits.
| 14,079 | ["1238219"] | ["123891", "123891", "303412", "123891"] |
01485082 | en | ["shs"] | 2024/03/04 23:41:48 | 2014 | https://minesparis-psl.hal.science/hal-01485082/file/Dubois%20et%20al%202014%20IPDM%20co-design.pdf |
Louis-Etienne Dubois
Pascal Le Masson
Benoît Weil
Patrick Cohendet
From organizing for innovation to innovating for organization: how co-design brings about change in organizations
Amongst the plethora of methods that have been developed over the years to involve users, suppliers, buyers or other stakeholders in the design of new objects, co-design has been advertised as a way to generate innovation in a more efficient and more inclusive manner. Yet, empirical evidence that demonstrates its innovativeness is still hard to come by. Moreover, the fact that co-design workshops are gatherings of participants with few design credentials and often no prior relationships raises serious doubts on its potential to generate novelty. In this paper, we study the contextual elements of 21 workshops in order to better understand what co-design really yields in terms of design outputs and relational outcomes. Our data suggest that co-design emerges in crisis situations and that it is best used as a two-time intervention. We suggest using collaborative design activities as a way to bring about change through innovation.
INTRODUCTION
Open, cross-boundary, participative, collaborative, distributed: whatever the word used, innovation has become a practice known to involve a wide array of actors [START_REF] Chesbrough | Open innovation: The new imperative for creating and profiting from technology[END_REF][START_REF] Remneland-Wikhamn | Open innovation climate measure: The introduction of a validated scale[END_REF]. Collaborative design activities, also known as codesign, are increasingly used to design new products, services and even public policies with users, citizens and other stakeholders [START_REF] Sanders | Co-creation and the new landscapes of design[END_REF][START_REF] Berger | Co-designing modes of cooperation at the customer interface: learning from exploratory research[END_REF].
While its tools and methods, as well as its benefits for design purposes, have been discussed at length, the settings in which such activities arise and more importantly its effects on the groups, organizations and design collectives remain to this date misunderstood [START_REF] Kleinsmann | Why do (n't) actors in collaborative design understand each other? An empirical study towards a better understanding of collaborative design[END_REF][START_REF] Schwarz | Sustainist Design Guide[END_REF]). Yet, initial contexts, which can be defined by and explored through the relationship between stakeholders, should be of major interest for they play a significant role in the unfolding of collaborative design or joint innovation processes [START_REF] Clauß | The Influence of the Type of Relationship on the Generation of Innovations in Buyer-Supplier Collaborations[END_REF]. Furthermore, the fact that co-design workshops often involves participants who lack design credentials and do not share some sort of common purpose raises serious questions on the potential for innovation and motivations to take part in such time-consuming activities.
The purpose of this paper is to shed light on the context of co-design activities and its outputs, arguing that it may be used as a change management intervention while being advertised as a design and innovation best practice. Through a multiple-case study, we investigate the contextual elements of 21 workshops in which stakeholders gather, often for the very first time, to design new products, services or processes together. Following an overview of the literature on innovation, design and collaboration, we suggest based on our results that co-design is in fact a two-phase intervention in which relationships must first be reinforced through design activities before innovation issues can be tackled.
LITERATURE REVIEW
Over the past decades, innovation has received increased attention from practitioners and academics altogether, resulting in new forms for organizing such activities and a large body of literature on its every dimension [START_REF] Remneland-Wikhamn | Open innovation climate measure: The introduction of a validated scale[END_REF]. Garel & Mock (2011:133) argue that "innovation requires a collective action and an organized environment". In other words, we need on one side people, preferably with relevant knowledge and skills (expertise), and on the other side, a collaborative setting in which diverse yet compatible collectives can come together to design new products, services or processes. This classic innovation scheme holds true for not only standard R&D teams, but also for new and more open forms of innovation in which users interact with industry experts in well-defined platforms [START_REF] Piller | Mass customization: reflections on the state of the concept[END_REF][START_REF] Von Hippel | Democratizing innovation[END_REF]. Accordingly, the literature review is structured as follows: first on the rationale behind the need for organized environment in which stakeholders can design and innovate, and then on the collective action that drives the collaboration between them.
From open innovation [START_REF] Chesbrough | Open innovation: The new imperative for creating and profiting from technology[END_REF] to participative design [START_REF] Schuler | Participatory design: Principles and practices[END_REF], the call for broader involvement in organizations' design, NPD and innovation activities has been heard widely and acted upon by many [START_REF] Von Hippel | Democratizing innovation[END_REF][START_REF] Hatchuel | Teaching innovative design reasoning: How concept-knowledge theory can help overcome fixation effects[END_REF]. Seen as a response to mounting competitive pressures, cross-boundaries practices are a way for organizations broadly taken to remain innovative, adaptive and flexible [START_REF] Teece | Explicating dynamic capabilities: the nature and microfoundations of (sustainable) enterprise performance[END_REF]. The case for openness, as put forth in the literature, is built on the promise of reduced uncertainty, more efficient processes, better products and positive market reaction to the introduction of the innovation [START_REF] Diener | The Market for Open Innovation[END_REF][START_REF] Thomke | Experimentation matters: Unlocking the potential of new technologies for innovation[END_REF]. Rather than focusing on cost reductions alone, open and collaborative are to be implemented for the value-added and creativity they bring to the table [START_REF] Remneland-Wikhamn | Transaction cost economics and open innovation: implications for theory and practice[END_REF]. As a result, organizations are increasingly engaging with their stakeholders to tap into their knowledge base, leverage value co-creation potential and integrate them in various stages of new product or service development activities [START_REF] Lusch | Competing through service: insights from service-dominant logic[END_REF][START_REF] Mota Pedrosa | Customer Integration during Innovation Development: An Exploratory Study in the Logistics Service Industry[END_REF]. However, openness to outside ideas does not come naturally. The existence of the well-documented "not-invented-here" syndrome [START_REF] Katz | Investigating the Not Invented Here (NIH) syndrome: A look at the performance, tenure, and communication patterns of 50 R & D Project Groups[END_REF] has academics constantly remind us that open innovation can only emerge in settings where the culture welcomes and nurtures ideas from outsiders [START_REF] Hurley | Innovation, market orientation, and organizational learning: an integration and empirical examination[END_REF].
The field of design has long embraced this participatory trend. Designers have been looking for more than forty years now for ways to empower users and make them more visible in the design process (Stewart & Hyysaalo, 2008). As a result, multiple approaches now coexist and have yielded a rich literature often focused on visualisation tools, design techniques and the benefits of more-inclusive objects that are obtained through sustained collaboration with users. Whether empathic (e.g. Koskinen), user-centered (e.g. Norman & Draper, 1986), participatory (Schuler) or contextual (e.g. Wixon), streams of "user-active" design presuppose engaging with willing participants in order to improve the construction process and output. Still too often, the interactions between those who know and those who do remain shallow, and are limited to having users discuss the design of services or products (Luke). Worse, the multiplication of participatory design approaches, often calling a rose by another name, has resulted in practical and theoretical perplexity. According to Sanders et al. (2010: 195), "many practices for how to involve people in designing have been used and developed during the years" and, as an unintended consequence, "there is some confusion as to which tools and techniques to use, when, and for what purpose". Amongst these "better design methods", co-design seeks the active participation and integration of users' viewpoints throughout the entire design process. More than a glorified focus group, outsiders gather to create the object, not just discuss it. According to Piller et al. (2011: 9), the intended purpose of these activities "is to utilize the information and capabilities of customers and users for the innovation process". As such, co-design is often portrayed as a way to facilitate mass customization through platforms, merely enabling better design in settings where users are already willing to take part in the process. Yet, a more accurate and acknowledged definition of co-design refers to it as a creative and collective approach "applied across the whole span of a design process, (where) designers and people not trained in design are working together in the design development process" (Sanders & Stappers, 2008:6).
However, active participation of stakeholders and users in innovation or design processes does not always lead to positive outcomes. For one, Christensen (1997) has studied situations (dilemmas) in which intensively catering to existing users leads to diminishing returns and loss of vision. Da Mota Pedrosa (2012) also demonstrates that too much user integration in the innovation process becomes detrimental to an organization, and that the bulk of the interactions should occur early in the ideation process rather than in the later development and production stages.
Finally, [START_REF] Holzer | Construction of Meaning in Socio Technical Networks: Artefacts as Mediators between Routine and Crisis Conditions[END_REF] raises the all-important mutual understanding hurdles that heterogeneous innovation groups face, which sometimes translates into a lack of shared meaning and conflict.
Collective action and collaboration in innovation are also well documented in the literature.
Choosing partners, whether it is your suppliers, buyers or other firms, is often portrayed as a strategic, yet highly contextual decision, where issues of trustworthiness, confidentiality and relevance are paramount [START_REF] Clauß | The Influence of the Type of Relationship on the Generation of Innovations in Buyer-Supplier Collaborations[END_REF]. Prior relationships, mutual understanding and common identity are also said to play a role in the successful development of social cohesion and innovation [START_REF] Coleman | Social Capital in the Creation of Human Capital[END_REF][START_REF] Dyer | Creating and managing a high performance knowledge-sharing network: the Toyota case[END_REF]. In other words, engaging in exploratory activities across boundaries requires that the actors know and trust each other, are willing to play nice and share a minimum of behavioural norms [START_REF] Clauß | The Influence of the Type of Relationship on the Generation of Innovations in Buyer-Supplier Collaborations[END_REF]. Along the same lines, Fleming et al. (2007:444) state that "closed social structures engender greater trust among individuals », which in turn generate more collaboration, creativity and ultimately more innovation. Simply put, the absence of social proximity or relationships precludes the expression of creativity and the emergence of novelty. This translates into settings or contexts in which open dialogue, inclusiveness and collaboration amongst individual leads to new objects [START_REF] Remneland-Wikhamn | Transaction cost economics and open innovation: implications for theory and practice[END_REF]. Without a common purpose, groups are bound to failure or conflict, for "goal incongruence hinders (the construction of) a joint solution [START_REF] Xie | Antecedents and consequences of goal incongruity on new product development in five countries: A marketing view[END_REF]. While relevant to our study, this literature remains elusive on more open forms of collaborative design, where relationships are multiple, often not obvious (i.e. not one firm and its few suppliers, but rather a "many-tomany" format) and not held together by contractual ties [START_REF] Hinde | Relationships: A dialectical perspective[END_REF]. Moreover, repeated interactions and equal commitment, two important drivers of collaboration in design [START_REF] Clauß | The Influence of the Type of Relationship on the Generation of Innovations in Buyer-Supplier Collaborations[END_REF], are unlikely in ad-hoc formats such co-design in which interests are seldom shared (i.e. users are hardly committed at improving the firm's bottom line). Figure 1 below combines these streams in the literature, in which collectives and expertise are considered as innovation inputs. Yet first-hand observation of co-design activities leads one to denote that it 1) often involves people who lack design credentials and 2) gathers people with little to none prior history of working together or even sometimes a desire to collaborate. Our extended immersion in a setting that holds co-design workshops on a regular basis, as well as observations of several workshops in Europe has yielded few successful design outputs to account for. 
What's more, participants are seldom lead-users with relevant deep knowledge [START_REF] Von Hippel | Lead users: a source of novel product concepts[END_REF], nor driven by shared values or purpose as in an organization or an innovation community [START_REF] Raymond | The cathedral and the bazaar[END_REF]Adler & Heckscher, 1996). As opposed to Saxenian & Sabel (2008: 390) who argue that «it is possible to foster collaboration (…) only when social connections have become so dense and reliable that it is almost superfluous to do so», collaborative design workshops often take place in relational deserts. Very few experts and poor relationships: what can we really expect?
To a point, this situation is consistent with what authors such as [START_REF] Granovetter | The Strength of Weak Ties[END_REF][START_REF] Glasser | Economic action and social structure: The problem of embeddedness[END_REF] on the strength on weak ties and [START_REF] Burt | Structural Holes: The Structure of Competition[END_REF] on network embeddeness have studied. They demonstrate that too much social and cognitive proximity can be detrimental to innovation.
When too much social cohesion exists, knowledge variety and access to sources elsewhere in the network are hindered, thus limiting one's ability to generate novelty (Uzzi, 1997). This phenomenon, described as the "paradox of embeddedness" (Uzzi, 1997), shows that knowledge homogeneity can be an obstacle to collaboration, especially when it is geared towards innovation. Noteboom (2000) further explains this paradox by talking about "cognitive distance", where, following an inverted-U curve, too little or too much proximity results in suboptimal innovation outcomes. Being further away, cognitively and socially, is also said to help avoid creativity hurdles such as "group-think" (Janis) and fear of the outside world (Coleman). While the jury is still out on whether weak or strong ties are best for innovation, authors have suggested that network configuration and density should be adapted to the nature of the task (i.e. weak for exploration vs. strong for exploitation, Noteboom, 2000) or external conditions (Rowley et al.). In other words, this literature argues that despite facing challenges in getting heterogeneous groups to collaborate, those who do can expect a proper innovation pay-off. Then again, going back to workshops witnessed over the past two years, we are still confronted with the same problem: even with weak ties, it still does not yield novelty. If no one configuration results in innovation, could it be that we are looking at it the wrong way?
More importantly, could it be that co-design is geared to generate more than just new objects?
And so we ask: what drives, as in one of our cases, elders, students, caregivers and representatives from an insurance company to design together? Or toddlers, teachers, architects and school board officials to get together to re-invent the classroom? In other words, why does co-design always seem to emerge in settings where basic conditions for collaboration and innovation are lacking (Huxham)? Are such weak ties really generative? This, we argue, calls for a broader investigation of co-design; one that does not separate new object design (outputs) from effects on the design collective (outcomes). Hence, this paper addresses the following questions: what defines co-design contexts and what do workshops really yield?
RESEARCH METHOD AND EMPIRICAL BASE
Following a multiple case study methodology (Eisenhardt; Yin), of both retrospective and current cases, we investigate different contexts (i.e. organizational, pedagogical, industrial, etc.) in which co-design is used and the relationship between participants. Through semi-structured or sometimes informal interviews, as well as observation of both planning and execution phases of workshops, we investigate the background setting, prior relationships between participants and actual outputs of 21 different co-design workshops in 4 countries (France, Finland, Netherlands, Belgium). In total, we interviewed 20 participants (interviews lasting anywhere from 15 to 60 minutes) and witnessed 10 live workshops (lasting 5 to 8 hours each time).
Since co-design is still an emerging phenomenon theory-wise, the methodology was designed in a way that was coherent with our research object (Edmonson & McManus, 2007;[START_REF] Von Krogh | Phenomenon-based Research in Management and Organisation Science: When is it Rigorous and Does it Matter?[END_REF]). As such, our desire to contribute to the development of a co-design theory invited a broad study of different contexts and workshops. Adopting a "grounded theorist" posture (Glaser & Strauss, 1967), we opted for a qualitative study of multiple dimensions of a same phenomenon [START_REF] Shah | Building Better Theory by Bridging the Quantitative-Qualitative Divide[END_REF]. Moreover, we used different collection tools to better apprehend our research object in all its complexity (Eisenhardt & Grabner, 2007).
Cases for this study were selected based on an opportunity sampling, meaning we both studied past workshops and attended live experiments as they became available to us. For retrospective cases, we made sure that they were less than a year old and that access to both participants and documentation was readily available to prevent any time bias. Only one case was older (C1), yet it was thoroughly documented in a book shortly after, thus preventing distortion.
Our questions touched on relationship dimensions amongst participants, their thoughts on the workshop and on what they personally and collectively took away from their experience. To ensure the coherence of our data, we only studied co-design workshops that had a similar format, in terms of length (1 day), protocol (divergent-convergent sequence), tools and number of participants (15-25 at a time). Furthermore, as we still lack co-design theory to guide us in the identification of cases, we simply made sure that they 1) involved a wide array of participants and stakeholders (the "co") and 2) focused on the creation of a new object (the "design", as opposed to testing of existing concepts). Interviews were conducted during and after the workshops, recorded when possible and transcribed by the lead author. Key excerpts were later shared with the respondents during two group interviews, to ensure the validity of the interview content. Data collected for this article was made anonymous, in order to protect any sensitive innovation material, design issues or interpersonal conflicts from leaking out in the open.
Once transcribed, we then looked into our interview material, as well as secondary sources, for any information that could help us assess prior relationships (or lack thereof) between participants. Quotes pertaining to working or personal relationships, apprehensions towards collaboration and potential conflicts were highlighted, and in turn codified into first-level categories. We also asked (for retrospective cases) or observed the tangible design outputs of each workshop in order to see if the initial design goals were met. The lead author first conducted this task, followed by a discussion of the results with the co-authors, all reaching agreement on the coded data, the different categories and the subsequent interpretation of the results. Table 1 summarizes the cases studied.
RESULTS
Our data reveal that co-design often emerges out of little prior relationship or out of weak ties, with some cases even supporting claims about the presence of underlying malfunctions and a poor collaboration climate. It should be mentioned that some of the cases are still underway, and that accordingly our attention is on the initial context and relationships between the stakeholders involved. Most participants are usually sitting around the same table for the first time, and very few of them have any prior design or collaboration experience to account for. According to one of the project leaders in the H1 case: "this is an opportunity to really get to meet the colleagues and get out of this isolated environment". This case, just like the W1 workshop, where stakeholders had not had to work together before but are now forced to do so, is common across most of our data. On this point, the respondent in charge of the W1 case stated that before "there was enough funding for every project, but now they have to come up with an integrated and coherent plan, instead of all pulling in different directions". A claim echoed by one participant (E1), pointing out that "doctoral students come from different schools and never really talk to each other". Lack of personal interactions was also raised by one of the deans in the F1 workshop: "we must find ways to get back in touch with both students and local businesses, something we've lost lately". Prior relationships are not assessed by face time, for many stakeholders have met in the past without engaging in anything more than shallow conversations, let alone design activities. As the L1 facilitator explains: "participants had never really exchanged in the past. It's sad because they all work together, but don't interact very much in the end". As a result, cases such as W1 or GT1 show that relationships are improved or created during workshops. In the former, one participant was satisfied about having met "new people around her with whom to work again in the near future", while in the latter, the facilitator believed that the real outcome of the day was "creating mutual interest amongst participants".
Some participants also touched on the lack of trust and collaboration with their colleagues, or similar negative state-of-minds towards the group. The host of the C1 case used these words to sum up prior relationships amongst the stakeholders: "designers and architects see the parents, professors and students as hurdles, they feel as if involving them in the process will only slow things down and bring new problems". Along the same lines, another participant identified the challenges of dealing with "everyone (who) arrive initially with their own pet project or personal needs" and in finding ways to bring everything together. Finally, the host of the N1 case said that they used "co-design because of the economical and trust crises", adding that it was the only good way to go in order to "connect the top-down system with the bottom-up movement".
These two cases, while working on projects that vary by scale and nature, were also both targeting stakeholders or neighborhood facing harsher conditions; "the poor schools, not just the wealthy ones". Workshops, once again, aiming for the most difficult conditions possible.
Other cases raised even more dramatic or sensitive issues amongst stakeholders, with some of them confessing about the absence of meaning or coherence in their day-to-day activities.
Workshops such as F1 or P1 had participants expressing feelings of uselessness in their job.
The host of the latter case explained: " what we are going through here is a meaning crisis, for whom and why are we working in this organization?" Access to material and notes used in the planning of some cases also support our hypothesis by pointing out to sometimes conflictual or tensed settings. For instance, the ED1 animation protocol states that "the client should quickly go over the workshop introduction, to briefly set the stage and avoid raising the sensitive issues". The animation protocol also reads like this: "the facilitator will need to refocus the discussions and remind participants of what is sought after and allowed during the workshop".
Respondents pointed out to different kind of gaps between what they wished to achieve (often expressed by the title of the workshop) and the current reality. Whether it was for an organizational structure ill suited for innovation "in which management usually does all the innovation and simply explains it to others after" as in the P1 case or a lack of relevant knowledge that leads the university (F1) dean " unsure of what to do to help students cope with today's changing environment". It can also just be that some participants have no or little design resources to spare, and as one facilitator puts it, "they come with their issue hoping one codesign workshop will solve it". In other cases, such as GT1 or CS1, what is missing are common language, criteria or working methods. "They had no idea on how to work together" said the facilitator of the former, adding that "what they ended up creating where filters and criteria to assess the quality of concepts to come". For all A and O cases, organizations turn to co-design when they lack the proper knowledge or skills to conduct the activities internally. In the A3 case, participants from the sponsoring firm confessed that they not only needed outsiders and experts to weigh in on the technological dimensions of the product-to-be, but also extended interactions with users to help them sort out real needs from all the needs identified through market research. Finally, for cases such as H1 and E1, stakeholders do not lack knowledge or skills on the content of the workshop, but rather on the means to go about organizing collaboration in design. According to the E1 facilitator, "it seemed as if the participants were as much, if not more, interested in our animation protocol than we tried to achieve with it". While some stakeholders leave with the protocol, other leave with participants by recruiting them out of the workshop. Firms involved in A and O cases are systematically on the lookout for participants who could help them fill internal knowledge gaps beyond the one-day workshops. As the facilitator of the O2 case confessed, the firm is convinced that "if they identify one good talent to recruit, then their investment is worth every penny. At that point, reaching a working prototype just becomes a bonus". One participant from the A2 case even adds that organizations are "not even hiding the fact that they also use co-design as a recruiting-in-action tool".
Secondary data such as planning documents or animation protocols also provide another interesting element: the name of the workshop, or in other words, what brings the stakeholders together. Hence, most cases seem to display both a small level of ambition and a low novelty target. Workshops focusing on concepts such as "classroom of the future", "energy-efficient buildings" or "facilitating care for elders" are fields that have been thoroughly explored for some time already. The fact that stakeholders get to it only now could be interpreted as a symptom of their inability to get it before, when such considerations were still emerging. In other words, workshop "titles" are often an open window into the collective's "common problem", rather than their "common purpose". But while they do not suggest any real innovative outputs, the way the design work is being distributed amongst active stakeholder is in itself quite original. Other significant secondary data can be found in the design output, or lack there of as in most of our cases. While many of them allowed for knowledge to be externalized, new functionalities to be identified and innovative fields (rather than precise innovation) to be suggested for later work, only 1 of our 21 cases (A3) yielded an artifact that could reach the market in the near future.
Hence, the results show that initial contexts and their underlying malfunctions vary. At the most simple level, our cases reveal that the problems may be of knowledge, skills or relational nature.
These dimensions serve as first-level constructs, in which four different levels can be used to further define the tensions faced by the collectives. They may affect individuals, organizations, institutions (value networks) or society at large (i.e. cities, territories). These problems are not mutually exclusive and can be embedded or interlocked with one another.
Complex and deep-level tensions are not only signs that collaboration is unlikely, but that the need for facilitation amongst the stakeholder is essential to achieve any real design outputs.
Table 2 sums up these dimensions and levels built from our cases.
DISCUSSION
Based on our results, we argue that co-design should not be considered as a best practice, but rather as a crisis symptom. If it were indeed used to foster innovation and surpass classic methods to design new products or services, both the aim of the workshops and their results would support such claims. More importantly, if co-design were only used as a way to facilitate dialogue and build better teams, we would not still find real design tasks and expectations that bring stakeholders together. Rather, our results suggest that groups resort to co-design only when crises undermine their ability to collectively create using conventional approaches.
Workshops, it seems, are used as Trojan horses: getting design collectives operational (again) by working on the design of products or services. As Godet (1977:21) argues: "action accelerates change". In our cases, designing together fosters change amongst stakeholders.
And as such, co-design rises in a field that tends to respond to crises by inventing new methods, modes of organizing and management principles [START_REF] Midler | Compétion par l'innovation et dynamique des systèmes de conception dans les entreprises françaises-Une comparaison de trois secteurs[END_REF].
Resorting to the word "crises" is certainly not neutral: it holds a meaning often seen as dramatic and negative. This choice of words stems from both interviews excerpts, where some participants raised "meaning" (case P1) or "trust" (case C1) crises, and from existing innovation literature (e.g. [START_REF] Godet | Crise de la prévision, essor de la prospective: exemples et méthodes[END_REF][START_REF] Midler | Compétion par l'innovation et dynamique des systèmes de conception dans les entreprises françaises-Une comparaison de trois secteurs[END_REF][START_REF] Spina | Factors influencing co-design adoption: drivers and internal consistency[END_REF]). Scharmer (2009:2) argues that «the crisis of our time reveals the dying of an old social structure and way of thinking, an old way of institutionalizing and enacting collective social forms». The difference here is that we highlight crises not just to describe the initial setting in which the innovation or design efforts unfold, but actually as the sine qua non condition in which it can take place. By crisis, we simply point out to, as Godet (1977:20) said, "a gap between reality and aspirations".
In a collaborative design setting that aspires to innovate, the gap lies in the lack of prior relationships between the participants and the absence of experts or innovation credentials around the table. Contrary to the literature on proper innovation settings, co-design occurs where there is little potential for collaboration. Yet, while tensed context are known to hinder innovation [START_REF] Clauß | The Influence of the Type of Relationship on the Generation of Innovations in Buyer-Supplier Collaborations[END_REF], our cases show that stakeholders push through and keep on codesigning. It seems outcomes such as stronger ties, learning and change trump design outputs.
The change management literature can help us better understand co-design if it is indeed an intervention in response to a crisis, although it remains different on many levels. For one, co-design does not always target pre-existing and stable collectives (such as a team or an organization), but rather disgruntled individuals often coming together for the first time. Nor does change management seek to regroup individuals: its purpose is to capitalize on "common purpose" and "shared values" as accelerators of organizational progress (Roy, 2010: 48). And while change management is about "moving from strategic orientation to action" (Rondeau et al. 2005:7) and helping individuals cope with disruption (Rondeau), co-design appears to be moving from action to strategic orientation.
IMPLICATIONS
This discussion raises in turn an important question: why then do we co-design if not to design?
Solving crises, (re)creating design collectives, exchanging knowledge, building foundations for later work: these are all legitimate, yet unsung co-design outcomes. Knowing that innovation and its subversive nature can disrupt collectives (Hatchuel), co-design can be used as a "controlled disruption" that bonds people together through design activities. Again, if this holds true, then we must also revisit the criteria used to assess its performance by including dimensions not only on the new objects, but also on the new collectives. As one workshop organizer told us: "the real success lies in knowing which citizens to mobilize for future projects (…) this came a bit as a surprise, but it represents an enormous potential". Creation of a design collective does not mean that the exact same individuals will be involved the next time around, only that the workshop participants benefit from increased awareness, mutual consideration and minimal knowledge that can be used in future projects in which they all hold stakes. For the innovation manager of the "A" cases, co-designing has translated into new design automatisms, where such workshops are now to be held "systematically in any project that is heavily client-centered". While prior work has addressed the links between innovation and change (e.g.
Hendersen & [START_REF] Henderson | Architectural innovation: the reconfiguration of existing product technologies and the failure of established firms[END_REF][START_REF] Kim | Strategy, value innovation, and the knowledge economy[END_REF], the latter is often described as the primary and intended target of interventions on the former. What we put forth here is the idea that change is the indirect, yet most important, outcome of co-design. What's more, it's specificity and strength is that this change in mediated by the new objects, which should still be pursued. Rather than treating it as a handicap, the lack of prior relationship can be turned into an asset.
Organizations should not give up or delay innovation efforts, as poor contexts may turn out to be quite conducive to the emergence of new collectives and ideas. According to [START_REF] Morin | Pour une crisologie[END_REF], crises highlight flaws and opportunities, but more importantly trigger actions that lead to new solutions.
The author adds, as Schumpeter (1939) before him, that "within the destruction of crisis lies creation". Since the existing literature on co-design fails to account for relationships and focuses on the new objects, we argue that the complexity of current innovation issues calls for more such design interventions that create both tangible outputs and relationship outcomes. In terms of practical implications, our results offer a reversed outlook on the usual reasoning about organizational change or configuration for innovation. While we have known for some time now that innovation and design can trigger change [START_REF] Schott | Process innovations and improvements as a determinant of the competitive position in the international plastic market[END_REF][START_REF] March | Footnotes to organizational change[END_REF], our understanding of co-design takes it one step further, making the case that such disruption can be purposefully introduced and managed as a means to bring about desired changes in disgruntled groups or disassembled collectives. Starting from the proposition that design can help cope with collaboration issues, organizations can envision new means to bring about change through such activities. Moreover, knowing that co-design activities need some sort of crisis to emerge, practitioners may want to hold back on hosting workshops in positive settings, where they are unlikely to bring about changes in collectives and may even produce negative outcomes.
As Nooteboom (2000:76) explains, «people can collaborate without agreeing, (but) it is more difficult to collaborate without understanding, and it is impossible to collaborate if they don't make sense to each other». Hence, starting from a difficult context where collaboration is unlikely, it appears as if engaging in design activities allows not for complete agreement, but at least for the construction of a common repertoire of practices and references. Most cases studied here fail to produce significant design outputs; however, most hold the promise of subsequent tangible results. Hence, based on classic transaction-cost economics, this "two-time" collaboration dynamic depicts the first co-design workshop as the investment phase, whereas subsequent workshops allow tangible returns (design outputs) to emerge. In that sense, participants or organizations disappointed about co-design could be compared to risk-averse or impatient investors pulling out of the market too soon. This does not mean that organizations have to resort to hybrid weak-and-strong-tie configurations [START_REF] Uzzi | Social Structure and Competition in Interfirm Networks: The Paradox of Embeddedness[END_REF], but rather that they should adopt a sequential approach: strengthening weak ties through design activities and then mobilizing this more cohesive collective towards solving innovation issues. Briefly put, it may be the answer to the paradox of embeddedness: strong ties can indeed lead to innovation, but only when the ties have first been built or mediated through the co-construction of a common object.
LIMITS AND FUTURE RESEARCH
Lastly, our research design holds two methodological limits that ought to be discussed. First, the use of retrospective cases holds the risk of historical distortions and maturation. Second, discussing crises, albeit not in such terms, with participants may at times have raised sensitive issues. To minimize the impact of both time and emotions, we interviewed a wide array of participants and centered our questions on specific instances of the workshop [START_REF] Hubert | Retrospective reports of strategic-level managers: Guidelines for increasing their accuracy[END_REF]. More importantly, we relied on secondary sources used in the planning or the facilitation of the workshop in order to assess contexts without falling into data maturation traps.
Future research on collaborative design activities should pursue the ongoing theorization of co-design and extend this paper by conducting a larger-scale quantitative study of relationships before and after workshops. It ought to further define the nature of the ties between stakeholders, and their evolution as stakeholders go through collaborative design. More importantly, future research should seek longitudinal cases in which repeated workshops would make it possible to further validate our claims on the "design-co-design" sequence required to reach innovative design.
Finally, other dimensions of the workshop that may influence both design outputs and collective outcomes, such as animation protocols, formats or goals (what products, services or processes should collectives with poor relationships attempt to design?), should also be further studied.
CONCLUSION
Puzzled by both theoretical and empirical inconsistencies about what is to be expected from co-design, we conducted this multiple case study hoping to better understand the contextual elements and the different results of collaborative design workshops. Our data has shown that co-design's natural environment is one of crisis, whether of a knowledge, skill or relational nature.
Rather than seeing this situation as a hurdle for collaboration or an impossible setting for innovation, we have argued that it could on the contrary be overcome through the engagement of stakeholders in design activities and used as leverage for change management. Results also point to a sequence in which initial weak ties are strengthened by design, which in turn can lead to new objects being designed by strong collectives. As a consequence, we have advised organizations to tackle internal or network malfunctions through innovation first, rather than addressing innovation issues only once the collective has reached collaboration maturity.
Figure 1. What co-design should look like based on innovation and collaboration theory
Figure 2. What co-design looks like based on empirical evidence.
Table 1 below sums up the contextual elements of the 21 cases studied in this article.
Table 1. Summary of the contextual elements for 21 co-design workshops studied (case, purpose, participants, prior relationships, design outputs).
Case L1. Purpose: designing an application to improve sales and customer in-store experience. Participants: sales clerks, IT and marketing employees, customers, students. Prior relationships: little to none; employees have internally designed together, but never with other stakeholders. Design outputs: early concepts from IT staff are not well received; no consensual concept emerges.
Case A1. Purpose: using RFID technology to locate items in the store in real time. Participants: store employees and management, IT experts, customers, students. Prior relationships: little to none; employees have internally designed together, but never with other stakeholders. Design outputs: functionalities emerge, but no working concept or prototype designed.
Case A2. Purpose: designing the in-store offices of the future for sales associates and managers. Participants: store employees and management, ergonomist, students. Prior relationships: little to none; employees have internally designed together, but never with other stakeholders; some returning participants (A1). Design outputs: client intends to recommend testing some of the new office concepts.
Case A3. Purpose: using no-contact location technology to improve in-store customer experience. Participants: store employees and management, IT experts, customers, students. Prior relationships: little to none; employees have internally designed together, but never with other stakeholders; some returning participants (A2). Design outputs: successful design of a working smartphone application prototype; in-store real testing is planned.
Case O1. Purpose: designing new sensors to better measure athletes' performance. Participants: store employees and management, IT experts, customers, students. Prior relationships: little to none; employees have internally designed together, but never with other stakeholders. Design outputs: functionalities emerge, but no working concept or prototype designed.
Case O2. Purpose: using Kinect-like technology to create new in-store interactions with customers. Participants: store employees and management, IT experts, customers, students. Prior relationships: little to none; employees have internally designed together, but never with other stakeholders; no returning participants (O1). Design outputs: functionalities emerge, but no working concept or prototype designed.
Table 2. Design collectives' initial malfunctions (nature, level, manifestations (examples)).
Knowledge (individual level): loss of meaning; low motivation on the job; evolving roles.
Skills (organizational level): no common criteria / methods; poor innovation structure; lack of specific skills.
Relational (institutional level): unsure of who to work with; mistrust; poor links with local partners.
Relational (social level): wealth inequalities. | 48,262 | [
"955410",
"1111",
"1099",
"1027507"
] | [
"39111",
"304765",
"39111",
"39111",
"304765"
] |
01485086 | en | [
"shs"
] | 2024/03/04 23:41:48 | 2013 | https://minesparis-psl.hal.science/hal-01485086/file/Kroll%20Le%20Masson%20Weil%202013%20ICED%20final.pdf | Dr Ehud Kroll
email: kroll@aerodyne.technion.ac.il
Pascal Le Masson
MODELING PARAMETER ANALYSIS DESIGN MOVES WITH C-K THEORY
Keywords: parameter analysis, C-K theory, conceptual design, design theory
The parameter analysis methodology of conceptual design is studied in this paper with the help of C-K theory. Each of the fundamental design moves is explained and defined as a specific sequence of C-K operators and a case study of designing airborne decelerators is used to demonstrate the modeling of the parameter analysis process in C-K terms. The theory is used to explain how recovery from an initial fixation took place, leading to a breakthrough in the design process. It is shown that the efficiency and innovative power of parameter analysis is based on C-space "de-partitioning". In addition, the role of K-space in driving the concept development process is highlighted.
1
INTRODUCTION Studying a specific method with the aid of a theory is common in scientific areas [START_REF] Reich | A theoretical analysis of creativity methods in engineering design: casting and improving ASIT within C-K theory[END_REF][START_REF] Shai | Creativity and scientific discovery with infused design and its analysis with C-K theory[END_REF]. It allows furthering our understanding of how and why the method works, identifying its limitations and area of applicability, and comparing it to other methods using a common theoretical basis. At the same time, interpreting and demonstrating the method from the theoretic perspective can provide empirical validation of the theory. The current study focuses on using C-K theory to clarify the (implicit) theoretical grounds and logic of a pragmatic design method called Parameter Analysis (PA). It also helps to explain some practical issues in C-K design theory. C-K theory [START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF][START_REF] Le Masson | Strategic management of innovation and design[END_REF][START_REF] Hatchuel | Towards an ontology of design: lessons from C-K design theory and Forcing[END_REF] is a general descriptive model with a strong logical foundation, resulting in powerful expressive capabilities. The theory models design as interplay between two spaces, the space of concepts (C-space) and the space of knowledge (K-space). Four operators, C→K, K→C, C→C and K→K, allow moving between and within these spaces to facilitate a design process. Space K contains all established, or true, propositions, which is all the knowledge available to the designer. Space C contains "concepts", which are undecidable propositions (neither true nor false) relative to K, that is, partially unknown objects whose existence is not guaranteed in K. Design processes aim to transform undecidable propositions into true propositions by jointly expanding spaces C and K through the action of the four operators. This expansion continues until a concept becomes an object that is well defined by a true proposition in K. Expansion of C yields a tree structure, while that of K produces a more chaotic pattern. PA [START_REF] Kroll | Innovative conceptual design: theory and application of parameter analysis[END_REF][START_REF] Kroll | Design theory and conceptual design: contrasting functional decomposition and morphology with parameter analysis[END_REF] is an empirically-derived method for doing conceptual design. It was developed initially as a descriptive model after studying designers in action and observing that their thought process involved continuously alternating between conceptual-level issues (concept space) and descriptions of hardware 1 (configuration space). The result of any design process is certainly a member of configuration space, and so are all the elements of the design artifact that appear, and sometimes also disappear, as the design process unfolds. Movement from one point to another in configuration space represents a change in the evolving design's physical description, but requires conceptual reasoning, which is done in concept space. The concept space deals with "parameters", which in this context are functions, ideas and other conceptual-level issues that provide the basis for anything that happens in configuration space. 
Moving from concept space to configuration space involves a realization of the idea in a particular hardware representation, and moving back, from configuration to concept space, is an abstraction or generalization, because a specific hardware serves to stimulate a new conceptual thought. It should be emphasized that concept space in PA is epistemologically different from C-space in C-K theory, as explained in [START_REF] Kroll | Design theory and conceptual design: contrasting functional decomposition and morphology with parameter analysis[END_REF]. To facilitate the movement between the two spaces, a prescriptive model was conceived, consisting of three distinct steps, as shown in Figure 1. The first step, Parameter Identification (PI), consists primarily of the recognition of the most dominant issues at any given moment during the design process. In PA, the term "parameter" specifically refers to issues at a conceptual level. These may include the dominant physics governing a problem, a new insight into critical relationships between some characteristics, an analogy that helps shed new light on the design task, or an idea indicating the next best focus of the designer's attention. Parameters play an important role in developing an understanding of the problem and pointing to potential solutions. The second step is Creative Synthesis (CS). This part of the process represents the generation of a physical configuration based on the concept recognized within the parameter identification step. Since the process is iterative, it generates many physical configurations, not all of which will be very interesting. However, the physical configurations allow one to see new key parameters, which will again stimulate a new direction for the process. The third component of PA, the Evaluation (E) step, facilitates the process of moving away from a physical realization back to parameters or concepts.
Evaluation is important because one must consider the degree to which a physical realization represents a possible solution to the entire problem. Evaluation also points out the weaknesses of the configurations and possible areas of improvement for the next design cycle.
1 Hardware descriptions or representations are used here as generic terms for the designed artifact; however, nothing in the current work excludes software, services, user experience and similar products of the design process.
PA's repetitive PI-CS-E cycles are preceded by a Technology Identification (TI) stage of looking into fundamental technologies that can be used, thus establishing several starting points, or initial conditions. A cursory listing of each candidate technology's pros and cons follows, leading the designer to pick the one that seems most likely to succeed. PA proved to be useful and intuitive, yet more efficient and innovative than conventional "systematic design" approaches [START_REF] Kroll | Design theory and conceptual design: contrasting functional decomposition and morphology with parameter analysis[END_REF].
The present study attempts to address some questions and clarify some of the fundamental notions of both PA and C-K theory. Among them: What exactly are the elements of C-space and K-space? C-K theory distinguishes between the spaces based on the logical status of their members ("undecidable" propositions are concepts, and "true" or "false" ones are knowledge items), but it can still benefit from a clear and consistent definition of the structure and contents of these spaces.
What is the exact meaning of the C-K operators? In particular, is there a C→C operator, and does it mean that one concept is generated from another without use of knowledge? How should C-K diagrams be drawn? How can these diagrams capture the time-dependence of the design process? How exactly should the arrows representing the four operators be drawn?
If PA is a proven design method and C-K is a general theory of design, does the latter provide explanation to everything that is carried out in the former? Does C-K theory explain the specific design strategy inherent in PA, and in particular, the latter's claim that it supports innovative design? The PA method of conceptual design is demonstrated in the next section by applying it to a design task. The steps of PA are explained next with the notions of C-K theory, followed by a detailed interpretation of the case study in C-K terms. The paper concludes with a discussion of the results of this study and their consequences in regard to both PA and C-K theory. For brevity, the focus here is on the basic steps of PA, leaving out the preliminary stage of TI. The role of the case study in this paper is merely to demonstrate various aspects; the results presented are general and have been derived by logical reasoning and not by generalizing from the case study.
PARAMETER ANALYSIS APPLICATION EXAMPLE
The following is a real design task that had originated in industry and was later changed slightly for confidentiality reasons. It was assigned to teams of students (3-4 members in each) in engineering design classes, who were directed to use PA for its solution after receiving about six hours of instruction and demonstration of the method. The design process presented here is based on one team's written report with slight modifications for clarity and brevity. The task was to design the means of deploying a large number (~500) of airborne sensors for monitoring air quality and composition, wind velocities, atmospheric pressure variations, etc. The sensors were to be released at an altitude of ~3,000 m from an under-wing container carried by a light aircraft and stay as long as possible in the air, with the descent rate not exceeding 3 m/s (corresponding to the sensor staying airborne for over 15 minutes). Each sensor contained a small battery, electronic circuitry and radio transmitter, and was packaged as a 10 by 50-mm long cylinder weighing 10 g. It was necessary to design the aerodynamic decelerators to be attached to the payload (the sensors), and the method of their deployment from a minimum weight and size container. The following focuses on the decelerator design only. The design team began with analyzing the need, carrying out some preliminary calculations that showed that the drag coefficient C_D of a parachute-shaped decelerator is about 2, so to balance a total weight of 12-15 g (10 g sensor plus 2-5 g assumed for the decelerator itself), the parachute's diameter would be ~150 mm. If the decelerator were a flat disk perpendicular to the flow, C_D reduces to ~1.2, and if it were a sphere, then C_D ≈ 0.5, with the corresponding diameters being about 200 and 300 mm, respectively. It was also clear that such large decelerators would be difficult to pack compactly in large numbers, that they should be strong enough to sustain aerodynamic loads, particularly during their deployment, when the relative velocity between them and the surrounding air was high, and that, being disposable, they should be relatively cheap to make and assemble. Further, the sturdier the decelerator is made, the heavier it is likely to be. And the heavier it is, the larger it would have to be in order to provide enough area to generate the required drag force. Technology identification began with the team identifying deceleration of the sensors as the most critical aspect of the design. For this task they came up with the technologies of flexible parachute, rigid parachute, gas-filled balloon and hot-air balloon. Reviewing some pros and cons of each technology, they chose the flexible parachute for further development. Figure 2 is a detailed description of a portion of the PA process carried out by the design team.
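To retrace these numbers, the underlying relation is the steady-descent drag balance (our reconstruction of the sizing logic; the report does not state the air density the team assumed, so the value used below is an assumption):

    (1/2) ρ v² C_D A = m g,  with A = π d²/4,  hence  d = √(8 m g / (π ρ v² C_D))

With m ≈ 0.013 kg, g = 9.81 m/s², v = 3 m/s, ρ ≈ 0.9 kg/m³ (roughly the density at 3,000 m) and C_D ≈ 2, this gives d ≈ 0.14 m, i.e. the ~150-mm parachute quoted above; C_D ≈ 1.2 and C_D ≈ 0.5 give d ≈ 0.18 m and d ≈ 0.28 m, consistent with the ~200- and ~300-mm figures.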
Figure 2 (PA step / Reasoning process / Outcome):
PI1. The first conceptual issue (parameter) should be the chosen technology. Parameter: "Produce a large enough drag force using a flexible parachute".
CS1. Which particular physical configuration would realize the flexible parachute concept? Configuration: a 150-mm dia. hemispherical parachute, connected to the sensor with cords.
E1. Given the physical configuration, what is the behavior? Drag force is ok and compact packing can be done by folding, but the parachute may not open and cords may tangle. Shall we try to improve the last configuration or backtrack? Try another technology from the TI stage.
PI2. Use the new technology for the decelerator design. Parameter: "Use a rigid parachute to generate drag force".
CS2. Which particular physical configuration would realize the rigid parachute concept? Configuration: a 150-mm diagonal square pyramid with the sensor rigidly attached.
E2. Given the physical configuration, what is the behavior? Drag force is ok but compact packing is impossible because these configurations cannot nest in each other. Shall we try to improve the last configuration or backtrack? Try to improve the design by finding a way to pack it compactly.
PI3. How can the last configuration be improved? Combine the idea of a flexible parachute that can be folded for packing with a rigid parachute that doesn't have cords and doesn't require a strong "pull" to open. Parameter: "Use a frame + flexible sheet construction that can fold like an umbrella; use a spring for opening".
E4. This would work, seems cheap to make, and shouldn't have deployment problems. But how will the "gliders" be packed and released in the air? Shall we try to improve the last configuration or backtrack? Continue with this configuration: design the container, packing arrangement, and method of deployment.
The first concept (PI 1 ) is based on a small parachute that will provide the necessary drag force while allowing compact packing. The following creative synthesis step (CS 1 ) realizes this idea in a specific hardware by sketching and sizing it with the help of some calculations. Having a configuration at hand, evaluation can now take place (E 1 ), raising doubts about the operability of the solution.
The next concept attempted (PI 2 ) is the rigid parachute from the TI stage, implemented as a square pyramid configuration (CS 2 ), but found to introduce a new problem-packing-when evaluated (E 2 ).
A folding, semi-rigid parachute is the next concept realized and evaluated, resulting in the conclusion that parachutes are not a good solution. This brings a breakthrough in the design: dissipating energy by frictional work can also be achieved by a smaller drag force over a larger distance, so instead of a vertical fall the payload can be carried by a "glider" in a spiraling descent (PI 4 ). The resulting configuration (CS 4 ) shows an implementation of the last concept in words and a sketch, followed by an evaluation (E 4 ) and further development (not shown here).
It is interesting to note a few points in this process: First, when the designers carried out preliminary calculations during the need analysis stage, they already had a vertical drag device in mind, exhibiting the sort of fixation in which a seemingly simple problem triggers the most straightforward solution.
Second, technology identification yielded four concepts, all still relevant for vertical descent, and all quite "standard". A third interesting point is that when the "umbrella" concept failed (E 3 ), the designers chose not to attempt another technology identified at the outset (such as gas-filled balloon), but instead used the insights and understanding gained during the earlier steps to arrive at a totally new concept, that of a "glider" (PI 4 ). And while in hindsight, this last concept may not seem that innovative, it actually represents a breakthrough in the design process because this concept was not apparent at all at the beginning.
INTERPRETATION OF PARAMETER ANALYSIS IN C-K TERMS
Technology identification, which is not elaborated here, establishes the root concept, C 0 , as the important aspect of the task to be designed first. The actual PA process consists of three steps that are applied repeatedly (PI, CS and E) and involves two types of fundamental entities: parameters (ideas, conceptual-level issues) and configurations (hardware representations, structure descriptions). In addition, the E step deduces the behavior of a configuration followed by a decision as to how to proceed. The interpretation in C-K terms is based on the premises that because knowledge is not represented explicitly in PA and because a design should be considered tentative (undecidable in C-K terms) until it is complete, both PA's parameters and configurations are entities of C-K's C-space. The parameter identification (PI) step begins with the results of an evaluation step that establishes the specific behavior of a configuration in K-space by deduction ("given structure, find behavior"), and makes a decision about how to proceed. There are three possible decisions that the evaluation step can make: 1.
Stop the process if it is complete (in this case there is no subsequent PI step), or 2.
Try to improve the undesired behavior of the evolving configuration (this is the most common occurrence), or 3.
Use a specific technology (from technology identification, TI) for the current design task. This can happen at the beginning of the PA process, after establishing (in TI) which is the most promising candidate for further development, or if the evaluation results in a decision to abandon the current sequence of development and start over with another technology. In C-K terms, current behavior and decision on proceeding are knowledge items in K-space, so generating a new concept (for improvement or totally new) begins with a K→C operator. This, in turn, triggers a C→C operator, as shown in Figure 3. The K→C operator carries the decision plus domain knowledge into the C-space, while the C→C operator performs the actual derivation of the new concept. Two cases can be distinguished: the PI step can begin with a decision to improve the current design (case 2 above), as in Figure 3a, or it can begin with a decision to start with a new technology (case 3 above), as in Figure 3b. In both cases, the result of the PI step is always a new concept in C-K terms, which in PA terms is a parameter. In the following diagrams we shall use round-cornered boxes to denote C-K concepts that stand for PA parameters, and regular boxes for C-K concepts that represent PA configurations. The red numbers show the order of the process steps. The creative synthesis (CS) step starts with a parameter, a PA concept, and results in a new configuration. It involves a realization of an idea in hardware representation by particularization or instantiation (the opposite of generalization). It usually requires some quantitative specification of dimensions, materials, etc. that are derived by calculation. In terms of C-K theory, if PA's parameters and configurations are elements of C-space, then the CS step should start and end in C-space. However, because knowledge is required to realize an idea in hardware and perform quantitative reasoning, a visit to K-space is also needed. The CS step therefore begins with searching for the needed knowledge by a C→K operator that triggers a K→K (deriving specific results from existing knowledge). The new results, in turn, are used by a K→C operator to activate a C→C that generates the new concept, which is a PA configuration that realizes the parameter in hardware. This interpretation of CS as a sequence of four C-K operators is depicted in Figure 4a.
In PA, parameters (concepts, ideas) cannot be evaluated, only configurations. This means that the evaluation (E) step begins with a configuration or structure and tries to deduce its behavior, from which it will make a decision (any one of the three described above). Thus, a C→K operator is used to trigger a K→K; the former is the operation of looking for the knowledge necessary for the evaluation, while the latter is the actual deductive reasoning that leads to deriving the specific behavior and making the decision as to how to proceed. This is shown in Figure 4b. The design process began with the need, the problem to solve, as stated by the customer. A need analysis stage produced greater understanding of the task and the design requirements. This took place entirely in K-space and is not shown here. Next, technology identification focused the designers on the issue of deceleration (C0), found possible core technologies, listed their pros and cons, and made a choice of the best candidate. The following description of the PA process commences at this point. Figure 5 shows the first cycle of PI-CS-E as described in Figure 2 and depicted with the formalism of Figures 3 and 4. Note that while C0 does not have a meaning of parameter or configuration in PA terms, the result of the first partition, C1, is a PA parameter, while the second partition generates the configuration C2. This first cycle ended with a decision to abandon the flexible parachute concept and use another technology identified earlier (in TI) instead. For brevity, the demonstration now skips to the last PI-CS-E cycle as depicted in Figure 6. It began with the evaluation result of step E3 (see Figure 2) shown at the lower right corner of Figure 6. The designers concluded that parachutes, flexible or rigid, were not a good solution path, and called for trying something different. They could, of course, opt for the balloon technologies identified earlier, but thanks to their better understanding of the problem at that point, they decided to take a different look at the problem (PI4 in Figure 6). They realized that their previous efforts had been directed at designing vertical decelerators, but that from the energy dissipation viewpoint a spiraling "glider" concept might work better. The C-K model of this step depicts a "de-partition", or growing of the tree structure in C-space upward, at its root. This phenomenon, also demonstrated in chapter 11 of Le [START_REF] Le Masson | Strategic management of innovation and design[END_REF], represents moving toward a more general or wider concept, and in our case, redefining the identity of C0 (decelerator) to C0' (vertical drag decelerator) and partitioning C7 into C0' and C8.
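Before turning to the discussion, the mapping developed in this section can be summed up in a compact, purely illustrative form of our own (it is not part of the original paper or of C-K theory's formalism): each PA step corresponds to a fixed sequence of C-K operators, which is enough to replay a cycle as a list of operator firings. A minimal Python sketch:

    # Illustrative only: PA steps encoded as ordered lists of C-K operators,
    # following the mapping of Figures 3 and 4 described above.
    PA_STEPS = {
        "PI": ["K->C", "C->C"],                  # decision + knowledge trigger a new concept (a PA parameter)
        "CS": ["C->K", "K->K", "K->C", "C->C"],  # fetch knowledge, derive results, realize a configuration
        "E":  ["C->K", "K->K"],                  # look for knowledge, deduce behavior and decide how to proceed
    }

    def replay(cycle):
        """Flatten a sequence of PA steps into the C-K operators they fire, in order."""
        return [(step, op) for step in cycle for op in PA_STEPS[step]]

    # One PI-CS-E cycle, e.g. the flexible-parachute cycle of the case study:
    for step, op in replay(["PI", "CS", "E"]):
        print(step, op)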
5 DISCUSSION
C-K theory has been clarified by this study with regard to its spaces and operators. Elements of C-space correspond to both PA's parameters (concepts) and configurations (structures), thus they have the following structure: "there exists an object Name, for which the group of (behavioral) properties B1, B2,… can be made with the group of structural characteristics S1, S2,…". For example, concept C2 (a PA configuration) and concept C5 (a PA parameter) in Figure 6 can be described as:
"there exists an object C2, for which the group of properties
B1 = produces vertical drag (inherited from C0')
B2 = based on flexible parachute (inherited from C1)
can be made with the group of characteristics
S1 = 150-mm dia. hemispherical canopy
S2 = cords for sensor attachment"
"there exists an object C5, for which the group of properties
B1 = produces vertical drag (inherited from C0')
B2 = based on rigid parachute (inherited from C3)
B3 = built as an umbrella, i.e., folding frame and flexible skin
can be made with the group of characteristics
S1 = 150x150mm square pyramid shape (inherited from C4)"
The interesting thing to note is that except for the root concept in C-K (which is not defined as a PA entity), all other concepts have some attributes (properties and/or characteristics). But because a C-K concept can be either a PA parameter or a configuration, and PA excludes the possibility of having configurations without parameters to support them, the concepts in C-K sometimes have only properties (i.e., behavioral attributes), and sometimes properties plus characteristics (structural attributes); however, a concept cannot have characteristics and no properties. Need analysis, although not elaborated here, is the stage of studying the design task in terms of functions and constraints, and generating the design requirements (specifications). It takes place entirely in K-space. Technology identification also takes place mostly in K-space. The basic entities of PA, parameters (conceptual-level issues, ideas) and configurations (embodiments of ideas in hardware), have been shown to reside in C-K's C-space. However, all the design "moves" in PA (PI, CS and E), which facilitate moving between PA's spaces, require excursions to C-K's K-space, as shown in Figures 3 and 4. In particular, the importance of investigating K-space when studying design becomes clear by observing how the acquisition of new knowledge (modeled with dark background in Figures 5 and 6) that results from evaluating the evolving design is also the driver of the next step. It was shown that K→K operators represent deductive reasoning, generating new knowledge from existing one, but their action needs to be triggered by a reason, a purpose, and this is represented by a C→K operator. Likewise, a K→C operator uses knowledge for triggering a C→C operator. As demonstrated in this study, C→C operators do exist, representing the derivation of a new concept from another. However, this operation does not happen by itself in C-space, only if triggered by a K→C operator. The importance of having a C→C operator can be explained by the need to capture the relation of new concepts to their ancestors, including inheritance of their attributes. It should be noted, however, that the tree structure of C-space is not chronological, as demonstrated by the de-partition that took place. To capture the time-dependence of the design process, C-K's concepts were labeled with a running index and the operator arrows numbered. This method of drawing C-K diagrams is useful for providing an overall picture of the design process, but is incorrect in the sense that when a C-K concept is evaluated and found to be deficient, leading to abandoning its further development (as with concepts C2 and C6 of Figure 6, for example), it should no longer show in C-space, as its logical status is now "decidable." Some of the ancestors of such 'false' concepts may also need to be dropped from C-space, depending on the exact outcome of the pertinent evaluation.
C-K theory is, by definition, a model of the design process, and does not contain a strategy for designing. However, modeling PA with C-K theory helps to clarify the former's strategy in several respects. First, PA is clearly a depth-first method, attempting to improve and modify the evolving design as much as possible and minimizing backtracking. It also uses a sort of heuristic "cost function" that guides the process to address the more difficult and critical aspects first. This strategy is very different from, for example, the breadth-first functional analysis and morphology method of systematic design [START_REF] Pahl | Engineering design: a systematic approach[END_REF], where all the functions are treated concurrently.
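The depth-first control flow described here can be sketched schematically (our sketch, not the authors' algorithm; it deliberately leaves aside the de-partition move discussed elsewhere in this paper):

    # Illustrative only: a schematic PA loop in which evaluation decides whether to
    # refine the current branch (depth-first) or back up to the next technology.
    def parameter_analysis(technologies, realize, evaluate, refine):
        for technology in technologies:          # TI: candidate technologies, most promising first
            concept = technology                 # current parameter (conceptual-level idea)
            while concept is not None:
                configuration = realize(concept)             # CS: idea -> hardware description
                decision, insight = evaluate(configuration)  # E: deduce behavior, decide
                if decision == "stop":
                    return configuration                     # design judged complete
                elif decision == "improve":
                    concept = refine(configuration, insight) # PI: new parameter from what was learned
                else:                                        # "abandon": backtrack
                    concept = None
        return None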
A second clarification of PA regards its support of innovation. As many solution-driven engineers do, the designers of the decelerator example also began with straightforward, known solutions for vertical descent (parachutes, balloons). This fixation often limits the designer's ability to innovate; however, the PA process demonstrated here allowed recovery from the effect of the initial fixation by learning (through the repeated evaluation of "standard" configurations) during the development process (generating new knowledge in C-K terms) and discovery of a final solution that was not included in the fixation-affected initial set of technologies. Moreover, C-K theory allowed identifying departitioning of concept space as the exact mechanism through which the innovation was achieved.
6 CONCLUSION C-K theory was shown to be able to model PA's steps, which are fundamental design "moves": generating an idea, implementing an idea in hardware representation, and evaluating a configuration. It also showed that PA supports innovative design by providing a means for recovering from fixation effects. Conversely, PA helped to clarify the structure of C-K's concepts, operators and C-space itself, and to emphasize the importance of K-space expansions. Many interesting issues still remain for future research: What particular knowledge and capabilities are needed by the designer when deciding what are the most dominant aspects of the problem in TI, and the most critical conceptual-level issues in each PI step? What exactly happens in K-space during PA as related to the structures of knowledge items and their role as drivers of the design process? Are there additional innovation mechanisms in PA that can be explained with C-K theory? Can C-K theory help compare PA to other design methodologies? In addition, we have already begun a separate investigation of the logic of PA as a special case of Branch and Bound algorithms, where design path evaluation is used for controlling the depth-first strategy in a way that ensures efficiency and innovation.
Figure 1. The prescriptive model of parameter analysis consists of repeatedly applying parameter identification (PI), creative synthesis (CS) and evaluation (E)
Figure 2. Description of the reasoning process used to design airborne decelerators
Figure 3. C-K model of parameter identification (PI): (a) applies to the common case encountered during PA and (b) shows starting with a new technology
Figure 4. C-K model of (a) creative synthesis (CS) and (b) evaluation (E). Dark background denotes a new knowledge item
Figure 5. C-K model of the first PI-CS-E cycle of the decelerator design
Figure 6. C-K model of the fourth PI-CS-E cycle, demonstrating a "de-partition"
ACKNOWLEDGMENTS
The first author is grateful to the chair of "Design Theory and Methods for Innovation" at Mines ParisTech for hosting him in February 2012 and 2013 for furthering this work, and for the partial support of this research provided by the ISRAEL SCIENCE FOUNDATION (grant no. 546/12). | 31,677 | [
"1003635",
"1111",
"1099"
] | [
"84142",
"39111",
"39111"
] |
01485098 | en | [
"shs"
] | 2024/03/04 23:41:48 | 2013 | https://minesparis-psl.hal.science/hal-01485098/file/Hatchuel%20Weil%20Le%20Masson%202011%20ontology%20of%20expansion%20v15.pdf | Armand Hatchuel
email: hatchuel@ensmp.fr
Benoit Weil
email: bweil@ensmp.fr
Pascal Le Masson
email: lemasson@ensmp.fr
Towards an ontology of design: lessons from C-K design theory and Forcing 1
Keywords:
In this paper we present new propositions about the ontology of design and a clarification of its position in the general context of rationality and knowledge. We derive such ontology from a comparison between formal design theories developed in two different scientific fields: Engineering and Set theory. We first build on the evolution of design theories in engineering, where the quest for domain-independence and "generativity" has led to formal approaches, likewise C-K theory, that are independent of what has to be designed. Then we interpret Forcing, a technique in Set theory developed for the controlled invention of new sets, as a general design theory. Studying similarities and differences between C-K theory and Forcing, we find a series of common notions like "d-ontologies", "generic expansion", "object revision", "preservation of meaning" and "K-reordering". They form altogether an "ontology of design" which is consistent with unique aspects of design.
Part 1. Introduction
What is design? Or in more technical terms, can we clarify as rigorously as possible some of the main features of an ontology of design? In this paper, we develop an approach of such ontology that became possible thanks to the following developments:
-the elaboration in the field of engineering of formal design theories, like C-K theory (Hatchuel and Weil 2003, 2009), which are independent of any engineering domain and avoid too strong restrictions about what is designed.
-the exploration of design theories that could have emerged in other fields from a similar process of abstraction and generalization. In this paper, we introduce Forcing [START_REF] Cohen | The independence of the Continuum Hypothesis[END_REF], a technique and branch of Set theory that generalized extension procedures to the creation of new collections of sets. It presents, from our point of view, specific traits of a design theory with highly general propositions.
These design theories offered a unique material for a comparative investigation. The study of their similarities and differences is the core subject of this paper. It will lead us to what can be named an ontology of expansion which clarifies the nature of design. This ontology is no more postulated but revealed by common assumptions and structures underlying these design theories. Therefore, our findings only reach the ontological features consistent with existing formalizations of design. Yet, to our knowledge, such ontology of expansion, as well as the interpretation of Forcing as a design theory, had not been investigated and formulated in existing literature.
Before presenting the main hypotheses and the structure of this paper some preliminary material on design theories is worth mentioning.
Formal design theories. In the field of engineering, efforts to elaborate formal (or formalized) design theories have been persistent during the last decades [START_REF] Yoshikawa | General Design Theory and a CAD System[END_REF][START_REF] Reich | A Critical Review of General Design Theory[END_REF][START_REF] Braha | Topologial structures for modelling engineering design processes[END_REF]. "Formal" means the attempt to reach rigor, if possible logical and mathematical rigor, both in the formulation of hypotheses and the establishment of findings. It also delineates the limited scope and purpose of these theories. Formal design theories (in the following we will say design theories or design theory) are only one part of the literature about design. They neither encompass all the findings of design research [START_REF] Finger | A Review of Research in Mechanical Engineering Design[END_REF]Cross 1993), nor describe all activities involved in design in professional contexts. For instance, it is well known that design practice is shaped by managerial, social and economic forces that may not be captured by formal design theories. Yet, this does not mean that design theories have no impact. Such forces are influenced by how design is described and organized. Actually, it is well documented that design theories, in engineering and in other fields, have contributed to change dominant design practices in Industry [START_REF] Hatchuel | A systematic approach to design theories using generativeness and robustness[END_REF].
Still, the main purpose of design theory is the advancement of design science by capturing the type of reasoning (or model of thought) which is specific to design. As an academic field, design theory has its specific object and cannot be reduced to decision theory, optimization theory or problem-solving theory. Therefore, recent design theories focus on what is called the creative or "generative" [START_REF] Hatchuel | A systematic approach to design theories using generativeness and robustness[END_REF]) aspects of design. Indeed, any design engineer (or designer) uses standard techniques to select or optimise existing solutions. However, design theories target the rationale, the models of thought and reasoning that only appear in design. This special attention does not deny the importance of routinized tasks in design, and design theory should, conceptually, account for both creative and routinized aspects of design, even if it does not include all routinized techniques used in design. Likewise, in mathematics, Set theory is able to account for the core assumptions of Arithmetics, Algebra and Analysis, yet it cannot replace these branches of mathematics. Finally, by focusing on creative design, design theory can complement decision theory by helping engineering and social sciences (economics, management, political science…) to better capture the human capacity to intentionally create new things or systems.
Research methodology. For sure, there is no unique way to explore an ontology of design. However, in this paper we explore a research path that takes into account the cumulative advancement of theoretical work in engineering design. The specific material and methodology of this research follows from two assumptions about the potential contribution of design theories to the identification of an ontology of design.
Assumption 1: Provided they reach a high level of abstraction and rigor, design theories model ontological features of design, Assumption 2: Provided there is a common core of propositions between design theories developed in different fields, this core can be seen as an ontology of design.
An intuitive support for these assumptions and the method they suggest, can be found using an analogy with Physics. If the goal of our research was to find an ontology of "matter" or "time" consistent with contemporary knowledge in Physics, a widely accepted method would be to look in detail to common or divergent assumptions about "matter" or "time" in contemporary theories and physics. And clearly, there is a wide literature about the implications of Special relativity and Quantum mechanics for the elaboration of new ontologies of time and matter. Similarly, our method assumes that design theories have already captured a substantial part of our knowledge about design and may be valid guides for the exploration of an ontology of design.
Outline of the paper. In this section (part 1) we outline the trends towards generality and domain-independence followed by design theories in engineering. They are well illustrated by the specific features of C-K theory [START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF]. Then we discuss the existence of design theories in other fields like science and mathematics. We suggest that in Mathematics, Forcing, a method in Set theory developed for the generation of new sets, can be seen as a design theory. In Part 2 and 3 we give an overview of the principles and findings of both C-K theory and Forcing. In part 4, we compare the assumptions and rationale of both theories. In spite of their different contents and contexts, we find that C-K theory and Forcing present common features that unveil an ontology of design that we characterize as an "ontology of expansion".
1.1-Design theories in engineering: recent trends
In engineering, the development of formal design theories can be seen as a quest for more generality, abstraction and rigor. This quest followed a variety of paths and it is out of the scope of this paper to provide a complete account of all theoretical proposals that occurred in the design literature during the last decades. We shall briefly overview design theories which have been substantially discussed in the literature. Still, we will analyse in more detail C-K theory as an example of formal theories developed in engineering that present a high level of generality. Then we will compare it to a theory born in Mathematics.
Brief overview of design theories. In the field of engineering, General Design Theory [START_REF] Yoshikawa | General Design Theory and a CAD System[END_REF] and Axiomatic Design (AD) [START_REF] Suh | Principles of Design[END_REF] were among the first to present formalized approaches of design. Both approaches have in common to define the design of a new entity as the search of some ideal mappings between required functions and selected attributes. The core value of these theories was to model these functions and attributes with mathematical structures that helped to define and warrant specific operations of design. For instance, in GDT, Hausdorff spaces of functions and attributes were assumed. Such mathematical structures offered the possibility to map new desired intersections of functions by the "best" adapted intersection of attributes. In AD, Matrix algebras and information theory were introduced. They are used to model types of mappings between attributes (called design parameters) and functions (called Functional requirements). These matrices define the design output. Thus, ideal designs can be axiomatically defined as particular matrix structures (AD"s first axiom) and associated to the ideal information required from the design user (AD"s second axiom). GDT and AD were no more dependent on any specific engineering domain but still relied on special mathematical structures that aimed to model and warrant "good" mappings.
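As a reminder of what such matrix structures look like (a standard textbook illustration added for the reader; it is not taken from this paper), AD writes the mapping as FR = A · DP, where FR is the vector of functional requirements and DP the vector of design parameters:

    FR = A · DP,   A = [[A11, 0], [0, A22]] (uncoupled)   or   A = [[A11, 0], [A21, A22]] (decoupled)

A diagonal A defines an uncoupled design, the ideal case under the first (independence) axiom; a triangular A defines a decoupled design, acceptable provided the design parameters are fixed in the right order; a full A couples the requirements and is rejected. Among the designs that satisfy the first axiom, the second axiom selects the one with minimum information content.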
After GDT and AD, the discussion about design theory followed several directions. Authors introduced a more dynamic and process-based view of design (eg. FBS, [START_REF] Gero | Design prototypes: a knowledge representation schema for design[END_REF])); they insisted on the role of recursive logic [START_REF] Zeng | On the logic of design[END_REF] as well as decomposition and combination aspects in design (Zeng and Gu 1999a, b). For this research, we only need to underline that these discussions triggered the quest for more general mathematical assumptions that could: i) capture both mapping and recursive processes within the same framework; ii) account for the "generative" aspect of design. Two recent theories are good examples of these trends.
The first one, called Coupled Design Process (CDP, [START_REF] Braha | Topologial structures for modelling engineering design processes[END_REF], kept the distinction between spaces of functions and spaces of attributes (or design parameters) but it allowed them to evolve dynamically by introducing general topological structures (Closure sets and operators). Thanks to these structures, CDP captured, for instance, the introduction of new functions different from those defined at the beginning of the design process. It also described new forms of interplay between functions and attributes, which could be generated by available data bases and not only by some fixed or inherited definitions of the designed objects. Thus CDP, extended GDT and the idea of a satisfactory mapping was replaced by a co-evolution of functions and attributes. Its mathematical assumptions also accounted for the non-linear and non-deterministic aspects of design.
The second design theory is called C-K theory [START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF][START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF]. C-K theory is consistent with the dynamics captured by CDP. However it is no more built on the distinction between functions and attributes spaces. Instead, it intends to model the special logic that allows a "new object" to appear. This generative aspect is commonly placed at the heart of the dynamics of design. C-K theory models design as the necessary interplay between known and "unknown" (undecidable) propositions. Attributes and functions are seen as tentative constraints used for the stepwise definition of an unknown and desired object. They also play a triggering role in the production of new knowledge. New attributes and new functions are both causes and consequences of changes in the knowledge available. C-K theory explains how previous definitions of objects are revised and new ones can appear, threatening the consistency of past knowledge. Thus the core idea of C-K theory is to link the definition process of a new object to the activation of new knowledge and conversely.
Authors point out [START_REF] Hatchuel | A systematic approach to design theories using generativeness and robustness[END_REF] that all these design theories present neither radically different nor contradictory points of view about design. Rather, they can be seen as steps belonging to the same endeavour: develop design theory on more general grounds by seeking domain independence and increased "generativity":
-Domain independence. Formal design theories aim to define design reasoning without any object definitions or assumptions coming from specific engineering domains. Design theory thus evolves towards a discipline that can be axiomatically built and improved through empirical field research or theory-driven experiments. A similar evolution has already happened in the fields of decision science and machine learning.
-Increased generativity. Design has often been seen as a sophisticated, ill-structured or messy type of problem solving. This vision was introduced by Herbert Simon, but it needs to be extended by introducing a unique aspect of design: the intention to produce "novel" and creative things (Hatchuel 2002). Authors have recently called this intentional search for novelty and surprises "generativity"; it has driven the development of design theories towards more abstract mathematical assumptions [START_REF] Hatchuel | A systematic approach to design theories using generativeness and robustness[END_REF].
This second property of design deserves some additional remarks, because it has crucial theoretical consequences for what can be defined as a design task.
-Intuitively, the formulation of a design task has to maintain some indeterminacy about its goals and some unknown aspects of what has to be designed. If there is already a deterministic and complete definition of the desired object, design is done, or is reduced to the implementation of a predefined constructive algorithm. For instance, finding the solution of an equation when the existence of the solution and the solving procedure are already known should not be seen as a design task. -However, in practice the frontiers of design are fuzzy. For instance, one can generate "novel" objects from random variables: is that design? For sure, a simple lottery will not appear as a design process. However, random fractal figures present complex and surprising forms to observers. Yet surprises are not enough to characterize design. The literature associates design work with the fact that novelty and surprises are: i) intentionally generated [START_REF] Schön | ) Varieties of Thinking. Essays from Harvard's Philosophy of Education Research Center[END_REF]; or ii) if they appear by accident, used as a resource for the design task (an effect that has been popularised as "serendipity"). Authors have already modelled design practice as a combination of intentionality and indeterminacy [START_REF] Gero | Creativity, emergence and evolution in design: concepts and framework[END_REF][START_REF] Braha | Topologial structures for modelling engineering design processes[END_REF]. Domain independence and the modelling of generative processes are the main drivers of recent design theory in engineering. On both aspects, C-K theory can be seen as a good representative of the present stage of abstraction and generality in the field. It will be described in more detail and compared to Forcing for the main purpose of this paper: an investigation of the ontology of design.
The status of design theory: beyond engineering, can we find formal design theories?
As described before, the evolution in engineering has led to design theories that are no longer linked to specific engineering disciplines and domains. This is a crucial observation for our research. If design theory is independent of what is designed, an ontology of design becomes possible. Similarly, an ontology of decision became possible when decision theory was no longer dependent on its context of application. However, independence from engineering domains does not prove that design theory has reached a level of generality that is acceptable outside engineering. Hence, our search for an ontology of design would be more solidly grounded if we could also rely on design theories that emerged in fields other than engineering. Yet, are there other fields where general design theories can be found? And do they present common aspects with engineering design theories? A complete answer to such questions would require a research program in philosophy, art and science that is beyond the scope of one paper. Thus, we focused our inquiry on potential design theories in science and mathematics. We will introduce Forcing in Set theory and explain why, from our point of view, it can be seen as a general design theory.
Design theory in science. In standard scientific research, the generation of new models and discoveries is commonplace. Yet, classically, the generative power of science is based upon a dynamic interaction between theory and experimental work. This view of science has been widely discussed and enriched. At the end of the 19th century, the mathematician Henri Poincaré suggested that the formation/construction of hypotheses is a creative process [START_REF] Poincaré | Science and Hypothesis. science and hypothesis was originally[END_REF]. Since then, it has often been argued that the interaction between theories and experiments follows no deterministic path, or that radically different theories could present a good fit with the same empirical observations. Proposed by Imre Lakatos [START_REF] Worral | Imre Lakatos, the methodology of scientific research programmes[END_REF], the idea of "research programmes" (which can be interpreted, in our view, as "designed research programs") seemed to better account for the advancement of science than a neutral adjustment of theory to facts. In modern physics (relativity theory or quantum mechanics) the intentional generation of new theories is an explicit process. These theories 2 are expected to be conceived in order to meet specific requirements: consistency with previously established knowledge, unification of partial theories, mathematical tractability, capacity to be tested experimentally, prediction of new facts, and so forth. Thus, it is acceptable to say that in classic science new theories are designed. However, to our knowledge, there is no formal design theory that has emerged as a general one in this field [START_REF] Worral | Imre Lakatos, the methodology of scientific research programmes[END_REF]. This is a provisional observation and further research is needed. However, this state of affairs contrasts with what can be found in mathematics, where the generation of new objects has been modelled.
Design theory in mathematics: the Forcing model in Set theory. Following again Henri Poincaré, it is now widely accepted that mathematical objects are created (i.e. designed) to reach increased generality, tractability and novelty. Yet, these views are not enough to offer a formal design theory. For our specific research program, a chapter of Set theory called Forcing deserves special attention, because it builds a highly general process for the design of new objects (set models) within the field of Set theory. Forcing generates new sets that verify the axioms of Set theory (i.e. new "models" of Set theory). It is also a theory that proves why such a technique has general properties and applications. Forcing played a major role in the solution of famous mathematical problems of the 20th century. For instance, we will see how Forcing has been used to generate new real numbers that changed existing ideas about the cardinality (i.e. the "size") of uncountable infinite sets. For sure, Forcing is embedded in the mathematical world of Set theory. However, the level of abstraction of Set theory is such that the following hypothesis can be made; it will be justified after a more detailed presentation of Forcing:
Hypothesis: Due to the abstraction of Set theory, Forcing can be seen as a general design theory
A comparative approach between design theories. Towards a clarification of the ontology of design
If Forcing can be seen as a general design theory, then different design theories could have reached, independently, a high level of abstraction. And if these theories present a common core of propositions, this core would be a good description of what design is essentially about and what makes design reasoning possible. Thus our research method was not to postulate an ontology of design and discuss its validity but to infer it from the comparison of design theories coming from different scientific contexts. In this paper, we focus our comparison on i) C-K theory as a representative of design theories in engineering design; and ii) Forcing as a general design theory in Set theory. This comparison was structured by the following questions:
Q1: What are the similarities and differences between C-K theory and Forcing? Q2: What are the common propositions between such theories? What does this "common core" tell us about the ontology of design?
In spite of their different backgrounds, we found consistent correspondences between the two theories. As expected, they offer new ground for the clarification of an ontology of design. However, such a comparison has limitations that have to be acknowledged. Forcing is a mathematical theory well established in the field of Set theory. The scope of C-K theory is broader: it aims to capture the design of artefacts and systems including physical and symbolic components, and its mathematical formalization is still an open research issue. Therefore, in this paper, we only seek insights revealed by the comparison of their structural assumptions and operations when they are interpreted as two models of design. Our claim is that a more rigorous discussion about the ontology of design can benefit from such a comparative examination of the structure of design theories.
Part 2. C-K theory: modelling design as a dual expansion
C-K theory has been introduced by Hatchuel and Weil [START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF][START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF]. It attempts to describe the operations needed to generate new objects presenting desired properties. The conversation about C-K theory in the literature treats both its theoretical implications and its potential developments (Kazakçi and Tsoukias 2005; Salustri 2005; Reich et al. 2010; Shai et al. 2009; Dym et al. 2005; Hendriks and Kazakçi 2010; Sharif Ullah et al. 2011) 3. In this section we present the main principles of C-K theory; they are sufficient to study its correspondences with Forcing (more detailed accounts and discussions can be found in [START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF][START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF][START_REF] Hendriks | A formal account of the dual extension of knowledge and concept in C-K design theory[END_REF][START_REF] Kazakçi | Is "creative subject" of Brouwer a designer? -an Analysis of Intuitionistic Mathematics from the Viewpoint of C-K Design Theory?[END_REF]).
C-K theory: notions and operators.
Intuitive motivation of C-K theory: what is a design task? C-K theory focuses on a puzzling aspect of design [START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF]: the theoretical and practical difficulty of defining the departure point of a design task. In professional contexts such departure points are called "specifications", "programs" or "briefs". But in all cases, their definition is problematic: they have to indicate some desired properties of an object without being able to give a constructive definition of this object, and without being able to warrant its existence by pre-existing knowledge. This explains why a design task cannot be fully captured by the mere task of mapping attributes onto functions. Design only appears when such a mapping is driven by an equivocal, incomplete, fuzzy or paradoxical formulation. Thus, to better approach design, we need to model a type of reasoning that begins with a proposition that speaks of an object which is desirable, yet partially unknown, and whose construction is undecided with the available knowledge. But this intuitive interpretation leads to difficult modelling issues. How can we reason about objects (or collections of objects) whose existence is undecidable? Moreover, because the desired objects are partially unknown, their design will require the introduction of new objects or propositions that were unknown at the beginning of the process. The aim of C-K theory is to give a formal account of these intuitive observations and their consequences.
Concept and knowledge spaces. The name "C-K theory" mirrors the assumption that design can be modelled as the interplay between two interdependent spaces having different structures and logics: the space of concepts (C) and the space of knowledge (K). "Space" means here a collection of propositions that have different logical statuses and relations. The structures of these two spaces determine the core propositions of C-K theory [START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF]. Space K contains established (true) propositions, or propositions with a clear logical status. Space C is the space where the progressive construction of desired objects is attempted. In this space, we find propositions about objects whose existence is undecided by the propositions available in K: these propositions of space C are called "concepts" in C-K theory. Examples of concepts are propositions like "there exists a flying boat" or "there exists a smarter way to learn tennis". Design begins when a first concept C0 is used as the trigger of a design process. Design is then described as the special transformation of C0 into other concepts until it becomes possible to reject their undecidability by a proof of existence or non-existence in the K-space available at the moment of the proof (the propositions become decidable in the new K-space). The crucial point here is that in space C, the desired unknown objects (or collections of these objects) can only be characterized comprehensionally (by properties) and not extensionally (by elements). If a true extensional definition of these objects existed in K, or was directly deducible from existing K (i.e. there is a true constructive proof of their existence in K), then the design task would already have been done. When a new object is designed, its existence becomes true in space K, the space of known objects and propositions with a decided logical status; i.e. its concept becomes a proposition of K. To summarize:
-Space K contains all established (true) propositions (the available knowledge).
3 There is also documented material on its practical applications in several industrial contexts [START_REF] Elmquist | Towards a new logic for Front End Management: from drug discovery to drug design in pharmaceutical R&D[END_REF][START_REF] Mahmoud-Jouini | Managing Creativity Process in Innovation Driven Competition[END_REF][START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF]Hatchuel et al. 2004[START_REF] Hatchuel | The design of science based-products: an interpretation and modelling with C-K theory[END_REF][START_REF] Gillier | Managing innovation fields in a cross-industry exploratory partnership with C-K design theory[END_REF]; Elmquist and Le Masson 2009). -Space C contains "concepts", which are propositions undecided by K (neither true nor false in K) about some desired and partially unknown objects x.
It follows from these principles that the structure of C is constrained by the special constructive logic of objects whose existence is undecided. The structure of K is a free parameter of the theory. This corresponds to the observation that design can use all types of knowledge. K can be modelled with simple graph structures, rigid taxonomies, flexible "object" structures, special topologies [START_REF] Braha | Topologial structures for modelling engineering design processes[END_REF] or Hilbert spaces if there are stochastic propositions in K. What counts from the point of view of C-K theory is that the structure of K allows one to distinguish between decided and undecidable propositions. Indeed, the K space of an engineer is different from that of an industrial designer: the latter may include perceptions, emotions, and theories about color and form, and this will directly impact the objects they design; but, basically, from the point of view of design theory their model of reasoning can be the same.
Concepts, as defined in C-K theory, attempt to capture the ambiguity and equivocality of "briefs" and "specifications". Therefore, concepts are propositions of the form: "There exists a (non-empty) class of objects x, for which a group of properties p1, p2, ..., pk holds in K" 4. Because concepts are assumed to be undecidable propositions in K, the collection of objects x that they "capture" has an unusual structure. This is a crucial point of C-K theory that can be illustrated with an example E of a design task:
Example E: let us consider the design task E of "new tyres (for ordinary cars) without rubber". The proposition "there exists a (non-empty) class of tyres for ordinary cars without rubber" is a concept, as it can be assumed to be undecidable within our present knowledge. For sure, existing tyres for ordinary cars are all made with rubber, and there are no existing, or immediately constructible, tyres without rubber. Moreover, we know no established and invariant truth that forbids the existence of such new objects, which we call "no-rubber tyres" (example E will be used as an illustration in all sections of this paper). C-K theory highlights the fact that the design task E (and any design task) creates the necessity to reason consistently about "no-rubber tyres", whose existence is undecidable in K. These objects form a class that corresponds to a formula that is undecidable in K.
At this stage, the mathematical formulation of C-K theory is still a research issue, and a key aspect of this discussion is the interpretation and formalization of the unknown and undecidable aspects of a "concept" 5. However, turning undecided concepts into defined and constructible things is what a design task requires, and it is this process that is tentatively described by C-K theory. Necessarily, these operations are "expansions" in both K and C: -in K, we can attempt to "expand" the available knowledge (intuitively, this means learning and experimenting) if we want to reach a decidable definition of the initial concept; -in C, we can attempt to add new properties to the first concept in order to reach decidability. This operation, which we call a partition (see below), is also an expansion of the definition of the designed object. If I say that I want to design a boat that can fly, I can logically expect that I will have to add some properties to the usual definition of boats.
4 It can also be formulated as: "The class of objects x, for which a group of properties p1, p2, ..., pk holds in K, is non-empty". 5 The literature about C-K theory discusses two ways to treat this issue:
-the class of "non rubber tyres for ordinary cars" can be seen as a special kind of set, called C-set, for which the existence of elements is K-undecidable [START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF]. This is the core idea of the theory and the most challenging aspect of its modelling. Clearly assuming "elements" of this C-set will be contradictory with the status of the concept, or we would have to speak of elements without any possibility to define them or to construct them. This is in contradiction with the classic elementarist approach of sets (see Jech, Dehornoy). It means that the propositions "Cset is empty" or "a C-set is non-empty" is K-undecided and only after design is done we will be able to decide this question. Technically, Hatchuel and Weil suggest that C-Sets could be axiomatized within ZF if we reject the axiom of choice and the axiom of regularity, as these axioms assume necessarily the existence of elements. More generally, in space C, where the new object is designed, the membership relation of Set theory has a meaning only when the existence of elements is proved.
-Hendriks and Kazakci [START_REF] Hendriks | Design as Imagining Future Knowledge, a Formal Account[END_REF] have studied an alternative formulation of C-K theory based only on first-order logic. They make no reference to C-sets and they reach similar findings about the structure of design reasoning.
The core proposition of C-K theory is that design appears when both expansions interact. And C-K theory studies the special structure and consequences of such interplay.
-The design process: partitions and C-K operators. As a consequence of the assumptions of C-K theory, design can only proceed by a step-by-step "partitioning" of the initial concept or its corresponding class. Due to the undecidability of concepts and associated classes, "partitions" of a concept cannot be a complete family of disjoint propositions. In the language of C-K theory, partitions are one or several new classes obtained by adding properties (coming from K) to the existing concepts. If Ck is the concept "there exists a non-empty class of objects which verify the properties p0, p1, p2, ... and pk", a partition will add a new property pk+1 to obtain a new concept Ck+1. Such a partition creates a partial order where Ck+1 > Ck. However, in space C the class associated with Ck+1 is not included in the class associated with Ck, as no extensional meaning holds in space C: there is no warranted existence of any element of a class associated with a concept. These additions form a "nested" collection of concepts. Beginning with concept C0, this partitioning operation may be repeated whenever there is an available partitioning property in K, and until the definition of an object is warranted in K.
Having in mind the interplay between C and K, this partitioning process has specific and unique [START_REF] Ullah | On some unique features of Ck theory of design[END_REF] features.
-Each new partition of a concept has an unknown status that has to be "tested" in K. "Testing" means activating new knowledge that may check the status of the new partition (mock-ups, prototypes, experimental plans are usual knowledge expansions related to a group of partitions).
-Testing a partition has two potential outputs: i) the new partition is true or false, and thus forms an expansion in K, or it is still undecidable and forms an expansion in C; ii) testing may also expand existing knowledge in ways that are not related to the status of the tested partition (surprises, discoveries, serendipity...). Such new knowledge can be used to generate new partitions, and so forth. Finally, "testing" the partition of a concept always expands C or expands K by generating new truths. Hence, the more we generate unknown objects in C, the more we may increase the expansion of K.
Example E: Assume that the concept of "non-rubber tyres" is partitioned by the type of material that replaces rubber. This depends on the knowledge we have in K about materials: for instance, plastics, metal alloys and ceramics. Thus we have three possible partitions: "non-rubber tyres with plastics", "non-rubber tyres with metal alloys" and "non-rubber tyres with ceramics". These partitions may create new objects. And testing these partitions may lead to new knowledge in K, for instance new types of plastics, or new materials that are neither plastics, metal alloys nor ceramics! By combining all the assumptions and operations described in C-K theory, the following propositions hold (Hatchuel and Weil 2003, 2009; Hendriks and Kazakci 2010): -Space C necessarily has a tree structure that follows the partitions of C0 (see Fig 1).
-A design solution is the concept Ck that is called the first conjunction, i.e. the first concept to become a true proposition in K. It can also be defined by the series of partitioning properties (p1, p2, ..., pk) that forms the design path going from the initial concept C0 to Ck. When Ck becomes true in K (a design is reached), the classes associated with the series of concepts (C0, C1, C2, ..., Ck) verify the property: for all i, i = 0, ..., k-1, Ck ⊂ Ci. 6 That is, it becomes possible to use the inclusion relationship, since the existence of the elements of Ck is true in K and these elements are also included in all concepts that are "smaller" than Ck (a toy sketch of this tree structure and of a design path is given in the code after this list).
-The other classes resulting from partitions of C0 are concept expansions that do not form a proposition belonging to K.
-All operations described in C-K theory are obtained through four types of operators within and between spaces: C-C, C-K, K-K, and K-C. The combination of these four operators is assumed to capture the specific features of design, including creative processes and the seemingly "chaotic" evolutions of real design work [START_REF] Hatchuel | The design of science based-products: an interpretation and modelling with C-K theory[END_REF]. From the point of view of C-K theory, standard models of thought and rationality do not model concepts and can be interpreted as K-K operators.
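As a concrete reading of the tree structure and design path referred to above, the following toy sketch (in Python) models concepts as tuples of properties and partitions as the addition of a property taken from K. It is not part of C-K theory's formal apparatus; the knowledge base, the property names and the decidability test are invented for the illustration.

```python
# Toy illustration of space C as a tree of concepts (tuples of properties) and of a
# design path.  K is caricatured by (i) the known attributes of existing tyres and
# (ii) a lookup table saying which property lists have become decidable.
# All names and "facts" below are invented for the example.

KNOWN_TYRE_ATTRIBUTES = {"wheel", "made of rubber"}           # definition of tyres in K
K_FACTS = {("without rubber", "with metal alloys"): True}     # pretend K eventually validates this path

def is_expanding_partition(added_property):
    """Expanding partition: the added property is not a known attribute of the object class in K."""
    return added_property not in KNOWN_TYRE_ATTRIBUTES

class Concept:
    def __init__(self, properties, parent=None):
        self.properties = tuple(properties)        # (p0, p1, ..., pk)
        self.parent = parent
        self.children = []

    def partition(self, new_property):
        """Add a property from K to obtain C_{k+1} (> C_k): one more node of the tree."""
        child = Concept(self.properties + (new_property,), parent=self)
        self.children.append(child)
        return child

    def status_in_K(self):
        """Crude stand-in for 'testing' a concept against K (None = still undecidable)."""
        return K_FACTS.get(self.properties[1:])

c0 = Concept(["tyre", "without rubber"])           # C0: undecidable in K
print(is_expanding_partition("without rubber"))    # True: not a known attribute of tyres
c1 = c0.partition("with metal alloys")             # one branch of the tree in space C

if c1.status_in_K():                               # first conjunction: C_k becomes true in K
    path, node = [], c1
    while node:                                    # walk back to C0: the design path (C0, ..., Ck)
        path.append(node.properties)
        node = node.parent
    print(list(reversed(path)))
```

The sketch only mirrors the bookkeeping: the partial order created by partitions and the recovery of the design path once some Ck becomes decidable; it says nothing about how the validating knowledge is actually produced.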
C-K theory: findings and issues
For sure, neither C-K theory nor any other design theory will warrant the existence of "tyres without rubber". Design theories, like C-K theory, only model the reasoning and operations of the design process and capture some of its "odd" aspects. Thus C-K theory introduces the notion of "expanding partition" which captures a wide range of creative mechanisms.
Expanding partition: generating new objects through chimeras and "crazy" concepts. In C-K theory, it is crucial to distinguish between two types of partitions in space C: expanding and restricting ones. To do so we need to introduce some additional structure in K: the definitions of known objects. In example E, the attribute "made with rubber" is assumed to be a common attribute of all known tyres in K. Therefore, the partition "without rubber" is not a known property of the class of objects "tyres". Such a partition is called an expanding partition, as it attempts to expand the definition of tyres by creating new tyres, different from existing ones. Suppose that the concept is now "a cheaper tyre" and the first partition is "a cheaper tyre using rubber coloured in white": if "tyres with white rubber" are known in K, this is called a restricting partition. Restricting partitions only act as selectors among existing objects in K. Expanding partitions, in contrast, have two important roles:
-they revise the definition of objects and potentially create new ones; they are a vehicle for intentional novelty and surprise in design; -they guide the expansion of knowledge in new directions that cannot be deduced from existing Knowledge.
The generative power captured by C-K theory comes from the combination of these two effects of expanding partitions. Revising the definition of objects allows new potential objects to emerge (at least as concepts). But this is not enough to warrant their existence in K. Expanding partitions also foster the exploration of new knowledge, which may help to establish the existence of new objects. Thus, expanding partitions capture what is usually called imagination, inspiration, analogy or metaphor. These are well-known ingredients of creativity. However, their impact on design was not easy to assess and seemed rather irrational. C-K theory models these mechanisms as expanding partitions through the old and simple technique of chimera forming 7: partially defining a new object by unexpected attributes (this definition can be seen as crazy or monstrous with regard to existing knowledge). Yet this is only one part of the mechanism. C-K theory unveils two distinct effects of these chimeras: they allow for new definitions of things and they guide the expansion of new knowledge. By disentangling these two roles and the value of their interplay and superposition, C-K theory explains the rationality of chimeras and seemingly "crazy" concepts in design: they force the designer to explore new sources of knowledge which could, surprisingly, generate new objects different from the "crazy concepts". It is worth mentioning that this is not classic trial-and-error reasoning. Trials are not only selected from a list of predefined possibilities; they are regenerated through C and K expansions. The acquired knowledge is not only due to errors but also comes from unexpected explorations. And finally, most potential trials remain at the stage of chimeras, and yet they have generated new knowledge.
Example E: the concept of "non-rubber tyres using plastics" may appear as a chimera, and rather "crazy" if known plastics do not fit the usual tyre requirements. But, from another point of view, it may trigger the investigation of plastics offering better resistance. Again, these new plastics may not fit. The same process could happen with ceramics and metal alloys and still only reach undecidable concepts. Meanwhile, space K would have been largely expanded: new alloys, new plastics, new ceramics and more. Then, and only then, new partitions can appear in space C, for instance introducing new structures combining multiple layers of different materials and new shapes. The whole logic of space C will change: the first partitions will no longer be on types of materials but on new structural forms that were not known at the beginning of the design process.
An important issue: introducing new objects and the preservation of meaning.
Actually, expanding partitions also raise important issues. If, in example E, design succeeds, then "tyres without rubber" will exist in K. Now, if in K the definition of a tyre was "a special wheel made with rubber", such a definition is no longer consistent with the newly designed object and has to be changed. The design of "tyres without rubber" outdates the old definition of tyres. Yet, revising the definition of tyres may impact other definitions, like the definition of wheels, and so on. Thus, the revision of definitions has to be done without inconsistencies between all old and new objects in K. Clearly, any design should include a rigorous reordering of names and definitions in K in order to preserve the meaning of old and new things. Otherwise, definitions will become less consistent and the whole space K will be endangered. Finally, design theory underlines a hidden yet necessary impact of design: the perturbation of names and definitions. It warns about the necessity of reorganizing knowledge in order to preserve meaning in K, i.e. the consistency of definitions in K.
What is the generality of the principles and issues raised by C-K theory? Are there implicit assumptions about design that limit the generality of the theory? We now explore existing similarities and differences between C-K theory and a general technique of modern set theory called "forcing". This comparison will guide us towards an ontology of "expansion" as a core ontological feature of design.
Part 3. Design inside Set theory: the Forcing method.
Can we find design theories or methods in mathematics? If a crucial feature of design is the intentional generation of new objects, several design approaches can be found. A branch of mathematics called intuitionism even perceives the mathematician as a "creative subject" [START_REF] Kazakçi | Is "creative subject" of Brouwer a designer? -an Analysis of Intuitionistic Mathematics from the Viewpoint of C-K Design Theory?[END_REF]. Within more traditional mathematics there is a wide variety of "extensions" which can be interpreted as design techniques. "Extension" means transforming some existing mathematical entity M (a group, a ring, a field, ...) in order to generate a larger one N, related to but different from M, that contains new entities verifying new selected properties.
The design of complex numbers. Extension procedures are usually dependent on the specific mathematical structure that is extended. Classic mathematics for engineering includes an example of such ad hoc extensions: the generation of complex numbers from real ones. The procedure shows clear features of a design process. Real numbers have no "negative squares"; yet, we can generate new entities called "complex numbers" that are "designed" to verify such a strange property. The method uses special properties of the division of polynomials. Let us divide any polynomial by a polynomial which has no real root (for instance X² + 1); we generate equivalence classes built on the remainder of the division. The equivalence classes obtained by polynomial division by X² + 1 are all of the form aX + b where (a, b) ∈ IR². These equivalence classes have a field structure (i.e. with an addition, a multiplication, ...) and this field contains the field of real numbers (all the equivalence classes where a = 0). In this field the polynomial X² + 1 belongs to the equivalence class 0, i.e. X² + 1 ≡ 0. Hence the classes can be renamed as ai + b, where i verifies i² + 1 = 0, i.e. i can be considered as the complex (or imaginary) root of x² + 1 = 0. Just like the equivalence classes to which they correspond, these ai + b entities form a field (with an addition, a multiplication, ...), i.e. a new set of designed numbers which have the standard properties of reals plus new ones. It is worth mentioning that with the design of complex numbers the definition of "number", like the definition of "tyre" in example E, had to be revised. And the most striking revision is that the new imaginary number i is not a real number, yet all the powers of i² are real! Clearly, this extension method depends on the specific algebra of the ground structure, i.e. the field of real numbers. Therefore, if an extension method acts on general and abstract structures, then it can be interpreted as a general design theory. This is precisely the case of Forcing, discovered by Paul Cohen in 1963 [START_REF] Cohen | The independence of the Continuum Hypothesis[END_REF][START_REF] Cohen | The independence of the Continuum Hypothesis II[END_REF][START_REF] Cohen | Set Theory and the Continuum Hypothesis. Addison-Wesley, Cross N (1993) Science and design methodology: A review[END_REF] 8. It generalizes the extension logic to arbitrary sets and allows the generation of new collections of sets. We first present the principles of Forcing to support the idea that Forcing is a design theory; then, we study its correspondence with C-K theory.
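As a concrete restatement of this construction, the sketch below computes in the equivalence classes aX + b modulo X² + 1, representing each class by the pair (a, b); the multiplication rule is obtained by reducing X² to -1, which is exactly the designed property i² = -1. The class name and the way classes are printed as "ai + b" are our own choices for the illustration.

```python
# Elements of IR[X]/(X^2+1), represented by their remainder a*X + b and stored as (a, b).

class QuotientByX2Plus1:
    def __init__(self, a, b):          # the equivalence class of a*X + b
        self.a, self.b = a, b

    def __add__(self, other):
        return QuotientByX2Plus1(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (aX+b)(cX+d) = ac*X^2 + (ad+bc)X + bd  ->  (ad+bc)X + (bd-ac), since X^2 reduces to -1
        a, b, c, d = self.a, self.b, other.a, other.b
        return QuotientByX2Plus1(a * d + b * c, b * d - a * c)

    def __repr__(self):
        return f"{self.a}i + {self.b}"  # renaming the class of X as the new number i

i = QuotientByX2Plus1(1, 0)                                  # the designed number i
print(i * i)                                                 # 0i + -1 : i^2 = -1, impossible for a real
print(QuotientByX2Plus1(0, 3) * QuotientByX2Plus1(0, 5))     # classes with a = 0 behave as the reals: 0i + 15
```

The reals are recovered as the classes with a = 0, and the single new element i carries a property that no real number can have, which mirrors the role the generic set G will play in Forcing.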
Forcing: designing new collections of sets
Forcing has been described by historians of Set theory as « a remarkably general and flexible method with strong intuitive underpinnings for extending models of set theory » [START_REF] Kanamori | The Mathematical Development of Set Theory from Cantor to Cohen[END_REF]. Let us recall what "models of Set theory" are before describing the Forcing operations.
Figure 2: The Forcing method
-Models of Set theory. Set theory is built on a short list of axioms called the Zermelo-Fraenkel axiomatic (ZF) [START_REF] Jech | Set Theory[END_REF] 9. They define the equality, union, separation and well-formation of sets. They also postulate the existence of some special sets. A model of Set theory is any collection of sets that verifies ZF; it is also called a model of ZF. In the engineering world, the conventional definition of a thing or a class of things (for instance, the definition of tyres) plays the role of a "model of tyres", even if real-life conventions are less demanding than mathematical ones; thus, a model of tyres is a collection of sets of tyres that verify the usual definition of tyres 10. In the industrial world, thanks to technical standards, most engineering objects are defined through models (for example, machine elements).
8 He was awarded a Fields medal for this work. 9 Properly speaking, ZF has infinitely many axioms: its axiomatization consists of six axioms and two axiom schemas (comprehension and replacement), which are infinite collections of axioms of similar form. We thank an anonymous reviewer for this remark.
-Why Forcing? Independent and undecidable propositions in Set theory. After the elaboration of ZF, set theorists faced propositions (P), like the "axiom of choice" and the "continuum hypothesis" 11, that seemed difficult to prove or reject within ZF. This difficulty could mean that these propositions were independent of the axioms of ZF, hence undecidable within ZF, so that models of ZF could either verify or not verify these propositions. Now, proving the existence of a model of ZF that does not verify the axiom of choice is the same type of issue as proving that there is a model of tyres with no rubber. One possible proof, in both cases, is to design such a model! Actually, designing new models of ZF is not straightforward, and here comes Forcing, the general method invented by Paul Cohen.
The forcing method: ground models, generic filters and extensions
Forcing assumes the existence of a first model M of ZF, called the ground model, and then it offers a constructive procedure for a new model N, called the extension model, different from M, which refutes or verifies P and yet is a model of ZF. In other words, Forcing generates new collections of sets (i.e. models) and preserves ZF. Hence, it creates new sets but preserves what can be interpreted as their meaning, i.e. the basic rules of sets. Forcing is not part of the basic knowledge of engineering science and is only taught in advanced Set theory courses. Therefore, a complete presentation of Forcing is beyond the scope of this paper; we will avoid unnecessary mathematical details and focus on the most insightful aspects of Forcing 12 needed to establish the findings of this paper. Moreover, it is precisely because Forcing is a very general technique that one can understand its five main elements and its logic without a complete background in advanced Set theory.
-The first element of Forcing is a ground model M: a well-formed collection of sets, a model of ZF.
-The second element is the set of forcing conditions that will act on M. To build new sets from M, we have to extract elements according to some conditions that can be defined in M. Let us call (Q, <) a set of candidate conditions Q together with a partial order relation < on Q. This partially ordered set (Q, <) is completely defined in M. From Q, we can extract conditions that form series of compatible and increasingly refined conditions (q0, q1, q2, ..., qi) with, for any i, qi < qi-1; this means that each condition refines the preceding one. The result of each condition is a subset of M. Hence the series (qi) builds a series of nested sets, each one included in the preceding set of the series. Such a series of conditions generates a filter 13 F on Q. A filter can be interpreted as a step-by-step definition of some object or set of objects, where each step refines the preceding definition by adding new conditions.
-The third element of Forcing is the dense subsets of (Q, <): a dense subset D of Q is a set of conditions such that any condition in Q can be refined by at least one condition belonging to D. One property of dense subsets is that they contain very long (almost "complete") definitions of things (or sets) on M, since every condition in Q, whatever its "length", can always be refined by a condition in D.
-The fourth element of Forcing (its core idea!) is the formation of a generic filter G which, step by step, completely defines a new set that is not in M! How is it possible to jump out of the box M? Forcing uses a very general technique: it creates an object that has a property that no other object of M can have! (Remark: this is similar to an expanding partition in the language of C-K theory.) Technically, a generic filter is defined as a filter that intersects all dense subsets. In general, this generic filter defines a new set that is not in M 14 but is still defined by conditions from Q, defined on M. Thus, G builds a new object that is necessarily different from all objects defined in M. We can interpret G as a collector of all the information available in M used to create something new, not in M.
-The fifth element of Forcing is the construction method of the extended model N. The new set G is used as the foundation stone for the generation of new sets, systematically combining G with other sets of M (the result is usually called M(G)). The union of M and M(G) is the extension model N. (Fig 2 illustrates how G is built with elements of M, yet G is not in M; then N is built with combinations of G and M.) A crucial aspect of Forcing is the necessity of carefully organizing the naming of the sets of M when they are embedded in the extension model N. Thus, elements of M have two names, the old one and the new one. The generic set G, the newly designed object, has one unique name, as it was not present in M. (A toy illustration of conditions, filters, dense subsets and genericity is sketched in the code after this list.)
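To fix ideas about conditions, filters, dense subsets and genericity, the toy sketch below (in Python) encodes a small, finite, partially ordered stock of 0/1-sequences ordered by end-extension and checks the defining properties stated above. The finite stock and the particular sets F and D are our own illustrative choices: in genuine Forcing the poset is infinite, genericity requires meeting infinitely many dense subsets, and the new set G falls outside the ground model, none of which a finite example can reproduce.

```python
from itertools import combinations, product

# Conditions: finite 0/1-sequences (tuples); q refines p iff q end-extends p.
def refines(q, p):
    return len(q) >= len(p) and q[:len(p)] == p

# A small finite stock of conditions: the empty condition plus all sequences of length 1 and 2.
CONDITIONS = [()] + [t for n in (1, 2) for t in product((0, 1), repeat=n)]

def is_filter(F):
    # upward closed: F contains every condition that a member of F refines
    upward_closed = all(p in F for q in F for p in CONDITIONS if refines(q, p))
    # any two members of F have a common refinement inside F (compatibility)
    compatible = all(any(refines(r, q1) and refines(r, q2) for r in F)
                     for q1, q2 in combinations(F, 2))
    return upward_closed and compatible

def is_dense(D):
    # D is dense if every condition can be refined by some condition in D
    return all(any(refines(d, p) for d in D) for p in CONDITIONS)

F = [(), (0,), (0, 1)]                      # a chain of increasingly refined conditions: a filter
D = [q for q in CONDITIONS if len(q) == 2]  # the "longest" conditions here: a dense subset
print(is_filter(F), is_dense(D), any(q in F for q in D))   # True True True: F meets D
```

In the infinite setting, a filter that meets every dense subset defined in the ground model is generic; its step-by-step refinements then pin down a set that differs from every set of M, which is what the splitting-condition argument of footnote 14 establishes.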
The main Forcing theorems. Paul Cohen invented Forcing and proved a series of theorems that highlighted the generality of the design process. The main results can be synthesized as follows:
-Forcing preserves ZF: Whenever a generic filter G exists, the new model N is a model of ZF. Hence, ZF is preserved and the new sets are not meaningless objects.
-Forcing controls all properties of N: all properties of the elements of N are strictly dependent on the conditions (q0, ..., qi) that formed the generic filter. This means that for any true proposition T in N there exists some qi in G that forces T (written qi ⊩ T). Hence, the appropriate generic filter G warrants the existence of new models of sets with desired properties. The impact of Forcing on Set theory has been paramount, and at the same time historians of mathematics acknowledge its surprising power: « Set theory had undergone a sea-change and beyond how the subject was enriched it is difficult to convey the strangeness of it » [START_REF] Kanamori | The Mathematical Development of Set Theory from Cantor to Cohen[END_REF].
14 G is not in M as soon as Q satisfies the splitting condition: for every condition p, there are two conditions q and q′ that refine p but are incompatible (there is no condition that refines both q and q′). Demonstration (see [START_REF] Jech | Set Theory[END_REF], exercise 14.6, p. 223): suppose that G is in M and consider D = Q \ G. For any p in Q, the splitting condition implies that there are q and q′ that refine p and are incompatible; so one of the two is not in G, hence is in D. Hence any condition of Q is refined by an element of D. Hence D is dense, so G is not generic.
To illustrate Forcing, we give a simple application due to Cohen [START_REF] Jech | Set Theory[END_REF]: the forcing of real numbers from integers (see Fig 3). Ground model: the sets of integers (the power set of the set of integers ω). Forcing conditions Q: the conditions can be written as 0/1-functions defined on a finite subset of ω. Take a finite series of ordered integers (1, 2, 3, 4, ..., k) and assign a 0 or 1 value to each of them; we obtain a k-list such as (0, 1, 1, 1, ..., 0). The condition is defined over the first k integers; among these integers it extracts some of them (those with value 1) and leaves the others (value 0). It also describes the set of all numbers beginning with this sequence of selected integers; this can be assimilated to the reals written in base 2 and beginning with the same k first binary digits. Then, let us build a more refined condition by keeping this first list and assigning a 0 or 1 value to k+1, without changing the values of the preceding k-list. We obtain a new condition of length k+1 that refines the first one. The operation can be repeated infinitely. This extension defines the order relation on the conditions Q. Note that (Q, <) satisfies the splitting condition: for any condition p = (q(0), q(1), ..., q(k)), there are always two conditions that refine p and are incompatible: (q(0), q(1), ..., q(k), 0) and (q(0), q(1), ..., q(k), 1). A series of ordered conditions from length 1 to length k forms a filter; all sets of conditions that contain a refinement of every condition are dense subsets. Generic filter: it is formed by an infinite series of conditions that intersects all dense subsets. Hence, the generic filter G builds an infinite list of selected integers, and G is not in M. This follows directly from the splitting condition (see footnote 14), or it can be demonstrated as follows: for any 0/1 function g in M, Dg = {q ∈ Q : q ⊄ g} is dense, so it meets G, so that G is different from any g (this is the demonstration given in [START_REF] Jech | Set Theory[END_REF]). Note that any real number written in base 2 corresponds to such a function g.
Hence G forms a real number that is different from any real number of the ground model M written in base 2 15, 16.
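The sketch below restates Cohen's example in executable form, under simplifying assumptions of our own: a "ground-model real" is simulated by an ordinary 0/1 function on the integers, and only finitely many conditions are inspected. It checks the two facts used above: every condition splits into two incompatible refinements, and for each such g the set Dg of conditions that already disagree with g somewhere is dense, which is why a generic filter (meeting every Dg) yields a real different from every real of the ground model.

```python
from itertools import product

# Cohen conditions: finite 0/1-sequences q = (q(0), ..., q(k)), ordered by end-extension.
def refines(q, p):
    return len(q) >= len(p) and q[:len(p)] == p

def splittings(p):
    """Every condition has two incompatible refinements (the splitting condition)."""
    return p + (0,), p + (1,)

def disagrees_with(q, g):
    """q belongs to D_g iff it already differs from g at some coordinate."""
    return any(q[n] != g(n) for n in range(len(q)))

def g(n):
    """A 'ground-model real' written in base 2: the digits 1, 0, 1, 0, ..."""
    return 1 if n % 2 == 0 else 0

def refinement_in_Dg(p, g):
    """Density of D_g: any condition p has a refinement that disagrees with g."""
    q0, q1 = splittings(p)
    return q0 if disagrees_with(q0, g) else q1   # one of the two new digits must differ from g

# Check the density claim on every condition of length 3.
for p in [tuple(bits) for bits in product((0, 1), repeat=3)]:
    q = refinement_in_Dg(p, g)
    assert refines(q, p) and disagrees_with(q, g)
print("every length-3 condition has a refinement in D_g")
# A generic filter meets every D_g, hence the real it defines differs from every g of the ground model.
```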
Part 4: C-K theory and Forcing: a correspondence that uncovers an ontology of expansion
Now that we have presented C-K theory and Forcing we can come back to our hypotheses and research questions.
4.1-Forcing as a general design theory
The previous brief introduction to Forcing brings enough material to discuss our claim that Forcing is a general design theory, not an ad hoc technique.
-Design task: like any design project, Forcing needs targeted properties for the new sets to be generated. However, Forcing gives no recipe for finding generic filters for all desired properties of sets. It only explains how such a generation is conceivable without creating nonsense in the world of sets.
-Generality: Forcing uses only universal techniques, like the definition of a new "thing" through a series of special refinements. Indeed, the basic assumptions of Forcing are the axioms of Set theory and the existence of ground models of sets. However, Set theory is one of the most universal languages available.
-Generativity: novelty is obtained by a general method, the generic filter, which is independent of the targeted sets. The generic filter builds a new set that is different from any set that could have been built by a classic combination of existing conditions within M. Thanks to this procedure, the generic filter is different from any such combination. Thus genericity creates new things by stepping out of the combinatorial process within M.
These three observations support the idea that Forcing can be interpreted as a general design theory. Indeed, the word "design" is not part of the Forcing language; it is the notion of "extension" that is used in Forcing and other branches of mathematics. But it is precisely the aim of a design science to unify distinct procedures that appear in different fields under different names when they present an equivalent structure. Such unification is easier when we can compare abstract and general procedures. And Forcing shows that, like design theories in engineering, extensions in mathematics have evolved towards more general models 17. Let us now come back to our comparison, establish similarities and differences between both theories, and show why these reveal specific ontological elements of design.
4.2-C-K theory and Forcing: similarities and differences
At first glance, both theories present a protocol that generates new things that were not part of the existing background. Yet, similarities and differences between these approaches will lead us to highlight common features, which may be explicit in both approaches, or implicit in one and explicit in the other. As main common aspects we find: knowledge expandability, knowledge "voids", and generic expansions. Together they form a basic substrate that makes design possible and unique.
a) Knowledge expandability, invariant and designed ontologies (d-ontologies)
Knowledge expandability. Clearly, the generation of new objects needs new knowledge. In C-K theory this is an explicit operation. C-K theory assumes knowledge expansions that are not only the result of induction rules, which can be interpreted as K-K operations. New knowledge is also obtained by C-K and K-C operators, which have a triggering and guiding role through the formation of expanding partitions and concepts [START_REF] Kazakçi | Is "creative subject" of Brouwer a designer? -an Analysis of Intuitionistic Mathematics from the Viewpoint of C-K Design Theory?[END_REF]. But where is such new knowledge in Forcing? Is this a major difference between the two theories? As already remarked by Poincaré [START_REF] Poincaré | Science and Hypothesis. science and hypothesis was originally[END_REF], one essential method for creating novelty in mathematics is the introduction of induction rules which generate actual infinities. In Set theory, the axiom of infinity allows such repeated induction and plays the role of an endless supplier of new objects. Without such an expansion technique, generic filters are impossible and Forcing disappears. Thus, both theories assume a mechanism for K expandability, even if they use different techniques to obtain it.
Invariant ontologies and the limits of design. The background of Forcing is Set theory. Actually, Forcing creates new models of ZF, but ZF itself is explicitly unchanged by Forcing. The existence of such invariant structures is implicit in C-K theory and relates to implicit assumptions about the structure of K. C-K theory lacks some explicit rules about knowledge: at least, some minimal logic and basic wording that allow consistent deduction and learning. These common rules are thus necessary to the existence of design and, like ZF, may not be changed by design. Yet, it is really difficult to establish ex ante what the invariant rules are that should never be changed by design. This issue unveils an interesting ontological limitation for a design theory: to formulate a design theory we need a minimal language and some pre-established knowledge that is invariant by design! Intuitively, we could expect that the more general this invariant ontology is, the more generative the design theory will be. But it could be argued that too minimal an invariant ontology would hamper the creative power of design. We can only signal this issue, which deserves further research.
By contrast, we can also define variable ontologies, i.e. all definitions, objects and rules that can be changed by design. These variable ontologies correspond to the classic definition of ontologies in computer science or artificial intelligence [START_REF] Gruber | Towards Design Theory and expandable rationality: the unfinished program of Herbert Simon[END_REF]. They are generated and renewed by design: we suggest calling them designed ontologies, or d-ontologies, to recall that they result from a previous design process. Most human knowledge is built on such d-ontologies. Finally, the ontology of design highlights a specific aspect of knowledge: it is not only a matter of truth or validity. Design science also discusses how knowledge shapes, or is shaped by, the frontier between what is invariant and what is designed at the ontological level.
Example E: In the tyre industry, rubber should be seen as a designed ontological element of tyres. Yet, for obvious economic and managerial reasons, it could be considered as an invariant one. In any human activity, the frontier between what is invariant and what can be changed by design is a tough and conflictual issue. The role of design theory is not to tell where such a frontier should be, but to establish that the existence of such a frontier is an invariant ontological element of design, in all domains, be it in mathematics or in engineering.
Beyond invariant ontologies, design needs to generate designed ones, and this requires another interesting aspect of knowledge: the presence of knowledge "voids".
b) Knowledge voids: undecidability and independence
In Set theory, Forcing is used to design models of Sets such that some models satisfy the property P while others verify its negation. These models prove that P is undecidable within Set theory.
When this happens, P can be interpreted as a void in the knowledge about sets. Conversely, the presence of such voids is a condition for Forcing. In C-K theory, concepts are also undecidable propositions that can similarly be seen as voids. Yet, their undecidability is assumed, and they are necessary to start and guide the design process 18. Thus knowledge voids are a common ontology in both theories. Their existence, detection and formulation is a crucial part of the ontology of design. The word "void" is used as a metaphor, conveying the image that these "voids" have to be intentionally "filled" by design 19. As proved in Forcing, they signal the existence of independent structures in existing knowledge or in a system of axioms.
Example E. If one succeeds in designing "tyres without rubber", it will be confirmed that: i) the concept of "tyres without rubber" was undecidable within previous knowledge; and ii) the d-ontology of tyres has become independent from the d-ontology of rubber.
Thus C-K theory and Forcing present consistent views about undecidability and highlight its importance for a science of design: it is both a necessary hypothesis for starting design (C-K theory) and a hypothesis that can only be proved by design (Forcing).
This finding leads to three propositions that explain why an ontology of design is so specific and requires modelling efforts:
-The ontology of design is not linked to the accumulation of knowledge, but to the formation of independent structures (voids) in accumulated knowledge.
-The specific rationality of design is to "fill" such voids in order to create desired things; "filling" means proving the independence between two propositions in K.
-The existence of such desired things remains undecidable as long as they are not designed.
c) Design needs generic processes for expansion
Generic and expanding expansions. In C-K theory, a design solution is a special path (C0, ..., Ck) of the expanded tree of concepts in space C. This design path is obtained through a series of refinements which form a new true proposition in K. Whenever this series is established, several results hold: the partitions that form the design solution are proved compatible in K and define a new class of objects which verifies the first concept C0 (initially undecidable in K). Compared with Forcing, this design path is also a filter, as the path is generated by a step-by-step refinement process. It is also a generic filter in C 20. Hence, the design path is a generic filter in C which includes C0 and "forces" a new set of objects that verify C0. Yet the generation of novelty in C-K theory is not obtained by the mathematical chimera of an actual infinity of conditions, as in Forcing. It is warranted by: i) the assumption of C0 as an undecidable proposition in K at the beginning of design; and ii) at least one expanding partition and one expansion in K, which are necessary to form one new complete design path. Thus, genericity also exists in C-K theory, but it is built not by an infinite induction but by introducing new truths and revising the definition of objects. Finally, C-K theory and Forcing differ by the technique that generates novelty, but both can be seen as generic expansions, as they are obtained by expansions designed to generate an object that is different from any existing object or any combination of existing objects. Thus, generic expansions are a core element of the ontology of design.
Expanding partitions as potential forcings: C-K theory adopts a "real world" perspective. Not all knowledge is given at the beginning of the design process, and C-K operators aim to expand this knowledge. This can be interpreted, though only as a metaphor, in the Forcing language: we could say that expanding partitions offer new potential forcing conditions. However, increasing potential forcings is possible only if expanding partitions are not rejected in K because they contradict some invariant ontology.
K-reordering, new namings and the preservation of meaning. In both Forcing and C-K theory, design generates new things. In mathematics, the generation of new real numbers by Forcing obliged mathematicians to re-discuss the cardinality of the continuous line. We call K-reordering the K-K operations that are needed to account for the safe introduction of new objects with all its consequences. For instance, design needs new names to avoid confusion and to distinguish new objects. Interpretation rules are necessary to preserve meaning with old and new names. As mentioned before, such issues are explicitly addressed in Forcing 21. In the first formulations of C-K theory [START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF][START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF], K-reordering was implicit. Now it is clear that it should receive explicit attention in any design theory. To avoid creating nonsense, design needs such careful K-reordering. These theoretical findings have been confirmed by empirical observations of design teams. During such experiments, authors observed the generation of "noun phrases" [START_REF] Mabogunje | Noun Phrases as Surrogates for Measuring Early Phases of the Mechanical Design Process[END_REF]: this is the response of designers facing the need to invent new names to describe objects defined by unexpected series of attributes. These "noun phrases" also allow some partial K-reordering that preserves meaning during the conversations at work.
20 Proof: for any dense subset D of C, there is a refinement of Ck that is in D. But since Ck is also in K, any refinement of Ck is in K and cannot be in C. Hence Ck is in D. 21 The output of Forcing is not one unique new set G, but a whole extended model N of ZF. The building of the extension model combines subsets of the old ground model M and the new set G. Thus, new names have to be carefully redistributed, so that an element of M with name a gets a new name a′ when considered as an element of the new set. As a consequence of these preserving rules, the extension set is well formed and obeys the ZF axioms (Jech 2000).
Part 5: Discussion and conclusion: an ontology of design.
In the preceding sections we have compared two design theories coming from different fields. Our main assumption was that these theories are sufficiently general to bring solid insights about what design is and what some of its ontological features are. We also expected that these common features would appear when each design theory is used to mirror the other.
What we have found is that an ontology of design is grounded in an ontology of expansion. This means that in any design domain, model or methodology we have to find a common group of basic assumptions and features that warrant a consistent model of expansion. Or, to put it more precisely: if we find a reasoning process where these features are present, we can consider it as a design process. What are these features? We have assumed that they can be inductively obtained from the comparison between two general design theories in different fields. We found six ontological features, which we summarize in the first column of Table 1, where we recall the corresponding elements of each feature for both Forcing (column 2) and C-K theory (column 3). These findings have several implications and open areas for further research that we briefly discuss now.
-An ontology of design needs a dynamic frontier between invariant ontologies and designed ontologies. This proposition has important implications for the status of design. Design cannot be defined as an applied science or as the simple use of prior knowledge. Invariant ontologies can be seen as some sort of universal laws. Yet designed ontologies are not deduced from these laws; their design needs extra knowledge and revised definitions. Moreover, it is not possible to stabilize ex ante the frontier between these two ontologies. For sure, generic expansions need some minimal and invariant knowledge, but design theories say nothing about what such a minimal frontier could be. Take the field of contemporary art: even if art work was not studied in this research, we can conjecture that the invariant ontologies that bear on present artistic work are rather limited. Each artist can design and decide what should stay as an invariant ontology for her own future work. In mathematics we can find similar discussions when axiomatics and foundations are in debate. Therefore, an ontology of design may contribute to the debate about the creative aspects of mathematical work [START_REF] Kazakçi | Is "creative subject" of Brouwer a designer? -an Analysis of Intuitionistic Mathematics from the Viewpoint of C-K Design Theory?[END_REF]. Applying such categories to analyse our own work, we have to acknowledge that the ontology of expansion that we have found is a designed one, not an invariant one. It depends on the design theories that we have compared in this paper, and new ontological features of design may appear if we study other theories. However, by grounding our work on theories that present a high level of generality, we can reasonably expect that we have at least captured some invariant features of design.
-An ontology of design acknowledges voids in knowledge: modelling unknowness. The notion of "voids" opens a specific perspective on knowledge structures. It should not be confused with the usual "lack of knowledge" about something that already exists or is well defined. It is correct to say that "we lack knowledge about the presence of water on Mars": in this sentence the notions of presence, Mars and water do not have to be designed. Instead, knowledge voids designate unknown entities whose existence requires design work. Thus it is not consistent, from our point of view, to say that "we lack knowledge about tyres without rubber": if we want to know something about them, we have to design them first! These findings open difficult questions that need further research: can we detect all "voids" in knowledge? Are there limits to such an inquiry? Are there different ways to conceptualize this metaphor? In our research we modelled "voids" with notions like undecidability and independence, which are linked to the common background of C-K theory and Forcing. Challenging these interpretations further will require exploring new models of what we called "concepts" and "unknown objects". A similar evolution happened with the notion of uncertainty, which was traditionally modelled with probability theory before more general models were suggested (like possibility theories).
-An ontology of design needs generic processes for the formation of new things. An important finding of our comparison is that generating new things needs generic expansions, which are neither pure imagination nor pure combination of what is already known. What we have found is that design needs a specific superposition and interplay of both chimeras and knowledge expansions. C-K theory insists on the dual role of expanding partitions, which make it possible to revise the identity and definition of objects. Forcing is not obtained by a finite combination of elements of the ground model: it first needs to break the ground model by building a new object, the generic filter, and then to recombine it with old ones. This was certainly the most difficult mechanism to capture. Design theories like C-K theory and Forcing clarify such mechanisms, but they are difficult to express in ordinary language. Our research shows that common notions like idea generation, "problem finding" or "serendipity" are only images or elements of a more complex cognitive mechanism. Indeed, it is the goal of theories to clarify what is confused in ordinary language, and design theories have attempted to explain what usually remained obscure in Design. Still, it is a challenge to account more intuitively for the notion of generic expansion.
-An ontology of design needs mechanisms for preservation of meaning and knowledge reordering. This finding signals the price to pay if we want to design and to continue expanding knowledge and things. At the core of these operations we find the simplest and yet most complex task: consistent naming. It is naming that controls the good reordering of knowledge when design is active. Naming is also necessary to accurately identify new "voids", i.e. new undecidable concepts or independent knowledge structures. Naming is also a central task for any industrial activity and organisation. An ontology of expansion tells us that the most consistent way to organize names is to remember how the things we name have been designed and thus differentiated from existing things. Yet, in practice, names tend to have an existence of their own, and it is well documented that this contributes to fixation effects [START_REF] Jansson | Design Fixation[END_REF]. It is also documented that in innovative industries, engineering departments permanently produce a flow of new objects, so that a complete K-reordering becomes almost impossible; this process continuously threatens the validity of naming and component interchangeability [START_REF] Giacomoni | M et gestion des évolutions de données techniques : impacts multiples et interchangeabilité restreinte[END_REF].
Limitations and further research. To conclude we must stress again that our findings are limited by our material and research methodology. Our comparative work could be extended and strengthened by introducing other formal design theories, provided they are more general than C-K theory and Forcing and reveal new ontological features.
An alternative to our work would be to study design from the point of view of its reception, which can be interpreted as a continuation of design or as a K-reordering process, both taking place beyond the designer's work (by clients, users, experts, critics, media, etc.). There is also a wide body of scientific work on perception that has influenced many designers, for instance Gestalt theory or contrast and color theory. One issue for further research could be to compare the ontology of expansion that we have found for Design to existing ontologies of perception.
We also acknowledge that, for instance, social or psychological approaches of design could lead to different perspectives on what design is about. However, the clarification of an ontology of design may contribute to new explorations of the social and psychological conditions of design.
The frontier between invariant and designed ontologies can be interpreted from a social perspective. Design, as we have described it, requires consistent naming and K-reordering, and this also means that special social work and training are needed for the acceptance of design activities. Human societies need both invariance and evolution: words, rules and habits cannot change too rapidly, but they also need to evolve by design. Thus one can ask whether there are social systems that are more or less consistent with the ontology of expansion that we have described. Social and psychological structures indeed play an important role in the fixation of ontologies and in design training and learning. It will be the task of future research to link such theoretical advances to more empirical observations of design tasks [START_REF] Agogué | The Impact of Examples on Creative Design: Explaining Fixation and Stimulation Effects[END_REF].
Implications for design practice. The practical lesson of this theoretical research is rather simple. According to our findings, design has a specific ontology, anchored in subtle and difficult cognitive mechanisms like knowledge voids, generic expansions and K-reorderings. We can thus better understand why design practice can be disconcerting, controversial and stressful, and why empirical design research is so demanding and complex [START_REF] Blessing | What is Engineering Design Research?[END_REF]. The good news is that design theory can cope with the cognitive "chaos" that seems to emerge from design. We understand that design corresponds to a type of rationality that cannot be reduced to standard learning or problem solving. The rationality of design is richer and more general than other rationalities: it keeps the logic of intention but accepts the undecidability of its target; it aims at exploring the unknown and it is adapted to the exploitation of the emergent. Yet its ontology can be explained and, as in any other science, design science can make the obscure and the complex clearer and simpler.
Figure 1: C-K diagram
Figure 2: The forcing method
Figure 3: The generation of Cohen reals by Forcing
Table 1: Ontology of design as a common core of design theories
Ontology of design | Forcing | C-K theory
Invariant ontologies (frontier) | Axioms of Set theory | Basic logic and language; invariant objects
Designed ontologies | New models of Sets | New families of objects
Knowledge expansions | Inductive rules (axiom of infinity) | Discovery or guided exploration
"Voids", undecidability and independence | Independent axioms of Set theory | Concepts and independent structures in K
Generic expansions (generating new things) | Generic filter | Design path with expanding partitions and K-expansions
K-reordering, naming and preservation of meaning | Building rules for the extension model | New names and reorganising the definition of designed ontologies
For instance, there is an active quest for new theoretical physics based on String theory that could replace the standard model of particles.
It may be surprising that the inclusion relation becomes possible: it becomes possible only when the existence is proved.
The idea of Design as Chimera forming can be traced back to Yoshikawa's GDT[START_REF] Yoshikawa | Design Theory for CAD/CAM integration[END_REF] (see the Frodird, p. 177) although the authors didn't use the term chimera and the theoretical properties of such operations were not fully described in the paper.
Such models of things are also present in Design theories (see for instance the "entity set" in GDT[START_REF] Yoshikawa | General Design Theory and a CAD System[END_REF]).
The two propositions of this type that gave birth to the forcing method are well known in set theory. The first one is "every set of nonempty sets has a choice function"; the second one is the existence of infinite cardinals that are intermediate between the cardinal of the integers and the cardinal of the reals also called the continuum hypothesis.
Complete presentations of Forcing can be easily found in standard textbooks in advanced set theory[START_REF] Kunen | The Interplay Between Creativity issues and Design Theories: a new perspective for Design Management Studies?[END_REF][START_REF] Jech | Set Theory[END_REF][START_REF] Cohen | Set Theory and the Continuum Hypothesis. Addison-Wesley, Cross N (1993) Science and design methodology: A review[END_REF]
Filters are standard structures in Set theory. A filter F is a set of conditions of Q with the following properties: non empty; nestedness (if p < q and p in F then q is in F) and compatibility (if p, q are in F, then there is s in F such that s < p and s < q).
To give a hint on this strange property and its demonstration: Cohen follows, as he explains himself, the reasoning of Cantor diagonalization. He shows that the "new" real is different from any real g written in base 2 by showing that there is at least one condition in G that differentiates G and this real (this corresponds to the fact that G intersects Dg, the set of conditions that are not included in g).
Forcing is a mathematical tool that can Design new sets using infinite series of conditions. In real Design, series of conditions are not always infinite.
There are several forms of extensions in Mathematics that cannot be even mentioned in this paper. Our claim is that Forcing, to our knowledge, presents the highest generality in its assumptions and scope.
In a simulation study of C-K reasoning[START_REF] Kazakçi | Simulation of Design reasoning based on C-K theory: a model and an example application[END_REF] voids could be modelled, since knowledge was assumed to have a graph structure
One can also use the image of a "hole". The metaphor of "holes" has been suggested by Udo Lindemann during a presentation about "Creativity in engineering" (SIG Design Theory Workshop, February 2011). It is a good image of the undecidable propositions, or concepts in C-K theory, that trigger a Design process. Udo Lindemann showed that such "holes" can be detected with engineering methods when they are used to find Design ways that were not yet explored. | 86,616 | [
"3386",
"1099",
"1111"
] | [
"39111",
"39111",
"39111"
] |
01485144 | en | [
"shs"
] | 2024/03/04 23:41:48 | 2013 | https://minesparis-psl.hal.science/hal-01485144/file/Hatchuel%20Reich%20Le%20Masson%20Weil%20Kazakci%202013%20ICED%20formatted%20V6%20%2B%20abstract.pdf | Armand Hatchuel
Yoram Reich
Pascal Le Masson
Benoit Weil
Akin Kazakci
Beyond Models and Decisions: Situating Design through generative functions
This paper aims to situate Design by comparison to scientific modeling and optimal Decision. We introduce "generative functions" characterizing each of these activities. We formulate inputs, outputs and specific conditions of the generative functions corresponding to modeling (G m ), optimization (G o ) and Design (G d ). G m follows the classic view of modeling as a reduction of observed anomalies in knowledge, by assuming the existence of unknown objects that may be observed and described with consistency and completeness. G o is possible when free parameters appear in models. G d bears on recent Design theory, which shows that design begins with unknown yet not observable objects to which desired properties are assigned and have to be achieved by design. On this basis we establish that: i) modeling is a special case of Design; ii) the definition of design can be extended to the simultaneous generation of objects (as artifacts) and knowledge. Hence, the unity and variety of design can be explained, and we establish Design as a highly general generative function that is central to both science and decision. Such findings have several implications for research and education.
INTRODUCTION: THE NATURE OF DESIGN THEORY
1. Research goals: in this paper, we aim to situate Design theory by comparison to Science as a modeling activity and to Decision as an optimization activity. To tackle this critical issue, we introduce and formalize generative functions that characterize these three activities. From the study of these generative functions we show that: i) modeling is a special case of design; and ii) Design can be seen as the simultaneous generation of artifacts and knowledge. 2. Research motivation and background. Contemporary Design theories have reached a high level of formalization and generality [START_REF] Hatchuel | A systematic approach of design theories using generativeness and robustness[END_REF]. They establish that design can include, yet cannot be reduced to, classic types of cognitive rationality (problem-solving, trial and error, etc.) [START_REF] Dorst | Design Problems and Design Paradoxes[END_REF][START_REF] Hatchuel | Towards Design Theory and expandable rationality: the unfinished program of Herbert Simon[END_REF]). Even if one finds older pioneers to this approach, modern attempts can be traced back to [START_REF] Yoshikawa | General Design Theory and a CAD System[END_REF] and have been followed by a series of advancements which endeavored to reach a theory of design that is independent of what is designed, and that can rigorously account for the generative (or creative) aspects of Design [START_REF] Hatchuel | A systematic approach of design theories using generativeness and robustness[END_REF]. General Design Theory [START_REF] Yoshikawa | General Design Theory and a CAD System[END_REF], Coupled Design Process [START_REF] Braha | Topologial structures for modelling engineering design processes[END_REF], Infused Design [START_REF] Shai | Infused Design: I Theory[END_REF]), Concept-Knowledge (C-K theory) [START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF] are representatives of such endeavor. The same evolution has occurred in the domain of industrial design where early esthetic orientations have evolved towards a comprehensive and reflexive approach of design that seeks new coherence [START_REF] Margolin | Design in History[END_REF]. Such academic corpus opens new perspectives about the situation of Design theory within the general landscape of knowledge and science: the more Design theory claims its universality, the more it is necessary to explain how it can be articulated to other universal models of thought well known to scientists. Still the notion of "Design theory" is unclear for the non-specialist: it has to be better related to standard forms of scientific activity. To advance in this direction this paper begins to answer simple, yet difficult, questions like: what is different between Design and the classic scientific method? Why design theory is not simply a decision theory? In this paper we focus on the relation between design, modeling and optimization; the latter are major and dominant references across all sciences. 3. Methodology. Authors (Cross 1993;[START_REF] Zeng | On the logic of design[END_REF][START_REF] Horvath | A treatise on order in engineering design research[END_REF]) have already attempted to position Design in relation with Science. [START_REF] Rodenacker | Methodisches Konstruieren. 
Konstruktionsbücher[END_REF] considered that Design consisted in: i) analyzing "physical phenomena" based on scientific modeling; and ii) "inverting" the logic by beginning with selecting a function and addressing it by using known physical models of the phenomena (see p. 22 in [START_REF] Rodenacker | Methodisches Konstruieren. Konstruktionsbücher[END_REF]). After WW2, Simon's approach of the artificial proposed a strong distinction between science and design [START_REF] Simon | The Sciences of the Artificial[END_REF]. However, Simon's Design theory was reduced to problem solving and did not capture specific traits of design [START_REF] Hatchuel | Towards Design Theory and expandable rationality: the unfinished program of Herbert Simon[END_REF][START_REF] Dorst | Design Problems and Design Paradoxes[END_REF]. [START_REF] Farrell | The Simon-Kroes model of technical artitacts and the distinction between science and design[END_REF] criticized the Simonian distinction, considering that design and science have a lot in common. Still science and design are not specified with enough rigor and precision in these comparisons. Our aim is to reach more precise propositions about "scientific modeling" and "optimal decision" and to establish similarities and differences with design theory, at the level of formalization allowed by recent design theories. The core of this paper is the analysis of modeling, decision and design through generative functions.
For each generative function we define its inputs and outputs, as well as the assumptions and constraints to be verified by these functions. This common formal language will help us establish the relations and differences between design theory, modeling theory and decision theory. 4. Paper outline. Section 2 presents a formal approach of classic modeling theory and decision theory. Section 3 shows why Design differs from modeling and decision theory. Section 4 outlines differences in the status of the "unknown" in each case. We show that modeling can be interpreted as the design of knowledge. It establishes that science and decision are centrally dependent of our capacity to design.
MODELING AND DECISION: UNKNOWN OBJECTS AS OBSERVABLES
Modeling: anomalies and unknown objects
The classic task of Science (formed in the 19 th Century), was to establish the "true laws of nature". This definition has been criticized during the 20 th century: more pragmatic notions about Truth were used to define scientific knowledge, based on falsifiability [START_REF] Poincaré | Science and Hypothesis[END_REF][START_REF] Popper | The Logic of Scientific Discovery[END_REF]; laws are interpreted as provisional "scientific models" [START_REF] Kuhn | The Structure of Scientific Revolutions[END_REF]McComas 1998;[START_REF] Popper | The Logic of Scientific Discovery[END_REF]. The conception of "Nature" itself has been questioned. The classic vision of "reality" was challenged by the physics of the 20 th century (Relativity theory, Quantum Mechanics). The environmental dangers of human interventions provoked new discussions about the frontiers of nature and culture. Yet, these new views have not changed the scientific method i.e. the logic of modeling. It is largely shared that Science produces knowledge using both observations and models (mostly mathematical, but not uniquely). The core of the scientific conversation is focused on the consistency, validity, testability of models, and above all, on how models may fit existing or experimentally provoked observations. To understand similarities and differences between Design theory and modeling theory, we first discuss the assumptions and generative function that define modeling theory.
The formal assumptions of modeling
Modeling is so common that its basic assumptions are widely accepted and rarely reminded. To outline the core differences or similarities between Modeling theory and Design theory, these assumptions have to be clarified and formalized. We adopt the following notations: -X i is an object i that is defined by its name "X i " and by additional properties.
-K i (X i ) is the established knowledge about X i (e.g. the collection of its properties). Under some conditions described below, they may form a model of X i -K(X i ) is the collection of models about all the X i s. At this stage we only need to assume that K follows the classic axioms of epistemic logic [START_REF] Hendricks | Mainstream and Formal Epistemology[END_REF]) (see section 4). Still, modeling theory needs additional assumptions (these are not hypotheses; they are not discussed):
A1. Observability of objects and independence from the observer. Classic scientific modeling assumes that considered objects X i are observable: it means that the scientist (as the observer) can perceive and/or activate some observations x i about X i . The quality and reliability of these observations is an issue that is addressed by statistics theory. These observations may impact on what is known K i (X i ) and even modify some parameters of X i i.e. some subsets of K i (X i ) but it is usually assumed that observations do not provoke the existence of X i , i.e. the existence of the X i s is independent of the observer. For instance in quantum mechanics, the position and momentum of a particle are dependent of the observation, not its existence, mass or other physical characteristics. (Here we adopt what is usually called the positivistic approach of Science. Our formalization also fits with a constructivist view of scientific modeling but it would be too long to establish it in this paper.) A2. Model consistency and completeness: K(X i ) is a model of the X i s if two conditions defined by the scientist are verified:
-Consistency: the scientist can define a consistency function H, that tests K(X i ) (no contradictions, no redundant propositions, simplicity, unity, symmetry etc…):
H(K(X i )) true means K(X i ) is a consistent model.
-Completeness: we call Y the collection of observations (or data coming from these observations) that can be related to the X i s. The scientist can define a completeness function D that checks (K(X i ) -Y): D(K(X i ) -Y) holds means that K(X i ) sufficiently predicts Y. Obviously, there is no universal formulation of H and D; scientific communities tend to adopt common principles for consistency and completeness. For our research, what counts is the logical necessity of some H and D functions to control the progress of modeling. Notations: for the sake of simplicity, we will write ∆H > 0 (resp. ∆D > 0) when the consistency (resp. completeness) of knowledge has increased.
A3. Modeling aims to reduce knowledge anomalies. The modeling activity (the research process) is stimulated by two types of "anomalies" that may appear separately or together: -K(X i ) seems inconsistent according to H. For instance K(X i ) may lack unity or present contradictions (Ockham's razor is a criterion of economy in the constitution of K). -New observations Y appear or are provoked by an experiment, and do not fit, according to D, with what is described or expected by K(X i ); or K(X i ) predicts observations Y * that have never happened or are contradictory with available ones. For instance the Higgs boson was predicted by the standard theory of particles and was observed several decades after its prediction.
A4. Hypothesizing and exploring unknown objects. Facing anomalies, the scientist makes the hypothesis that there may exist an unknown object X x , observable but not yet observed, that would reduce the anomalies if it verifies some properties. Anomalies are perceived as signs of the existence of X x , and the modeling process will activate two interrelated activities.
-The elaboration of K(X x ) will hopefully provide a definition of X x and validate its expected properties. Optimization procedures can routinize such elaboration [START_REF] Schmidt | Distilling free-form natural laws from experimental data[END_REF]. -The expansion of Y, i.e. new provoked observations (experimental plans), may also increase information about (X x , K(X x )). Ideally, the two series should converge towards an accepted model K x (X x ) that increases H and D. This process may also provoke a revision of previous knowledge K(X i ), which we will note K'(X i ) in all the paper (revised knowledge on X i ).
Some examples of scientific modeling
Example 1 X-Rays. When the story of X rays begun, many objects were already known (modelled): electricity, light, electromagnetic waves, photography where common K(X i ) for scientists. Research was stimulated by the formation of a photographic anomaly Y: a photosensitive screen became fluorescent when Crookes tubes were discharged in a black room. Roentgen hypothesized the existence of an unknown radiation, X x , that was produced by the Crookes tube and could produce a visible impact on photographic screens. It took a long period of work combining hypothesis building and experimental testing before X rays were understood and the photographic anomaly reduced. Example 2 New planets. We find a similar logic in the discovery of Neptune and then Pluto, the "planet X". In the 1840s, astronomers had detected a series of irregularities in the path of Uranus, an anomaly Y which could not be entirely explained by Newton gravitational theory applied to the thenknown seven planets (the established K(X i )). Le Verrier proposed a new model with eight planets (K'(X i ), K(X x )) in which the irregularities are resolved if the gravity of a farther, unknown planet X x was disturbing Uranus path around the Sun. Telescopic observations confirming the existence of a major planet were made by Galle, working from Le Verrier's calculations. The story followed the same path with the discovery of Pluto, which was predicted in the late 19 th century to explain newly discovered anomalies in Uranus' trajectory (new Y). For decades, astronomers suggested several possible celestial coordinates (i.e. multiple possible K(X x )) for what was called the "planet X". Interestingly enough, even today astronomers go on studying other models K(X x ) to explain Uranus trajectory, integrating for instance new knowledge on Neptune mass, gained by Voyager 2's 1989 flyby of Neptune.
Corollary assumptions in modeling theory
Modeling theory is driven by the criteria of consistency H and completeness D that allow detecting anomalies of knowledge before any explanation has been found. Hence, modeling needs the independence between X x and the criteria that judge the consistency and completeness of K(X i ): H and D. This assumption is necessary because H(K(X i ) and D(K(X i )-Y)) have to be evaluated when X x is still unknown and its existence not warranted (only K(X i ) and Y are known). Still, as soon as (X x , K(X x )) are formulated, even as hypotheses, H and D can take into account this formulation. Finally, modeling can be described through what we call a generative function G m . Definition: in all the following, we call generative function a transformation where the output contains at least one object (X x , K(X x )) that was unknown in the input of the function [START_REF] Hatchuel | A systematic approach of design theories using generativeness and robustness[END_REF]) and which knowledge has been increased during the transformation. In the case of modeling, this generative function can be structurally defined as:
G m : (K(X i ), Y) → (K(X x ), K'(X j ))
under the conditions that:
-D(K(X i ) -Y) does not hold (i.e. there is an anomaly in the knowledge input of G m )
-H(K'(X j ) ∪ K(X x )) -H(K(X i )) > 0, or ∆H > 0 (i.e. the new models are more consistent than the previous ones)
-D((K(X i ) ∪ K'(X j ) ∪ K(X x )) -Y) holds, or ∆D > 0 (i.e. the new models better fit with the observations)
The generative function G m only acts on knowledge but not on the existence of modeled objects. It helps to detect the anomalies and reduce distance between knowledge and observations.
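The following Python sketch is our own toy encoding of G m (it is not the authors' formalism): knowledge bases are represented as sets of literals, H as the absence of contradictions, D as coverage of the observations Y, and the three conditions above are checked explicitly. All concrete facts used in the example are illustrative.

```python
# Toy encoding of the modeling generative function G_m.

def H(K):
    """Consistency: no literal appears together with its negation."""
    return not any(("not " + k) in K for k in K)

def D(K, Y):
    """Completeness w.r.t. observations: every observation in Y is covered."""
    return Y <= K

def G_m(K_Xi, Y, K_Xj_revised, K_Xx):
    """Return the enlarged knowledge if the three conditions hold:
    an anomaly on the input, consistency preserved, anomaly reduced."""
    assert not D(K_Xi, Y), "no anomaly: nothing to model"
    K_new = K_Xi | K_Xj_revised | K_Xx
    assert H(K_new), "the hypothesized object breaks consistency"
    assert D(K_new, Y), "the hypothesized object does not explain Y"
    return K_new

# Neptune-like toy example: Uranus' path deviates from the known-planet model.
K_known = {"seven planets", "gravitation law"}
Y_obs = {"Uranus path irregular"}
K_hypothesis = {"eighth planet beyond Uranus", "Uranus path irregular"}
print(G_m(K_known, Y_obs, set(), K_hypothesis))
```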
Decisions and decidable parameters: models as systems of choice
The output of a modeling process is a transformation of K(X i ) that includes a new model K x (X x ) that defines an observable X x and captures its relations with other X i s. A decision issue appears when this new object X x can be a potential instrument for action through some program about X x .
Models as programs
The path from the discovery of a new object to a new technology is a classic view (yet limited as we will see in later sections) of design and innovation. This perspective assumes that K(X x ) can be decomposed into two parts: K u (X x ) which is invariant and K f (Xx) which offers free parameters (d 1 , d 2 ,..,d i ) that can be decided within some range of variation. Example 3: X rays consisted in a large family of electromagnetic radiations, described by a range of wavelengths and energies. The latter appeared as free parameters that could be controlled and selected for some purpose. The design of specific X-rays artefacts could be seen as the "best choice" among these parameters in relation to specific requirements: functionality, cost, danger, etc. The distinction between K f and K u clarifies the relation between the discovery of a new object and the discovery of a decision space of free parameters where the designer may "choose" a strategy. Decision theory and/or optimization theory provide techniques that guide the choice of these free parameters.
Optimization: generating choices
The literature about Decision theory and optimization explores several issues: decision under uncertainty, multicriteria or multiple-agent decision making, etc. In all cases, the task is to evaluate and select among alternatives. Classic "optimization theory" explores algorithms that search for the "best" or "most satisficing" choices in a decision space which contains a very large number of free possibilities -a number so large that systematic exploration of all possibilities is infeasible even with the most powerful computers. In recent decades optimization algorithms have been improved through inspiring ideas coming from material science (simulated annealing) or biomimicry (genetic algorithms, ant-based algorithms…). However, from a formal point of view, the departure point of all these algorithms is a decision space (K(X x ), D(d j ), O(d j )), where: -K(X x ) is an established model of X x ; -D(d j ) is the space of acceptable decisions about the d j s, which are the free parameters of X x ; -O(d j ) is the set of criteria used to select the "optimal" group of decisions D * (d j ). The task of these algorithms can be seen as a generative function G o that transforms the decision space into D * (d j ), which is the optimal decision. From the comparison of G m and G o , it appears that they both generate new knowledge, but in a different way. Modeling may introduce a new X x , whereas optimization only produces knowledge on the structure of K(X x ) from the perspective of some criterion O. (If O were independent of X x (for instance, if O is a universal cost function), it would be possible to integrate both functions into one unique modeling function including optimization, G m,o : (K(X i ), Y) → (K'(X j ), K(X x ), D * (d j )), where H, D and O hold. Yet, in most cases, O may depend on the knowledge acquired about X x .) We now compare these structural propositions to the generative function associated to Design theory.
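As a concrete but purely illustrative sketch (ours, not from the paper), the decision function G o can be reduced to a brute-force search over a small space of free parameters; real optimization algorithms only explore such spaces more cleverly. The parameter names and the cost function below are invented for the example.

```python
# Toy sketch of G_o: choose D*(d_j) in the admissible space D(d_j)
# according to the criteria O, here encoded as a cost to minimize.
from itertools import product

def G_o(decision_space, cost):
    """Brute-force optimizer returning the admissible decision of minimal
    cost; simulated annealing or genetic algorithms would search the same
    space more efficiently."""
    return min(decision_space, key=cost)

# X-ray-like illustration: d_1 = wavelength index, d_2 = exposure level.
wavelengths = range(1, 6)          # admissible values for d_1
exposures = range(1, 4)            # admissible values for d_2
D_space = list(product(wavelengths, exposures))

# O: trade-off between matching a target wavelength and limiting the dose.
cost = lambda d: abs(d[0] - 3) + 2 * d[1]
print("D* =", G_o(D_space, cost))   # -> (3, 1)
```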
3
DESIGN: THE GENERATION OF NEW OBJECTS Intuitively, Design aims to define and realize an object X x that does not already exist, or that could not be obtained by a deduction from existing objects and knowledge. This intuition has been formalized by recent design theories [START_REF] Hatchuel | A systematic approach of design theories using generativeness and robustness[END_REF]). However, it mixes several assumptions that imply, as a first step of our analysis, strong differences between Design and modeling and need to be carefully studied. In the next developments we will follow the logic of C-K design theory to formalize the generative function of Design [START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF][START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF].
-Unknowness, desirability and unobservability Unknown objects X x are necessary to modeling theory. Design also needs unknown objects X x. According to C-K design theory, these objects do not exist and hence are not observable when design begins. They will exist only if design succeeds. Actually, when design starts, these objects are unknown and only desirable. How is it possible? They are assigned desirable properties P(X x ) and they form a concept (X x, P(X x )) , where P is the only proposition that is formulated about the specific unknown X x that has to be created by design. Similarly to the O of G o , P refers to a set of criteria to be met by X x . Moreover, within existing K(X i ), the existence of such concept is necessarily undecidable [START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF]. X x is not assumed as an observable object like in modeling, thus it can be viewed as an imaginary object. In design, X x is only partially imagined: design only needs that we imagine the concept of an object, but its complete definition has to be elaborated and realized. This has important consequences for the generative function of Design.
-Design as decided anomalies Like in modeling, we again assume K(X i ). Now, Design is possible only if, between the concept (X x , P(X x )) and K(X i ), the following relations hold:
-(K(X i ) ⇒ P(X x )) is wrong (i.e. what we know about the X i s cannot imply the existence of X x )
-(K(X i ) ⇒ non-P(X x )) is wrong (i.e. what we know about the X i s cannot forbid the existence of X x ).
These relations mean that K(X i ) is neither a proof of the existence of X x , nor a proof of its nonexistence. Hence, the existence of (X x , P(X x )) is undecidable, yet desirable, under K(X i ). Remark: undecidability can be seen as the anomaly specific to Design. It is not an observed anomaly, a distance between observations and K(X i ); it is a decided anomaly created by the designer when she builds the concept (X x , P(X x )). This makes a major difference between modeling and design.
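The undecidability test above can be made concrete with a small toy sketch (our illustration, not part of the paper): K is encoded as facts plus Horn rules, entailment is naive forward chaining, and a concept is accepted as a design starting point only if K proves neither P(X x ) nor its negation. The facts, rules and property names are invented for the example.

```python
# Toy check of the "decided anomaly": P is undecidable under K.

def entails(facts, rules, goal):
    """Naive forward chaining over Horn rules (premises -> conclusion)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return goal in known

def is_design_concept(facts, rules, P, not_P):
    """(X_x, P(X_x)) is a concept iff K proves neither P nor non-P."""
    return not entails(facts, rules, P) and not entails(facts, rules, not_P)

K_facts = ["battery technology 2012", "fuel car range 700km"]
K_rules = [(["battery technology 2012"], "electric range 150km")]
P = "electric car with near-700km range"
print(is_design_concept(K_facts, K_rules, P, "non " + P))   # True
```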
-The generative function of design: introducing determination function Design theory is characterized by a specific generative function G d that aims to build some K(X x ) that proves the existence of X x, and P(X x ). As we know that K(X i ) cannot prove this existence, Design will need new knowledge. This can be limited to K(X x ) or, in the general case, this can require, like in modeling, to revise (X i , K(X i )) into (X j , K'(X j )) different from X x . These (X j , K'(X j )) were also unknown when design began, thus design includes modeling. The generative function of design G d is:
G d : (K(X i ), P(X x )) → (K'(X j ), K(X x ))
with the following conditions (two are identical for modeling and the third is specific to design):
-∆H ≥ 0, which means that Design creates objects that maintain or increase consistency
-∆D ≥ 0, which means that Design maintains or increases completeness
-(K(X i ) ∪ K'(X j ) ∪ K(X x )) ⇒ ((X x exists) and (P(X x ) holds))
The third condition can be called a determination function, as it means that Design needs to create the knowledge that determines the realization of X x and the verification of P(X x ). This condition did not appear in the generative function of modeling. We will show that it was implicit in its formulation.
-Design includes decision, yet free parameters have to be generated Design could appear as a special case of decision theory: it begins with a decided anomaly and it aims to find some free parameters that, when "optimized", will warrant P(X x ). However, the situation is different from the decision theory analyzed previously: when design begins the definition parameters of X x are unknown, they have to be generated before being decided.
-Design observes "expansions" i.e. potential components of X x
As mentioned earlier, when Design begins, X x is not observable; it will be observed only when its complete definition will be settled, its existence warranted and made observable. So what can be observed during design if X x still does not exist? We may think that we could observe "some aspects" of X x . This is not a valid formulation as it assumes that X x is already there and we could capture some of its traits. But X x cannot be "present" until we design it and prove its existence. What can be, and is, done is to build new objects that could potentially be used as components of X x . These objects can be called expansions C i (X x ) (we use here the language of C-K design theory). Their existence and properties cannot be deduced from K(X i ), they have to be observed and modeled. Obviously if one of these expansions C j (X x ) verifies P, it can be seen as a potential design of X x . Usually, these expansions only verify some property P' that is a necessary (but not sufficient) condition for P. By combining different expansions, X x will be defined and P verified. The notion of "expansion" unifies a large variety of devices, material or symbolic, usually called sketches, mock-sup, prototypes, demonstrators, simulation etc. These devices are central for Design practice and are well documented in the literature [START_REF] Goldschmidt | The dialectics of sketching[END_REF][START_REF] Tversky | What do sketches say about thinking[END_REF][START_REF] Subrahmanian | Boundary Objects and Prototypes at the Interfaces of Engineering Design[END_REF]. Still they received limited attention in science (except in experimental plans) because they were absent of Modeling or Decision theory. Observing expansions generates two different outputs: i) some "building bricks" that could be used to form X x ; ii) new knowledge that will stimulate modeling strategies or new expansions. Thus, G d can be formulated more precisely by introducing expansions in its output:
G d : (K(X i ), P(X x )) → (K'(X j ), C i (X x )),
and some subgroup C m of the expansions is such that X x = ∩ C m (X x ) and verifies P.
-G d does not generate a pure combination of the X i s: design goes out of the box This is a corollary of all previous findings. Because X x is unknown and undecidable when related to K(X i ), if a successful design exists, it will be composed of expansions that are different from any of the X i s (and outside the topology of the X i s). Hence, there is no combination of the X i s that would compose X x . G d necessarily goes out of the X i s' box! Creativity is not something added to design: genuine design is creative by definition and necessity. Example 4: the design of electric cars. The use of electric power in cars is not a design task; it is easy to compose an electric car with known components. Design begins, for instance, with the concept "an electric car with an autonomous range that is not too far from existing cars using fuel power". Obviously, this concept was both highly desired by carmakers and undecidable some years ago. Today, it is easy to observe all the new objects and knowledge that have been produced in existing electric cars that are now proposed, thus observable: new architectures, new battery technologies and management systems, new car heating and cooling systems, new stations for charging or for battery exchange… New types of cars have also been proposed, like the recent Twizy by Renault, which won the Red Dot "best of the best" design award in 2012. Still, commercialized cars could be seen as only expansions of the concept, as none of them has reached the same autonomy as existing fuel cars (circa 700 km). From a theoretical point of view, commercial products are only economic landmarks of an ongoing design process. This example also illustrates the variety of design propositions predicted by the theory.
4
COMPARISON AND GENERALIZATION: DESIGN AS THE SIMULTANEOUS GENERATION OF ARTEFACTS AND MODELS Now we can compare similarities and differences between Design, modeling and Decision theories. Table 1 synthesizes what we have learned about their generative functions.
Table 1: Comparison of generative functions
Generative function | Modeling: G m | Decision: G o | Design: G d
Status of the unknown | X x is unknown, yet observable and independent; Y forms an anomaly | X x presents free parameters to be decided; the optimum is unknown | X x is unknown, assigned desirable properties, not observable
Input | (K(X i ), Y), with Y not explained by the X i s | (K(X x ), D(d j ), O(d j )) | (K(X i ), P(X x )), with P(X x ) undecidable relative to K(X i )
Output | K'(X j ), K(X x ) | D * (d j ) | K'(X j ), K(X x )
Conditions | consistency ∆H > 0; completeness ∆D > 0 | O(D * (d j )) holds | ∆H ≥ 0; ∆D ≥ 0; determination: (X x exists) and (P(X x ) holds)
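One way to make this comparison concrete, purely as an illustration on our side, is to encode each generative function as a small data structure; the field values below simply transcribe the rows of Table 1.

```python
# Illustrative transcription of Table 1 as Python data structures.
from dataclasses import dataclass, field

@dataclass
class GenerativeFunction:
    name: str
    unknown_status: str
    inputs: list
    outputs: list
    conditions: list = field(default_factory=list)

G_m = GenerativeFunction(
    "modeling", "unknown, observable, independent; Y forms an anomaly",
    ["K(X_i)", "Y"], ["K'(X_j)", "K(X_x)"],
    ["delta H > 0", "delta D > 0"])
G_o = GenerativeFunction(
    "decision", "free parameters known, optimum unknown",
    ["K(X_x)", "D(d_j)", "O(d_j)"], ["D*(d_j)"],
    ["O(D*(d_j)) holds"])
G_d = GenerativeFunction(
    "design", "unknown, desired, not observable",
    ["K(X_i)", "P(X_x)"], ["K'(X_j)", "K(X_x)"],
    ["delta H >= 0", "delta D >= 0", "X_x exists and P(X_x) holds"])

for g in (G_m, G_o, G_d):
    print(f"{g.name}: {g.inputs} -> {g.outputs}")
```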
Discovery, invention and the status of the unknown
One can first remark the structural identity between the outputs of G d and G m . It explains why it is actually cumbersome to distinguish between "invention" and "discovery": in both cases, a previously unknown object has been generated. Yet this distinction is often used to distinguish between science and design. The difference appears in the assumptions on the unknown in each generative function: in modeling, the unknown is seen as an "external reality" that may be observed; in design, it is a desirable entity to bring to existence. The structure of the generative functions will show us that these differences mask deep similarities between modeling and Design.
Modeling as a special form of Design
We can now reach the core of our research by examining how these generative functions can be combined. Three important findings can be established. Proposition 1: Design includes modeling and decision. This is obvious from the structure of G m . Proof : Design needs to observe and test expansions as potential components of X x :
G d : (K(X i ), P(X x )) → (K'(X j ), C i (X x )) so that X x = ∩ C m (X x ) and verifies P. If for some X u = ∩ C m (X x ), P(X u ) does not hold, then non-P(X u ) can be interpreted as an observed anomaly. Let us set Y = non-P(X u ): Y appears as a provoked observation; if K(X u ) is the available knowledge about X u , then G d leads to a modeling issue corresponding to the following generative function: (K(X u ), Y) → (K'(X j ), K(X z )), where X z is a new unknown object that has to be modeled and observed. Example 5: each time a prototype (∩ C m (X x )) fails to meet design targets, it is necessary to build a scientific model of the failure. One famous historical example occurred at GE Research in the 1920s, where Langmuir's study of light bulb blackening led to the discovery of plasma, which earned him the Nobel Prize in 1932 [START_REF] Reich | The Making of American Industrial Research, Science and Business at GE and Bell[END_REF]. Proposition 2: Modeling needs design. This proposition seems less obvious: where is design in the reduction of anomalies that characterizes modeling? Actually, Design is implicit in the conditions of the generative function of modeling: D((K(X i ) ∪ K'(X j ) ∪ K(X x )) -Y) holds. Proof: This condition simply says that adding K(X x ) to available knowledge explains Y. Checking this proposition may require an unknown experimental setting that should desirably fit with the requirements of D. Let us call E x this setting and D r (E x ) these requirements. Hence, the generative function of modeling G m : (K(X i ), Y) → (K(X x ), K'(X j )) is now dependent on a design function:
G d : (K(X i ), D r (E x )) → (K'(X j ), K(E x ))
Example 6: There are numerous examples in the history of science where modeling was dependent on the design of new experimental settings (instruments, machines, reactors,…). In the case of the Laser, the existence of this special form of condensed light was theoretically predicted by Einstein as early as 1917 (a deduction from available K(X i )). Yet, the type of experimental "cavity" where the phenomena could appear was unknown and would have to meet extremely severe conditions. Thus, the advancement of knowledge in the field was dependent on Design capabilities [START_REF] Bromberg | Engineering Knowledge in the Laser Field[END_REF]. Proposition 3: Modeling is a special form of Design This proposition will establish that in spite of their differences, modeling is an implicit Design. Let us interpret modeling using the formal generative function of Design. Such operations are precisely those where the value of formalization is at its peak. Intuitively modeling and Design seem two logics with radically different views of the unknown; yet structurally, modeling is also a design activity. Proof: we have established that the generative function of modeling G m is a special form of G d .
G m : (K(X i ), Y) → (K'(X j ), K(X x )) with the conditions:
a. D(K(X i ) -Y) does not hold
b. H(K'(X j ) ∪ K(X x )) -H(K(X i )) > 0
c. D((K(X i ) ∪ K'(X j ) ∪ K(X x )) -Y) holds.
Now, instead of considering an unknown object X x to reduce the anomaly created by Y, let us consider an unknown piece of knowledge K x (note that we do not write K(X x ) but K x ). In addition, we assume that K x verifies the following properties b' and c', which are obtained by replacing K(X x ) by K x in conditions b and c (recall that condition a is independent of K x and is thus unchanged):
b': H(K'(X j ) ∪ K x ) -H(K(X i )) > 0
c': D((K(X i ) ∪ K'(X j ) ∪ K x ) -Y) holds.
Remark that K x , like X x , is unknown and not observable: it has to be generated (designed). If we set a function T(K x ) that is true if "(b' and c') hold", then G m is equivalent to the design function:
G d : (K(X i ), T(K x )) → (K'(X j ), K x )
Proof: if design succeeds then T(K x ) is true; this implies that c' holds, i.e. K x reduces the anomaly Y. Thus, modeling is equivalent to a design process where the generation of knowledge is itself designed. Conditioning the "realism" of K x : with this interpretation of modeling, we miss the idea that K x is about an observable and independent object X x . Design may lead to an infinite variety of K x which all verify T(K x ). We need an additional condition that would control the "realism" of K x . "Realism" was initially embedded in the assumption that there is an observable and independent object. Now assume that we introduce a new design condition V(K x ) which says: K x should be designed independently from the designer. This would force the designer to only use observations and test expansions (for instance knowledge prototypes) that are submitted to the judgment of other scientists. Actually, this condition is equivalent to the assumption of an independent object X x . Proof: to recognize that X x exists and is independent of the scientists, we need to prove that two independent observers reach the same knowledge K(X x ). Conditioning the design of K x by V(K x ) is equivalent to assuming the existence of an independent object. This completes our proof that modeling is a special form of Design.
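For readability, the substitution at the heart of Proposition 3 can also be written compactly as follows; this display is only our transcription of the argument above, in the notation already used in the text.

```latex
% Proposition 3: modeling as the design of knowledge (transcription).
\[
\begin{aligned}
&G_m:\ (K(X_i),\,Y)\ \longrightarrow\ \big(K'(X_j),\,K(X_x)\big)\\
&\text{replace the unknown object } X_x \text{ by an unknown knowledge } K_x
 \text{ subject to } T(K_x)\text{:}\\
&\quad (b')\ \ H\big(K'(X_j)\cup K_x\big)-H\big(K(X_i)\big)>0,\qquad
 (c')\ \ D\big((K(X_i)\cup K'(X_j)\cup K_x)-Y\big)\ \text{holds};\\
&\text{then } G_m \text{ is equivalent to the design function }
 G_d:\ (K(X_i),\,T(K_x))\ \longrightarrow\ (K'(X_j),\,K_x).
\end{aligned}
\]
```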
Generalization: design as the simultaneous generation of objects and knowledge
Design needs modeling but modeling can be interpreted as the design of new knowledge. Therefore we can generalize design as a generative function that simultaneously applies to a couple (X x , K(X x )):
-Let us call Z i = (X i , K(X i )) and Z x = (X x , K(X x )).
-In classic epistemic logic, for all U, K(K(U)) = K(U): this only means that we know what we know; and since K(X x ) ⇒ X x , then K(X x , K(X x )) = (K(X x ), K(K(X x ))) according to the distribution axiom [START_REF] Hendricks | Mainstream and Formal Epistemology[END_REF], which means that K is consistent with implication rules.
-Then, K(Z i ) = K(X i , K(X i )) = (K(X i ), K(K(X i ))) = (K(X i ), K(X i )) = K(X i ); and similarly K(Z x ) = K(X x ).
the generalized generative function G dz can be written with the same structure as G d :
G dz : (K(Z i ), L(Z x )) → (K'(Z j ), K(Z x )), where L(Z x ) is the combination of all desired properties related to the couple (X x , K(X x )):
-Assigned property to X x : P(X x ) -Conditions on K(X x ): consistency ∆H > 0; completeness ∆D > 0 Example 7: there are many famous cases where new objects and new knowledge is generated, e.g. the discovery of "neutral current" and the bubble chamber to "see" them at CERN in the 1960s [START_REF] Galison | How Experiments End[END_REF], or DNA double helix and the X-ray diffraction of biological molecules, needed for the observation [START_REF] Crick | What Mad Pursuit: A Personal View of Scientific Discovery. Basic Books, New York Cross N[END_REF]. This result establishes that the generative function of design is not specific to objects or artefacts. The standard presentations of modeling or design are partial visions of Design. Confirming the orientation of contemporary Design theory, our research brings rigorous support to the idea that Design is a generative function that is independent of what is designed and simultaneously generates objects and the knowledge about these objects according to the desired properties assigned to each of them.
5
CONCLUDING REMARKS AND IMPLICATIONS. 1. Our aim was to situate design and design theory by comparison to major standard references like scientific modeling and Decision theory. To reach this goal, we have not followed the classic discussions about science and design. Contemporary design theory offers a new way to study these issues: it has reached a level of formalization that can be used to organize a rigorous comparison of design, modeling and optimization. We used this methodology to reach novel and precise propositions. Our findings confirm previous research that insisted on the similarities between Design and Science, but they go beyond such general statements: we have introduced the notion of generative functions, which makes it possible to build a common formal framework for our comparison. We showed that design, modeling and decision correspond to various visions of the unknown. Beyond these differences, we have established that modeling (hence optimization) can be seen as a special form of design, and we have made explicit the conditions under which such a proposition holds. Finally, we have established the high generality of Design, which simultaneously generates objects and knowledge. These findings have two series of implications, which are also areas for further research.
2. On the unity and variety of forms of design: tell us what is that unknown that you desire… Establishing that design simultaneously generates objects and knowledge clarifies the unity of design. Engineers, scientists, architects and product creators are all designers. They do not differ in the structure of their generative functions; they differ in the desired properties they assign to the objects (or artifacts) and to the knowledge they generate. Scientists desire artefacts and knowledge that verify consistency, completeness and determination, and they tend to focus on the desires of their communities. Engineers give more importance to the functional requirements of the artefacts they build; they also design knowledge that can be easily learned, transferred and systematized in usual working contexts. Architects have desires in common with engineers regarding the objects they create, but they do not aim at a systematized knowledge about elegance, beauty or urban values. Professional identities tend to underestimate the unity of design, to overemphasize the specificity of their desires and to confuse these desires with the generative functions they have to enact. This has led to persistent misunderstandings and conflicts, and it has also fragmented the scientific study of design. It is still common to distinguish between "the technology" and "the design" of a product, as if generating a new technology was not the design of both artefacts and knowledge. Our research certainly calls for an aggiornamento of the scientific status of Design, where its unity will be stressed and used as a foundation stone for research and education.
On the relations between Science and Design
In this paper we avoid the usual debates about the nature of Science, knowledge and Design. We add nothing to the discussions on positivist and constructivist conceptions of reality. Our investigations focus on the operational logic and structure of each type of activity. We find that the status of the unknown is a key element of the usual distinction between design-as-artifact-making and Science-as-knowledge-creation. Still we also establish that Design offers a logic of the unknown that is more general and includes the logic of scientific Knowledge. Design makes explicit what it desires about the unknown. We establish that Science also designs knowledge according to desires but they are implicit or related to a community (not to the unique judgment of one researcher). Obviously, these findings should be better related to contemporary debates in epistemology and philosophy of Science. This task goes largely beyond the scope of this paper. Finally, our main conclusion is that Design theory can serve as an integrative framework for modeling and decision. By introducing desirable unknowns in our models of thought, Design does not create some sort of irrationality or disorder. Instead it offers a rigorous foundation stone to the main standards of scientific thinking.
G o : (K(X x ), D(d j ), O(d j )) → D * (d j ) so that D * (d j ) ⊂ D(d j ) and O(D * (d j )) holds.
Table 1: Comparison of generative functions
Generative function | Modeling: G m | Decision: G o | Design: G d
"3386",
"1111",
"1099",
"10954"
] | [
"39111",
"63133",
"39111",
"39111",
"39111"
] |
01422161 | en | math | 2024/03/04 23:41:48 | 2020 | https://hal.science/hal-01422161v2/file/navier_slip_v2_0.pdf
Jean-Michel Coron
Frédéric Marbach
Franck Sueur
Small-time global exact controllability of the Navier-Stokes equation with Navier slip-with-friction boundary conditions *
Small-time global exact controllability of the Navier-Stokes equation with Navier slip-with-friction boundary conditions
Introduction
Description of the fluid system
We consider a smooth bounded connected domain Ω in R d , with d = 2 or d = 3. Although some drawings will depict Ω as a very simple domain, we do not make any other topological assumption on Ω. Inside this domain, an incompressible viscous fluid evolves under the Navier-Stokes equations. We will name u its velocity field and p the associated pressure. We assume that we are able to act on the fluid flow only on an open part Γ of the full boundary ∂Ω, where Γ intersects all connected components of ∂Ω (this geometrical hypothesis is used in the proof of Lemma 2). On the remaining part of the boundary, ∂Ω \ Γ, we assume that the fluid flow satisfies Navier slip-with-friction boundary conditions. Hence, (u, p) satisfies:
∂ t u + (u • ∇)u -∆u + ∇p = 0 in Ω, div u = 0 in Ω, u • n = 0 on ∂Ω \ Γ, N (u) = 0 on ∂Ω \ Γ. (1)
Here and in the sequel, n denotes the outward pointing normal to the domain. For a vector field f , we introduce [f ] tan its tangential part, D(f ) the rate of strain tensor (or shear stress) and N (f ) the tangential Navier boundary operator defined as:
[f ] tan := f -(f • n)n, (2)
D ij (f ) := 1 2 (∂ i f j + ∂ j f i ) , (3)
N (f ) := [D(f )n + M f ] tan . (4)
Finally, in (4), M is a smooth matrix-valued function describing the friction near the boundary. This is a generalization of the usual condition involving a single scalar parameter α ≥ 0 (i.e. M = αI d ). For flat boundaries, such a scalar coefficient measures the amount of friction. When α = 0 and the boundary is flat, the fluid slips along the boundary without friction. When α → +∞, the friction is so intense that the fluid is almost at rest near the boundary and, as shown by Kelliher in [START_REF] Kelliher | Navier-Stokes equations with Navier boundary conditions for a bounded domain in the plane[END_REF], the Navier condition [D(u)n + αu] tan = 0 converges to the usual Dirichlet condition.
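For instance, in the model case of a flat portion of boundary, say locally {x d = 0} with the fluid on the side {x d > 0}, outward normal n = -e d and scalar friction M = αI d , a direct computation for a tangential component u j (j < d), using u • n = 0 on the boundary (hence ∂ j u d = 0 there), gives [D(u)n] j = -(∂ j u d + ∂ d u j )/2 = -∂ d u j /2, so that N (u) = 0 reads ∂ d u j = 2α u j . Letting α → 0 recovers the free-slip condition ∂ n [u] tan = 0 while, at least formally, α → +∞ forces [u] tan = 0, consistently with the convergence to the Dirichlet condition recalled above.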
Controllability problem and main result
Let T be an allotted positive time (possibly very small) and u * an initial data (possibly very large). The question of small-time global exact null controllability asks whether, for any T and any u * , there exists a trajectory u (in some appropriate functional space) defined on [0, T ] × Ω, which is a solution to (1), satisfying u(0, •) = u * and u(T, •) = 0. In this formulation, system (1) is seen as an underdetermined system. The controls used are the implicit boundary conditions on Γ and can be recovered from the constructed trajectory a posteriori. We define the space L 2 γ (Ω) as the closure in L 2 (Ω) of smooth divergence free vector fields which are tangent to ∂Ω \ Γ. For f ∈ L 2 γ (Ω), we do not require that f • n = 0 on the controlled boundary Γ. Of course, due to the Stokes theorem, such functions satisfy ∫ Γ f • n = 0. The main result of this paper is the following small-time global exact null controllability theorem:
Theorem 1. Let T > 0 and u * ∈ L 2 γ (Ω). There exists a weak controlled trajectory (see Definition 1) u ∈ C 0 w ([0, T ]; L 2 γ (Ω)) ∩ L 2 ((0, T ); H 1 (Ω)) of (1) satisfying u(0, •) = u * and u(T, •) = 0.
Remark 1. Even though a unit dynamic viscosity is used in equation (1), Theorem 1 remains true for any fixed positive viscosity ν thanks to a straightforward scaling argument. Some works also consider the case when the friction matrix M depends on ν (see [START_REF] Paddick | Stability and instability of Navier boundary layers[END_REF] or [START_REF] Wang | Boundary layers in incompressible Navier-Stokes equations with Navier boundary conditions for the vanishing viscosity limit[END_REF]). This does not impact our proofs, in the sense that we could still prove the following: for any ν > 0, any T > 0, any smooth M ν and any initial data u * , one can find boundary controls (depending on all these quantities) driving the initial data back to the null equilibrium state at time T .
Remark 2. Theorem 1 is stated as an existence result. The lack of uniqueness comes both from the fact that multiple controls can drive the initial state to zero and from the fact that it is not known whether weak solutions are unique for the Navier-Stokes equation in 3D (in 2D, it is known that weak solutions are unique). Still in the 3D case, if the initial data u * is smooth enough, it would be interesting to know whether we can build a strong solution to (1) driving u * back to zero (in 2D, global existence of strong solutions is known). We conjecture that building strong controlled trajectories is possible. What we do prove here is that, if the initial data u * is smooth enough, then our small-time global approximate null control strategy drives any weak solution starting from this initial state close to zero.
Figure 1: the physical domain Ω, with the controlled boundary Γ and the uncontrolled boundary ∂Ω \ Γ, on which u • n = 0 and [D(u)n + M u] tan = 0.
Although most of this paper is dedicated to the proof of Theorem 1 concerning the null controllability, we also explain in Section 5 how one can adapt our method to obtain small-time global exact controllability towards any weak trajectory (and not only the null equilibrium state).
A challenging open problem as a motivation
The small-time global exact null controllability problem for the Navier-Stokes equation was first suggested by Jacques-Louis Lions in the late 80's. It is mentioned in [START_REF] Lions | Exact controllability for distributed systems. Some trends and some problems[END_REF] in a setting where the control is a source term supported within a small subset of the domain (this situation is similar to controlling only part of the boundary). In Lions' original question, the boundary condition on the uncontrolled part of the boundary is the Dirichlet boundary condition. Using our notations and our boundary control setting, the system considered is:
∂ t u + (u • ∇)u -∆u + ∇p = 0 in Ω, div u = 0 in Ω, u = 0 on ∂Ω \ Γ. (5)
Global results
The second approach goes the other way around: see the viscous term as a perturbation of the inviscid dynamic and try to deduce the controllability of Navier-Stokes from the controllability of Euler. This approach is efficient to obtain small-time results, as inviscid effects prevail in this asymptotic. However, if one does not control the full boundary, boundary layers appear near the uncontrolled boundaries ∂Ω \ Γ. Thus, most known results try to avoid this situation.
In [START_REF] Coron | Global exact controllability of the 2D Navier-Stokes equations on a manifold without boundary[END_REF], the first author and Fursikov prove a small-time global exact null controllability result when the domain is a manifold without border (in this setting, the control is a source term located in a small subset of the domain). Likewise, in [START_REF] Fursikov | Exact controllability of the Navier-Stokes and Boussinesq equations[END_REF], Fursikov and Imanuvilov prove small-time global exact null controllability when the control is supported on the whole boundary (i.e. Γ = ∂Ω). In both cases, there is no boundary layer.
Another method to avoid the difficulties is to choose more gentle boundary conditions. In a simple geometry (a 2D rectangular domain), Chapouly proves in [START_REF] Chapouly | On the global null controllability of a Navier-Stokes system with Navier slip boundary conditions[END_REF] small-time global exact null controllability for Navier-Stokes under the boundary condition ∇ × u = 0 on uncontrolled boundaries. Let [0, L] × [0, 1] be the considered rectangle. Her control acts on both vertical boundaries at x 1 = 0 and x 1 = L. Uncontrolled boundaries are the horizontal ones at x 2 = 0 and x 2 = 1. She deduces the controllability of Navier-Stokes from the controllability of Euler by linearizing around an explicit reference trajectory u 0 (t, x) := (h(t), 0), where h is a smooth profile. Hence, the Euler trajectory already satisfies all boundary conditions and there is no boundary layer to be expected at leading order.
For Navier slip-with-friction boundary conditions in 2D, the first author proves in [START_REF] Coron | On the controllability of the 2-D incompressible Navier-Stokes equations with the Navier slip boundary conditions[END_REF] a small-time global approximate null controllability result. He proves that exact controllability can be achieved in the interior of the domain. However, this is not the case near the boundaries. The approximate controllability is obtained in the space W -1,∞ , which is not a strong enough space to be able to conclude to global exact null controllability using a local result. The residual boundary layers are too strong and have not been sufficiently handled during the control design strategy.
For Dirichlet boundary conditions, Guerrero, Imanuvilov and Puel prove in [START_REF] Guerrero | Remarks on global approximate controllability for the 2-D Navier-Stokes system with Dirichlet boundary conditions[END_REF] (resp. [START_REF] Guerrero | A result concerning the global approximate controllability of the Navier-Stokes system in dimension 3[END_REF]) for a square (resp. a cube) where one side (resp. one face) is not controlled, a small time result which looks like global approximate null controllability. Their method consists in adding a new source term (a control supported on the whole domain Ω) to absorb the boundary layer. They prove that this additional control can be chosen small in L p ((0, T ); H -1 (Ω)), for 1 < p < p 0 (with p 0 = 8/7 in 2D and 4/3 in 3D). However, this norm is too weak to take a limit and obtain the result stated in Open Problem (OP) (without this fully supported additional control). Moreover, the H -1 (Ω) estimate seems to indicate that the role of the inner control is to act on the boundary layer directly where it is located, which is somehow in contrast with the goal of achieving controllability with controls supported on only part of the boundary.
All the examples detailed above tend to indicate that a new method is needed, which fully takes into account the boundary layer in the control design strategy.
The "well-prepared dissipation" method
In [START_REF] Marbach | Small time global null controllability for a viscous Burgers' equation despite the presence of a boundary layer[END_REF], the second author proves small-time global exact null controllability for the Burgers equation on the line segment [0, 1] with a Dirichlet boundary condition at x = 1 (implying the presence of a boundary layer near the uncontrolled boundary x = 1). The proof relies on a method involving a well-prepared dissipation of the boundary layer. The sketch of the method is the following:
1. Scaling argument. Let T > 0 be the small time given for the control problem. Introduce ε ≪ 1 a very small scale. Perform the usual small-time to small-viscosity fluid scaling u ε (t, x) := εu(εt, x), yielding a new unknown u ε , defined on a large time scale [0, T /ε], satisfying a vanishing viscosity equation. Split this large time interval in two parts: [0, T ] and [T, T /ε].
2. Inviscid stage. During [0, T ], use (up to the first order) the same controls as if the system was inviscid. This leads to good interior controllability (far from the boundaries, the system already behaves like its inviscid limit) but creates a boundary layer residue near uncontrolled boundaries.
3. Dissipation stage. During the long segment [T, T /ε], choose null controls and let the system dissipate the boundary layer by itself thanks to its smoothing term. As ε → 0, the long time scale compensates exactly for the small viscosity. However, as ε → 0, the boundary layer gets thinner and dissipates better.
The key point in this method is to separate steps 2 and 3. Trying to control both the inviscid dynamic and the boundary layer at the end of step 2 is too hard. Instead, one chooses the inviscid controls with care during step 2 in order to prepare the self-dissipation of the boundary layer during step 3. This method will be used in this paper and enhanced to prove our result. In order to apply this method, we will need a very precise description of the boundary layers involved.
Boundary conditions and boundary layers for Navier-Stokes
Physically, boundary layers are the fluid layers in the immediate vicinity of the boundaries of a domain, where viscous effects prevail. Mathematically, they appear when studying vanishing viscosity limits while maintaining strong boundary conditions. There is a huge literature about boundary conditions for partial differential equations and the associated boundary layers. In this paragraph, we give a short overview of some relevant references in our context for the Navier-Stokes equation.
Adherence boundary condition
The strongest and most commonly used boundary condition for Navier-Stokes is the full adherence (or no-slip) boundary condition u = 0. This condition is most often referred to as the Dirichlet condition although it was introduced by Stokes in [START_REF] Gabriel | On the effect of the internal friction of fluids on the motion of pendulums[END_REF]. Under this condition, fluid particles must remain at rest near the boundary. This generates large amplitude boundary layers. In 1904, Prandtl proposed an equation describing the behavior of boundary layers for this adherence condition in [START_REF] Prandtl | Uber flussigkeits bewegung bei sehr kleiner reibung[END_REF]. Heuristically, these boundary layers are of amplitude O(1) and of thickness O( √ ν) for a vanishing viscosity ν. Although his equation has been extensively studied, much is still to be learned. Both physically and numerically, there exists situations where the boundary layer separates from the border: see [START_REF] Cowley | Computer extension and analytic continuation of Blasius' expansion for impulsive flow past a circular cylinder[END_REF], [START_REF] Guyon | Hydrodynamique physique[END_REF], [START_REF] Van Dommelen | On the Lagrangian description of unsteady boundary-layer separation. I. General theory[END_REF], or [START_REF] Van Dommelen | The spontaneous generation of the singularity in a separating laminar boundary layer[END_REF]. Mathematically, it is known that solutions with singularities can be built [START_REF] Weinan | Blowup of solutions of the unsteady Prandtl's equation[END_REF] and that the linearized system is ill-posed in Sobolev spaces [START_REF] Gérard | On the ill-posedness of the Prandtl equation[END_REF]. The equation has also been proved to be ill-posed in a non-linear context in [START_REF] Guo | A note on Prandtl boundary layers[END_REF]. Moreover, even around explicit shear flow solutions of the Prandtl equation, the equation for the remainder between Navier-Stokes and Euler+Prandtl is also ill-posed (see [START_REF] Grenier | Boundary layers[END_REF] and [START_REF] Grenier | Spectral stability of Prandtl boundary layers: an overview[END_REF]).
Most positive known results fall into two families. First, when the initial data satisfies a monotonicity assumption, introduced by Oleinik in [START_REF] Oleȋnik | On the mathematical theory of boundary layer for an unsteady flow of incompressible fluid[END_REF], [START_REF] Oleȋnik | Mathematical models in boundary layer theory[END_REF]. See also [START_REF] Alexandre | Well-posedness of the Prandtl equation in Sobolev spaces[END_REF], [START_REF] Gérard-Varet | Gevrey Stability of Prandtl Expansions for 2D Navier-Stokes[END_REF], [START_REF] Masmoudi | Local-in-time existence and uniqueness of solutions to the Prandtl equations by energy methods[END_REF] and [START_REF] Xin | On the global existence of solutions to the Prandtl's system[END_REF] for different proof techniques in this context. Second, when the initial data are analytic, it is both proved that the Prandtl equations are well-posed [START_REF] Sammartino | Zero viscosity limit for analytic solutions, of the Navier-Stokes equation on a half-space. I. Existence for Euler and Prandtl equations[END_REF] and that Navier-Stokes converges to an Euler+Prandtl expansion [START_REF] Sammartino | Zero viscosity limit for analytic solutions of the Navier-Stokes equation on a half-space. II. Construction of the Navier-Stokes solution[END_REF]. For historical reviews of known results, see [START_REF] Weinan | Boundary layer theory and the zero-viscosity limit of the Navier-Stokes equation[END_REF] or [START_REF] Nickel | Prandtl's boundary-layer theory from the viewpoint of a mathematician[END_REF]. We also refer to [START_REF] Maekawa | The Inviscid Limit and Boundary Layers for Navier-Stokes Flows[END_REF] for a comprehensive recent survey.
Physically, the main difficulty is the possibility that the boundary layer separates and penetrates into the interior of the domain (which is prevented by the Oleinik monotonicity assumption). Mathematically, Prandtl equations lack regularization in the tangential direction thus exhibiting a loss of derivative (which can be circumvented within an analytic setting).
Friction boundary conditions
Historically speaking, the adherence condition is posterior to another condition stated by Navier in [START_REF] Navier | Mémoire sur les lois du mouvement des fluides[END_REF] which involves friction. The fluid is allowed to slip along the boundary but undergoes friction near the impermeable walls. Originally, it was stated as:
u • n = 0 and [D(u)n + αu] tan = 0, (6)
where α is a scalar positive coefficient. Mathematically, α can depend (smoothly) on the position and be a matrix without changing much the nature of the estimates. This condition has been justified from the boundary condition at the microscopic scale in [START_REF] Coron | Derivation of slip boundary conditions for the Navier-Stokes system from the Boltzmann equation[END_REF] for the Boltzmann equation. See also [START_REF] Golse | From the Boltzmann equation to the Euler equations in the presence of boundaries[END_REF] or [START_REF] Masmoudi | From the Boltzmann equation to the Stokes-Fourier system in a bounded domain[END_REF] for other examples of such derivations.
Although the adherence condition is more popular in the mathematical community, the slip-withfriction condition is actually well suited for a large range of applications. For instance, it is an appropriate model for turbulence near rough walls [START_REF] Edward | Lectures in mathematical models of turbulence[END_REF] or in acoustics [START_REF] Geymonat | On the vanishing viscosity limit for acoustic phenomena in a bounded region[END_REF]. It is used by physicists for flat boundaries but also for curved domains (see [START_REF] Einzel | Boundary condition for fluid flow: curved or rough surfaces[END_REF], [START_REF] Guo | Slip boundary conditions over curved surfaces[END_REF] or [START_REF] Panzer | The effects of boundary curvature on hydrodynamic fluid flow: calculation of slip lengths[END_REF]). Physically, α is homogeneous to 1/b where b is a length, named slip length. Computing this parameter for different situations, both theoretically or experimentally is important for nanofluidics and polymer flows (see [START_REF] Barrat | Large slip effect at a nonwetting fluid-solid interface[END_REF] or [START_REF] Bocquet | Flow boundary conditions from nano-to micro-scales[END_REF]).
Mathematically, the convergence of the Navier-Stokes equation under the Navier slip-with-friction condition to the Euler equation has been studied by many authors. For 2D, this subject is studied in [START_REF] Thierry Clopeau | On the vanishing viscosity limit for the 2D incompressible Navier-Stokes equations with the friction type boundary conditions[END_REF] and [START_REF] Kelliher | Navier-Stokes equations with Navier boundary conditions for a bounded domain in the plane[END_REF]. For 3D, this subject is treated in [START_REF] Gung | Boundary layer analysis of the Navier-Stokes equations with generalized Navier boundary conditions[END_REF] and [START_REF] Masmoudi | Uniform regularity for the Navier-Stokes equation with Navier boundary condition[END_REF]. To obtain more precise convergence results, it is necessary to introduce an asymptotic expansion of the solution u ε to the vanishing viscosity Navier-Stokes equation involving a boundary layer term. In [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF], Iftimie and the third author prove a boundary layer expansion. This expansion is easier to handle than the Prandtl model because the main equation for the boundary layer correction is both linear and well-posed in Sobolev spaces. Heuristically, these boundary layers are of amplitude O( √ ν) and of thickness O( √ ν) for a vanishing viscosity ν.
Slip boundary conditions
When the physical friction between the inner fluid and the solid boundary is very small, one may want to study an asymptotic model describing a situation where the fluid perfectly slips along the boundary. Sadly, the perfect slip situation is not yet fully understood in the mathematical literature.
2D. In the plane, the situation is easier. In 1969, Lions introduced in [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF] the free boundary condition ω = 0. This condition is actually a special case of (6) where α depends on the position and α(x) = 2κ(x), where κ(x) is the curvature of the boundary at x ∈ ∂Ω. With this condition, good convergence results can be obtained from Navier-Stokes to Euler for vanishing viscosities.
3D. In the space, for flat boundaries, slipping is easily modeled with the usual impermeability condition u • n = 0 supplemented by any of the following equivalent conditions:
∂ n [u] tan = 0, (7)
[D(u)n] tan = 0, (8)
[∇ × u] tan = 0. (9)
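To see why these conditions coincide when the boundary is flat, consider the model half-space {x 3 > 0}, with outward normal n = -e 3 , and a field u satisfying u • n = 0, that is u 3 = 0 on {x 3 = 0}. Tangential derivatives of u 3 then vanish on the boundary, so that, for j = 1, 2, the components of ∂ n [u] tan are -∂ 3 u j , those of [D(u)n] tan are -(∂ j u 3 + ∂ 3 u j )/2 = -∂ 3 u j /2, and those of [∇ × u] tan are ∂ 2 u 3 -∂ 3 u 2 = -∂ 3 u 2 and ∂ 3 u 1 -∂ 1 u 3 = ∂ 3 u 1 . Hence (7), (8) and (9) all reduce to ∂ 3 u 1 = ∂ 3 u 2 = 0 on the boundary.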
For general non-flat boundaries, these conditions cease to be equivalent. This situation gives rise to some confusion in the literature about which condition correctly describes a true slip condition. Formally, condition (8) can be seen as the limit when α → 0 of the usual Navier slip-with-scalar-friction condition (6). As for condition (9), it can be seen as the natural extension in 3D of the 2D Lions free boundary condition. Let x ∈ ∂Ω. We note T x the tangent space to ∂Ω at x. The Weingarten map (or shape operator) M w (x) at x is defined as a linear map from T x into itself such that M w (x)τ := ∇ τ n for any τ in T x . The image of M w (x) is contained in T x . Indeed, since |n| 2 = 1 in a neighborhood of ∂Ω, 0 = ∇ τ (n 2 ) = 2n • ∇ τ n = 2n • M w τ for any τ .
Lemma 1 ([5], [START_REF] Gung | Boundary layer analysis of the Navier-Stokes equations with generalized Navier boundary conditions[END_REF]). If Ω is smooth, the shape operator M w is smooth. For any x ∈ ∂Ω it defines a self-adjoint operator with values in T x . Moreover, for any divergence free vector field u satisfying u • n = 0 on ∂Ω, we have:
[D(u)n + M w u] tan = 1 2 (∇ × u) × n. (10)
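As an elementary illustration of (10), one can take for Ω the ball of radius R centered at the origin, so that n(x) = x/R and M w = R -1 Id on each tangent plane, and consider the rigid rotation u(x) = a × x for a fixed vector a ∈ R 3 . Then div u = 0, u • n = (a × x) • x/R = 0 and D(u) = 0, so that the left-hand side of (10) is [M w u] tan = u/R. On the other hand, ∇ × u = 2a, so that 1 2 (∇ × u) × n = a × x/R = u/R as well.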
Even though it is a little unusual, it seems that condition (9) actually better describes the situation of a fluid slipping along the boundary. The convergence of the Navier-Stokes equation to the Euler equation under this condition has been extensively studied. In particular, let us mention the works by Beirao da Veiga, Crispo et al. (see [START_REF] Veiga | On the sharp vanishing viscosity limit of viscous incompressible fluid flows[END_REF], [START_REF] Beirão | Sharp inviscid limit results under Navier type boundary conditions. An L p theory[END_REF], [START_REF] Beirão | Concerning the W k,p -inviscid limit for 3-D flows under a slip boundary condition[END_REF], [START_REF] Beirão | The 3-D inviscid limit result under slip boundary conditions. A negative answer[END_REF], [START_REF] Beirão | A missed persistence property for the Euler equations and its effect on inviscid limits[END_REF], [START_REF] Veiga | Reducing slip boundary value problems from the half to the whole space. Applications to inviscid limits and to non-Newtonian fluids[END_REF] and [START_REF] Crispo | On the zero-viscosity limit for 3D Navier-Stokes equations under slip boundary conditions[END_REF]), by Berselli et al. (see [START_REF] Carlo | Some results on the Navier-Stokes equations with Navier boundary conditions[END_REF], [START_REF] Berselli | On the vanishing viscosity limit of 3D Navier-Stokes equations under slip boundary conditions in general domains[END_REF]) and by Xiao, Xin et al. (see [START_REF] Wang | Vanishing viscous limits for 3D Navier-Stokes equations with a Navier-slip boundary condition[END_REF], [START_REF] Wang | Boundary layers in incompressible Navier-Stokes equations with Navier boundary conditions for the vanishing viscosity limit[END_REF], [START_REF] Xiao | On the vanishing viscosity limit for the 3D Navier-Stokes equations with a slip boundary condition[END_REF], [START_REF] Xiao | Remarks on vanishing viscosity limits for the 3D Navier-Stokes equations with a slip boundary condition[END_REF] and [START_REF] Xiao | On the inviscid limit of the 3D Navier-Stokes equations with generalized Navier-slip boundary conditions[END_REF]).
The difficulty comes from the fact that the Euler equation (which models the behavior of a perfect fluid, not subject to friction) is only associated with the u • n = 0 boundary condition for an impermeable wall. Any other supplementary condition will be violated for some initial data. Indeed, as shown in [START_REF] Beirão | A missed persistence property for the Euler equations and its effect on inviscid limits[END_REF], even the persistence property is false for condition (9) for the Euler equation: choosing an initial data such that (9) is satisfied does not guarantee that it will be satisfied at time t > 0.
Plan of the paper
The paper is organized as follows:
• In Section 2, we consider the special case of the slip boundary condition (9). This case is easier to handle because no boundary layer appears. We prove Theorem 1 in this simpler setting in order to explain some elements of our method.
• In Section 3, we introduce the boundary layer expansion that we will be using to handle the general case and we prove that we can apply the well-prepared dissipation method to ensure that the residual boundary layer is small at the final time.
• In Section 4, we introduce technical terms in the asymptotic expansion of the solution and we use them to carry out energy estimates on the remainder. We prove Theorem 1 in the general case.
• In Section 5 we explain how the well-prepared dissipation method detailed in the case of null controllability can be adapted to prove small-time global exact controllability to the trajectories.
A special case with no boundary layer: the slip condition
In this section, we consider the special case where the friction coefficient M is the shape operator M w . On the uncontrolled boundary, thanks to Lemma 1, the flow satisfies:
u • n = 0 and [∇ × u] tan = 0. (11)
In this setting, we can build an Euler trajectory satisfying this overdetermined boundary condition. The Euler trajectory by itself is thus an excellent approximation of the Navier-Stokes trajectory, up to the boundary. This allows us to present some elements of our method in a simple setting before moving on to the general case which involves boundary layers.
As in [START_REF] Coron | On the controllability of the 2-D incompressible Navier-Stokes equations with the Navier slip boundary conditions[END_REF], our strategy is to deduce the controllability of the Navier-Stokes equation in small time from the controllability of the Euler equation. In order to use this strategy, we are willing to trade small time against small viscosity using the usual fluid dynamics scaling. Even in this easier context, Theorem 1 is new for multiply connected 2D domains and for all 3D domains since [START_REF] Coron | On the controllability of the 2-D incompressible Navier-Stokes equations with the Navier slip boundary conditions[END_REF] only concerns simply connected 2D domains. This condition was also studied in [START_REF] Chapouly | On the global null controllability of a Navier-Stokes system with Navier slip boundary conditions[END_REF] in the particular setting of a rectangular domain.
Domain extension and weak controlled trajectories
We start by introducing a smooth extension O of our initial domain Ω. We choose this extended domain in such a way that Γ ⊂ O and ∂Ω \ Γ ⊂ ∂O (see Figure 2.1 for a simple case). This extension procedure can be justified by standard arguments. Indeed, we already assumed that Ω is a smooth domain and, up to reducing the size of Γ, we can assume that its intersection with each connected component of ∂Ω is smooth. From now on, n will denote the outward pointing normal to the extended domain O (which coincides with the outward pointing normal to Ω on the uncontrolled boundary ∂Ω\ Γ). We will also need to introduce a smooth function ϕ : R d → R such that ϕ = 0 on ∂O, ϕ > 0 in O and ϕ < 0 outside of Ō.
Moreover, we assume that |ϕ(x)| = dist(x, ∂O) in a small neighborhood of ∂O. Hence, the normal n can be computed as -∇ϕ close to the boundary and extended smoothly within the full domain O. In the sequel, we will refer to Ω as the physical domain where we try to build a controlled trajectory of (1). Things happening within O \Ω are technicalities corresponding to the choice of the controls and we advise the reader to focus on true physical phenomenons happening inside Ω.
Figure 2: Extension of the physical domain Ω ⊂ O.
Definition 1. Let T > 0 and u * ∈ L 2 γ (Ω). Let u ∈ C 0 w ([0, T ]; L 2 γ (Ω)) ∩ L 2 ((0, T ); H 1 (Ω)
). We will say that u is a weak controlled trajectory of system (1) with initial condition u * when u is the restriction to the physical domain Ω of a weak Leray solution in the space C 0 w ([0, T ]; L 2 (O)) ∩ L 2 ((0, T ); H 1 (O)) on the extended domain O, which we still denote by u, to:
∂ t u + (u • ∇)u -∆u + ∇p = ξ in O, div u = σ in O, u • n = 0 on ∂O, N (u) = 0 on ∂O, u(0, •) = u * in O, (12)
where ξ ∈ H 1 ((0, T ),
L 2 (O)) ∩ C 0 ([0, T ], H 1 (O)
) is a forcing term supported in Ō \ Ω, σ is a smooth non homogeneous divergence condition also supported in Ō \ Ω and u * has been extended to O such that the extension is tangent to ∂O and satisfies the compatibility condition div u * = σ(0, •).
Allowing a non vanishing divergence outside of the physical domain is necessary both for the control design process and because we did not restrict ourselves to controlling initial data satisfying u * • n = 0 on Γ. Defining weak Leray solutions to (12) is a difficult question when one tries to obtain optimal functional spaces for the non homogeneous source terms. For details on this subject, we refer the reader to [START_REF] Farwig | A new class of weak solutions of the Navier-Stokes equations with nonhomogeneous data[END_REF], [START_REF] Farwig | Global weak solutions of the Navier-Stokes equations with nonhomogeneous boundary data and divergence[END_REF] or [START_REF] Raymond | Stokes and Navier-Stokes equations with a nonhomogeneous divergence condition[END_REF]. In our case, since the divergence source term is smooth, an efficient method is to start by solving a (stationary or evolution) Stokes problem in order to lift the non homogeneous divergence condition. We define u σ as the solution to:
∂ t u σ -∆u σ + ∇p σ = 0 in O, div u σ = σ in O, u σ • n = 0 on ∂O, N (u σ ) = 0 on ∂O, u σ (0, •) = u * in O. (13)
Smoothness (in time and space) of σ immediately gives smoothness on u σ . These are standard maximal regularity estimates for the Stokes problem in the case of the Dirichlet boundary condition. For Navier boundary conditions (sometimes referred to as Robin boundary conditions for the Stokes problem), we refer to [START_REF] Shibata | On a generalized resolvent estimate for the Stokes system with Robin boundary condition[END_REF], [START_REF] Shibata | On the Stokes equation with Robin boundary condition[END_REF] or [START_REF] Shimada | On the L p -L q maximal regularity for Stokes equations with Robin boundary condition in a bounded domain[END_REF]. Decomposing u = u σ + u h , we obtain the following system for u h :
∂ t u h + (u σ • ∇)u h + (u h • ∇)u σ + (u h • ∇)u h -∆u h + ∇p h = ξ -(u σ • ∇)u σ in O, div u h = 0 in O, u h • n = 0 on ∂O, N (u h ) = 0 on ∂O, u h (0, •) = 0 in O. (14)
Defining weak Leray solutions to ( 14) is a standard procedure. They are defined as measurable functions satisfying the variational formulation of ( 14) and some appropriate energy inequality. For in-depth insights on this topic, we refer the reader to the classical references by Temam [START_REF] Temam | Theory and numerical analysis[END_REF] or Galdi [START_REF] Galdi | An introduction to the Navier-Stokes initial-boundary value problem[END_REF]. In our case, let L 2 div (O) denote the closure in L 2 (O) of the space of smooth divergence free vector fields tangent to ∂O. We will say that
u h ∈ C 0 w ([0, T ]; L 2 div (O)) ∩ L 2 ((0, T ); H 1 (O)
) is a weak Leray solution to (14) if it satisfies the variational formulation:
- O u h ∂ t φ + O ((u σ • ∇)u h + (u h • ∇)u σ + (u h • ∇)u h ) φ + 2 O D(u h ) : D(φ) + 2 ∂O [M u h ] tan φ = O (ξ -(u σ • ∇)u σ ) φ, (15)
for any φ ∈ C ∞ c ([0, T ), Ō) which is divergence free and tangent to ∂O. We moreover require that they satisfy the so-called strong energy inequality for almost every τ < t:
|u h (t)| 2 L 2 + 4 (τ,t)×O |D(u h )| 2 ≤ |u h (τ )| 2 L 2 -4 (τ,t)×∂O [M u h ] tan u h + (τ,t)×O σu 2 h + 2(u h • ∇)u σ u h + 2 (ξ -(u σ • ∇)u σ ) u h . (16)
In (16), the boundary term is well defined. Indeed, from the Galerkin method, we can obtain strong convergence of Galerkin approximations u n h towards u h in L 2 ((0, T ); L 2 (∂O)) (see [56, page 155]). Although uniqueness of weak Leray solutions is still an open question, it is easy to adapt the classical Leray-Hopf theory proving global existence of weak solutions to the case of Navier boundary conditions (see [START_REF] Thierry Clopeau | On the vanishing viscosity limit for the 2D incompressible Navier-Stokes equations with the friction type boundary conditions[END_REF] for 2D or [START_REF] Iftimie | Inviscid limits for the Navier-Stokes equations with Navier friction boundary conditions[END_REF] for 3D). Once the forcing terms ξ and σ are fixed, there exists thus at least one weak Leray solution u to (12).
In the sequel, we will mostly work within the extended domain. Our goal will be to explain how we choose the external forcing terms ξ and σ in order to guarantee that the associated controlled trajectory vanishes within the physical domain at the final time.
Time scaling and small viscosity asymptotic expansion
The global controllability time T is small but fixed. Let us introduce a positive parameter ε ≪ 1. We will be even more ambitious and try to control the system during the shorter time interval [0, εT ]. We perform the scaling: u ε (t, x) := εu(εt, x) and p ε (t, x) := ε 2 p(εt, x). Similarly, we set ξ ε (t, x) := ε 2 ξ(εt, x) and σ ε (t, x) := εσ(εt, x). Now, (u ε , p ε ) is a solution to the following system for t ∈ (0, T ):
∂ t u ε + (u ε • ∇) u ε -ε∆u ε + ∇p ε = ξ ε in (0, T ) × O, div u ε = σ ε in (0, T ) × O, u ε • n = 0 on (0, T ) × ∂O, [∇ × u ε ] tan = 0 on (0, T ) × ∂O, u ε | t=0 = εu * in O. (17)
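Let us briefly verify this claim. By the chain rule, ∂ t u ε (t, x) = ε 2 (∂ t u)(εt, x), (u ε • ∇)u ε (t, x) = ε 2 [(u • ∇)u](εt, x), ∆u ε (t, x) = ε(∆u)(εt, x) and ∇p ε (t, x) = ε 2 (∇p)(εt, x), so that the momentum equation of (17) is exactly ε 2 times the momentum equation satisfied by (u, p) at time εt, the viscous term now carrying the small factor ε. Likewise, div u ε (t, x) = ε(div u)(εt, x) = σ ε (t, x), the homogeneous boundary conditions are invariant under the scaling, and u ε | t=0 = εu * . Controlling (17) on the time interval [0, T ] is therefore equivalent to controlling the original system on [0, εT ].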
Due to the scaling chosen, we plan to prove that we can obtain
|u ε (T, •)| L 2 (O) = o(ε)
in order to conclude with a local result. Since ε is small, we expect u ε to converge to the solution of the Euler equation. Hence, we introduce the following asymptotic expansion for:
u ε = u 0 + εu 1 + εr ε , (18)
p ε = p 0 + εp 1 + επ ε , (19)
ξ ε = ξ 0 + εξ 1 , (20)
σ ε = σ 0 . (21)
Let us provide some insight behind expansion (18)-(21). The first term (u 0 , p 0 , ξ 0 , σ 0 ) is the solution to a controlled Euler equation. It models a smooth reference trajectory around which we are linearizing the Navier-Stokes equation. This trajectory will be chosen in such a way that it flushes the initial data out of the domain in time T . The second term (u 1 , p 1 , ξ 1 ) takes into account the initial data u * , which will be flushed out of the physical domain by the flow u 0 . Eventually, (r ε , π ε ) contains higher order residues. We need to prove |r ε (T, •)| L 2 (O) = o(1) in order to be able to conclude the proof of Theorem 1.
A return method trajectory for the Euler equation
At order O(1), the first part (u 0 , p 0 ) of our expansion is a solution to the Euler equation. Hence, the pair (u 0 , p 0 ) is a return-method-like trajectory of the Euler equation on (0, T ):
∂ t u 0 + u 0 • ∇ u 0 + ∇p 0 = ξ 0 in (0, T ) × O, div u 0 = σ 0 in (0, T ) × O, u 0 • n = 0 on (0, T ) × ∂O, u 0 (0, •) = 0 in O, u 0 (T, •) = 0 in O, (22)
where ξ 0 and σ 0 are smooth forcing terms supported in Ō \ Ω. We want to use this reference trajectory to flush any particle outside of the physical domain within the fixed time interval [0, T ]. Let us introduce the flow Φ 0 associated with u 0 :
Φ 0 (t, t, x) = x, ∂ s Φ 0 (t, s, x) = u 0 (s, Φ 0 (t, s, x)). (23)
Hence, we look for trajectories satisfying:
∀x ∈ Ō, ∃t x ∈ (0, T ), Φ 0 (0, t x , x) / ∈ Ω. (24)
We do not require that the time t x be the same for all x ∈ O. Indeed, it might not be possible to flush all of the points outside of the physical domain at the same time. Property (24) is obvious for points x already located in Ō \ Ω. For points lying within the physical domain, we use:
Lemma 2. There exists a solution (u 0 , p 0 , ξ 0 , σ 0 ) ∈ C ∞ ([0, T ] × Ō, R d × R × R d × R) to system (22) such that the flow Φ 0 defined in (23) satisfies (24). Moreover, u 0 can be chosen such that:
∇ × u 0 = 0 in [0, T ] × Ō. (25)
Moreover, (u 0 , p 0 , ξ 0 , σ 0 ) are compactly supported in (0, T ). In the sequel, when we need it, we will implicitly extend them by zero after T .
This lemma is the key argument of multiple papers concerning the small-time global exact controllability of Euler equations. We refer to the following references for detailed statements and construction of these reference trajectories. First, the first author used it in [START_REF] Coron | Contrôlabilité exacte frontière de l'équation d'Euler des fluides parfaits incompressibles bidimensionnels[END_REF] for 2D simply connected domains, then in [START_REF] Coron | On the controllability of 2-D incompressible perfect fluids[END_REF] for general 2D domains when Γ intersects all connected components of ∂Ω. Glass adapted the argument for 3D domains (when Γ intersects all connected components of the boundary), for simply connected domains in [START_REF] Glass | Contrôlabilité exacte frontière de l'équation d'Euler des fluides parfaits incompressibles en dimension 3[END_REF] then for general domains in [START_REF] Glass | Exact boundary controllability of 3-D Euler equation[END_REF]. He also used similar arguments to study the obstructions to approximate controllability in 2D when Γ does not intersect all connected components of the boundary for general 2D domains in [START_REF] Glass | An addendum to a J. M. Coron theorem concerning the controllability of the Euler system for 2D incompressible inviscid fluids[END_REF]. Here, we use the assumption that our control domain Γ intersects all connected parts of the boundary ∂Ω. The fact that condition (25) can be achieved is a direct consequence of the construction of the reference profile u 0 as a potential flow: u 0 (t, x) = ∇θ 0 (t, x), where θ 0 is smooth.
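Let us sketch why potential flows are convenient here. For u 0 = ∇θ 0 , condition (25) is automatic and the convective term is a gradient: (u 0 • ∇)u 0 = 1 2 ∇|∇θ 0 | 2 . Choosing the pressure p 0 := -∂ t θ 0 -1 2 |∇θ 0 | 2 , the momentum equation of (22) holds with ξ 0 = 0, while div u 0 = ∆θ 0 plays the role of σ 0 ; since θ 0 (t, •) can be chosen harmonic inside Ω in the constructions quoted above, σ 0 is indeed supported in Ō \ Ω.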
Convective term and flushing of the initial data
We move on to order O(ε). Here, the initial data u * comes into play. We build u 1 as the solution to:
∂ t u 1 + u 0 • ∇ u 1 + u 1 • ∇ u 0 + ∇p 1 = ∆u 0 + ξ 1 in (0, T ) × O, div u 1 = 0 in (0, T ) × O, u 1 • n = 0 on (0, T ) × ∂O, u 1 (0, •) = u * in O, (26)
where ξ 1 is a forcing term supported in Ō \ Ω. Formally, equation (26) also takes into account a residual term ∆u 0 . Thanks to (25), we have ∆u 0 = ∇(div u 0 ) -∇ × (∇ × u 0 ) = ∇(div u 0 ) = ∇σ 0 . It is thus smooth, supported in Ō \ Ω and can be canceled by incorporating it into ξ 1 . The following lemma is natural thanks to the choice of a good flushing trajectory u 0 :
Lemma 3. Let u * ∈ H 3 (O) ∩ L 2 div (O). There exists a force ξ 1 ∈ C 1 ([0, T ], H 1 (O)) ∩ C 0 ([0, T ], H 2 (O)) such that the associated solution u 1 to system (26) satisfies u 1 (T, •) = 0. Moreover, u 1 is bounded in L ∞ ((0, T ), H 3 (O)).
In the sequel, it is implicit that we extend (u 1 , p 1 , ξ 1 ) by zero after T . This lemma is mostly a consequence of the works on the Euler equation, already mentioned in the previous paragraph, due to the first author in 2D, then to Glass in 3D. However, in these original works, the regularity obtained for the constructed trajectory would not be sufficient in our context. Thus, we provide in Appendix A a constructive proof which enables us to obtain the regularity for ξ 1 and u 1 stated in Lemma 3. We only give here a short overview of the main idea of the proof. The interested reader can also start with the nice introduction given by Glass in [START_REF] Glass | Contrôlabilité de l'équation d'Euler tridimensionnelle pour les fluides parfaits incompressibles[END_REF].
The intuition behind the possibility to control u 1 is to introduce ω 1 := ∇ × u 1 and to write (26) in vorticity form, within the physical domain Ω:
∂ t ω 1 + u 0 • ∇ ω 1 -ω 1 • ∇ u 0 = 0 in (0, T ) × Ω, ω 1 (0, •) = ∇ × u * in Ω. (27)
The term ω 1 • ∇ u 0 is specific to the 3D setting and does not appear in 2D (where the vorticity is merely transported). Nevertheless, even in 3D, the support of the vorticity is transported by u 0 . Thus, thanks to hypothesis (24), ω 1 will vanish inside Ω at time T provided that we choose null boundary conditions for ω 1 on the controlled boundary Γ when the characteristics enter the physical domain. Hence, we can build a trajectory such that ω 1 (T, •) = 0 inside Ω. Combined with the divergence free condition and null boundary data, this yields that u 1 (T, •) = 0 inside Ω, at least for simple geometries.
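This transport of the support can be made explicit: as long as the characteristic s ↦ Φ 0 (0, s, x) stays within the domain, the solution of (27) is given by the Cauchy-type formula ω 1 (t, Φ 0 (0, t, x)) = ∇ x Φ 0 (0, t, x) ω 1 (0, x), as can be checked by differentiating both sides with respect to time and using ∂ t ∇ x Φ 0 (0, t, x) = (∇u 0 )(t, Φ 0 (0, t, x)) ∇ x Φ 0 (0, t, x). In particular, ω 1 (t, •) vanishes wherever the corresponding characteristic carries zero data, and the flushing property (24), combined with the null boundary data prescribed on Γ, guarantees that this is the case everywhere in Ω at time T .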
Energy estimates for the remainder
In this paragraph, we study the remainder defined in expansion (18). We write the equation for the remainder in the extended domain O:
∂ t r ε + (u ε • ∇) r ε -ε∆r ε + ∇π ε = f ε -A ε r ε in (0, T ) × O, div r ε = 0 in (0, T ) × O, [∇ × r ε ] tan = -[∇ × u 1 ] tan on (0, T ) × ∂O, r ε • n = 0 on (0, T ) × ∂O, r ε (0, •) = 0 in O, (28)
where we used the notations:
A ε r ε := (r ε • ∇) (u 0 + εu 1 ), (29)
f ε := ε∆u 1 -ε(u 1 • ∇)u 1 . (30)
We want to establish a standard L ∞ (L 2 ) ∩ L 2 (H 1 ) energy estimate for the remainder. As usual, formally, we multiply equation ( 28) by r ε and integrate by parts. Since we are considering weak solutions, some integration by parts may not be justified because we do not have enough regularity to give them a meaning. However, the usual technique applies: one can recover the estimates obtained formally from the variational formulation of the problem, the energy equality for the first terms of the expansion and the energy inequality of the definition of weak solutions (see [56, page 168] for an example of such an argument). We proceed term by term:
∫ O ∂ t r ε • r ε = 1 2 d dt ∫ O |r ε | 2 , (31)
∫ O (u ε • ∇) r ε • r ε = -1 2 ∫ O (div u ε ) |r ε | 2 , (32)
-ε ∫ O ∆r ε • r ε = ε ∫ O |∇ × r ε | 2 -ε ∫ ∂O (r ε × (∇ × r ε )) • n, (33)
∫ O ∇π ε • r ε = 0. (34)
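For instance, the boundary term in (33) comes from the vector calculus identity -∆r ε = ∇ × (∇ × r ε ) -∇(div r ε ), together with div r ε = 0 and the integration by parts formula ∫ O (∇ × A) • B = ∫ O A • (∇ × B) + ∫ ∂O (A × B) • n, applied with A = ∇ × r ε and B = r ε .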
In (32), we will use the fact that div u ε = div u 0 = σ 0 is known and bounded independently of r ε . In (33), we use the boundary condition on r ε to estimate the boundary term:
∫ ∂O (r ε × (∇ × r ε )) • n = ∫ ∂O (r ε × (∇ × u 1 )) • n = ∫ O div (r ε × ω 1 ) = ∫ O (∇ × r ε ) • ω 1 -r ε • (∇ × ω 1 ) ≤ 1 2 ∫ O |∇ × r ε | 2 + 1 2 ∫ O |ω 1 | 2 + 1 2 ∫ O |r ε | 2 + 1 2 ∫ O |∇ × ω 1 | 2 . (35)
We split the forcing term estimate as:
∫ O f ε • r ε ≤ 1 2 |f ε | 2 (1 + |r ε | 2 2 ). (36)
Combining estimates (31)-(34), (35) and (36) yields:
d dt |r ε | 2 2 + ε|∇ × r ε | 2 2 ≤ 2ε |u 1 | 2 H 2 + |f ε | 2 + (ε + |σ 0 | ∞ + 2 |A ε | ∞ + |f ε | 2 ) |r ε | 2 2 . (37)
Applying Grönwall's inequality by integrating over (0, T ) and using the null initial condition gives:
|r ε | 2 L ∞ (L 2 ) + ε |∇ × r ε | 2 L 2 (L 2 ) = O(ε). (38)
This paragraph proves that, once the source terms ξ ε and σ ε are fixed as above, any weak Leray solution to (17) is small at the final time. Indeed, thanks to Lemma 2 and Lemma 3, u 0 (T ) = u 1 (T ) = 0. At the final time, (38) gives:
|u ε (T, •)| L 2 (O) ≤ ε |r ε (T, •)| L 2 (O) = O(ε 3/2 ). (39)
Regularization and local arguments
In this paragraph, we explain how to chain our arguments in order to prove Theorem 1. We will need to use a local argument to finish bringing the velocity field exactly to the null equilibrium state (see paragraph 1.4.1 for references on null controllability of Navier-Stokes):
Lemma 4 ( [START_REF] Guerrero | Local exact controllability to the trajectories of the Navier-Stokes system with nonlinear Navier-slip boundary conditions[END_REF]). Let T > 0. There exists δ T > 0 such that, for any u * ∈ H 3 (O) which is divergence free, tangent to ∂O, satisfies the compatibility assumption N (u * ) = 0 on ∂O and of size |u
* | H 3 (O) ≤ δ T , there exists a control ξ ∈ H 1 ((0, T ), L 2 (O))∩C 0 ([0, T ], H 1 (O)
) supported outside of Ω such that the strong solution to (12) with σ = 0 satisfies u(T, •) = 0.
In this context of small initial data, the existence and uniqueness of a strong solution is proved in [START_REF] Guerrero | Local exact controllability to the trajectories of the Navier-Stokes system with nonlinear Navier-slip boundary conditions[END_REF]. We also use the following smoothing lemma for our Navier-Stokes system:
Lemma 5. Let T > 0. There exists a continuous function C T with C T (0) = 0, such that, if u * ∈ L 2 div (O) and u ∈ C 0 w ([0, T ]; L 2 div (O)) ∩ L 2 ((0, T ); H 1 (O)
) is a weak Leray solution to (12), with ξ = 0 and σ = 0:
∃t u ∈ [0, T ], |u(t u , •)| H 3 (O) ≤ C T |u * | L 2 (O) . (40)
Proof. This result is proved by Temam in [84, Remark 3.2] in the harder case of Dirichlet boundary condition. His method can be adapted to the Navier boundary condition and one could track down the constants to explicit the shape of the function C T . For the sake of completeness, we provide a standalone proof in a slightly more general context (see Lemma 9, Section 5).
We can now explain how we combine these arguments to prove Theorem 1. Let T > 0 be the allowed control time and u * ∈ L 2 γ (Ω) the (potentially large) initial data to be controlled. The proof of Theorem 1 follows the following steps:
• We start by extending Ω into O as explained in paragraph 2.1. We also extend the initial data u * to all of O, still denoting it by u * . We choose an extension such that u * • n = 0 on ∂O and σ * := div u * is smooth (and supported in O \ Ω). We start with a short preparation phase where we let σ decrease from its initial value to zero, relying on the existence of a weak solution once a smooth σ profile is fixed, say σ(t, x) := β(t)σ * , where β smoothly decreases from 1 to 0. Then, once the data is divergence free, we use Lemma 5 to deduce the existence of a time
T 1 ∈ (0, T /4) such that u(T 1 , •) ∈ H 3 (O)
. This is why we can assume that the new "initial" data has H 3 regularity and is divergence free. We can thus apply Lemma 3.
• Let T 2 := T /2. Starting from this new smoother initial data u(T 1 , •), we proceed with the small-time global approximate controllability method explained above on a time interval of size T 2 -T 1 ≥ T /4.
For any δ > 0, we know that we can build a trajectory starting from u(T 1 , •) and such that u(T 2 , •) is smaller than δ in L 2 (O). In particular, δ can be chosen small enough such that C T /4 (δ) ≤ δ T /4 .
• Repeating the regularization argument of Lemma 5, we deduce the existence of a time T 3 ∈ [T 2 , 3T /4] such that u(T 3 , •) is smaller than δ T /4 in H 3 (O).
• We use Lemma 4 on the time interval [T 3 , T 3 + T /4] to reach exactly zero. Once the system is at rest, it stays there until the final time T . This concludes the proof of Theorem 1 in the case of the slip condition. For the general case, we will use the same proof skeleton, but we will need to control the boundary layers. In the following sections, we explain how we can obtain small-time global approximate null controllability in the general case.
Boundary layer expansion and dissipation
As in the previous section, the allotted physical control time T is fixed (and potentially small). We introduce an arbitrary mathematical time scale ε ≪ 1 and we perform the usual scaling u ε (t, x) := εu(εt, x) and p ε (t, x) := ε 2 p(εt, x). In this harder setting involving a boundary layer expansion, we do not try to achieve approximate controllability towards zero in the smaller physical time interval [0, εT ] like it was possible to do in the previous section. Instead, we will use the virtually long mathematical time interval to dissipate the boundary layer. Thus, we consider (u ε , p ε ) the solution to:
∂ t u ε + (u ε • ∇) u ε -ε∆u ε + ∇p ε = ξ ε in (0, T /ε) × O, div u ε = σ ε in (0, T /ε) × O, u ε • n = 0 on (0, T /ε) × ∂O, N (u ε ) = 0 on (0, T /ε) × ∂O, u ε | t=0 = εu * in O. (41)
Here again, we do not expect to reach exactly zero with this part of the strategy. However, we would like to build a sequence of solutions such that |u(T, •)| L 2 (O) = o(1). As in Section 2, this will allow us to apply a local result with a small initial data, a fixed time and a fixed viscosity. Due to the scaling chosen, this condition translates into proving that |u ε (T /ε, •)| L 2 (O) = o(ε).
Following and enhancing the original boundary layer expansion for Navier slip-with-friction boundary conditions proved by Iftimie and the third author in [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF], we introduce the following expansion:
u ε (t, x) = u 0 (t, x) + √ ε v (t, x, ϕ(x)/ √ ε) + εu 1 (t, x) + . . . + εr ε (t, x), (42)
p ε (t, x) = p 0 (t, x) + εp 1 (t, x) + . . . + επ ε (t, x). (43)
The forcing terms are expanded as:
ξ ε (t, x) = ξ 0 (t, x) + √ ε ξ v (t, x, ϕ(x)/ √ ε) + εξ 1 (t, x), (44)
σ ε (t, x) = σ 0 (t, x). (45)
Compared with expansion (18), expansion (42) introduces a boundary correction v. Indeed, u 0 does not satisfy the Navier slip-with-friction boundary condition on ∂O. The purpose of the second term v is to recover this boundary condition by introducing the tangential boundary layer generated by u 0 . In equations (42) and (43), the missing terms are technical terms which will help us prove that the remainder is small. We give the details of this technical part in Section 4. We use the same profiles u 0 and u 1 as in the previous section (extended by zero after T ). Hence, u ε ≈ √ εv after T and we must understand the behavior of this boundary layer residue that remains after the short inviscid control strategy.
Boundary layer profile equations
Since the Euler system is a first-order system, we have only been able to impose a single scalar boundary condition in (22) (namely, u 0 • n = 0 on ∂O). Hence, the full Navier slip-with-friction boundary condition is not satisfied by u 0 . Therefore, at order O( √ ε), we introduce a tangential boundary layer correction v. This profile is expressed in terms both of the slow space variable x ∈ O and a fast scalar variable z = ϕ(x)/ √ ε. As in [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF], v is the solution to:
∂ t v + [(u 0 • ∇)v + (v • ∇)u 0 ] tan + u 0 ♭ z∂ z v -∂ zz v = ξ v in R + × Ō × R + , ∂ z v(t, x, 0) = g 0 (t, x) in R + × Ō, v(0, x, z) = 0 in Ō × R + , (46)
where we introduce the following definitions:
u 0 ♭ (t, x) := -(u 0 (t, x) • n(x))/ϕ(x) in R + × O, (47)
g 0 (t, x) := 2χ(x)N (u 0 )(t, x) in R + × O. (48)
Unlike in [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF], we introduced an inhomogeneous source term ξ v in (46). This corresponds to a smooth control term whose x-support is located within Ō \ Ω. Using the transport term, this outside control will enable us to modify the behavior of v inside the physical domain Ω. Let us state the following points about equations (46), (47) and (48):
• The boundary layer profile depends on d + 1 spatial variables (d slow variables x and one fast variable z) and is thus not set in curvilinear coordinates. This approach used in [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF] lightens the computations. It is implicit that n actually refers to the extension -∇ϕ of the normal (as explained in paragraph 2.1) and that this extends formulas (2) defining the tangential part of a vector field and (4) defining the Navier operator inside O.
• The boundary profile is tangential, even inside the domain. For any x ∈ Ō, z ≥ 0 and t ≥ 0, we have v(t, x, z) • n(x) = 0. It is easy to check that, as soon as the source term ξ v • n = 0, the evolution equation ( 46) preserves the relation v(0, x, z) • n(x) = 0 of the initial time. This orthogonality property is the reason why equation ( 46) is linear. Indeed, the quadratic term (v • n)∂ z v should have been taken into account if it did not vanish. In the sequel, we will check that our construction satisfies the property ξ v • n = 0.
• In (48), we introduce a smooth cut-off function χ, satisfying χ = 1 on ∂O. This is intended to help us guarantee that v is compactly supported near ∂O, while ensuring that v compensates the Navier slip-with-friction boundary trace of u 0 . See paragraph 3.4 for the choice of χ.
• Even though ϕ vanishes on ∂O, u 0 ♭ is not singular near the boundary because of the impermeability condition u 0 • n = 0. Since u 0 is smooth, a Taylor expansion proves that u 0 ♭ is smooth in Ō.
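More precisely, since u 0 • n vanishes on ∂O = {ϕ = 0} and ∇ϕ does not vanish there, Hadamard's lemma (a Taylor expansion with integral remainder in the normal direction) shows that u 0 (t, x) • n(x) = ϕ(x) w(t, x) near ∂O for some smooth function w, so that u 0 ♭ = -w extends smoothly up to the boundary.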
Large time asymptotic decay of the boundary layer profile
In the previous paragraph, we defined the boundary layer profile through equation (46) for any t ≥ 0. Indeed, we will need this expansion to hold on the large time interval [0, T/ε]. Thus, we prefer to define it directly for any t ≥ 0 in order to stress that this boundary layer profile does not depend in any way on ε. It is implicit that, for t ≥ T, the Euler reference flow u⁰ is extended by 0. Hence, for t ≥ T, system (46) reduces to a parametrized heat equation on the half line z ≥ 0 (where the slow variables x ∈ O play the role of parameters):
∂_t v - ∂_zz v = 0  in R⁺ × O, for t ≥ T,
∂_z v(t, x, 0) = 0  in {0} × O, for t ≥ T.   (49)
The behavior of the solution to (49) depends on its "initial" data v̄(x, z) := v(T, x, z) at time T. Even without any assumption on v̄, this heat system exhibits smoothing properties and dissipates towards the null equilibrium state. It can for example be proved that:
|v(t, x, •)|_{L²(R⁺)} ≲ t^{-1/4} |v̄(x, •)|_{L²(R⁺)}.  (50)
However, as the equation is set on the half-line z ≥ 0, the energy decay obtained in (50) is rather slow. Moreover, without any additional assumption, this estimate cannot be improved. It is indeed standard to prove asymptotic estimates for the solution v(t, x, •) involving the corresponding Green function (see [START_REF] Bartier | Improved intermediate asymptotics for the heat equation[END_REF], [START_REF] Duoandikoetxea | Moments, masses de Dirac et décomposition de fonctions[END_REF], or [START_REF] Elena | Decay of solutions to parabolic conservation laws[END_REF]). Physically, this is due to the fact that the average of v is preserved under its evolution by equation (49). The energy contained by low frequency modes decays slowly. Applied at the final time t = T/ε, estimate (50) yields:
| √ε v(T/ε, •, ϕ(•)/√ε) |_{L²(O)} = O( ε^{1/2 + 1/4 + 1/4} ),  (51)
where the last ε^{1/4} factor comes from the Jacobian of the fast variable scaling (see [56, Lemma 3, page 150]). Hence, the natural decay O(ε) obtained in (51) is not sufficient to provide an asymptotically small boundary layer residue in the physical scaling. After division by ε, we only obtain a O(1) estimate. This motivates the fact that we need to design a control strategy to enhance the natural dissipation of the boundary layer residue after the main inviscid control step is finished.
Our strategy will be to guarantee that v̄ satisfies a finite number of vanishing moment conditions for k ∈ N of the form:
∀x ∈ O, ∫_{R⁺} z^k v̄(x, z) dz = 0.  (52)
These conditions also correspond to vanishing derivatives at zero for the Fourier transform in z of v (or its even extension to R). If we succeed to kill enough moments in the boundary layer at the end of the inviscid phase, we can obtain arbitrarily good polynomial decay properties. For s, n ∈ N, let us introduce the following weighted Sobolev spaces:
H^{s,n}(R) := { f ∈ H^s(R); Σ_{α=0}^{s} ∫_R (1 + z²)^n |∂_z^α f(z)|² dz < +∞ },  (53)
which we endow with their natural norm. We prove in the following lemma that vanishing moment conditions yield polynomial decays in these weighted spaces for a heat equation set on the real line.
Lemma 6. Let s, n ∈ N and f₀ ∈ H^{s,n+1}(R) satisfying, for 0 ≤ k < n,
∫_R z^k f₀(z) dz = 0.  (54)
Let f be the solution to the heat equation on R with initial data f 0 :
∂ t f -∂ zz f = 0 in R, for t ≥ 0, f (0, •) = f 0 in R, for t = 0. ( 55
)
There exists a constant C_{s,n} independent of f₀ such that, for 0 ≤ m ≤ n,
|f(t, •)|_{H^{s,m}} ≤ C_{s,n} |f₀|_{H^{s,n+1}} ( ln(2+t) / (2+t) )^{1/4 + n/2 - m/2}.  (56)
Proof. For small times (say t ≤ 2), the function of t in the right-hand side of (56) is bounded from below by a positive constant. Thus, inequality (56) holds because the considered energy decays under the heat equation. Let us move on to large times, e.g. assuming t ≥ 2. Using the Fourier transform in z → ζ, we compute:
f̂(t, ζ) = e^{-tζ²} f̂₀(ζ).  (57)
Moreover, from Plancherel's equality, we have the following estimate:
|f(t, •)|²_{H^{s,m}} ≲ Σ_{j=0}^{m} ∫_R (1 + ζ²)^s | ∂_ζ^j f̂(t, ζ) |² dζ.  (58)
We use (57) to compute the derivatives of the Fourier transform:
∂_ζ^j f̂(t, ζ) = Σ_{i=0}^{j} ζ^{i-j} P_{i,j}(tζ²) e^{-tζ²} ∂_ζ^i f̂₀(ζ),  (59)
where the P_{i,j} are polynomials with constant numerical coefficients. The energy contained at high frequencies decays very fast. For low frequencies, we will need to use assumptions (54). Writing a Taylor expansion of f̂₀ near ζ = 0 and taking into account these assumptions yields the estimates:
| ∂_ζ^i f̂₀(ζ) | ≲ |ζ|^{n-i} | ∂_ζ^n f̂₀ |_{L^∞} ≲ |ζ|^{n-i} | z^n f₀(z) |_{L¹} ≲ |ζ|^{n-i} |f₀|_{H^{0,n+1}}.  (60)
We introduce ρ > 0 and we split the energy integral at a cutting threshold:
ζ_*(t) := ( ρ ln(2+t) / (2+t) )^{1/2}.  (61)
High frequencies. We start with high frequencies |ζ| ≥ ζ * (t). For large times, this range actually almost includes the whole spectrum. Using ( 58) and ( 59) we compute the high energy terms:
W^♯_{j,i,i′}(t) := ∫_{|ζ| ≥ ζ_*(t)} (1 + ζ²)^s e^{-2tζ²} |ζ|^{i-j} |ζ|^{i′-j} P_{i,j}(tζ²) P_{i′,j}(tζ²) ∂_ζ^i f̂₀ ∂_ζ^{i′} f̂₀ dζ.  (62)
Plugging estimate ( 60) into (62) yields:
W^♯_{j,i,i′}(t) ≤ |f₀|²_{H^{0,n+1}} ( e^{-t(ζ_*(t))²} / t^{n-j+1/2} ) ∫_R (1 + ζ²)^s e^{-tζ²} (tζ²)^{n-j} | P_{i,j}(tζ²) P_{i′,j}(tζ²) | t^{1/2} dζ.  (63)
The integral in ( 63) is bounded from above for t ≥ 2 through an easy change of variable. Moreover,
e^{-t(ζ_*(t))²} = e^{-ρ t ln(2+t)/(2+t)} = (2+t)^{-ρt/(2+t)} ≤ (2+t)^{-ρ/2},  (64)
since t/(2+t) ≥ 1/2 for t ≥ 2.
Hence, for t ≥ 2, combining ( 63) and ( 64) yields:
W^♯_{j,i,i′}(t) ≲ (2+t)^{-ρ/2} |f₀|²_{H^{0,n+1}}.  (65)
In (61), we can choose any ρ > 0. Hence, the decay obtained in (65) can be arbitrarily good. This is not the case for the low frequency estimates, which are capped by the number of vanishing moments assumed on the initial data f₀.
Low frequencies. We move on to low frequencies |ζ| ≤ ζ * (t). For large times, this range concentrates near zero. Using ( 58) and ( 59) we compute the low energy terms:
W^♭_{j,i,i′}(t) := ∫_{|ζ| ≤ ζ_*(t)} (1 + ζ²)^s e^{-2tζ²} |ζ|^{i-j} |ζ|^{i′-j} P_{i,j}(tζ²) P_{i′,j}(tζ²) ∂_ζ^i f̂₀ ∂_ζ^{i′} f̂₀ dζ.  (66)
Plugging estimate (60) into (66) yields:
W^♭_{j,i,i′}(t) ≤ |f₀|²_{H^{0,n+1}} ∫_{|ζ| ≤ ζ_*(t)} (1 + ζ²)^s |ζ|^{2n-2j} | P_{i,j}(tζ²) P_{i′,j}(tζ²) | e^{-2tζ²} dζ.  (67)
The function τ → |P i,j (τ )P i ′ ,j (τ )| e -2τ is bounded on [0, +∞) thanks to the growth comparison theorem. Moreover, (1 + ζ 2 ) s can be bounded by (1 + ρ) s for |ζ| ≤ |ζ * (t)|. Hence, plugging the definition ( 61) into (67) yields:
W^♭_{j,i,i′}(t) ≲ |f₀|²_{H^{0,n+1}} ( ρ ln(2+t) / (2+t) )^{1/2 + n - j}.  (68)
Hence, choosing ρ = 1 + 2n -2m in equation ( 61) and summing estimates (65) with ( 68) for all indexes 0 ≤ i, i ′ ≤ j ≤ m concludes the proof of (56) and Lemma 6.
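Before moving on, here is a small numerical illustration of the mechanism behind Lemma 6 (this sketch is not part of the argument; it uses NumPy, and the choice of initial data, grid and times is arbitrary): each additional vanishing moment buys an extra factor t^{-1/2} in the L² decay of the heat semigroup on the line, in agreement with the exponent 1/4 + n/2 of (56) for s = m = 0, up to the logarithmic loss.

```python
# Numerical sanity check (illustrative only): for the heat equation on the
# line, initial data with n vanishing moments decays in L^2 like
# t^{-(1/4 + n/2)}.  We take f0 = n-th derivative of a Gaussian (it has
# exactly n vanishing moments) and evaluate the L^2 norm via Plancherel:
#   |f(t)|_{L^2}^2 = (1/2pi) * int |f0_hat(zeta)|^2 exp(-2 t zeta^2) dzeta.
import numpy as np

zeta = np.linspace(-6.0, 6.0, 400001)
dzeta = zeta[1] - zeta[0]

def l2_norm(t, n):
    # |f0_hat(zeta)|^2 = zeta^{2n} * 2*pi * exp(-zeta^2)  for f0 = G^{(n)}, G(z) = exp(-z^2/2)
    integrand = zeta ** (2 * n) * 2.0 * np.pi * np.exp(-(1.0 + 2.0 * t) * zeta ** 2)
    return np.sqrt(np.sum(integrand) * dzeta / (2.0 * np.pi))

t1, t2 = 1.0e3, 1.0e5
for n in range(4):
    measured = np.log(l2_norm(t1, n) / l2_norm(t2, n)) / np.log(t2 / t1)
    print(f"n = {n} vanishing moments: measured exponent {measured:.3f},"
          f" predicted 1/4 + n/2 = {0.25 + 0.5 * n:.3f}")
```

The measured exponents match 1/4 + n/2 to roughly three digits, which is precisely the gain exploited by the well-prepared dissipation strategy.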
We will use the conclusion of Lemma 6 for two different purposes. First, it states that the boundary layer residue is small at the final time. Second, estimate [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF] can also be used to prove that the source terms generated by the boundary layer in the equation of the remainder are integrable in large time. Indeed, for n ≥ 2, f 0 and f satisfying the assumptions of Lemma 6, we have:
‖f‖_{L¹(H^{2,n-2})} ≲ |f₀|_{H^{2,n+1}}.  (69)
Preparation of vanishing moments for the boundary layer profile
In this paragraph, we explain how we intend to prepare vanishing moments for the boundary layer profile at time T using the control term ξ v of equation [START_REF] Grenier | Spectral stability of Prandtl boundary layers: an overview[END_REF]. In order to perform computations within the Fourier space in the fast variable, we want to get rid of the Neumann boundary condition at z = 0. This can be done by lifting the inhomogeneous boundary condition g 0 to turn it into a source term. We choose the simple lifting -g 0 (t, x)e -z . The homogeneous boundary condition will be preserved via an even extension of the source term. Let us introduce V (t, x, z) ∈ R d defined for t ≥ 0, x ∈ Ō and z ∈ R by:
V (t, x, z) := v(t, x, |z|) + g 0 (t, x)e -|z| . (70)
We also extend implicitly ξ v by parity. Hence, V is the solution to the following evolution equation:
∂_t V + (u⁰ • ∇)V + BV + u⁰_♭ z ∂_z V - ∂_zz V = G⁰ e^{-|z|} + G̃⁰ |z| e^{-|z|} + ξ^v  in R⁺ × Ō × R⁺,
V(0, x, z) = 0  in Ō × R⁺,   (71)
where we introduce:
B_{i,j} := ∂_j u⁰_i - (n • ∂_j u⁰) n_i + (u⁰ • ∇n_j) n_i  for 1 ≤ i, j ≤ d,  (72)
G⁰ := ∂_t g⁰ - g⁰ + (u⁰ • ∇)g⁰ + B g⁰,  (73)
G̃⁰ := -u⁰_♭ g⁰.  (74)
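As a quick sanity check, the source terms (73) and (74) are exactly what the operator of (71) produces when applied to the lifting g⁰(t, x) e^{-z} for z > 0 (a one-line computation, sketched here):
( ∂_t + (u⁰ • ∇) + B + u⁰_♭ z ∂_z - ∂_zz ) (g⁰ e^{-z}) = ( ∂_t g⁰ + (u⁰ • ∇)g⁰ + B g⁰ - g⁰ ) e^{-z} - u⁰_♭ g⁰ z e^{-z} = G⁰ e^{-z} + G̃⁰ z e^{-z}.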
The null initial condition in (71) is due to the fact that u⁰(0, •) = 0 and hence g⁰(0, •) = 0. Similarly, we have g⁰(t, •) = 0 for t ≥ T since we extended u⁰ by zero after T. As remarked for equation (46), equation (71) also preserves orthogonality with n. Indeed, the particular structure of the zeroth-order operator B is such that ((u⁰ • ∇)V + BV) • n = 0 for any function V such that V • n = 0. We compute the partial Fourier transform V̂(t, x, ζ) := ∫_R V(t, x, z) e^{-iζz} dz. We obtain:
∂_t V̂ + (u⁰ • ∇)V̂ + ( B + ζ² - u⁰_♭ ) V̂ - u⁰_♭ ζ ∂_ζ V̂ = 2G⁰/(1+ζ²) + 2G̃⁰(1-ζ²)/(1+ζ²)² + ξ̂^v.  (75)
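Regarding the orthogonality claim above, here is a short verification (it uses |n| = 1 on the support of the layer): if V • n ≡ 0, then
((u⁰ • ∇)V) • n = u⁰ • ∇(V • n) - Σ_j V_j (u⁰ • ∇n_j) = -Σ_j V_j (u⁰ • ∇n_j),
(BV) • n = Σ_j [ (n • ∂_j u⁰)(1 - |n|²) + (u⁰ • ∇n_j) |n|² ] V_j = Σ_j V_j (u⁰ • ∇n_j),
so that ((u⁰ • ∇)V + BV) • n = 0.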
To obtain the decay we are seeking, we will need to consider a finite number of derivatives of V̂ at ζ = 0. Thus, we introduce:
Q_k(t, x) := ∂_ζ^k V̂(t, x, ζ = 0).  (76)
Let us compute the evolution equations satisfied by these quantities. Indeed, differentiating equation ( 75) k times with respect to ζ yields:
∂_t ∂_ζ^k V̂ + (u⁰ • ∇)∂_ζ^k V̂ + ( B + ζ² - u⁰_♭ ) ∂_ζ^k V̂ + 2kζ ∂_ζ^{k-1} V̂ + k(k-1) ∂_ζ^{k-2} V̂ - u⁰_♭ (ζ∂_ζ + k) ∂_ζ^k V̂ = ∂_ζ^k [ 2G⁰/(1+ζ²) + 2G̃⁰(1-ζ²)/(1+ζ²)² + ξ̂^v ].  (77)
Now we can evaluate at ζ = 0 and obtain:
∂_t Q_k + (u⁰ • ∇)Q_k + B Q_k - u⁰_♭ (k+1) Q_k = ∂_ζ^k [ 2G⁰/(1+ζ²) + 2G̃⁰(1-ζ²)/(1+ζ²)² + ξ̂^v ] |_{ζ=0} - k(k-1) Q_{k-2}.  (78)
In particular:
∂_t Q_0 + (u⁰ • ∇)Q_0 + B Q_0 - u⁰_♭ Q_0 = 2G⁰ + 2G̃⁰ + ξ̂^v |_{ζ=0},  (79)
∂_t Q_2 + (u⁰ • ∇)Q_2 + B Q_2 - 3 u⁰_♭ Q_2 = -2Q_0 - 4G⁰ - 12G̃⁰ + ∂²_ζ ξ̂^v |_{ζ=0}.  (80)
These equations can be brought back to ODEs using the characteristics method, by following the flow Φ 0 . Moreover, thanks to their cascade structure, it is easy to build a source term ξ v which prepares vanishing moments. We have the following result:
Lemma 7. Let n ≥ 1 and u 0 ∈ C ∞ ([0, T ] × Ō) be a fixed reference flow as defined in paragraph 2.3. There exists
ξ v ∈ C ∞ (R + × Ō × R + ) with ξ v • n = 0, whose x support is in Ō \ Ω, whose time support is compact in (0, T ), such that: ∀0 ≤ k < n, ∀x ∈ Ō, Q k (T, x) = 0. (81)
Moreover, for any s, p ∈ N, for any 0 ≤ m ≤ n, the associated boundary layer profile satisfies:
|v(t, •, •)|_{H^p_x(H^{s,m}_z)} ≲ ( ln(2+t) / (2+t) )^{1/4 + n/2 - m/2},  (82)
where the hidden constant depends on the functional space and on u 0 but not on the time t ≥ 0.
Proof. Reduction to independent control of n ODEs. Once n is fixed, let n′ := ⌊(n-1)/2⌋. We start by choosing smooth even functions of z, φ_j for 0 ≤ j ≤ n′, such that ∂_ζ^{2k} φ̂_j(0) = δ_{jk}. We then compute iteratively the moments Q_{2j} (odd moments automatically vanish by parity), using controls of the form ξ^v_j := ξ^v_j(t, x) φ_j(z) to control Q_{2j} without interfering with the previously constructed controls. When computing the control at order j, all lower order moments 0 ≤ i < j are known and their contribution, like the one of Q_0 in (80), can be seen as a known source term.
Reduction to a null controllability problem. Let us explain why ( 79) is controllable. First, by linearity and since the source terms G 0 and G0 are already known, fixed and tangential, it suffices to prove that, starting from zero and without these source terms, we could reach any smooth tangential state. Moreover, since the flow flushing property ( 24) is invariant through time reversal, it is also sufficient to prove that, in the absence of source term, we can drive any smooth tangential initial state to zero. These arguments can also be formalized using a Duhamel formula following the flow for equation [START_REF] Elena | Decay of solutions to parabolic conservation laws[END_REF].
Null controllability for a toy system. We are thus left with proving a null controllability property for the following toy system:
∂ t Q + (u 0 • ∇)Q + BQ + λQ = ξ in (0, T ) × Ō, Q(0, •) = Q * in Ō, (83)
where B(t, x) is defined in (72) and λ(t, x) is a smooth scalar-valued amplification term. Thanks to the flushing property (24) and to the fact that Ō is bounded, we can choose a finite partition of unity described by functions η_l for 1 ≤ l ≤ L with 0 ≤ η_l(x) ≤ 1 and Σ_l η_l ≡ 1 on Ō, where the support of η_l is a small ball B_l centered at some x_l ∈ Ō. Moreover, we extract our partition such that: for any 1 ≤ l ≤ L, there exists a time t_l ∈ (ǫ, T-ǫ) such that dist(Φ⁰(0, t, B_l), Ω) ≥ δ/2 for |t - t_l| ≤ ǫ, where ǫ > 0. Let β : R → R be a smooth function with β = 1 on (-∞, -ǫ) and β = 0 on (ǫ, +∞). Let Q_l be the solution to (83) with initial data Q_{l,*} := η_l Q_* and null source term ξ. We define:
Q(t, x) := Σ_{l=1}^{L} β(t - t_l) Q_l(t, x),  (84)
ξ(t, x) := Σ_{l=1}^{L} β′(t - t_l) Q_l(t, x).  (85)
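A short verification of the claims made just below: since each Q_l solves (83) with a null source term,
∂_t Q + (u⁰ • ∇)Q + BQ + λQ = Σ_l β′(t - t_l) Q_l + Σ_l β(t - t_l) [ ∂_t Q_l + (u⁰ • ∇)Q_l + BQ_l + λQ_l ] = ξ.
Moreover, Q(0, •) = Σ_l β(-t_l) Q_l(0, •) = Σ_l η_l Q_* = Q_* (because t_l > ǫ), Q(T, •) = 0 (because T - t_l > ǫ), and β′(t - t_l) vanishes unless |t - t_l| < ǫ, in which case Q_l(t, •) is supported in Φ⁰(0, t, B_l), which lies at distance at least δ/2 from Ω.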
Thanks to the construction, formulas ( 84) and ( 85) define a solution to (83) with a smooth control term ξ supported in Ō \ Ω, satisfying ξ • n = 0 and such that Q(T, •) = 0. Decay estimate. For small times t ∈ (0, T ), when ξ v = 0, estimate (82) can be seen as a uniform in time estimate and can be obtained similarly as the well-posedness results proved in [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF]. For large times, t ≥ T , the boundary layer profile equation boils down to the parametrized heat equation ( 49) and we use the conclusion of Lemma 6 to deduce (82) from (56).
Staying in a small neighborhood of the boundary
The boundary layer correction defined in [START_REF] Grenier | Spectral stability of Prandtl boundary layers: an overview[END_REF] is supported within a small x-neighborhood of ∂O. This is legitimate because Navier boundary layers don't exhibit separation behaviors. Within this xneighborhood, this correction lifts the tangential boundary layer residue created by the Euler flow but generates a non vanishing divergence at order √ ε. In the sequel, we will need to find a lifting profile for this residual divergence (see ( 116)). This will be possible as long as the extension n(x) := -∇ϕ(x) of the exterior normal to ∂O does not vanish on the x-support of v. However, there exists at least one point in O where ∇ϕ = 0 because ϕ is a non identically vanishing smooth function with ϕ = 0 on ∂O. Hence, we must make sure that, despite the transport term present in equation ( 46), the x-support of v will not encounter points where ∇ϕ vanishes.
We consider the extended domain O. Its boundary coincides with the set {x ∈ R d ; ϕ(x) = 0}. For any δ ≥ 0, we define V δ := {x ∈ R d ; 0 ≤ ϕ(x) ≤ δ}. Hence, V δ is a neighborhood of ∂O in Ō. For δ large enough, V δ = Ō. As mentioned in paragraph 3.1, ϕ was chosen such that |∇ϕ| = 1 and |ϕ(x)| = dist(x, ∂O) in a neighborhood of ∂O. Let us introduce η > 0 such that this is true on V η . Hence, within this neighborhood of ∂O, the extension n(x) = -∇ϕ(x) of the outwards normal to ∂O is well defined (and of unit norm). We want to guarantee that v vanishes outside of V η .
Considering the evolution equation ( 75), we see it as an equation defined on the whole of O. Thanks to its structure, we see that the support of V is transported by the flow of u 0 . Moreover, V can be triggered either by fixed polluting right-hand side source term or by the control forcing term. We want to determine the supports of these sources such that V vanishes outside of V η .
Thanks to definitions ( 48), ( 73) and ( 74), the unwanted right-hand side source term of ( 75) is supported within the support of χ. We introduce η χ such that supp(χ) ⊂ V ηχ . For δ ≥ 0, we define:
S(δ) := sup{ ϕ(Φ⁰(t, t′, x)) ; t, t′ ∈ [0, T], x ∈ V_δ } ≥ δ.  (86)
With this notation, V_{η_χ} includes the zone where pollution might be emitted. Hence V_{S(η_χ)} includes the zone that might be reached by some pollution. Iterating once more, V_{S(S(η_χ))} includes the zone where we might want to act using ξ^v to prepare vanishing moments. Eventually, V_{S(S(S(η_χ)))} corresponds to the maximum localization of non vanishing values for v. First, since u⁰ is smooth, Φ⁰ is smooth. Moreover, ϕ is smooth. Hence, (86) defines a smooth function of δ. Second, due to the condition u⁰ • n = 0, the characteristics cannot leave or enter the domain and thus follow the boundaries. Hence, S(0) = 0. Therefore, by continuity of S, there exists η_χ > 0 small enough such that S(S(S(η_χ))) ≤ η. We assume χ is fixed from now on.
Controlling the boundary layer exactly to zero
In view of what has been proved in the previous paragraphs, a natural question is whether we could have controlled the boundary layer exactly to zero (instead of controlling only a finite number of modes and relying on self-dissipation of the higher order ones). This was indeed our initial approach but it turned out to be impossible. The boundary layer equation ( 46) is not exactly null controllable at time T . In fact, it is not even exactly null controllable in any finite time greater than T . Indeed, since u 0 (t, •) = 0 for t ≥ T , v is the solution to [START_REF] Guerrero | A result concerning the global approximate controllability of the Navier-Stokes system in dimension 3[END_REF] for t ≥ T . Hence, reaching exactly zero at time T is equivalent to reaching exactly zero at any later time.
Let us present a reduced toy model to explain the difficulty. We consider a rectangular domain and a scalar-valued unknown function v solution to the following system:
∂ t v + ∂ x v -∂ zz v = 0 [0, T ] × [0, 1] × [0, 1], v(t, x, 0) = g(t, x) [0, T ] × [0, 1], v(t, x, 1) = 0 [0, T ] × [0, 1], v(t, 0, z) = q(t, z) [0, T ] × [0, 1], v(0, x, z) = 0 [0, 1] × [0, 1]. (87)
System ( 87) involves both a known tangential transport term and a normal diffusive term. At the bottom boundary, g(t, x) is a smooth fixed pollution source term (which models the action of N (u 0 ), the boundary layer residue created by our reference Euler flow). At the left inlet vertical boundary x = 0, we can choose a Dirichlet boundary value control q(t, z). Hence, applying the same strategy as described above, we can control any finite number of vertical modes provided that T ≥ 1.
However, let us check that it would not be reasonable to try to control the system exactly to zero at any given time T ≥ 1. Let us consider a vertical slice located at x ⋆ ∈ (0, 1) of the domain at the final time and follow the flow backwards by defining:
v ⋆ (t, z) := v(t, x ⋆ + (t -T ), z). ( 88
)
Hence, letting T_⋆ := T - x_⋆ ≥ 0 and using (88), v_⋆ is the solution to a one-dimensional heat system:
∂ t v ⋆ -∂ zz v ⋆ = 0 [T ⋆ , T ] × [0, 1], v ⋆ (t, 0) = g ⋆ (t) [T ⋆ , T ], v ⋆ (t, 1) = 0 [T ⋆ , T ], v ⋆ (0, z) = q ⋆ (z) [0, 1], (89)
where g ⋆ (t) := g(t, x ⋆ + (t -T )) is smooth but fixed and q ⋆ (z) := q(T ⋆ , z) is an initial data that we can choose as if it was a control. Actually, let us change a little the definition of v ⋆ to lift the inhomogeneous boundary condition at z = 0. We set:
v ⋆ (t, z) := v(t, x ⋆ + (t -T ), z) -(1 -z)g ⋆ (t). (90)
Hence, system (89) reduces to:
∂_t v_⋆ - ∂_zz v_⋆ = -(1-z) g′_⋆(t)  in [T_⋆, T] × [0, 1],
v_⋆(t, 0) = v_⋆(t, 1) = 0  in [T_⋆, T],
v_⋆(0, z) = q_⋆(z)  in [0, 1],   (91)
where we change the definition of q_⋆(z) := q(T_⋆, z) - (1-z) g_⋆(T_⋆). Introducing the Fourier basis adapted to system (91), e_n(z) := sin(nπz), we can solve explicitly for the evolution of v_⋆:
v^n_⋆(T) = e^{-n²π²T} v^n_⋆(0) - ∫_{T_⋆}^{T} e^{-n²π²(T-t)} ⟨1-z, e_n⟩ g′_⋆(t) dt.  (92)
If we assume that the pollution term g vanishes at the final time, equation (92) and exact null controllability would impose the choice of the initial control data:
q^n_⋆ = ⟨1-z, e_n⟩ ∫_{T_⋆}^{T} e^{n²π²t} g′_⋆(t) dt.  (93)
Even if the pollution term g is very smooth, there is nothing good to be expected from relation [START_REF] Xin | On the global existence of solutions to the Prandtl's system[END_REF].
Hoping for cancellations or vanishing moments is not reasonable because we would have to guarantee this relation for all Fourier modes n and all x ⋆ ∈ [0, 1]. Thus, the boundary data control that we must choose has exponentially growing Fourier modes. Heuristically, it belongs to the dual of a Gevrey space. The intuition behind relation [START_REF] Xin | On the global existence of solutions to the Prandtl's system[END_REF] is that the control data emitted from the left inlet boundary undergoes a heat regularization process as they move towards their final position. In the meantime, the fixed polluting boundary data is injected directly at positions within the domain and undergoes less smoothing. This prevents any hope from proving exact null controllability for system (87) within reasonable functional spaces and explains why we had to resort to a low-modes control process.
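To give a rough idea of how badly (93) behaves, here is a small numerical sketch (illustrative only; the pollution g_⋆(t) = t² and the values T = 1, T_⋆ = 1/2 are arbitrary, hypothetical choices). It evaluates the size of the coefficients q_⋆^n required by (93) in log₁₀ scale, using ⟨1-z, e_n⟩ = 1/(nπ) and factoring out e^{n²π²T} to avoid overflow.

```python
# Illustration (arbitrary data): the Fourier coefficients required by (93)
# grow like exp(n^2 pi^2 T), hence cannot define a reasonable control.
import numpy as np

T, T_star = 1.0, 0.5
t = np.linspace(T_star, T, 20001)
dt = t[1] - t[0]
dg = 2.0 * t                       # hypothetical pollution g_*(t) = t^2, so g_*'(t) = 2t

for n in range(1, 9):
    bounded = np.exp(n ** 2 * np.pi ** 2 * (t - T)) * dg      # O(1) integrand
    I_n = np.sum(bounded) * dt                                # ~ int e^{n^2 pi^2 (t-T)} g_*'(t) dt
    log10_q = n ** 2 * np.pi ** 2 * T * np.log10(np.e) + np.log10(I_n / (n * np.pi))
    print(f"n = {n}:  log10 |q_*^n|  ~ {log10_q:7.1f}")
```

Already for n = 8 the required coefficient is of the order of 10^270, consistent with the claim that the control data would have to live in the dual of a Gevrey-type space.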
Theorem 1 is an exact null controllability result. To conclude our proof, we use a local argument stated as Lemma 4 in paragraph 2.6 which uses diffusion in all directions. Boundary layer systems like [START_REF] Van Dommelen | The spontaneous generation of the singularity in a separating laminar boundary layer[END_REF] exhibit no diffusion in the tangential direction and are thus harder to handle. The conclusion of our proof uses the initial formulation of the Navier-Stokes equation with a fixed O(1) viscosity.
Estimation of the remainder and technical profiles
In the previous sections, we presented the construction of the Euler reference flushing trajectory u 0 , the transported flow involving the initial data u 1 and the leading order boundary layer correction v. In this section, we follow on with the expansion and introduce technical profiles, which do not have a clear physical interpretation. The purpose of the technical decomposition we propose is to help us prove that the remainder we obtain is indeed small. We will use the following expansion:
u ε = u 0 + √ ε {v} + εu 1 + ε∇θ ε + ε {w} + εr ε , (94)
p ε = p 0 + ε {q} + εp 1 + εµ ε + επ ε , (95)
where v, w and q are profiles depending on t, x and z. For such a function f (t, x, z), we use the notation {f } to denote its evaluation at z = ϕ(x)/ √ ε. In the sequel, operators ∇, ∆, D and div only act on x variables. We will use the following straightforward commutation formulas:
div {f} = {div f} - n • {∂_z f} / √ε,  (96)
∇ {f} = {∇f} - n {∂_z f} / √ε,  (97)
N({f}) = {N(f)} - (1/2) {[∂_z f]_tan} / √ε,  (98)
ε∆ {f} = ε {∆f} + √ε ∆ϕ {∂_z f} - 2√ε {(n • ∇)∂_z f} + |n|² {∂_zz f}.  (99)
Within the x-support of boundary layer terms, |n| 2 = 1.
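These identities follow from the chain rule; for instance, for (96) (a one-line check), writing {f}(t, x) = f(t, x, ϕ(x)/√ε):
∂_i {f} = {∂_i f} + (∂_i ϕ / √ε) {∂_z f}, hence div {f} = {div f} + (∇ϕ • {∂_z f}) / √ε = {div f} - (n • {∂_z f}) / √ε,
since n = -∇ϕ. Formula (99) is obtained in the same way, the |n|² = |∇ϕ|² factor coming from differentiating twice in the fast variable.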
Formal expansions of constraints
In this paragraph, we are interested in the formulation of the boundary conditions and the incompressibility condition for the full expansion. We plug expansion (94) into these conditions and identify the successive orders of power of √ ε.
Impermeability boundary condition
The impermeability boundary condition u ε • n = 0 on ∂O yields:
u 0 • n = 0, ( 100
) v(•, •, 0) • n = 0, (101)
u 1 • n + ∂ n θ ε + w(•, •, 0) • n + r ε • n = 0. ( 102
)
By construction of the Euler trajectory u 0 , equation ( 100) is satisfied. Since the boundary profile v is tangential, equation ( 101) is also satisfied. By construction, we also already have u 1 • n = 0. In order to be able to carry out integrations by part for the estimates of the remainder, we also would like to impose r ε • n = 0. Thus, we read (102) as a definition of ∂ n θ ε once w is known:
∀t ≥ 0, ∀x ∈ ∂O, ∂ n θ ε (t, x) = -w(t, x, 0) • n. (103)
Incompressibility condition
The (almost) incompressibility condition div u ε = σ 0 in O (σ 0 is smooth forcing terms supported outside of the physical domain Ω) yields:
div u 0 -n • {∂ z v} = σ 0 , ( 104
)
{div v} -n • {∂ z w} = 0, (105)
div u 1 + div ∇θ ε + {div w} + div r ε = 0. (106)
In ( 105) and (106), we used formula (96) to isolate the contributions to the divergence coming from the slow derivatives with the one coming from the fast derivative ∂ z . By construction div u 0 = σ 0 , div u 1 = 0, n • ∂ z v = 0 and we would like to work with div r ε = 0. Hence, we read (105) and (106) as:
n • {∂_z w} = {div v},  (107)
-∆θ^ε = {div w}.  (108)
Navier boundary condition
Last, we turn to the slip-with-friction boundary condition. Proceeding as above yields by identification:
N (u 0 ) - 1 2 [∂ z v] tan z=0 = 0, (109)
N (v) z=0 - 1 2 [∂ z w] tan z=0 = 0, (110)
N (u 1 ) + N (∇θ ε ) + N (w) z=0 + N (r ε ) = 0. (111)
By construction, (109) is satisfied. We will choose a basic lifting to guarantee (110). Last, we read (111) as an inhomogeneous boundary condition for the remainder:
N (r ε ) = g ε := -N (u 1 ) -N (∇θ ε ) -N (w) z=0 . (112)
Definitions of technical profiles
At this stage, the three main terms u 0 , v and u 1 are defined. In this paragraph, we explain step by step how we build the following technical profiles of the expansion. For any t ≥ 0, the profiles are built sequentially from the values of v(t, •, •). Hence, they will inherit from the boundary layer profile its smoothness with respect to the slow variables x and its time decay estimates obtained from Lemma 6.
Boundary layer pressure
Equation ( 46) only involves the tangential part of the symmetrical convective product between u 0 and v. Hence, to compensate its normal part, we introduce as in [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF] the pressure q which is defined as the unique solution vanishing as z → +∞ to:
[ (u⁰ • ∇)v + (v • ∇)u⁰ ] • n = ∂_z q.  (113)
Hence, we can now write:
∂ t v + (u 0 • ∇)v + (v • ∇)u 0 + u 0 ♭ z∂ z v -∂ zz v -n∂ z q = 0. (114)
This pressure profile vanishes as soon as u 0 vanishes, hence in particular for t ≥ T . For any p, s, n ∈ N, the following estimate is straightforward:
|q(t, •, •)| H 1 x (H 0,0 z ) |v(t, •, •)| H 2 x (H 0,2 z ) . (115)
Second boundary corrector
The first boundary layer corrector v generates a non vanishing slow divergence and a non vanishing tangential boundary flux. The role of the profile w is to lift two unwanted terms that would be too hard to handle directly in the equation of the remainder. We define w as:
w(t, x, z) := -2 e^{-z} N(v)(t, x, 0) - n(x) ∫_z^{+∞} div v(t, x, z′) dz′.  (116)
Definition (116) allows to guarantee condition (110). Moreover, under the assumption |n(x)| 2 = 1 for any x in the x-support of the boundary layer, this definition also fulfills condition (105). In equation ( 116) it is essential that n(x) does not vanish on the x-support of v. This is why we dedicated paragraph 3.4 to proving we could maintain a small enough support for the boundary layer. For any p, s, n ∈ N, the following estimates are straightforward:
|[w(t, •, •)] tan | H p x (H s,n z ) |v(t, •, •)| H p+1 x (H 1,1 z ) , (117)
|w(t, •, •) • n| H p x (H 0,n z ) |v(t, •, •)| H p+1 x (H 0,n+2 z ) , (118)
|w(t, •, •) • n| H p x (H s+1,n z ) |v(t, •, •)| H p+1 x (H s,n z ) . (119)
Estimates ( 117), ( 118) and ( 119) can be grossly summarized sub-optimally by:
|w(t, •, •)| H p x (H s,n z ) |v(t, •, •)| H p+1 x (H s+1,n+2 z ) . (120)
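Coming back to definition (116), here is a short check of the two conditions it is designed to fulfill. Differentiating (116) in z gives
∂_z w(t, x, z) = 2 e^{-z} N(v)(t, x, 0) + n(x) div v(t, x, z),
so, at z = 0, [∂_z w]_tan = 2 N(v)|_{z=0} (because N(v) is tangential and [n]_tan = 0), which is (110); and n • ∂_z w = div v (because n • N(v) = 0 and |n|² = 1 on the support of the layer), which is (107), the fast-variable counterpart of (105).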
Inner domain corrector
Once w is defined by (116), the collateral damage is that this generates a non vanishing boundary flux w • n on ∂O and a slow divergence. For a fixed time t ≥ 0, we define θ ε as the solution to:
∆θ ε = -{div w} in O, ∂ n θ ε = -w(t, •, 0) • n on ∂O. (121)
System ( 121) is well-posed as soon as the usual compatibility condition between the source terms is satisfied. Using Stokes formula, equations ( 96) and (105), we compute:
∫_∂O w(t, •, 0) • n = ∫_∂O {w} • n = ∫_O div {w} = ∫_O ( {div w} - ε^{-1/2} n • {∂_z w} ) = ∫_O ( {div w} - ε^{-1/2} {div v} ) = ∫_O ( {div w} - ε^{-1/2} div {v} - ε^{-1} n • {∂_z v} ) = ∫_O {div w} - ε^{-1/2} ∫_∂O {v} • n = ∫_O {div w},  (122)
where we used twice the fact that v is tangential. Thus, the compatibility condition is satisfied and system (121) has a unique solution. The associated potential flow ∇θ ε solves:
∂ t ∇θ ε + u 0 • ∇ ∇θ ε + (∇θ ε • ∇) u 0 + ∇µ ε = 0, in O for t ≥ 0, div ∇θ ε = -{div w} in O for t ≥ 0, ∇θ ε • n = -w |z=0 • n on ∂O for t ≥ 0, (123)
where the pressure term µ ε := -∂ t θ ε -u 0 •∇θ ε absorbs all other terms in the evolution equation (see [START_REF] Weinan | Boundary layer theory and the zero-viscosity limit of the Navier-Stokes equation[END_REF]).
Estimating roughly θ ε using standard regularity estimates for the Laplace equation yields:
|θ^ε(t, •)|_{H⁴_x} ≲ |{div w}(t, •)|_{H²_x} + |w(t, •, 0) • n|_{H³_x} ≲ ε^{1/4} |w(t)|_{H⁴_x(H^{0,0}_z)} + ε^{-1/4} |w(t)|_{H³_x(H^{1,0}_z)} + ε^{-3/4} |w(t)|_{H²_x(H^{2,0}_z)} + |v(t)|_{H³_x(H^{0,1}_z)} ≲ ε^{-3/4} |w(t)|_{H⁴_x(H^{2,0}_z)} + |v(t)|_{H³_x(H^{0,1}_z)},  (124)
where we used [55, Lemma 3, page 150] to benefit from the fast variable scaling. Similarly,
|θ ε (t, •)| H 3 x ε -1 4 |w(t)| H 3 x (H 1,0 z ) + |v(t)| H 2 x (H 0,1 z ) , (125)
|θ ε (t, •)| H 2 x ε 1 4 |w(t)| H 2 x (H 0,0 z ) + |v(t)| H 1 x (H 0,1 z ) . (126)
Equation for the remainder
In the extended domain O, the remainder is a solution to:
∂ t r ε -ε∆r ε + (u ε • ∇) r ε + ∇π ε = {f ε } -{A ε r ε } in O for t ≥ 0, div r ε = 0 in O for t ≥ 0, N (r ε ) = g ε on ∂O for t ≥ 0, r ε • n = 0 on ∂O for t ≥ 0, r ε (0, •) = 0 in O at t = 0. (127)
Recall that g ε is defined in (112). We introduce the amplification operator:
A ε r ε := (r ε • ∇) u 0 + √ εv + εu 1 + ε∇θ ε + εw -(r ε • n) ∂ z v + √ ε∂ z w (128)
and the forcing term:
f ε :=(∆ϕ∂ z v -2(n • ∇)∂ z v + ∂ zz w) + √ ε(∆v + ∆ϕ∂ z w -2(n • ∇)∂ z w) + ε(∆w + ∆u 1 + ∆∇θ ε ) -(v + √ ε(w + u 1 + ∇θ ε ))•∇ (v + √ ε(w + u 1 + ∇θ ε )) -(u 0 • ∇)w -(w • ∇)u 0 -u 0 ♭ z∂ z w + (w + u 1 + ∇θ ε ) • n∂ z v + √ εw -∇q -∂ t w. (129)
In (128) and (129), many functions depend on t, x and z. The differential operators ∇ and ∆ only act on the slow variables x and the evaluation at z = ϕ(x)/√ε is done a posteriori in (127). The derivatives in the fast variable direction are explicitly marked with the ∂_z operator. Moreover, most terms are independent of ε, except where explicitly stated in θ^ε and r^ε. Expansion (94) contains 4 slowly varying profiles and 2 boundary layer profiles. Thus, computing ε∆u^ε using formula (99) produces 4 + 2 × 4 = 12 terms. Terms ∆u⁰ and {∂_zz v} have already been taken into account respectively in (26) and (46). Term ∆r^ε is written in (127). The remaining 9 terms are gathered in the first line of the forcing term (129).
Computing the non-linear term (u ε • ∇)u ε using formula (97) produces 6 × 4 + 6 × 2 × 2 = 48 terms. First, 8 have already been taken into account in ( 22), ( 26), ( 46) and (123). Moreover, 6 are written in (127) as (u ε • ∇)r ε , 7 more as the amplification term (128) and 25 in the second and third line of (129). The two missing terms
{(v • n)∂ z v} and {(v • n)∂ z w} vanish because v • n = 0.
Size of the remainder
We need to prove that equation ( 127) satisfies an energy estimate on the long time interval [0, T /ε]. Moreover, we need to estimate the size of the remainder at the final time and check that it is small. The key point is that the size of the source term {f ε } is small in L 2 (O). Indeed, for terms appearing at order O(1), the fast scaling makes us win a ε 1 4 factor (see for example [56, Lemma 3, page 150]). We proceed as we have done in the case of the shape operator in paragraph 2.5.
The only difference is the estimation of the boundary term [START_REF] Geymonat | On the vanishing viscosity limit for acoustic phenomena in a bounded region[END_REF]. We have to take into account the inhomogeneous boundary condition g ε and the fact that, in the general case, the boundary condition matrix M is different from the shape operator M w . Using [START_REF] Carlo | Some results on the Navier-Stokes equations with Navier boundary conditions[END_REF] allows us to write, on ∂O:
(r ε × (∇ × r ε )) • n = ((∇ × r ε ) × n) • r ε = 2 (N (r ε ) + [(M -M w )r ε ] tan ) • r ε . ( 130
)
Introducing smooth extensions of M and M w to the whole domain O also allows to extend the Navier operator N defined in (4), since the extension of the normal n extends the definition of the tangential part [START_REF] Barrat | Large slip effect at a nonwetting fluid-solid interface[END_REF]. Using (130), we transform the boundary term into an inner term:
∫_∂O (r^ε × (∇ × r^ε)) • n = 2 ∫_∂O [ g^ε • r^ε + ((M - M_w) r^ε) • r^ε ] = 2 ∫_O div[ (g^ε • r^ε) n + (((M - M_w) r^ε) • r^ε) n ] ≤ λ |∇r^ε|²_2 + C_λ ( |r^ε|²_2 + |g^ε|²_2 + |∇g^ε|²_2 ),  (131)
for any λ > 0 to be chosen and where C λ is a positive constant depending on λ. We intend to absorb the |∇r ε | 2 2 term of (131) using the dissipative term. However, the dissipative term only provides the norm of the symmetric part of the gradient. We recover the full gradient using the Korn inequality. Indeed, since div r ε = 0 in O and r ε • n = 0 on ∂O, the following estimate holds (see [23, Corollary 1, Chapter IX, page 212]):
|r ε | 2 H 1 (O) ≤C K |r ε | 2 L 2 (O) + C K |∇ × r ε | 2 L 2 (O) . (132)
We choose λ = 1/(2C K ) in (131). Combined with (132) and a Grönwall inequality as in paragraph 2.5 yields an energy estimate for t ∈ [0, T /ε]:
|r^ε|²_{L^∞(L²)} + ε |r^ε|²_{L²(H¹)} = O(ε^{1/4}),  (133)
as long as we can check that the following estimates hold:
‖A^ε‖_{L¹(L^∞)} = O(1),  (134)
ε ‖g^ε‖²_{L²(H¹)} = O(ε^{1/4}),  (135)
‖f^ε‖_{L¹(L²)} = O(ε^{1/4}).  (136)
In particular, the remainder at time T /ε is small and we can conclude the proof of Theorem 1 with the same arguments as in paragraph 2.6. Therefore, it only remains to be checked that estimates (134), ( 136) and (135) hold on the time interval [0, T /ε]. In fact, they even hold on the whole time interval [0, +∞).
Estimates for A ε . The two terms involving u 0 and u 1 vanish for t ≥ T . Thus, they satisfy estimate (134). For t ≥ 0, we estimate the other terms in A ε in the following way:
√ ε |∇v(t)| L ∞ √ ε |v(t)| H 3 x (H 1,0 z ) , (137)
ε |∇w(t)| L ∞ ε |w(t)| H 3 x (H 1,0 z ) , (138)
|∂ z v(t)| L ∞ |v(t)| H 2 x (H 2,0 z ) , (139) √ ε |∂ z w(t)| L ∞ √ ε |w(t)| H 2 x (H 2,0 z ) , (140)
ε ∇ 2 θ ε (t) L ∞ ε |θ ε (t)| H 4 . (141)
Combining these estimates with (124) and (120) yields:
A ε L 1 (L ∞ ) u 0 L 1 [0,T ] (H 3 ) + ε u 1 L 1 [0,T ] (H 3 ) + v L 1 (H 5 x (H 3,2 z )) . (142)
Applying Lemma 7 with p = 5, n = 4 and m = 2 concludes the proof of (134).
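Explicitly (a small computation): estimate (82) with p = 5, s = 3, n = 4 and m = 2 gives
|v(t, •, •)|_{H⁵_x(H^{3,2}_z)} ≲ ( ln(2+t) / (2+t) )^{1/4 + 2 - 1} = ( ln(2+t) / (2+t) )^{5/4},
which is integrable on (0, +∞); hence the last term of (142) is finite and (134) follows.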
Estimates for g ε . For t ≥ 0, using the definition of g ε in (112), we estimate:
ε N (u 1 )(t) 2 H 1 ε u 1 (t) 2 H 2 , (143)
ε |N (∇θ ε )(t)| 2 H 1 ε |θ ε (t)| 2 H 3 , (144)
ε N (w) |z=0 (t) 2 H 1 ε |w(t)| 2 H 2 x (H 1,1 z ) . (145)
Combining these estimates with (125) and (120) yields:
ε g ε 2 L 2 (H 1 ) ε u 1 2 L 2 [0,T ] (H 2 ) + ε 3 4 v 2 L 2 (H 4 x (H 2,3 z )) . (146)
Applying Lemma 7 with p = 4, n = 4 and m = 3 concludes the proof of (135).
Estimates for f^ε. For t ≥ 0, we estimate the 36 terms involved in the definition of f^ε in (129). The conclusion is that (136) holds as soon as v is bounded in L¹(H⁴_x(H^{3,4}_z)). This can be obtained from Lemma 7 with p = 4, n = 6 and m = 4. Let us give a few examples of some of the terms requiring the most regularity. The key point is that all terms of (129) appearing at order O(1) involve a boundary layer term and thus benefit from the fast variable scaling gain of ε^{1/4} in L² of [56, Lemma 3, page 150]. For example, with (120):
|{∂_zz w}(t)|_{L²} ≲ ε^{1/4} |w(t)|_{H¹_x(H^{2,0}_z)} ≲ ε^{1/4} |v(t)|_{H²_x(H^{3,2}_z)}.  (147)
Using ( 125) and (120), we obtain:
ε |∆∇θ ε (t)| L 2 ε 3 4 |w(t)| H 3 x (H 1,0 z ) + |v(t)| H 2 x (H 0,1 z ) ε 3 4 |v(t)| H 4 x (H 2,2 z ) . (148)
The time derivative {∂ t w} can be estimated easily because the time derivative commutes with the definition of w through formula (116). Moreover, ∂ t v can be recovered from its evolution equation [START_REF] Grenier | Spectral stability of Prandtl boundary layers: an overview[END_REF]:
|{∂ t w} (t)| L 2 ε 1 4 |∂ t w(t)| H 1 x (H 0,0 z ) ε 1 4 |∂ t v(t)| H 2 x (H 1,2 z ) ε 1 4 |v(t)| H 3 x (H 2,4 z ) + |ξ v (t)| H 3 x (H 2,4
z ) . (149) The forcing term ξ v is smooth and supported in [0, T ]. As a last example, consider the term
(∇θ ε • n)∂ z v.
We use the injection H 1 ֒→ L 4 which is valid in 2D and in 3D and estimate (126):
|(∇θ ε • n) {∂ z v} (t)| L 2 |∇θ ε (t)| H 1 |{∂ z v} (t)| H 1 ε 1 4 |v(t)| H 3 x (H 1,2 z ) |v(t)| H 2 x (H 1,0 z ) . (150)
As (82) both yields L ∞ and L 1 estimates in time, this estimation is enough to conclude. All remaining nonlinear convective terms can be handled in the same way or even more easily. The pressure term is estimated using (115).
These estimates conclude the proof of small-time global approximate null controllability in the general case. Indeed, both the boundary layer profile (thanks to Lemma 7) and the remainder are small at the final time. Thus, as announced in Remark 2, we have not only proved that there exists a weak trajectory going approximately to zero, but that any weak trajectory corresponding to our source terms ξ ε and σ ε goes approximately to zero. We combine this result with the local and regularization arguments explained in paragraph 2.6 to conclude the proof of Theorem 1 in the general case.
Global controllability to the trajectories
In this section, we explain how our method can be adapted to prove small-time global exact controllability to other states than the null equilibrium state. Since the Navier-Stokes equation exhibits smoothing properties, all conceivable target states must be smooth enough. Generally speaking, the exact description of the set of reachable states for a given controlled system is a difficult question. Already for the heat equation on a line segment, the complete description of this set is still open (see [START_REF] Dardé | On the reachable set for the one-dimensional heat equation[END_REF] and [START_REF] Martin | On the reachable states for the boundary control of the heat equation[END_REF] for recent developments on this topic). The usual circumvention is to study the notion of global exact controllability to the trajectories. That is, we are interested in whether all known final states of the system are reachable from any other arbitrary initial state using a control: Theorem 2. Let T > 0. Assume that the intersection of Γ with each connected component of ∂Ω is smooth. Let ū ∈ C 0 w ([0, T ]; L 2 γ (Ω)) ∩ L 2 ((0, T ); H 1 (Ω)) be a fixed weak trajectory of (1) with smooth ξ. Let u * ∈ L 2 γ (Ω) be another initial data unrelated with ū. Then there exists u ∈ C 0 w ([0, T ]; L 2 γ (Ω)) ∩ L 2 ((0, T ); H 1 (Ω)) a weak trajectory of (1) with u(0,
•) = u * satisfying u(T, •) = ū(T, •).
The strategy is very similar to the one described in the previous sections to prove the global null controllability. We start with the following lemma, asserting small-time global approximate controllability to smooth trajectories in the extended domain.
Lemma 8. Let T > 0. Let (ū, ξ, σ) ∈ C ∞ ([0, T ] × Ō) be a fixed smooth trajectory of [START_REF] Bocquet | Flow boundary conditions from nano-to micro-scales[END_REF]. Let u * ∈ L 2 div (O) be another initial data unrelated with ū. For any δ > 0, there exists u ∈ C 0 w ([0, T ]; L 2 div (O)) ∩ L 2 ((0, T ); H 1 (O)) a weak Leray solution of (12) with u(0,
•) = u * satisfying |u(T ) -ū(T )| L 2 (O) ≤ δ.
Proof. We build a sequence u (ε) to achieve global approximate controllability to the trajectories. Still using the same scaling, we define it as:
u (ε) (t, x) := 1 ε u ε t ε , x , (151)
where u ε solves the vanishing viscosity Navier-Stokes equation [START_REF] Glass | Approximate Lagrangian controllability for the 2-D Euler equation. Application to the control of the shape of vortex patches[END_REF] with initial data εu * on the time interval [0, T /ε]. As previously, this time interval will be used in two different stages. First, a short stage of fixed length T to achieve controllability of the Euler system by means of a return-method strategy. Then, a long stage [T, T /ε], during which the boundary layer dissipates thanks to the careful choice of the boundary controls during the first stage. During the first stage, we use the expansion:
u ε = u 0 + √ ε {v} + εu 1,ε + . . . , (152)
where u 1,ε is built such that u 1,ε (0, •) = u * and u 1,ε (T, •) = ū(εT, •). This is the main difference with respect to the null controllability strategy. Here, we need to aim for a non zero state at the first order. Of course, this is also possible because the state u 1,ε is mostly transported by u 0 (which is such that the linearized Euler system is controllable). The profile u 1,ε now depends on ε. However, since the reference trajectory belongs to C ∞ , all required estimates can be made independent on ε. During this first stage, u 1,ε solves the usual first-order system [START_REF] Weinan | Blowup of solutions of the unsteady Prandtl's equation[END_REF]. For large times t ≥ T , we change our expansion into:
u ε = √ ε {v} + εū(εt, •) + . . . , (153)
where the boundary layer profile solves the homogeneous heat system (49) and ū is the reference trajectory solving the true Navier-Stokes equation. As we have done in the case of null controllability, we can derive the equations satisfied by the remainders in the previous equations and carry on both well-posedness and smallness estimates using the same arguments. Changing expansion (152) into (153) allows to get rid of some unwanted terms in the equation satisfied by the remainder. Indeed, terms such as ε∆u 1 or ε(u 1 ∇)u 1 don't appear anymore because they are already taken into account by ū. One important remark is that it is necessary to aim for ū(εT ) ≈ ū(0) at the linear order and not towards the desired end state ū(T ). Indeed, the inviscid stage is very short and the state will continue evolving while the boundary layer dissipates. This explains our choice of pivot state. We obtain:
u (ε) (T ) -ū(T ) L 2 (O) = O ε 1 8 , (154)
which concludes the proof of approximate controllability.
We will also need the following regularization lemma:
Lemma 9. Let T > 0. Let ū ∈ C^∞([0, T] × Ō) be a fixed smooth function with ū • n = 0 on ∂O. There exists a smooth function C, with C(0) = 0, such that, for any r_* ∈ L²_div(O) and any r ∈ C⁰_w([0, T]; L²_div(O)) ∩ L²((0, T); H¹(O)), weak Leray solution to:
∂_t r - ∆r + (ū•∇)r + (r•∇)ū + (r•∇)r + ∇π = 0  in [0, T] × O,
div r = 0  in [0, T] × O,
r • n = 0  on [0, T] × ∂O,
N(r) = 0  on [0, T] × ∂O,
r(0, •) = r_*  in O,   (155)
the following property holds true:
∃t r ∈ [0, T ], |r(t r , •)| H 3 (O) ≤ C |r * | L 2 (O) . (156)
Proof. This regularization lemma is easy in our context because we assumed a lot of smoothness on the reference trajectory ū and we are not demanding anything on the time t r at which the solution becomes smoother. We only sketch out the steps that we go through. We repeatedly use the Korn inequality from [68, Theorem 10.2, page 299] to derive estimates from the symmetrical part of gradients. Let È denote the usual orthogonal Leray projector on divergence-free vectors fields tangent to the boundaries. We will use the fact |∆r| L 2 |È∆r| L 2 which follows from maximal regularity result for the Stokes problem with div r = 0 in O, r • n = 0 and N (r) = 0 on ∂O. Our scheme is inspired from [START_REF] Galdi | An introduction to the Navier-Stokes initial-boundary value problem[END_REF].
Weak solution energy estimate. We start with the usual weak solution energy estimate (which is included in the definition of a weak Leray solution to (155)), formally multiplying (155) by r and integrating by parts. We obtain:
∃C 1 , for a.e. t ∈ [0, T ], |r(t)| 2 L 2 (O) + t 0 |r(t ′ )| 2 H 1 (O) dt ′ ≤ C 1 |r * | 2 L 2 (O) . (157)
In particular (157) yields the existence of 0 ≤ t 1 ≤ T /3 such that:
|r(t₁)|_{H¹(O)} ≤ ( 3C₁/T )^{1/2} |r_*|_{L²(O)}.  (158)
Strong solution energy estimate. We move on to the usual strong solution energy estimate, multiplying (155) by È∆r and integrating by parts. We obtain:
∃C 2 , ∀t ∈ [t 1 , t 1 + τ 1 ], |r(t)| 2 H 1 (O) + t t1 |r(t ′ )| 2 H 2 (O) dt ′ ≤ C 2 |r(t 1 )| 2 H 1 (O) , (159)
where τ 1 ≤ T /3 is a short existence time coming from the estimation of the nonlinear term and bounded below as a function of |r(t 1 )| H 1 (O) . See [32, Theorem 6.1] for a detailed proof. Our situation introduces an unwanted boundary term during the integration by parts of ∂ t r, È∆r :
t t1 ∂O [D(r)n] tan [∂ t r] tan = - t t1 ∂O (M r) • ∂ t r. (160)
Luckily, the Navier boundary conditions helps us win one space derivative. When M is a scalar (or a symmetric matrix), this term can be seen as a time derivative. In the general case, we have to conduct a parallel estimate for ∂ t r ∈ L 2 by multiplying equation (155) by ∂ t r, which allows us to maintain the conclusion (159). In particular, this yields the existence of 0 ≤ t 2 ≤ 2T /3 such that:
|r(t₂)|_{H²(O)} ≤ ( C₂/τ₁ )^{1/2} |r(t₁)|_{H¹(O)}.  (161)
Third energy estimate. We iterate once more. We differentiate (155) with respect to time to obtain an evolution equation on ∂ t r which we multiply by ∂ t r and integrate by parts. We obtain:
∃C 3 , ∀t ∈ [t 2 , t 2 + τ 2 ], |∂ t r(t)| 2 L 2 (O) + t t2 |∂ t r(t ′ )| 2 H 1 (O) dt ′ ≤ C 3 |∂ t r(t 2 )| 2 L 2 (O) , (162)
where τ₂ is a short existence time bounded from below as a function of |∂_t r(t₂)|_{L²(O)}, which is bounded at time t₂ since we can compute it from equation (155). Using (162), we deduce an L^∞(H²) bound on r, seeing (155) as a Stokes problem for r. Using the same argument as above, we find a time t₃ such that r ∈ H³ with a quantitative estimate.
Now we can prove Theorem 2. Even though ū is only a weak trajectory on [0, T], there exists 0 ≤ T₁ < T₂ ≤ T such that ū is smooth on [T₁, T₂]. This is a classical statement (see [START_REF] Temam | Behaviour at time t = 0 of the solutions of semilinear evolution equations[END_REF], Remark 3.2, for the case of Dirichlet boundary conditions). We will start our control strategy by doing nothing on [0, T₁]. Thus, the weak trajectory u will move from u_* to some state u(T₁), which we will use as a new initial data. Then, we use our control to drive u(T₁) to ū(T₂) at time T₂. After T₂, we choose null controls. The trajectory u then follows ū. Hence, without loss of generality, we can assume that T₁ = 0 and T₂ = T. This allows us to work with a smooth reference trajectory.
To finish the control strategy, we use the local result from [START_REF] Guerrero | Local exact controllability to the trajectories of the Navier-Stokes system with nonlinear Navier-slip boundary conditions[END_REF]. According to this result, there exists δ T /3 > 0 such that, if we succeed to prove that there exists 0 < τ < 2T /3 such that |u(τ )ū(τ )| H 3 (O) ≤ δ T /3 , then there exist controls driving u to ū(T ) at time T . If we choose null controls r := uū satisfies the hypothesis of Lemma 9. Hence, there exists δ > 0 such that C(δ) ≤ δ T /3 and we only need to build a trajectory such that |u(T /3)ū(T /3)| L 2 (O) ≤ δ, which is precisely what has been proved in Lemma 8. This concludes the proof of Theorem 2.
Perspectives
The results obtained in this work can probably be extended in following directions:
• As stated in Remark 2, for the 3D case, it would be interesting to prove that the constructed trajectory is a strong solution of the Navier-Stokes system (provided that the initial data is smooth enough). Since the first order profiles are smooth, the key point is whether we can obtain strong energy estimates for the remainder despite the presence of a boundary layer. In the uncontrolled setting, an alternative approach to the asymptotic expansion of [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF] consists in introducing conormal Sobolev spaces to perform energy estimates (see [START_REF] Masmoudi | Uniform regularity for the Navier-Stokes equation with Navier boundary condition[END_REF]).
• As proposed in [START_REF] Glass | Approximate Lagrangian controllability for the 2-D Euler equation. Application to the control of the shape of vortex patches[END_REF], [START_REF] Glass | Prescribing the motion of a set of particles in a three-dimensional perfect fluid[END_REF] then [START_REF] Glass | Lagrangian controllability at low Reynolds number[END_REF], respectively for the case of perfect fluids (Euler equation) then very viscous fluids (stationary Stokes equation), the notion of Lagrangian controllability is interesting for applications. It is likely that the proofs of these references can be adapted to the case of the Navier-Stokes equation with Navier boundary conditions thanks to our method, since the boundary layers are located in a small neighborhood of the boundaries of the domain which can be kept separated from the Lagrangian trajectories of the considered movements. This adaptation might involve stronger estimates on the remainder.
• As stated after Lemma 2, the hypothesis that the control domain Γ intersects all connected components of the boundary ∂Ω of the domain is necessary to obtain controllability of the Euler equation. However, since we are dealing with the Navier-Stokes equation, it might be possible to release this assumption, obtain partial results in its absence, or prove that it remains necessary. This question is also linked to the possibility of controlling a fluid-structure system where one tries to control the position of a small solid immersed in a fluid domain by a control on a part of the external border only. Existence of weak solutions for such a system is studied in [START_REF] Gérard | Existence of weak solutions up to collision for viscous fluid-solid systems with slip[END_REF].
• At least for simple geometric settings of Open Problem (OP), our method might be adapted to the challenging Dirichlet boundary condition. In this case, the amplitude of the boundary layer is O(1) instead of O( √ ε) here for the Navier condition. This scaling deeply changes the equations satisfied by the boundary layer profile. Moreover, the new evolution equation satisfied by the remainder involves a difficult term
(1/√ε) (r^ε • n) ∂_z v.
Well-posedness and smallness estimates for the remainder are much harder and might involve analytic techniques. We refer to paragraph 1.5.1 for a short overview of some of the difficulties to be expected.
More generally speaking, we expect that the well-prepared dissipation method can be applied to other fluid mechanics systems to obtain small-time global controllability results, as soon as asymptotic expansions for the boundary layers are known.
A Smooth controls for the linearized Euler equation
In this appendix, we provide a constructive proof of Lemma 3. The main idea is to construct a force term ξ 1 such that ∇ × u 1 (T, •) = 0 in O. Hence, the final time profile U := u 1 (T, •) satisfies:
∇ • U = 0 in O, ∇ × U = 0 in O, U • n = 0 on ∂O. (163)
For simply connected domains, this implies that U = 0 in O. For multiply connected domains, the situation is more complex. Roughly speaking, a finite number of non vanishing solutions to (163) must be ruled out by sending in appropriate vorticity circulations. For more details on this specific topic, we refer to the original references: [START_REF] Coron | On the controllability of 2-D incompressible perfect fluids[END_REF] for 2D, then [START_REF] Glass | Exact boundary controllability of 3-D Euler equation[END_REF] for 3D. Here, we give an explicit construction of a regular force term such that ∇ × u 1 (T, •) = 0. The proof is slightly different in the 2D and 3D settings, because the vorticity formulation of ( 26) is not exactly the same. In both cases, we need to build an appropriate partition of unity.
A.1 Construction of an appropriate partition of unity
First, thanks to hypothesis (24), the continuity of the flow Φ⁰ and the compactness of Ō, there exists δ > 0 such that:
∀x ∈ Ō, ∃t_x ∈ (0, T), dist( Φ⁰(0, t_x, x), Ω ) ≥ δ.  (165)
Hence, there exists a smooth closed control region K ⊂ Ō such that K ∩ Ω = ∅ and:
∀x ∈ Ō, ∃t x ∈ (0, T ), Φ 0 (0, t x , x) ∈ K.
We pave the control region K with a finite number of squares (resp. cubes, in the 3D case) C₁, …, C_M, each of them being either of inner type or of boundary type (see Figure 3: paving the control region K with appropriate squares). Thanks to (165) and to the continuity of the flow Φ⁰:
∀x ∈ Ō, ∃ǫ_x > 0, ∃t_x ∈ (ǫ_x, T - ǫ_x), ∃m_x ∈ {1, …, M}, ∀t′ ∈ (0, T), ∀x′ ∈ Ō, |t′ - t_x| < ǫ_x and |x - x′| < ǫ_x ⇒ Φ⁰(0, t′, x′) ∈ C_{m_x}.  (166)
By compactness of Ō, we can find ǫ > 0 and balls B_l for 1 ≤ l ≤ L, covering Ō, such that:
∀l ∈ {1, …, L}, ∃t_l ∈ (ǫ, T - ǫ), ∃m_l ∈ {1, …, M}, ∀t ∈ (t_l - ǫ, t_l + ǫ), Φ⁰(0, t, B_l) ⊂ C_{m_l}.  (167)
Hence, each ball spends a positive amount of time within a given square (resp. cube), where we can use a local control to act on the u¹ profile. This square (resp. cube) can be of one of two types as constructed above: either of inner type, or of boundary type. We also introduce a smooth partition of unity η_l for 1 ≤ l ≤ L, such that 0 ≤ η_l(x) ≤ 1, Σ_l η_l ≡ 1, and each η_l is compactly supported in B_l. Last, we introduce a smooth function β : R → [0, 1] such that β ≡ 1 on (-∞, -ǫ) and β ≡ 0 on (ǫ, +∞).
A.2 Planar case
We consider the initial data u_* ∈ H³(O) ∩ L²_div(O) and we split it using the constructed partition of unity. Writing (26) in vorticity form, ω¹ := ∇ × u¹ can be computed as Σ_l ω_l, where ω_l is the solution to:
∂_t ω_l + (div u⁰) ω_l + u⁰ • ∇ω_l = ∇ × ξ_l  in (0, T) × Ō,   ω_l(0, •) = ∇ × (η_l u_*)  in Ō.  (168)
We consider ω̃_l, the solution to (168) with ξ_l = 0. Setting ω_l := β(t - t_l) ω̃_l defines a solution to (168), vanishing at time T, provided that we can find ξ_l such that ∇ × ξ_l = β′(t - t_l) ω̃_l. The main difficulty is that we need ξ_l to be supported in Ō \ Ω. Since β′ ≡ 0 outside of (-ǫ, ǫ), β′(t - t_l) ω̃_l is supported in C_{m_l} thanks to (167), because the support of ω̃_l is transported by (168). We distinguish two cases.
Inner balls. Assume that C_{m_l} is an inner square. Then B_l does not intersect ∂O. Indeed, the streamlines of u⁰ follow the boundary ∂O. If there existed x ∈ B_l ∩ ∂O, then Φ⁰(0, t_l, x) ∈ ∂O could not belong to C_{m_l}, which would violate (167). Hence, B_l must be an inner ball. Then, thanks to Stokes' theorem, the average of ω_l(0, •) on B_l is null (since the circulation of η_l u_* along its border is null). Moreover, this average is preserved under the evolution by (168) with ξ_l = 0. Thus, the average of ω̃_l is identically null. It remains to be checked that, if w is a zero-average scalar function supported in an inner square, we can find functions (ξ₁, ξ₂) supported in the same square such that ∂₁ξ₂ - ∂₂ξ₁ = w. Up to translation, rescaling and rotation, we can assume that the inner square is C = [0, 1]². We define:
a(x₂) := ∫₀¹ w(x₁, x₂) dx₁,  (169)
b(x₂) := ∫₀^{x₂} a(s) ds,  (170)
ξ₁(x₁, x₂) := -c′(x₁) b(x₂),  (171)
ξ₂(x₁, x₂) := -c(x₁) a(x₂) + ∫₀^{x₁} w(s, x₂) ds,  (172)
where c : R → [0, 1] is a smooth function with c ≡ 0 on (-∞, 1/4) and c ≡ 1 on (3/4, +∞). Thanks to (169), a vanishes for x₂ ∉ [0, 1]. Thanks to (170), b vanishes for x₂ ≤ 0 (because a(x₂) = 0 when x₂ ≤ 0) and for x₂ ≥ 1 (because, for x₂ ≥ 1, b(x₂) equals the total integral C_w of w, which is null here). Thanks to (171) and (172), (ξ₁, ξ₂) vanish outside of C and ∂₁ξ₂ - ∂₂ξ₁ = w. Thus, we can build ξ_l, supported in C_{m_l}, such that ∇ × ξ_l = β′(t - t_l) ω̃_l.
Moreover, thanks to this explicit construction, the spatial regularity of ξ_l is at least as good as that of ω̃_l, which is the same as that of ∇ × (η_l u_*). If u_* ∈ H³(O), then ξ_l ∈ C¹([0, T], H¹(O)) ∩ C⁰([0, T], H²(O)). This remains true after summation with respect to 1 ≤ l ≤ L and for the constructions exposed below. If the initial data u_* was smoother, we could also build smoother controls.
Boundary balls. Assume that C_{m_l} is a boundary square. Then, B_l can either be an inner ball or a boundary ball and we can no longer assume that the average of ω̃_l is identically null. However, the same construction also works. Up to translation, rescaling and rotation, we can assume that the boundary square is C = [0, 1]², with the side x₂ = 0 inside O and the side x₂ = 1 in R² \ O (see Figure 4: a boundary square). We start by extending w from C ∩ Ō to C, choosing a regular extension operator. Then, we use the same formulas (169), (170), (171) and (172). One checks that this defines a force which vanishes for x₁ ≤ 0, for x₁ ≥ 1 and for x₂ ≤ 0.
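With the definitions (169)-(172) above (in particular, assuming b is given by (170) so that b′ = a), the key identity can be checked in one line:
∂₁ξ₂ = -c′(x₁) a(x₂) + w(x₁, x₂) and ∂₂ξ₁ = -c′(x₁) b′(x₂) = -c′(x₁) a(x₂), so that ∂₁ξ₂ - ∂₂ξ₁ = w.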
A.3 Spatial case
In 3D, each vorticity patch ω l satisfies:
∂ t ω l + ∇ × (ω l × u 0 ) = ∇ × ξ l in (0, T ) × Ō, ω l (0, •) = ∇ × (η l u * ) in Ō. (173)
Equation ( 173) preserves the divergence-free condition of its initial data. Hence, proceeding as above, the only thing that we need to check is that, given a vector field w = (w 1 , w 2 , w 3 ) : R 3 → R 3 such that:
support(w) ⊂ (0, 1) 3 , (174)
div(w) = 0, (175)
we can find a vector field ξ = (ξ 1 , ξ 2 , ξ 3 ) : R 3 → R 3 such that:
∂ 2 ξ 3 -∂ 3 ξ 2 = w 1 , (176)
∂ 3 ξ 1 -∂ 1 ξ 3 = w 2 , (177)
∂ 1 ξ 2 -∂ 2 ξ 1 = w 3 , (178)
support(ξ) ⊂ (0, 1) 3 .
(179)
As in the planar case, we distinguish the case of inner and boundary cubes.
Inner cubes. Let a ∈ C ∞ (R, R) be such that:
1 0 a(x)dx = 1, (180)
support(a) ⊂ (0, 1).
We define:
ξ₁(x₁, x₂, x₃) := a(x₁) h(x₂, x₃),  (182)
ξ₂(x₁, x₂, x₃) := ∫₀^{x₁} (∂₂ξ₁ + w₃)(x, x₂, x₃) dx,  (183)
ξ₃(x₁, x₂, x₃) := ∫₀^{x₁} (∂₃ξ₁ - w₂)(x, x₂, x₃) dx,  (184)
where h : R² → R will be specified later on. From (183), one has (178). From (184), one has (177). From (174), (175), (183) and (184), one has (176). Using (174), (182), (183) and (184), one checks that (179) holds if h satisfies
support(h) ⊂ (0, 1)²,  (185)
∂ 2 h(x 2 , x 3 ) = W 2 (x 2 , x 3 ), (186)
∂ 3 h(x 2 , x 3 ) = W 3 (x 2 , x 3 ), (187)
where
W₂(x₂, x₃) := -∫₀¹ w₃(x, x₂, x₃) dx,  (188)
W₃(x₂, x₃) := ∫₀¹ w₂(x, x₂, x₃) dx.  (189)
From (174), (175), (188) and (189), one has:
support(W₂) ⊂ (0, 1)², support(W₃) ⊂ (0, 1)²,  (190)
∂₂W₃ - ∂₃W₂ = 0.  (191)
We define h by
h(x₂, x₃) := ∫₀^{x₂} W₂(x, x₃) dx,  (192)
so that (186) holds. From (190), (191) and (192), one gets (187). Finally, from (188), (190) and (192), one sees that (185) holds if and only if:
k(x₃) = 0,  (193)
where k(x₃) := ∫₀¹ ∫₀¹ w₃(x₁, x₂, x₃) dx₁ dx₂.  (194)
Using (174), (175) and (194), one sees that k′ ≡ 0 and support(k) ⊂ (0, 1), which implies (193).
Boundary cubes. Now we consider a boundary cube. Up to translation, scaling and rotation, we assume that we are considering the cube C = [0, 1]³ with the face x₁ = 0 lying inside O and the face x₁ = 1 lying in R³ \ O. Similarly as in the planar case, we choose a regular extension of w to C. We set ξ₁ = 0 and we define ξ₂ by (183) and ξ₃ by (184). One has (176), (177), (178) in C ∩ Ō with support(ξ) ∩ Ō ⊂ C.
Figure 1: Setting of the main Navier-Stokes control problem.
4 (δ) ≤ δ T 4 , where δ T 4 comes
444 from Lemma 4 and the function C T 4 comes from Lemma 5.
1 4
1 in L 2 of [56, Lemma 3, page 150]. For example, with (120):
Lemma 9 .
9 Let T > 0. Let ū ∈ C ∞ ([0, T ]× Ō) be a fixed smooth function with ū•n = 0 on ∂O. There exists a smooth function C, with C(0) = 0, such that, for any r * ∈ L 2 div (O) and any r ∈ C 0 w ([0, T ]; L 2 div (O)) ∩ L 2 ((0, T ); H 1 (O)), weak Leray solution to:
Figure 3 :
3 Figure 3: Paving the control region K with appropriate squares. Thanks to (165) and to the continuity of the flow Φ 0 :∀x ∈ Ō, ∃ǫ x > 0,∃t x ∈ (ǫ x , Tǫ x ), ∃m x ∈ {1, . . . M }, ∀t ′ ∈ (0, T ), ∀x ′ ∈ Ō, |t ′t x | < ǫ x and |xx ′ | < ǫ x ⇒ Φ 0 (0, t ′ , x ′ ) ∈ C mx . (166)By compactness of Ō, we can find ǫ > 0 and balls B l for 1 ≤ l ≤ M , covering Ō, such that:∀l ∈ {1, . . . L}, ∃t l ∈ (ǫ, Tǫ), ∃m l ∈ {1, . . . M }, ∀t ∈ (t lǫ, t l + ǫ), Φ 0 (0, t, B l ) ∈ C m l .(167)
Figure 4: A boundary square.
where τ 2 is a short existence time bounded from below as a function of |∂ t r(t 2 )| L 2 (O) , which is bounded at time t 2 since we can compute it from equation (155). Using (162), we deduce an L ∞ (H 2 ) bound on r seeing (155) as a Stokes problem for r. Using the same argument as above, we find a time t 3 such that r ∈ H 3 with a quantitative estimate. Now we can prove Theorem 2. Even though ū is only a weak trajectory on [0, T ], there exists
∂_t v_⋆ − ∂_{zz} v_⋆ = −(1 − z) g′_⋆(t) in [T_⋆, T] × [0, 1],
v_⋆(t, 0) = 0 on [T_⋆, T],
v_⋆(t, 1) = 0 on [T_⋆, T],
v_⋆(0, z) = q_⋆(z) on [0, 1],   (91)
where we change the definition of q_⋆(z) := q(T_⋆, z) − (1 − z) g_⋆(T_⋆). Introducing the Fourier basis adapted to system (91), e_n(z) := sin(nπz), we can solve explicitly for the evolution of v_⋆: v^n_⋆(T) = e^{−n²π²T} v^n_⋆(0) −
* Work supported by ERC Advanced Grant 266907 (CPDENL) of the 7th Research Framework Programme (FP7). | 117,612 | [
"12845",
"177864"
] | [
"1005052",
"1005054",
"1005052",
"27730"
] |
01485213 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2017 | https://inria.hal.science/hal-01485213/file/SocialCommunicationBus.pdf | Rafael Angarita
Nikolaos Georgantas
Cristhian Parra
James Holston
email: jholston@berkeley.edu
Valérie Issarny
Leveraging the Service Bus Paradigm for Computer-mediated Social Communication Interoperability
Keywords: Social Communication, Computer-mediated Communication, Interoperability, Middleware, Service-oriented Architecture
Computer-mediated communication can be defined as any form of human communication achieved through computer technology. From its beginnings, it has been shaping the way humans interact with each other, and it has influenced many areas of society. There exist a plethora of communication services enabling computer-mediated social communication (e.g., Skype, Facebook Messenger, Telegram, WhatsApp, Twitter, Slack, etc.). Based on personal preferences, users may prefer a communication service rather than another. As a result, users sharing same interests may not be able to interact since they are using incompatible technologies. To tackle this interoperability barrier, we propose the Social Communication Bus, a middleware solution targeted to enable the interaction between heterogeneous communication services. More precisely, the contribution of this paper is threefold: (i), we propose a survey of the various forms of computer-mediated social communication, and we make an analogy with the computing communication paradigms; (ii), we revisit the eXtensible Service Bus (XSB) that supports interoperability across computing interaction paradigms to provide a solution for computer-mediated social communication interoperability; and (iii), we present Social-MQ, an implementation of the Social Communication Bus that has been integrated into the AppCivist platform for participatory democracy.
I. INTRODUCTION
People increasingly rely on computer-mediated communication for their social interactions (e.g., see [START_REF] Pillet | Email-free collaboration: An exploratory study on the formation of new work habits among knowledge workers[END_REF]). This is a direct consequence of the global reach of the Internet combined with the massive adoption of social media and mobile technologies that make it easy for people to view, create and share information within their communities almost anywhere, anytime. The success of social media has further led -and is still leading -to the introduction of a large diversity of social communication services (e.g., Skype, Facebook, Google Plus, Telegram, Instagram, WhatsApp, Twitter, Slack, ...). These services differ according to the types of communities and interactions they primarily aim at supporting. However, existing services are not orthogonal and users ultimately adopt one service rather than another based on their personal experience (e.g., see the impact of age on the use of computerbased communication in [START_REF] Dickinson | Keeping in touch: Talking to older people about computers and communication[END_REF]). As a result, users who share similar interests from a social perspective may not be able to interact in a computer-mediated social sphere because they adopt different technologies. This is particularly exacerbated by the fact that the latest social media are proprietary services that offer an increasingly rich set of functionalities, and the function of one service does not easily translate -both socially and technically-into the function of another. As an illustration, compare the early and primitive computer-mediated social communication media that is email with the richer social network technology. Protocols associated with the former are rather simple and email communication between any two individuals is now trivial, independent of the mail servers used at both ends. On the other hand, protocols associated with today's social networks involve complex interaction processes, which prevent communication across social networks.
The above issue is no different from the long-standing issue of interoperability in distributed computing systems, which requires mediating (or translating) the protocols run by the interacting parties for them to be able to exchange meaningful messages and coordinate. And, while interoperability in the early days of distributed systems was essentially relying on the definition of standards, the increasing complexity and diversity of networked systems has led to the introduction of various interoperability solutions [START_REF] Issarny | Middleware-layer connector synthesis: Beyond state of the art in middleware interoperability[END_REF]. In particular, today's solutions allow connecting networked systems in a nonintrusive way, i.e., without requiring to modify the systems [START_REF] Spitznagel | A compositional formalization of connector wrappers[END_REF], [START_REF] Mateescu | Adaptation of service protocols using process algebra and on-the-fly reduction techniques[END_REF], [START_REF] Gierds | Reducing adapter synthesis to controller synthesis[END_REF], [START_REF] Bennaceur | Automated synthesis of mediators to support component interoperability[END_REF], [START_REF] Bennaceur | A unifying perspective on protocol mediation: interoperability in the future internet[END_REF]. These solutions typically use intermediary software entities whose name differ in the literature, e.g., mediators [START_REF] Wiederhold | Mediators in the architecture of future information systems[END_REF], wrappers [START_REF] Spitznagel | A compositional formalization of connector wrappers[END_REF], mediating adapters [START_REF] Mateescu | Adaptation of service protocols using process algebra and on-the-fly reduction techniques[END_REF], or binding components [START_REF] Bouloukakis | Integration of Heterogeneous Services and Things into Choreographies[END_REF]. However, the key role of this software entity, whatever its name, is always the same: it translates the data model and interaction processes of one system into the ones of the other system the former needs to interact with, assuming of course that the systems are functionally compatible. In the following, we use the term binding component to refer to the software entity realizing the necessary translation. The binding component is then either implemented in full by the developer, or synthesized -possibly partially -by a dedicated software tool (e.g., [START_REF] Bennaceur | Automated synthesis of mediators to support component interoperability[END_REF]).
The development of binding components depends on the architecture of the overall interoperability system, since the components need to be deployed in the network and connected to the systems for which they realize the necessary data and process translation. A successful architectural paradigm for the interoperability system is the (Enterprise) Service Bus. A service bus introduces a reference communication protocol and data model to translate to and from, as well as a set of commodity services such as service repository, enforcing quality of service and service composition. Conceptually, the advantage of the service bus that is well illustrated by the analogy with the hardware bus from which it derives, is that it acts as a pivot communication protocol to which networked systems may plug into. Then, still from a conceptual perspective, a networked system becomes interoperable "simply" by implementing a binding component that translates the system's protocol to that of the bus. It is important to highlight that the service bus is a solution to middleware-protocol interoperability; it does not deal with application-layer interoperability [START_REF] Issarny | Middleware-layer connector synthesis: Beyond state of the art in middleware interoperability[END_REF], although nothing prevents the introduction of higher-level domain-specific buses.
This paper is specifically about that topic: introducing a "social communication bus" to allow interoperability across computer-mediated social communication paradigms. Our work is motivated by our research effort within the AppCivist project (http://www.appcivist.org/) [START_REF] Pathak | AppCivist -A Service-oriented Software Platform for Socially Sustainable Activism[END_REF]. AppCivist provides a software platform for participatory democracy that leverages the reach of the Internet and the powers of computation to enhance the experience and efficacy of civic participation. Its first instance, AppCivist-PB, targets participatory budgeting, an exemplary process of participatory democracy that lets citizens prepare and select projects to be implemented with public funds by their cities [START_REF] Holston | Engineering software assemblies for participatory democracy: The participatory budgeting use case[END_REF]. For city-wide engagement, AppCivist-PB must enable citizens to participate with the Internet-based communication services they are the most comfortable with. In current practice, for example, seniors and teenagers (or youngsters under 18) are often the most common participants of this process [START_REF] Hagelskamp | Public Spending, by the People. Participatory Budgeting in the United States and Canada in 2014 -15[END_REF], and their uses of technology can be fairly different. While seniors prefer traditional means of communication like phone calls and emails [START_REF] Dickinson | Keeping in touch: Talking to older people about computers and communication[END_REF], a typical teenager will send and receive 30 texts per day [START_REF] Lenhart | Teens, social media & technology overview 2015[END_REF]. The need for interoperability in this context is paramount since the idea is to include people in the participatory processes without leaving anyone behind. This has led us to revisit the service bus paradigm, for the sake of social communication across communities, to gather together the many communities of our cities.
The contributions of our paper are as follows:
• Social communication paradigms: Section II surveys the various forms of computer-mediated social communication supported by today's software services and tools. We then make an analogy with the communication paradigms implemented by middleware technologies, thereby highlighting that approaches to middleware interoperability conveniently apply to computer-mediated social communication interoperability. • Social Communication Bus architecture: Section III then revisits the service bus paradigm for the domain-specific context of computer-mediated social interactions. We specifically build on the XSB bus [START_REF] Georgantas | Serviceoriented distributed applications in the future internet: The case for interaction paradigm interoperability[END_REF], [START_REF] Kattepur | Analysis of timing constraints in heterogeneous middleware interactions[END_REF] that supports interoperability across interaction paradigms as opposed to interoperability across heterogeneous middleware protocols implementing the same paradigm. The proposed bus architecture features the traditional concepts of bus protocols and binding components, but those are customized for the sake of social interactions whose couplings differ along the social and presence dimensions. • Social Communication Bus instance for participatory democracy: Section IV refines our bus architecture, introducing the Social-MQ implementation that leverages state of the art technologies. Section V then introduces how Social-MQ is used by the AppCivist-PB platform to enable reaching out a larger community of citizens in participatory budgeting campaigns. Finally, Section VI summarizes our contributions and introduces some perspectives for future work.
II. COMPUTER-MEDIATED SOCIAL COMMUNICATION
A. Computer-mediated Social Communication: An Overview Social communication technologies change the way humans interact with each other by influencing identities, relationships, and communities [START_REF] Thurlow | Computer mediated communication: Social interaction and the internet[END_REF]. Any human communication achieved through, or with the help of, computer technology is called computer-mediated communication [START_REF] Thurlow | Computer mediated communication: Social interaction and the internet[END_REF], or as we call it in our work, computer-mediated social communication to highlight the fact that we are dealing with human communication. In this paper, we more specifically focus on text-and voicebased social communication technologies. These social communication technologies are usually conceived as Internetbased services -which we call communication services -that allow individuals to communicate between them [START_REF] Richter | Functions of social networking services[END_REF]. Popular communication services include: Skype, which focuses on video chat and voice call services; Facebook Messenger, Telegram, WhatsApp, Slack, and Google Hangouts, which focus on instant messaging services; Twitter, which enables users to send and read short (140-character) messages; email, which lets users exchange messages, and SMS, which provides text messaging services for mobile telephony and also for the Web.
Depending on the communication service, users can send messages directly to each other or to a group of users; for example, a user can send an email directly to another user or to a mailing list where several users participate. In the former case, the users communicating via direct messaging "know each other". It does not mean that they have to know each other personally, it means they have to know the address indicating where and how to send messages directly to each other. In the latter example, communication is achieved via an intermediary: the mailing list address. In this case, senders do not specify explicitly the receivers of their messages; instead, they only have to know the address of the intermediary to which they can send messages. The intermediary then sends messages to the relevant receivers, or receivers ask the intermediary for messages they are interested in. Another example of an intermediary is Twitter, where users can send messages to a topic. Interested users can subscribe to that topic and retrieve published messages.
Overall, existing communication services may be classified according to the types of interactions they support [START_REF] Walther | Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction[END_REF]: interpersonal non-mediated communication, where individuals interact directly; impersonal group communication, where people interact within a group; and impersonal notifications, where people interact in relation with some events to be notified. Our goal is then to leverage the technical interoperability solutions introduced for distributed systems for the specific domain of computer-mediated social communication so as to enable users to interact across communication services.
B. Computer-mediated Social Communication: A Technical Perspective
• Space coupling: A tight (resp. loose) space coupling means that the sender and target receiver(s) need (resp. do not need) to know about each other to communicate.
• Time coupling: A tight time coupling indicates that the sender and target receiver(s) have to be online at the same time to communicate, whereas a loose time coupling allows the receiver to be offline at the time of the emission; the receiver will receive messages when it is online again.
• Synchronization coupling: Under a tight synchronization coupling, the sender is blocked while sending a message and the receiver(s) is (are) blocked while waiting for a message. Under a loose synchronization coupling, the sender is not blocked, and the target receiver(s) can get messages asynchronously while performing some concurrent activity.
Following the above, we may define the coupling dimensions associated with computer-mediated social communication as:
• Social coupling: It is analogous to space coupling and refers to whether or not participants need to know each other to communicate.
• Presence coupling: It is analogous to the time coupling concept and refers to whether participants need to interact simultaneously.
• Synchronization coupling: Since we are addressing human interacting components, the synchronization coupling is always loose, since humans can do other activities after sending a message or while waiting for one. Hence, we do not consider this specific coupling in the remainder.
We may then characterize the types of interactions of communication services in terms of the above coupling (see Table I for a summary and Table II for the related classification of popular services):
• Interpersonal non-mediated communication: Communicating parties need to know each other. Thus, the social coupling is tight. However, the presence coupling may be either tight or loose. Communication services enforcing a tight presence coupling relate to video/voice calls and chat systems. On the other hand, base services like email, SMS, and instant messaging adopt a loose presence coupling.
• Impersonal group communication: The social coupling is loose because any participant may communicate with a group without the need of knowing its members. A space serves as an area that holds all the information making up the communication. To participate, users modify the information in the space. The presence coupling may be either loose or tight. As an example of tight presence coupling, shared meeting notes may be deleted once a meeting is over, so that newcomers cannot read them. Similarly, newcomers in a Q&A session cannot hear previous discussions. In a different situation, a service may implement loose presence coupling so that a participant (group member) can write a post-it note and leave it available to anybody entering the meeting room. In addition, groups can be either closed or open [START_REF] Liang | Process groups and group communications: Classifications and requirements[END_REF]. In a closed group, only members can send messages. In an open group, non-members may also send messages to the group. Video/voice conferences and real-time multi-user chat systems are examples of group communication with a tight presence coupling. Message forums, file sharing, and multi-user messaging systems are examples of group communication with a loose presence coupling.
• Impersonal notifications: The social and presence coupling are loose. Participants do not need to know each other to interact. They communicate on the basis of shared interests (aka hashtags or topics). Twitter and Instagram are popular examples of such services.
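A compact way to see what an interoperability layer has to bridge is to encode the two coupling dimensions as data. The TypeScript sketch below is purely illustrative (the type and function names are ours, not taken from any existing code base) and derives, for a sender-side and a receiver-side service, which mediations would be needed.

type Coupling = "tight" | "loose";

interface ServiceInteraction {
  name: string;               // e.g. "email", "video call", "topic notifications"
  socialCoupling: Coupling;   // must participants know each other?
  presenceCoupling: Coupling; // must both sides be online at the same time?
}

// List the mediations an intermediary would have to provide so that a message
// can flow from one service to the other.
function requiredMediations(from: ServiceInteraction, to: ServiceInteraction): string[] {
  const mediations: string[] = [];
  if (from.socialCoupling !== to.socialCoupling) {
    mediations.push("social coupling mediation (e.g. map a direct address to a shared topic)");
  }
  if (to.presenceCoupling === "tight") {
    mediations.push("presence coupling mediation (store-and-forward while the receiver is offline)");
  }
  return mediations;
}

const email: ServiceInteraction = { name: "email", socialCoupling: "tight", presenceCoupling: "loose" };
const topic: ServiceInteraction = { name: "topic notifications", socialCoupling: "loose", presenceCoupling: "loose" };
const chat: ServiceInteraction = { name: "real-time chat", socialCoupling: "tight", presenceCoupling: "tight" };

console.log(requiredMediations(topic, chat));  // both mediations are needed
console.log(requiredMediations(email, topic)); // only social coupling mediation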
C. Communication Service Interoperability
In general, users prefer a type of social interaction over the others [START_REF] Dickinson | Keeping in touch: Talking to older people about computers and communication[END_REF], [START_REF] Lenhart | Teens, social media & technology overview 2015[END_REF], [START_REF] Joinson | Self-esteem, interpersonal risk, and preference for e-mail to face-to-face communication[END_REF]. This preference translates into favoring certain communication services. For example, someone may want to never interact directly and thus uses email whenever possible. Further, the adoption of specific communication service instances for social interactions increasingly limits the population of users with which an individual can communicate. Our work focuses on the study of interoperability across communication services, including services promoting different types of social interaction. This is illustrated in Fig. 1. We then need to study the extent to which different types of social interaction may be reconciled and when it is appropriate to synthesize the corresponding communication protocol adaptation. To do so, we build upon the eXtensible Service Bus (XSB) [START_REF] Georgantas | Serviceoriented distributed applications in the future internet: The case for interaction paradigm interoperability[END_REF], [START_REF] Kattepur | Analysis of timing constraints in heterogeneous middleware interactions[END_REF], which is an approach to reconcile the middleware protocols run by networked systems across the various coupling dimensions (i.e., space, time, synchronization). This leads us to introduce the Social Communication Bus paradigm.
III. THE SOCIAL COMMUNICATION BUS
A. The eXtensible Service Bus
The eXtensible Service Bus (XSB) [START_REF] Georgantas | Serviceoriented distributed applications in the future internet: The case for interaction paradigm interoperability[END_REF], [START_REF] Kattepur | Analysis of timing constraints in heterogeneous middleware interactions[END_REF] defines a connector that abstracts and unifies three interaction paradigms found in distributed computing systems: client-server, a common paradigm for Web services where a client communicates directly with a server; publish-subscribe, a paradigm for content broadcasting; and tuple-space, a paradigm for sharing data with multiple users who can read and modify that data. XSB is implemented as a common bus protocol that enables interoperability among services employing heterogeneous interactions following one of these computing paradigms. It also provides an API based on the post and get primitives to abstract the native primitives of the client-server (send and receive), publish-subscribe (publish and retrieve), and tuple-space interactions (out, take, and read).
In this work, we present the Social Communication Bus as a higher-level abstraction: XSB abstracts interactions of distributed computing interaction paradigms, while the Social Communication Bus abstracts interaction at the human level, that is, the computer-mediated social communication. Nonetheless, the Social Communication Bus relies on the XSB architectural paradigm. Most notably, the proposed Social Communication Bus inherits from XSB the approach to cross-paradigm interoperability that allows overcoming the coupling heterogeneity of protocols.
Fig. 2: The Social Communication Bus architecture
B. Social Communication Bus Architecture
Figure 2 introduces the architecture of the Social Communication Bus. The bus revisits the integration paradigm of the conventional Enterprise Service Bus [START_REF] Chappell | Enterprise service bus[END_REF] to enable interoperability across the computer-mediated social communication paradigms presented in Section II and concrete communication services implementing them.
In more detail and as depicted, the Social Communication Bus implements a common intermediate bus protocol that facilitates the interconnection of heterogeneous communication services: plugging-in a new communication service only requires to implement a conversion from the protocol of the service to that of the bus, thus considerably reducing the development effort. This conversion is realized by a dedicated component, called Binding Component (BC), which connects the communication service to the Social Communication Bus. The binding that is implemented then overcomes communication heterogeneity at both abstract (i.e., it solves coupling mismatches) and concrete (i.e., it solves data and protocol message mismatches) levels. The BCs perform the bridging between a communication service and the Social Communication Bus by relying on the SC connectors. A SC connector provides access to the operations of a particular communication service and to the operations of the Social Communication Bus. Communi-cation services can communicate in a loosely coupled fashion via the Social Communication Bus.
The Social Communication Bus architecture not only reduces the development effort but also allows solving the interoperability issues presented in Section II-C as follows:
• Social coupling mediation: participants whose services assume different social couplings can still interact, since the bus decouples senders from receivers and can relay a directly addressed message to a group or topic, and vice versa.
• Presence coupling mediation: the bus can retain messages addressed to participants whose service requires simultaneous presence and deliver them once the recipient is reachable again (Section IV-B details how our implementation realizes both mediations).
C. The API for Social Communication (SC API)
To reconcile the different interfaces of communication services and connect them to the Social Communication Bus, we introduce a generic abstraction. The proposed abstraction comes in the form of a Social Communication Application Programming Interface (SC API). The SC API abstracts communication operations executed by the human user of a communication service, such as, e.g., sending or receiving a message. We also assume that these operations are exported by the communication service in a public (concrete) API, native to the specific communication service. This enables deploying interoperability artifacts (i.e., BCs) between heterogeneous communication services that leverage these APIs.
The SC API expresses basic common end-to-end interaction semantics shared by the different communication services, while it abstracts potentially heterogeneous semantics that is proper to each service. The SC API relies on the two following basic primitives:
• a post() primitive employed by a communication service to send a message; • a get() primitive employed by a communication service to receive a message. To describe a communication service according to the SC API, we propose a generic interface description (SC-IDL). This interface describes a communication service's operations, including the name and type of their parameters. The description is complemented with the following communication service information: name, its name; address, the address of the endpoint of its public API; protocol, its middleware protocol (e.g., HTTP, SMTP, AMQP, MQTT); and social_properties, which specifies if the communication service handles messages when its users are offline.
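As an illustration of the two primitives and of the SC-IDL fields just listed, the following TypeScript declarations give one possible shape for them; the names are ours and only approximate the actual Social-MQ interfaces.

interface SocialCommunicationAPI {
  // post(): send a message towards a scope (a user address, a group, or a topic).
  post(scope: string, message: Record<string, unknown>): Promise<void>;
  // get(): register a handler that receives messages addressed to a scope.
  get(scope: string, handler: (message: Record<string, unknown>) => void): void;
}

type MiddlewareProtocol = "AMQP" | "HTTP" | "MQTT" | "SMTP";

interface ScIdlParameter {
  name: string;
  type: string;
  optional: string;
}

interface ScIdlOperation {
  interaction: "one-way";
  type: "data";
  scope?: string;
  post_message?: ScIdlParameter[];
  get_message?: ScIdlParameter[];
}

// SC-IDL document: name, endpoint address, middleware protocol, operations,
// and the social property saying whether offline users can still receive messages.
interface ScIdl {
  name: string;
  address: string;
  protocol: MiddlewareProtocol;
  operations: Record<string, ScIdlOperation>;
  properties: { offline: boolean };
}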
D. Higher-order Binding Components
BCs are in charge of the underlying middleware protocol conversion and application data adaptation between a communication service and the Social Communication Bus. As presented in XSB, BCs do not alter the behavior and properties of the communication services associated with them, they do not change the end-to-end communication semantics; however, since these communication services can be heterogeneous and can belong to different providers, it may be desirable to improve their end-to-end semantics to satisfy additional user requirements and to mediate social and presence coupling incompatibility. To this end, we introduce the higher-order BCs, which are BCs capable of altering the perceived behavior of communication services. We propose the two following higher-order BCs capabilities:
• Handling offline receiver: this case is related to the mediation of presence coupling in computer-based social communication, and it occurs when the receiver is not online and he is using a communication service that does not support offline message reception. Even though the server hosting this communication service is up and running, it discards received messages if the recipient is offline. A higher-order BC will send undelivered messages when the receiver logs back into the system. We do not enforce this capability; instead, we let users decide if they want to accept offline messages or not. • Handling unavailable receiver: this case is similar to the previous one but from a computing perspective, related to fault tolerance; for example, the server providing the receiver service is down, or there is no connectivity between the BC and the receiver. The BC will send undelivered messages once the receiver service is available again. In contrast to the previous case, this capability is provided by higher-order BCs by default.
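Both capabilities amount to a store-and-forward policy wrapped around the receiver-side connector. The sketch below is our own minimal reading of it; the ReceiverConnector interface is hypothetical and stands in for the SC connector of the target service.

interface ReceiverConnector {
  isReachable(): Promise<boolean>;          // service up and, if required, user online
  deliver(message: string): Promise<void>;
}

class HigherOrderBC {
  private pending: string[] = [];

  constructor(private readonly receiver: ReceiverConnector) {}

  // Forward a message coming from the bus; buffer it if it cannot be delivered now.
  async forward(message: string): Promise<void> {
    if (await this.receiver.isReachable()) {
      await this.receiver.deliver(message);
    } else {
      this.pending.push(message); // keep it instead of letting the target discard it
    }
  }

  // Invoked when the receiver logs back in or its server becomes available again.
  async flush(): Promise<void> {
    while (this.pending.length > 0 && (await this.receiver.isReachable())) {
      await this.receiver.deliver(this.pending.shift()!);
    }
  }
}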
IV. IMPLEMENTATION
A. Social-MQ: An AMQP-based Implementation of the Social Communication Bus
Social-MQ leverages the AMQP protocol as the Social Communication Bus. AMQP has several open source implementations, and its direct and publish/subscribe models serve well the purpose of social interactions: interpersonal non-mediated communication, impersonal group communication, and impersonal notifications. Additionally, AMQP has proven to have good reliability and performance in real-world critical applications in a variety of domains [START_REF] Appel | Towards Benchmarking of AMQP[END_REF]. We use RabbitMQ [START_REF]Rabbitmq[END_REF] as the AMQP implementation.
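To give a concrete feel for what using AMQP as the bus means, the sketch below uses the open-source amqplib client for Node.js; the connection URL and exchange name are placeholders of ours, not values prescribed by Social-MQ.

import * as amqp from "amqplib";

// One BC publishes to a named exchange; another binds an anonymous queue to the
// same exchange and consumes whatever the broker routes to it.
async function demo(): Promise<void> {
  const connection = await amqp.connect("amqp://localhost"); // assumed broker endpoint
  const channel = await connection.createChannel();
  const exchange = "social-mq.demo";                         // illustrative exchange name

  await channel.assertExchange(exchange, "fanout", { durable: false });

  // Subscriber side.
  const { queue } = await channel.assertQueue("", { exclusive: true });
  await channel.bindQueue(queue, exchange, "");
  await channel.consume(queue, (msg) => {
    if (msg) console.log("received:", msg.content.toString());
  }, { noAck: true });

  // Publisher side.
  channel.publish(exchange, "", Buffer.from(JSON.stringify({ text: "hello" })));
}

demo().catch(console.error);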
The bus comes along with a BC generator (see Figure 3). The generator takes as input the description of a communication service (SC-IDL), chooses the corresponding SC connector from the Implementation Pool, and produces a Concrete BC connecting the communication service with Social-MQ. The BC generator is implemented on the Node.js [START_REF]Node.js[END_REF] platform, which is based on the Chrome JavaScript virtual machine. Node.js implements the reactor pattern that allows building highly concurrent applications. Currently, BCs are generated for the Node.js platform only. We intend to support other languages or platforms in future versions of the bus. Social-MQ currently supports four middleware protocols: AMQP, HTTP, MQTT, and SMTP.
Fig. 3: BC Generator
Fig. 4: Social-MQ architecture
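The paper does not expose the generator's internals, so the outline below is only a plausible sketch of its selection step: given an SC-IDL description, pick the connector template whose protocol matches and emit a concrete BC stub. All names are illustrative.

type Protocol = "AMQP" | "HTTP" | "MQTT" | "SMTP";

interface ConnectorTemplate {
  protocol: Protocol;
  // Emits source for the service-facing side of the BC (stubbed here).
  render(serviceName: string, address: string): string;
}

const implementationPool: ConnectorTemplate[] = [
  { protocol: "HTTP", render: (n, a) => `// HTTP connector for ${n} at ${a}` },
  { protocol: "SMTP", render: (n, a) => `// SMTP connector for ${n} at ${a}` },
  { protocol: "MQTT", render: (n, a) => `// MQTT connector for ${n} at ${a}` },
  { protocol: "AMQP", render: (n, a) => `// AMQP connector for ${n} at ${a}` },
];

function generateBC(idl: { name: string; address: string; protocol: Protocol }): string {
  const template = implementationPool.find((t) => t.protocol === idl.protocol);
  if (!template) throw new Error(`unsupported protocol: ${idl.protocol}`);
  // A real generator would also wire the bus-facing AMQP side towards Social-MQ here.
  return template.render(idl.name, idl.address);
}

console.log(generateBC({ name: "Mailing List", address: "mailinglist_server", protocol: "SMTP" }));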
Figure 4 illustrates the connection of communication services to Social-MQ. All the associated BCs are AMQP publishers and/or subscribers so that they can communicate with Social-MQ. In more detail:
• BC 1 exposes an HTTP endpoint so that the HTTP communication service can send messages to it, and it can act as HTTP client to post messages to the communication service. • BC 2 acts as an SMTP server and client to communicate with Email; • BC 3 has MQTT publisher and subscriber capabilities to communicate with the MQTT communication service. The above BCs are further refined according to the actual application data used by AppCivist, Email, and Facebook Messenger.
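To make the role of BC 1 more tangible, here is a minimal sketch of its service-facing side, built with Node's standard http module; the bus-facing side is abstracted as an injected callback, and the port number is arbitrary. This is our own reading of the figure, not the project's code.

import * as http from "http";

// Accept messages POSTed by the HTTP communication service and hand them over
// to the bus-facing side of the BC (the injected publishToBus callback).
function startHttpFacade(port: number, publishToBus: (payload: string) => void): http.Server {
  const server = http.createServer((req, res) => {
    if (req.method !== "POST") {
      res.statusCode = 405;
      res.end();
      return;
    }
    let body = "";
    req.on("data", (chunk) => { body += chunk; });
    req.on("end", () => {
      publishToBus(body);   // e.g. republish on a Social-MQ exchange
      res.statusCode = 202; // accepted: delivery through the bus is asynchronous
      res.end();
    });
  });
  return server.listen(port);
}

startHttpFacade(8080, (payload) => console.log("to bus:", payload));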
The interested reader may find a set of BCs generated by Social-MQ at https://github.com/rafaelangarita/bc-examples. These BCs can be executed and tested easily by following the provided instructions.
B. Social-MQ Implementation of Social Interaction Mediation
The loosely coupled interaction model between communication services provided by Social-MQ allows the mediation between the various types of social interactions supported by the connected communication services, as follows:
• Social coupling mediation: In the publish/subscribe model implemented by Social-MQ (Figure 5 (a)), senders publish a message to an address in Social-MQ, instead of sending it directly to a receiver. Receivers can subscribe to this address and be notified when new messages are published. This way, all social communication paradigms can interact using the publish/subscribe model.
• Presence coupling mediation: When a communication service cannot receive messages because it is not available or its user is offline, messages intended for it are sent to a database to be queried and delivered when the communication service can receive messages again (Figure 5 (b)).
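Tying this back to the offline property of the SC-IDL, the decision made on the receiving side can be sketched as follows (illustrative TypeScript of ours; the in-memory array stands in for the database mentioned above).

interface TargetService {
  name: string;
  handlesOffline: boolean;                   // the SC-IDL "offline" property
  isUserOnline(user: string): boolean;
  deliver(user: string, message: string): void;
}

const undelivered: { service: string; user: string; message: string }[] = [];

function mediatePresence(service: TargetService, user: string, message: string): void {
  if (service.handlesOffline || service.isUserOnline(user)) {
    service.deliver(user, message);
  } else {
    // Persist and replay once the communication service can receive messages again.
    undelivered.push({ service: service.name, user, message });
  }
}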
V. THE APPCIVIST USE CASE
A. The AppCivist Platform for Participatory Democracy
To illustrate our approach, we elaborate on the use of Social-MQ to enable the AppCivist application for participatory democracy [START_REF] Pathak | AppCivist -A Service-oriented Software Platform for Socially Sustainable Activism[END_REF], [START_REF] Holston | Engineering software assemblies for participatory democracy: The participatory budgeting use case[END_REF] to interoperate with various communication services. This way, the citizens participating to AppCivist actions may keep interacting using the social media they prefer.
AppCivist allows activist users to compose their own applications, called Assemblies, using relevant Web-based components enabling democratic assembly and collective action. AppCivist provides a modular platform of services that range from proposal making and voting to communication and notifications. Some of these modules are offered as services implemented within the platform itself (e.g., proposal making), but for others, it relies on existing services. One of such cases is that of communication and notifications. Participatory processes often rely on a multitude of diverse users, who not always coincide in their technological choices. For instance, participatory budgeting processes involve people from diverse backgrounds and of all ages: from adolescents (or youngsters under 18), to seniors [START_REF] Hagelskamp | Public Spending, by the People. Participatory Budgeting in the United States and Canada in 2014 -15[END_REF]. Naturally, their technology adoption can be fairly different. While seniors favor traditional means of communication like phone calls and emails [START_REF] Dickinson | Keeping in touch: Talking to older people about computers and communication[END_REF], a typical teenager will send and receive 30 texts per day [START_REF] Lenhart | Teens, social media & technology overview 2015[END_REF]. The need for interoperability in this context is outstanding, and the Social Communication Bus is a perfect fit, with its ability to bridge communication services that power computerbased social communication. In the following, we discuss three communication scenarios: (i), impersonal notifications interconnected with impersonal group communication; (ii), interpersonal non-mediated communication interconnected with impersonal group communication; and (iii), interpersonal nonmediated communication interconnected with impersonal notifications. The last scenario also illustrates the presence coupling mediation feature of Social-MQ.
B. Impersonal Notifications Interconnected with Impersonal Group Communication
In this scenario, users of AppCivist interact via the impersonal notification paradigm by using a notification system implemented using AMQP as described in Listing 1. This notification system sends messages to concerned or interested users when different events occur in AppCivist; for example, when a user posts a new forum message. This scenario is illustrated in Figure 6 (a). AppCivist is connected to Social-MQ via BC 1 ; however, there is no need of protocol mediation, since both AppCivist and Social-MQ use AMQP. Mailing List is another system which exists independently of AppCivist. It is a traditional mailing list in which users communicate with each other using the impersonal group communication paradigm by sending emails to the group email address. Mailing List is connected to Social-MQ via BC 2 , and it is described in Listing 2. It accepts receiving messages whether or not receivers are online or offline (properties.offline.handling = true, Listing 2). It is due to the loose presence coupling nature of email communication. Now, suppose users in Mailing List want to be notified when a user posts a new forum message in AppCivist. Then, since AppCivist is an AMQP-based notification system, BC 1 can act as AMQP subscriber, receive notifications of new forum posts, and publish them in Social-MQ. In the same way, BC 2 acts as AMQP subscriber and receives notifications of new forum posts; however, this time BC 2 receives the notifications from Social-MQ. Finally, BC 2 sends an email to Mailing List using the SC Connector SMTP.
Listing 1: AppCivist AMQP SC-IDL { "name":"AppCivist" "address":"appcivist.littlemacondo.com:5672", "protocol":"AMQP", "operations":[ "notify":{ "interaction":"one-way", "type":"data", "scope":"assembly_id.forum.post", "post_message":[ {"name":"notification", "type":"text", "optional":"false"}] } ], "properties":[ {"offline":"true"} ] } Listing 2: Mailing List SC-IDL { "name":"Mailing List", "address":"mailinglist_server", "protocol":"SMTP", "operations":[ "receive_email":{ "interaction":"one-way", "type":"data", "scope":"mailinglist_address", "get_message":[ {"name":"subject", "type":"emailSubject", "optional":"true"}, {"name":"message", "type":"messageBody", "optional":"true"}, {"name":"attachment", "type":"file", "optional":"true"}] }, "send_email":{ //same as receive_email } ], "properties":[ {"offline":"true"} ] }
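As a sketch of the bus-to-email direction of BC 2, a forum-post notification taken from Social-MQ can be mapped onto the send_email operation of the Mailing List SC-IDL above. The code below uses the nodemailer library for the SMTP side; host names and addresses are placeholders, not the project's actual configuration.

import * as nodemailer from "nodemailer";

const transporter = nodemailer.createTransport({
  host: "mailinglist_server", // the "address" field of the Mailing List SC-IDL
  port: 25,
  secure: false,
});

// Handler invoked by the AMQP subscriber side of BC 2 for each forum-post notification.
async function onForumPostNotification(notification: string): Promise<void> {
  await transporter.sendMail({
    from: "appcivist-bridge@example.org",   // hypothetical bridge identity
    to: "assembly@lists.example.org",       // the mailing-list address ("scope")
    subject: "New forum post on AppCivist",
    text: notification,                     // maps the "notification" parameter to the email body
  });
}

onForumPostNotification("A new post was published in your assembly forum.").catch(console.error);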
C. Interpersonal Non-mediated Communication Interconnected with Impersonal Group Communication
In the scenario illustrated in Figure 6 (b), there is an AppCivist communication service called Weekly Notifier. It queries the AppCivist database once a week, extracts the messages posted in AppCivist forums during the last week, builds a message with them, and sends the message to concerned users using interpersonal non-mediated communication via HTTP. That is, it is an HTTP client, so it sends the message to an HTTP server. Now, suppose we want Weekly Notifier to communicate with Mailing List. BC 1 exposes an HTTP endpoint to which Weekly Notifier can post HTTP messages. Differently from the previous case, we need to modify the original Weekly Notifier communication service since it needs to send messages to the endpoint exposed by BC 1 and it needs to specify Mailing List as a recipient. The SC-IDL description of this HTTP-based notifier is the following:
{ "name":"AppCivist", "address":"", "protocol":"HTTP", "operations":[ "notify":{ "interaction":"one-way", "type":"data", "post_message":[ {"name":"notification", "type":"text", "optional":"false"}] } ], "properties":[ {"offline":"false"} ] }
D. Interpersonal Non-mediated Communication Interconnected with Impersonal Notifications
After having introduced the previous scenario, we can pose the following question: what if messages sent by Weekly Notifier must be sent to multiple receivers? Should Weekly Notifier know them all and send the message individually to each one of them? Independently of the communication services registered in Social-MQ and their social communication paradigms, they can all interact in a fully decoupled fashion in terms of social coupling.
Social-MQ takes advantage of the exchanges concept of AMQP, which are entities where messages can be sent. Then, they can route messages to receivers, or interested receivers can subscribe to them. In the scenario illustrated in Figure 6 (c), Weekly Notifier sends HTTP messages directed to the Social-MQ exchange named AppCivist weekly notification. Interested receivers can then subscribe to AppCivist weekly notification to receive messages from Weekly Notifier. Finally, Mailing List and the instant messaging communication service, IM (Listing 4), can subscribe to AppCivist weekly notification via their corresponding BCs.
Listing 4: IM SC-IDL { "name":"IM", "address":"mqtt.example", "protocol":"MQTT", "operations":[ "receive_message":{ "interaction":"one-way", "type":"data", "scope":"receiver_id", "get_message":[ {"name":"message", "type":"text", "optional":"false"}] }, "receive_attachement":{ "interaction":"one-way", "type":"data", "scope":"receiver_id", "get_message":[ {"name":"message", "type":"file", "optional":"false"}] }, "send_message":{ //same as receive_message }, "send_attachement":{ //same as receive_attachement } ], "properties":[ {"offline":"false"} ] }
E. Assessment
In this section, we have studied three case studies illustrating how Social-MQ can solve the problem of computer-mediated social communication interoperability. These case studies are implemented for the AppCivist application for participatory democracy. As a conclusion, we argue that: (i), Social-MQ can be easily integrated into existing or new systems since it is non-intrusive and most of its processes are automated; (ii), regarding performance and scalability, Social-MQ is implemented on top of technologies that have proven to have high performance and scalability in real-world critical applications; and (iii), Social-MQ allows AppCivist users to continue using the communication service they prefer, enabling to reach a larger community of citizens, and promoting citizen participation.
VI. CONCLUSION AND FUTURE WORK
We have presented an approach to enable social communication interoperability in heterogeneous environments. Our main objective is to let users use their favorite communication service. More specifically, the main contributions of this paper are: a classification of the social communication paradigms in the context of computing; an Enterprise Service Bus-based architecture to deal with the social communication interoperability; and a concrete implementation of the Social Communication Bus studying real-world scenarios in the context of participatory democracy.
For our future work, we plan to present the formalization of our approach and to incorporate popular communication services such as Facebook Messenger, Twitter, and Slack. The interoperability with these kinds of services poses additional challenges, since the systems they belong to can be closed; for example, Facebook Messenger allows sending and receiving messages only to and from participants that are already registered in the Facebook platform. Another key issue to study is the security & privacy aspect of the Social Communication Bus to ensure that privacy needs of users communicating across heterogeneous social media are met. Last but not least, our studies will report the real-world experiences of AppCivist users regarding the Social Communication Bus.
Fig. 1: Social communication interoperability
Fig. 5: (a) Space coupling mediation; (b) Presence coupling mediation
Fig. 6: Use Cases: (a) impersonal notifications interconnected with impersonal group communication; (b) interpersonal non-mediated communication interconnected with impersonal group communication; (c) interpersonal non-mediated communication interconnected with impersonal notifications
TABLE I: Properties of computer-mediated social interactions
TABLE II: Classification of popular communication services
ACKNOWLEDGMENTS
This work is partially supported by the Inria Project Lab CityLab (citylab.inria.fr), the Inria@SiliconValley program (project.inria.fr/siliconvalley) and the Social Apps Lab (citrisuc.org/initiatives/social-apps-lab) at CITRIS at UC Berkeley. The authors also acknowledge the support of the CivicBudget activity of EIT Digital (www.eitdigital.eu). | 42,066 | [
"9865",
"963734"
] | [
"454659",
"454659",
"454659",
"82005",
"82005",
"454659"
] |
01485231 | en | [
"chim"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01485231/file/BougharrafRJPCa2017_Postprint.pdf | B Lakhrissi
Kabouchi
H Bougharraf
email: hafida.bougharraf@gmail.com
R Benallal
T Sahdane
D Mondieig
Ph Negrier
S Massip
M Elfaydy
B Kabouchi
Study of 5-Azidomethyl-8-hydroxyquinoline Structure by X-ray Diffraction and HF-DFT Computational Methods 1
Keywords: 5-azidomethyl-8-hydroxyquinoline, X-ray diffraction, single crystal structure, hydrogen bonding, HF-DFT, HOMO-LUMO
INTRODUCTION
8-Hydroxyquinoline molecule is a widely studied ligand. It is frequently used due to its biological effects ascribed to complexation of specific metal ions, such as copper(II) and zinc(II) [1,2]. This chelator properties determine its antibacterial action [3][4][5]. Aluminum(III) 8-hydroxyquinolinate has great application potential in the development of organic light-emitting diodes (OLEDs) and electroluminescent displays [6][7][8][9][10]. One of the serious problems of this technology is the failure of these devices at elevated temperatures. Also the use of 8-hydroxyquinoline in liquid-liquid extraction is limited because of its high solubility in acidic and alkaline aqueous solutions. In order to obtain the materials with improved properties for these specific applications, some 8-hydroxyquinoline deriv-atives have been synthesized. The antitumor and antibacterial properties of these compounds are extensively studied [11][12][13][14][15].
The literature presents X-ray crystal structure analysis of some derivatives of 8-hydroxyquinoline. It was shown, for example, that 8-hydroxyquinoline N-oxide crystallizes in the monoclinic system with space group P2 1 /c, Z = 4, and presents intramolecular H-bonding [16]. Gavin et al. reported the synthesis of 8-hydroxyquinoline derivatives [17,18] and their X-ray crystal structure analysis. 7-Bromoquinolin-8-ol structure was determined as monoclinic with space group C2/c, Z = 8. Its ring system is planar [11]. Recently, azo compounds based on 8-hydroxyquinoline derivatives attract more attention as chelating agents for a large number of metal ions [START_REF] Hunger | Industrial Dyes, Chemistry, Properties, Applications[END_REF][START_REF] La Deda | [END_REF]. Series of heteroarylazo 8-hydroxyquinoline dyes were synthesized and studied in solution to determine the most stable tautomeric form. The X-ray analysis revealed a strong intramolecular H-bond between the hydroxy H and the quinoline N atoms. This result suggests that the synthesized dyes are azo compounds stable in solid state [21].
In the present work, we choose one of the 8hydroxyquinoline derivatives, namely 5-azidomethyl-8-hydroxyquinoline (AHQ) (Scheme 1), also known for its applicability in extraction of some metal ions. It has been inferred from literature that the structural and geometrical data of AHQ molecule have not been reported till date, although several techniques were used in order to understand its behavior in different solvents [22], but many aspects of this behavior remain unknown. Here we report for the first time the structural characterization of AHQ molecule by X-ray diffraction analysis and the results of our calculations using density functional theory (B3LYP) and Hartree-Fock (HF) methods with the 6-311G(d,p) basis set, which are chosen to study the structural, geometric and charge transfer properties of AHQ molecule in the ground state.
EXPERIMENTAL
Synthesis of 5-Azidomethyl-8-hydroxyquinoline
All chemicals were purchased from Aldrich or Acros (France). 5-Azidomethyl-8-hydroxyquinoline was synthesized according to the method described by Himmi et al. [22], by reaction of sodium azide with 5chloromethyl-8-hydroxyquinoline hydrochloride in refluxing acetone for 24 h (Scheme 1).
A suspension of 5-chloromethyl-8-hydroxyquinoline hydrochloride (1 g, 4.33 mmol) in acetone (40 mL) was added dropwise to NaN 3 (1.3 g, 17 mmol) in acetone (10 mL). The mixture was refluxed for 24 h. After cooling, the solvent was evaporated under reduced pressure and the residue was partitioned between CHCl 3 /H 2 O (150 mL, 1 : 1). The organic phase was isolated, washed with water (3 × 20 mL) and dried over anhydrous magnesium sulfate. The solvent was removed by rotary evaporation under reduced pressure to give a crude product which was purified by recrystalization from ethanol to give the pure product as white solid (0.73 g, 85%).
Characterization of 5-Azidomethyl-8-hydroxyquinoline
The structure of the product was confirmed by 1 H and 13 C NMR and IR spectra. Melting points were determined on an automatic IA 9200 digital melting point apparatus in capillary tubes and are uncorrected. 1 H NMR spectra were recorded on a Bruker 300 WB spectrometer at 300 MHz for solutions in DMSO-d 6 . Chemical shifts are given as δ values with reference to tetramethylsilane (TMS) as internal standard. Infrared spectra were recorded from 400 to 4000 cm -1 on a Bruker IFS 66v Fourier transform spectrometer using KBr pellets. Mass spectrum was recorded on THERMO Electron DSQ II.
Mp: 116-118°C; IR (KBr) (cm -1 ): ν 2090 (C-N 3 , stretching); 1 H NMR (300 MHz, DMSO-d 6 ), δ ppm = 7.04-8.90 (m, 4H, quinoline), 4.80 (s, 1H, OH), 2.48 (s, 2H, aromatic-CH 2 -N 3 ), 13
Differential Scanning Calorimetry
To study the thermal behavior and to verify a possible phase transition [23] for the studied product, differential scanning calorimetric (DSC) analysis using ~4 mg samples was performed on Perkin-Elmer DSC-7 apparatus. Samples were hermetically sealed into aluminum pans. The heating rate was 10 K/min.
Crystallographic Data and Structure Analysis
X-ray powder diffraction analysis was performed on an Inel CPS 120 diffractometer. The diffraction lines were collected on a 4096 channel detector over an arc of 120° and centered on the sample. The CuK α1 (λ = 1.5406 Å) radiation was obtained by means of a curved quartz monochromator at a voltage of 40 kV and a current of 25 mA. The powder was put in a Lindemann glass capillary 0.5 mm in diameter, which was rotated to minimize preferential orientations. The experiment providing good signal/noise ratio took approximately 8 h under normal temperature and pressure. The refinement of the structure was performed using the Materials Studio software [24]. For the monocrystal experiment, a colorless single crystal of 0.12 × 0.10 × 0.05 mm size was selected and mounted on the diffractometer Rigaku Ultrahigh instrument with microfocus X-ray rotating anode tube (45 kV, 66 mA, CuK α radiation, λ = 1.54187 Å), The structure was solved by direct methods using SHELXS-97 [START_REF] Sheldrick | SHELXS-97 Program for the Refinement of Crystal Structure[END_REF] program and the Crystal Clear-SM Expert 2.1 software.
Theoretical Calculations
Density functional theory (DFT) calculations were performed to determine the geometrical and structural parameters of AHQ molecule in ground state, because this approach has a greater accuracy in reproducing the experimental values in geometry. It requires less time and offers similar accuracy for middle sized and large systems. Recently it's more used to study chemical and biochemical phenomena [START_REF] Assyry | [END_REF]27]. All calculations were performed with the Gaussian program package [START_REF] Frisch | Gaussian 03, Revision D.01 and D.02[END_REF], using B3LYP and Hartree-Fock (HF) methods with the 6-311G(d,p) basis set. Starting geometries of compound were taken from X-ray refinement data.
RESULTS AND DISCUSSION
Thermal analysis revealed no solid-solid phase transitions (Fig. 1). The melting temperature (mp = 115°C) was in agreement with the value measured in capillary with visual fixation of melting point. The melting heat found by DSC for the compound was ΔH = 155 J/g. X-ray diffraction patterns for AHQ powder at 295 K (Fig. 2) show a good agreement between calculated profile and the experimental result.
The results of refinement for both powder and single crystal techniques converged practically to the same crystallographic structure. Data collection parameters are given in Table 1.
The structure of AHQ molecule and packing view calculated from single crystal diffraction data, are shown in Figs. 3 and4, respectively.
Fig. 3 shows the atom numbering and the anisotropic displacement parameters of the disordered pairs in the ORTEP drawing of the AHQ molecule. Absorption corrections were carried out by the semi-empirical method from equivalents. Averaging of intensities gives R_int = 0.0324 for 1622 independent reflections. A total of 6430 reflections were collected in the 7.34° to 68.12° θ range. The final refinement, with anisotropic atomic displacement parameters for all atoms, converged to R_1 = 0.0485, wR_2 = 0.1312. The unit cell parameters obtained for the single crystal are: a = 12.2879(9) Å, b = 4.8782(3) Å, c = 15.7423(12) Å, β = 100.807(14)°, which indicates that the structure is monoclinic with the space group P2_1/c. The crystal packing of AHQ shows that the molecule is not planar (Fig. 4). The orientation of the azide group is defined by the torsion angles C(5)-C(7)-C(12)-N(13) [80.75(19)°] and C(8)-C(7)-C(12)-N(13) [-96.42(18)°] obtained by X-ray crystallography (Table 3).
It is well known that the hydrogen bonds between the molecule and its environment play an important role in the stabilization of the supramolecular structure formed with the neighboring molecules [START_REF] Kadiri | [END_REF]30]. Fig. 5 and Table 2 show the intra- and intermolecular hydrogen bonds present in the crystal structure of AHQ. Weak intramolecular O-H•••N hydrogen bonding is present between the phenol donor and the adjacent pyridine N-atom acceptor [O11-N1 = 2.7580(17) Å and O11-H11•••N1 = 115.1(16)°] (Fig. 5a). A moderate intermolecular O-H•••N hydrogen bond is also present [O11-N1 = 2.8746(17) Å and O11-H11•••N1 = 130.1(17)°]. The acceptor function of the oxygen atom is employed by two weak intermolecular C-H•••O hydrogen bonds, whose parameters are reported in Table 2. Because of the intramolecular hydrogen bonding, the phenol ring is twisted slightly; the torsion angle N(1)-C(6)-C(10)-O(11) is 1.9(2)°. In addition, all the H-bonds involving neighboring molecules are practically in the same ring plane (Fig. 5b).
The standard geometrical parameters were minimized at DFT (B3LYP) level with 6-311G(d,p) basis set, then re-optimized again at HF level using the same basis set [START_REF] Frisch | Gaussian 03, Revision D.01 and D.02[END_REF] for better description. Initial geometry generated from X-ray refinement data and the optimized structures were confirmed to be minimum energy conformations. The energy and dipole moments for DFT and HF methods are respectively -18501.70 eV and 2.5114 D, -18388.96 eV and 2.2864 D.
The molecular structure of AHQ by optimized DFT (B3LYP) is shown in Fig. 6. The geometry parameters available from experimental data (1), optimized by DFT (B3LYP) (2) and HF (3) of the molecule are presented in Table 3. The calculated and experimental structural parameters for each method were compared.
As seen from Table 3, most of the calculated bond lengths and the bond angles are in good agreement with experimental ones. The highest differences are observed for N(1)-C( 6) bond with a value 0.012 Å for DFT method and N(14)-N( 15) bond with the difference being 0.037 Å for HF method.
For the bond angles, the largest differences occur for the O(11)-C(10)-C(9) bond angle, with deviations of 4.64° for the DFT method and 4.75° for the HF method. When the X-ray structure of AHQ is compared to the optimized one, the most notable discrepancy is observed in the orientation of the azide moiety, which is defined by the torsion angles C(5)-C(7)-C(12)-N(13) [80.75(19)°] and C(7)-C(12)-N(13)-N(14) [47.0(2)°] obtained by X-ray crystallography; these torsion angles have been calculated to be -66.5778° and -62.3307° for DFT and -65.2385° and -62.3058° for HF, respectively. This shows a larger deviation from the experimental values because the theoretical calculations have been performed for an isolated molecule, whereas the experimental data have been recorded in the solid state and are related to molecular packing [31].
Figure 7 shows the patterns of the HOMO and LUMO of 5-azidomethyl-8-hydroxyquinoline molecule calculated at the B3LYP level. Generally this diagram shows the charge distribution around the different types of donors and acceptors bonds presented in the molecule in the ground and first excited states. HOMO as an electron donor represents the ability to donate an electron, while LUMO as an electron acceptor represent the ability to receive an electron 2. Geometry of the intra-and intermolecular hydrogen bonds Symmetry codes: 1: x, y, z; 2: 1 -x, -y, -z; 3: 1 -x, 1/2 + y, 1/2 -z. [32][33][34]. The energy values of LUMO, HOMO and their energy gap reflect the chemical activity of the molecule. In our case, the calculated energy values of HOMO is -6.165424 eV and LUMO is -1.726656 eV in gaseous phase. The energy separation between the HOMO and LUMO is 4.438768 eV, this lower value of HOMO-LUMO energy gap is generally associated with a high chemical reactivity [35,36], explains the eventual charge transfer interaction within the molecule, which is responsible for the bioactive properties of AHQ [37].
Table 2. Geometry of the intra- and intermolecular hydrogen bonds. Symmetry codes: 1: x, y, z; 2: 1 - x, -y, -z; 3: 1 - x, 1/2 + y, 1/2 - z.
D   H   A   D-H (Å)   H•••A (Å)   D•••A (Å)   D-H•••A (deg)
Supplementary Material
Crystallographic data for the structure of 5-azidomethyl-8-hydroxyquinoline have been deposited at the Cambridge Crystallographic Data Centre (CCDC 1029534). This information may be obtained on the web at http://www.ccdc.cam.ac.uk/deposit.
CONCLUSION
In the present work, 5-azidomethyl-8-hydroxyquinoline was synthesized and its chemical structure was confirmed using 1H NMR, 13C NMR and X-ray diffraction. The DSC analysis revealed no solid-solid transition for this product. The unit cell parameters obtained for the single crystal are: a = 12.2879(9) Å, b = 4.8782(3) Å, c = 15.7423(12) Å, β = 100.807(14)°, which indicates that the structure is monoclinic, P21/c, with Z = 4 and Z' = 1. The crystal structure is stabilized by intra- and intermolecular O-H•••N and C-H•••O hydrogen bonds. This system of hydrogen bonds involves two neighboring molecules in the same plane. The geometric parameters of the AHQ compound in the ground state, calculated by the density functional theory (B3LYP) and Hartree-Fock (HF) methods with the 6-311G(d,p) basis set, are in good agreement with the X-ray data, except for the torsion angles, which deviate from the experimental values because the geometry of the crystal structure is subject to intermolecular forces, such as van der Waals interactions and crystal packing forces, while only intramolecular interactions were considered for the isolated molecule. The energy gap was found using HOMO and LUMO calculations; the small band gap indicates an eventual charge transfer within the molecule.
Scheme 1. Synthesis of the 5-azidomethyl-8-hydroxyquinoline molecule.
Fig. 1. DSC thermogram for AHQ material: a, heating; b, cooling.
Fig. 3. ORTEP drawing of AHQ showing the atom numbering. Displacement ellipsoids are drawn at the 50% probability level. H atoms are represented as small circles.
The final refinement, with anisotropic atomic displacement parameters for all atoms, converged to R1 = 0.0485 and wR2 = 0.1312. The unit cell parameters obtained for the single crystal are: a = 12.2879(9) Å, b = 4.8782(3) Å, c = 15.7423(12) Å, β = 100.807(14)°, which indicates that the structure is monoclinic with the space group P21/c. The crystal packing of AHQ shows that the molecule is not planar (Fig. 4). The orientation of the azide group is defined by the torsion angles C(5)-C(7)-C(12)-N(13) [80.75(19)°] and C(8)-C(7)-C(12)-N(13) [-96.42(18)°] obtained by X-ray crystallography (Table 3).
Weak intramolecular O-H•••N hydrogen bonding is present between the phenol donor and the adjacent pyridine N-atom acceptor [O11•••N1 = 2.7580(17) Å and O11-H11•••N1 = 115.1(16)°] (Fig. 5a). A moderate intermolecular O-H•••N hydrogen bond is also present [O11•••N1 = 2.8746(17) Å and O11-H11•••N1 = 130.1(17)°]. The acceptor function of the oxygen atom is employed by two weak intermolecular C-H•••O hydrogen bonds, whose parameter values are reported in Table 2.
Fig. 4. Crystal packing of the AHQ chains.
Fig. 5. View of the H-bonding as dashed lines; H atoms not involved are omitted.
Fig. 6. Molecular structure of AHQ optimized at the DFT (B3LYP) level.
Fig. 7. Molecular orbital surfaces and energy levels for the HOMO and LUMO of the AHQ compound computed at the DFT/B3LYP/6-311G(d,p) level.
13C NMR (75 MHz, DMSO-d6), δ ppm = 51.38, 110.47, 122.69, 127.550, 130.03, 133.17, 139.29, 148.72, 154.54.
Table 1. Crystallographic data for the AHQ molecule
Parameters Monocrystal Powder
Temperature, K 260(2) 295
Wavelength, Å 1.54187 1.54056
Space group P21/c P21/c
a, Å 12.2879(9) 12.2643(12)
b, Å 4.8782(3) 4.8558(6)
c, Å 15.7423(12) 15.6838(14)
β, deg 100.807(14) 100.952(7)
Volume, Å3 926.90(11) 917.01(17)
Z(Z') 4(1) 4(1)
Density (calcd.), g/cm3 1.435 1.450
Table 3. Structural parameters of AHQ determined experimentally by X-ray diffraction (1) and calculated by the DFT (B3LYP) (2) and HF (3) methods with the 6-311G(d,p) basis set.
| 16,267 | [
"170942",
"13893"
] | [
"487966",
"487966",
"136813",
"136813",
"23279",
"487967",
"487967",
"487966"
] |
01485243 | en | [
"info"
] | 2024/03/04 23:41:48 | 2017 | https://inria.hal.science/hal-01485243/file/RR-9038.pdf | George Bosilca
Clément Foyer † Emmanuel Jeannot
Guillaume Mercier
Guillaume Papauré
Online Dynamic Monitoring of MPI Communications: Scientific User and Developer Guide
Keywords: MPI, Monitoring, Communication Pattern, Process Placement
Understanding application communication patterns became increasingly relevant as the complexity and diversity of the underlying hardware along with elaborate network topologies are making the implementation of portable and efficient algorithms more challenging. Equipped with the knowledge of the communication patterns, external tools can predict and improve the performance of applications either by modifying the process placement or by changing the communication infrastructure parameters to refine the match between the application requirements and the message passing library capabilities. This report presents the design and evaluation of a communication monitoring infrastructure developed in the Open MPI software stack and able to expose a dynamically configurable level of detail about the application communication patterns, accompanied by a user documentation and a technical report about the implementation details.
Introduction
With the expected increase of application concurrency and input data size, one of the most important challenges to be addressed in the forthcoming years is that of data transfers and locality, i.e. how to improve data accesses and transfers in the application. Among the various aspects of locality, one particular issue stems from both the memory and the network. Indeed, the transfer time of data exchanges between processes of an application depends on both the affinity of the processes and their location. A thorough analysis of an application's behavior and of the target underlying execution platform, combined with clever algorithms and strategies, has the potential to dramatically improve the application communication time, making it more efficient and robust to changing network conditions (e.g. contention). In general the consensus is that the performance of many existing applications could benefit from an improved data locality [START_REF] Hoefler | An overview of topology mapping algorithms and techniques in high-performance computing[END_REF].
Hence, to compute an optimal -or at least an efficient -process placement we need to understand on one hand the underlying hardware characteristics (including memory hierarchies and network topology) and on the other hand how the application processes are exchanging messages. The two inputs of the decision algorithm are therefore the machine topology and the application communication pattern. The machine topology information can be gathered through existing tools, or be provided by a management system. Among these tools Netloc/Hwloc [START_REF] Broquedis | hwloc: A generic framework for managing hardware affinities in hpc applications[END_REF] provides a (almost) portable way to abstract the underlying topology as a graph interconnecting the various computing resources. Moreover, the batch scheduler and system tools can provide the list of resources available to the running jobs and their interconnections.
To address the second point, and understand the data exchanges between processes, precise information about the application communication patterns is needed. Existing tools are either addressing the issue at a high level failing to provide accurate details, or they are intrusive, deeply embedded in the communication library. To confront these issues we have designed a light and flexible monitoring interface for MPI applications that possess the following features. First, the need to monitor more than simply two-sided communications (a communication where the source and destination of the message are explicitly invoking an API for each message) is becoming prevalent. As such, our monitoring support is capable of extracting information about all types of data transfers: two-sided, one-sided (or Remote Memory Access) and I/O. In the scope of this report, we will focus our analysis on one-sided and two-sided communications. We record the number of messages, the sum of message sizes and the distribution of the sizes between each pair of processes. We also record how these messages have been generated (direct user calls via the two-sided API, automatically generated as a result of collective algorithms, related to one-sided messages). Second, we provide mechanisms for the MPI applications themselves to access this monitoring information, through the MPI Tool interface. This allows to dynamically enable or disable the monitoring (to record only specific parts of the code, or only during particular time periods) and gives the ability to introspect the application behavior. Last, the output of this monitoring provides different matrices describing this information for each pair of processes. Such data is available both on-line (i.e. during the application execution) or/and off-line (i.e. for post-mortem analysis and optimization of a subsequent run).
We have conducted experiments to assess the overhead of this monitoring infrastructure and to demonstrate its effectiveness compared to other solutions from the literature.
The outline of this report is as follows: in Section 2 we present the related work. The required background is exposed in Section 3. We then present the design in Section 4, and the implementation in Section 5. Results are discussed in Section 6 while the scientific conclusion is exposed in Section 7. The user documentation of the monitoring component is to be found in Section 8 with an example and the technical details are in Section 9.
Related Work
Monitoring an MPI application can be achieved in many ways but in general relies on intercepting the MPI API calls and delivering aggregated information. We present here some examples of such tools.
PMPI is a customizable profiling layer that allows tools to intercept MPI calls. Therefore, when a communication routine is called, it is possible to keep track of the processes involved as well as the amount of data exchanged. However, this approach has several drawbacks. First, managing MPI datatypes is awkward and requires a conversion at each call. Last but not least, it cannot comprehend some of the most critical data movements: an MPI collective is eventually implemented by point-to-point communications, but the participants in the underlying data exchange pattern cannot be guessed without knowledge of the collective algorithm implementation. For instance, a reduce operation is often implemented with an asymmetric tree of point-to-point sends/receives in which every process has a different role (root, intermediary and leaves). Known examples of stand-alone libraries using PMPI are DUMPI [START_REF] Janssen | A simulator for large-scale parallel computer architectures[END_REF] and mpiP [START_REF] Vetter | Statistical scalability analysis of communication operations in distributed applications[END_REF].
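As an illustration of this interception mechanism, a PMPI-based profiler typically provides its own MPI_Send that records the outgoing traffic before forwarding the call through the name-shifted PMPI_Send entry point. The sketch below is generic (the counter variables are ours and are not taken from the tools cited above):

#include <mpi.h>
#include <stdint.h>

static uint64_t sent_msgs  = 0;  /* number of intercepted sends     */
static uint64_t sent_bytes = 0;  /* cumulated payload size in bytes */

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    int size;
    MPI_Type_size(datatype, &size);          /* datatype -> size in bytes */
    sent_msgs  += 1;
    sent_bytes += (uint64_t)count * (uint64_t)size;
    return PMPI_Send(buf, count, datatype, dest, tag, comm);  /* real send */
}

Such a wrapper only sees the user-level call: the point-to-point messages generated internally by a collective never cross it, which is precisely the limitation discussed above.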
Score-P [START_REF] Knüpfer | Score-P: A Joint Performance Measurement Run-Time Infrastructure for Periscope[END_REF] is another tool for analyzing and monitoring MPI programs. This tool is based on different but partially redundant analyzers that have been gathered within a single tool to allow both online and offline analysis. Score-P relies on MPI wrappers and call-path profiles for online monitoring. Nevertheless, the application monitoring support offered by these tools is kept outside of the library, limiting the access to the implementation details and the communication pattern of collective operations once decomposed.
PERUSE [START_REF] Keller | Implementation and Usage of the PERUSE-Interface in Open MPI[END_REF] took a different approach by allowing the application to register callbacks that will be raised at critical moments in the point-to-point request lifetime, providing an opportunity to gather information on state changes inside the MPI library and therefore gaining a very low-level insight on what data (not only point-to-point but also collectives), how and when it is exchanged between processes. This technique has been used in [START_REF] Brown | Tracing Data Movements Within MPI Collectives[END_REF][START_REF] Keller | Implementation and Usage of the PERUSE-Interface in Open MPI[END_REF]. Despite these interesting outcomes, the PERUSE interface failed to gain traction in the community.
We see that no existing tool provides a monitoring that is both light and precise (e.g. showing the decomposition of collective communications).
Background
The Open MPI Project [START_REF] Gabriel | Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation[END_REF] is a comprehensive implementation of the MPI 3.1 standard [START_REF] Forum | MPI: A Message-Passing Interface Standard[END_REF] that was started in 2003, taking ideas from four earlier institutionallybased MPI implementations. It is developed and maintained by a consortium of academic, laboratory, and industry partners, and distributed under a modified BSD open source license. It supports a wide variety of CPU and network architectures that is used in the HPC systems. It is also the base for a number of vendors commercial MPI offerings, including Mellanox, Cisco, Fujitsu, Bull, and IBM. The Open MPI software is built on the Modular Component Architecture (MCA) [START_REF] Barrett | Analysis of the Component Architecture Overhead in Open MPI[END_REF], which allows for compile or runtime selection of the components used by the MPI library. This modularity enables experiments with new designs, algorithms, and ideas to be explored, while fully maintaining functionality and performance. In the context of this study, we take advantage of this functionality to seamlessly interpose our profiling components along with the highly optimized components provided by the stock Open MPI version.
MPI Tool is an interface that has been added to the MPI-3 standard [START_REF] Forum | MPI: A Message-Passing Interface Standard[END_REF]. This interface allows the application to configure internal parameters of the MPI library, and also to get access to internal information from the MPI library. In our context, this interface offers a convenient and flexible way to access the monitored data stored by the implementation, as well as to control the monitoring phases.
Process placement is an optimization strategy that takes into account the affinity of processes (represented by a communication matrix) and the machine topology to decrease the communication costs of an application [START_REF] Hoefler | An overview of topology mapping algorithms and techniques in high-performance computing[END_REF]. Various algorithms to compute such a process placement exist, one being TreeMatch [START_REF] Jeannot | Process Placement in Multicore Clusters: Algorithmic Issues and Practical Techniques[END_REF] (designed by a subset of the authors of this article). We can distinguish between static process placement which is computed from traces of previous runs, and dynamic placement computed during the application execution (See experiments in Section 6).
Design
The monitoring generates the application communication pattern matrix. The order of the matrix is the number of processes and each (i, j) entry gives the amount of communication between process i and process j. It outputs several values and hence several matrices: the number of bytes and the number of messages exchanged. Moreover it distinguishes between point-to-point communications and collective or internal protocol communications.
It is also able to monitor collective operations once decomposed into point-to-point communications. Therefore, it requires intercepting the communication inside the MPI library itself, instead of relinking weak symbols to a third-party dynamic library, which allows this component to be used in parallel with other profiling tools (e.g. PMPI).
For scalability reasons, we can automatically gather the monitoring data into one file instead of dumping one file per rank.
To sum up, we aim at covering a wide spectrum of needs, with different levels of complexity for various levels of precision. It provides an API for each application to enable, disable or access its own monitoring information. Otherwise, it is possible to monitor an application without any modification of its source code by activating the monitoring components at launch time and to retrieve results when the application completes. We also supply a set of mechanisms to combine monitored data into communication matrices. They can be used either at the end of the application (when MPI_Finalize is called), or post-mortem. For each pair of processes, an histogram of geometrically increasing message sizes is available.
Implementation
The precision needed for the results led us to implement the solution within the Open MPI stack. The component described in this article has been developed in a branch of Open MPI (available at [13]) that will soon be made available in the stock version. As we were planning to intercept all types of communications, two-sided, one-sided and collectives, we have exposed a minimalistic common API for the profiling as an independent engine, and then linked all the MCA components doing the profiling with this engine. Due to the flexibility of the MCA infrastructure, the active components can be configured at runtime, either via mpiexec arguments or via the API (implemented with the MPI Tool interface).
In order to cover the wide range of operations provided by MPI, four components were added to the software stack. One in the collective communication layer (COLL), one in the one-sided layer (remote memory accesses , OSC), one in the point-to-point management layer (PML), and finally one common layer capable of orchestrating the information gathered by the other layers and record data. This set of components when activated at launch time (through the mpiexec option --mca pml_monitoring_enable x ), monitors all specified types of communications, as indicated by the value of x. The design of Open MPI allows for easy distinctions between different types of communication tags, and x allows the user to include or exclude tags related to collective communications, or to other internal coordination (these are called internal tags in opposition to external tags that are available to the user via the MPI API). Specifically, the PML layer sees communications once collectives have been decomposed into point-to-point operations. COLL and OSC both work at a higher level, in order to be able to record operations that do not go through the PML layer, for instance when using dedicated drivers. Therefore, as opposed to the MPI standard profiling interface (PMPI) approach where the MPI calls are intercepted, we monitor the actual point-to-point calls that are issued by Open MPI, which yields much more precise information. For instance, we can infer the underlying topologies and algorithms behind the collective algorithms, as an example the tree topology used for aggregating values in a MPI_Reduce call. However, this comes at the cost of a possible redundant recording of data for collective operations, when the data-path goes through the COLL and the PML components2 .
For an application to enable, disable or access its own monitoring, we implemented a set of callback functions using MPI Tool. At any time, it is possible to know the amount of data exchanged between a pair of processes since the beginning of the application or just in a specific part of the code. Furthermore, the final summary dumped at the end of the application gives a detailed output of the data exchanged between processes for each point-to-point, one-sided and collective operation. The user is then able to refine the results.
Internally, these components use an internal process identifier (id) and a single associative array employed to translate sender and receiver ids into their MPI_COMM_WORLD counterparts. Our mechanism is therefore oblivious to communicator splitting, merging or duplication. When a message is sent, the sender updates three arrays: the number of messages, the size (in bytes) sent to the specific receiver, and the message size distribution. Moreover, to distinguish between external and internal tags, one-sided emitted and received messages, and collective operations, we maintain five versions of the first two arrays. Also, the histogram of the message size distribution is kept for each pair of ids, and goes from 0-byte messages to messages of more than 2^64 bytes. Therefore, the memory overhead of this component is at maximum 10 arrays of N 64-bit elements, in addition to the N arrays of 66 64-bit elements for the histograms, with N being the number of MPI processes. These arrays are lazily allocated, so they only exist for a remote process if there are communications with it.
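To put this overhead in perspective (the process count below is an arbitrary example, not a configuration used in our experiments): with 10 arrays of N 64-bit elements plus N histograms of 66 64-bit bins, a process stores at most (10 x 8 + 66 x 8) x N = 608 N bytes, i.e. roughly 623 kB for N = 1024 MPI processes, before lazy allocation is even taken into account.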
In addition to the amount of data and the number of messages exchanged between processes, we keep track of the type of collective operations issued on each communicator: one-to-all operations (e.g MPI_Scatter), all-to-one operations (e.g MPI_Gather) and all-to-all operations (e.g MPI_Alltoall). For the first two types of operations, the root process records the total amount of data sent and received, respectively, and the count of operations of each kind. For all-to-all operations, each process records the total amount of data sent, and the count of operations. All these pieces of data can be flushed into files either at the end of the application or when requested through the API.
Results
We carried out the experiments on an Infiniband cluster (HCA: Mellanox Technologies MT26428 (ConnectX IB QDR)). Each node features two Intel Xeon Nehalem X5550 CPUs with 4 cores (2.66 GHz) per CPU.
Overhead Measurement
One of the main issues of monitoring is the potential impact on the application time-to-solution. As our monitoring can be dynamically enabled and disabled, we can compute the upper bound of the overhead by measuring the impact with the monitoring enabled on the entire application. We wrote a micro benchmark that computes the overhead induced by our component for various kinds of MPI functions, and measured this overhead for both shared-and distributed-memory cases. The number of processes varies from 2 to 24 and the amount of data ranges from 0 up to 1MB. Fig. 1 displays the results as heatmaps (the median of thousand measures). Blue nuances correspond to low overhead while yellow colors to higher overhead. As expected the overhead is more visible on a shared memory setting, where the cost of the monitoring is more significant compared with the decreasing cost of data transfers. Also, as the overhead is related to the number of messages and not to their content, the overhead decreases as the size of the messages increases. Overall, the median overhead is 4.4% and 2.4% for respectively the shared-and distributed-memory cases, which proves that our monitoring is cost effective.
We have also built a second micro-benchmark that performs a series of all-to-all exchanges only (with no computation) for a given buffer size. In Fig. 2, we outline the average difference between the monitoring and non-monitoring times when the exchanged buffer size varies, normalized to one all-to-all call and to one process. We also plot, as error bars, the 95% confidence interval computed with the Student paired t-test.
We see that when the buffer size is small (less than 50 integers), the monitoring time is statistically longer than the non-monitoring time. On average, monitoring one all-to-all call for one process takes around 10 ns. However, when the buffer size increases, the error bars cover both negative and positive values, meaning that, statistically, there is no difference between the monitoring time and the non-monitoring time. This is explained as follows: when the buffer size increases, the execution time increases while the monitoring time stays constant (we have the same number of messages). Therefore, the whole execution time is less stable (due to noise in the network traffic and software stack) and hence the difference between the monitoring case and the non-monitoring case becomes less visible and is hidden by this noise.
In order to measure the impact on applications, we used some of the NAS parallel benchmarks, namely BT, CG and LU. The choice of these tests is not innocent: we picked the ones with the highest number of MPI calls, in order to maximize the potential impact of the monitoring on the application. Table 1 shows the results, which are an average of 20 runs. Shaded rows mean that the measures display a statistically significant difference (using the Student's t-test on the measures) between a monitored run and a non-monitored one.
Only the BT, CG and LU kernels have been evaluated as they are the ones issuing the largest number of messages per processor. They are therefore the ones for which the monitoring overhead should be most visible.
Overall, we see that the overhead is consistently below 1% and on average around 0.35%. Interestingly, for the LU kernel, the overhead seems lightly correlated with the message rate, meaning that the larger the communication activity, the higher the overhead. For the CG kernel, however, the timings are so small that it is hard to see any influence of this factor beyond measurement noise.
Fig. 2. All-to-all monitoring overhead: average time difference between monitored and non-monitored runs (in ns), per call and per process, as a function of the number of MPI INT sent, with confidence intervals shown as error bars.
We have also tested the Minighost mini-application [START_REF] Barrett | Minighost: a miniapp for exploring boundary exchange strategies using stencil computations in scientific parallel computing[END_REF], which computes a stencil in various dimensions, to evaluate the overhead. An interesting feature of this mini-application is that it outputs the percentage of time spent performing communication. In Fig. 3, we depict the overhead depending on this communication ratio. We have run 114 different executions of the Minighost application and have split these runs into four categories depending on the percentage of time spent in communications (0%-25%, 25%-50%, 50%-75% and 75%-100%). A point represents the median overhead (in percent) and the error bars represent the first and third quartiles. We see that the median overhead increases with the percentage of communication. Indeed, the more time is spent in communication, the more visible the overhead of monitoring these communications becomes. However, the overhead accounts for only a small percentage.
MPI Collective Operations Optimization
In these experiments we have executed an MPI_Reduce collective call on 32 and 64 ranks (on 4 and 8 nodes respectively), with a buffer whose size ranges between 1x10^6 and 2x10^8 integers, rank 0 acting as the root. We took advantage of the Open MPI infrastructure to block the dynamic selection of the collective algorithm and instead forced the reduce operation to use a binary tree algorithm. Since we monitor the collective communications once they have been broken down into point-to-point communications, we are able to identify details of the collective algorithm implementation and expose the underlying binary tree algorithm (see Fig. 4b). This provides a much more detailed understanding of the underlying communication pattern than existing tools, since a higher-level monitoring tool (e.g. PMPI) completely hides this decomposition. Based on this detailed pattern, we computed a new process placement with the TreeMatch algorithm and compared it with the placement obtained using a high-level monitoring (which does not see the tree and hence is equivalent to the round-robin placement). Results are shown in Fig. 4a. We see that the optimized placement is much more efficient than the one based on high-level monitoring. For instance, with 64 ranks and a buffer of 5x10^6 integers the walltime is 338 ms vs. 470 ms (39% faster).
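As a side note, pinning the collective algorithm as described above can be reproduced through the MCA parameters of the tuned collective component; the algorithm index below is only indicative, since the mapping between indices and algorithms may change between Open MPI versions (ompi_info --param coll tuned --level 9 lists the valid choices), and ./reduce_bench stands for any benchmark calling MPI_Reduce:

mpiexec -n 64 --mca pml_monitoring_enable 2 \
        --mca coll_tuned_use_dynamic_rules 1 \
        --mca coll_tuned_reduce_algorithm 4 ./reduce_bench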
Use Case: Fault Tolerance with Online Monitoring
In addition to the usage scenarios mentioned above, the proposed dynamic monitoring tool has been demonstrated in one of our recent works. In [START_REF] Cores | An application-level solution for the dynamic reconfiguration of mpi applications[END_REF], we have used the dynamic monitoring feature to compute the communication matrix during the execution of an MPI application. The goal was to perform elastic computations in case of node failures or when new nodes become available. The runtime system migrates MPI processes when the number of computing resources changes. To this end, the authors used the TreeMatch [START_REF] Jeannot | Process Placement in Multicore Clusters: Algorithmic Issues and Practical Techniques[END_REF] algorithm to recompute the process mapping onto the available resources. The algorithm decides how to move processes based on the application's gathered communication matrix: the more two processes communicate, the closer they shall be re-mapped onto the physical resources. Gathering the communication matrix was performed online using the callback routines of the monitoring: such a result would not have been possible without the tool proposed in this report.
Fig. 3. Median monitoring overhead (in percent) as a function of the ratio of time spent in communication (0%-25%, 25%-50%, 50%-75%, 75%-100%).
Fig. 5. Average gain per grid size and stencil type, grouped by number of processes, number of variables and affinity metric type.
Static Process Placement of applications
We have tested the TreeMatch algorithm for performing static placement to show that the monitoring provides relevant information allowing execution optimization. To do so, we first monitor the application using the proposed monitoring tool of this report; second, we build the communication matrix (here using the number of messages); then we apply the TreeMatch algorithm to this matrix and the topology of the target architecture; and last, we re-execute the application using the newly computed mapping. Different settings (kind of stencil, stencil dimension, number of variables per stencil point, and number of processes) are shown in Fig. 5. We see that the gain is up to 40% when compared to round-robin placement (the standard MPI placement) and 300% against random placement. The decrease of performance is never greater than 2%.
Scientific Conclusions
Parallel applications tend to use a growing number of computational resources connected via complex communication schemes that naturally diverge from the underlying network topology. Optimizing application performance requires identifying any mismatch between the application communication pattern and the network topology, and this demands a precise mapping of all data exchanges between the application processes.
In this report we proposed a new monitoring framework to consistently track all types of data exchanges in MPI applications. We have implemented the tool as a set of modular components in Open MPI, allowing fast and flexible low-level monitoring (with collective operations decomposed into their point-to-point expression) of all types of communications supported by the MPI-3 standard (including one-sided communications and I/O). We have also provided an API, based on the MPI Tool standard, for applications to monitor their state dynamically, focusing the monitoring on only critical portions of the code. The basic usage of this tool does not require any change in the application, nor any special compilation flag. The data gathered can be provided at different granularities, either as communication matrices or as histograms of message sizes. Another significant feature of this tool is that it leaves the PMPI interface available for other usages, allowing additional monitoring of the application using more traditional tools.
Micro-benchmarks show that the overhead is minimal for intra-node communications (over shared memory) and barely noticeable for large messages or distributed memory. Once applied to real applications, the overhead remains hardly visible (at most a few percent). Having such a precise and flexible monitoring tool opens the door to dynamic process placement strategies, and could lead to highly efficient process placements. Experiments show that this tool enables large gains for both dynamic and static cases. The fact that the monitoring records the communications after the collectives have been decomposed into point-to-point messages allows optimizations that were not otherwise possible.
User Documentation
This section details how the component is to be used. This documentation presents the concepts on which we based our component's API, and the different options available. It first explains how to use the component, then summarize it in a quick start tutorial.
Introduction
MPI_Tool is a concept introduced in the MPI-3 standard. It allows MPI developers, or third parties, to offer a portable interface to different tools. These tools may be used to monitor an application, measure its performance, or profile it.
MPI_Tool is an interface that eases the addition of external functions to an MPI library. It also allows the user to control and monitor given internal variables of the runtime system.
The present section introduces the use of the MPI_Tool interface from a user point of view, and facilitates the usage of the Open MPI monitoring component. This component allows for precisely recording the message exchanges between nodes during the execution of MPI applications. The number of messages and the amount of data exchanged are recorded, including or excluding internal communications (such as those generated by the implementation of the collective algorithms).
This component offers two types of monitoring, depending on whether the user wants fine control over the monitoring or just an overall view of the messages. Moreover, the fine control allows the user to access the results from within the application, and lets them reset the variables when needed. The fine control is achieved via the MPI_Tool interface, which requires the code to be adapted by adding a specific initialization function. However, the basic overall monitoring is achieved without any modification of the application code.
Whether you are using one version or the other, the monitoring needs to be enabled with parameters added when calling mpiexec, or globally in your Open MPI MCA configuration file ($HOME/.openmpi/mca-params.conf). Three new parameters have been introduced:
--mca pml_monitoring_enable value This parameter sets the monitoring mode.
value may be:
0 monitoring is disabled
1 monitoring is enabled, with no distinction between user issued and library issued messages.
≥ 2 monitoring enabled, with a distinction between messages issued from the library (internal) and messages issued from the user (external).
--mca pml_monitoring_enable_output value This parameter enables the automatic flushing of the monitored values during the call to MPI_Finalize. This option is to be used only without MPI_Tool, or with value = 0. value may be:
0 output is disabled
1 or 2 output is written to a standard stream (stdout or stderr)
≥ 3 output is written to the file set with pml_monitoring_filename
Each MPI process flushes its recorded data. The pieces of information can then be aggregated either with the use of PMPI (see Section 8.4) or with the distributed script test/monitoring/profile2mat.pl.
--mca pml_monitoring_filename filename Set the file where to flush the resulting output from monitoring. The output is a communication matrix of both the number of messages and the total size of exchanged data between each couple of nodes. This parameter is needed if pml_monitoring _enable_output ≥ 3.
Also, some of the monitoring components can be selectively disabled for a given run by adding the corresponding MCA parameters at mpiexec time.

Without MPI_Tool

This mode should be used to monitor the whole application from its start until its end. It is designed so that you can record the amount of communications without any code modification.
In order to do so, Open MPI has to be compiled with monitoring enabled. When you launch your application, you need to set the parameter pml_monitoring_enable to a value > 0 and, if pml_monitoring_enable_output ≥ 3, to set the pml_monitoring_filename parameter to a proper filename, whose path must exist.
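For instance, assuming a 16-process run of an application ./my_app (both the process count and the application name are placeholders), the whole-application monitoring described in this section boils down to:

mpiexec -n 16 --mca pml_monitoring_enable 2 \
        --mca pml_monitoring_enable_output 3 \
        --mca pml_monitoring_filename ./prof/monitoring ./my_app

Each rank then dumps its records into per-process files based on the ./prof/monitoring prefix, which can be aggregated afterwards with the profile2mat.pl script mentioned above.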
With MPI_Tool
This section explains how to monitor your applications with the use of MPI_Tool.
How it works
MPI_Tool is a layer that is added to the standard MPI implementation. As such, it must be noted first that it may have an impact on performance.
As these functionalities are orthogonal to the core ones, the MPI_Tool initialization and finalization are independent from MPI's. There is no restriction regarding the order of the different calls. Also, the MPI_Tool interface initialization function can be called more than once within the execution, as long as the finalize function is called as many times.
MPI_Tool introduces two types of variables, control variables and performance variables. These variables will be referred to respectively as cvar and pvar. The variables can be used to dynamically tune the library to best fit the needs of the application. They are defined by the library (or by the external component), and accessed with the accessor functions specified in the standard. The variables are named uniquely throughout the application. Every variable, once defined and registered within the MPI engine, is given an index that will not change during the entire execution.
As for the monitoring without MPI_Tool, you need to start your application with the control variable pml_monitoring_enable properly set. Even though it is not required, you can also add to your command line the desired filename to flush the monitoring output. As long as no filename is provided, no output can be generated.
Initialization
The initialization is made by a call to MPI_T_init_thread. This function takes two parameters. The first one is the desired level of thread support, the second one is the provided level of thread support. It has the same semantics as the MPI_Init_thread function. Please note that the first function to be called (between MPI_T_init_thread and MPI_Init_thread) may influence the second one for the provided level of thread support. The goal of this function is to initialize the control and performance variables.
In order to use the performance variables within one context without influencing those from another context, a variable has to be bound to a session. To create a session, you have to call MPI_T_pvar_session_create.
In addition to the binding to a session, a performance variable may also be bound to an MPI object. For example, the pml_monitoring_flush variable needs to be bound to a communicator. In order to do so, you need to use the MPI_T_pvar_handle_alloc function, which takes as parameters the session to be used, the id of the variable, the MPI object (i.e. MPI_COMM_WORLD in the case of pml_monitoring_flush), a reference to the performance variable handle and a reference to an integer value. The last parameter allows the user to receive some additional information about the variable or the bound MPI object. As an example, when binding to the pml_monitoring_flush performance variable, the last parameter is set to the length of the current filename used for the flush, if any, and 0 otherwise; when binding to the pml_monitoring_messages_count performance variable, the parameter is set to the size of the bound communicator, as it corresponds to the expected size of the array (in number of elements) when retrieving the data. This parameter is used to let the application determine the amount of data to be returned when reading the performance variables. Please note that the handle_alloc function takes the variable id as a parameter. In order to retrieve this value, you have to call MPI_T_pvar_get_index, which takes as an IN parameter a string containing the name of the desired variable.
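Putting these calls together, the initialization and binding sequence can be sketched as follows (error checking is omitted for brevity and the helper function name is ours):

#include <mpi.h>

/* Bind the pml_monitoring_flush variable to MPI_COMM_WORLD inside a
 * freshly created session, and return the allocated handle. */
static MPI_T_pvar_handle bind_flush_pvar(MPI_T_pvar_session *session)
{
    int provided, idx, count;
    MPI_T_pvar_handle handle;
    MPI_Comm comm = MPI_COMM_WORLD;

    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_T_pvar_get_index("pml_monitoring_flush",
                         MPI_T_PVAR_CLASS_GENERIC, &idx);
    MPI_T_pvar_session_create(session);
    MPI_T_pvar_handle_alloc(*session, idx, &comm, &handle, &count);
    /* count now holds the length of the currently set flush filename, if any */
    return handle;
}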
How to use the performance variables
Some performance variables are defined in the monitoring component: pml_monitoring_flush Allow the user to define a file where to flush the recorded data.
pml_monitoring_messages_count Allow the user to access within the application the number of messages exchanged through the PML framework with each node from the bound communicator (MPI_Comm). This variable returns an array of number of nodes unsigned long integers.
pml_monitoring_messages_size Allow the user to access within the application the amount of data exchanged through the PML framework with each node from the bound communicator (MPI_Comm). This variable returns an array of number of nodes unsigned long integers.
osc_monitoring_messages_sent_count Allow the user to access within the application the number of messages sent through the OSC framework with each node from the bound communicator (MPI_Comm). This variable returns an array of number of nodes unsigned long integers.
osc_monitoring_messages_sent_size Allow the user to access within the application the amount of data sent through the OSC framework with each Inria node from the bound communicator (MPI_Comm). This variable returns an array of number of nodes unsigned long integers.
osc_monitoring_messages_recv_count Allow the user to access within the application the number of messages received through the OSC framework with each node from the bound communicator (MPI_Comm). This variable returns an array of number of nodes unsigned long integers.
osc_monitoring_messages_recv_size Allow the user to access within the application the amount of data received through the OSC framework with each node from the bound communicator (MPI_Comm). This variable returns an array of number of nodes unsigned long integers.
coll_monitoring_messages_count Allow the user to access within the application the number of messages exchanged through the COLL framework with each node from the bound communicator (MPI_Comm). This variable returns an array of number of nodes unsigned long integers.
coll_monitoring_messages_size Allow the user to access within the application the amount of data exchanged through the COLL framework with each node from the bound communicator (MPI_Comm). This variable returns an array of number of nodes unsigned long integers.
coll_monitoring_o2a_count Allow the user to access within the application the number of one-to-all collective operations across the bound communicator (MPI_Comm) where the process was defined as root. This variable returns a single unsigned long integer.
coll_monitoring_o2a_size Allow the user to access within the application the amount of data sent as one-to-all collective operations across the bound communicator (MPI_Comm). This variable returns a single unsigned long integer. The communications between a process and itself are not taken into account.
coll_monitoring_a2o_count Allow the user to access within the application the number of all-to-one collective operations across the bound communicator (MPI_Comm) where the process was defined as root. This variable returns a single unsigned long integer.
coll_monitoring_a2o_size Allow the user to access within the application the amount of data received from all-to-one collective operations across the bound communicator (MPI_Comm). This variable returns a single unsigned long integer. The communications between a process and itself are not taken into account.
coll_monitoring_a2a_count Allow the user to access within the application the number of all-to-all collective operations across the bound communicator (MPI_Comm). This variable returns a single unsigned long integer.
coll_monitoring_a2a_size Allow the user to access within the application the amount of data sent as all-to-all collective operations across the bound communicator (MPI_Comm). This variable returns a single unsigned long integer. The communications between a process and itself are not taken into account.
In case of uncertainty about how a collective operation is categorized, please refer to the list given in Table 2.
Once bound to a session and to the proper MPI object, these variables may be accessed through a set of given functions. It must be noted here that each of the functions applied to the different variables need, in fact, to be called with the handle of the variable.
The first variable may be modified using the MPI_T_pvar_write function. The other variables may be read using MPI_T_pvar_read but cannot be written. Stopping the flush performance variable, with a call to MPI_T_pvar_stop, forces the counters to be flushed into the given file, resetting the counters to 0 at the same time. Also, binding a new handle to the flush variable resets the counters. Finally, please note that the size and count performance variables may overflow for very large amounts of communication.
The monitoring starts with the call to MPI_T_pvar_start and lasts until the moment you call the MPI_T_pvar_stop function.
Once you are done with the different monitoring operations, you can clean everything up by calling MPI_T_pvar_handle_free to free the allocated handles, MPI_T_pvar_session_free to free the session, and MPI_T_finalize to state the end of your use of performance and control variables.
Overview of the calls
To summarize the previous information, here is the list of available performance variables, and the outline of the different calls to be used to properly access monitored data through the MPI_Tool interface.
• pml_monitoring_flush
• pml_monitoring_messages_count
• pml_monitoring_messages_size
• osc_monitoring_messages_sent_count
• osc_monitoring_messages_sent_size
• osc_monitoring_messages_recv_count
• osc_monitoring_messages_recv_size
• coll_monitoring_messages_count
• coll_monitoring_messages_size
• coll_monitoring_o2a_count
• coll_monitoring_o2a_size
• coll_monitoring_a2o_count
• coll_monitoring_a2o_size
• coll_monitoring_a2a_count
• coll_monitoring_a2a_size
Table 2. Categorization of the MPI collective operations as one-to-all, all-to-one and all-to-all.
Add to your command line at least --mca pml_monitoring_enable with a non-zero value (2 to distinguish internal from external messages). The sequence of MPI_Tool calls is then: MPI_T_init_thread, MPI_T_pvar_get_index, MPI_T_pvar_session_create, MPI_T_pvar_handle_alloc, MPI_T_pvar_start, then MPI_T_pvar_read or MPI_T_pvar_write as needed, MPI_T_pvar_stop, MPI_T_pvar_handle_free, MPI_T_pvar_session_free and finally MPI_T_finalize.
Use of LD_PRELOAD
In order to automatically generate communication matrices, you can use the monitoring_prof tool that can be found in test/monitoring/monitoring_prof.c. While launching your application, you can add the following option in addition to the --mca pml_monitoring_enable parameter:
-x LD_PRELOAD=ompi_install_dir/lib/monitoring_prof.so
This library automatically gathers the sent and received data into one communication matrix. Note, however, that the use of the monitoring MPI_Tool variables within the code may interfere with this library. The main goal of this library is to avoid dumping one file per MPI process, and to gather everything in one file aggregating all pieces of information.
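A complete launch line then looks as follows (the process count and the application name ./my_app are placeholders, and ompi_install_dir stands for your actual installation prefix):

mpiexec -n 16 --mca pml_monitoring_enable 2 \
        -x LD_PRELOAD=ompi_install_dir/lib/monitoring_prof.so ./my_app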
The resulting communication matrices are as close as possible to the effective amount of data exchanged between nodes. But it has to be kept in mind that, because of the stacking of the logical layers in Open MPI, the amount of data recorded as part of collective or one-sided operations may be duplicated when the PML layer handles the communication. For an exact measure of the communications, the application must use MPI_Tool's monitoring performance variables to potentially subtract double-recorded data.
Examples
First, an example of monitoring using MPI_Tool to define phases during which the monitoring component is active is presented. A second snippet then shows how to access the monitoring performance variables with MPI_Tool.
Monitoring Phases
You can execute the following example with mpiexec -n 4 --mca pml_monitoring_enable 2 test_monitoring. Please note that the prof directory needs to already exist to retrieve the dumped files. Following the complete code example, you will find a sample dumped file and the corresponding explanations. Each line of the per-process dump starts with a letter identifying the type of entry (for instance E for external point-to-point messages and I for internal ones). This letter is followed by the rank of the issuing process, and the rank of the receiving one. Then you have the total amount in bytes exchanged and the count of messages. For point-to-point entries (i.e. E or I entries), the line is completed by the full distribution of the messages in the form of a histogram. See variable size_histogram in Section 9.1.1 for the corresponding values. In the case of a disabled filtering between external and internal messages, the I lines are merged with the E lines, keeping the E header.
The end of the summary is per-communicator information, where you find the name of the communicator, the ranks of the processes included in this communicator, and the amount of data sent (or received) for each kind of collective, with the corresponding count of operations of each kind. The first integer corresponds to the rank of the process that sent or received through the given collective operation type.
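A stripped-down sketch of such a phased monitoring is given below; it is ours and not the exact content of test_monitoring.c, the dump prefix prof/phase_1 is arbitrary, the monitored phase is reduced to a single barrier, and all error checking is omitted:

#include <mpi.h>

int main(int argc, char *argv[])
{
    int provided, idx, count;
    MPI_T_pvar_session session;
    MPI_T_pvar_handle  flush_handle;
    MPI_Comm comm = MPI_COMM_WORLD;
    char filename[] = "prof/phase_1";            /* arbitrary dump prefix */

    MPI_Init(&argc, &argv);
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

    MPI_T_pvar_get_index("pml_monitoring_flush",
                         MPI_T_PVAR_CLASS_GENERIC, &idx);
    MPI_T_pvar_session_create(&session);
    MPI_T_pvar_handle_alloc(session, idx, &comm, &flush_handle, &count);

    MPI_T_pvar_write(session, flush_handle, filename); /* where to flush */
    MPI_T_pvar_start(session, flush_handle);           /* phase begins   */

    MPI_Barrier(MPI_COMM_WORLD);       /* communications to be monitored */

    MPI_T_pvar_stop(session, flush_handle);  /* counters flushed and reset */

    MPI_T_pvar_handle_free(session, &flush_handle);
    MPI_T_pvar_session_free(&session);
    MPI_T_finalize();
    MPI_Finalize();
    return 0;
}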
Accessing Monitoring Performance Variables
The following snippet presents how to access the performance variables defined as part of the MPI_Tool interface. The session allocation is not presented as it is the same as in the previous example. Please note that, contrary to the pml_monitoring_flush variable, the class of the monitoring performance variables is MPI_T_PVAR_CLASS_SIZE, whereas the flush variable is of class GENERIC. Also, performance variables are only to be read. The error handling when stopping and freeing the handle looks as follows:

        printf("failed to stop handle on \"%s\" pvar, check that you"
               " have monitoring pml\n", count_pvar_name);
        MPI_Abort(MPI_COMM_WORLD, MPIT_result);
    }
    MPIT_result = MPI_T_pvar_handle_free(session, &count_handle);
    if (MPIT_result != MPI_SUCCESS) {
        printf("failed to free handle on \"%s\" pvar, check that you"
               " have monitoring pml\n", count_pvar_name);
        MPI_Abort(MPI_COMM_WORLD, MPIT_result);
    }
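For reference, a sketch of the calls that precede this error-handling fragment (binding the count variable, starting it, and reading one 64-bit counter per peer) could look like the following; the function name is ours and the session is the one created as in the previous example:

#include <mpi.h>
#include <stdint.h>
#include <stdlib.h>

/* Read the per-peer message counters recorded by the PML layer. */
static uint64_t *read_pml_counts(MPI_T_pvar_session session)
{
    const char count_pvar_name[] = "pml_monitoring_messages_count";
    int idx, count;
    MPI_T_pvar_handle count_handle;
    MPI_Comm comm = MPI_COMM_WORLD;
    uint64_t *messages;

    MPI_T_pvar_get_index(count_pvar_name, MPI_T_PVAR_CLASS_SIZE, &idx);
    MPI_T_pvar_handle_alloc(session, idx, &comm, &count_handle, &count);
    messages = malloc(count * sizeof(uint64_t)); /* count == communicator size */

    MPI_T_pvar_start(session, count_handle);
    /* ... the communications to be measured take place here ... */
    MPI_T_pvar_read(session, count_handle, messages);
    MPI_T_pvar_stop(session, count_handle);
    MPI_T_pvar_handle_free(session, &count_handle);
    return messages;
}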
Technical Documentation of the Implementation
This section describes the technical details of the components' implementation. It is of no use from a user point of view, but it is meant to facilitate the work of future developers who would debug or enrich the monitoring components. The architecture of this component is as follows. The Common component is the main part where the magic occurs. The PML, OSC and COLL components are the entry points to the monitoring tool from the software stack point of view. The relevant files can be found in accordance with the partial directory tree presented in Figure 6.
Common
This part of the monitoring components is the place where data is managed. It centralizes all recorded information, the translation hash-table and ensures a unique initialization of the monitoring structures. This component is also the one where the MCA variables (to be set as part of the command line) are defined and where the final output, if any requested, is dealt with.
The header file defines the unique monitoring version number, different preprocessing macros for printing information using the monitoring output stream object, and the ompi monitoring API (i.e. the API to be used inside the software stack, not the one to be exposed to the end-user). It has to be noted that the mca_common_monitoring_record_* functions are to be used with the destination rank translated into the corresponding rank in MPI_COMM_WORLD. This translation is done by using mca_common_monitoring_get_world_rank.
The use of this function may be limited by how the initialization occurred (see Section 9.2).
Common monitoring
The common_monitoring.c file defines multiple variables that have the following uses:
mca_common_monitoring_hold is the counter that keeps track of whether the common component has already been initialized or whether it is to be released. The operations on this variable are atomic to avoid race conditions in a multi-threaded environment.
mca_common_monitoring_output_stream_obj is the structure used internally by Open MPI for output streams. The monitoring output stream states that this output is for debug, so the actual output will only happen when OPAL is configured with --enable-debug. The output is sent to the stderr standard output stream. The prefix field, initialized in mca_common_monitoring_init, states that every log message emitted from this stream object will be prefixed by "[hostname:PID] monitoring: ", where hostname is the configured name of the machine running the process and PID is the process id, with 6 digits, prefixed with zeros if needed.
mca_common_monitoring_enabled is the variable retaining the original value given to the MCA option system, as an example as part of the command line. The corresponding variable is pml_monitoring_enable. This variable is not to be written by the monitoring component. It is used to reset the mca_common_monitoring_current_state variable between phases. The value given to this parameter also defines whether or not the filtering between internal and externals messages is enabled.
mca_common_monitoring_current_state is the variable used to determine the actual current state of the monitoring. This variable is the one used to define phases.
mca_common_monitoring_output_enabled is a variable, set by the MCA engine, that states whether or not the user requested a summary of the monitored data to be streamed out at the end of the execution. It also states whether the output should be to stdout, stderr or to a file. If a file is requested, the next two variables have to be set. The corresponding variable is pml_monitoring_enable_output. Warning: This variable may be set to 0 in case the monitoring is also controlled with MPI_Tool. We cannot both control the monitoring via MPI_Tool and expect accurate answer upon MPI_Finalize.
mca_common_monitoring_initial_filename works the same way as mca_common_monitoring_enabled. This variable is, and has to be, only used as a placeholder for the pml_monitoring_filename variable. This variable has to be handled very carefully as it has to live as long as the program and it has to be a valid pointer address, whose content is not to be released by the component. The way MCA handles variables (especially strings) makes it very easy to create segmentation faults, but it deals with the memory release of the content. So, in the end, mca_common_monitoring_initial_filename is just to be read.
mca_common_monitoring_current_filename is the variable the monitoring component will work with. This variable is the one to be set by MPI_Tool's control variable pml_monitoring_flush. Even though this control variable is prefixed with pml for historical and easy reasons, it depends on the common section for its behavior.
pml_data and pml_count arrays of unsigned 64-bit integers record respectively the cumulated amount of bytes sent from the current process to another process p, and the count of messages. The entry at index i of these arrays corresponds to the data sent to the process p of id i in MPI_COMM_WORLD. These arrays are of size N, where N is the number of nodes in the MPI application. If the filtering is disabled, these variables gather all information regardless of the tags; in this case, the next two arrays are obviously not used, even though they are still allocated. The pml_data and pml_count arrays, and the nine arrays described next, are allocated, initialized, reset and freed all at once, and are contiguous in memory.
filtered_pml_data and filtered_pml_count arrays of unsigned 64-bits integers record respectively the cumulated amount of bytes sent from the current process to another process p, and the count of internal messages. The data in this array at the index i corresponds to the data sent to the process p, of id i in MPI_COMM_WORLD. These arrays are of size N , where N is the number of nodes in the MPI application. The internal messages are defined as messages sent through the PML layer, with a negative tag. They are issued, as an example, from the decomposition of collectives operations.
osc_data_s and osc_count_s arrays of unsigned 64-bit integers record respectively the cumulated amount of bytes sent from the current process to another process p, and the count of messages. The entry at index i of these arrays corresponds to the data sent to the process p of id i in MPI_COMM_WORLD. These arrays are of size N, where N is the number of nodes in the MPI application.
osc_data_r and osc_count_r are arrays of unsigned 64-bit integers that record, respectively, the cumulative number of bytes received by the current process from another process p, and the count of such messages. The entry at index i corresponds to the data received from the process p of id i in MPI_COMM_WORLD. These arrays are of size N, where N is the number of nodes in the MPI application.
coll_data and coll_count are arrays of unsigned 64-bit integers that record, respectively, the cumulative number of bytes sent from the current process to another process p, in the case of all-to-all or one-to-all operations, or received by the current process from another process p, in the case of all-to-one operations, and the count of messages. The entry at index i corresponds to the data exchanged with the process p of id i in MPI_COMM_WORLD. These arrays are of size N, where N is the number of nodes in the MPI application. The communications are thus considered symmetrical in the resulting matrices.
size_histogram is an array of unsigned 64-bit integers that records the distribution of the sizes of PML messages, filtered or not, between the current process and a process p. The histogram uses a log-2 scale. Index 0 is reserved for empty messages. Messages of size between 1 and 2^64 are recorded as follows: for a given size S, with 2^k ≤ S < 2^(k+1), the (k + 1)-th element of the histogram is incremented. This array is of size N × max_size_histogram, where N is the number of nodes in the MPI application.
max_size_histogram is a constant value corresponding to the number of elements in the size_histogram array for each process. It is stored here to avoid having its value scattered throughout the code. It is used to compute the total size of the memory block to be allocated, initialized, reset or freed, which equals (10 + max_size_histogram) × N, where N is the number of nodes in the MPI application. It is also used to compute the index of the histogram of a given process p; this index equals i × max_size_histogram, where i is p's id in MPI_COMM_WORLD.
log10_2 is a cached value of the common (decimal) logarithm of 2. It is used to compute the index at which to increment the histogram. For a non-empty message of size S, this index is j = 1 + ⌊log10(S)/log10(2)⌋, where log10 is the decimal logarithm. A small sketch of this computation is given after this list.
rank_world is the cached rank of the current process in MPI_COMM_WORLD.
nprocs_world is the cached value of the size of MPI_COMM_WORLD.
common_monitoring_translation_ht is the hash table used to translate the rank of any process p in any communicator into its rank in MPI_COMM_WORLD. It lives as long as the monitoring components do.
In any case, we never monitor communications between one process and itself.
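To make the histogram indexing concrete, here is a minimal sketch of how one recorded message could update the structures described above; the helper name and its signature are illustrative only, not the actual Open MPI code.

    #include <math.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Sketch (hypothetical helper): update the per-peer size histogram for one
     * message of `size` bytes sent to the peer of rank `world_rank` in
     * MPI_COMM_WORLD.  The parameters mirror the fields described above. */
    static void record_size(uint64_t *size_histogram, int max_size_histogram,
                            double log10_2, int world_rank, size_t size)
    {
        int index = world_rank * max_size_histogram;  /* start of this peer's slots */
        if (size > 0) {
            /* log-2 bucket: j = 1 + floor(log10(size) / log10(2)) */
            index += 1 + (int) floor(log10((double) size) / log10_2);
        }
        /* index 0 of a peer's histogram is reserved for empty messages */
        size_histogram[index]++;
    }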
The different functions to access the MPI_Tool performance variables are quite straightforward. Note that for the PML, OSC and COLL performance variables, for both count and size, the notify function is the same. At binding, it sets the count parameter to the size of MPI_COMM_WORLD, as requested by the MPI-3 standard (for arrays, this parameter should be set to the number of elements of the array). The notify function is also responsible for starting the monitoring when any monitoring performance variable handle is started, and it disables the monitoring when any monitoring performance variable handle is stopped. The flush control variable behaves as follows. On binding, it returns the size of the filename defined, if any, and 0 otherwise. On the start event, this variable also enables the monitoring, as the performance variables do, but it disables the final output, even if it was previously requested by the end user. On the stop event, this variable flushes the monitored data to the proper output stream (i.e. stdout, stderr or the requested file). Note that these variables are to be bound only to the MPI_COMM_WORLD communicator. So far, the behavior in case of a binding to another communicator has not been tested.
The flushing itself is decomposed into two functions. The first one (mca_common_monitoring_flush) is responsible for opening the proper stream. If it is given 0 as its first parameter, it does nothing and no error is propagated, as this corresponds to a disabled monitoring. The filename parameter is only taken into account if fd is strictly greater than 2. Note that upon flushing, the record arrays are reset to 0. Also, the flush done in common_monitoring.c calls the specific flush routine for the per-communicator collective monitoring data.
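As an illustration of the stream selection described above, a minimal sketch could look as follows; the real mca_common_monitoring_flush has a different signature and more error handling, so treat this as an assumption-laden simplification.

    #include <stdio.h>

    /* Sketch: pick the output stream according to fd.
     * fd == 0: monitoring disabled, nothing to do (no error propagated);
     * fd == 1: stdout; fd == 2: stderr; fd > 2: open the requested file. */
    static FILE *monitoring_open_stream(int fd, const char *filename)
    {
        if (fd == 0) return NULL;
        if (fd == 1) return stdout;
        if (fd == 2) return stderr;
        return fopen(filename, "w");   /* filename only used when fd > 2 */
    }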
For historical reasons, and because of the fact that the PML layer is the first one to be loaded, MCA parameters and the monitoring_flush control variable are linked to the PML framework. The other performance variables, though, are linked to the proper frameworks.
Common Coll Monitoring
In addition to the monitored data kept in the arrays, the monitoring component also provides a per-communicator set of records. It keeps pieces of information about collective operations. As we cannot know how the data are actually exchanged (see Section 9.4), we added this complement to the final summary of the monitored operations.
We keep the per-communicator data set as part of the coll_monitoring_module. Each data set is also kept in a hash table, with the communicator structure address as the hash key. This data set is meant to keep track of the amount of data sent through a communicator with collective operations and the count of each kind of operation. It also caches the list of the processes' ranks, translated to their ranks in MPI_COMM_WORLD, as a string, the rank of the current process, translated into its rank in MPI_COMM_WORLD, and the communicator's name.
The process list is generated with the following algorithm. First, we allocate a string long enough to contain it. We define long enough as 1 + (d + 2) × s, where d is the number of digits of the highest rank in MPI_COMM_WORLD and s the size of the current communicator. We add 2 to d to account for the comma and the space between ranks, and 1 to ensure there is enough room for the NULL character terminating the string. Then, we fill the string with the proper values and adjust the final size of the string, as in the sketch below.
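A possible implementation of this string generation, with illustrative names (the actual Open MPI helper differs), is sketched here.

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: build the comma-separated list of the world ranks of a
     * communicator.  `world_ranks` holds the translated ranks, `comm_size`
     * is the communicator size, `max_digits` the number of digits of the
     * highest rank in MPI_COMM_WORLD. */
    static char *build_rank_list(const int *world_ranks, int comm_size, int max_digits)
    {
        size_t length = 1 + (size_t)(max_digits + 2) * comm_size; /* ", " + '\0' */
        char *list = malloc(length);
        if (NULL == list) return NULL;
        size_t offset = 0;
        for (int i = 0; i < comm_size; ++i) {
            offset += snprintf(list + offset, length - offset, "%s%d",
                               i ? ", " : "", world_ranks[i]);
        }
        return list;  /* the caller may shrink the allocation to offset + 1 bytes */
    }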
When possible, this process happen when the communicator is being created. If it fails, this process will be tested again when the communicator is being released.
The lifetime of this data set differs from that of its corresponding communicator: it is actually destroyed only once its data have been flushed (at the end of the execution or at the end of a monitoring phase). To this end, the structure keeps a flag indicating whether it is safe to release it or not.
PML
As specified in Section 9.1.1, this component is closely working with the common component. They were merged initially, but separated later in order to propose a cleaner and more logical architecture.
This module is the first one to be initialized by the Open MPI software stack ; thus it is the one responsible for the proper initialization, as an example, of the translation hash table. Open MPI relies on the PML layer to add process logical structures as far as communicators are concerned.
To this end, and because of the way the PML layer is managed by the MCA engine, this component has some specific variables to manage its own state, in order to be properly instantiated. The module selection process works as follows. All the PML modules available for the framework are loaded, initialized and asked for a priority. The higher the priority, the higher the odds of being selected. This is why our component returns a priority of 0. Note that the priority is returned, and the initialization of the common module done, at this point only if the monitoring has been requested by the user.
If everything works properly, we should not be selected. The next step in the PML initialization is to finalize every module that is not the selected one, and then close the components that were not used. At this point the winning component and its module are saved for the PML. The variables mca_pml_base_selected_component and mca_pml, defined in ompi/mca/pml/base/pml_base_frame.c, are now initialized. This is the point where we install our interception layer. We also mark ourselves as initialized, in order to know, on the next call to the component_close function, that we actually have to be closed this time. Note that adding our layer requires setting the MCA_PML_BASE_FLAG_REQUIRE_WORLD flag in order to request that the whole list of processes be given at the initialization of MPI_COMM_WORLD, so we can properly fill our hash table. The downside of this trick is that it defeats the Open MPI optimization of adding them lazily.
Once that is done, we are properly installed, and we can monitor every message going through the PML layer. As we only monitor messages on the sender side, we only record messages issued via the MPI_Send, MPI_Isend or MPI_Start functions.
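The recording itself then reduces to an accumulation into the arrays described earlier; the following sketch uses a hypothetical name and signature, and only the role of the arrays matches the text (forwarding the call to the real PML is elided).

    #include <stdint.h>
    #include <stddef.h>

    /* Arrays playing the role of pml_data/pml_count and their filtered
     * counterparts (declared elsewhere in the real component). */
    extern uint64_t pml_data[], pml_count[], filtered_pml_data[], filtered_pml_count[];

    static void record_send(int dst_world_rank, size_t bytes, int tag, int filtering)
    {
        if (filtering && tag < 0) {        /* internal message, e.g. collective split */
            filtered_pml_data[dst_world_rank]  += bytes;
            filtered_pml_count[dst_world_rank] += 1;
        } else {
            pml_data[dst_world_rank]  += bytes;
            pml_count[dst_world_rank] += 1;
        }
    }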
OSC
This layer is responsible for remote memory access operations and thus has its own specificities. Even though the component selection process is quite close to that of the PML, some aspects of the way OSC modules are used forced us to adapt the interception layer.
The first problem comes from how the module is accessed inside the components. In the OSC layer, the module is part of the ompi_win_t structure. This implies that it is possible to directly access the proper field of the structure to find the reference to the module, and this is how it is done. Because of that, it is not possible to directly replace a module with one of ours that would have saved the original module. The first solution was then to "extend" the module (in the ompi manner of extending objects) with a structure whose first field would have been a union type of every possible module. We would then have copied their field values, saved their functions, and replaced them with pointers to our interception functions. This solution was implemented, but a second problem was faced, stopping us from going with it.
The second problem was that the osc/rdma component internally uses a hash table to keep track of its modules and allocated segments, with the module's pointer address as the hash key. Hence, it was not possible for us to modify this address, as the RDMA module would not have been able to find the corresponding segments. This also implies that it is not possible for us to extend the structures either. Therefore, we could only modify the common fields of the structures to keep our "module" adapted to any OSC component. We designed templates, dynamically adapted to each kind of module.
To this end, and for each kind of OSC module, we generate and instantiate three variables: OMPI_OSC_MONITORING_MODULE_VARIABLE(template) is the structure that keeps the addresses of the original module functions of a given component type (i.e. RDMA, PORTALS4, PT2PT or SM). It is initialized once, and referred to in order to propagate the calls after the initial interception. There is one generated for each kind of OSC component.
OMPI_OSC_MONITORING_MODULE_INIT(template) is a flag to ensure the module variable is only initialized once, in order to avoid race conditions. There is one generated for each OMPI_OSC_MONITORING_MODULE_VARIABLE(template), thus one per kind of OSC component.
OMPI_OSC_MONITORING_TEMPLATE_VARIABLE(template) is a structure containing the address of the interception functions. There is one generated for each kind of OSC component.
The interception is done with the following steps. First, we follow the selection process. Our priority is set to INT_MAX in order to ensure that we are the selected component. Then we do this selection ourselves. This gives us the opportunity to modify the communication module as needed. If it is the first time a module of this kind of component is used, we extract the functions' addresses from the given module and save them to the OMPI_OSC_MONITORING_MODULE_VARIABLE(template) structure, after setting the initialization flag. Then we replace the original functions in the module with our interception ones, as sketched below.
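The following sketch, with simplified hypothetical types, illustrates this save-then-replace pattern; the real code manipulates the OSC module structure through the generated per-component variables described above.

    typedef struct {
        int (*osc_put)(void);
        int (*osc_get)(void);
    } osc_module_t;

    static osc_module_t saved_module;      /* original functions, saved once     */
    static int          saved_module_init; /* guards the one-time initialization */
    static osc_module_t template_module;   /* our interception functions         */

    static void intercept_osc_module(osc_module_t *module)
    {
        if (!saved_module_init) {
            saved_module_init = 1;
            saved_module = *module;          /* keep the original addresses */
        }
        /* Overwrite the entry points in place: the module address itself is not
         * changed, so osc/rdma still finds its segments in its own hash table. */
        module->osc_put = template_module.osc_put;
        module->osc_get = template_module.osc_get;
    }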
To make everything work for each kind of component, the variables are generated with the corresponding interception functions. These operations are done at compilation time. An issue appeared with the use of PORTALS4, whose symbols are only propagated when the cards are available on the system. In the header files, where we define the template functions and structures, template refers to the OSC component name.
We found two drawbacks to this solution. First, the readability of the code is poor. Second, this solution does not automatically adapt to new components. If a new component is added, the code in ompi/mca/osc/monitoring/osc_monitoring_component.c needs to be modified in order to monitor the operations going through it. Even though the modification is three lines long, it may be preferable to have the monitoring work without any modification related to other components.
A second solution for the OSC monitoring could have been the use of a hash table. We would have saved in the hash table the structure containing the original functions' addresses, with the module address as the hash key. Our interception functions would then have searched the hash table for the corresponding structure on every call, in order to propagate the function calls. This solution was not implemented because it incurs a higher memory footprint when a large number of windows is allocated. Also, the cost of our interceptions would then have been higher, because of the search in the hash table; this was the main reason we chose the first solution. The OSC layer is designed to be very cost-effective in order to take the best advantage of background communication and communication/computation overlap. This solution would, however, have given us the adaptability our solution lacks.
COLL
The collective module (or, to be closer to reality, modules) is part of the communicator. The module selection is made with the following algorithm. First, all available components are selected, queried and sorted in ascending order of priority. A module may provide some or all of the operations, keeping in mind that modules with higher priority may take its place. The sorted list of modules is iterated over and, for each module and each operation, if the function's address is not NULL, the previous module is replaced with the current one, and so is the corresponding function. Every time a module is selected it is retained and enabled (i.e. the coll_module_enable function is called), and every time it gets replaced, it is disabled (i.e. the coll_module_disable function is called) and released.
When the monitoring module is queried, the priority returned is INT_MAX to ensure that our module comes last in the list. Then, when enabled, all the previous function-module couples are kept as part of our monitoring module. The modules are retained to avoid having them freed when released by the selection process. To preserve error detection in the communicator (i.e. an incomplete collective API), if, for a given operation, there is no corresponding module given, we set this function's address to NULL. Symmetrically, when our module is released, we propagate this call to each underlying module, and we also release the objects. Also, when the module is enabled, we initialize the per-communicator data record, which gets released when the module is disabled.
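A simplified sketch of the enable step, with hypothetical types reduced to a single operation, is given below; the actual module handles the full set of collective operations.

    #include <stddef.h>

    typedef int (*bcast_fn_t)(void *buf, int count, void *module);

    typedef struct {
        bcast_fn_t coll_bcast;        /* previous function for the broadcast      */
        void      *coll_bcast_module; /* module owning that function (retained)   */
    } coll_couple_t;

    static int monitoring_enable(coll_couple_t *mine, const coll_couple_t *previous)
    {
        if (NULL == previous->coll_bcast) {
            /* incomplete collective API: keep the hole visible for error detection */
            mine->coll_bcast = NULL;
            return -1;
        }
        mine->coll_bcast        = previous->coll_bcast;
        mine->coll_bcast_module = previous->coll_bcast_module;
        return 0;
    }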
When a collective operation is called, whether blocking or non-blocking, we intercept the call and record the data in two different entries. The operations are grouped into three kinds: one-to-all operations, all-to-one operations and all-to-all operations.
For one-to-all operations, the root process of the operation computes the total amount of data to be sent and keeps it as part of the per-communicator data (see Section 9.1.2). Then it updates the common_monitoring array with the amount of data each peer has to receive in the end. As we cannot predict the actual algorithm used to communicate the data, we assume the root sends everything directly to each process.
For all-to-one operations, each non-root process computes the amount of data to send to the root and updates the common_monitoring array at index i, with i being the rank in MPI_COMM_WORLD of the root process. As we cannot predict the actual algorithm used to communicate the data, we assume each process sends its data directly to the root. The root process computes the total amount of data to receive and updates the per-communicator data.
For all-to-all operations, each process computes, for every other process, the amount of data to both send to and receive from it. The amount of data to be sent to each process p is added to the common_monitoring array at index i, with i being the rank of p in MPI_COMM_WORLD. The total amount of data sent by a process is also added to the per-communicator data.
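As an example, the one-to-all accounting described above could be sketched as follows; the names and the signature are illustrative, and only the role of the arrays and of the rank translation matches the text.

    #include <stdint.h>

    /* Sketch: the root charges `bytes_per_peer` to every other peer in the
     * coll_data/coll_count arrays and the total to the per-communicator
     * record, assuming a direct root-to-peer exchange. */
    static void record_one_to_all(uint64_t *coll_data, uint64_t *coll_count,
                                  const int *world_ranks, int comm_size,
                                  int my_world_rank, uint64_t bytes_per_peer,
                                  uint64_t *comm_total)
    {
        for (int i = 0; i < comm_size; ++i) {
            int peer = world_ranks[i];            /* translated via the hash table   */
            if (peer == my_world_rank) continue;  /* self-communication never counted */
            coll_data[peer]  += bytes_per_peer;
            coll_count[peer] += 1;
            *comm_total      += bytes_per_peer;   /* per-communicator record */
        }
    }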
For every rank translation, we use the common_monitoring_translation_ht hash table.
Figure 1: Monitoring overhead for MPI_Send, MPI_Bcast, MPI_Alltoall, MPI_Put and MPI_Get operations. Left: distributed memory, right: shared memory. Panels: (a) MPI_Send, (b) MPI_Send (prog. overhead), (c) MPI_Bcast.
Figure 1 (cont.): Monitoring overhead for MPI_Send, MPI_Alltoall and MPI_Put operations. Left: distributed memory, right: shared memory.
Figure 2: Microbenchmark experiments.
Figure 3: Minighost application overhead as a function of the communication percentage of the total execution time.
Figure 4: MPI_Reduce optimization.
Figure 5: Average gain of TreeMatch placement vs. Round Robin and random placements for various Minighost runs.
Values of pml_monitoring_enable_output:
0 : final output flushing is disabled
1 : final output flushing is done in the standard output stream (stdout)
2 : final output flushing is done in the error output stream (stderr)
≥ 3 : final output flushing is done in the file whose name is given with the pml_monitoring_filename parameter
--mca pml ^monitoring : disables the monitoring component of the PML framework
--mca osc ^monitoring : disables the monitoring component of the OSC framework
--mca coll ^monitoring : disables the monitoring component of the COLL framework
8.2 Without MPI_Tool
1. MPI_T_init_thread : initialize the MPI_Tools interface
2. MPI_T_pvar_get_index : retrieve the variable id
3. MPI_T_pvar_session_create : create a new context in which you use your variable
4. MPI_T_pvar_handle_alloc : bind your variable to the proper session and MPI object
5. MPI_T_pvar_start : start the monitoring
6. Now you do all the communications you want to monitor
7. MPI_T_pvar_stop : stop and flush the monitoring
test_monitoring.c (extract)
    #include <stdlib.h>
    #include <stdio.h>
    #include <mpi.h>

    static const void* nullbuff = NULL;
    static MPI_T_pvar_handle flush_handle;
    static const char flush_pvar_name[] = "pml_monitoring_flush";
    static const char flush_cvar_name[] = "pml_monitoring_enable";
    static int flush_pvar_idx;

    int main(int argc, char*
As shown in the sample profiling, for each kind of communication (point-to-point, one-sided and collective), you find all the related information. There is one line per pair of communicating peers. Each line starts with a letter describing the kind of communication, as follows:
E : external messages, i.e. issued by the user
I : internal messages, i.e. issued by the library
S : sent one-sided messages, i.e. writing access to the remote memory
R : received one-sided messages, i.e. reading access to the remote memory
C : collective messages
test/monitoring/example_reduce_count.c (extract) MPI_T_pvar_handle count_handle; int count_pvar_idx; const char count_pvar_name[] = "pml_monitoring_messages_count"; uint64_t*counts; /* Retrieve the proper pvar index */ MPIT_result = MPI_T_pvar_get_index(count_pvar_name, MPI_T_PVAR_CLASS_SIZE, &count_pvar_idx); if (MPIT_result != MPI_SUCCESS) { printf("cannot find monitoring MPI_T \"%s\" pvar, check that" " you have monitoring pml\n", count_pvar_name); MPI_Abort(MPI_COMM_WORLD, MPIT_result); } /* Allocating a new PVAR in a session will reset the counters */ MPIT_result = MPI_T_pvar_handle_alloc(session, count_pvar_idx, MPI_MAX, MPI_COMM_WORLD); /* OPERATIONS ON COUNTS */ ... free(counts); MPIT_result = MPI_T_pvar_stop(session, count_handle); if (MPIT_result != MPI_SUCCESS) {
Table 1: Overhead for the BT, CG and LU NAS kernels.
Kernel Class NP Monitoring time Non mon. time #msg/proc Overhead #msg/sec
bt A 16 6.449 6.443 2436.25 0.09% 6044.35
bt A 64 1.609 1.604 4853.81 0.31% 193066.5
bt B 16 27.1285 27.1275 2436.25 0.0% 1436.87
bt B 64 6.807 6.8005 4853.81 0.1% 45635.96
bt C 16 114.6285 114.5925 2436.25 0.03% 340.06
bt C 64 27.23 27.2045 4853.81 0.09% 11408.15
cg A 16 0.1375 0.1365 1526.25 0.73% 177600.0
cg A 32 0.103 0.1 2158.66 3.0% 670650.49
cg A 64 0.087 0.0835 2133.09 4.19% 1569172.41
cg B 8 11.613 11.622 7487.87 -0.08% 5158.27
cg B 16 6.7695 6.7675 7241.25 0.03% 17115.0
cg B 32 3.8015 3.796 10243.66 0.14% 86228.33
cg B 64 2.5065 2.495 10120.59 0.46% 258415.32
cg C 32 9.539 9.565 10243.66 -0.27% 34363.87
cg C 64 6.023 6.0215 10120.59 0.02% 107540.76
lu A 8 8.5815 8.563 19793.38 0.22% 18452.14
lu A 16 4.2185 4.2025 23753.44 0.38% 90092.45
lu A 32 2.233 2.2205 25736.47 0.56% 368816.39
lu A 64 1.219 1.202 27719.36 1.41% 1455323.22
lu B 8 35.2885 35.2465 31715.88 0.12% 7190.08
lu B 16 18.309 18.291 38060.44 0.1% 33260.53
lu B 32 9.976 9.949 41235.72 0.27% 132271.75
lu B 64 4.8795 4.839 44410.86 0.84% 582497.18
lu C 16 72.656 72.5845 60650.44 0.1% 13356.19
lu C 32 38.3815 38.376 65708.22 0.01% 54783.24
lu C 64 20.095 20.056 70765.86 0.19% 225380.19
collective algorithm communications. With this pattern, we have computed a
A proof-of-concept version of this monitoring has been implemented in MPICH.
Nevertheless, a precise monitoring is still possible with the use of the monitoring API.
Acknowledgments
This work is partially funded under the ITEA3 COLOC project #13024, and by the USA NSF grant #1339820. The PlaFRIM experimental testbed is being developed with support from Inria, LaBRI, IMB and other entities: Conseil Régional d'Aquitaine, FeDER, Université de Bordeaux and CNRS.
MPI_Comm_rank(MPI_COMM_WORLD, &rank); MPI_Comm_size(MPI_COMM_WORLD, &size); to = (rank + 1) % size; from = (rank + size - | 75,644 | [
"748686",
"15678",
"176883"
] | [
"135613",
"409750",
"409750",
"409750",
"456313"
] |
01485251 | en | [
"phys"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01485251/file/SooPRE2017_Postprint.pdf | Heino Soo
David S Dean
Matthias Krüger
Particles with nonlinear electric response: Suppressing van der Waals forces by an external field
We study the classical thermal component of Casimir, or van der Waals, forces between point particles with highly anharmonic dipole Hamiltonians when they are subjected to an external electric field. Using a model for which the individual dipole moments saturate in a strong field (a model that mimics the charges in a neutral, perfectly conducting sphere), we find that the resulting Casimir force depends strongly on the strength of the field, as demonstrated by analytical results. For a certain angle between the external field and center-to-center axis, the fluctuation force can be tuned and suppressed to arbitrarily small values. We compare the forces between these particles with those between particles with harmonic Hamiltonians and also provide a simple formula for asymptotically large external fields, which we expect to be generally valid for the case of saturating dipole moments.
I. INTRODUCTION
Neutral bodies exhibit attractive forces, called van der Waals or Casimir forces depending on context. The earliest calculations were formulated by Casimir, who studied the force between two metallic parallel plates [1], and generalized by Lifshitz [2] for the case of dielectric materials. Casimir and Polder found the force between two polarizable atoms [3]. Although van der Waals forces are only relevant at small (micron scale) distances, they have been extensively measured (see, e.g., Refs. [4,5]). With recent advances in measurement techniques, including the microelectromechanical systems (MEMS) framework [START_REF] Gad-El Hak | The MEMS Handbook[END_REF], Casimir-Polder forces become accessible in many other interesting conditions.
Due to the dominance of van der Waals forces in nanoscale devices, there has been much interest in controlling such forces. The full Lifshitz theory for van der Waals forces [2] shows their dependence on the electrical properties of the materials involved. Consequently, the possibility of tuning a material's electric properties opens up the possibility of tuning fluctuation-induced interactions. This principle has been demonstrated in a number of experimental setups, for instance, by changing the charge carrier density of materials via laser light [START_REF] Chen | [END_REF]8], as well as inducing phase transformations by laser heating, which of course engenders a consequent change in electrical properties [8]. There is also experimental evidence of the reduction of van der Waals forces for refractive-indexmatched colloids [9][10][11]. The question of forces in external fields, electric and magnetic, has been studied in several articles [12][13][14][15][16][17][18][19]. When applying external fields, materials with a nonlinear electric response (which exhibit "nonlinear optics") open up a variety of possibilities; these possibilities are absent in purely linear systems where the external field and fluctuating field are merely superimposed. Practically, metamaterials are promising candidates for Casimir force modulation, as they can exhibit strongly nonlinear optical properties [20,21] and their properties can be tuned by external fields [22]. The nature and description of fluctuation-induced effects in nonlinear systems are still under active research [23][24][25][26], including critical systems, where the underlying phenomenon is per se nonlinear [11]. For example, in Ref. [26], it was shown that nonlinear properties may alter Casimir forces over distances in the nanoscale. However, in the presence of only a small number of explicit examples, more research is needed to understand the possibilities opened up by nonlinear materials.
In this article, we consider an analytically solvable model for (anharmonic) point particles with strongly nonlinear responses. This is achieved by introducing a maximal, limiting value for the polarization of the particles, i.e., by confining the polarization vector in anharmonic potential wells. Casimir forces in such systems appear to be largely unexplored, even at the level of two-particle interactions. We find that strong external electric fields can be used to completely suppress the Casimir force in such systems. We discuss the stark difference of forces compared with the case of harmonic dipoles and give an asymptotic formula for the force in strong external fields, which we believe is valid in general if the involved particles have a maximal value for the polarization (saturate). In order to allow for analytical results, we restrict our analysis to the classical (high temperature) limit. However, similar effects are to be expected in quantum (low temperature) cases.
We start by computing the Casimir force for harmonic dipoles in an external field in Sec. II, where in Sec. II B we discuss the role of the angle between the field and the center-to-center axis. In Sec. III A we introduce the nonlinear (anharmonic) well model and compute the Casimir force in an external field in Sec. III C. We finally give an asymptotic expression for high fields in Sec. III D.
II. FORCE BETWEEN HARMONIC DIPOLES IN A STATIC EXTERNAL FIELD
A. Model
Classical van der Waals forces can be described by use of quadratic Hamiltonians describing the polarization of the particles involved [27][28][29]. We introduce the system comprising two dipole carrying particles having the Hamiltonian,
H^{(h)} = H^{(h)}_1 + H^{(h)}_2 + H_{\mathrm{int}} ,   (1)
H^{(h)}_i = p_i^2 / (2\alpha) - \mathbf{p}_i \cdot \mathbf{E} ,   (2)
H_{\mathrm{int}} = -2k\,[\,3(\mathbf{p}_1 \cdot \hat{\mathbf{R}})(\mathbf{p}_2 \cdot \hat{\mathbf{R}}) - \mathbf{p}_1 \cdot \mathbf{p}_2\,] ,   (3)
where p i is the instantaneous dipole moments of particle i.
Here α denotes the polarizability, where, for simplicity of presentation, we choose identical particles. The external, homogeneous static electric field E couples to p i in the standard manner. The term H int describes the nonretarded dipole-dipole interaction in d = 3 dimensions with the coupling constant
k = \frac{1}{4\pi\varepsilon_0}\, R^{-3} ,   (4)
where R = |R| with R the vector connecting the centers of the two dipoles, while R denotes the corresponding unit vector. Since we are considering purely classical forces, retardation is irrelevant. Here ε 0 is the vacuum permittivity, and we use SI units. Inertial terms are irrelevant as well and have been omitted. (Since the interaction does not depend on, e.g., the change of p i with time, inertial parts can be integrated out from the start in the classical setting.)
B. Casimir force as a function of the external field
The force F for the system given in Eqs. ( 1)-( 3), at fixed separation R, can be calculated from (as the external electric field is stationary, the system is throughout in equilibrium)
F = 1 β ∂ R ln Z, ( 5
)
where Z = d 3 p 1 d 3 p 2 exp (-βH ) is the partition function, with the inverse temperature β = 1/k B T and H is the Hamiltonian of the system. By using the coupling constant k from Eq. ( 4), this may also be written as
F = 1 β (∂ R k) ∂ k Z Z . ( 6
)
Furthermore, we are interested in the large separation limit, and write the standard series in inverse center-to-center distance (introducing R ≡ |R|),
F = \frac{1}{\beta}(\partial_R k)\left.\frac{\partial_k Z}{Z}\right|_{k=0} + \frac{1}{\beta}(\partial_R k)\,k\left.\left[\frac{\partial_k^2 Z}{Z} - \left(\frac{\partial_k Z}{Z}\right)^2\right]\right|_{k=0} + O(R^{-10}) .   (7)
In this series, the first term is of order R^{-4}, while the second is of order R^{-7}. The external electric field induces finite (average) dipole moments. For an isolated particle, this is (index 0 denoting an isolated particle, or k = 0)
\langle \mathbf{p}_i \rangle_0 = \frac{\int d^3p_i\, \exp(-\beta H_i)\, \mathbf{p}_i}{\int d^3p_i\, \exp(-\beta H_i)} .   (8)
For the case of harmonic particles, Eq. ( 2), this naturally gives
\langle \mathbf{p}_i \rangle_0 = \alpha \mathbf{E} .   (9)
FIG. 1. Casimir force between harmonic dipoles as a function of the strength of the external field. The angle between the field and the center-to-center vector R is chosen ϕ = arccos ( 1 √ 3 ). The force component decaying with ∼R -4 [discussed after Eq. ( 10)] then vanishes, so that the force decays as ∼R -7 .
The mean dipole moments of the isolated particles in Eq. ( 9), induced by the external electric field, give rise to a force decaying as R -4 , i.e., the first term in Eq. ( 7). This can be made more explicit by writing
\left.\frac{\partial_k Z}{Z}\right|_{k=0} = 2\,\langle \mathbf{p}_1 \rangle_0 \cdot \langle \mathbf{p}_2 \rangle_0 - 6\,(\langle \mathbf{p}_1 \rangle_0 \cdot \hat{\mathbf{R}})(\langle \mathbf{p}_2 \rangle_0 \cdot \hat{\mathbf{R}}) .   (10)
Representing a force decaying as R -4 , this term dominates at large separations. From Eq. ( 10), the dependence on the angle between E and R becomes apparent. The induced force to order R -4 can be either attractive (e.g., R E) or repulsive (e.g., R ⊥ E) [START_REF] Jackson | Classical Electrodynamics[END_REF]. We are aiming at reducing the Casimir force through the electric field, and thus, term by term, try to obtain small prefactors. The considered term ∼R -4 is readily reduced by choosing R • Ê = 1 √ 3 , for which this term is exactly zero, ( ∂ k Z Z ) k=0 = 0. See the inset of Fig. 1 for an illustration. In the following sections we will thus study the behavior of the term ∼R -7 as a function of the external field, keeping this angle throughout.
C. Force for the angle \hat{\mathbf{R}} \cdot \hat{\mathbf{E}} = 1/\sqrt{3}
For \hat{\mathbf{R}} \cdot \hat{\mathbf{E}} = 1/\sqrt{3}, the force is of order R^{-7} for large R, and reads
F\big|_{\hat{\mathbf{R}}\cdot\hat{\mathbf{E}} = 1/\sqrt{3}} = \frac{\partial_R k^2}{2\beta}\left.\frac{\partial_k^2 Z}{Z}\right|_{k=0} + O(R^{-10}) .   (11)
The discussion up to here, including Eq. (11), is valid generally, i.e., for any model describing individual symmetric particles, where the induced polarization is in the direction of the applied field. For the case of harmonic dipoles, i.e., for Eq. (2), we denote F = F_h. Calculating (\partial_k^2 Z / Z)|_{k=0} for this case yields a result which is partly familiar from the case of harmonic dipoles in the absence of external fields (denoted F_0),
F_h = \left(1 + \tfrac{2}{3}\,\alpha\beta E^2\right) F_0 + O(R^{-10}) ,   (12)
F_0 = -\frac{72}{\beta} \left(\frac{\alpha}{4\pi\varepsilon_0}\right)^{2} R^{-7} .   (13)
Again, for zero field, E → 0, this is in agreement with the Casimir-Polder force in the classical limit [27], given by F 0 .
As the field is applied, the force increases, being proportional to E 2 for αβE 2 1. This is due to interactions of a dipole induced by the E field with a fluctuating dipole [compare also (34) below]. The term proportional to E 2 is naturally independent of T . The force as a function of external field is shown in Fig. 1.
The Casimir force given by Eq. ( 12) is thus tunable through the external field, but it can only be increased due to the square power law. While this might be useful for certain applications, we shall in the following investigate the case of highly nonlinear particles. The fact that the force in Eq. ( 13) is proportional to α 2 suggests that reduction of the force could be achieved, if the polarizabilities were dependent on the external field. In the next section, we will investigate a model for saturating particle dipole moments, where indeed the forces can be suppressed.
III. FORCE BETWEEN SATURATING DIPOLES IN AN EXTERNAL FIELD
A. Model: Infinite wells
The response of a harmonic dipole to an external field is by construction linear for any value of the field [see Eq. ( 9)], and the polarization can be increased without bound. We aim here to include saturation by introducing a limit P for the polarization, such that |p i | < P at all times and for all external fields. This can be achieved by modifying the Hamiltonian in Eq. ( 2), assigning an infinite value for |p i | > P . The potential for |p i | obtained in such a way is illustrated in Fig. 2.
As we aim to study the effect of saturation, while keeping the number of parameters to a minimum, we additionally take the limit α → ∞. This yields an infinite well potential (see FIG. 2. Illustration of a simple potential for the individual dipoles, which describes saturation. A parabola of curvature α -1 is cut off by a hard "wall" at the value P . Practically, we simplify even further by letting the polarizability α tend to infinity, so that the potential of Eq. ( 14) is approached. Physically, α → ∞ means α βP 2 .
the lower curves of Fig. 2 for the approach of this limit),
H^{(w)}_i = \begin{cases} -\mathbf{p}_i \cdot \mathbf{E} , & |\mathbf{p}_i| < P , \\ \infty , & \text{otherwise} . \end{cases}   (14)
Such models have been studied extensively in different contexts, as, e.g., asymmetric quantum wells of various shapes [START_REF] Rosencher | [END_REF][32][33], two-level systems with permanent dipole moments [34], and dipolar fluids [35]. These systems are also known to be tunable with an external electric field [36,37]. However, the Casimir effect has not been investigated. This model, for example, mimics free electrons confined to a spherical volume, such as in a perfectly conducting, neutral sphere. The maximum value for the dipole moment in this case is the product of the radius and the total free charge of the sphere. The charge distribution in a sphere has, additionally to the dipole moment, higher multipole moments, e.g., quadrupolar. For a homogeneous external field, the Hamiltonian in Eq. ( 14) is, however, precise, as higher multipoles couple to spatial derivatives (gradients) of the field [START_REF] Jackson | Classical Electrodynamics[END_REF], and only the dipole moment couples to a homogeneous field. Also, the interaction part, Eq. ( 3), contains, in principle, terms with higher multipoles. These do not, however, play a role for the force at the order R -7 .
B. Polarization and polarizability
We start by investigating the polarization of an individual particle as a function of the field E, resulting from Eq. ( 14), which is defined in Eq. ( 8). It can be found analytically,
\langle \mathbf{p}_i \rangle_0 = Q(\beta E P)\, P\, \hat{\mathbf{E}} ,   (15)
Q(x) = \frac{1}{x}\, \frac{(x^2 - 3x + 3)\,e^{2x} - x^2 - 3x - 3}{(x - 1)\,e^{2x} + x + 1} .   (16)
Note that the product βEP is dimensionless. For a small external field, we find the average polarization is given by
\langle \mathbf{p}_i \rangle_0 = \tfrac{1}{5}\,\beta P^2 \mathbf{E} + O(E^3) .   (17)
We hence observe, as expected, that for a small field the particles respond linearly, with a polarizability α 0 ≡ 1 5 βP 2 . This polarizability depends on temperature, as it measures how strongly the particles' thermal fluctuations in the well are perturbed by the field. We may now give another interpretation of the limit α → 0 in Fig. 2: In order to behave as a "perfect" well, the curvature, given by α -1 , must be small enough to fulfill α α 0 . The normalized polarization [i.e., Q(βEP) = | p i 0 | P ] is shown in Fig. 3 as a function of external field. For small values of E, one sees the linear increase, according to Eq. ( 17). In the large field limit, the polarization indeed saturates to P Ê. The dimensionless axis yields the relevant scale for E, which is given through (βP ) -1 . At low temperature (or large P ), saturation is approached already for low fields, while at high temperature (or low P ), large fields are necessary for saturation.
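For numerical evaluation of Eq. (16), a direct implementation such as the following sketch can be used; note that it does not treat the removable singularity at x = 0, where the linear behavior Q(x) ≈ x/5 implied by Eq. (17) applies.

    #include <math.h>

    /* Normalized polarization Q(x) of Eq. (16), with x = beta*E*P (dimensionless). */
    static double Q_of_x(double x)
    {
        double e2x = exp(2.0 * x);
        double num = (x * x - 3.0 * x + 3.0) * e2x - x * x - 3.0 * x - 3.0;
        double den = (x - 1.0) * e2x + x + 1.0;
        return num / (x * den);
    }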
Another important quantity related to the polarization is the polarizability, which is a measure of how easy it is to induce or change a dipole moment in a system. For harmonic FIG. 3. Characterization of an isolated particle described by the well model. The mean dipole moment [see Eq. ( 15)] and polarizations [see Eqs. ( 20) and ( 21)]. P is the "width" of the well potential, and α 0 ≡ 1 5 βP 2 denotes the zero-field polarizability.
particles, it is independent of external fields [see Eq. ( 9)]. In the case of particles with a nonlinear response, the field-dependent polarizability tensor α ij is of interest. It is defined through the linear response,
α ij = ∂ p i ∂E j . ( 18
)
Note that this derivative is not necessarily taken at zero field E, so that α ij is a function of E. Indices i and j denote the components of vectors (in contrast to previous notation). The polarizability tensor as defined in Eq. ( 18) is measured in the absence of any other particle (in other words, at coupling k = 0). α ij can be deduced directly from the function Q in Eq. ( 16). In general, we can write
α ij (β,E,P ) = A ij (βEP )α 0 . ( 19
)
Recall the zero-field polarizability is given as \alpha_0 \equiv \tfrac{1}{5}\beta P^2 [see Eq. (17)]. For the isolated particle, the only special direction is provided by the external field E, and it is instructive to examine the polarizability parallel and perpendicular to it. Taking, for example, E along the z axis, the corresponding dimensionless amplitudes A_\parallel = A_{zz} and A_\perp = A_{xx} = A_{yy} are
A_\parallel(x) = 5\, \frac{d}{dx} Q(x) ,   (20)
A_\perp(x) = 5\, \frac{Q(x)}{x} .   (21)
The amplitudes for parallel and perpendicular polarizability are also shown in Fig. 3. The direct connection with the polarization is evident. For small fields, where the polarization grows linearly, the polarizability is independent of E. Analytically,
A_\parallel(x) = 1 - \tfrac{3}{35}\, x^2 + O(x^3) ,   (22)
A_\perp(x) = 1 - \tfrac{1}{35}\, x^2 + O(x^3) .   (23)
For large fields, i.e., when βEP is large compared to unity, the polarizability reduces due to saturation effects. Asymptotically for large fields, the polarizability amplitudes are given as
A_\parallel(x) = 10\, x^{-2} + O(x^{-3}) ,   (24)
A_\perp(x) = 5\, x^{-1} - 10\, x^{-2} + O(x^{-3}) .   (25)
The parallel polarizability \alpha_\parallel falls off as E^{-2} and the perpendicular polarizability \alpha_\perp as E^{-1}. The different power laws may be expected, as near saturation, changing the dipole's direction is a softer mode compared to changing the dipole's absolute value.
C. Casimir force
The Casimir force between particles described by the well potential, Eq. ( 14), is computed from the following Hamiltonian,
H^{(w)} = H^{(w)}_1 + H^{(w)}_2 + H_{\mathrm{int}} ,   (26)
H^{(w)}_i = \begin{cases} -\mathbf{p}_i \cdot \mathbf{E} , & |\mathbf{p}_i| < P , \\ \infty , & \text{otherwise} , \end{cases}   (27)
with the interaction potential H int given in Eq. ( 3). The discussion in Sec. II regarding the angle of the external field holds similarly here, i.e., Eq. ( 11) is valid and the force decaying as R -4 vanishes for the angle R • Ê = 1 √ 3 . Therefore, we continue by studying the R -7 term at this angle. Using Eq. ( 11), the Casimir force can be found analytically,
F_w = f_w(\beta E P)\, F_0 + O(R^{-10}) ,   (28)
with the zero-field force
F_0 = -\frac{72}{\beta} \left(\frac{\alpha_0}{4\pi\varepsilon_0}\right)^{2} R^{-7} ,   (29)
and the dimensionless amplitude
f_w(x) = \frac{25}{3}\, \frac{1}{x^4}\, \frac{(x^2 + 3)\sinh(x) - 3x\cosh(x)}{[\,x\cosh(x) - \sinh(x)\,]^2}\, \big[\,(2x^2 + 21)\, x\cosh(x) - (9x^2 + 21)\sinh(x)\,\big] .   (30)
Again, α 0 ≡ 1 5 βP 2 is the zero-field polarizability [see Eq. ( 17)]. The force is most naturally expressed in terms of F 0 , which is the force at zero field, equivalent to Eq. ( 13). The amplitude f w is then dimensionless and depends, as the polarization, on the dimensionless combination βEP.
The force is shown in Fig. 4. For zero external fields, the curve starts at unity by construction, where the force is given by F 0 . The force initially increases for small values of βEP, in accordance with our earlier analysis of harmonic dipoles. After this initial regime of linear response, the Casimir force decreases for βEP 1, and, for βEP 1, asymptotically approaches zero as E -1 ,
F_w = -\frac{48\, P^3}{(4\pi\varepsilon_0)^2}\, R^{-7}\, E^{-1} + O(E^{-2}) .   (31)
This behavior yields an enormous potential for applications: By changing the external field, the force can be switched on or off. The asymptotic law in Eq. ( 31) gives another intriguing insight: For large fields, the force is independent of temperature. This is in contrast to the fact that (classical) fluctuation-induced forces in general do depend on temperature. This peculiar observation is a consequence of cancellations between factors of β, and might yield further possibilities for applications. This is demonstrated in Fig. 5, where we introduced a reference temperature T 0 . Indeed, we see that for small values of E, the force does depend on temperature, while for large fields, the curves for different values of temperature fall on top of each other. As a remark, we note that F 0 is inversely proportional to temperature, in contrast to F 0 for harmonic particles in Eq. ( 13). This is because the zero-field polarizability depends on temperature for the well potentials considered here.
Regarding experimental relevance, it is interesting to note that, in a somewhat counterintuitive way, larger values of P lead to stronger dependence on the external field E (the important parameter is βEP). We thus expect that larger particles FIG. 5. Temperature dependence of the Casimir force for saturating particles. For small E, the force decreases with temperature because the zero-field polarizability is α 0 = 1 5 βP 2 . For large E, the force is unexpectedly independent of T . are better candidates for observing the effects discussed here. For example, for a gold sphere of radius 100 nm, we estimate P = 5 × 10 -19 Cm, so that βEP ∼ 1 for E = 10 mV/m at room temperature.
D. Asymptotic formula for high fields
What is the physical reason for the decay of the force for large field E observed in Fig. 4? For large values of βEP, the force may be seen as an interaction between a stationary dipole and a fluctuating one. This is corroborated by a direct computation of the force between a stationary dipole q, pointing in the direction of the electric field, and a particle with the Hamiltonian
H^{(s)}_1 = \frac{p_\parallel^2}{2\alpha_\parallel} + \frac{p_\perp^2}{2\alpha_\perp} - \mathbf{p} \cdot \mathbf{E} ,   (32)
where "perpendicular" and "parallel" refer to the direction of the E field as before. The two such hypothetical particles interact via the Hamiltonian
H (s) int = -2k[3(p • R)(q • R) -p • q]. (33)
Choosing the angle between R and E as before, we find for the force between these particles (to leading order in k),
F s = -24α ⊥ q 2 1 4πε 0 2 R -7 . ( 34
)
This result can be related to Eq. [START_REF] Rosencher | [END_REF]. Substituting q = P Ê, the value at saturation, and α ⊥ = 5/(βEP )α 0 = P /E [using the leading term for large field from Eq. ( 25)], we find
F s = -24 P 3 E 1 4πε 0 2 R -7 . ( 35
)
This is identical to Eq. ( 31), except for a factor of 2. This is expected, as this factor of 2 takes into account the force from the first fixed dipole interacting with the second fluctuating one and vice versa. We have thus demonstrated that Eq. ( 34) may be used to describe the behavior of the force for large values of E. The importance of this observation lies in the statement, that such reasoning might be applicable more generally: in the case of more complex behavior of p(E), i.e., more complex (or realistic) particles. We believe that the value of q at saturation and the polarizability α ⊥ near saturation can be used to accurately predict the force in the limit of large external fields via Eq. (34).
IV. SUMMARY
We have demonstrated how the classical Casimir-Polder force between two saturating dipoles can be suppressed by applying an external static electric field. Of special interest is the angle ϕ = arccos ( 1 √ 3 ) between the external field and the vector connecting the dipoles, for which the deterministic dipole-dipole interaction vanishes. The remaining "Casimir-Polder" part can then be tuned and is arbitrarily suppressed at large values of external fields due to the vanishing polarizability. The force in this case decays as E -1 . This is in strong contrast to harmonic dipoles, which experience an increase of the force in the presence of an external field, growing with E 2 . We also provided a simple formula to estimate the force between particles under strong fields. It would be interesting to extend the results here to macroscopic objects composed of such dipole carrying particles, where multibody effects will potentially change the physics for dense systems. However, for dilute systems, where the pairwise approximation of van der Waals forces is accurate, the results obtained here are directly applicable and thus the modulation of Casimir or van der Waals forces predicted here will apply to a certain extent. Of course, an important main difference in more than two-body systems is that the deterministic component of the interaction cannot be obviously canceled by a uniform electric field, as there is more than one center-to-center vector, denoted by R in this article, separating the interacting dipoles.
FIG. 4. Casimir force between two saturating particles in an external electric field E. The angle between the field and the vector R is ϕ = arccos(1/√3).
ACKNOWLEDGMENTS
We thank G. Bimonte, T. Emig, N. Graham, R. L. Jaffe, and M. Kardar for useful discussions. This work was supported by Deutsche Forschungsgemeinschaft (DFG) Grant No. KR 3844/2-1 and MIT-Germany Seed Fund Grant No. 2746830. | 24,116 | [
"14411"
] | [
"237119",
"498426",
"136813",
"237119",
"498426"
] |
01485412 | en | [
"info"
] | 2024/03/04 23:41:48 | 2017 | https://inria.hal.science/hal-01485412/file/iwspa17-alaggan-HAL-PREPRINT.pdf | Mohammad Alaggan
email: mohammad.alaggan@inria.fr
Mathieu Cunche
email: mathieu.cunche@inria.fr§marine.minier@loria.fr
Marine Minier
Non-interactive (t, n)-Incidence Counting from Differentially Private Indicator Vectors *
. Given one or two differen-
tially private indicator vectors, estimating the distinct count of elements in each [START_REF] Balu | Challenging Differential Privacy: The Case of Non-Interactive Mechanisms[END_REF] and their intersection cardinality (equivalently, their inner product [START_REF] Alaggan | BLIP: Non-Interactive Differentially-Private Similarity Computation on Bloom Filters[END_REF]) have been studied in the literature, along with other extensions for estimating the cardinality set intersection in case the elements are hashed prior to insertion [START_REF] Alaggan | Sanitization of Call Detail Records via Differentially-Private Bloom Filters[END_REF]. The core contribution behind all these studies was to address the problem of estimating the Hamming weight (the number of bits set to one) of a bit vector from its differentially private version, and in the case of inner product and set intersection, estimating the number of positions which are jointly set to one in both bit vectors.
We develop the most general case of estimating the number of positions which are set to one in exactly t out of n bit vectors (this quantity is denoted the (t, n)-incidence count), given access only to the differentially private version of those bit vectors. This means that if each bit vector belongs to a different owner, each can locally sanitize their bit vector prior to sharing it, hence the non-interactive nature of our algorithm.
Our main contribution is a novel algorithm that simultaneously estimates the (t, n)-incidence counts for all t ∈ {0, . . . , n}. We provide upper and lower bounds to the estimation error.
Our lower bound is achieved by generalizing the limit of two-party differential privacy [START_REF] Mcgregor | The Limits of Two-Party Differential Privacy[END_REF] into nparty differential privacy, which is a contribution of independent interest. In particular we prove a lower bound on the additive error that must be incurred by any n-wise inner product of n mutually differentiallyprivate bit vectors.
Our results are very general and are not limited to differentially private bit vectors. They should apply to a large class of sanitization mechanism of bit vectors which depend on flipping the bits with a constant probability.
Some potential applications for our technique include physical mobility analytics [START_REF] Musa | Tracking unmodified smartphones using wi-fi monitors[END_REF], call-detailrecord analysis [START_REF] Alaggan | Sanitization of Call Detail Records via Differentially-Private Bloom Filters[END_REF], and similarity metrics computation [START_REF] Alaggan | BLIP: Non-Interactive Differentially-Private Similarity Computation on Bloom Filters[END_REF].
Introduction
Consider a set of n bit vectors, each of size m. Let a be the vector with m components, in which a i ∈ {0, . . . , n} is the sum of the bits in the i-th position in each of the n bit vectors. Then the (t, n)-incidence count is the number of positions i such that a i = t. Let the incidence vector Φ be the vector of n + 1 components in which Φ t is the (t, n)-incidence count, for t ∈ {0, . . . , n}. It should be noted that t Φ t = m, since all m buckets must be accounted for. Φ can also be viewed as the frequency of elements or histogram of a.
Now consider the vector ã resulting from the sanitized version of those vectors, if they have been sanitized by probabilistically flipping each bit b independently with probability 0 < p < 1/2:
b → b ⊕ Bernoulli(p) . (1)
Then each component of ã will be a random variable 1 defined as: ãi = Binomial(a i , 1 -p) + Binomial(na i , p). This is because (1) can be rewritten as: b → Bernoulli(p) if b = 0 and b → Bernoulli(1 -p) if b = 1, and there are a i bits whose value is one, and n -a i bits whose value is zero, and the sum of identical Bernoulli random variables is a Binomial random variable.
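A sketch of the sanitization of Eq. (1) is given below; the use of rand() is only for illustration (a real deployment would need a better random source), and the function name is ours, not from the works cited above.

    #include <stdlib.h>

    /* Sketch: non-interactive randomized response on a bit vector of length m.
     * Each bit is XORed with an independent Bernoulli(p) sample, 0 < p < 1/2. */
    static void sanitize(unsigned char *bits, size_t m, double p)
    {
        for (size_t i = 0; i < m; ++i) {
            double u = (double) rand() / ((double) RAND_MAX + 1.0);
            bits[i] ^= (u < p);   /* b -> b XOR Bernoulli(p), Eq. (1) */
        }
    }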
Finally, define Ψ to be the histogram of ã, similarly to Φ. To understand Ψ consider entry i of Φ, which is the number Φ i of buckets containing i ones out of n. Take one such bucket; there is a probability that the i ones in that bucket be turned into any of j = 0, 1, . . . , n. The vector describing such probabilistic transformation follows a multinomial distribution. This is visually illustrated in Figure 1, by virtue of an example on two bit vectors.
The main contribution of this paper is a novel algorithm to estimate the true incidence vector Φ given the sanitized incidence vector Ψ and p.
This model captures perturbed Linear Counting Sketches (similar to [START_REF] Dwork | Pan-Private Streaming Algorithms[END_REF] which is not a flipping model), and BLIP [START_REF] Alaggan | BLIP: Non-Interactive Differentially-Private Similarity Computation on Bloom Filters[END_REF][START_REF] Alaggan | Sanitization of Call Detail Records via Differentially-Private Bloom Filters[END_REF] (a differentially-private Bloom filter).
In [START_REF] Alaggan | BLIP: Non-Interactive Differentially-Private Similarity Computation on Bloom Filters[END_REF], Alaggan, Gambs, and Kermarrec showed that when the flipping probability satisfies (1 − p)/p = exp(ε) for ε > 0, this flipping mechanism satisfies ε-differential privacy (cf. Definition 2.1). This means that the underlying bit vectors will be protected with non-interactive randomized-response differential privacy in which ε = ln((1 − p)/p).
Footnote 1: This is a special case of the Poisson binomial distribution, where there are only two distinct means for the underlying Bernoulli distributions. The mean and variance of ã_i are defined as the sums of the means and variances of the two underlying binomial distributions, because they are independent.
Figure 1: An example of our model. There are two bit vectors and a represents the number of bits set to one in each adjacent position, while Φ represents the histogram of a. For example, Φ_1 is the number of entries in a which are equal to 1 (shown in red). The rest of the diagram shows what happens to the entries of a if the bit vectors are sanitized by randomly and independently flipping each of their bits with probability p < 1/2, and how the histogram consequently changes to the random variable Ψ. In particular, Φ_t is probabilistically transformed into a vector-valued Multinomial random variable.
Summary of Our Results
We find that our results are best presented in terms of another parameter 0 < η < 1 instead of p. Let η be such that the flipping probability is p = 1/2 − η/2. We will not reference p again in this paper.
In our presentation and throughout this paper, both η (which we will reference as "the privacy parameter") and ε (which will be referenced as "the differential privacy parameter") are completely interchangeable, since one fully determines the other through the relation
\varepsilon = \ln\!\left( \frac{1 + \eta}{1 - \eta} \right) .   (2)
However, our theoretical results will be presented in terms of η for the sake of simplicity of presentation.
On the other hand, the experimental evaluation will be presented in terms of ε, since ε is the differential privacy parameter and it gives the reader more intuition about the privacy guarantees provided for the reported utility (additive error). In a practical application, one may decide the value of ε first, to suit their privacy and utility needs, and then compute the resulting η value that is then given to our algorithm. A discussion on how to choose ε is provided in [START_REF] Alaggan | BLIP: Non-Interactive Differentially-Private Similarity Computation on Bloom Filters[END_REF], which may also aid the reader in having an intuition for the values of ε used in our experimental evaluation and why we decided to use those values.
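The conversions implied by these definitions are straightforward; the following helper functions (ours, for illustration only) compute η and the flipping probability p from ε.

    #include <math.h>

    /* From eps = ln((1+eta)/(1-eta)) and p = 1/2 - eta/2 it follows that
     * eta = (exp(eps) - 1) / (exp(eps) + 1)  and  p = 1 / (1 + exp(eps)). */
    static double eta_from_eps(double eps)  { return (exp(eps) - 1.0) / (exp(eps) + 1.0); }
    static double flip_probability(double eps) { return 1.0 / (1.0 + exp(eps)); }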
(t, n)-Incidence Estimation. In the following we describe upper bounds U and lower bounds L on the additive error. That is,
\max_i |\Psi_i - \Phi_i| \le U \quad \text{and} \quad L \le \min_i |\Psi_i - \Phi_i| ,
in which Φ is the estimate output by our algorithm (for the upper bound) or the estimate output by any algorithm (for the lower bound). L and U may depend on m, the size of the bit vectors, n, the number of bit vectors, η, the privacy parameter, and β, the probability that the bounds fails for at least one i.
Upper Bound. Theorem 4.4 states that there exists an algorithm that is ε-differentially private and that, with probability at least 1 − β, simultaneously estimates Φ_i for all i with additive error no more than
\sqrt{2m} \cdot O(\eta^{-n}) \cdot \ln\tfrac{1}{\beta} \cdot \ln(n + 1) .
Note that this is not a trivial bound since it is a bound on estimating n > 2 simultaneous n-wise inner products. Additionally, in relation to the literature on communication complexity [START_REF] Mcgregor | The Limits of Two-Party Differential Privacy[END_REF], we consider the numberin-hand rather than number-on-forehead communication model, which is more strict.
The O(η -n ) factor is formally proven, but in practice the actual value is much smaller, as explained in Section 6.1. A discussion of the practicality of this bound given the exponential dependence on n is given in Section 4.2.
Lower Bound. In Theorem 5.10 we generalize the results of [START_REF] Mcgregor | The Limits of Two-Party Differential Privacy[END_REF] to multiple bit vectors and obtain the lower bound that for all i, any -differentially private algorithm for approximating Φ i must incur additive error
\Omega\!\left( \frac{\sqrt{m}}{\log_2(m)} \cdot \beta \cdot \frac{1 - \eta}{1 + \eta} \right) ,
with probability at least 1 -β over randomness of the bit vectors and the randomness of the perturbation.
It is worth noting that the upper bounds hold for all values of , but the lower bound is only shown for < 1. Also notice that this lower bound does not depend on n.
The result also presents a lower bound on the additive error that must be incurred by any such algorithm for estimating the n-wise inner product. The relation between the n-wise inner product and (t, n)-incidence is made explicit in the proof of Theorem 5.10.
In Section 2, we start by presenting differential privacy, after which we discuss the related work in Section 3; then in Section 4 we describe the (t, n)-incidence counting algorithm and prove its upper bounds. The lower bound on the n-wise inner product is then presented in Section 5. Finally, we validate our algorithm and bounds on a real dataset in Section 6 before concluding in Section 7.
Background
Differential Privacy
The notion of privacy we are interested in is Differential Privacy [START_REF] Dwork | Differential Privacy[END_REF]. It is considered a strong definition of privacy since it is a condition on the sanitization mechanism that holds equally well for any instance of the data to be protected. Furthermore, it makes no assumptions about the adversary. That is, the adversary may be computationally unbounded and may have access to arbitrary auxiliary information. To achieve this, any differentially private mechanism must be randomized. In fact, the definition itself is a statement about a probabilistic event where the probability is taken only over the coin tosses of such a mechanism. The intuition behind differential privacy is that the distribution of the output of the mechanism should not change much (as quantified by a parameter ε) when an individual is added to or removed from the input. Therefore, the output does not reveal much information about that individual, nor even about the very fact of whether they were in the input or not. Definition 2.1 (ε-Differential Privacy [START_REF] Dwork | Differential Privacy[END_REF]). A randomized function F : {0, 1}^n → {0, 1}^n is ε-differentially private if for all vectors x, y, t ∈ {0, 1}^n:
Pr[F(x) = t] ≤ exp(ε · ‖x − y‖_H) · Pr[F(y) = t], (3)
in which ‖x − y‖_H is the Hamming distance between x and y, that is, the number of positions at which they differ. The probability is taken over all the coin tosses of F.
The parameter ε is typically small and is usually thought of as being less than one. The smaller its value, the less information is revealed and the more private the mechanism is. However, it also means less estimation accuracy and higher estimation error. Therefore the choice of a value to use for ε is a trade-off between privacy and utility. To the best of our knowledge there is no consensus on a method to decide what this value should be. In some of the literature relevant to differentially private bit vectors [START_REF] Alaggan | BLIP: Non-Interactive Differentially-Private Similarity Computation on Bloom Filters[END_REF], an attack-based approach was adopted as a way to choose the largest possible ε (and thus the highest utility) such that the attacks fail. Given the attacks from [START_REF] Alaggan | BLIP: Non-Interactive Differentially-Private Similarity Computation on Bloom Filters[END_REF], we can choose ε up to three without great risk.
Related Work
Incidence counting has been studied in the streaming literature as well as in the privacy-preserving algorithms literature under the names: t-incidence counting [START_REF] Dwork | Pan-Private Streaming Algorithms[END_REF], occurrence frequency estimation [START_REF] Cormode | Finding the Frequent Items in Streams of Data[END_REF][START_REF] Datar | Estimating Rarity and Similarity over Data Stream Windows[END_REF], or distinct counting [START_REF] Mir | Pan-Private Algorithms via Statistics on Sketches[END_REF]. We use these terms interchangeably to mean an accurate estimate of the distinct count, not an upper or lower bound on it.
There are several algorithms in the streaming literature that estimate the occurrence frequency of different items or find the most frequent items [START_REF] Dwork | Pan-Private Streaming Algorithms[END_REF][START_REF] Datar | Estimating Rarity and Similarity over Data Stream Windows[END_REF][START_REF] Cormode | Finding the Frequent Items in Streams of Data[END_REF]. The problem of occurrence frequency estimation is related to that of incidence counting in the following manner: they are basically the same thing, except that the former reports normalized relative values. Our algorithm, instead, reports all the occurrence frequencies, not just the most frequent ones. We face the additional challenge that we are given a privacy-preserving version of the input instead of its raw value, but since in our application (indicator vectors) usually m ≫ n, we use linear space in n, rather than logarithmic space like most streaming algorithms.
The closest to our work is the t-incidence count estimator of Dwork, Naor, Pitassi, Rothblum, and Yekhanin [START_REF] Dwork | Pan-Private Streaming Algorithms[END_REF]. Their differentially private algorithm takes the private stream elements a i before sanitation and sanitizes them. To the contrary, our algorithm takes the elements a i after they have already been sanitized. An example inspired by [START_REF] Alaggan | Sanitization of Call Detail Records via Differentially-Private Bloom Filters[END_REF] is that of call detail records stored by cell towers. Each cell tower stores the set of caller/callee IDs making calls for every time slot (an hour or a day for instance), as an indicator vector. After the time slot ends, the resulting indicator vector is submitted to a central facility for further analysis that involves multiple cell towers. Our work allows this central facility to be untrusted, which is not supported by [START_REF] Dwork | Pan-Private Streaming Algorithms[END_REF].
In subsequent work, Mir, Muthukrishnan, Nikolov, and Wright [START_REF] Mir | Pan-Private Algorithms via Statistics on Sketches[END_REF] propose a p-stable distribution-based sketching technique for differentially private distinct counting. Their approach also supports deletions (i.e. a_i may be negative), which we do not support. However, to reduce the noise, they employ the exponential mechanism [START_REF] Mcsherry | Mechanism Design via Differential Privacy[END_REF], which is known to be computationally inefficient. Their algorithm also faces the same limitations as the ones of [START_REF] Dwork | Pan-Private Streaming Algorithms[END_REF].
Upper Bounds
The algorithm we present and the upper bounds thereof depend on the probabilistic linear mapping A′ between the observed random variable Ψ and the unknown Φ which we want to estimate. In fact, A′ and its expected value A = E[A′] are the primary objects of analysis of this section. Therefore we begin by characterizing them.
Recall that Ψ is the histogram of ã (cf. Figure 1) and that the distribution of ã_i is Z(n, p, a_i), in which
Z(n, p, j) = Binomial(j, 1 − p) + Binomial(n − j, p), (4)
and p < 1/2. The probability mass function of Z(n, p, j) is presented in Appendix A.
In what follows we drop the n and p parameters of Z(n, p, j), since they are always implied from context. We will also denote by P(Z(j)) the probability vector characterizing Z(j):
(Pr[Z(j) = 0], Pr[Z(j) = 1], . . . , Pr[Z(j) = n]).
Finally, e i will denote the ith basis vector. That is, the vector whose components are zero except the ith component which is set to one.
The following proposition defines the probabilistic linear mapping A′ between Ψ and Φ. Proposition 4.1. Let A′ be a matrix random variable whose jth column independently follows the multinomial distribution Multinomial(Φ_j, P(Z(j))). Then the histogram of ã is the sum of the columns of A′: Ψ = A′1, in which 1 = (1, 1, . . . , 1), and thus Ψ = Σ_j Multinomial(Φ_j, P(Z(j))). Proof. Since Ψ is the histogram of ã, it can thus be written as Ψ = Σ_i e_{ã_i} = Σ_j Σ_{i∈{k | a_k = j}} e_{ã_i}. Then, since 1) the sum of k independent and identical copies of Multinomial(1, p), for any p, has distribution Multinomial(k, p), 2) |{k | a_k = j}| = Φ_j by definition, and 3) e_{ã_i} is a random variable whose distribution is Multinomial(1, P(Z(a_i))), the result follows, because Σ_{i∈{k | a_k = j}} e_{ã_i} has distribution Multinomial(Φ_j, P(Z(j))).
The following corollary defines the matrix A, which is the expected value of A′. Corollary 4.2. Let A ∈ R^((n+1)×(n+1)) be the matrix whose jth column is P(Z(j)). Then EΨ = AΦ.
Proof. Follows from the mean of the multinomial distribution: E[Multinomial(Φ j , P (Z(j)))] = Φ j P (Z(j)).
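In practice, the matrix A can be materialized directly from (4) by convolving the two binomial PMFs; the following sketch (our own code relying on SciPy, not code from the paper) does exactly that:

```python
import numpy as np
from scipy.stats import binom

def build_A(n: int, p: float) -> np.ndarray:
    """Return the (n+1) x (n+1) matrix with A[i, j] = Pr[Z(n, p, j) = i]."""
    A = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        pmf_kept = binom.pmf(np.arange(j + 1), j, 1.0 - p)       # Binomial(j, 1-p)
        pmf_flipped = binom.pmf(np.arange(n - j + 1), n - j, p)  # Binomial(n-j, p)
        A[:, j] = np.convolve(pmf_kept, pmf_flipped)             # PMF of their sum
    return A

A = build_A(n=4, p=0.25)
assert np.allclose(A.sum(axis=0), 1.0)   # each column P(Z(j)) is a distribution
```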
It is also worth noting that, due to the symmetry in (4), we have that
A_ij = Pr[Z(j) = i] = Pr[Z(n − j) = n − i] = A_(n−i),(n−j). (5)
For the rest of the paper we will be working exclusively with ℓ1-normalized versions of Ψ and Φ. That is, the normalized versions will sum to one. Since they both originally sum to m, dividing both of them by m will yield a vector that sums to one. The following corollary extends the results of this section to the case when Ψ and Φ are normalized to sum to one. In the following, diag(x) is the diagonal matrix whose off-diagonal entries are zero and whose diagonal equals x.
Corollary 4.3. Ψ = A′1 = A′ diag(1/Φ) Φ ⟹ Ψ/m = A′ diag(1/Φ) (Φ/m), and consequently EΨ = AΦ ⟹ EΨ/m = AΦ/m.
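To make Proposition 4.1 and the corollaries concrete, the sanitized histogram Ψ can be simulated from a given Φ as below. This is illustrative code of ours; `build_A` is the helper sketched above, and averaging many simulated Ψ vectors approaches AΦ, as Corollary 4.2 states.

```python
import numpy as np

def simulate_Psi(Phi: np.ndarray, A: np.ndarray, rng=None) -> np.ndarray:
    """Draw Psi = sum_j Multinomial(Phi[j], P(Z(j))), i.e. the histogram of the
    sanitized counts, using the columns of A as the P(Z(j)) vectors."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0] - 1
    Psi = np.zeros(n + 1, dtype=np.int64)
    for j in range(n + 1):
        p_j = A[:, j] / A[:, j].sum()          # guard against floating-point drift
        Psi += rng.multinomial(int(Phi[j]), p_j)
    return Psi

Phi = np.array([50, 30, 15, 4, 1])             # example incidence vector, m = 100
Psi = simulate_Psi(Phi, build_A(n=4, p=0.25))  # one perturbed observation
```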
The Estimation Algorithm
Let Φ′ = Φ/m and Ψ′ = Ψ/m be the ℓ1-normalized versions of Φ and Ψ.
Intuition. The first step in our algorithm is to establish a confidence interval² of diameter f(δ)/2 around the perturbed incidence vector Ψ′, such that, with probability at least 1 − β, its expected value x := AΦ′ is within this interval. Note that this confidence interval depends only on public parameters such as η, m, and n, but not on the specific Ψ′ vector. Afterwards, we use linear programming to find a valid incidence vector within this interval that could be the preimage of Ψ′, yielding the vector y := AΦ̂′. Since x is within this interval with probability at least 1 − β, the linear program has a solution with probability at least 1 − β. Consequently, x and y are within ℓ∞ distance f(δ) from each other, with probability at least 1 − β. It remains to establish, given this fact, the ℓ∞ distance between the true Φ′ and the estimate Φ̂′, which is an upper bound to the additive error of the estimate. The details are provided later in Section 4.2.
Our estimation algorithm will take Ψ′ and A as input and will produce an estimate Φ̂′ of Φ′. It will basically use linear programming to guarantee that
‖Ψ′ − AΦ̂′‖_∞ ≤ f(δ)/2. (6)
The notation ‖x‖_∞ is the max norm or ℓ∞ norm and is equal to max_i |x_i|. Suitable constraints to guarantee that Φ̂′ is a valid frequency vector (that its components are nonnegative and sum to 1) are employed. These constraints cannot be enforced in case the naïve unbiased estimator A^(−1)Ψ′ is used (it would be unbiased because of Corollary 4.3). This linear program is shown in Algorithm 1.
The objective function of the linear program. The set of constraints of the linear program specifies a finite convex polytope with the guarantee that, with probability 1 − β, the polytope contains the true solution, and that all points in this polytope are within a bounded distance from the true solution. We are then simply using the linear program as a linear constraint solver that computes an arbitrary point within this polytope. In particular, we are not using the linear program as an optimization mechanism. Hence, the reader should not be confused by observing that the objective function which the linear program would normally minimize is simply a constant (zero), which is independent of the LP solution.
From a practical point of view, however, it matters which point inside the polytope gets chosen. In particular, the polytope represents the probabilistically-bounded preimage of the perturbed observation. It is unlikely that the true solution lies exactly on or close to the boundary of such a polytope; it is rather expected, probabilistically speaking, to lie closer to the centroid of the polytope than to its boundary. We have experimentally validated that, for low n, the centroid of the polytope is at least twice as close to the true solution as the output of the linear program (using the interior point method) which is reported in Section 6. Unfortunately, it is computationally intensive to compute the centroid for high n, and thus we were not able to experimentally validate this claim in these cases. This also means that the centroid method is not practical enough. Instead, we recommend the use of the interior point algorithm for linear programming, which is more likely to report a point from the interior of the polytope than the simplex algorithm, which always reports points exactly on the boundary. We have also experimentally validated that the former always produces better estimates than the latter, even though both of them satisfy our upper bound (which is independent of the LP algorithm used). An alternative theoretical analysis which provides a formal error bound for the centroid method could be the topic of future work. In Section 6 we only report results using the interior point algorithm.
Parameter Selection. The remainder of this section and our main result will proceed to show sufficient conditions that, with high probability, make (6) imply ‖Φ̂′ − Φ′‖_∞ ≤ δ for a user-specified accuracy requirement δ. These conditions will dictate that any one of δ, ε, or m depends on the other two. Typically the user will choose the two that matter to them most and let our upper bounds decide the third. For example, if the user wants m to be small for efficiency and δ also to be small for accuracy, then they will have to settle for a probably large value of ε, which sacrifices privacy. Sometimes the resulting combination may be unfeasible or uninteresting. For instance, maybe m is required to be too large to fit in memory or secondary storage. Or perhaps δ will be required to be greater than one, which means that the result will be completely useless. In these cases the user will have to either refine their choice of parameters or consider whether their task is privately computable in the randomized response model at all. It may also be the case that a tighter analysis would solve this problem, since some parts of our analysis are somewhat loose bounds and there may be room for improvement. The probability 1 − β that the bound holds can be part of the trade-off as well.
Algorithm 1 Linear Program
Given Ψ′ and η, solve the following linear program for the variable Φ̂′, in which f(δ)/2 = ‖A^(−1)‖_∞ √(2 ln(1/β)/m) · ln(n + 1):
minimize 0,
subject to: ∀i: −f(δ)/2 ≤ Ψ′_i − Σ_j A_ij Φ̂′_j ≤ f(δ)/2, and ∀i: Φ̂′_i ≥ 0, and Σ_i Φ̂′_i = 1.
Then output Φ̂ = m Φ̂′ as the estimate of Φ.
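A possible implementation of Algorithm 1 with an off-the-shelf LP solver is sketched below (our own code, not the authors' implementation). SciPy's `linprog` is used purely as a linear constraint solver with an interior-point style method, in line with the discussion above; `f_half` stands for the bound f(δ)/2 computed from the public parameters η, m, n and β.

```python
import numpy as np
from scipy.optimize import linprog

def estimate_incidence(Psi: np.ndarray, A: np.ndarray, f_half: float) -> np.ndarray:
    """Algorithm 1: find a valid frequency vector Phi_hat' such that
    ||Psi' - A Phi_hat'||_inf <= f_half, then rescale by m."""
    m = Psi.sum()
    Psi_n = Psi / m                            # l1-normalized observation Psi'
    n1 = A.shape[0]
    c = np.zeros(n1)                           # constant (zero) objective
    # Encode -f_half <= Psi'_i - (A Phi')_i <= f_half as two sets of inequalities.
    A_ub = np.vstack([A, -A])
    b_ub = np.concatenate([Psi_n + f_half, f_half - Psi_n])
    A_eq = np.ones((1, n1))                    # entries of Phi' must sum to one
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n1, method="highs-ipm")
    if not res.success:
        raise RuntimeError("LP infeasible: consider increasing f_half or beta")
    return m * res.x                           # un-normalized estimate of Phi
```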
Upper Bounding the Additive Error
As explained earlier, the first step is to find an ℓ∞ ball of confidence around the expected value of the perturbed incidence vector. This is provided by Theorem B.1 through a series of approximations and convergences between probability distributions, which are detailed in two lemmas, all in the appendix. The high-level flow and the end result are shown in the following theorem, which is meant only to be indicative. For details or exact definitions of particular symbols, kindly refer to Appendix B.
Theorem 4.4. The component-wise additive error between the estimated incidence vector output by Algorithm 1 and the true incidence vector satisfies
‖Φ̂ − Φ‖_∞ ≤ √(2m) · O(η^(−n)) · √(ln(1/β)) · ln(n + 1).
Proof. Assuming the matrix A is nonsingular, the matrix norm (of A^(−1)) induced by the max norm is, by definition, ‖A^(−1)‖_∞ = sup_{x≠0} ‖A^(−1)x‖_∞ / ‖x‖_∞, and since A is nonsingular we can substitute x = Ay in the quantifier without loss of generality: ‖A^(−1)‖_∞ = sup_{y≠0} ‖A^(−1)Ay‖_∞ / ‖Ay‖_∞, yielding sup_{y≠0} {‖y‖_∞ / ‖Ay‖_∞}. Thus for all y ≠ 0, ‖A^(−1)‖_∞ ≥ ‖y‖_∞ / ‖Ay‖_∞. If we multiply both sides by ‖Ay‖_∞ / ‖A^(−1)‖_∞ (which is positive), we get:
‖A^(−1)‖_∞^(−1) ‖y‖_∞ ≤ ‖Ay‖_∞.
In the following, we let y = Φ′ − Φ̂′. The rest of the proof begins by upper bounding the following expression using the preceding derivation:
‖A^(−1)‖_∞^(−1) ‖Φ′ − Φ̂′‖_∞ ≤ ‖A(Φ′ − Φ̂′)‖_∞ = ‖AΦ′ − AΦ̂′‖_∞
= ‖AΦ′ + Ψ′ − Ψ′ − AΦ̂′‖_∞
≤ ‖AΦ̂′ − Ψ′‖_∞ + ‖AΦ′ − Ψ′‖_∞
≤ 2 ‖AΦ′ − Ψ′‖_∞   (LP constraint)
≤ (2/m) CDF^(−1)_{G(aR+M, bR)}(1 − β)   (by Lemma B.3; Φ_j ↑, n ↑)
= (2/m)(M + Rβ′)
→ (2/m)(E_2 + (E_3 − E_1)β′)   (by Lemma B.2; η ↓)
→ 2 √(ln(1/β)) ln(n + 1) / √m   (by Theorem B.1; n ↑),
in which G is the Gumbel distribution, β′ = a − b ln(−ln(1 − β)); R and M, which depend on n and η, and a and b, which are absolute constants, are all defined in Lemma B.3 and subsequently approximated in Lemma B.2. It remains to show that ‖A^(−1)‖_∞ = O(η^(−n)), which holds since η^(−n) is the largest eigenvalue of A^(−1). The increase of Φ_j may either be justified or quantified in probability by Lemma C.1.
Practicality of the bound. The factor O(η^(−n)) grows exponentially with n since η < 1. Therefore, if the bound is used in this form it may be useful for parameter selection only for very small n. In practice, however, the O(η^(−n)) factor is an over-estimation and its effective value is asymptotically sub-exponential. We discuss this issue and propose a practical solution in Section 6.1.
Lower Bounds
In this section we generalize the results of [START_REF] Mcgregor | The Limits of Two-Party Differential Privacy[END_REF] to multiple bit strings and obtain the lower bound on approximating Φ_i. In the rest of this section we use lg(x) to denote the logarithm to base 2, and we let µ_0 = 1/2 − η/2 and µ_1 = 1/2 + η/2. Definition 5.1 (Strongly α-unpredictable bit source) [START_REF] Mcgregor | The Limits of Two-Party Differential Privacy[END_REF]Definition 3.2]. For α ∈ [0, 1], a random variable X = (X_1, . . . , X_m) taking values in {0, 1}^m is a strongly α-unpredictable bit source if for every i ∈ {1, . . . , m}, we have α ≤ Pr[X_i = 0 | X_1 = x_1, . . . , X_(i−1) = x_(i−1), X_(i+1) = x_(i+1), . . . , X_m = x_m] / Pr[X_i = 1 | X_1 = x_1, . . . , X_(i−1) = x_(i−1), X_(i+1) = x_(i+1), . . . , X_m = x_m] ≤ 1/α, for every x_1, . . . , x_(i−1), x_(i+1), . . . , x_m ∈ {0, 1}^(m−1). Definition 5.2 (β-closeness). Two random variables X and Y are β-close if the statistical distance between their distributions is at most β:
(1/2) Σ_v |Pr[X = v] − Pr[Y = v]| ≤ β,
where the sum is over the set supp(X) ∪ supp(Y).
Definition 5.3 (Min-entropy). The min-entropy of a random variable X is H_∞(X) = inf_{x∈supp(X)} lg(1 / Pr[X = x]).
Proposition 5.4. (Min-entropy of strongly αunpredictable bit sources) If X is a strongly α-unpredictable bit source, then X has min-entropy at least m lg(1 + α).
Proof. Let p = Pr[X i = 1 | X 1 = x 1 , . . . , X i-1 = x i-1 , X i+1 = x i+1 , . . . , X m = x m ]
for any x_1, . . . , x_(i−1), x_(i+1), . . . , x_m ∈ {0, 1}^(m−1). Then we know that α ≤ (1 − p)/p ≤ 1/α, and thus p ≤ 1/(1 + α). We can then verify that no string in the support of X has probability greater than 1/(1 + α)^m. Thus X has min-entropy at least βm, in which β = lg(1 + α) ≥ α. Lemma 5.5 (A uniformly random bit string conditioned on its sanitized version is an unpredictable bit source). Let X be a uniform random variable on bit strings of length m, and let X′ be a perturbed version of X, such that X′_i = Bernoulli(µ_0) if X_i = 0 and Bernoulli(µ_1) otherwise. Then X conditioned on X′ is a strongly (1−η)/(1+η)-unpredictable bit source.
Proof. Observe that since X is a uniformly random bit string, X_i and X_j are independent random variables for i ≠ j. Since X′_i depends only on X_i for all i and not on any other X_j for j ≠ i, then X′_i and X′_j are also independent random variables. Then, using Bayes' theorem and the uniformity of X, we can verify that for all x′ ∈ {0, 1}^m and for all x_1, . . . , x_(i−1), x_(i+1), . . . , x_m ∈ {0, 1}^(m−1),
α ≤ Pr[X_i = 0 | X_1 = x_1, . . . , X_(i−1) = x_(i−1), X_(i+1) = x_(i+1), . . . , X_m = x_m, X′ = x′] / Pr[X_i = 1 | X_1 = x_1, . . . , X_(i−1) = x_(i−1), X_(i+1) = x_(i+1), . . . , X_m = x_m, X′ = x′] ≤ 1/α, in which α = µ_0/µ_1 = (1 − η)/(1 + η).
Lemma 5.6. Let S_1, . . . , S_n be n uniform random variables on bit strings of length m, and for all 1 ≤ i ≤ n let S′_i be a perturbed version of S_i, such that for all 1 ≤ j ≤ m, S′_ij = Bernoulli(µ_0) if S_ij = 0 and Bernoulli(µ_1) otherwise. Let Y be a vector such that Y_j = Σ_i S_ij and Y′ be a vector such that Y′_j = Σ_i S′_ij. Then Y conditioned on Y′ is a strongly ((1 − η)/(1 + η))^n-unpredictable bit source, and therefore has at least m lg(1 + ((1 − η)/(1 + η))^n) min-entropy. Proof. Follows the same lines as the proof of Lemma 5.5. Theorem 5.7. [START_REF] Mcgregor | The Limits of Two-Party Differential Privacy[END_REF]Theorem 3.4] There is a universal constant c such that the following holds. Let X be an α-unpredictable bit source on {0, 1}^m, let Y be a source on {0, 1}^m with min-entropy γm (independent from X), and let Z = X · Y mod k, for some k ∈ N, be the inner product of X and Y mod k. Then for every β ∈ [0, 1], the random variable (Y, Z) is β-close to (Y, U), where U is uniform on Z_k and independent of Y, provided that
m ≥ c · (k²/(αγ)) · lg(k/γ) · lg(k/β).
Theorem 5.8. [11, Theorem 3.9] Let P(x, y) be a randomized protocol which takes as input two uniformly random bit vectors x, y of length m and outputs a real number. Let P be ln((1 + η)/(1 − η))-differentially private and let β ≥ 0. Then with probability at least 1 − β over the inputs x, y ← {0, 1}^m and the coin tosses of P, the output differs from x^T y by at least
Ω( (√m / lg(m)) · β · (1 − η)/(1 + η) ).
Theorem 5.9. Let P(S_1, . . . , S_n) = Σ_j Π_i S_ij be the n-wise inner product of the vectors S_1, . . . , S_n. If for all i, S_i is a uniform random variable on {0, 1}^m, and S′_i is the perturbed version of S_i, such that S′_ij = Bernoulli(µ_0) if S_ij = 0 and Bernoulli(µ_1) otherwise, then with probability at least 1 − β the output of any algorithm taking S′_1, . . . , S′_n as inputs will differ from P(S_1, . . . , S_n) by at least
Ω( (√m / lg(m)) · β · (1 − η)/(1 + η) ).
Proof. Without loss of generality take S 1 to be one vector and Y with Y j = n i=2 S ij to be the other vector. Then we will use Theorem 5.8 to bound S T 1 Y . To use Theorem 5.8, we first highlight that S i is a ln( 1+η 1-η )-differentially private version of S i . Then since Theorem 5.8 depends on Theorem 5.7, we will show that S 1 and Y satisfies the condition of the latter theorem. Theorem 5.7 concerns inner product between two bit sources, one is an unpredictable bit source while the other has linear min-entropy. Lemma 5.5 shows that S 1 conditioned on its sanitized version S 1 is an α-unpredictable bit source and Lemma 5.6 shows that Y has linear min-entropy (assuming n is constant in m). Theorem 5.10. Let S 1 , . . . , S n be uniformly random binary strings of length m and let S i be a perturbed version of S i , such that S ij = Bernoulli(µ 0 ) if S ij = 0 and Bernoulli(µ 1 ) otherwise. Then let the vectors v, v of length m be such that v i = j S ji and v i = j S ji , and the vector Φ = (Φ 0 , . . . , Φ n ) in which Φ i = |{j : v j = i}| is the frequency of i in v, and similarly for Φ the frequency in v . Then with probability at least 1 -β the output of an algorithm taking S differs from Φ i for all i by at least
Ω( (√m / lg(m)) · β · (1 − η)/(1 + η) ).
Proof. We will proceed by reducing the n-wise inner product to frequency estimation. Since Theorem 5.9 forbids the former, the theorem follows. The reduction is as follows. Let P(j, A) = Π_(i∈A) S_ij be the product of the bits in a particular position j across a subset A of the binary strings. Observe that Σ_j P(j, [n]), with [n] = {1, . . . , n}, is the n-wise inner product of all the binary strings. Similarly, let P̄(j, A) = Π_(i∈A) (1 − S_ij) be the product of the negated bits. Finally, denote Q(A) = Σ_j P(j, A) P̄(j, A^C), in which A^C = [n] \ A is the complement of the set A. Now we claim that Φ_k = Σ_(A⊆[n], |A|=k) Q(A), in which the sum is over all subsets of [n] of size k. This can be seen since, for a set A of size k, P(j, A) P̄(j, A^C) is one only if a_j = Σ_i S_ij = k. Since there may be several sets A of the same size k, we can therefore conclude that the sum over all such sets, Σ_(A⊆[n], |A|=k) P(j, A) P̄(j, A^C), is one if and only if a_j = Σ_i S_ij = k, and thus the sum (over all j) of the former quantity is the count (frequency) of the latter.
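This reduction is easy to check numerically on small instances; the following brute-force sketch (our own code) verifies the identity Φ_k = Σ_(|A|=k) Q(A):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, m = 4, 50
S = rng.integers(0, 2, size=(n, m))        # n bit vectors of length m
a = S.sum(axis=0)                          # a_j = number of ones at position j
Phi = np.bincount(a, minlength=n + 1)      # true incidence vector

def Q(A_set):
    """Q(A) = sum_j prod_{i in A} S_ij * prod_{i not in A} (1 - S_ij)."""
    comp = [i for i in range(n) if i not in A_set]
    inside = S[list(A_set), :].prod(axis=0) if A_set else np.ones(m, dtype=int)
    outside = (1 - S[comp, :]).prod(axis=0) if comp else np.ones(m, dtype=int)
    return int((inside * outside).sum())

for k in range(n + 1):
    assert Phi[k] == sum(Q(set(sub)) for sub in combinations(range(n), k))
```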
We will then show why the result follows, first for Φ_0 and Φ_n, then for Φ_1, Φ_2, . . . , Φ_(n−1). According to this reduction, Φ_0 (resp. Φ_n) is equivalent to the n-wise inner product of {1 − S_1, 1 − S_2, . . . , 1 − S_n} (resp. {S_1, S_2, . . . , S_n}), and thus if one was able to compute Φ_0 (resp. Φ_n) within error γ they would have also been able to compute those two n-wise inner products within error γ. Then we employ the lower bound on the n-wise inner product from Theorem 5.9 to lower bound γ for Φ_0 and Φ_n. For Φ_i with i ∉ {0, n}, Φ_i is equivalent to the sum of (n choose i) n-wise inner products. In the case where all but one of those n-wise inner products are zero, an estimate of Φ_i within error γ gives an estimate for a particular n-wise inner product within error γ as well, in which case we can invoke Theorem 5.9 again to lower bound γ for Φ_i.
Experimental Evaluation
We use the Sapienza dataset [START_REF] Barbera | CRAW-DAD dataset sapienza/probe-requests (v. 2013-09-10[END_REF] to evaluate our method. It is a real-life dataset composed of wireless probe requests sent by mobile devices in various locations and settings in Rome, Italy. We only use the MAC address part of the dataset, as typical physical analytics systems do [START_REF] Musa | Tracking unmodified smartphones using wi-fi monitors[END_REF]. It covers a university campus and as city-wide national and international events. The data was collected for three months between February and May 2013, and contains around 11 million probes sent by 162305 different devices (different MAC addresses), therefore this is the size (m) of our indicator vectors. The released data is anonymized. The dataset contains 8 setting called POLITICS1, POLITICS2, VATICAN1, VATICAN2, UNI-VERSITY, TRAINSTATION, THEMALL, and OTHERS. Each setting is composed of several files. Files are labeled according to the day of capture and files within the same setting occurring in the same day are numbered sequentially. In our experiments we set the parameter n ∈ {1, 2, . . . , 21}, indicating the number of sets we want to experiment on. Then we pick n random files from all settings and proceed to estimate their t-incidence according to our algorithm. We add 1 to all incidence counts to reduce the computational overhead necessary to find a combination of files with non-zero incidence for all t for large n, so that the t-incidence for this random subset is nonzero for all t. This is unlikely to affect the results since the additive error will be much larger than 1 (about O( √ m)) anyway.
The additive error reported is the maximum additive error across all t. In real-life datasets, the additive error would be a problem only for low values of t (closer to the "intersection"), since the true value may be smaller than the additive error. However, for high t (closer to the "union"), high additive error is unlikely to be damaging to utility. This is a property of most real-world datasets since they are likely to follow a Zipf distribution. If this is the case it may be useful to consider employing the estimated union (or high t) to compute the intersection (or low t) via the inclusion-exclusion principle instead.
Calibrating to the Dataset
In our experiments we observe that the value of ‖A^(−1)‖_∞ may be too high for small ε, making it useless as an upper bound in this case. This is due to the definition of the induced norm, which takes the maximum over all vectors whose max norm is 1. This maximum is achieved for vectors in {−1, 1}^(n+1). However, in reality it is unlikely that the error vector will be this large, and thus it may never actually reach this upper bound (as confirmed by the experiments). Instead, we consider the maximum over Γ = {−γ, γ}^(n+1) for γ < 1 and use the fact that linearity implies
max_(x∈Γ) ‖A^(−1)x‖_∞ / ‖x‖_∞ = γ ‖A^(−1)‖_∞.
We empirically estimate γ by estimating, from the dataset, the multinomial distribution of Φ for each n and each ε; then we sample vectors from this distribution and run our algorithm on them, and then compute γ from the resulting error vector. We stress that this calibration process thus does not use any aspects of the dataset other than the distribution of Φ, and that γ depends only on n and ε and not on the actual incidence vector. Therefore, in a real-life situation where there is no dataset prior to deployment to run this calibration on, it suffices to have prior knowledge of (or an expectation for) the distribution of the incidence vectors. For most applications it should follow a power-law distribution.
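A schematic version of this calibration loop is sketched below. This is our own reconstruction of the procedure described above: the exact rule for extracting γ from the error vectors is not fully specified in the text, so the choice made in the last lines is only one plausible option, and the helpers `simulate_Psi`, `estimate_incidence` and the bound `f_half` are the ones sketched earlier.

```python
import numpy as np

def calibrate_gamma(A, phi_dist, m, f_half, n_runs=100, rng=None):
    """Empirically estimate gamma: sample incidence vectors from the fitted
    distribution phi_dist (a probability vector over {0, ..., n}), perturb them,
    run the estimator, and measure how large the normalized errors actually get."""
    rng = np.random.default_rng() if rng is None else rng
    gammas = []
    for _ in range(n_runs):
        Phi = rng.multinomial(m, phi_dist)             # synthetic incidence vector
        Psi = simulate_Psi(Phi, A, rng)                # perturb as the mechanism would
        Phi_hat = estimate_incidence(Psi, A, f_half)
        err = (Phi_hat - Phi) / m                      # normalized error vector
        gammas.append(np.abs(err).max())               # one plausible read-out of gamma
    return max(gammas)
```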
In Figure 2, all the lines represent the 1 − β quantile. For instance, in the Sapienza line, the 1 − β quantile (over 1000 runs) is shown. For the other line, the upper bound value was computed to hold with probability at least 1 − β. The value of β we used is 0.1. The corresponding values for the lower bound are independent from n and are {1.3 × 10^(−5), 8.7 × 10^(−6), 5.3 × 10^(−6), 3.2 × 10^(−6), 1.9 × 10^(−6), 1.2 × 10^(−6), 7.1 × 10^(−7)}, respective to the x-axis. We observe that the upper bound is validated by the experiments, as it is very close to the observed additive error. In addition, the additive error itself resulting from our algorithm is very small even for ε as small as 0.5. For ε = 0.1 the additive error increase is unavoidable, since such relatively high error may be necessary to protect the high privacy standard in this case.
Conclusion
We have presented a novel algorithm for estimating incidence counts of sanitized indicator vectors. It can also be used to estimate the n-wise inner product of sanitized bit vectors as the relationship is described in the proof of Theorem 5.10. We provided a theoretical upper bound that is validated by experiments on real-life datasets to be very accurate. Moreover, we extended a previous lower bound on 2-wise inner product to n-wise inner product. Finally, we evaluated our algorithm on a real-world dataset and validated the accuracy, the general upper bound and the lower bound.
Figure 2: The additive error ‖Φ̂ − Φ‖_∞ (on the y-axis) is plotted against the differential privacy parameter ε (on the x-axis), and the number of vectors n (in different subplots). The y-axis is in logarithmic scale while the x-axis is linear.
bound p_i = A_ij(1 − A_ij) ≤ 1/4 (for any j, since in the limit A_ij(1 − A_ij) = A_ik(1 − A_ik) for all j, k). The bound holds since A_ij is a probability value in (0, 1) and the maximum of the polynomial x(1 − x) is 1/4. Consequently, F_(η↓)(x) ≥ erf(x √(2/m))^(n+1). Therefore F^(−1)_(η↓)(q) = √(m/2) erf^(−1)(q^(1/(n+1))). Hence, M + RC(1 − β) = √(m/2) D(n, β). Proof. Consider the following transformation of the random variable A′:
m A Φ -A Φ ∞ = m (A -A )(Φ/m) ∞ = m (A -A diag(1/Φ))(Φ/m) ∞ = m (Adiag(Φ) -A )diag(1/Φ)(Φ/m) ∞ = m (Adiag(Φ) -A )1/m ∞ = (Adiag(Φ) -A )1 ∞ = AΦ -A 1 ∞ = max i {|A i• Φ -A i• |} = max i j A ij Φ j -A ij max i j A ij Φ j -Binomial(Φ j , A ij ) ,
since the marginal distribution of a multinomial random variable is the binomial distribution, which in turn converges in distribution to the normal distribution, by the central limit theorem, as min_j Φ_j grows (which is justified in Lemma C.1),
d → max i j A ij Φ j -N (Φ j A ij , σ 2 ij = Φ j A ij (1 -A ij ))
= max_i |N(0, θ²_i)| = max_i HalfNormal(θ²_i), where θ²_i = Σ_j Φ_j A_ij(1 − A_ij). The last convergence result is due to the fact that maxima approach the Gumbel distribution, and we therefore choose a Gumbel distribution matching the median and interquantile range of the actual distribution of the maximum of HalfNormals, whose CDF is the product of their CDFs. This is done by setting a and b to be the parameters of a Gumbel distribution G(a, b) with zero median and interquantile range of one, and then using the fact that the Gumbel distribution belongs to a location-scale family, which also implies that the Gumbel distribution is uniquely defined by its median and interquantile range (two unknowns and two equations).
C Discrete Uniform Distribution on Φ
We treat the case of uniform streams in this section. We call the vector a = (a 1 , . . . , a m ) uniform if a i is uniform on the range {0, . . . , n}. Considering the marginal distribution of the resulting incidence vector, we observe that in this case EΦ i = EΦ j for all i, j. However, Φ i will be strongly concentrated around
Lemma B.3. Let F(x) = Π_i erf(x / √(2 Σ_j Φ_j A_ij(1 − A_ij))) be a cumulative distribution function (CDF) and let M = F^(−1)(1/2) and R = F^(−1)(3/4) − F^(−1)(1/4) be the median and the interquantile range of the distribution represented by F, respectively. Additionally, let C(x) = c_0 ln(log_2(1/x)), in which c_0 = 1/ln(log_4(4/3)). Then, if α < 1 is a positive real number, with probability at least 1 − β we have that ‖AΦ′ − A′Φ′‖_∞ ≤ m^(−1)(M + RC(1 − β)).
n grows, converges in distribution to the Gumbel distribution, by the extreme value theorem [START_REF] Coles | An Introduction to Statistical Modeling of Extreme Values[END_REF]: →_d G(aR + M, bR), in which R = F^(−1)(3/4) − F^(−1)(1/4), M = F^(−1)(1/2), a = −ln(ln 2)/ln(log_4(4/3)), and b = −1/ln(log_4(4/3)), where F(x) = Π_i F_i(x) and F_i(x) is the CDF of HalfNormal(θ²_i). Therefore, computing the quantile function of the Gumbel distribution at 1 − β shows that with probability at least 1 − β we have m‖AΦ′ − A′Φ′‖_∞ ≤ M + R(a − b ln(−ln(1 − β))) = M + R ln(−log_2(1 − β)) / ln(log_4(4/3))
The word "interval" is inappropriate here since the random variable is a vector. Technically, "ℓ∞-ball" would be more appropriate.
If η is not small enough, then the probability matrix A approaches the identity matrix I and Ψ, a known quantity, becomes close to the unknown quantity Φ. Hence, we can substitute it instead. For practical purposes, if this is the case, then we would not need this lemma and could use Lemma B.3 directly. In special cases where Φ is close to uniform, we may quantify the probability of setting Φ_j = m/(n + 1) by Lemma C.1.
supported by Cisco grant CG# 593780.
A The Probability Mass Function (PMF) of Z(j)
Since Z(j) = Binomial(j, 1 -p) + Binomial(nj, p) then its PMF is equivalent to the convolution: Pr[Z(j) = i] = Pr[Binomial(j, 1p) = ] Pr[Binomial(n -j, p) = i -]. Consider one term in the summation, t , which equals (1 -p) p j-j p i-(1 -p) -i-j+n+ n-j i-
. Since the ratio
2 is a rational function in then the summation over can be represented as a hypergeometric function:
, given that i + j n. The case of i + j > n is computed by symmetry as in [START_REF] Coles | An Introduction to Statistical Modeling of Extreme Values[END_REF]. The notation 2 F 1 denotes the Gauss hypergeomet-
is the rising factorial notation, also known as the Pochhammer symbol (x) k .
B Bounding Deviation of A from its Mean
Theorem B.1 (Bounding deviation of A ). Let α = f (δ)/2 and β be positive real numbers less than one. Then with probability at least
Proof. Using Lemma B.2 and the fact that E(n, x) approaches Z -ln(πZ -π ln(π))/ √ 2, as n approaches ∞ (according to its expansion at n → ∞), in which Z(x) = ln(2n 2 / ln 2 (4/x)). This is a good approximation even for n 1 except for x = 1 it becomes a good approximation for n 4. Let
, and c 0 = 1/ ln log 4 (4/3).
Proof. Using Lemma B.3. Since A goes to a rank-1 matrix as fast as η n (its smallest eigenvalue), we see that for every i, A ij (1 -A ij ) approaches a value that does not depend on j, call it p i . Therefore,
a value which does not depend on the particular, unknown, composition. Since thus the choice of the weak (n + 1)-composition Φ of m does not matter, we set Φ j = m/(n + 1) in the statement of Lemma B.3 and proceed 3 . Therefore F (x) approaches F η↓ (x)
We could compute the limit p i if we require dependence on η for fine tuning. However, we will instead use the its mean. Therefore, we consider an even stronger model in which EΦ i is still equal to EΦ j for all i, j, but Φ i is marginally almost uniform on its range. An algorithm doing well in this latter case (with higher variance) can intuitively do at least as well in the former case (with less variance).
The vector Φ is a vector of n + 1 elements but only n degrees of freedom; since it has to sum to m. Therefore, we cannot consider the discrete uniform product distribution on its entries. Instead, we will consider the joint uniform distribution on all nonnegative integer vectors which sum to m. All such vectors are the set of weak (n + 1)-compositions of m [15, p. 25].
1-β , then with probability at least 1 -β, min j Φ j δ, assuming Φ are picked uniformly at random from all weak (n + 1)-compositions of m.
Proof. Notice that the sum of Φ must be m, therefore, it has only n degrees of freedom instead of n + 1. In fact, Φ is the multivariate uniform distribution on weak (n + 1)-compositions 4 of m. Notice that the marginal distribution of Φ j is not Uniform(0, m), but rather lower values of Φ j have strictly higher probability than greater ones.
Consider the compositions of m into exactly n + 1 parts, in which each part is greater than or equals δ. There is exactly 5
Hence, the joint probability that all entries of Φ exceed a desired threshold δ, simultaneously, is Cn+1(m;δ) Cn+1(m) . In the rest of this proof we will use n k , the unsigned Stirling cycle number (i.e. Stirling numbers of the first kind), -(n-1)) the falling factorial power, and x n = x(x + 1) • • • (x + (n -1)) the rising factorial power. We will also use the identity
All the definitions and a proof of the aforementioned identity could be found in [10, 4 A weak k-composition of an integer n is a way of writing n as the sum of k non-negative integers (zero is allowed) [15, p. 25]. It is similar to integer partitions except that the order is significant. The number of such weak compositions is
and then, substituting µ = m + 1 and ν = n + 1 for readability
which is true when the sufficient condition (µ-νδ) k -(1 -β)µ k 0 holds for all 1 k n. Equivalently, when | 47,443 | [
"915035",
"5208",
"1084364"
] | [
"203831",
"206120",
"203831",
"206120",
"206040",
"450090"
] |
01485736 | en | [
"info"
] | 2024/03/04 23:41:48 | 2018 | https://hal.science/hal-01485736/file/driving4HAL.pdf | Antonio Paolillo
email: paolillo@lirmm.fr
Pierre Gergondet
email: pierre.gergondet@aist.go.jp
Andrea Cherubini
email: cherubini@lirmm.fr
Marilena Vendittelli
email: vendittelli@diag.uniroma.it
Abderrahmane Kheddar
email: kheddar@lirmm.fr
Autonomous car driving by a humanoid robot
Enabling a humanoid robot to drive a car, requires the development of a set of basic primitive actions. These include: walking to the vehicle, manually controlling its commands (e.g., ignition, gas pedal and steering), and moving with the whole-body, to ingress/egress the car. In this paper, we present a sensorbased reactive framework for realizing the central part of the complete task, consisting in driving the car along unknown roads. The proposed framework provides three driving strategies by which a human supervisor can teleoperate the car, ask for assistive driving, or give the robot full control of the car. A visual servoing scheme uses features of the road image to provide the reference angle for the steering wheel to drive the car at the center of the road. Simultaneously, a Kalman filter merges optical flow and accelerometer measurements, to estimate the car linear velocity and correspondingly compute the gas pedal command for driving at a desired speed. The steering wheel and gas pedal reference are sent to the robot control to achieve the driving task with the humanoid. We present results from a driving experience with a real car and the humanoid robot HRP-2Kai. Part of the framework has been used to perform the driving task at the DARPA Robotics Challenge.
Introduction
The potential of humanoid robots in the context of disaster has been exhibited recently at the DARPA Robotics Challenge (DRC), where robots performed complex locomotion and manipulation tasks (DARPA Robotics Challenge, 2015). The DRC has shown that humanoids should be capable of operating machinery, originally designed for humans. The DRC utility car driving task is a good illustration of the complexity of such tasks.
Worldwide, to have the right to drive a vehicle, one needs to be delivered a license, requiring months of practice, followed by an examination test. To make a robot drive in similar conditions, the perception and control algorithms should reproduce the human driving skills.
If the vehicle can neither be customized nor automated, it is more convenient to think of a robot in terms of anthropomorphic design. A driving robot must have motion capabilities for operations such as: reaching the vehicle, entering it, sitting in a stable posture, controlling its commands (e.g., ignition, steering wheel, pedals), and finally egressing it. All these skills can be seen as action templates, to be tailored to each vehicle and robot, and, more importantly, to be properly combined and sequenced to achieve driving tasks.
Noticeable research is currently made, to automate the driving operation of unmanned vehicles, with the ultimate goal of reproducing the tasks usually performed by human drivers [START_REF] Nunes | Guest editorial introducing perception, planning, and navigation for intelligent vehicles[END_REF][START_REF] Liu | Vision-based real-time lane marking detection and tracking[END_REF][START_REF] Hentschel | Autonomous robot navigation based on open street map geodata[END_REF], by relying on visual sensors [START_REF] Newman | Navigating, recognizing and describing urban spaces with vision and lasers[END_REF][START_REF] Broggi | Sensing requirements for a 13,000 km intercontinental autonomous drive[END_REF][START_REF] Cherubini | Autonomous visual navigation and laser-based moving obstacle avoidance[END_REF]. The success of the DARPA Urban Challenges [START_REF] Buehler | Special issue on the 2007 DARPA Urban Challenge, part I-III[END_REF][START_REF] Thrun | Stanley: The robot that won the DARPA Grand Challenge[END_REF], and the impressive demonstrations made by Google (Google, 2015), have heightened expectations that autonomous cars will very soon be able to operate in urban environments. Considering this, why bother making a robot drive a car, if the car can make its way without a robot? Although both approaches are not exclusive, this is certainly a legitimate question.
One possible answer springs from the complexity of autonomous cars, which host a distributed robot, with various sensors and actuators controlling the different tasks. With a centralized robot, such embedded devices can be removed from the car. The reader may also wonder when should a centralized robot be preferred to a distributed one, i.e., a fully automated car?
We answer this question through concrete application examples. In the DRC [START_REF] Pratt | The DARPA Robotics Challenges[END_REF], one of the eight tasks that robot must overtake is driving a utility vehicle. The reason is that in disaster situations, the intervention robot must operate vehicles -usually driven by humans -to transport tools, debris, etc. Once the vehicle reaches the intervention area, the robot should execute other tasks, (e.g., turning a valve, operating a drill). Without a humanoid, these tasks can be hardly achieved by a unique system. Moreover, the robot should operate cranks or other tools attached to the vehicle [START_REF] Hasunuma | A tele-operated humanoid robot drives a backhoe[END_REF][START_REF] Yokoi | A tele-operated humanoid operator[END_REF]. A second demand comes from the car manufacturing industry [START_REF] Hirata | Fuel consumption in a driving test cycle by robotic driver considering system dynamics[END_REF]. In fact, current crash-tests dummies are passive and non-actuated. Instead, in crash situations, real humans perform protective motions and stiffen their body, all behaviors that are programmable on humanoid robots. Therefore, robotic crash-test dummies would be 2 more realistic in reproducing typical human behaviors.
These applications, along with the DRC itself, and with the related algorithmic questions, motivate the interest for developing a robot driver. However, this requires the solution of an unprecedented "humanoid-in-the-loop" control problem. In our work, we successfully address this, and demonstrate the capability of a humanoid robot to drive a real car. This work is based on preliminary results carried out with the HRP-4 robot, driving a simulated car [START_REF] Paolillo | Toward autonomous car driving by a humanoid robot: A sensor-based framework[END_REF]. Here, we add new features to that framework, and present experiments with humanoid HRP-2Kai driving a real car outdoor on an unknown road.
The proposed framework presents the following main features:
• car steering control, to keep the car at a defined center of the road;
• car velocity control, to drive the car at a desired speed;
• admittance control, to ensure safe manipulation of the steering wheel;
• three different driving strategies, allowing intervention or supervision of a human operator, in a smooth shared autonomy manner.
The modularity of the approach allows to easily enable or disable each of the modules that compose the framework. Furthermore, to achieve the driving task, we propose to use only standard sensors for a common full-size humanoid robot, i.e., a monocular camera mounted on the head of the robot, the Inertial Measurement Unit (IMU) in the chest, and the force sensors at the wrists. Finally, the approach being purely reactive, it does not need any a priori knowledge of the environment. As a result, the framework allows -under certain assumptions -to make the robot drive along a previously unknown road.
The paper organization reflects the schematic description of the approach given in the next Sect. 2, at the end of which we also provide a short description of the paper sections.
Problem formulation and proposed approach
The objective of this work is to enable a humanoid robot to autonomously drive a car at the center of an unknown road, at a desired velocity. More specifically, we focus on the driving task and, therefore, consider the robot sitting in the car, already in a correct driving posture.
Most of the existing approaches have achieved this goal by relying on teleoperation (DRC-Teams, 2015;[START_REF] Kim | Approach of team SNU to the DARPA Robotics Challenge finals[END_REF][START_REF] Mcgill | Team THOR's adaptive autonomy for disaster response humanoids[END_REF]. Atkeson and colleagues [START_REF] Atkeson | NO RESETS: Reliable humanoid behavior in the DARPA Robotics Challenge[END_REF] propose an hybrid solution, with teleoperated steering and autonomous speed control. The velocity of the car, estimated with stereo cameras, is fed back to a PI controller, while LIDAR, IMU and visual odometry data support the operator during the steering procedures.
In [START_REF] Kumagai | Achievement of recognition guided teleoperation driving system for humanoid robots with vehicle path estimation[END_REF], the gas pedal is teleoperated and a local planner, using robot kinematics for vehicle path estimation, and point cloud data for obstacle detection, enables autonomous steering. An impedance system is used to ensure safe manipulation of the steering wheel.
Other researchers have proposed fully autonomous solutions. For instance, in [START_REF] Jeong | Control strategies for a humanoid robot to drive and then egress a utility vehicle for remote approach[END_REF], autonomous robot driving is achieved by following the proper trajectory among obstacles, detected with laser measurements. LIDAR scans are used in [START_REF] Rasmussen | Perception and control strategies for driving utility vehicles with a humanoid robot[END_REF] to plan a path for the car, while the velocity is estimated with a visual odometry module.
The operation of the steering wheel and gas pedal is realized with simple controllers.
We propose a reactive approach for autonomous driving that relies solely on standard humanoids sensor equipment, thus making it independent from the vehicle sensorial capabilities, and does not require expensive data elaboration for building local representations of the environment and planning safe paths. In particular, we use data from the robot on-board camera and IMU, to close the autonomous driver feedback loop. The force measured on the robot wrists is exploited to operate the car steering wheel.
In designing the proposed solution, some simplifying assumptions have been introduced, to capture the conceptual structure of the problem, without losing generality:
1. The car brake and clutch pedals are not considered, and the driving speed is assumed to be positive and independently controlled through the gas pedal. Hence, the steering wheel and the gas pedals are the only vehicle controls used by the robot for driving.
2. The robot is already in its driving posture on the seat, with one hand on the steering wheel, the foot on the pedal, and the camera pointing the road, with focal axis aligned with the car sagittal plane. The hand grasping configuration is unchanged during operation.
3. The road is assumed to be locally flat, horizontal, straight, and delimited by parallel borders1 . Although global convergence can be proved only for straight roads, turns with admissible curvature bounds are also feasible, as shown in the Experimental section. Instead, crossings, traffic lights, and pedestrians are not negotiated, and road signs are not interpreted.
Given these assumptions, we propose the control architecture in Fig. 1. The robot sits in the car, with its camera pointing to the road. The acquired images and IMU data are used by two branches of the framework running in parallel: car steering and velocity control. These are described hereby.
The car steering algorithm guarantees that the car is maintained at the center of the road.
To this end, the IMU is used to get the camera orientation with respect to the road, while an image processing algorithm detects the road borders (road detection). These borders are used to compute the visual features feeding the steering control block. Finally, the computed steering wheel reference angle is transformed by the wheel operation block into a desired trajectory for the robot hand that is operating the steering wheel. This trajectory can be adjusted by an admittance system, depending on the force exchanged between the robot hand and the steering wheel.
The car velocity control branch aims at making the car progress at a desired speed, through the gas pedal operation by the robot foot. A Kalman Filter (KF) fuses visual and inertial data to estimate the velocity of the vehicle (car velocity estimation) sent as feedback to the car velocity control, which provides the gas pedal reference angle for obtaining the desired velocity. The pedal operation block transforms this signal into a reference for the robot foot.
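As an illustration of this estimation step, a minimal scalar Kalman filter that fuses the IMU acceleration (used in the prediction) with the optical-flow velocity measurement (used in the correction) could look as follows. This is only a sketch: the scalar state and the noise values are our own illustrative choices, not the tuning used on the robot.

```python
class VelocityKF:
    """Scalar Kalman filter: the state is the car linear velocity v (m/s)."""
    def __init__(self, q=0.05, r=0.2):
        self.v = 0.0      # state estimate
        self.P = 1.0      # estimate variance
        self.q = q        # process noise (accelerometer integration)
        self.r = r        # measurement noise (optical-flow velocity)

    def predict(self, a_imu, dt):
        # propagate with the measured acceleration: v <- v + a * dt
        self.v += a_imu * dt
        self.P += self.q

    def update(self, v_flow):
        # correct with the optical-flow velocity measurement
        K = self.P / (self.P + self.r)      # Kalman gain
        self.v += K * (v_flow - self.v)
        self.P *= (1.0 - K)
        return self.v
```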
Finally, the reference trajectories for the hand and the foot respectively operating the steering wheel and the pedal, are converted into robot postural tasks, by the task-based quadratic programming controller.
The driving framework, as described above, allows a humanoid robot to autonomously drive a car along an unknown road, at a desired velocity. We further extend the versatility of our framework by implementing three different "driving modes", in order to ease human supervision and eventual intervention if needed:
• Autonomous. Car steering and velocity control are both enabled, as indicated above, and the robot autonomously drives the car without any human aid. • Assisted. A human takes care of the road detection, the car velocity estimation, and the control, by teleoperating the robot ankle, and manually selecting the visual features (road borders). These are then used by the steering controller to compute the robot arm command.
• Teleoperated. Both the robot hand and foot are teleoperated for steering the wheel and the gas pedal operation, respectively. The reference signals are sent to the taskbased quadratic programming control through a keyboard or joystick. The human uses the robot camera images as visual feedback for driving.
For each of the driving modes, the car steering and velocity controllers are enabled or disabled, as described in Table 1. The human user/supervisor can intervene at any moment during the execution of the driving task, to select one of the three driving modes. The selection, as well as the switching between modes, is done by pushing proper joystick (or keyboard) buttons.
The framework has a modular structure, as presented in Fig. 1. In the following Sections, we detail the primitive functionalities required by the autonomous mode, since the assisted and teleoperation modes use a subset of such functionalities.
The rest of paper is organized as follows. Section 3 describes the model used for the carrobot system. Then, the main components of the proposed framework are detailed. Sect. 4 presents the perception part, i.e., the algorithms used to detect the road and to estimate the car velocity. Section 5 deals with car control, i.e., how the feedback signals are transformed into references for the steering wheel and for the gas pedal, while Sect. 6 focuses on humanoid control, i.e., on the computation of the commands for the robot hand and foot. The experiments carried out with HRP-2Kai are presented in Sect. 7. Finally, Sect. 8 concludes the paper and outlines future research perspectives.
Modelling
The design of the steering controller is based on the car kinematic model. This is a reasonable choice since, for nonholonomic systems, it is possible to cancel the dynamic parameters via feedback, and to solve the control problem at the velocity level, provided that the velocity issued by the controller is differentiable [START_REF] De Luca | Kinematics and Dynamics of Multi-Body Systems[END_REF]. To recover the dynamic system control input, it is however necessary to know the exact dynamic model, which is in general not available. Although some approximations are therefore necessary, these do not affect the controller in the considered scenario (low accelerations, flat and horizontal road).
On-line car dynamic parameter identification could be envisaged, and seamlessly integrated in our framework, whenever the above assumptions are not valid. Note, however, that the proposed kinematic controller would remain valid, since it captures the theoretic challenge of driving in the presence of nonholonomic constraints.
To derive the car control model, consider the reference frame F w placed on the car rear axle midpoint W , with the y-axis pointing forward, the z-axis upward and the x-axis completing the right handed frame (see Fig. 2a). The path to be followed is defined as the set of points that maximize the distance from both the left and right road borders. On this path, we consider a tangent Frenet Frame F p , with origin on the normal projection of W on the path. Then, the car configuration with respect to the path is defined by x, the Cartesian abscissa of W in F p , and by θ, the car orientation with respect to the path tangent (see Fig. 2b).
Describing the car motion through the model of a unicycle, with an upper curvature bound c M ∈ R + , x and θ evolve according to:
ẋ = v sin θ,  θ̇ = ω,  |ω/v| < c_M , (1)
where v and ω represent respectively the linear and angular velocity of the unicycle. The front wheel orientation φ can be approximately related to v and ω through:
φ = arctan(ωl/v) , (2)
with l the constant distance between the rear and front wheel axes2 . The parameters r, the radius of the wheel, and β, characterizing the grasp configuration, are also shown here.
Note that a complete car-like model could have been used, for control design purposes, by considering the front wheels orientation derivative as the control input. The unicycle stabilizing controller adopted in this paper can in fact be easily extended to include the dynamics of the front wheels orientation, for example through backstepping techniques. However, in this case, a feedback from wheel orientation would have been required by the controller, but is, generally, not available. A far more practical solution is to neglect the front wheels orientation dynamics, usually faster than that of the car, and consider a static relationship between the front wheels orientation and the car angular velocity. This will only require a rough guess on the value of the parameter l, since the developed controller shows some robustness with respect to model parameters uncertainties as will be shown in Sect. 5.
The steering wheel is shown in Fig. 3, where we indicate, respectively with F h and F s , the hand and steering wheel reference frames. The origin of F s is placed at the center of the wheel, and α is the rotation around its z-axis, that points upward. Thus, positive values of α make the car turn left (i.e., lead to negative ω).
Neglecting the dynamics of the steering mechanism [START_REF] Mohellebi | Adaptive haptic feedback steering wheel for driving simulators[END_REF], assuming the front wheels orientation φ to be proportional to the steering wheel angle α, controlled by the driver hands, and finally assuming small angles ωl/v in (2), leads to:
α = k_α ω/v , (3)
with k_α a negative scalar³, characteristic of the car, accounting also for l.
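Putting (1)-(3) together, the steering-wheel reference can be computed from the desired angular velocity and the current speed as in the sketch below; the numerical values of k_α and c_M are placeholders, not identified parameters of the experimental car.

```python
def steering_angle(omega_des: float, v: float,
                   k_alpha: float = -15.0, c_max: float = 0.2) -> float:
    """Return the steering wheel angle alpha = k_alpha * omega / v (eq. 3),
    after clipping omega so that the curvature bound |omega / v| < c_M holds."""
    v = max(v, 1e-3)                               # avoid division by zero at standstill
    omega = max(-c_max * v, min(c_max * v, omega_des))
    return k_alpha * omega / v
```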
The gas pedal is modeled by its inclination angle ζ, that yields a given car acceleration a = dv/dt. According to experimental observations, at low velocities, the relationship between the pedal inclination and the car acceleration is linear:
\zeta = k_\zeta\, a.   (4)
The pedal is actuated by the motion of the robot foot, that is pushing it (see Fig. 4a).
Assuming small values of ∆q_a and ∆ζ, the point of contact between the foot and the pedal can be considered fixed on both the foot and the pedal, i.e., the length of the segment C_2 C_3 in Fig. 4b can be considered close to zero. Hence, the relationship between ∆q_a and ∆ζ is easily found to be
\Delta\zeta = \frac{l_a}{l_p}\,\Delta q_a,   (5)
where l a (l p ) is the distance of the ankle (pedal) rotation axis from the contact point of the foot with the pedal.
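As an illustration of relations (4)-(5), a minimal Python sketch follows (our own naming; the lever arms l_a, l_p and the gain k_zeta are hypothetical values, not quantities identified on the real car):

def pedal_from_ankle(dq_a, l_a=0.22, l_p=0.10):
    # eq. (5): small-angle mapping between ankle and pedal rotations
    return (l_a / l_p) * dq_a

def acceleration_from_pedal(zeta, k_zeta=0.05):
    # eq. (4): linear pedal inclination / acceleration model at low speed
    return zeta / k_zeta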
The robot body reference frame F b is placed on the robot chest, with x-axis pointing forward, and z-axis upward. Both the accelerations measured by the IMU, and the humanoid tasks, are expressed in this frame. We also indicate with F c the robot camera frame (see Fig. 2). Its origin is in the optical center of the camera, with z-axis coincident with the focal axis. The y-axis points downwards, and the x-axis completes the right-handed frame. F c is tilted by an angle γ (taken positive downwards) with respect to the frame F w , whereas the vector p w c = (x w c , y w c , z w c ) T indicates the position vector of the camera frame expressed in the car reference frame. Now, the driving task can be formulated. It consists in leading the car on the path, and aligning it with the path tangent:
(x, θ) → (0, 0) , (6)
while driving at a desired velocity:
v → v * . (7)
Task (6) is achieved by the steering control that uses the kinematic model (1), and is realized by the robot hand according to the steering angle α. Concurrently, (7) is achieved by the car velocity control, realized by the robot foot operating the gas pedal through the pedal angle ζ.
Perception
The block diagram of Fig. 1 shows our perception-action approach. At a higher level, the perception block, whose details are described in this Section, provides the feedback signals for the car and robot control.
Road detection
This Section describes the procedure used to derive the road visual features, required to control the steering wheel. These visual features are: (i) the vanishing point (V ), i.e., the intersection of the two borders, and (ii) the middle point (M ), i.e., the midpoint of the segment connecting the intersections of the borders with the image horizontal axis. Both are shown in Fig. 5.
Hence, road detection consists of extracting the road borders from the robot camera images. After this operation, deriving the vanishing and middle points is trivial. Since the focus of this work is not to advance the state-of-the-art on road/lane detection, but rather to propose a control architecture for humanoid car driving, we develop a simple image processing algorithm for road border extraction. More complex algorithms can be used to improve the detection and tracking of the road [START_REF] Liu | Vision-based real-time lane marking detection and tracking[END_REF][START_REF] Lim | Real-time implementation of vision-based lane detection and tracking[END_REF][START_REF] Meuter | A novel approach to lane detection and tracking[END_REF][START_REF] Nieto | Real-time lane tracking using Rao-Blackwellized particle filter[END_REF], or even to detect road markings [START_REF] Vacek | Road-marking analysis for autonomous vehicle guidance[END_REF]. However, our method has the advantage of being based solely on vision, avoiding the complexity induced by integration of other sensors [START_REF] Dahlkamp | Selfsupervised monocular road detection in desert terrain[END_REF][START_REF] Ma | Simultaneous detection of lane and pavement boundaries using model-based multisensor fusion[END_REF]. Note that more advanced software is proprietary to car manufacturers, and therefore hard to find in open-source or binary form.
Part of the road borders extraction procedure follows standard techniques used in the field of computer vision [START_REF] Laganière | OpenCV 2 Computer Vision Application Programming Cookbook: Over 50 recipes to master this library of programming functions for real-time computer vision[END_REF] and is based on the OpenCV library [START_REF] Bradski | The OpenCV library[END_REF] that provides ready-to-use methods for our vision-based algorithm. More in detail, the steps used for the detection of the road borders on the currently acquired image are described below, with reference to Fig. 6.
• From the image, a Region Of Interest (ROI), shown with white borders in Fig. 6a, is manually selected at the initialization, and kept constant during the driving experiment. Then, at each cycle of the image processing, we compute the average and standard deviation of hue and saturation channels of the HSV (Hue, Saturation and Value) color space on two central rectangular areas in the ROI. These values are considered for the thresholding operations described in the next step.
• Two binary images (Fig. 6b and 6c) are obtained by selecting the pixels in the ROI whose hue and saturation values are in the ranges (average ± standard deviation) defined in the previous step. This operation allows the road to be detected, while adapting to color variation. The HSV value channel is not considered, in order to be robust to luminosity changes.
• To remove "salt and pepper noise", the dilation and erosion operators are applied to the binary images. Then, the two images are merged by using the OR logic operator to obtain a mask of the road (Fig. 6d).
• The convex hull of the areas greater than a given threshold is computed on the mask found in the previous step; then, a Gaussian filter is applied for smoothing. The result is shown in Fig. 6e.
• The Canny edge detector (Fig. 6f), followed by Hough transform (Fig. 6g) are applied, to detect the line segments on the image.
• Similar segments are merged5 , as depicted in Fig. 6h.
This procedure gives two lines corresponding to the image projection of the road borders. However, in real working conditions, it may happen that one or both the borders are not detectable because of noise on the image, or failures in the detection process. For this reason, we added a recovery strategy, as well as a tracking procedure, to the pipeline. The recovery strategy consists in substituting the borders, that are not detected, with artificial ones, defined offline as oblique lines that, according to the geometry of the road and to the configuration of the camera, most likely correspond to the road borders. This allows the computation of the vanishing and middle point even when one (or both) real road borders are not correctly detected. On the other hand, the tracking procedure gives continuity and robustness to the detection process, by taking into account the borders detected on the previous image. It consists of a simple KF, with state composed of the slope and intercept of the two borders6 . In the prediction step, the KF models the position of lines on the image plane as constant (a reasonable design choice, under Assumption 3, of locally flat and straight road), whereas the measurement step uses the road borders as detected in the current image.
From the obtained road borders (shown in red in Fig. 6a), the vanishing and middle point are derived, with simple geometrical computations. Their values are then smoothed with a low-pass frequency filter, and finally fed to the steering control, that will be described in Sect. 5.1.
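The geometric part of this computation can be sketched in a few lines of Python with OpenCV (an illustrative reconstruction, not the code used on the robot: the adaptive color thresholding, morphology and segment-merging steps are omitted, and the Canny/Hough parameters are arbitrary):

import cv2
import numpy as np

def border_segments(gray_roi):
    # steps (f)-(g) of the pipeline: Canny edges followed by a probabilistic Hough transform
    edges = cv2.Canny(gray_roi, 50, 150)
    return cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                           minLineLength=40, maxLineGap=10)

def slope_intercept(x1, y1, x2, y2):
    m = (y2 - y1) / float(x2 - x1 + 1e-9)
    return m, y1 - m * x1

def vanishing_and_middle(left, right, y_row):
    # left/right are (slope, intercept) of the two borders; y_row is the image
    # row used as the "horizontal axis" for the middle point
    (m_l, b_l), (m_r, b_r) = left, right
    x_v = (b_r - b_l) / (m_l - m_r)         # abscissa of the borders' intersection
    x_l = (y_row - b_l) / m_l               # left border at the chosen row
    x_r = (y_row - b_r) / m_r               # right border at the chosen row
    return x_v, 0.5 * (x_l + x_r)

In practice, x_v and x_m would then be expressed with respect to the image center before being fed to the steering controller of Sect. 5.1.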
Car velocity estimation
To keep the proposed framework independent from the car characteristics, we propose to estimate the car speed v, by using only the robot sensors, and avoiding information coming from the car equipment, such as GPS, or speedometer. To this end, we use the robot camera, to measure the optical flow, i.e. the apparent motion of selected visual features, due to the relative motion between camera and scene.
The literature in the field of autonomous car control provides numerous methods for estimating the car speed by means of optical flow [START_REF] Giachetti | The use of optical flow for road navigation[END_REF][START_REF] Barbosa | Velocity estimation of a mobile mapping vehicle using filtered monocular optical flow[END_REF]. To improve the velocity estimate, the optical flow can be fused with inertial measurements, as done in the case of aerial robots, in [START_REF] Grabe | On-board velocity estimation and closedloop control of a quadrotor UAV based on optical flow[END_REF]. Inspired by that approach, we design a KF, fusing the acceleration measured by the robot IMU and the velocity measured with optical flow.
Considering the linear velocity and acceleration along the forward car axis y w as state ξ = (v a) T of the KF, we use a simple discrete-time stochastic model to describe the car motion:
\xi_{k+1} = \begin{bmatrix} 1 & \Delta T \\ 0 & 1 \end{bmatrix} \xi_k + n_k,   (8)
with ∆T the sampling time, and n k the zero-mean white gaussian noise. The corresponding output of the KF is modeled as:
η k = ξ k + m k , (9)
where m_k indicates the zero-mean white Gaussian noise associated with the measurement process. The state estimate is corrected thanks to the computation of the residual, i.e., the difference between the measured and predicted outputs. The measurement is based on both the optical flow (v_OF) and the output of the IMU accelerometers (a_IMU). Then, the estimate of the car velocity v corresponds to the first element of the state vector ξ. The process used to obtain v_OF and a_IMU is detailed below.
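A compact Python sketch of the resulting filter is given first (an illustration of ours; the default noise covariances are simple placeholders of the same order of magnitude as the values reported in Sect. 7):

import numpy as np

class CarVelocityKF:
    def __init__(self, dt, q=1e-4, r_v=1e2, r_a=1e2):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # state model (8), xi = (v, a)
        self.H = np.eye(2)                          # output model (9)
        self.Q = q * np.eye(2)
        self.R = np.diag([r_v, r_a])
        self.xi = np.zeros(2)
        self.P = np.eye(2)

    def step(self, v_of, a_imu):
        # prediction
        self.xi = self.F @ self.xi
        self.P = self.F @ self.P @ self.F.T + self.Q
        # correction through the residual between measured and predicted outputs
        z = np.array([v_of, a_imu])
        y = z - self.H @ self.xi
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.xi = self.xi + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.xi[0]                           # estimated car speed v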
Measure of the car speed with optical flow
To measure the car velocity v OF in the KF, we use optical flow. Optical flow can be used to reconstruct the motion of the camera, and from that, assuming that the transformation from the robot camera frame to the car frame is known, it is straightforward to derive the vehicle velocity.
More in detail, the 6D velocity vector v c of the frame F c can be related to the velocity of the point tracked in the image ẋp through the following relation:
ẋp = Lv c , (10)
where the interaction matrix L is expressed as follows [START_REF] Chaumette | Visual servo control, Part I: Basic approaches[END_REF]:
L = \begin{bmatrix} -\frac{S_x}{z_g} & 0 & \frac{x_p}{z_g} & \frac{x_p y_p}{S_y} & -\left(S_x + \frac{x_p^2}{S_x}\right) & \frac{S_x y_p}{S_y} \\ 0 & -\frac{S_y}{z_g} & \frac{y_p}{z_g} & S_y + \frac{y_p^2}{S_y} & -\frac{x_p y_p}{S_x} & -\frac{S_y x_p}{S_x} \end{bmatrix}.   (11)
Here, (x_p, y_p) are the image coordinates (in pixels) of the point on the ground, expressed as (x_g, y_g, z_g) in the camera frame (see Fig. 7). Furthermore, S_{x,y} = f α_{x,y}, where f is the camera focal length and α_x/α_y the pixel aspect ratio. In the computation of L, we consider that the image principal point coincides with the image center. As shown in Fig. 7b, the point depth z_g can be reconstructed from the image point ordinate y_p and the camera configuration (tilt angle γ and height z_c^w):
z_g = \frac{z_c^w \cos\varepsilon}{\sin(\gamma + \varepsilon)}, \qquad \varepsilon = \arctan\frac{y_p}{S_y}.   (12)
Actually, the camera velocity v_c is computed by taking into account n tracked points, i.e., in (10) we consider \bar{L} = (L_1 \cdots L_n)^T and \bar{\dot{x}}_p = (\dot{x}_{p,1} \cdots \dot{x}_{p,n})^T instead of L and \dot{x}_p. Then, v_c is obtained by solving a least-squares problem:

v_c = \arg\min_{\chi} \left\| \bar{L}\,\chi - \bar{\dot{x}}_p \right\|^2.   (13)
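The following Python fragment illustrates how (11)-(13) fit together (a sketch of our own; the pixel-coordinate form of the interaction matrix is our reconstruction, and all camera parameters are passed explicitly):

import numpy as np

def ground_depth(y_p, S_y, gamma, z_c):
    # eq. (12): depth of a ground point from its image ordinate, camera tilt and height
    eps = np.arctan(y_p / S_y)
    return z_c * np.cos(eps) / np.sin(gamma + eps)

def interaction_matrix(x_p, y_p, z_g, S_x, S_y):
    # 2x6 interaction matrix of an image point, principal point assumed at the image center
    return np.array([
        [-S_x / z_g, 0.0, x_p / z_g, x_p * y_p / S_y,
         -(S_x + x_p**2 / S_x), S_x * y_p / S_y],
        [0.0, -S_y / z_g, y_p / z_g, S_y + y_p**2 / S_y,
         -x_p * y_p / S_x, -S_y * x_p / S_x]])

def camera_velocity(points, flows, S_x, S_y, gamma, z_c):
    # stack the per-point matrices and flows, then solve the least-squares problem (13)
    L = np.vstack([interaction_matrix(xp, yp, ground_depth(yp, S_y, gamma, z_c), S_x, S_y)
                   for (xp, yp) in points])
    b = np.hstack([np.asarray(f, dtype=float) for f in flows])
    v_c, *_ = np.linalg.lstsq(L, b, rcond=None)
    return v_c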
The reconstruction of xp in ( 13) is based on the computation of the optical flow. However, during the navigation of the car, the vibration of the engine, poor textured views and other un-modeled effects add noise to the measurement process [START_REF] Giachetti | The use of optical flow for road navigation[END_REF]. Furthermore, other factors, such as variable light conditions, shadows, and repetitive textures, can jeopardize feature tracking. Therefore, raw optical flow, as provided by off-the-shelf algorithms -e.g., from the OpenCV library [START_REF] Bradski | The OpenCV library[END_REF], gives noisy data that are insufficient for accurate velocity estimation; so filtering and outlier rejection techniques must be added.
Since the roads are generally poor in features, we use a dense optical flow algorithm, that differs from sparse algorithms, in that it computes the apparent motion of all the pixels of the image plane. Then, we filter the dense optical flow, first according to geometric rationales, and then with an outlier rejection method [START_REF] Barbosa | Velocity estimation of a mobile mapping vehicle using filtered monocular optical flow[END_REF]. The whole procedure is described below, step-by-step:
• Take two consecutive images from the robot on-board camera.
• Consider only the pixels in a ROI that includes the area of the image plane corresponding to the road. This ROI is kept constant along all the experiment and, thus, identical for the two consecutive frames.
• Convert the frames to gray scale, apply a Gaussian filter, and equalize with respect to the histogram. This operation reduces the measurement noise, and makes the method more robust to light changes.
• Compute the dense optical flow, using the Farnebäck algorithm [START_REF] Farnebäck | chapter Two-Frame Motion Estimation Based on Polynomial Expansion[END_REF] implemented in OpenCV.
• Since the car is supposed to move forward, in the dense optical flow vector, consider only those elements pointing downwards on the image plane, and discard those not having a significant centrifugal motion from the principal point. Furthermore, consider only contributions with length between an upper and a lower threshold, and whose origin is on an image edge (detected applying Canny operator).
• Reject the outliers, i.e., the contributions ( ẋp,i , ẏp,i ), i ∈ {1, . . . , n}, such that ẋp,i / ∈ [ xp ± σ x ] and ẏp,i / ∈ [ ȳp ± σ y ], where xp ( ȳp ) and σ x (σ y ) are the average and standard deviation of the optical flow horizontal (vertical) contributions. This operation is made separately for the contributions of the right and left side of the image, where the module and the direction of the optical flow vectors can be quite different (e.g., on turns).
The final output of this procedure, xp , is fed to (13), to obtain v c , that is then low-pass filtered. To transform the velocity v c in frame F c , obtained from (13), into velocity v w in the car frame F w , we apply:
v w = W w c v c , (14)
with W w c the twist transformation matrix
W_c^w = \begin{bmatrix} R_c^w & S_c^w R_c^w \\ 0_{3\times 3} & R_c^w \end{bmatrix},   (15)
R w c the rotation matrix from car to camera frame, and S w c the skew symmetric matrix associated to the position p w c of the origin of F c in F w .
Finally, the speed of the car is set as the y-component of v w : v OF = v w,y . This will constitute the first component of the KF measurement vector.
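A condensed Python sketch of this measurement chain follows (illustrative only: the Farnebäck parameters are placeholders, the geometric filters are omitted, and only the statistical outlier rejection is shown):

import cv2
import numpy as np

def dense_flow(prev_bgr, curr_bgr, roi):
    x0, y0, w, h = roi
    gray = []
    for img in (prev_bgr, curr_bgr):
        patch = cv2.cvtColor(img[y0:y0 + h, x0:x0 + w], cv2.COLOR_BGR2GRAY)
        gray.append(cv2.equalizeHist(cv2.GaussianBlur(patch, (5, 5), 0)))
    # Farnebäck dense optical flow, returning a per-pixel (dx, dy) field
    return cv2.calcOpticalFlowFarneback(gray[0], gray[1], None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def reject_outliers(fx, fy):
    # keep contributions within one standard deviation of the mean, per axis
    keep = (np.abs(fx - fx.mean()) <= fx.std()) & (np.abs(fy - fy.mean()) <= fy.std())
    return fx[keep], fy[keep]

def car_forward_speed(v_c, R_wc, p_wc):
    # eqs. (14)-(15): map the 6D camera velocity into the car frame, take the y component
    px, py, pz = p_wc
    S = np.array([[0.0, -pz, py], [pz, 0.0, -px], [-py, px, 0.0]])
    W = np.block([[R_wc, S @ R_wc], [np.zeros((3, 3)), R_wc]])
    return (W @ v_c)[1]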
Measure of the car acceleration with robot accelerometers
The IMU mounted on-board the humanoid robot is used to measure acceleration, in order to improve the car velocity estimation through the KF. In particular, given the raw accelerometer data, we first compensate the gravity component, with a calibration executed at the beginning of each experiment8 . This gives a b , the 3D robot acceleration, expressed in the robot frame F b . Then, we transform a b in the car frame F w , to obtain:
a w = R w b a b , (16)
where R w b is the rotation matrix relative to the robot body -vehicle transformation. Finally, a IM U is obtained by selecting the y-component of a w . This will constitute the second component of the KF measurement vector.
Car control
The objective of car control is (i) to drive the rear wheel axis center W along the curvilinear path that is equally distant from the left and right road borders (see Fig. 2b), while aligning the car with the tangent to this path, and (ii) to track desired vehicle velocity v * . Basically, car control consists in achieving tasks ( 6) and ( 7), with the steering and car velocity controllers described in the following subsections.
Steering control
Given the visual features extracted from the images of the robot on-board camera, the visionbased steering controller generates the car angular velocity input ω to regulate both x and θ to zero. This reference input is eventually translated in motion commands for the robot hands.
The controller is based on the algorithm introduced by [START_REF] Toibero | Switching visual servoing approach for stable corridor navigation[END_REF] for unicycle corridor following, and recently extended to the navigation of humanoids in environments with corridors connected through curves and T-junctions [START_REF] Paolillo | Vision-based maze navigation for humanoid robots[END_REF]. In view of Assumption 3 in Sect. 2, the same algorithm can be applied here. For the sake of completeness, in the following, we briefly recall the derivation of the features model (that can be found, for example, also in [START_REF] Vassallo | Visual servoing and appearance for navigation[END_REF]) and the control law originally presented by [START_REF] Toibero | Switching visual servoing approach for stable corridor navigation[END_REF]. In doing so, we illustrate the adaptations needed to deal with the specificity of our problem.
The projection matrix transforming the homogeneous coordinates of a point, expressed in F p , to its homogeneous coordinates in the image, is:
P = K\, T_w^c\, T_p^w,   (17)
where K is the camera calibration matrix [START_REF] Ma | An Invitation to 3-D Vision: From Images to Geometric Models[END_REF], T c w the transformation from the car frame F w to F c , and T w p from the path frame F p to F w .
As intuitive from Fig. 2, the projection matrix depends on both the car coordinates, and the camera intrinsic and extrinsic parameters. Here, we assume that the camera principal point coincides with the image center, and we neglect image distortion. Furthermore, P has been computed neglecting the z-coordinates of the features, since they do not affect the control task. Under these assumptions, using P , the abscissas of the vanishing and middle point, respectively denoted by x v and x m , can be expressed as [START_REF] Toibero | Switching visual servoing approach for stable corridor navigation[END_REF][START_REF] Vassallo | Visual servoing and appearance for navigation[END_REF]:
x_v = k_1 \tan\theta, \qquad x_m = k_2 \frac{x}{c_\theta} + k_3 \tan\theta + k_4,   (18)
where
k_1 = -\frac{S_x}{c_\gamma}, \quad k_2 = -\frac{S_x s_\gamma}{z_c^w}, \quad k_3 = -S_x c_\gamma - \frac{S_x s_\gamma\, y_c^w}{z_c^w}, \quad k_4 = -\frac{S_x s_\gamma\, x_c^w}{z_c^w}.
We denote cos( * ) and sin( * ) with c * and s * , respectively. Note that with respect to the visual features model in [START_REF] Toibero | Switching visual servoing approach for stable corridor navigation[END_REF][START_REF] Vassallo | Visual servoing and appearance for navigation[END_REF], the expression of the middle point changes, due to the introduction of the lateral and longitudinal displacement, x w c and y w c respectively, of the camera frame with respect to the car frame. As a consequence, to regulate the car position to the road center, we must define a new visual feature xm = x m -k 4 . Then, the navigation task ( 6) is equivalent to the following visual task:
(\tilde{x}_m, x_v) \rightarrow (0, 0).   (19)
In fact, according to (18), asymptotic convergence of x v and xm to zero implies convergence of x and θ to zero, achieving the desired path following task.
Feedback stabilization of the dynamics of xm , is given by the following angular velocity controller [START_REF] Toibero | Switching visual servoing approach for stable corridor navigation[END_REF]:
\omega = \frac{k_1}{k_1 k_3 + \tilde{x}_m x_v} \left( -\frac{k_2}{k_1}\, v\, x_v - k_p\, \tilde{x}_m \right),   (20)
with k p a positive scalar gain. This controller guarantees asymptotic convergence of both xm and x v to zero, under the conditions that v > 0, and that k 2 and k 3 have the same sign, which is always true if (i) γ ∈ (0, π/2) and (ii) y w c > -z w c / tan γ, two conditions always verified with the proposed setup.
Note that this controller has been obtained considering the assumption of parallel road borders. Nevertheless, this assumption can be easily relaxed since we showed in [START_REF] Paolillo | Vision-based maze navigation for humanoid robots[END_REF] that the presence of non-parallel borders does not jeopardize the controller's local convergence.
To realize the desired ω in (20), the steering wheel must be turned according to (3):
\alpha = k_\alpha\, \frac{k_1}{k_1 k_3 + \tilde{x}_m x_v} \left( -\frac{k_2}{k_1}\, x_v - k_p\, \frac{\tilde{x}_m}{v} \right),   (21)
where \tilde{x}_m and x_v are obtained by the image processing algorithm of Sect. 4.1, while the value of v is estimated through the velocity estimation module presented in Sect. 4.2.
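For illustration, the steering law (20)-(21) reduces to a few lines of Python (a sketch under our notation; the camera parameters and the gains passed here are placeholders):

import numpy as np

def steering_command(x_v, xm_tilde, v, S_x, gamma, y_c, z_c, k_p=3.0, k_alpha=-5.0):
    c_g, s_g = np.cos(gamma), np.sin(gamma)
    k1 = -S_x / c_g
    k2 = -S_x * s_g / z_c
    k3 = -S_x * c_g - S_x * s_g * y_c / z_c
    # eq. (20): angular velocity stabilizing the visual features dynamics
    omega = k1 / (k1 * k3 + xm_tilde * x_v) * (-(k2 / k1) * v * x_v - k_p * xm_tilde)
    # eq. (21): corresponding steering-wheel angle, via relation (3)
    alpha = k_alpha * omega / v
    return omega, alpha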
Car velocity control
In view of the assumption of low acceleration, and by virtue of the linear relationship between the car acceleration and the pedal angle (eq. ( 4)), to track a desired car linear velocity v * we designed a PID feedback controller to compute the gas pedal command:
\zeta = k_{v,p}\, e_v + k_{v,i} \int e_v\, dt + k_{v,d}\, \frac{d e_v}{dt}.   (22)
Here, e v = (v * -v) is the difference between the desired and current value of the velocity, as computed by the car velocity estimation block, while k v,p , k v,i and k v,d are the positive proportional, integral and derivative gains, respectively. In the design of the velocity control law, we decided to insert an integral action to compensate for constant disturbances (like, e.g., the effect of a small road slope) at steady state. The derivative term helped achieving a damped control action. The desired velocity v * is set constant here.
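A discrete-time version of (22) can be sketched as follows (our own illustration; the saturation level zeta_max and the sampling time dt are placeholders):

import numpy as np

class GasPedalPID:
    def __init__(self, k_p, k_i, k_d, dt, zeta_max):
        self.k_p, self.k_i, self.k_d = k_p, k_i, k_d
        self.dt, self.zeta_max = dt, zeta_max
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, v_des, v_est):
        e = v_des - v_est
        self.integral += e * self.dt
        derivative = (e - self.prev_error) / self.dt
        self.prev_error = e
        zeta = self.k_p * e + self.k_i * self.integral + self.k_d * derivative  # eq. (22)
        return float(np.clip(zeta, 0.0, self.zeta_max))  # saturated as in Sect. 6.2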
Robot control
This section presents the lower level of our controller, which enables the humanoid robot to turn the driving wheel by α, and push the pedal by ζ.
Wheel operation
The reference steering angle α is converted to the reference pose of the hand grasping the wheel, through the rigid transformation
T_h^{b*} = T_s^b(\alpha)\, T_h^s(r, \beta).
Here, T b * h and T b s are the transformation matrices expressing respectively the poses of frames F h and F s in Fig. 3 with respect to F b in Fig. 2a. Constant matrix T s h expresses the pose of F h with respect to F s , and depends on the steering wheel radius r, and on the angle β parameterizing the hand position on the wheel.
For a safe interaction between the robot hand and the steering wheel, it is obvious to think of an admittance or impedance controller, rather than solely a force or position controller [START_REF] Hogan | Impedance control -An approach to manipulation. I -Theory. II -Implementation. III -Applications[END_REF]. We choose to use the following admittance scheme:
f -f * = M ∆ẍ + B∆ ẋ + K∆x, (23)
where f and f * are respectively the sensed and desired generalized interaction forces in F h ; M , B and K ∈ R 6×6 are respectively the mass, damping and stiffness diagonal matrices. As a consequence of the force f applied on F h , and on the base of the values of the admittance matrices, ( 23) generates variations of pose ∆x, velocity ∆ ẋ and acceleration ∆ẍ of F h with respect to F s . Thus, the solution of ( 23) leads to the vector ∆x that can be used to compute the transformation matrix ∆T , and to build up the new desired pose for the robot hands:
T_h^b = T_h^{b*} \cdot \Delta T.   (24)
In cases where the admittance controller is not necessary, we simply set ∆T = I.
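With diagonal admittance matrices, (23) can be integrated numerically axis by axis; a minimal Python sketch (illustrative only, explicit Euler integration, our own variable names) is:

import numpy as np

def admittance_step(f, f_des, dx, dx_dot, M, B, K, dt):
    # f, f_des: measured and desired 6D generalized forces in F_h
    # M, B, K: diagonals of the mass, damping and stiffness matrices (6-vectors)
    dx_ddot = (f - f_des - B * dx_dot - K * dx) / M   # eq. (23) solved for the acceleration
    dx_dot = dx_dot + dx_ddot * dt
    dx = dx + dx_dot * dt
    return dx, dx_dot   # pose offset and rate; dx is used to build Delta T in (24)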
Pedal operation
Since there exists a linear relationship between the variation of the robot ankle and the variation of the gas pedal angle, to operate the gas pedal it is sufficient to move the ankle joint angle q a . From ( 22), we compute the command for the robot ankle's angle as:
q_a = \frac{\zeta}{\zeta_{\max}} \left( q_{a,\max} - q_{a,\min} \right) + q_{a,\min}.   (25)
Here, q a,max is the robot ankle configuration, at which the foot pushes the gas pedal, producing a significant car acceleration. Instead, at q a = q a,min , the foot is in contact with the pedal, but not yet pushing it. These values depend both on the car type, and on the position of the foot with respect to the gas pedal. A calibration procedure is run before starting driving, to identify the proper values of q a,min and q a,max . Finally, ζ max is set to avoid large accelerations, while saturating the control action.
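The ankle command (25) is a simple affine rescaling; in Python (a sketch: zeta_max is a placeholder, while the values of q_a,min and q_a,max below are the calibration results of the first experiment in Sect. 7.1):

import numpy as np

def ankle_command(zeta, zeta_max, q_a_min, q_a_max):
    zeta = np.clip(zeta, 0.0, zeta_max)                       # saturate the pedal command
    return zeta / zeta_max * (q_a_max - q_a_min) + q_a_min    # eq. (25)

# example with the calibration of the first experiment
q_a = ankle_command(zeta=0.1, zeta_max=0.2, q_a_min=-0.5, q_a_max=-0.44)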
Humanoid task-based control
As shown above, wheel and pedal operation are realized respectively in the operational space (by defining a desired hand pose T_h^b) and in the articular space (via the desired ankle joint angle q_a). Both can be realized using our task-based quadratic programming (QP) controller, assessed in complex tasks such as ladder climbing [START_REF] Vaillant | Multi-contact vertical ladder climbing with an HRP-2 humanoid[END_REF]. The joint angles and the desired hand pose are formulated as errors that appear among the sum of weighted least-squares terms in the QP cost function. Other intrinsic robot constraints are formulated as linear expressions of the QP variables, and appear in the constraints. The QP controller is solved at each control step. The QP variable vector x = (q̈^T, λ^T)^T gathers the joint accelerations q̈ and the linearized friction cones' base weights λ, such that the contact forces f are equal to K_f λ (with K_f the discretized friction cone matrix). The desired acceleration q̈ is integrated twice to feed the low-level built-in PD control of HRP-2Kai. The driving task with the QP controller writes as follows:

\min_{x} \; \sum_{i=1}^{N} w_i \left\| E_i(q, \dot{q}, \ddot{q}) \right\|^2 + w_\lambda \|\lambda\|^2

subject to: 1) dynamic constraints; 2) sustained contact positions; 3) joint limits; 4) non-desired collision avoidance constraints; 5) self-collision avoidance constraints,
where w_i and w_λ are task weights or gains, and E_i(q, \dot{q}, \ddot{q}) is the error in the task space. Details on the QP constraints (since they are common to most tasks) can be found in [START_REF] Vaillant | Multi-contact vertical ladder climbing with an HRP-2 humanoid[END_REF].
Here, we make explicit the tasks used specifically during driving (i.e., after the driving posture is reached). We use four (N = 4) set-point objective tasks; each task i is defined by its associated task-error ε_i, so that

E_i = K_{p_i}\, \varepsilon_i + K_{v_i}\, \dot{\varepsilon}_i + \ddot{\varepsilon}_i.
The driving wheel of the car has been modeled as another 'robot' having one joint (rotation).
We then merged the model of the driving wheel to that of the humanoid and linked them, through a position and orientation constraint, so that the desired driving wheel steering angle α, as computed by ( 24), induces a motion on the robot (right arm) gripper. The task linking the humanoid robot to the driving wheel 'robot' is set as part of the QP constraints, along with all sustained contacts (e.g. buttock on the car seat, thighs, left foot).
The steering angle α (i.e. the posture of the driving wheel robot) is a set-point task (E 1 ). The robot whole-body posture including the right ankle joint control (pedal) is also a setpoint task (E 2 ), which realizes the angle q a provided by ( 25). Additional tasks were set to keep the gaze direction constant (E 3 ), and to fix the left arm, to avoid collisions with the car cockpit during the driving operation (E 4 ).
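The unconstrained core of this optimization can be sketched in Python as a weighted least-squares problem (a strong simplification of ours: the task Jacobians J_i and the products J̇_i q̇ are assumed given, and the dynamic, contact and collision constraints of the real QP are omitted):

import numpy as np

def setpoint_task_error(eps, eps_dot, K_p, K_v):
    # desired task-space acceleration of a set-point objective, K_p eps + K_v eps_dot
    return K_p * eps + K_v * eps_dot

def weighted_task_least_squares(J_list, Jdot_qdot_list, E_des_list, w_list):
    # min over qdd of sum_i w_i || J_i qdd + Jdot_i qdot - E_i_des ||^2
    A = np.vstack([np.sqrt(w) * J for J, w in zip(J_list, w_list)])
    b = np.hstack([np.sqrt(w) * (e - jq)
                   for jq, e, w in zip(Jdot_qdot_list, E_des_list, w_list)])
    qdd, *_ = np.linalg.lstsq(A, b, rcond=None)
    return qdd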
Experimental results
We tested our driving framework with the full-size humanoid robot HRP-2Kai built by Kawada Industries. For the experiments, we used the Polaris Ranger XP900, the same utility vehicle employed at the DRC. HRP-2Kai has 32 degrees of freedom, is 1.71 m tall and weighs 65 kg. It is equipped with an Asus Xtion Pro 3D sensor, mounted on its head and used in this work as a monocular camera. The Xtion camera provides images at 30 Hz with a resolution of 640 × 480 pixels. From camera calibration, it results S x S y = 535 pixels. In the presented experiments, x w c = -0.4 m, y w c = 1 m and z w c = 1.5 m were manually measured. However, it would be possible to estimate the robot camera position, with respect to the car frame, by localization of the humanoid [START_REF] Oriolo | Humanoid odometric localization integrating kinematic, inertial and visual information[END_REF], or by using the geometric information of the car (that can be known, e.g., in the form of a CAD model, as shown in Fig. 2). HRP-2Kai is also equipped with an IMU (of rate 500 Hz) located in the chest. Accelerometer data have been merged with the optical flow to estimate the car linear velocity, as explained in Sect. 4.2. Furthermore, a built-in filter processes the IMU data to provide an accurate measurement of the robot chest orientation. This is kinematically propagated up to the Xtion sensor to get γ, the tilt angle of the camera with respect to the ground.
The task-based control is realized through the QP framework (see Sect. 6.3), which allows different tasks to be easily set and achieved concurrently by the robot. Table 2 gives the weights of the 4 set-point tasks described in Sect. 6.3; note that K_{v_i} = 2 × K_{p_i}. As for the gains in Section 5, we set k_{v,p} = 10^{-8}, k_{v,d} = 3 · 10^{-9} and k_{v,i} = 2 · 10^{-9} to track the desired car velocity v*, whereas in the steering wheel controller we chose the gain k_p = 3, and we set the parameter k_α = -5. While the controller gains have been chosen as a tradeoff between reactivity and control effort, the parameter k_α was roughly estimated. Given the considered scenario, an exact knowledge of this parameter is generally not possible, since it depends on the car characteristics. It is however possible to show that, at the kinematic level, this kind of parameter uncertainty induces a non-persistent perturbation on the nominal closed-loop dynamics.
Proving the boundedness of the perturbation term induced by parameter uncertainties would allow to conclude about the local asymptotic stability of the perturbed system. In general, this would imply a bound on the parameter uncertainty, to be satisfied to preserve local stability. While this analysis is beyond the scope of this paper, we note also that in practice it is not possible to limit the parameter uncertainty, that depends on the car and the environment characteristics. Therefore, we rely on the experimental verification of the visionbased controller robustness, delegating to the autonomous-assisted-teleoperated framework the task of taking the autonomous mode controller within its region of local asymptotic stability. In other words, when the system is too far from the equilibrium condition, and convergence of the vision-based controller could be compromised, due to model uncertainties and unexpected perturbations, the user can always resort to the other driving modes.
In the KF used for the car velocity estimation, the process and measurement noise covariance matrices are set to diag(10^{-4}, 10^{-4}) and diag(10^{2}, 10^{2}), respectively. Since the forward axis of the robot frame is aligned with the forward axis of the vehicle frame, to get a_IMU we did not apply the transformation (16), but simply collected the acceleration along the forward axis of the robot frame, as given by the accelerometers. The sampling time of the KF was set to ∆T = 0.002 s (500 Hz being the frequency of the IMU measurements). The cut-off frequencies of the low-pass filters applied to the visual features and to the car velocity estimate were set to 8 and 2.5 Hz, respectively.
At the beginning of each campaign of experiments, we arrange the robot in the correct driving posture in the car as shown in Fig. 9a. This posture (except for the driving leg and arm) is assumed constant during driving: all control parameters are kept constant. At initialization, we also correct eventual bad orientations of the camera with respect to the ground plane, by applying a rotation to the acquired image, and by regulating the pitch and yaw angles of the robot neck, so as to align the focal axis with the forward axis of the car reference frame. The right foot is positioned on the gas pedal, and the calibration procedure described in Sect. 6.2 is used to obtain q a,max and q a,min .
To ease full and stable grasping of the steering wheel, we designed a handle, fixed to the wheel (visible in Fig. 9a), allowing the alignment of the wrist axis with that of the steer. With reference to Fig. 3, this corresponds to configuring the hand grasp with r = 0 and, to comply with the shape of the steering wheel, β = 0.57 rad. Due to the robot kinematic constraints, such as joint limits and auto-collisions avoidance, imposed by our driving configuration, the range of the steering angle α is restricted from approximately -2 rad to 3 rad. These limits cause bounds on the maximum curvature realizable by the car. Nevertheless, all of the followed paths were compatible with this constraint. For more challenging maneuvers, grasp reconfiguration should be integrated in the framework.
With this grasping setup, we achieved a good alignment between the robot hand and the steering wheel. Hence, during driving, the robot did not violate the geometrical constraints imposed by the steering wheel mechanism. In this case, the use of the admittance control for safe manipulation is not necessary. However, we showed in [START_REF] Paolillo | Toward autonomous car driving by a humanoid robot: A sensor-based framework[END_REF], that the admittance control can be easily plugged in our framework, whenever needed. In fact, in that work, an HRP-4, from Kawada Industries, turns the steering wheel with a more 'humanlike' grasp (r = 0.2 m and β = 1.05 rad, see Fig. 8a). Due to the characteristics of both the grasp and the HRP-4 hand, admittance control is necessary. For sake of completeness, we report, in Fig. 8b-8d, plots of the admittance behavior relative to that experiment. In particular, to have good tracking of the steering angle α, while complying with the steering wheel geometric constraint, we designed a fast (stiff) behavior along the z-axis of the hand frame, F h , and a slow (compliant) along the x and y-axes. To this end, we set the admittance parameters: m x = m y = 2000 kg, m z = 10 kg, b x = b y = 1600 kg/s, b z = 240 kg/s, and k x = k y = 20 kg/s 2 , k z = 1000 kg/s 2 . Furthermore, we set the desired forces f * x = f * z = 0 N, while along the y-axis of the hand frame f * y = 5 N, to improve the grasping stability. Note that the evolution of the displacements along the x and y-axes (plots in Fig. 8b-8c), are the results of a dynamic behavior that filters the high frequency of the input forces, while along the z-axis the response of the system is more reactive.
In the rest of this section, we present the HRP-2Kai outdoor driving experiments. In particular, we present the results of the experiments performed at the authorized portion of the AIST campus in Tsukuba, Japan. A top view of this experimental field is shown in Fig. 9b. The areas highlighted in red and yellow correspond to the paths driven using the autonomous and teleoperated mode, respectively, as further described below. Furthermore, we present an experiment performed at the DRC finals, showing the effectiveness of the assisted driving mode. For a quantitative evaluation of the approach, we present the plots of the variables of interest. The same experiments are shown in the video available at https://youtu.be/SYHI2JmJ-lk, which also allows a qualitative evaluation of the online image processing. Quantitatively, we successfully carried out 14 out of 15 repetitions, executed at different times between 10:30 a.m. and 4 p.m., proving image processing robustness in different light conditions.
First experiment: autonomous car driving
In the first experiment, we tested the autonomous mode, i.e., the effectiveness of our framework to make a humanoid robot drive a car autonomously. For this experiment, we choose v * = 1.2 m/s, while the foot calibration procedure gave q a,max = -0.44 rad and q a,min = -0.5 rad.
Figure 10 shows eight snapshots taken from the video of the experiment. The car starts with an initial lateral offset, that is corrected after a few meters. The snapshots (as well as the video) of the experiment show that the car correctly travels at the center of a curved path, for about 100 m. Furthermore, one can observe that the differences in the light conditions (due to the tree shadows) and in the color of the road, do not jeopardize the correct detection of the borders and, consequently, the driving performance.
Figure 11 shows the plots related to the estimation of the car speed, as described in Sect. 4.2. On the top, we plot a IMU , the acceleration along the forward axis of the car, as reconstructed from the robot accelerometers. The center plot shows the car speed measured with the optical flow-based method (v OF ), whereas the bottom plot gives the trace of the car speed v obtained by fusing a IMU and v OF . Note that the KF reduces the noise of the v OF signal, a very important feature for keeping the derivative action in the velocity control law (22).
As is well known, reconstruction from vision (e.g., the "structure from motion" problem) suffers from a scale ambiguity in the translation vector estimate [START_REF] Ma | An Invitation to 3-D Vision: From Images to Geometric Models[END_REF]. This issue, due to the loss of information when 3D data are mapped to 2D images, is also present in optical flow velocity estimation methods. Here, this can lead to a scaled estimate of the car velocity. For this reason, we decided to include another sensor measurement in the estimation process: the acceleration provided by the IMU. Note, however, that in the current state of the work, the velocity estimate accuracy has only been evaluated qualitatively. In fact, high accuracy is only important in the transient phases (initial error recovery and curve negotiation). Instead, it can be easily shown that the perturbation induced by velocity estimate inaccuracy on the features dynamics vanishes at the regulation point corresponding to the desired driving task, and that by limiting the uncertainty on the velocity value, it is possible to preserve local stability. In fact, the driving performance showed that the estimation was accurate enough for the considered scenario. In different conditions, finer tuning of the velocity estimator may be necessary.
Plots related to the steering wheel control are shown in Fig. 12a. The steering control is activated about 8 s after the start of the experiment and, after a transient time of a few seconds, it leads the car to the road center. Thus, the middle and vanishing points (the top and center plots, respectively) correctly converge to the desired values, i.e., x m goes to k 4 = 30 pixels (since γ = 0.2145 rad -see expression of k 4 in Sect. 5.1), and x v to 0. The bottom plot shows the trend of the desired steering command α, as computed from the visual features, and from the estimated car speed according to (21). The same signal, reconstructed from the encoders (black dashed line) shows that the steering command is smoothed by the task-based quadratic programming control, avoiding undesirable fast signal variations.
Fig. 12b presents the plots of the estimated vs desired car speed (top) and the ankle angle command sent to the robot to operate the gas pedal and drive the car at the desired velocity (bottom).
Also in this case, after the initial transient, the car speed converges to the nominal desired values (no ground truth was available). The oscillations observable at steady state are due to the fact that the resolution of the ankle joint is coarser than that of the gas pedal. Note, in fact, that even if the robot ankle moves in a small range, the car speed changes significantly. The noise on the ankle command, as well as the initial peak, are due to the derivative term of the gas pedal control (22). However, the signal is smoothed by the task-based quadratic programming control (see the dashed black line, i.e., the signal reconstructed from the encoders). In the same campaign of experiments, we performed ten autonomous car driving experiments.
In nine of them (including the one presented just above), the robot successfully drove the car for the entire path. One of the experiments failed due to a critical failure of the image processing. It was not possible to perform experiments on other tracks (with different road shapes and environmental conditions), because our application was rejected after complex administrative paperwork required to access other roads in the campus.
Second experiment: switching between teleoperated and autonomous modes
In some cases, the conditions ensuring the correct behaviour of the autonomous mode are risky. Thus, it is important to allow a user to supervise the driving operation, and control the car if required. As described in Sect. 2, our framework allows a human user to intervene at any time, during the driving operation, to select a particular driving strategy. The second experiment shows the switching between the autonomous and teleoperated modes.
In particular, in some phases of the experiment, the human takes control of the robot, by selecting the teleoperated mode. In these phases, proper commands are sent to the robot, to drive the car along two very sharp curves, connecting two straight roads traveled in autonomous mode. Snapshots of this second experiment are shown in Fig. 13.
For this experiment we set v * = 1.5 m/s, while after the initial calibration of the gas pedal, q a,min = -0.5 rad and q a,max = -0.43 rad. Note that the difference in the admissible ankle range with respect to the previous experiment is due to a slightly different position of the robot foot on the gas pedal.
Figure 14a shows the signals of interest for the steering control. In particular, one can observe that when the control is enabled (shadowed areas of the plots) there is the same correct behavior of the system seen in the first experiment. When the user asks for the teleoperated mode (non-shadowed areas of the plots), the visual features are not considered, and the steering command is sent to the robot via keyboard or joystick by the user. Between 75 and 100 s, the user controlled the robot (in teleoperated mode) to make it steer on the right as much as possible. Because of the kinematic limits and of the grasping configuration, the robot saturated the steering angle at about -2 rad even if the user asked a wider steering. This is evident on the plot of the steering angle command of Fig. 14a (bottom): note the difference between the command (blue continuous curve), and the steering angle reconstructed from the encoders (black dashed curve).
Similarly, Fig. 14b shows the gas pedal control behavior when switching between the two modes. When the gas pedal control is enabled, the desired car speed is properly tracked by operating the robot ankle joint (shadowed areas of the top plot in Fig. 14b). On the other hand, when the control is disabled (non-shadowed areas of the plots), the ankle command (blue curve in Fig. 14b, bottom), as computed by (25), is not considered, and the robot ankle is teleoperated with the keyboard/joystick interface, as noticeable from the encoder plot (black dashed curve). At the switching between the two modes, the control keeps sending commands to the robot without any interruption, and the smoothness of the signals allows to have continuous robot operation. In summary, the robot could perform the entire experiment (along a path of 130 m ca., for more than 160 s) without the need to stop the car. This was achieved thanks to two main design choices. Firstly, from a perception viewpoint, monocular camera and IMU data are light to be processed, allowing a fast and reactive behavior. Secondly, the control framework at all the stages (from the higher level visual control to the low level kinematic control) guarantees smooth signals, even at the switching moments.
The same experiment presented just above was performed five other times during the same day. Four experiments were successful, while two failed due to human errors during teleoperation.
Third experiment: assisted driving at the DRC finals
The third experiment shows the effectiveness of the assisted driving mode. This strategy was used to make the robot drive at the DRC finals, where the first of the eight tasks consisted in driving a utility vehicle along a straight path, with two sets of obstacles. We successfully completed the task by using the assisted mode. Snapshots taken from the DRC finals official video [START_REF] Darpatv | Team AIST-NEDO driving on the second day of the DRC finals[END_REF] are shown in Fig. 15. The human user teleoperated HRP-2Kai remotely, by using the video stream from the robot camera as the only feedback from the challenge field. In the received images, the user selected, via mouse, the proper artificial road borders (red lines in the figure), to steer the car along the path. Note that these artificial road borders, manually set by the user, may not correspond to the real borders of the road. In fact, they just represent geometrical references -more intuitive for humans -to easily define the vanishing and middle points and steer the car by using (21). Concurrently, the robot ankle was teleoperated to achieve a desired car velocity. In other words, with reference to the block diagram of Fig. 1, the user provides the visual features to the steering control, and the gas pedal reference to the pedal operation block. Basically, s/he takes the place of the road detection and car velocity estimation/control blocks. The assisted mode could be seen as a sort of shared control between the robot and a human supervisor, and allows the human to interfere with the robot operation if required. As stated in the previous section, at any time during the execution of the driving experience, the user can instantly and smoothly switch to one of the other two driving modes. At the DRC, we used a wide angle camera, although the effectiveness of the assisted mode was also verified with an Xtion camera.
Conclusions
In this paper, we have proposed a reactive control architecture for car driving by a humanoid robot on unkown roads. The proposed approach consists in extracting road visual features, to determine a reference steering angle to keep the car at the center of a road. The gas pedal, operated by the robot foot, is controlled by estimating the car speed using visual and inertial data. Three different driving modes (autonomous, assisted, and teleoperated) extend the versatility of our framework. The experimental results carried out with the humanoid robot HRP-2Kai have shown the effectiveness of the proposed approach. The assisted mode was successfully used to complete the driving task at the DRC finals.
The driving task has been addressed as an illustrative case study of humanoids controlling human-tailored devices. In fact, besides the achievement of the driving experience, we believe that humanoids are the most sensible platforms for helping humans with everyday tasks, and the proposed work shows that complex real-world tasks can actually be performed in an autonomous, assisted or teleoperated way. Obviously, the complexity of the task also comes with the complexity of the framework design, from both the perception and control points of view. This led us to make some working assumptions that, in some cases, limited the range of application of our methods.
Further investigations shall deal with the task complexity, to advance the state of the art of the algorithms, and make humanoids capable of helping humans with dirty, dangerous and demanding jobs. Future work will aim at making the autonomous mode work efficiently in the presence of sharp curves. To this end, and to overcome the problem of limited steering motions, we plan to include in the framework the planning of variable grasping configurations, to achieve more complex manoeuvres. We also plan to address driving on uneven terrain, where the robot also has to sustain its attitude with respect to sharp changes of the car orientation. Furthermore, the introduction of obstacle avoidance algorithms based on optical flow will improve driving safety. Finally, we plan to add brake control and to perform the entire driving task, including car ingress and egress.
Figure 1: Conceptual block diagram of the driving framework.
Figure 2: Side (a) and top view (b) of a humanoid robot driving a car, with the relevant variables.
Figure 3: The steering wheel, with rotation angle α, and the hand and steering frames, F_h and F_s. The parameters r, the radius of the wheel, and β, characterizing the grasp configuration, are also shown.
Figure 4: (a) The robot foot operates the gas pedal by regulating the ankle joint angle q_a, to set a pedal angle ζ and yield a car acceleration a. (b) Geometric relationship between the ankle and the gas pedal angles.
Figure 5: The images of the road borders define the middle and vanishing points, respectively M and V. Their abscissa values are denoted by x_m and x_v.
Figure 6: Main steps of the road detection algorithm. (a) On-board camera image with the detected road borders in red; the vanishing and middle points are shown in cyan and green, respectively. (b) First color detection. (c) Second color detection. (d) Mask obtained after dilation and erosion. (e) Convex hull after Gaussian filtering. (f) Canny edge detection. (g) Hough transform. (h) Merged segments. Although the acquired robot image (a) is shown in gray-scale here, the algorithm processes color images.
Figure 7: Schematic representation of the robot camera looking at the road. (a) Any visible Cartesian point (x_g, y_g, z_g) on the ground has a projection on the camera image plane, whose coordinates expressed in pixels are (x_p, y_p). (b) The measurement of this point on the image plane, together with the camera configuration parameters, can be used to estimate the depth z_g of the point.
Figure 8: Left: setup of an experiment that requires admittance control on the steering hand. Right: output of the admittance controller in the hand frame during the same experiment.
Figure 9: (a) The posture taken by HRP-2Kai during the experiments and (b) a top view of the experimental area at the AIST campus.
Figure 10: First experiment: autonomous car driving.
Figure 11: First experiment: autonomous car driving. Acceleration a_IMU measured with the robot IMU (top), linear velocity v_OF measured with the optical flow (center), and car speed v estimated by the KF (bottom).
Figure 12: First experiment: autonomous car driving. (a) Middle point abscissa x_m (top), vanishing point abscissa x_v (center), and steering angle α (bottom). (b) Car speed v (top), already shown in Fig. 10, and ankle joint angle q_a (bottom).
Figure 13: Second experiment: switching between teleoperated and autonomous modes.
Figure 14: Second experiment: switching between teleoperated and autonomous modes. (a) Middle point abscissa x_m (top), vanishing point abscissa x_v (center), and steering angle α (bottom). (b) Car speed v (top) and ankle joint angle q_a (bottom).
Figure 15: Third experiment: assisted driving mode at the DRC finals. Snapshots taken from the DRC official video.
Table 1: Driving modes. For each mode, the steering and the car velocity control are enabled or disabled as indicated.

Driving mode    Steering control    Car velocity control
Autonomous      enabled             enabled
Assisted        enabled*            disabled
Teleoperated    disabled            disabled

* Road detection is assisted by the human.
Table 2: QP weights and set-point gains.

        E_1     E_2                 E_3     E_4
w       100     5                   1000    1000
K_p     5       1 (ankle = 100)     10      10
The assumption on parallel road borders can be relaxed, as proved in (Paolillo et al., 2016). We maintain the assumption here to keep the description of the controller simpler, as will be shown in Sect. 5.1.
Bounds on the front wheels orientation characterizing common service cars induce the maximum curvature constraint in (1).
For the sake of clarity, in Fig. 4b the length of the segment C_2 C_3 is much bigger than zero. However, this length, along with the angles ∆q_a and ∆ζ, is almost null.
For details on this step, refer to[START_REF] Paolillo | Vision-based maze navigation for humanoid robots[END_REF].
Although 3 parameters are sufficient if the borders are parallel, a 4-dimensional state vector will cover all cases, while guaranteeing robustness to image processing noise.
To solve the least-square problem, n ≥ 3 points are necessary. In our implementation, we used the openCV solve function, and in order to filter the noise due to few contributions, we set n ≥ 25. If n < 25, we set v c = 0.
The assumption on horizontal road in Sect. 3 avoids the need for repeating this calibration.
Acknowledgments
This work is supported by the EU FP7 strep project KOROIBOT www.koroibot.eu, and by the Japan Society for Promotion of Science (JSPS) Grant-in-Aid for Scientific Research (B) 25280096. This work was also in part supported by the CNRS PICS Project ViNCI. The authors deeply thank Dr. Eiichi Yoshida for taking in charge the administrative procedures in terms of AIST clearance and transportation logistics, without which the experiments could not be conducted; Dr Fumio Kanehiro for lending the car and promoting this research; Hervé Audren and Arnaud Tanguy for their kind support during the experiments. | 72,573 | [
"1003690",
"935991",
"6566",
"1003691",
"176001"
] | [
"226175",
"395113",
"226175",
"395113",
"244844",
"226175",
"395113"
] |
00148575 | en | [
"phys"
] | 2024/03/04 23:41:48 | 2007 | https://hal.science/hal-00148575/file/Oberdisse_RMC_SoftMatter2007.pdf | Julian Oberdisse
Peter Hine
Wim Pyckhout-Hintzen
Structure of interacting aggregates of silica nanoparticles in a polymer matrix: Small-angle scattering and Reverse Monte-Carlo simulations
Reinforcement of elastomers by colloidal nanoparticles is an important application where microstructure needs to be understood -and if possible controlled -if one wishes to tune macroscopic mechanical properties. Here the three-dimensional structure of large aggregates of nanometric silica particles embedded in a soft polymeric matrix is determined by Small Angle Neutron Scattering. Experimentally, the crowded environment leading to strong reinforcement induces a strong interaction between aggregates, which generates a prominent interaction peak in the scattering. We propose to analyze the total signal by means of a decomposition into a classical colloidal structure factor describing aggregate interaction and an aggregate form factor determined by a Reverse Monte Carlo technique. The result gives new insights into the shape of aggregates and their complex interaction in elastomers. For comparison, fractal models for aggregate scattering are also discussed.
Figures: 10

Tables: 3

I. INTRODUCTION
There is an intimate relationship between microscopic structure and mechanical properties of composite materials [1][START_REF] Nielsen | Mechanical Properties of Polymers and Composite[END_REF][START_REF] Frohlich | [END_REF][4][5]. Knowledge of both is therefore a prerequisite if one wishes to model this link [6][7][8]. A precise characterization of the three-dimensional composite structure, however, is usually difficult, as it has often to be reconstructed from two-dimensional images made on surfaces, cuts or thin slices, using electron microscopy techniques or Atomic Force Microscopy [9][10][11]. Scattering is a powerful tool to access the bulk structure in a nondestructive way [START_REF] Neutrons | X-ray and Light: Scattering Methods Applied to Soft Condensed Matter[END_REF][START_REF] Peterlik | [END_REF]. X-ray scattering is well suited for many polymer-inorganic composites [14][15][16], but neutron scattering is preferred here due to the extended q-range (with respect to standard x-ray lab-sources), giving access to length scales between some and several thousand Angstroms. Also, cold neutrons penetrate more easily macroscopically thick samples, and they offer the possibility to extract the conformation of polymer chains inside the composite in future work [17]. Small Angle Neutron Scattering (SANS) is therefore a method of choice to unveil the structure of nanocomposites. This article deals with the structural analysis by SANS of silica aggregates in a polymeric matrix. Such structures have been investigated by many authors, often with the scope of mechanical reinforcement [18][19][20][21], but sometimes also in solution [22][23][24]. One major drawback of scattering methods is that the structure is obtained in reciprocal space. It is sometimes possible to read off certain key features like fractal dimensions directly from the intensity curves, and extensive modeling can be done, e.g. in the presence of a hierarchy of fractal dimensions, using the famous Beaucage expressions [25]. Also, major progress has been made with inversion to real space data [26]. Nonetheless, complex structures like interacting aggregates of filler particles embedded in an elastomer for reinforcement purposes are still an important challenge. The scope of this article is to report on recent progress in this field.
II. MATERIALS AND METHODS
II.1 Sample preparation.
We briefly recall the sample preparation, which is presented in [27]. The starting components are aqueous colloidal suspensions of silica from Akzo Nobel (Bindzil 30/220 and Bindzil 40/130), and nanolatex polymer beads. The latter was kindly provided by Rhodia. It is a core-shell latex of randomly copolymerized Poly(methyl methacrylate) (PMMA) and Poly(butyl acrylate) (PBuA), with some hydrophilic polyelectrolyte (methacrylic acid) on the surface. From the analysis of the form factors of silica and nanolatex measured separately by SANS in dilute aqueous solutions we have deduced the radii and polydispersities of a lognormal size distribution of spheres [27]. The silica B30 has an approximate average radius of 78 Å (resp. 96 Å for B40), with about 20% (resp. 28%) polydispersity, and the nanolatex 143 Å (24% polydispersity).
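As an illustration of this kind of form factor analysis, the scattering of polydisperse spheres with a log-normal size distribution can be computed with a few lines of Python (a sketch of our own, not the fitting code used for this work; the weighting by the squared particle volume is one common convention among several):

import numpy as np

def sphere_form_factor(q, R):
    # normalized form factor of a homogeneous sphere of radius R
    qR = q * R
    return (3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR**3) ** 2

def lognormal_sphere_intensity(q, R0, sigma, n=400):
    # average of V(R)^2 P(q, R) over a log-normal distribution of radii
    R = np.linspace(R0 * np.exp(-4.0 * sigma), R0 * np.exp(4.0 * sigma), n)
    pR = np.exp(-np.log(R / R0) ** 2 / (2.0 * sigma ** 2)) / R
    w = pR * R ** 6                     # V^2 ~ R^6 scattering weight
    w = w / np.trapz(w, R)
    return np.array([np.trapz(w * sphere_form_factor(qi, R), R) for qi in q])

# example: B30-like silica, average radius 78 Angstrom, about 20% polydispersity
q = np.logspace(-3, -1, 100)            # in 1/Angstrom
I_q = lognormal_sphere_intensity(q, R0=78.0, sigma=0.20)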
Colloidal stock solutions of silica and nanolatex are brought to desired concentration and pH, mixed, and degassed under primary vacuum in order to avoid bubble formation. Slow evaporation of the solvent at T = 65°C under atmospheric pressure takes about four days, conditions which have been found suitable for the synthesis of smooth and bubble-free films without any further thermal treatment. The typical thickness is between 0.5 and 1 mm, i.e. films are macroscopically thick.
II.2 Small Angle Neutron Scattering.
The data discussed here have been obtained in experiments performed at ILL on beamline D11 [27]. The wavelength was fixed to 10.0 Å and the sample-to-detector distances were 1.25 m, 3.50 m, 10.00 m, 36.70 m, with corresponding collimation distances of 5.50 m, 5.50 m, 10.50 m and 40.00 m, respectively. Primary data treatment has been done following standard procedures, with the usual subtraction of empty cell scattering and H 2 O as secondary calibration standard [12]. Intensities have been converted to cm -1 using a measurement of the direct beam intensity. Background runs of pure dry nanolatex films show only incoherent scattering due to the high concentration of protons, as expected for unstructured random copolymers. The resulting background is flat and very low as compared to the coherent scattering in the presence of silica, and has been subtracted after the primary data treatment.
III. STRUCTURAL MODELLING
III.1 Silica-latex model nanocomposites.
We have studied silica-latex nanocomposites made by drying a mixture of latex and silica colloidal solutions. The nanometric silica beads can be kept from aggregating during the drying process by increasing the precursor solution pH, and thus their electric charge.
Conversely, aggregation can be induced by reducing the solution pH. The resulting nanocomposite has been shown to have very interesting mechanical properties even at low filler volume fraction. The reinforcement factor, e.g., which is expressed as the ratio of the Young's modulus of the composite to that of its matrix, E/E latex , can be varied by a factor of several tens at constant volume fraction of silica (typically from 3 to 15%) [28,29]. In this context it is important to recognize that the silica-polymer interface is practically unchanged from one sample to the other, in the sense that there are no ligands or grafted chains connecting the silica to the matrix. There might be changes due to the presence of ions, but their impact on the reinforcement factor appears to be of second order [30]. Possible changes in the matrix properties are cancelled in the reinforcement factor representation; the influence of the silica structure is thus clearly highlighted in our experiments. Using a simplified analysis of the structural data measured by SANS, we could show that (i) the silica bead aggregation was indeed governed by the solution pH, and (ii) the change in aggregation number N agg was accompanied by a considerable change in reinforcement factor at constant silica volume fraction. Although we had convincing evidence for aggregation, it seemed difficult to close the gap and verify that the estimated N agg was indeed compatible with the measured intensity curves. This illustrates one of the key problems in the physical understanding of the reinforcement effect: interesting systems for reinforcement are usually highly crowded, making structural analysis complicated and thereby impeding the emergence of a clear structure-mechanical properties relationship. It is the scope of this article to propose a method for structural analysis in such systems.
III.2 Modelling the scattered intensity for interacting aggregates.
For monodisperse silica spheres of volume V si , the scattered intensity due to some arbitrary spatial organization can be decomposed into the product of the contrast ∆ρ squared, the volume fraction of spheres Φ, the structure factor, and the normalized form factor of individual spheres, P(q) [12,13]. If in addition spheres are organized in monodisperse aggregates, the structure factor can be separated into the intra-aggregate structure factor S intra (q) and a structure factor describing the center-of-mass correlations of aggregates, S inter (q):
I(q) = ∆ρ² Φ V si S inter (q) S intra (q) P(q)        (1)
Here the product S intra (q) P(q) can also be interpreted as the average form factor of aggregates, as it would be measured at infinite dilution of aggregates. In order to be able to compare it to the intensity in cm -1 , we keep the prefactors and define the aggregate form factor P agg =∆ρ 2 Φ V si S intra (q) P(q).
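In practice, once a model for S inter is available, the decomposition of eq. (1) amounts to a simple element-wise division of the measured curve. The following Python sketch (our own illustration, with hypothetical variable names; it is not the analysis code used for the paper) shows how P agg and S intra would be extracted:

```python
import numpy as np

def aggregate_form_factor(I_exp, S_inter):
    """P_agg(q) = I(q) / S_inter(q), cf. eq. (1) with P_agg = drho^2 Phi V_si S_intra P."""
    return I_exp / S_inter

def intra_structure_factor(I_exp, S_inter, drho2, phi, V_si, P_sphere):
    """S_intra(q) obtained by dividing out all other factors of eq. (1)."""
    return I_exp / (drho2 * phi * V_si * S_inter * P_sphere)

# q-grid in 1/Angstrom; I_exp (cm^-1), S_inter_model and P_sphere would come from
# experiment and from the models discussed below:
q = np.logspace(-3, -1, 200)
# S_intra = intra_structure_factor(I_exp, S_inter_model, drho2, 0.05, V_si, P_sphere)
```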
The above mentioned conditions like monodispersity are not completely met in our experimental system. However, it can be considered sufficiently close to such an ideal situation for this simple scattering law to be applicable. The small polydispersity in silica beads, e.g., is not expected to induce specific aggregate structures. At larger scale, the monodispersity of the aggregates is a working hypothesis. It is plausible because of the strong scattering peak in I(q), which will be discussed with the data. Strong peaks are usually associated with ordered and thus not too polydisperse domain sizes [31].
To understand the difficulty of the structural characterization of the nanocomposites discussed here, one has to see that aggregates of unknown size interact with each other through an unknown potential, which determined their final (frozen) structure. Or, from a more technical point of view, we know neither the intra- nor the inter-aggregate structure factor, respectively denoted S intra (q) (or equivalently, P agg (q)) and S inter (q).
In the following, we propose a method allowing the separation of the scattered intensity in P agg (q) and S inter (q), on the assumption of (a) a (relative) monodispersity in aggregate size, and (b) that P agg is smooth in the q-range around the maximum of S inter . The inter-aggregate structure factor will be described with a well-known model structure factor developed for simple liquids and applied routinely to repulsively interacting colloids [32][33][34]. The second factor of the intensity, the aggregate form factor, will be analyzed in two different ways. First, P agg will be compared to fractal models [25]. Then, in a second part, its modeling in direct space by Reverse Monte Carlo will be implemented and discussed [35][36][37][38][39].
Determination of the average aggregation number and S inter .
Aggregation number and aggregate interaction need to be determined first. The silica-latex nanocomposites discussed here have a relatively well-ordered structure of the filler phase, as can be judged from the prominent correlation peak in I(q); see Fig. 1 for an example of the data.
The peak is also shown in the upper inset in linear scale. The position of this correlation peak q o corresponds to a typical length scale of the sample, 2π/q o , the most probable distance between aggregates. As the volume fraction (e.g., Φ = 5% in Fig. 1) and the volume of the elementary silica filler particles V si are known, one can estimate the average aggregation number:
N agg = (2π/q o ) 3 Φ/V si (2)
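As a hedged numerical sketch of eq. (2), one can plug in the values quoted later for the Φ = 5%, pH 7 sample (q o = 3.9 10 -3 Å -1 , B30 beads of mean radius 78 Å). The exact result depends on how the average bead volume V si is defined for the polydisperse size distribution, which is why the sketch below only brackets the quoted value N agg ≈ 93:

```python
import math

def aggregation_number(q0, phi, V_si):
    """Eq. (2): N_agg = (2*pi/q0)**3 * phi / V_si."""
    return (2.0 * math.pi / q0) ** 3 * phi / V_si

q0, phi, R = 3.9e-3, 0.05, 78.0               # 1/Angstrom, -, Angstrom
V_mono = 4.0 / 3.0 * math.pi * R ** 3          # monodisperse bead volume
sigma = 0.20                                   # log-normal polydispersity of B30
V_poly = V_mono * math.exp(4.5 * sigma ** 2)   # <V> of a log-normal (78 A taken as median)

print(aggregation_number(q0, phi, V_mono))     # ~105
print(aggregation_number(q0, phi, V_poly))     # ~88, bracketing the quoted N_agg = 93
```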
Two ingredients are necessary for the determination of the inter-aggregate structure factor.
The first one is the intensity in absolute units, or alternatively the independent measurement of the scattering from isolated silica particles, i.e. at high dilution and under known contrast conditions and identical resolution. The second is a model for the structure factor of objects in repulsive interaction. We have chosen a well-known quasi-analytical structure factor based on the Rescaled Mean Spherical Approximation (RMSA) [33,34]. Originally, it was proposed for colloidal particles of volume V, at volume fraction Φ, carrying an electrostatic charge Q, and interacting through a medium characterized by a Debye length λ D . In the present study, we use this structure factor as a parametrical expression, with Q and λ D as parameters tuning the repulsive potential. The Debye length, which represents the screening in solutions, corresponds here to the range of the repulsive potential, whereas Q allows the intensity of the interaction to be varied. Although the spatial organization of the silica beads in the polymer matrix is due to electrostatic interactions in solution before film formation, we emphasize that this original meaning is lost in the present, parametrical description.
For the calculation of S inter , Φ is given by the silica volume fraction, and the aggregate volume V = 4π/3 R e 3 by N agg V si , with N agg determined by eq.( 2). R e denotes the effective radius of a sphere representing an aggregate. In principle, we are thus left with two parameters, Q and λ D .
The range λ D must be typically of the order of the distance between the surfaces of neighboring aggregates represented by effective charged spheres of radius R e , otherwise the structure factor would not be peaked as experimentally observed. As a starting value, we have chosen to set λ D equal to the average distance between neighboring aggregate surfaces. We will come back to the determination of λ D below, and regard it as fixed for the moment. Then only the effective charge Q remains to be determined.
Here the absolute units of the intensity come into play. N agg is known from the peak position, and thus also the low-q limit of S intra (q→0), because forward scattering of isolated objects gives directly the mass of an aggregate [12]. The numerical value of the (hypothetical) forward scattering in the absence of interaction can be directly calculated using eq.( 1), setting S intra = N agg and S inter = 1. Of course the aggregates in our nanocomposites are not isolated, as their repulsion leads to the intensity peak and a depression of the intensity at small angles.
The limit of I(q→0) contains thus also an additional factor, S inter (q→0). In colloid science, this factor is known as the isothermal osmotic compressibility [12], and here its equivalent can be deduced from the ratio of the isolated aggregate limit of the intensity (S intra = N agg , S inter = 1), and the experimentally measured one I(q→0). It characterizes the strength of the aggregate-aggregate interaction.
Based on the RMSA structure factor [33,34], we have implemented a search routine which finds the effective charge Q reproducing S inter (q→0). With λ D fixed, we are left with one free parameter, Q, which entirely determines the q-dependence of the inter-aggregate structure factor. An immediate cross-check is that the resulting S inter (q) is peaked in the same q-region as the experimental intensity. In Fig. 1, the decomposition of the intensity in S inter (q) and S intra (q) is shown. It has been achieved with an aggregation number of 93, approximately forty charges per aggregate, and a Debye length of 741 Å, i.e. 85% of the average surface-to-surface distance between aggregates, and we come now back to the determination of λ D .
In Fig. 2, a series of inter-aggregate structure factors is shown with different Debye lengths: 50%, 85% and 125% of the distance between neighboring aggregate surfaces (872 Å). The charges needed to obtain the measured compressibility are 64.5, 40 and 27, respectively. In Fig. 2, the inter-aggregate structure factors are seen to be peaked in the vicinity of the experimentally observed peak, with higher peak heights for the lower Debye lengths.
Dividing the measured intensity I(q) by ∆ρ 2 Φ V si P(q) S inter yields S intra , also presented in the plot. At low-q, these structure factors decrease strongly, then pass through a minimum and a maximum at intermediate q , and tend towards one at large q (not shown). The high-q maximum is of course due to the interaction between primary particles.
In the low-q decrease, it can be observed that a too strong peak in S inter leads to a depression of S intra at the same q-value. Conversely, a peak that is too weak leads to a shoulder in S intra .
Only at intermediate values of the Debye length (85%) is S intra relatively smooth. In the following, we suppose that there is no reason for S intra to present artefacts (bumps or shoulders) in the decrease from the Guinier regime to the global minimum, and we set the Debye length to the intermediate value (85%) for this sample. We have also checked that small variations around this intermediate Debye length (80 to 90%) yield essentially identical structure factors, with peak height differences of a few percent. This procedure of adjusting λ D to the value giving a smooth S intra has been applied to all data discussed in this paper.
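The calibration described above can be summarized in a short Python sketch. The structure-factor routine is passed in as a callable, since any RMSA (Hayter-Penfold type) implementation can be plugged in; the smoothness test is deliberately crude and only illustrates the logic of fixing λ D and searching for the charge Q that reproduces the measured compressibility S inter (q→0). None of the names below come from the original program.

```python
import numpy as np

def find_charge(S_model, debye_length, S0_target, Q_max=500.0, tol=1e-3):
    """Bisection on the effective charge Q so that S_inter(q->0) matches the measured
    compressibility. S_model(Q, debye_length) must return S_inter on the q-grid."""
    lo, hi = 0.0, Q_max                          # S_inter(0) decreases as Q increases
    while hi - lo > tol:
        Q = 0.5 * (lo + hi)
        if S_model(Q, debye_length)[0] < S0_target:
            hi = Q
        else:
            lo = Q
    return 0.5 * (lo + hi)

def calibrate(S_model, I_exp, denom_without_Sinter, d_surface, S0_target):
    """Scan lambda_D (as fractions of the surface-to-surface distance d_surface) and
    keep the value giving the smoothest S_intra = I_exp / (denominator * S_inter)."""
    best = None
    for frac in np.arange(0.50, 1.30, 0.05):
        Q = find_charge(S_model, frac * d_surface, S0_target)
        S_inter = S_model(Q, frac * d_surface)
        S_intra = I_exp / (denom_without_Sinter * S_inter)
        roughness = np.abs(np.diff(S_intra, n=2)).sum()   # crude bump/shoulder measure
        if best is None or roughness < best[0]:
            best = (roughness, frac, Q)
    return best   # (roughness, lambda_D fraction, charge)
```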
Fitting S intra using geometrical and fractal models.
Up to now, we have determined the inter-aggregate structure factor, and then deduced the experimental intra-aggregate structure factor S intra as shown in Fig. 2 by dividing the intensity by S inter according to eq.(1). To extract direct-space information from S intra for aggregates of unknown shape, two types of solutions can be sought. First, one can make use of the knowledge of the average aggregation number, and construct average aggregates in real space. This supposes some idea of possible structures, which can then be Fourier-transformed and compared to the experimental result S intra (q). For example, one may try small crystallites [40], or, in another context, amorphous aggregates [41]. Another prominent case is the one of fractal structures, which are often encountered in colloidal aggregation [42 -44].
Let us quickly discuss the scattering function of finite-sized fractals using the unified law with both Guinier regime and power law dependence [25,45]. An isolated finite-sized object with fractal geometry described by a fractal dimension d has three distinct scattering domains. At low q (roughly q < 1/R g ), the Guinier law reflects the finite size and allows the measurement of the aggregate mass from the intensity plateau, and of the radius of gyration R g from the low-q decay. At intermediate q (q > 1/R g ), the intensity follows a power law q -d up to the high-q regime (q > 1/R), which contains the shape information of the primary particles (of radius R) making up the aggregate. Generalizations to higher level structures have also been used [46][47][48][49]. Here we use a two-level description following Beaucage [25]:
I(q) = G 1 exp(-q²R g1 ²/3) + B 1 exp(-q²R g2 ²/3) [erf(qR g1 /√6)³/q]^d
     + G 2 exp(-q²R g2 ²/3) + B 2 [erf(qR g2 /√6)³/q]^p        (3)
Note that eq. (3) contains no interaction term like the S inter of eq. (1): it accounts only for the intra-aggregate structure in this case. The first term on the right-hand side of eq. (3) is the Guinier expression of the total aggregate. The second term, i.e. the first power law, corresponds to the fractal structure of the aggregate, the error function allowing for a smooth cross-over. This fractal law is weighted by the Guinier expression of the second level, which is the scattering of the primary silica particle in our case; this effectively suppresses the fractal law of the first level at high q. This is followed by an equivalent expression for the higher level, i.e. a Guinier law of the primary particles followed by a power law, which is the Porod law of the primary particles in this case.
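For reference, a direct transcription of eq. (3) in Python (a sketch; parameter names follow the equation, and p = 4 here corresponds to the q⁻⁴ Porod slope quoted as -4 in the text):

```python
import numpy as np
from scipy.special import erf

def beaucage_two_level(q, G1, Rg1, B1, d, G2, Rg2, B2, p):
    """Two-level unified (Beaucage) expression, eq. (3):
    level 1 = aggregate (Guinier + fractal power law q^-d, cut off by level 2),
    level 2 = primary particle (Guinier + Porod-type power law q^-p)."""
    q = np.asarray(q, dtype=float)
    level1 = (G1 * np.exp(-(q * Rg1) ** 2 / 3.0)
              + B1 * np.exp(-(q * Rg2) ** 2 / 3.0)
              * (erf(q * Rg1 / np.sqrt(6.0)) ** 3 / q) ** d)
    level2 = (G2 * np.exp(-(q * Rg2) ** 2 / 3.0)
              + B2 * (erf(q * Rg2 / np.sqrt(6.0)) ** 3 / q) ** p)
    return level1 + level2

# Values quoted in the text for the Phi = 5%, pH 7 sample (B1, B2 are fit parameters):
# P_agg_model = beaucage_two_level(q, G1=9550, Rg1=1650, B1=..., d=1.96,
#                                  G2=103, Rg2=76, B2=..., p=4)
```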
Fitting S intra using Reverse Monte Carlo.
The second solution to extract real-space information from S intra is to fit the intra-aggregate structure factor by a Monte-Carlo approach which we describe here. It has been called
Reverse Monte Carlo (RMC) [35][36][37][38][39] because it is based on a feed-back between the structure in direct and reciprocal space, which makes it basically an automatic fitting procedure once the model is defined. The application of RMC to the determination of the aggregate structure from the scattered intensity is illustrated (in 2D) in Fig. 3. RMC was performed with a specially developed Fortran program as outlined in the Appendix. The method consists in generating representative aggregate shapes by moving elements of the aggregate in a random way -these are the Monte Carlo steps -, and calculate the corresponding structure factor at each step. The intensity is then compared to the experimentally measured one, which gives a criterion whether the Monte Carlo step is to be accepted or not. Monte-Carlo steps are repeated until no further improvement is obtained. If the algorithm converges, the outcome is a structure compatible with the scattered intensity. As an immediate result, it allows us to verify that an aggregate containing N agg filler particles -N agg being determined from the peak position q o -produces indeed the observed scattered intensity.
IV. APPLICATION TO EXPERIMENTAL RESULTS
IV.1 Moderate volume fraction of silica (Φ = 5%, B30).
Aggregate interaction.
We now apply our analysis to the measured silica-latex nanocomposite structures [27]. We start with the example already discussed before (Figs. 1 and 2), i.e. a sample with a moderate silica volume fraction of 5%, and neutral solution pH before solvent evaporation. From the peak position (q o = 3.9 10 -3 Å -1 ), an average aggregation number of N agg = 93 can be deduced using eq.( 2). The aggregate mass gives us the hypothetical low-q limit of the intensity for non-interacting aggregates using eq. (1), with S inter = 1, of 9550 cm -1 . The measured value being much lower, approximately 450 cm -1 , with some error induced by the extrapolation, the isothermal compressibility due to the interaction between aggregates amounts to about 0.05.
This rather low number expresses the strong repulsive interaction. The charged spheres representing the aggregates in the inter-aggregate structure factor calculation have the same volume as the aggregates, and thus an equivalent radius of R e = 367 Å. The surface-to-surface distance between spheres is therefore 872 Å. Following the discussion of Fig. 2, we have set the screening length λ D to 85% of this value, 741 Å. Using this input in the RMSA calculation, together with the constraint on the compressibility, an electric charge of 40 elementary charges per aggregate is found. The corresponding S inter is plotted in Fig. 2.
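The numbers quoted in this paragraph can be cross-checked with a few lines of arithmetic (a sketch; small deviations are expected since the published values are rounded):

```python
import math

q0, phi, N_agg = 3.9e-3, 0.05, 93
V_si = (2 * math.pi / q0) ** 3 * phi / N_agg        # bead volume consistent with eq. (2)

R_e = (3 * N_agg * V_si / (4 * math.pi)) ** (1 / 3)  # effective aggregate radius
D = 2 * math.pi / q0                                  # most probable aggregate distance
d_surface = D - 2 * R_e                               # surface-to-surface distance

print(round(R_e), round(D), round(d_surface))         # ~368, ~1611, ~875 (quoted: 367, 872)
print(round(0.85 * d_surface))                        # ~743 (quoted lambda_D: 741)
print(450 / 9550)                                     # ~0.047, the compressibility ~0.05
```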
Fractal modeling.
A fit with a two-level fractal, eq. (3), has been performed with the aggregate form factor P agg obtained by dividing the experimental intensity by S inter . The result is shown in Fig. 4. There are several parameters to the fit, some of which can be found independently. The slope of the high-q power law, e.g., has been fixed to -4 (p = 4 in eq. (3)), in agreement with the Porod law. The radius of gyration of the primary particles is 76 Å, and the corresponding prefactor G 2 can be deduced from the particle properties [27] and concentration (103 cm -1 ). For comparison, the form factor of the individual particle is shown in Fig. 4 as a one-level Beaucage function, i.e.
using only the last two terms of eq. ( 3). Furthermore, we have introduced the G 1 value of 9550 cm -1 calculated from N agg , i.e. from the peak position. Fitting yields the radius of gyration of aggregates (1650 Å), and a fractal dimension of 1.96. At intermediate q, however, the quality of the fit is less satisfying. The discrepancy is due to the minimum of S intra (cf. Fig.
2) around 0.02 Å -1 , a feature which is not captured by the model used here (eq. ( 3)).
Reverse Monte Carlo.
We now report on the results of the implementation of an RMC-routine applied to the structure of the sample discussed above (Φ = 5%, pH 7). In Fig. 5, we plot the evolution of χ 2 (cf. appendix) as a function of the number of Monte-Carlo tries for each bead (on average), starting from a random initial condition as defined in the appendix. For illustration purposes, this is compared to the χ 2 from different initial conditions, i.e. aggregates constructed according to the same rule but with a different random seed. Such initial aggregate structures are also shown on the left-hand side of Fig. 6. In all cases, the χ 2 value is seen to decrease in Fig. 5 by about two orders of magnitude within five Monte-Carlo steps per bead. It then levels off to a plateau, around which it fluctuates due to the Boltzmann criterion.
We have checked that much longer runs do not further increase the quality of the fit, cf. the inset of Fig. 5. The corresponding aggregates at the beginning and at the end of the simulation run are also shown in Fig. 6. They are of course different depending on the initial condition and angle of view, but their statistical properties are identical, otherwise their Fourier transform would not fit the experimental data. It is interesting to see how much the final aggregate structures, rather elongated, look similar.
Having established that the algorithm robustly produces aggregates with similar statistical properties, we now compare the result to the experimental intensity in Fig. 7. Although some minor deviations between the intensities are still present, the agreement over five decades in intensity is quite remarkable. It shows that the aggregation number determined from the peak position q o is indeed a reasonable value, as it allows the construction of a representative aggregate with almost identical scattering behavior. In the lower inset of Fig. 7, the RMC result for the aggregate form factor P agg is compared to the experimental one (obtained by dividing the I(q) of Fig. 7 by S inter ). The fit is good, especially as the behavior around 0.02 Å -1 is better described than in the case of the fractal model, Fig. 4.
The radius of gyration can be calculated from the position of the primary particles in one given realization. We find R g around 1150 Å, a bit smaller than with the fractal model (1650 Å), a difference probably due to the fact that we are only approaching the low-q plateau. For the comparison of the fractal model to RMC, let us recall that both apply only to P agg , i.e. after the separation of the intensity in aggregate form factor P agg and structure factor S inter . Both methods give the same fractal dimension d of aggregates because this corresponds to the same slope of P agg . The aggregate form factor P agg and thus the intensity are better (although not perfectly) fitted with RMC. This is true namely for the minimum around 0.02 Å -1 , presumably because the nearest neighbor correlations inside each aggregate are captured by a physical model of touching beads. Last but not least, RMC gives snapshots of 3D real-space structures compatible with the scattered intensity, which validates the determination of N agg using eq. ( 2).
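The radius of gyration quoted here can be computed from any RMC snapshot. A minimal sketch (our own formulation) is given below; the finite bead radius adds a small (3/5)R² term per bead to R g ², which is neglected here since the aggregates are much larger than the beads, and volume weighting is optional for the weak polydispersity.

```python
import numpy as np

def radius_of_gyration(positions, volumes=None):
    """R_g of an aggregate from bead centers; 'volumes' allows optional mass weighting
    (mass ~ bead volume) for polydisperse beads."""
    r = np.asarray(positions, dtype=float)
    w = np.ones(len(r)) if volumes is None else np.asarray(volumes, dtype=float)
    w = w / w.sum()
    center = (w[:, None] * r).sum(axis=0)
    return np.sqrt((w * ((r - center) ** 2).sum(axis=1)).sum())
```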
For the sake of completeness, we have tested RMC with aggregation numbers different from the one deduced from the peak position. Taking a very low aggregation number (i.e., smaller than the value obtained with eq. (2)) leads to bad fits, whereas higher aggregation numbers give at first sight acceptable fits. The problem with too high aggregation numbers is that the peak position of S inter is different from the position of the intensity peak due to conservation of silica volume. RMC compensates for this by introducing an oscillation in S intra (or equivalently, P agg ) which effectively shifts the peak to its experimentally measured position.
In the upper inset of Fig. 7 P agg presenting such an artefact (N agg = 120 and 150) is compared to the one with the nominal aggregation number, N agg = 93 (filled symbols). The oscillation around 0.004 Å -1 is not present with N agg = 93, and becomes stronger as the aggregation number deviates more from the value determined from the intensity peak position, eq.( 2).
IV.2 Evolution with silica volume fraction.
In the preceding section we have analyzed a sample at moderate silica volume fraction, 5%. It is now interesting to check if the same type of modeling can be applied to higher silica volume fractions and bigger aggregates (i.e., lower solution pH), where the structure factor can be seen to be more prominent directly from I(q).
Evolution of structure with silica volume fraction (Φ = 5 and 10%, B30).
In Fig. 8, two data sets corresponding to a lower pH of 5, for Φ = 5% and 10% (symbols) are compared to their RMC fits, in linear representation in order to emphasize the peaks. The parameters used for these calculations are given in Table 1, together with the aggregation numbers deduced from the peak position (using eq. ( 2)). As expected, these are considerably higher than at pH 7 [27]. Concerning the Debye length, it is interesting to note that its value relative to the inter-aggregate distance increases with volume fraction. As we have seen in section III.2, a higher Debye length leads to a weaker peak. This tendency is opposite to the influence of the volume fraction, and we have checked that the peak in S inter is comparable in height in both cases, i.e. the two tendencies compensate.
At first sight of Fig. 8, it is surprising that the intensity at 10% is lower than the one at 5%. This is only true at small q - the 10% intensity being higher in the Porod domain, as it should be, cf. P agg shown in the inset in log-scale. At both concentrations, the aggregate shape seems to be unchanged (similar fractal dimension d, 2.25 and 2.3 for 5% and 10%, respectively), and together with the shift in peak position by a factor 2^(1/3) (as Φ is doubled) to a region where P agg is much lower, this explains the observed decrease in intensity. We will see in the discussion of a series with the silica B40 that this behavior is not general, and that aggregation depends (as observed before [27]) on the type of bead.
For illustration, the scattered intensity corresponding to the random initial condition of RMC (cf. appendix) is also shown in Fig. 8. The major initial deviation from the experimental values underlines the capacity of the RMC algorithm to converge quickly (cf. Fig. 5) towards a very satisfying fit of the experimental intensity. Note that there is a small angle upturn for the sample at 10%. This may be due to aggregation on a very large scale, which is outside the scope and the possibilities of our method.
Evolution of structure with silica volume fraction (Φ = 3% - 15%, B40)
We now turn to a series of samples with a different, slightly bigger silica bead (denoted B40), in a highly aggregated state (low pH), with a larger range of volume fractions. In Fig. 9 the intensities are plotted with the RMC fits, for the series Φ = 3 - 15%, at pH 5, silica B40.
The parameters used for the calculations are given in the Table 2.
The fits shown in Fig. 9 are very good, which demonstrates that the model works well over a large range of volume fractions, i.e. varying aggregate-aggregate interaction. Concerning the parameters Debye length and charge, we have checked that the peaks in S inter are comparable in height (within 10%). Only their position shifts, as it was observed with the smaller silica (B30). Unlike the case of B30, however, the intensities follow a 'normal' increase with increase in volume fraction, which suggests a different evolution in aggregate shape and size for the bigger beads.
The case of the lowest volume fraction, Φ = 3%, deserves some discussion. The aggregation number is estimated to 188 using eq. ( 2). The peak is rather weak due to the low concentration, and it is also close to the minimum q-value. We thus had to base our analysis on an estimation of I(q→0), 700 cm -1 . The resulting inter-aggregate structure factor S inter is as expected only slightly peaked (peak height 1.1). We found that some variation of N agg does not deteriorate the quality of the fit, i.e. small variations do not introduce artificial oscillations in the aggregate form factor. We have, e.g., checked that the aggregate form factors P agg for N agg = 120 and 200 are equally smooth. At higher/lower aggregation number, like 100 or 230, oscillations appear in P agg . It is concluded that in this rather dilute case the weak ordering does not allow for a precise determination of N agg . For higher volume fractions, Φ>3%, the aggregation numbers given in Table 2 are trustworthy.
V. DISCUSSION
V.1 Uniqueness of the solution.
The question of the uniqueness of the solution found by RMC arises naturally. Here two different levels need to be discussed. The first one concerns the separation in aggregate form and structure factor. We have shown that the aggregate parameters (N agg , aggregate interaction) are fixed by the boundary conditions. Only in the case of weak interaction (Φ = 3%), acceptable solutions with quite different aggregation numbers (between about 120 and 200) can be found. In the other cases, variations by some 15% in N agg lead to bad intensity fits or artefacts in the aggregate form factor P agg . We can thus confirm that one of the main objectives is reached, namely that it is possible to find an aggregate of well-defined mass (given by eq.( 2)), the scattering of which is compatible with the intensity.
The second level is to know to what extent the RMC-realizations of aggregates are unique solutions. It is clear from the procedure that many similar realizations are created as the number of Monte Carlo steps increases (e.g., the plateau in Fig. 5), all with a comparable quality of fit. In Fig. 5, this is also seen to be independent of the initial condition, and Figs.
6 and 8 illustrated how far this initial condition is from the final structure. All the final realizations have equivalent statistical properties, and they can be looked at as representatives of a class of aggregates with identical scattering. However, no unique solution exists.
V.2 From aggregate structure to elastomer reinforcement.
We have shown in previous work that the mechanical properties of our nanocomposites depend strongly on aggregation number and silica volume fraction [28][29][30]. The aggregation number was estimated from the peak position, and we have now confirmed that such aggregates are indeed compatible with the complete scattering curves. It is therefore interesting to see how the real-space structures found by our method compare to the mechanical properties of the nanocomposites.
The low-deformation reinforcement factors of the series in silica volume fraction (B40, pH 5, Φ = 3 - 15%) are recalled in Table 3 [30]. E/E latex is found to increase considerably with Φ, much more than N agg . Aggregate structures as resulting from the RMC-procedure applied to the data in Fig. 9 are shown in Fig. 10. At low Φ, aggregates are rather elongated, and with increasing Φ they are seen to become slightly bulkier. We have determined their radii of gyration and fractal dimensions with a one-level Beaucage fit, using only the first two terms of the right-hand side of eq. (3), and applying the same method as in section IV.1. The results are summarized in Table 3. The fractal dimension is found to increase with Φ, as expected from Fig. 10. The aggregate radius R g first decreases, then increases again. If we compare R g to the average distance between aggregates D (from the peak position of S inter ), we find a crowded environment. The aggregates appear to be tenuous structures, with an overall radius of gyration bigger than the average distance between aggregates, which suggests aggregate interpenetration.
In a recent article [30], we have determined the effective aggregate radius and fractal dimension from a mechanical model relating E/E latex to the compacity of aggregates. The numerical values are different (aggregate radii between 1200 and 980 Å, fractal dimensions between 2.1 and 2.45) due to the mechanical model which represents aggregates as spheres, but the tendency is the same: Radii decrease as Φ increases, implying bulkier aggregates with higher fractal dimensions. Only the increase in radius found at 15% is not captured by the mechanical model.
Our picture of reinforcement in this system is based on the idea of percolation of hard silica structures in the matrix. Due to the (quasi-)incompressibility of the elastomer matrix, strain in any direction is accompanied by lateral compression, thus pushing aggregates together and creating mechanical percolation. Aggregates are tenuous, interpenetrating structures. The higher the silica volume fraction, the more compact the aggregates (higher d), the stronger the percolating links. At low Φ, N agg stays more or less constant, which implies that the aggregates decrease in size, cf. Table 3 for both the fractal and the RMC-analysis. Above 6%, N agg increases, and the aggregates become both denser and grow again in size. At the same time, aggregates come closer (D goes down). This moves the system closer to percolation, and leads to the important increase in the reinforcement factor. In other systems, this is also what the reinforcement curves as a function of filler volume fraction suggest [28], where extremely strong structures made of the percolating hard filler phase are found above a critical volume fraction [50].
VI. CONCLUSION
We have presented a complete analysis of the complex scattering spectra arising from strongly aggregated and interacting colloidal silica aggregates in nanocomposites. The main result is the validation of the determination of the average aggregation number by a complete fit of the data. This is achieved by a separation of the scattered intensity into a product of aggregate form and structure factor. The aggregate form factor can then be described either by a fractal model or by Reverse Monte Carlo modeling. The use of the decomposition of I(q) into a product is based on the assumption that aggregates are similar in size. This is justified by the strong peak in intensity, which indicates strong ordering, incompatible with a too high polydispersity in size.
Fractal and RMC-modelling appear to be complementary, with the advantage of generality and simplicity for the fractal model, whereas RMC needs numerical simulations adapted to each case. However, RMC does not rely on approximations (Guinier), and by its geometrical construction it connects local configurations (bead-bead) to the global structure. RMC thus gives a real-space picture of aggregates compatible with I(q), and thereby confirms the calculation of aggregation numbers from the peak positions.
To finish, possible improvements of our method can be discussed. Technically, the introduction of the spectrometer resolution function is straightforward but would not fundamentally change the results, and it would considerably slow down the algorithm. A more ambitious project is to get rid of the separation into aggregate form and structure factor by performing an RMC-simulation of a large system containing many aggregates [51]. It will be interesting to see if the Monte-Carlo algorithm converges spontaneously towards more or less monodisperse aggregates, or if very different solutions, not considered in the present work, exist.
APPENDIX: Reverse Monte Carlo algorithm for scattering from aggregates.
A.1 Initial aggregate construction
The first step is to build an initial aggregate which can then evolve according to the Monte-Carlo rules in order to fit the experimental intensity I(q) of nanocomposites. From the intensity peak position and eq.( 2), the aggregation number N agg is known. The primary particles are the silica beads with a radius drawn from a size distribution function [27]. The initial aggregate is constructed by adding particles to a seed particle placed at the origin. Each new particle is positioned by randomly choosing one of the particles which are already part of the aggregate, and sticking it to it in a random direction. Then, collisions with all particles in the aggregate at this stage are checked, and the particle is accepted if there are no collisions. This is repeated until N agg is reached. Two realizations of initial aggregate structures are in Fig. 6.
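A minimal Python transcription of this construction step is given below (the original program is in Fortran; here radii are drawn from a log-normal distribution with 78 Å taken as the median, an approximation, and the sticking direction is uniform on the sphere):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_direction():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def build_initial_aggregate(n_agg, mean_radius=78.0, polydispersity=0.20):
    """Grow an aggregate by sticking beads, one by one, to a randomly chosen bead
    already in the aggregate, rejecting trial positions that overlap other beads."""
    radii = mean_radius * rng.lognormal(0.0, polydispersity, n_agg)
    pos = [np.zeros(3)]                               # seed bead at the origin
    for i in range(1, n_agg):
        while True:
            j = rng.integers(len(pos))                # pick an existing bead
            trial = pos[j] + (radii[i] + radii[j]) * random_direction()
            overlap = any(np.linalg.norm(trial - pos[k]) < radii[i] + radii[k] - 1e-9
                          for k in range(len(pos)) if k != j)
            if not overlap:
                pos.append(trial)
                break
    return np.array(pos), radii
```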
A.2 Monte-Carlo steps
The Monte-Carlo steps are designed to change the shape of the aggregate, in order to reach closer agreement with the scattering data. To do this, the local aggregate topology has to be determined. The aim is to identify particles which can be removed from the aggregate without breaking it up, i.e. particles which sit on the (topological) surface of the aggregate. Moving such particles to another position in the aggregate leads to a new structure with updated topology. A Monte-Carlo step thus consists in randomly choosing one of the particles which can be removed, and repositioning it in contact with some other, randomly chosen particle, again in a random direction. As before, it is checked that there are no collisions with the other particles of the aggregate.
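One way to implement the test "can this bead be removed without breaking up the aggregate" is a connectivity check on the bead contact graph. The following sketch is our own formulation (not necessarily the one used in the original Fortran code) and uses a breadth-first search:

```python
import numpy as np
from collections import deque

def contact_graph(pos, radii, slack=1.05):
    """Adjacency lists: beads are 'in contact' if their centers are closer than
    (r_i + r_j) * slack; the slack absorbs numerical round-off."""
    n = len(pos)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos[i] - pos[j]) <= (radii[i] + radii[j]) * slack:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def removable(adj, i):
    """True if the aggregate stays connected when bead i is taken out."""
    n = len(adj)
    if n <= 2:
        return True
    start = next(k for k in range(n) if k != i)
    seen, queue = {start}, deque([start])
    while queue:
        k = queue.popleft()
        for m in adj[k]:
            if m != i and m not in seen:
                seen.add(m)
                queue.append(m)
    return len(seen) == n - 1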
A.3 Fit to experimental intensity
Each Monte-Carlo step is evaluated by the calculation of the orientationally averaged aggregate form factor P agg (q), which is multiplied by S inter (q), cf. eq. (1), and compared to the experimental intensity I(q). The comparison is done in terms of χ 2 :
χ 2 = (1/N) Σ i [I RMC (q i ) - I(q i )]² / σ²        (A.1)
where the difference between the RMC-prediction and the experimental intensity is summed over the N q-values. The statistical error σ was kept fixed in all calculations. In our algorithm, the move is accepted if it improves the agreement between the theoretical and experimental curves, or if the increase in χ 2 is moderate, in order to allow for some fluctuations. This is implemented by a Boltzmann criterion on ∆χ 2 : exp (-∆χ 2 / B) > random number in the interval [0,1] (A.2)
In the present implementation, B has been fixed to at most 1% of the plateau value of χ 2 . This plateau-value was found to be essentially independent of the choice of B. Given the quality of the fits, a simulated annealing approach was therefore not necessary.
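Putting A.2 and A.3 together, the core of the fitting loop can be sketched as follows. This is a simplified, generic version (our own wording, not the original Fortran program): the intensity evaluation and the move proposal of A.2 are passed in as callables, and the acceptance rule is that of eq. (A.2).

```python
import numpy as np

def chi2(I_rmc, I_exp, sigma):
    """Eq. (A.1) with a single, fixed statistical error sigma."""
    return np.mean((I_rmc - I_exp) ** 2) / sigma ** 2

def run_rmc(state, intensity, propose_move, I_exp, sigma, B, n_tries, rng):
    """Generic RMC loop. 'intensity(state)' returns the model I(q), i.e. the prefactors
    times P_agg(q) times S_inter(q) of eq. (1); 'propose_move(state, rng)' returns a new
    state with one removable bead repositioned (appendix A.2)."""
    current_chi2 = chi2(intensity(state), I_exp, sigma)
    for _ in range(n_tries):
        trial = propose_move(state, rng)
        trial_chi2 = chi2(intensity(trial), I_exp, sigma)
        d = trial_chi2 - current_chi2
        if d < 0 or np.exp(-d / B) > rng.random():   # Boltzmann criterion, eq. (A.2)
            state, current_chi2 = trial, trial_chi2
    return state, current_chi2
```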
Figure captions
Figure 1: Structure of silica-latex nanocomposite (Φ = 5%, pH 7, B30) as seen by SANS. The experimental intensity (symbols) is represented in log scale, and in linear scale in the upper inset. In the lower inset, the two structure factors S inter and S intra are shown. Such a decomposition is the result of our data analysis as described in the text.
Figure 2: Structure factors (for Φ = 5%, pH 7, B30) obtained with different Debye lengths and charges, but identical compressibility: λ D = 436 Å (50%), Q = 64.5; λ D = 741 Å (85%), Q = 40; λ D = 1090 Å (125%), Q = 27. In parentheses, the Debye lengths as a fraction of the inter-aggregate surface distance (872 Å). In the inset, a zoom on the artefact in S intra observed at 50%, but not at 85%, is shown.
Figure 3: Schematic drawing illustrating the Reverse Monte Carlo algorithm applied to the generation of aggregates. An internal filler particle like the black bead cannot be removed without destroying the aggregate.
Figure 4:
Figure 5: Evolution of χ 2 with the number of Monte Carlo tries per bead for three different initial conditions. In the inset, a long run with 300 tries per bead.
Figure 6: Graphical representations of aggregate structures. Two initial configurations are shown on the left. The structures on the right are snapshots after 300 (top) and 30 (bottom) tries per bead, each starting from the initial configurations on the left.
Figure 7: Structure of silica-latex nanocomposite (symbols, Φ = 5%, pH 7, B30) compared to the RMC model prediction (N agg = 93, solid line). In the lower inset, the aggregate form factor is compared to the RMC result. In the upper inset, the RMC results (P agg ) for higher aggregation numbers (N agg = 120 and 150, solid lines) are compared to the nominal one (N agg = 93, symbols).
Figure 8: SANS intensities of samples (B30, pH 5) with silica volume fractions of 5% and 10% (symbols). The solid lines are the RMC results. For illustration, the intensity calculated by the RMC algorithm from the initial aggregate configuration is also shown (10%). In the inset, aggregate form factors P agg are compared.
Figure 9: Structure of silica-latex nanocomposites (symbols, Φ = 3% - 15%, pH 5, B40) compared to the RMC model predictions (see text for details).
Figure 10: Snapshots of aggregate structures at different silica volume fractions as calculated by RMC (pH 5, B40 series).
Tables
Table 1: Parameters used for a successful decomposition into S inter and an artefact-free P agg , for the series B30, pH 5. The Debye length is given as a multiple of the surface-to-surface distance between neighboring aggregates.
Φ      Debye length factor   Charge   N agg
5%     60%                   61       430
10%    175%                  52       309
Table 2: Parameters used for a successful decomposition into S inter and an artefact-free P agg , for the series B40, pH 5. The Debye length is given as a multiple of the surface-to-surface distance between neighboring aggregates.
Φ      Debye length factor   Charge   N agg
3%     120%                  52       120-200
6%     150%                  58       168
9%     150%                  78       196
12%    250%                  63       238
15%    275%                  55       292
Table 3: Series B40, pH 5. Fractal dimension d and radius of gyration R g from a one-level Beaucage fit, compared to R g determined by RMC and to the inter-aggregate distance D from S inter . The last column recalls the mechanical reinforcement factor of these samples.
Φ      d      R g (Å) fractal   R g (Å) RMC   D (Å)   E/E latex
3%     1.6    3470              2830          2400    2.8
6%     2.0    2640              1690          2000    6.4
9%     2.2    2290              2090          1780    23.2
12%    2.3    2150              1870          1750    29.6
15%    2.4    2550              2680          1750    42.5
Acknowledgements : Work conducted within the scientific program of the European Network of Excellence Softcomp: 'Soft Matter Composites: an approach to nanoscale functional materials', supported by the European Commission. Silica and latex stock solutions were a gift from Akzo Nobel and Rhodia. Help by Bruno Demé (ILL, Grenoble) as local contact on D11 and beam time by ILL is gratefully acknowledged, as well as support by the instrument responsible Peter Lindner. Thanks also to Rudolf Klein (Konstanz) for fruitful discussions on structure factors. | 47,075 | [
"995273"
] | [
"737",
"35905",
"35904"
] |
01485792 | en | [
"shs"
] | 2024/03/04 23:41:48 | 2017 | https://shs.hal.science/halshs-01485792/file/DietrichList-OpinionPoolingGeneralized-Part1.pdf | Franz Dietrich
Probabilistic opinion pooling generalized Part one: General agendas
Keywords: Probabilistic opinion pooling, judgment aggregation, subjective probability, probabilistic preferences, vague/fuzzy preferences, agenda characterizations, a unified perspective on aggregation
How can several individuals' probability assignments to some events be aggregated into a collective probability assignment? Classic results on this problem assume that the set of relevant events - the agenda - is a σ-algebra and is thus closed under disjunction (union) and conjunction (intersection). We drop this demanding assumption and explore probabilistic opinion pooling on general agendas. One might be interested in the probability of rain and that of an interest-rate increase, but not in the probability of rain or an interest-rate increase. We characterize linear pooling and neutral pooling for general agendas, with classic results as special cases for agendas that are σ-algebras. As an illustrative application, we also consider probabilistic preference aggregation. Finally, we unify our results with existing results on binary judgment aggregation and Arrovian preference aggregation. We show that the same kinds of axioms (independence and consensus preservation) have radically different implications for different aggregation problems: linearity for probability aggregation and dictatorship for binary judgment or preference aggregation.
Introduction
This paper addresses the problem of probabilistic opinion pooling. Suppose several individuals (e.g., decision makers or experts) each assign probabilities to some events. How can these individual probability assignments be aggregated into a collective probability assignment, while preserving probabilistic coherence? Although this problem has been extensively studied in statistics, economics, and philosophy, one standard assumption is seldom questioned: the set of events to which probabilities are assigned - the agenda - is a σ-algebra: it is closed under negation (complementation) and countable disjunction (union) of events. In practice, however, decision makers or expert panels may not be interested in such a rich set of events. They may be interested, for example, in the probability of a blizzard and the probability of an interest-rate increase, but not in the probability of a blizzard or an interest-rate increase. Of course, the assumption that the agenda is a σ-algebra is convenient: probability functions are defined on σ-algebras, and thus one can view probabilistic opinion pooling as the aggregation of probability functions. But convenience is no ultimate justification. Real-world expert committees typically do not assign probabilities to all events in a σ-algebra. Instead, they focus on a limited set of relevant events, which need not contain all disjunctions of its elements, let alone all disjunctions of countably infinite length.
There are two reasons why a disjunction of relevant events, or another logical combination, may not be relevant. Either we are not interested in the probability of such 'artificial' composite events. Or we (or the decision makers or experts) are unable to assign subjective probabilities to them. To see why it can be difficult to assign a subjective probability to a logical combination of 'basic' events - such as 'a blizzard or an interest-rate increase' - note that it is not enough to assign probabilities to the underlying basic events: various probabilistic dependencies also affect the probability of the composite event, and these may be the result of complex causal interconnections (such as the causal effects between basic events and their possible common causes).
We investigate probabilistic opinion pooling for general agendas, dropping the assumption of a σ-algebra. Thus any set of events that is closed under negation (complementation) can qualify as an agenda. The general notion of an agenda is imported from the theory of binary judgment aggregation (e.g., List and Pettit 2002, 2004; Pauly and van Hees 2006; Dietrich 2006; Dietrich and List 2007a, 2013; Nehring and Puppe 2010; Dokow and Holzman 2010; Dietrich and Mongin 2010). We impose two axiomatic requirements on probabilistic opinion pooling:
(i) the familiar 'independence' requirement, according to which the collectively assigned probability for each event should depend only on the probabilities that the individuals assign to that event; (ii) the requirement that certain unanimous individual judgments should be preserved; we consider stronger and weaker variants of this requirement.
We prove two main results:
For a large class of agendas - with σ-algebras as special cases - any opinion pooling function satisfying (i) and (ii) is linear: the collective probability of each event in the agenda is a weighted linear average of the individuals' probabilities of that event, where the weights are the same for all events. For an even larger class of agendas, any opinion pooling function satisfying (i) and (ii) is neutral: the collective probability of each event in the agenda is some (possibly non-linear) function of the individuals' probabilities of that event, where the function is the same for all events.
We state three versions of each result, which differ in the nature of the unanimity-preservation requirement and in the class of agendas to which they apply. Our results generalize a classic characterization of linear pooling in the special case where the agenda is a σ-algebra (Aczél 1966; Aczél and Wagner 1980; McConway 1981).1 For a σ-algebra, every neutral pooling function is automatically linear, so that neutrality and linearity are equivalent here (McConway 1981; Wagner 1982).2 As we will see, this fact does not carry over to general agendas: many agendas permit neutral but non-linear opinion pooling functions.
Some of our results apply even to agendas containing only logically independent events, such as 'a blizzard' and 'an interest-rate increase' (and their negations), but no disjunctions or conjunctions of these events. Such agendas are relevant in practical applications where the events in question are only probabilistically dependent (correlated), but not logically dependent. If the agenda is a σ-algebra, by contrast, it is replete with logical interconnections. By focusing on σ-algebras alone, the standard results on probabilistic opinion pooling have therefore excluded many realistic applications.
We also present a new illustrative application of probabilistic opinion pooling, namely to probabilistic preference aggregation. Here each individual assigns subjective probabilities to events of the form 'x is preferable to y' (or 'x is better than y'), where x and y range over a given set of alternatives. These probability
1 Specifically, if the agenda is a σ-algebra (with more than four events), linear pooling functions are the only pooling functions which satisfy independence and preserve unanimous probabilistic judgments (Aczél 1966; Aczél and Wagner 1980; McConway 1981). Linearity and neutrality (the latter sometimes under the names strong label neutrality or strong setwise function property) are among the most widely studied properties of opinion pooling functions. Linear pooling goes back to Stone (1961) or even Laplace, and neutral pooling to McConway (1981) and Wagner (1982). For extensions of (or alternatives to) the classic characterization of linear pooling, see Wagner (1982, 1985), Aczél, Ng and Wagner (1984), Genest (1984), Mongin (1995) and Chambers (2007). All these works retain the assumption that the agenda is a σ-algebra. Genest and Zidek (1986) and Clemen and Winkler (1999) provide surveys of the classic literature. For opinion pooling under asymmetric information, see Dietrich (2010). For the aggregation of qualitative rather than quantitative probabilities, see Weymark (1997). For a computational, non-axiomatic approach to the aggregation of partial probability assignments, where individuals do not assign probabilities to all events in the σ-algebra, see Osherson and Vardi (2006).
assignments may be interpreted as beliefs about which preferences are the 'correct' ones (e.g., which correctly capture objective quality comparisons between the alternatives). Alternatively, they may be interpreted as vague or fuzzy preferences. We then seek to arrive at corresponding collective probability assignments.
Each of our linearity or neutrality results (with one exception) is logically tight: the linearity or neutrality conclusion follows if and only if the agenda falls into a relevant class. In other words, we characterize the agendas for which our axiomatic requirements lead to linear or neutral aggregation. We thereby adopt the state-of-the-art approach in binary judgment-aggregation theory, which is to characterize the agendas leading to certain possibilities or impossibilities of aggregation. This approach was introduced by Nehring and Puppe (2002) in related work on strategy-proof social choice and subsequently applied throughout binary judgment-aggregation theory. One of our contributions is to show how it can be applied in the area of probabilistic opinion pooling.
We conclude by comparing our results with their analogues in binary judgment-aggregation theory and in Arrovian preference-aggregation theory. Interestingly, the conditions leading to linear pooling in probability aggregation correspond exactly to the conditions leading to a dictatorship of one individual in both binary judgment aggregation and Arrovian preference aggregation. This yields a new unified perspective on several at first sight disparate aggregation problems.
The framework
We consider a group of n ≥ 2 individuals, labelled i = 1, ..., n, who have to assign collective probabilities to some events.
The agenda. Let Ω be a non-empty set of possible worlds (or states). An event is a subset A of Ω; its complement ('negation') is denoted A^c := Ω\A. The agenda is the set of events to which probabilities are assigned. Traditionally, the agenda has been assumed to be a σ-algebra (i.e., closed under complementation and countable union, and thereby also under countable intersection). Here, we drop that assumption. As already noted, we may exclude some events from the agenda, either because they are of no interest, or because no probability assignments are available for them. For example, the agenda may contain the events that global warming will continue, that interest rates will remain low, and that the UK will remain in the European Union, but not the disjunction of these events. Formally, we define an agenda as a non-empty set X of events which is closed under complementation, i.e., A ∈ X ⇒ A^c ∈ X. Examples are X = {A, A^c} or X = {A, A^c, B, B^c}, where A and B may or may not be logically related.
An example of an agenda without conjunctions or disjunctions. Suppose each possible world is a vector of three binary characteristics. The first takes the value 1 if atmospheric CO 2 is above some threshold, and 0 otherwise. The second takes the value 1 if there is a mechanism to the effect that if atmospheric CO 2 is above that threshold, then Arctic summers are ice-free, and 0 otherwise. The third takes the value 1 if Arctic summers are ice-free, and 0 otherwise. Thus the set of possible worlds is the set of all triples of 0s and 1s, excluding the inconsistent triple in which the first and second characteristics are 1 and the third is 0, i.e., Ω = {0,1}^3 \ {(1,1,0)}. We now define an agenda X consisting of A, A → B, B, and their complements, where A is the event of a positive first characteristic, A → B the event of a positive second characteristic, and B the event of a positive third characteristic. (We use the sentential notation 'A → B' for better readability; formally, each of A, B, and A → B is a subset of Ω. 3 )
Probabilistic opinions. We begin with the notion of a probability function.
The classical focus on agendas that are σ-algebras is motivated by the fact that such functions are defined on σ-algebras. Formally, a probability function on a σ-algebra Σ is a function P : Σ → [0,1] such that P(Ω) = 1 and P is σ-additive (i.e., P(A 1 ∪ A 2 ∪ ...) = P(A 1 ) + P(A 2 ) + ... for every sequence of pairwise disjoint events A 1 , A 2 , ... ∈ Σ). In the context of an arbitrary agenda X, we speak of 'opinion functions' rather than 'probability functions'. Formally, an opinion function for an agenda X is a function P : X → [0,1] which is probabilistically coherent, i.e., extendable to a probability function on the σ-algebra generated by X. This σ-algebra is denoted σ(X) and defined as the smallest σ-algebra that includes X. It can be constructed by closing X under countable unions and complements. 4 In our expert-committee example, we have σ(X) = 2^Ω, and an opinion function cannot assign probability 1 to all of A, A → B, and B^c. (This would not be extendable to a well-defined probability function on 2^Ω, given that A ∩ (A → B) ∩ B^c = ∅.) We write P_X to denote the set of all opinion functions for the agenda X. If X is a σ-algebra, P_X is the set of all probability functions on it.
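To make the coherence requirement concrete, here is a small self-contained Python check (our own illustration, not part of the paper): an opinion function on the agenda of the expert-committee example is coherent exactly if some probability distribution over the seven possible worlds matches it, which is a linear feasibility problem.

```python
from itertools import product
import numpy as np
from scipy.optimize import linprog

# Possible worlds: triples of 0/1 minus the inconsistent one (1,1,0)
worlds = [w for w in product((0, 1), repeat=3) if w != (1, 1, 0)]

def event(pred):
    return frozenset(w for w in worlds if pred(w))

A      = event(lambda w: w[0] == 1)   # high CO2
A_to_B = event(lambda w: w[1] == 1)   # the mechanism 'A -> B'
B      = event(lambda w: w[2] == 1)   # ice-free Arctic summers

def complement(E):
    return frozenset(worlds) - E

def is_coherent(assignment):
    """True if P: X -> [0,1] extends to a probability function on 2^Omega.
    Feasibility LP: find p >= 0 with sum(p) = 1 and sum_{w in E} p_w = P(E)."""
    n = len(worlds)
    A_eq = [[1.0] * n]                       # total probability = 1
    b_eq = [1.0]
    for E, value in assignment.items():
        A_eq.append([1.0 if w in E else 0.0 for w in worlds])
        b_eq.append(value)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n)
    return res.success

print(is_coherent({A: 1.0, A_to_B: 1.0, complement(B): 1.0}))  # False: incoherent
print(is_coherent({A: 0.6, A_to_B: 0.5, B: 0.7}))              # True: coherent
```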
Opinion pooling. Given the agenda X, a combination of opinion functions across the n individuals, (P_1, ..., P_n), is called a profile (of opinion functions). An (opinion) pooling function is a function F : P_X^n → P_X, which assigns to each profile (P_1, ..., P_n) a collective opinion function P = F(P_1, ..., P_n), also denoted P_{P_1,...,P_n}. For instance, P_{P_1,...,P_n} could be the arithmetic average (1/n)P_1 + ... + (1/n)P_n.
Linearity and neutrality. A pooling function is linear if there exist real-valued weights w_1, ..., w_n ≥ 0 with w_1 + ... + w_n = 1 such that, for every profile (P_1, ..., P_n) ∈ P_X^n,
P_{P_1,...,P_n}(A) = ∑_{i=1}^n w_i P_i(A) for all A ∈ X.
If w_i = 1 for some 'expert' i, we obtain an expert rule given by P_{P_1,...,P_n} = P_i. More generally, a pooling function is neutral if there exists some function D : [0,1]^n → [0,1] such that, for every profile (P_1, ..., P_n) ∈ P_X^n,
P_{P_1,...,P_n}(A) = D(P_1(A), ..., P_n(A)) for all A ∈ X. (1)
We call D the local pooling criterion. Since it does not depend on the event A, all events are treated equally ('neutrality'). Linearity is the special case in which D is a weighted linear averaging criterion of the form D(x) = ∑_{i=1}^n w_i x_i for all x ∈ [0,1]^n. Note that, while every combination of weights w_1, ..., w_n ≥ 0 with sum-total 1 defines a proper linear pooling function (since linear averaging preserves probabilistic coherence), a given non-linear function D : [0,1]^n → [0,1] might not define a proper pooling function: formula (1) might not yield a well-defined (i.e., probabilistically coherent) opinion function. We will show that whether there can be neutral but non-linear pooling functions depends on the agenda in question. If the agenda is a σ-algebra, the answer is known to be negative (assuming |X| > 4). However, we will also identify agendas for which the answer is positive.

Some logical terminology. An event A is contingent if it is neither the empty set ∅ (impossible) nor the universal set Ω (necessary). A set S of events is consistent if its intersection ∩_{A∈S} A is non-empty, and inconsistent otherwise. A set S of events entails another event B if the intersection of S is included in B (i.e., ∩_{A∈S} A ⊆ B).
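To make the contrast between linear and merely neutral pooling drawn above concrete, the following hypothetical sketch reuses OMEGA, the events, and is_coherent from the snippets above; the particular non-linear criterion and the numerical opinions are our own. The linear criterion yields a coherent collective opinion, while the non-linear neutral criterion does not, illustrating why formula (1) with a non-linear D need not define a proper pooling function.

```python
def pool(profile, criterion):
    """Neutral pooling, formula (1): apply one local criterion D to every event."""
    n_events = len(profile[0])
    return [criterion([P[k] for P in profile]) for k in range(n_events)]

def linear(weights):
    return lambda xs: sum(w * x for w, x in zip(weights, xs))

def nonlinear_criterion(xs):            # neutral but not linear: D(x) = 1 - (1 - mean(x))^2
    m = sum(xs) / len(xs)
    return 1.0 - (1.0 - m) ** 2

events = [A, A_TO_B, B]
P1 = [0.6, 0.8, 0.5]                    # two coherent individual opinion functions
P2 = [0.4, 0.9, 0.4]                    # (values for A, A->B, B)

print(is_coherent(OMEGA, events, pool([P1, P2], linear([0.5, 0.5]))))   # True
print(is_coherent(OMEGA, events, pool([P1, P2], nonlinear_criterion)))  # False
```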
Two kinds of applications. It is useful to distinguish between two kinds of applications of probabilistic opinion pooling. We may be interested in either of the following:
(a) the probabilities of certain propositions expressed in natural language, such as 'it will rain tomorrow' or 'the new legislation will be repealed';
(b) the distribution of some real-valued (or vector-valued) random variable, such as the number of insurance claims over a given period, or tomorrow's price of a given share, or the weight of a randomly picked potato from some farm.
Arguably, probabilistic opinion pooling on general agendas is more relevant to applications of type (a) than to applications of type (b). An application of type (a) typically gives rise to an agenda expressible in natural language which does not constitute a σ-algebra. It is then implausible to replace X with the σ-algebra σ(X), many elements of which represent unduly complex combinations of other events. Further, even when σ(X) is finite, it may be enormous. If X contains at least k logically independent events, then σ(X) contains at least 2^(2^k) events, so its size grows double-exponentially in k.5 This suggests that, unless k is small, σ(X) may be too large to serve as an agenda in practice. By contrast, an application of type (b) plausibly gives rise to an agenda that is a σ-algebra. Here, the decision makers may need a full probability distribution over the σ-algebra, and they may also be able to specify such a distribution. For instance, a market analyst estimating next month's distribution of Apple's share price might decide to specify a log-normal distribution. This, in turn, requires the specification of only two parameters: the mean and the variance of the exponential of the share price. We discuss opinion pooling problems of type (b) in a companion paper (Dietrich and List, 'Probabilistic opinion pooling generalized. Part two: The premise-based approach'), where they are one of our principal applications.
Axiomatic requirements on opinion pooling
We now introduce some requirements on opinion pooling functions.
The independence requirement
Our first requirement, familiar from the literature, says that the collective probability of each event in the agenda should depend only on the individual probabilities of that event. This requirement is sometimes also called the weak setwise function property.
Independence. For each event A ∈ X, there exists a function D_A : [0,1]^n → [0,1] (the local pooling criterion for A) such that, for all P_1, ..., P_n ∈ P_X,
P_{P_1,...,P_n}(A) = D_A(P_1(A), ..., P_n(A)).
One justification for independence is the Condorcetian idea that the collective view on any issue should depend only on individual views on that issue. This reflects a local, rather than holistic, understanding of aggregation. (On a holistic understanding, the collective view on an issue may be influenced by individual views on other issues.) Independence, understood in this way, becomes less compelling if the agenda contains 'artificial' events, such as conjunctions of intuitively unrelated events, as in the case of a σ-algebra. It would be implausible, for instance, to disregard the individual probabilities assigned to 'a blizzard' and to 'an interest-rate increase' when determining the collective probability of the disjunction of these events. Here, however, we focus on general agendas, where the Condorcetian justification for independence is more plausible.
There are also two pragmatic justifications for independence; these apply even when the agenda is a σ-algebra. First, aggregating probabilities issue-by-issue is informationally and computationally less demanding than a holistic approach and thus easier to implement in practice. Second, independence prevents certain types of agenda manipulation: the attempt by an agenda setter to influence the collective probability assigned to some events by adding other events to, or removing them from, the agenda.6 Nonetheless, independence should not be accepted uncritically, since it is vulnerable to a number of well-known objections.7
The consensus-preservation requirement
Our next requirement says that if all individuals assign probability 1 (certainty) to an event in the agenda, then its collective probability should also be 1.
Consensus preservation. For all A ∈ X and all P_1, ..., P_n ∈ P_X, if, for all i, P_i(A) = 1, then P_{P_1,...,P_n}(A) = 1.
Like independence, this requirement is familiar from the literature, where it is sometimes expressed as a zero-probability preservation requirement. In the case of general agendas, we can also formulate several strengthened variants of the requirement, which extend it to other forms of consensus. Although these variants are not as compelling as their original precursor, they are still defensible in some cases. Moreover, when the agenda is a σ-algebra, they all collapse back into consensus preservation in its original form.
To introduce the different extensions of consensus preservation, we begin by drawing a distinction between 'explicitly revealed', 'implicitly revealed', and 'unrevealed' beliefs. Individual i's explicitly revealed beliefs are the probabilities assigned to events in the agenda X by the opinion function P_i. Individual i's implicitly revealed beliefs are the probabilities assigned to any events in σ(X)∖X by every probability function on σ(X) extending the opinion function P_i; we call such a probability function an extension of P_i and use the notation P̄_i. These probabilities are 'implied' by the opinion function P_i. For instance, if P_i assigns probability 1 to an event A in the agenda X, this 'implies' an assignment of probability 1 to all events B outside the agenda that are of the form B ⊇ A. Individual i's unrevealed beliefs are probabilities for events in σ(X)∖X that cannot be deduced from the opinion function P_i. These are only privately held. For instance, the opinion function P_i may admit extensions which assign probability 1 to an event B but may also admit extensions which assign a lower probability. Here, individual i's belief about B is unrevealed.
Consensus preservation in its original form concerns only explicitly revealed beliefs. The first strengthened variant extends the requirement to implicitly revealed beliefs. Let us say that an opinion function P on X implies certainty of an event A if P̄(A) = 1 for every extension P̄ of P.
Implicit consensus preservation. For all A ∈ σ(X) and all P_1, ..., P_n ∈ P_X, if, for all i, P_i implies certainty of A, then P_{P_1,...,P_n} also implies certainty of A.
This ensures that whenever all individuals either explicitly or implicitly assign probability 1 to some event, this is preserved at the collective level. Arguably, this requirement is almost as plausible as consensus preservation in its original form.
The second extension concerns unrevealed beliefs. Informally, it says that a unanimous assignment of probability 1 to some event should never be overruled, even if it is unrevealed. This is operationalized as the requirement that if every individual's opinion function is consistent with the assignment of probability 1 to some event (so that we cannot rule out the possibility of the individuals' privately making that assignment), then the collective opinion function should also be consistent with it. Formally, we say that an opinion function P on X is consistent with certainty of an event A if there exists some extension P̄ of P such that P̄(A) = 1.
Consensus compatibility. For all A ∈ σ(X) and all P_1, ..., P_n ∈ P_X, if, for all i, P_i is consistent with certainty of A, then P_{P_1,...,P_n} is also consistent with certainty of A.
The rationale for this requirement is a precautionary one: if it is possible that all individuals assign probability 1 to some event (though this may be unrevealed), the collective opinion function should not rule out certainty of A.
A third extension of consensus preservation concerns conditional beliefs. It looks more complicated than consensus compatibility, but it is less demanding. Its initial motivation is the idea that if all individuals are certain of some event in the agenda conditional on another event, then this conditional belief should be preserved collectively. For instance, if everyone is certain that there will be a famine, given a civil war, this belief should also be held collectively. Unfortunately, however, we cannot define individual i's conditional probability of an event A, given another event B, simply as P_i(A|B) = P_i(A ∩ B)/P_i(B) (where P_i(B) ≠ 0 and P_i is individual i's opinion function). This is because, even when A and B are in X, the event A ∩ B may be outside X and thus outside the domain of P_i. So, we cannot know whether the individual is certain of A given B. But we can ask whether he or she could be certain of A given B, i.e., whether P̄_i(A|B) = 1 for some extension P̄_i of P_i.
This motivates the requirement that if each individual could be certain of A given B, then the collective opinion function should also be consistent with this 'conditional certainty'. Again, this can be interpreted as requiring the preservation of certain unrevealed beliefs. A unanimous assignment of conditional probability 1 to one event, given another, should not be overruled, even if it is unrevealed.
We capture this in the following way. Suppose there is a finite set of pairs of events in X, call them (A, B), (A′, B′), (A″, B″), and so on, such that each individual could be simultaneously certain of A given B, of A′ given B′, of A″ given B″, and so on. Then the collective opinion function should also be consistent with conditional certainty of A given B, A′ given B′, and so on. Formally, for any finite set S of pairs (A, B) of events in X, we say that an opinion function P on X is consistent with conditional certainty of all (A, B) in S if there exists some extension P̄ of P such that P̄(A|B) = 1 for all (A, B) in S for which P̄(B) ≠ 0.
Conditional consensus compatibility. For all finite sets S of pairs of events in X and all P_1, ..., P_n ∈ P_X, if, for all i, P_i is consistent with conditional certainty of all (A, B) in S, then P_{P_1,...,P_n} is also consistent with conditional certainty of all (A, B) in S.
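On a finite agenda, consistency with conditional certainty can again be tested by linear programming: it amounts to the existence of an extension assigning probability zero to B∖A for every pair (A, B) in S (a reformulation made explicit in the Appendix). The following hypothetical sketch reuses is_coherent and the events from the earlier snippets; the numerical values are our own and the opinion function is cut down to two events for brevity.

```python
def consistent_with_conditional_certainty(omega, events, values, pairs):
    """Does some extension of the opinion function put zero probability on
    B \\ A for every (A, B) in pairs, i.e. make A certain given B?"""
    extra = [B - A for (A, B) in pairs]
    return is_coherent(omega, list(events) + extra, list(values) + [0.0] * len(extra))

# Certainty of B given A forces P(A) <= P(B) in any extension:
print(consistent_with_conditional_certainty(OMEGA, [A, B], [0.5, 0.6], [(B, A)]))  # True
print(consistent_with_conditional_certainty(OMEGA, [A, B], [0.6, 0.5], [(B, A)]))  # False
```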
The following proposition summarizes the logical relationships between the different consensus-preservation requirements; a proof is given in the Appendix.
Proposition 1 (a) Consensus preservation is implied by each of (i) implicit consensus preservation, (ii) consensus compatibility, and (iii) conditional consensus compatibility, and is equivalent to each of (i), (ii), and (iii) if the agenda X is a σ-algebra. (b) Consensus compatibility implies conditional consensus compatibility.
Each of our characterization results below uses consensus preservation in either its original form or one of the strengthened forms. Implicit consensus preservation does not appear in any of our results; we have included it here for the sake of conceptual completeness.8

When is opinion pooling neutral?
We now show that, for many agendas, the neutral pooling functions are the only pooling functions satisfying independence and consensus preservation in either its original form or one of the strengthened forms. The stronger the consensus-preservation requirement, the larger the class of agendas for which our characterization of neutral pooling holds. For the moment, we set aside the question of whether independence and consensus preservation imply linearity as well as neutrality; we address this question in the next section.
Three theorems
We begin with the strongest of our consensus-preservation requirements, i.e., consensus compatibility. If we impose this requirement, our characterization of neutral pooling holds for a very large class of agendas: all non-nested agendas. We call an agenda X nested if it has the form X = {A, A^c : A ∈ X⁺} for some set X⁺ (⊆ X) that is linearly ordered by set-inclusion, and non-nested otherwise. For example, binary agendas of the form X = {A, A^c} are nested: take X⁺ := {A}, which is trivially linearly ordered by set-inclusion. Also, the agenda X = {(−∞, t], (t, ∞) : t ∈ ℝ} (where the set of possible worlds is Ω = ℝ) is nested: take X⁺ := {(−∞, t] : t ∈ ℝ}, which is linearly ordered by set-inclusion.
By contrast, any agenda consisting of multiple logically independent pairs A, A^c is non-nested, i.e., X is non-nested if X = {A_k, A_k^c : k ∈ K} with |K| ≥ 2 such that every subset S ⊆ X containing precisely one member of each pair {A_k, A_k^c} (with k ∈ K) is consistent.
As mentioned in the introduction, such agendas are of practical importance because many decision problems involve events that exhibit only probabilistic dependencies (correlations), but no logical ones. Another example of a non-nested agenda is the one in the expert-committee example above, containing A, A → B, B, and their complements.
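For small finite agendas, nestedness can be decided by brute force: try every way of choosing one member from each complementary pair and test whether the chosen events form a chain under set-inclusion. A hypothetical sketch, reusing the events defined in the earlier snippets (the helper names are ours):

```python
from itertools import product

def is_nested(pairs):
    """pairs: one (A, A^c) tuple per complementary pair of a finite agenda.
    The agenda is nested iff some choice of one member per pair is a chain
    under set-inclusion."""
    for choice in product(*pairs):
        if all(a <= b or b <= a for a in choice for b in choice):
            return True
    return False

expert_pairs = [(A, complement(A)), (A_TO_B, complement(A_TO_B)), (B, complement(B))]
print(is_nested(expert_pairs))   # False: the expert-committee agenda is non-nested

W = set(range(1, 6))             # a nested agenda on {1,...,5}: a chain plus complements
chain_pairs = [({1}, W - {1}), ({1, 2}, W - {1, 2}), ({1, 2, 3}, W - {1, 2, 3})]
print(is_nested(chain_pairs))    # True
```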
Theorem 1 (a) For any non-nested agenda X, every pooling function F : P_X^n → P_X satisfying independence and consensus compatibility is neutral.
(b) For any nested agenda X (≠ {∅, Ω}), there exists a non-neutral pooling function F : P_X^n → P_X satisfying independence and consensus compatibility.
Part (b) shows that the agenda condition used in part (a), non-nestedness, is tight: whenever the agenda is nested, non-neutral pooling functions become possible. However, these pooling functions are non-neutral only in a limited sense: although the pooling criterion D_A need not be the same for all events A ∈ X, it must still be the same for all A ∈ X⁺, and the same for all A ∈ X∖X⁺ (with X⁺ as defined above), so that pooling is 'neutral within X⁺' and 'neutral within X∖X⁺'. This is clear from the proof.9
What happens if we weaken the requirement of consensus compatibility to conditional consensus compatibility? Both parts of Theorem 1 continue to hold, though part (a) becomes logically stronger, and part (b) logically weaker. Let us state the modified theorem explicitly:
Theorem 2 (a) For any non-nested agenda X, every pooling function F : P_X^n → P_X satisfying independence and conditional consensus compatibility is neutral. (b) For any nested agenda X (≠ {∅, Ω}), there exists a non-neutral pooling function F : P_X^n → P_X satisfying independence and conditional consensus compatibility.
The situation changes once we weaken the consensus requirement further, namely to consensus preservation simpliciter. The class of agendas for which our characterization of neutrality holds shrinks significantly, namely to the class of path-connected agendas. Path-connectedness is an important condition in judgment-aggregation theory, where it was introduced by Nehring and Puppe (2010) (under the name 'total blockedness') and has been used, for example, to generalize Arrow's theorem (Dietrich and List 2007a, Dokow and Holzman 2010).
To define path-connectedness, we require one preliminary definition. Given an agenda X, we say that an event A ∈ X conditionally entails another event B ∈ X, written A ⊢* B, if there exists a subset Y ⊆ X (possibly empty, but not uncountably infinite) such that {A} ∪ Y entails B, where, for non-triviality, Y ∪ {A} and Y ∪ {B^c} are each consistent. For instance, if ∅ ≠ A ⊆ B ≠ Ω, then A ⊢* B (take Y = ∅; in fact, this is even an unconditional entailment). Also, for the agenda of our expert committee, X = {A, A^c, A → B, (A → B)^c, B, B^c}, we have A ⊢* B (take Y = {A → B}).
We call an agenda X path-connected if any two events A, B ∈ X∖{∅, Ω} can be connected by a path of conditional entailments, i.e., there exist events A_1, ..., A_k ∈ X (k ≥ 1) such that A = A_1 ⊢* A_2 ⊢* ... ⊢* A_k = B.
An example of a path-connected agenda is X := {A, A^c : A ⊆ ℝ is a bounded interval}, where the underlying set of worlds is Ω = ℝ. For instance, there is a path of conditional entailments from [0,1] ∈ X to [2,3] ∈ X given by [0,1] ⊢* [0,3] ⊢* [2,3]. To establish [0,1] ⊢* [0,3], it suffices to conditionalize on the empty set of events Y = ∅ (i.e., [0,1] even unconditionally entails [0,3]). To establish [0,3] ⊢* [2,3], one may conditionalize on Y = {[2,4]}.
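Conditional entailment and path-connectedness are likewise decidable by brute force on finite agendas. The sketch below is hypothetical; it reuses the expert-committee events and OMEGA from the earlier snippets, searches over all candidate conditioning sets Y, and then takes the transitive closure of the resulting entailment relation.

```python
from itertools import chain, combinations

def subsets(events):
    return chain.from_iterable(combinations(events, r) for r in range(len(events) + 1))

def cond_entails(A, B, agenda, omega):
    """A |-* B: some Y subset of the agenda with {A} u Y entailing B,
    where {A} u Y and {B^c} u Y are both consistent."""
    Bc = omega - B
    for Y in subsets(agenda):
        inter_AY = set.intersection(A, *Y) if Y else set(A)
        inter_BcY = set.intersection(Bc, *Y) if Y else set(Bc)
        if inter_AY and inter_BcY and inter_AY <= B:
            return True
    return False

def path_connected(agenda, omega):
    contingent = [E for E in agenda if E and E != omega]
    n = len(contingent)
    reach = {(i, j): cond_entails(contingent[i], contingent[j], agenda, omega)
             for i in range(n) for j in range(n)}
    for k in range(n):                       # transitive closure (Floyd-Warshall style)
        for i in range(n):
            for j in range(n):
                reach[i, j] = reach[i, j] or (reach[i, k] and reach[k, j])
    return all(reach[i, j] for i in range(n) for j in range(n))

X_expert = [A, complement(A), A_TO_B, complement(A_TO_B), B, complement(B)]
print(cond_entails(A, B, X_expert, OMEGA))   # True, e.g. via Y = {A->B}
print(path_connected(X_expert, OMEGA))       # False
```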
Many agendas are not path-connected, including all nested agendas (≠ {∅, Ω}) and the agenda in our expert-committee example. The following result holds.
Theorem 3 (a) For any path-connected agenda X, every pooling function F : P_X^n → P_X satisfying independence and consensus preservation is neutral. (b) For any non-path-connected agenda X (finite and distinct from {∅, Ω}), there exists a non-neutral pooling function F : P_X^n → P_X satisfying independence and consensus preservation.
Proof sketches
We now outline the proofs of Theorems 1 to 3. (Details are given in the Appendix.) We begin with part (a) of each theorem. Theorem 1(a) follows from Theorem 2(a), since both results apply to the same agendas but Theorem 1(a) uses a stronger consensus requirement.
To prove Theorem 2(a), we define a binary relation ∼ on the set of all contingent events in the agenda. Recall that two events A and B are exclusive if A ∩ B = ∅ and exhaustive if A ∪ B = Ω. For any A, B ∈ X∖{∅, Ω}, we define
A ∼ B ⇔ there is a finite sequence A_1, ..., A_k ∈ X of length k ≥ 1 with A_1 = A and A_k = B such that any adjacent A_j, A_{j+1} are neither exclusive nor exhaustive.
Theorem 2(a) then follows immediately from the following two lemmas (proved in the Appendix).
Lemma 1 For any agenda X (≠ {∅, Ω}), the relation ∼ is an equivalence relation on X∖{∅, Ω}, with exactly two equivalence classes if X is nested, and exactly one if X is non-nested.
Lemma 2 For any agenda X (≠ {∅, Ω}), a pooling function satisfying independence and conditional consensus compatibility is neutral on each equivalence class with respect to ∼ (i.e., the local pooling criterion is the same for all events in the same equivalence class).
The proof of Theorem 3(a) uses the following lemma (broadly analogous to a lemma in binary judgment-aggregation theory; e.g., Nehring and Puppe 2010, Dietrich and List 2007a).
Lemma 3 For any pooling function satisfying independence and consensus preservation, and all events A and B in the agenda X, if A ⊢* B then D_A ≤ D_B, where D_A and D_B are the local pooling criteria for A and B, respectively. (Here D_A ≤ D_B means that, for all (p_1, ..., p_n), D_A(p_1, ..., p_n) ≤ D_B(p_1, ..., p_n).)
To see why Theorem 3(a) follows, simply note that D_A ≤ D_B whenever there is a path of conditional entailments from A ∈ X to B ∈ X (by repeated application of the lemma); thus, D_A = D_B whenever there are paths in both directions, as is guaranteed if the agenda is path-connected and A, B ∉ {∅, Ω}.
Part (b) of each theorem can be proved by explicitly constructing a non-neutral pooling function, for an agenda of the relevant kind, which satisfies independence and the appropriate consensus-preservation requirement. In the case of Theorem 3(b), this pooling function is very complex, and hence we omit it in the main text. In the case of Theorems 1(b) and 2(b), the idea can be described informally. Recall that a nested agenda X can be partitioned into two subsets, X⁺ and X∖X⁺ = {A^c : A ∈ X⁺}, each of which is linearly ordered by set-inclusion. The opinion pooling function constructed has the property that (i) all events A in X⁺ have the same local pooling criterion D = D_A, which can be defined, for example, as the square of a linear pooling criterion (i.e., we first apply a linear pooling criterion and then take the square), and (ii) all events in X∖X⁺ have the same 'complementary' pooling criterion D*, defined as D*(x_1, ..., x_n) = 1 − D(1 − x_1, ..., 1 − x_n) for all (x_1, ..., x_n) ∈ [0,1]^n. Showing that the resulting pooling function is well-defined and satisfies all the relevant requirements involves some technicality, in part because we allow the agenda to have any cardinality.
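The two pooling criteria used in this construction can be written down directly. The sketch below is hypothetical: it reuses the linear helper from an earlier snippet, the weights are arbitrary, and it merely checks the identity D(x) + D*(1 − x) = 1, which ensures that the probabilities assigned to an event in X⁺ and to its complement in X∖X⁺ sum to 1.

```python
def square_of_linear(weights):
    lin = linear(weights)                              # 'linear' from the earlier snippet
    return lambda xs: lin(xs) ** 2                     # criterion D, used on all events in X+

def complementary(D):
    return lambda xs: 1.0 - D([1.0 - x for x in xs])   # criterion D*, used on X \ X+

D = square_of_linear([0.3, 0.7])
D_star = complementary(D)

x = [0.2, 0.9]                                         # individual probabilities for some A in X+
print(D(x) + D_star([1.0 - t for t in x]))             # 1.0: P(A) + P(A^c) = 1
```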
When is opinion pooling linear?
As we have seen, for many agendas, only neutral pooling functions can satisfy our two requirements. But are these functions also linear? As we now show, the answer depends on the agenda. If we suitably restrict the class of agendas considered in part (a) of each of our previous theorems, we can derive linearity rather than just neutrality. Similarly, we can expand the class of agendas considered in part (b) of each theorem, and replace non-neutrality with non-linearity.
Three theorems
As in the previous section, we begin with the strongest consensus-preservation requirement, i.e., consensus compatibility. While this requirement leads to neutrality for all non-nested agendas (by Theorem 1), it leads to linearity for all non-nested agendas above a certain size.
Theorem 4 (a) For any non-nested agenda X with |X∖{Ω, ∅}| > 4, every pooling function F : P_X^n → P_X satisfying independence and consensus compatibility is linear. (b) For any other agenda X (≠ {∅, Ω}), there exists a non-linear pooling function F : P_X^n → P_X satisfying independence and consensus compatibility.
Next, let us weaken the requirement of consensus compatibility to conditional consensus compatibility. While this requirement leads to neutrality for all non-nested agendas (by Theorem 2), it leads to linearity only for non-simple agendas. Like path-connected agendas, non-simple agendas play an important role in binary judgment-aggregation theory, where they are the agendas susceptible to the analogues of Condorcet's paradox: the possibility of inconsistent majority judgments (e.g., Dietrich and List 2007b, Nehring and Puppe 2007).
To define non-simplicity, we first require a preliminary definition. We call a set of events Y minimal inconsistent if it is inconsistent but every proper subset Y′ ⊊ Y is consistent. Examples of minimal inconsistent sets are (i) {A, B, (A ∩ B)^c}, where A and B are logically independent events, and (ii) {A, A → B, B^c}, with A, B, and A → B as defined in the expert-committee example above. In each case, the three events are mutually inconsistent, but any two of them are mutually consistent. The notion of a minimal inconsistent set is useful for characterizing logical dependencies between the events in the agenda. Trivial examples of minimal inconsistent subsets of the agenda are those of the form {A, A^c} ⊆ X, where A is contingent. But many interesting agendas have more complex minimal inconsistent subsets. One may regard sup_{Y⊆X : Y is minimal inconsistent} |Y| as a measure of the complexity of the logical dependencies in the agenda X. Given this idea, we call an agenda X non-simple if it has at least one minimal inconsistent subset Y ⊆ X containing more than two (but not uncountably many10) events, and simple otherwise. For instance, the agenda consisting of A, A → B, B and their complements in our expert-committee example is non-simple (take Y = {A, A → B, B^c}).
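For finite agendas, the minimal inconsistent subsets, and hence non-simplicity, can be computed directly. A hypothetical sketch, reusing subsets and X_expert from the path-connectedness snippet above:

```python
def is_consistent(events):
    return bool(set.intersection(*events)) if events else True

def minimal_inconsistent_subsets(agenda):
    """All Y subsets of the agenda that are inconsistent while every
    subset obtained by removing one event is consistent."""
    result = []
    for Y in subsets(agenda):
        Y = list(Y)
        if Y and not is_consistent(Y) and all(
                is_consistent(Y[:i] + Y[i + 1:]) for i in range(len(Y))):
            result.append(Y)
    return result

mis = minimal_inconsistent_subsets(X_expert)
print(max(len(Y) for Y in mis))   # 3, e.g. {A, A->B, B^c}: the agenda is non-simple
```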
Non-simplicity lies logically between non-nestedness and path-connectedness: it implies non-nestedness, and is implied by path-connectedness (if X ≠ {Ω, ∅}).11
10 This countability addition can often be dropped, because all minimal inconsistent sets Y ⊆ X are automatically finite or at least countable. This is so if X is finite or countably infinite, and also if the underlying set of worlds Ω is countable. It can further be dropped in case the events in X are represented by sentences in a language. Then, provided this language belongs to a compact logic, all minimal inconsistent sets Y ⊆ X are finite (because any inconsistent set has a finite inconsistent subset). By contrast, if X is a σ-algebra and has infinite cardinality, then it usually contains events not representing sentences, because countably infinite disjunctions cannot be formed in a language. Such agendas often have uncountable minimal inconsistent subsets. For instance, if X is the σ-algebra of Borel-measurable subsets of ℝ, then its subset Y = {ℝ∖{x} : x ∈ ℝ} is uncountable and minimal inconsistent. This agenda is nonetheless non-simple, since it also has many finite minimal inconsistent subsets Y with |Y| ≥ 3 (e.g., Y = {{1,2}, {1,3}, {2,3}}).
11 To give an example of a non-nested but simple agenda X, let X = {A, A^c, B, B^c}, where the events A and B are logically independent, i.e., A ∩ B, A ∩ B^c, A^c ∩ B, A^c ∩ B^c ≠ ∅. Clearly, this agenda is non-nested. It is simple since its only minimal inconsistent subsets are {A, A^c} and {B, B^c}. To give an example of a non-path-connected but non-simple agenda, let X consist of A, A → B, B and their complements, as in our example above. We have already observed that it is non-simple. To see that it is not path-connected, note, for example, that there is no path of conditional entailments from B to B^c.
To see how exactly non-simplicity strengthens non-nestedness, note the following fact (Dietrich 2013):
Fact (a)
(Y∖{A}) ∪ {A^c} is consistent for each A ∈ Y.
Note that the characterizing condition in (b) can be obtained from the one in (a) simply by replacing 'subset Y' with 'inconsistent subset Y (of countable size)'.
Theorem 5 (a) For any non-simple agenda X with |X∖{Ω, ∅}| > 4, every pooling function F : P_X^n → P_X satisfying independence and conditional consensus compatibility is linear. (b) For any simple agenda X (finite and distinct from {∅, Ω}), there exists a non-linear pooling function F : P_X^n → P_X satisfying independence and conditional consensus compatibility.
Finally, we turn to the least demanding consensus requirement, namely consensus preservation simpliciter. We have seen that this requirement leads to neutral pooling if the agenda is path-connected (by Theorem 3). To obtain a characterization of linear pooling, path-connectedness alone is not enough. In the following theorem, we impose an additional condition on the agenda. We call an agenda X partitional if it has a subset Y which partitions Ω into at least three non-empty events (where Y is finite or countably infinite), and non-partitional otherwise. (A subset Y of X partitions Ω if the elements of Y are individually non-empty, pairwise disjoint, and cover Ω.) For instance, X is partitional if it contains (non-empty) events A, A^c ∩ B, and A^c ∩ B^c; simply let Y = {A, A^c ∩ B, A^c ∩ B^c}.
Theorem 6 (a) For any path-connected and partitional agenda X, every pooling function F : P_X^n → P_X satisfying independence and consensus preservation is linear. (b) For any non-path-connected (finite) agenda X, there exists a non-linear pooling function F : P_X^n → P_X satisfying independence and consensus preservation.
Part (b) shows that one of the theorem's agenda conditions, path-connectedness, is necessary for the characterization of linear pooling (which is unsurprising, as it is necessary for the characterization of neutral pooling). By contrast, the other agenda condition, partitionality, is not necessary: linearity also follows from independence and consensus preservation for some non-partitional but path-connected agendas. So, the agenda conditions of part (a) are non-minimal. We leave the task of finding minimal agenda conditions as a challenge for future research.12
Despite its non-minimality, the partitionality condition in Theorem 6 is not redundant: if it were dropped (and not replaced by something else), part (a) would cease to hold. This follows from the following (non-trivial) proposition:
Proposition 2 For some path-connected and non-partitional (finite) agenda X, there exists a non-linear pooling function F : P_X^n → P_X satisfying independence (even neutrality) and consensus preservation.13
Readers familiar with binary judgment-aggregation theory will notice that the agenda which we construct to prove this proposition violates an important agenda condition from that area, namely even-number negatability (or non-affineness) (see Dietrich 2007, Dietrich and List 2007, and Dokow and Holzman 2010). It would be intriguing if the same condition turned out to be the correct minimal substitute for partitionality in Theorem 6.
Proof sketches
We now describe how Theorems 4 to 6 can be proved. (Again, details are given in the Appendix.) We begin with part (a) of each theorem. To prove Theorem 4(a), consider a non-nested agenda X with |X∖{Ω, ∅}| > 4 and a pooling function F satisfying independence and consensus compatibility. We want to show that F is linear. Neutrality follows from Theorem 1(a). From neutrality, we can infer linearity by using two lemmas. The first contains the bulk of the work, and the second is an application of Cauchy's functional equation (similar to its application in Aczél 1966, Aczél and Wagner 1980, and McConway 1981). Let us write 0 and 1 to denote the n-tuples (0, ..., 0) and (1, ..., 1), respectively.
D(x_1, ..., x_n) = ∑_{i=1}^n w_i x_i for all x ∈ [0,1]^n
for some non-negative weights w_1, ..., w_n with sum 1.
The proof of Theorem 5(a) follows a similar strategy, but replaces Lemma 4 with the following lemma:
Lemma 6 If D : [0,1]^n → [0,1] is the local pooling criterion of a neutral and conditional-consensus-compatible pooling function for a non-simple agenda X, then (2) holds.
Finally, Theorem 6(a) can also be proved using a similar strategy, this time replacing Lemma 4 with the following lemma:
Lemma 7 If D : [0,1]^n → [0,1] is the local pooling criterion of a neutral and consensus-preserving pooling function for a partitional agenda X, then (2) holds.
Part (b) of each of Theorems 4 to 6 can be proved by constructing a suitable example of a non-linear pooling function. In the case of Theorem 4(b), we can re-use the non-neutral pooling function constructed to prove Theorem 1(b) as long as the agenda satisfies |X∖{Ω, ∅}| > 4; for (small) agendas with |X∖{Ω, ∅}| ≤ 4, we construct a somewhat simplistic pooling function generating collective opinion functions that only assign probabilities of 0, 1/2, or 1. The constructions for Theorems 5(b) and 6(b) are more difficult; the one for Theorem 5(b) also has the property that collective probabilities never take values other than 0, 1/2, or 1.
Classic results as special cases
It is instructive to see how our present results generalize classic results in the literature, where the agenda is a σ-algebra (especially Aczél 1966, Aczél and Wagner 1980, and McConway 1981). Note that, for a σ-algebra, all the agenda conditions we have used reduce to a simple condition on agenda size:
Lemma 8 For any agenda X (≠ {Ω, ∅}) that is closed under pairwise union or intersection (i.e., any agenda that is an algebra), the conditions of non-nestedness, non-simplicity, path-connectedness, and partitionality are equivalent, and are each satisfied if and only if |X| > 4.
Note, further, that when X is a σ-algebra, all of our consensus requirements become equivalent, as shown by Proposition 1(a). It follows that, in the special case of a σ-algebra, our six theorems reduce to two classical results:
Theorems 1 to 3 reduce to the result that all pooling functions satisfying independence and consensus preservation are neutral if |X| > 4, but not if |X| = 4; Theorems 4 to 6 reduce to the result that all pooling functions satisfying independence and consensus preservation are linear if |X| > 4, but not if |X| = 4.
The case |X| < 4 is uninteresting because it implies that X = {∅, Ω}, given that X is a σ-algebra. In fact, we can derive these classic theorems not only for σ-algebras, but also for algebras. This is because, given Lemma 8, Theorems 3 and 6 have the following implication:
Corollary 1 For any agenda X that is closed under pairwise union or intersection (i.e., any agenda that is an algebra), (a) if |X| > 4, every pooling function F : P_X^n → P_X satisfying independence and consensus preservation is linear (and by implication neutral); (b) if |X| = 4, there exists a non-neutral (and by implication non-linear) pooling function F : P_X^n → P_X satisfying independence and consensus preservation.
Probabilistic preference aggregation
To illustrate the use of general agendas, we now present an application to probabilistic preference aggregation, a probabilistic analogue of Arrovian preference aggregation. A group seeks to rank a set K of at least two (mutually exclusive and exhaustive) alternatives in a linear order. Let Ω_K be the set of all strict orderings ≻ over K (asymmetric, transitive, and connected binary relations). Informally, K can represent any set of distinct objects, e.g., policy options, candidates, social states, or distributions of goods, and an ordering over K can have any interpretation consistent with a linear form (e.g., 'better than', 'preferable to', 'higher than', 'more competent than', 'less unequal than', etc.).
For any two distinct alternatives x and y in K, let x ≻ y denote the event that x is ranked above y; i.e., x ≻ y denotes the subset of Ω_K consisting of all those orderings in Ω_K that rank x above y. We define the preference agenda as the set
X_K = {x ≻ y : x, y ∈ K with x ≠ y},
which is non-empty and closed under complementation, as required for an agenda (this construction draws on Dietrich and List 2007a). In our opinion pooling problem, each individual i submits probability assignments for the events in X_K, and the group then determines corresponding collective probability assignments. An agent's opinion function P : X_K → [0,1] can be interpreted as capturing the agent's degrees of belief about which of the various pairwise comparisons x ≻ y (in X_K) are 'correct'; call this the belief interpretation. Thus, for any two distinct alternatives x and y in K, P(x ≻ y) can be interpreted as the agent's degree of belief in the event x ≻ y, i.e., the event that x is ranked above (preferable to, better than, higher than ...) y. (On a different interpretation, the vague-preference interpretation, P(x ≻ y) could represent the degree to which the agent prefers x to y, so that the present framework would capture vague preferences over alternatives as opposed to degrees of belief about how they are ranked in terms of the appropriate criterion.) A pooling function, as defined above, maps n individual such opinion functions to a single collective one.
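The preference agenda for a small K can be built explicitly. The following sketch is hypothetical (the names are ours): it encodes the worlds as permutations of K and each event x ≻ y as the set of orderings ranking x above y, and its last lines exhibit the Condorcet-style minimal inconsistent set {x ≻ y, y ≻ z, z ≻ x} that makes X_K non-simple when |K| > 2.

```python
from itertools import permutations

K = ("x", "y", "z")
ORDERINGS = set(permutations(K))       # the worlds: all strict linear orderings of K

def ranked_above(a, b):
    """The event 'a > b': the set of orderings placing a above (before) b."""
    return {o for o in ORDERINGS if o.index(a) < o.index(b)}

X_K = [ranked_above(a, b) for a in K for b in K if a != b]   # the preference agenda

cycle = [ranked_above("x", "y"), ranked_above("y", "z"), ranked_above("z", "x")]
print(set.intersection(*cycle))        # set(): no ordering realizes the cycle
print(all(set.intersection(*(cycle[:i] + cycle[i + 1:])) for i in range(3)))  # True: minimal
```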
What are the structural properties of this preference agenda? Lemma 9 For a preference agenda X_K, the conditions of non-nestedness, non-simplicity, and path-connectedness are equivalent, and are each satisfied if and only if |K| > 2; the condition of partitionality is violated for any K.
The proof that the preference agenda is non-nested if and only if |K| > 2 is trivial. The analogous claims for non-simplicity and path-connectedness are well-established in binary judgment-aggregation theory, to which we refer the reader.14 Finally, it is easy to show that any preference agenda violates partitionality.
Since the preference agenda is non-nested, non-simple, and path-connected when |K| > 2, Theorems 1(a), 2(a), 3(a), 4(a), and 5(a) apply; but Theorem 6(a) does not, because partitionality is violated. Let us here focus on Theorem 5. This theorem has the following corollary for the preference agenda:
Corollary 2 For a preference agenda X_K, (a) if |K| > 2, every pooling function F : P_X^n → P_X satisfying independence and conditional consensus compatibility is linear; (b) if |K| = 2, there exists a non-linear pooling function F : P_X^n → P_X satisfying independence and conditional consensus compatibility.
It is interesting to compare this result with Arrow's classic theorem. While Arrow's theorem yields a negative conclusion if |K| > 2 (showing that only dictatorial aggregation functions satisfy its requirements), our linearity result does not have any negative flavour. We obtain this positive result despite the fact that our axiomatic requirements are comparable to Arrow's. Independence, in our framework, is the probabilistic analogue of Arrow's independence of irrelevant alternatives: for any pair of distinct alternatives x, y in K, the collective probability for x ≻ y should depend only on individual probabilities for x ≻ y. Conditional consensus compatibility is a strengthened analogue of Arrow's weak Pareto principle (an exact analogue would be consensus preservation): it requires that, for any two pairs of distinct alternatives, x, y ∈ K and v, w ∈ K, if all individuals are certain that x ≻ y given that v ≻ w, then this agreement should be preserved at the collective level. The analogues of Arrow's universal domain and collective rationality are built into our definition of a pooling function, whose domain and co-domain are defined as the set of all (by definition coherent) opinion functions over X_K.
Thus our result points towards an alternative escape-route from Arrow's impossibility theorem (though it may be practically applicable only in special contexts): if we enrich Arrow's informational framework by allowing degrees of belief over different possible linear orderings as input and output of the aggregation (or alternatively, vague preferences, understood probabilistically), then we can avoid Arrow's dictatorship conclusion. Instead, we obtain a positive characterization of linear pooling, despite imposing requirements on the pooling function that are stronger than Arrow's classic requirements (in so far as conditional consensus compatibility is stronger than the analogue of the weak Pareto principle).
On the belief interpretation, the present informational framework is meaningful so long as there exists a fact of the matter about which of the orderings in Ω_K is the 'correct' one (e.g., an objective quality ordering), so that it makes sense to form beliefs about this fact. On the vague-preference interpretation, our framework requires that vague preferences over pairs of alternatives are extendable to a coherent probability distribution over the set of 'crisp' orderings in Ω_K.
There are, of course, substantial bodies of literature on avoiding Arrow's dictatorship conclusion in richer informational frameworks and on probabilistic or vague preference aggregation. It is well known, for example, that the introduction of interpersonally comparable preferences (of an ordinal or cardinal type) is sufficient for avoiding Arrow's negative conclusion (e.g., Sen 1970). Also, different models of probabilistic or vague preference aggregation have been proposed.15 A typical assumption is that, for any pair of alternatives x, y ∈ K, each individual prefers x to y to a certain degree between 0 and 1. However, the standard constraints on vague or fuzzy preferences do not require individuals to hold probabilistically coherent opinion functions in our sense; hence the literature has tended to generate Arrow-style impossibility results. By contrast, it is illuminating to see that a possibility result on probabilistic preference aggregation can be derived as a corollary of one of our new results on probabilistic opinion pooling.
A unified perspective
Finally, we wish to compare probabilistic opinion pooling with binary judgment aggregation and Arrovian preference aggregation in its original form. Thanks to the notion of a general agenda, we can represent each of these other aggregation problems within the present framework.
To represent binary judgment aggregation, we simply need to restrict attention to binary opinion functions, i.e., opinion functions that take only the values 0 and 1.16 Binary opinion functions correspond to consistent and complete judgment sets in judgment-aggregation theory, i.e., sets of the form J ⊆ X which satisfy ∩_{A∈J} A ≠ ∅ (consistency) and contain a member of each pair A, A^c ∈ X (completeness).17 A binary opinion pooling function assigns to each profile of binary opinion functions a collective binary opinion function. Thus, binary opinion pooling functions correspond to standard judgment aggregation functions (with universal domain and consistent and complete outputs). To represent preference aggregation, we need to restrict attention both to the preference agenda, as introduced in Section 7, and to binary opinion functions, as just defined. Binary opinion functions for the preference agenda correspond to linear preference orders, as familiar from preference aggregation theory in the tradition of Arrow. Here, binary opinion pooling functions correspond to Arrovian social welfare functions.
The literature on binary judgment aggregation contains several theorems that use axiomatic requirements similar to those used here. In the binary case, however, these requirements lead to dictatorial, rather than linear, aggregation, as in Arrow's original impossibility theorem in preference-aggregation theory. In fact, Arrow-like theorems are immediate corollaries of the results on judgment aggregation, when applied to the preference agenda (e.g., Dietrich and List 2007a, List and Pettit 2004). In particular, the independence requirement reduces to Arrow's independence of irrelevant alternatives, and the unanimity-preservation requirements reduce to variants of the Pareto principle.
How can the same axiomatic requirements lead to a positive conclusion (linearity) in the probabilistic framework and to a negative one (dictatorship) in the binary case? The reason is that, in the binary case, linearity collapses into dictatorship because the only well-defined linear pooling functions are dictatorial here. Let us explain this point. Linearity of a binary opinion pooling function F is defined just as in the probabilistic framework: there exist real-valued weights w_1, ..., w_n ≥ 0 with w_1 + ... + w_n = 1 such that, for every profile (P_1, ..., P_n) of binary opinion functions, the collective truth-value of any given event A in the agenda X is the weighted arithmetic average w_1 P_1(A) + ... + w_n P_n(A). Yet, for this to define a proper binary opinion pooling function, some individual i must get a weight of 1 and all others must get a weight of 0, since otherwise the average w_1 P_1(A) + ... + w_n P_n(A) could fall strictly between 0 and 1, violating the binary restriction. In other words, linearity is equivalent to dictatorship here.18
We can obtain a unified perspective on several distinct aggregation problems by combining this paper's linearity results with the corresponding dictatorship results from the existing literature (adopting the unification strategy proposed in Dietrich and List 2010). This yields several unified characterization theorems applicable to probability aggregation, judgment aggregation, and preference aggregation. Let us state these results. The first combines Theorem 4 with a result due to Dietrich (2013); the second combines Theorem 5 with a result due to Dietrich and List (2013); and the third combines Theorem 6 with the analogue of Arrow's theorem in judgment aggregation (Dietrich and List 2007a and Dokow and Holzman 2010). In the binary case, the independence requirement and our various unanimity requirements are defined as in the probabilistic framework, but with a restriction to binary opinion functions.19
Theorem 4+ (a) For any non-nested agenda X with |X∖{Ω, ∅}| > 4, every binary or probabilistic opinion pooling function satisfying independence and consensus compatibility is linear (where linearity reduces to dictatorship in the binary case). (b) For any other agenda X (≠ {∅, Ω}), there exists a non-linear binary or probabilistic opinion pooling function satisfying independence and consensus compatibility.
Theorem 5+ (a) For any non-simple agenda X with |X∖{Ω, ∅}| > 4, every binary or probabilistic opinion pooling function satisfying independence and conditional consensus compatibility is linear (where linearity reduces to dictatorship in the binary case). (b) For any simple agenda X (finite and distinct from {∅, Ω}), there exists a non-linear binary or probabilistic opinion pooling function satisfying independence and conditional consensus compatibility.
Theorem 6+ (a) For any path-connected and partitional agenda X, every binary or probabilistic opinion pooling function satisfying independence and consensus preservation is linear (where linearity reduces to dictatorship in the binary case). (b) For any non-path-connected (finite) agenda X, there exists a non-linear binary or probabilistic opinion pooling function satisfying independence and consensus preservation.20
18 To be precise, for (trivial) agendas with X∖{Ω, ∅} = ∅, the weights w_i may differ from 1 and 0. But it still follows that every linear binary opinion pooling function (in fact, every binary opinion pooling function) is dictatorial here, for the trivial reason that there is only one binary opinion function and thus only one (dictatorial) binary opinion pooling function.
19 In the binary case, two of our unanimity-preservation requirements (implicit consensus preservation and consensus compatibility) are equivalent, because every binary opinion function is uniquely extendible to σ(X). Also, conditional consensus compatibility can be stated more easily in the binary case, namely in terms of a single conditional judgment rather than a finite set of conditional judgments.
By Lemma 9, Theorems 4+, 5+, and 6+ are relevant to preference aggregation insofar as the preference agenda X_K satisfies each of non-nestedness, non-simplicity, and path-connectedness if and only if |K| > 2, where K is the set of alternatives. Recall, however, that the preference agenda is never partitional, so that part (a) of Theorem 6+ never applies. By contrast, the binary result on which part (a) is based applies to the preference agenda, as it uses the weaker condition of even-number-negatability (or non-affineness) instead of partitionality (and that weaker condition is satisfied by X_K if |K| > 2). As noted above, it remains an open question how far partitionality can be weakened in the probabilistic case.21
A Proofs
We now prove all our results. In light of the mathematical connection between the present results and those in our companion paper on 'premise-based' opinion pooling for σ-algebra agendas (Dietrich and List, 'Probabilistic opinion pooling generalized. Part two: The premise-based approach'), one might imagine two possible proof strategies: either one could prove our present results directly and those in the companion paper as corollaries, or vice versa. In fact, we will mix those two strategies. We will prove parts (a) of all present theorems directly (and use them in the companion paper to derive the corresponding results), while we will prove parts (b) directly in some cases and as corollaries of corresponding results from the companion paper in others. This Appendix is organised as follows. In Sections A.1 to A.5, we prove parts (a) of Theorems 2 to 6, along with related results. Theorem 1(a) requires no independent proof, as it follows from Theorem 2(a). In Section A.6, we clarify the connection between the two papers, and then prove parts (b) of all present theorems. Finally, in Section A.7, we prove Propositions 1 and 2.
A.1 Proof of Theorem 2(a)
As explained in the main text, Theorem 2(a) follows from Lemmas 1 and 2. We now prove these lemmas. To do so, we will also prove some preliminary results.
Lemma 10 Consider any agenda X.
(a) ∼ defines an equivalence relation on X∖{∅, Ω}.
(b) A ∼ B ⇔ A^c ∼ B^c for all events A, B ∈ X∖{∅, Ω}.
(c) A ⊆ B ⇒ A ∼ B for all events A, B ∈ X∖{∅, Ω}.
(d) If X ≠ {∅, Ω}, the relation ∼ has either a single equivalence class, namely X∖{∅, Ω}, or exactly two equivalence classes, each one containing exactly one member of each pair A, A^c ∈ X∖{∅, Ω}.
Proof. (a) Reflexivity, symmetry, and transitivity on X∖{∅, Ω} are all obvious (we have excluded ∅ and Ω to ensure reflexivity).
(b) It suffices to prove one direction of implication (as (A^c)^c = A for all A ∈ X). Let A, B ∈ X∖{∅, Ω} with A ∼ B. Then there is a path A_1, ..., A_k ∈ X from A to B such that any neighbours A_j, A_{j+1} are non-exclusive and non-exhaustive. So A_1^c, ..., A_k^c is a path from A^c to B^c, where any neighbours A_j^c, A_{j+1}^c are non-exclusive (as A_j^c ∩ A_{j+1}^c = (A_j ∪ A_{j+1})^c ≠ Ω^c = ∅) and non-exhaustive (as A_j^c ∪ A_{j+1}^c = (A_j ∩ A_{j+1})^c ≠ ∅^c = Ω). So, A^c ∼ B^c.
(d) Let X ≠ {∅, Ω}. Suppose the number of equivalence classes with respect to ∼ is not one. As X∖{∅, Ω} ≠ ∅, it is not zero. So it is at least two. We show two claims:
Claim 1. There are exactly two equivalence classes with respect to ∼.
Claim 2. No equivalence class contains a pair A, A^c.
Proof of Claim 2. For a contradiction, let Z be a (∼-)equivalence class containing the pair A, A^c. By assumption, Z is not the only equivalence class, so there is a B ∈ X∖{∅, Ω} with B ≁ A (hence B ≁ A^c). Then either A ∩ B = ∅ or A ∪ B = Ω. In the first case, B ⊆ A^c, so that B ∼ A^c by (c), a contradiction. In the second case, A^c ⊆ B, so that A^c ∼ B by (c), a contradiction.
Proof of Lemma 1. Consider an agenda X 6 = f?; g. By Lemma 10(a), is indeed an equivalence relation on Xnf?; g. By Lemma 10(d), it remains to prove that X is nested if and only if there are exactly two equivalence classes. Note that X is nested if and only if Xnf?; g is nested. So we may assume without loss of generality that ?; = 2 X.
First, suppose there are two equivalence classes. Let X + be one of them. By Lemma 10(d), X = fA; A c : A 2 X + g. To complete the proof that X is nested, we show that X + is linearly ordered by set-inclusion . Clearly, is re ‡exive, transitive, and anti-symmetric. We must show that it is connected. So, let A; B 2 X + ; we prove that A B or B A.
Since A 6 B c (by Lemma 10(d)), either A \ B c = ? or A [ B c = . So, either A B or B A.
Conversely, let X be nested. So X = fA; A c : A 2 X + g for some set X + that is linearly ordered by set inclusion. Let A 2 X + . We show that A 6 A c , implying that X has at least -so by Lemma 10(d) exactly -two equivalence classes. For a contradiction, suppose A A c . Then there is a path A 1 ; :::; A k 2 X from A = A 1 to A c = A k such that, for all neighbours A j ; A j+1 , A j \ A j+1 6 = ? and A j [ A j+1 6 = . Since each event C 2 X either is in X + or has its complement in X + , and since A 1 = A 2 X + and A c k = A 2 X + , there are neighbours A j ; A j+1 such that A j ; A c j+1 2 X + . So, as X + is linearly ordered by , either
A j A c j+1 or A c j+1 A j , i.e., either A j \ A j+1 = ? or A j [ A j+1 = , a contradiction.
We now give a useful re-formulation of the requirement of conditional consensus compatibility for opinion pooling on a general agenda X. Note …rst that an opinion function is consistent with certainty of A (2 X) given B (2 X) if and only if it is consistent with certainty of the event 'B implies A'(i.e., with zero probability of the event BnA or 'B but not A'). This observation yields the following reformulation of conditional consensus compatibility (in which the roles of A and B have been interchanged):
Implication preservation. For all P 1 ; :::; P n 2 P X , and all …nite sets S of pairs (A; B) of events in X, if every opinion function P i is consistent with certainty that A implies B for all (A; B) in S (i.e., some extension P i 2 P (X) of P i satis…es P i (AnB) = 0 for all pairs (A; B) 2 S), then so is the collective opinion function P P 1 ;:::;Pn .
Proposition 3 For any agenda X, a pooling function F : P n X ! P X is conditional consensus compatible if and only if it is implication preserving.
Proof of Lemma 2. Let F be an independent and conditional-consensus-compatible pooling function for agenda X. For all A 2 X, let D A be the pooling criterion given by independence. We show that D A = D B for all A; B 2 X with A \ B 6 = ? and A [ B 6 = . This will imply that D A = D B whenever A B (by induction on the length of a path from A to B), which completes the proof. So, let A; B 2 X with A \ B 6 = ? and A [ B 6 = . Notice that A \ B, A [ B, and AnB need not belong to X. Let x 2 [0; 1] n ; we show that D A (x) = D B (x). As A \ B 6 = ? and A c \ B c = (A [ B) c 6 = ?, there are P 1 ; :::; P n 2 P (X) such that P i (A \ B) = x i and P i (A c \ B c ) = 1 x i for all i = 1; :::; n. Now consider the opinion functions P 1 ; :::; P n 2 P X given by P i := P i j X . Since P i (AnB) = 0 and P i (BnA) = 0 for all i, the collective opinion function P P 1 ;:::;Pn has an extension P P 1 ;:::;Pn 2 P (X) such that P P 1 ;:::;Pn (AnB) = P P 1 ;:::;Pn (BnA) = 0, by implication preservation (which is equivalent to conditional consensus compatibility by Proposition 3). So P P 1 ;:::;Pn (A) = P P 1 ;:::;Pn (A \ B) = P P 1 ;:::;Pn (B), and hence, P P 1 ;:::;Pn (A) = P P 1 ;:::;Pn (B). So, using the fact that P P 1 ;:::;Pn (A) = D A (x) (as P i (A) = x i for all i) and P P 1 ;:::;Pn (B) = D B (x) (as P i (B) = x i for all i), we have
D A (x) = D B (x).
A.2 Proof of Theorem 3(a)
As explained in the main text, Theorem 3(a) follows from Lemma 3, which we now prove.
Proof of Lemma 3. Let F : P n X ! P X be independent and consensus-preserving. Let A; B 2 X such that A ` B, say in virtue of (countable) set Y X. Write D A and D B for the pooling criterion for A and B, respectively. Let x = (x 1 ; :::; x n ) 2 [0; 1] n . We show that D A (x) D B (x). As \ C2fAg[Y C is non-empty but has empty intersection with B c (by the conditional entailment), it equals its intersection with B, so \ C2fA;Bg[Y C 6 = ?. Similarly, as \ C2fB c g[Y C is non-empty but has empty intersection with A, it equals its intersection with A c , so
\ C2fA c ;B c g[Y C 6 = ?. Hence there exist ! 2 \ C2fA;Bg[Y C and ! 0 2 \ C2fA c ;B c g[Y C
. For each individual i, we de…ne a probability function P i : (X) ! [0; 1] by P i := x i ! + (1 x i ) ! 0 (where ! ; ! 0 : (X) ! [0; 1] are the Dirac-measures at ! and ! 0 , respectively), and we then let P i := P i j X . As each P i satis…es P i (A) = P i (B) = x i , P P 1 ;:::;Pn (A) = D A (P 1 (A); :::; P n (A)) = D A (x), P P 1 ;:::;Pn (B) = D B (P 1 (B); :::; P n (B)) = D B (x).
Further, for each P i and each C 2 Y , we have P i (C) = 1, so that P P 1 ;:::;Pn (C) = 1 (by consensus preservation). Hence P P 1 ;:::;Pn (\ C2Y C) = 1, since 'countable inter-sections preserve probability one'. So, P P 1 ;:::;Pn (\ C2fAg[Y C) = P P 1 ;:::;Pn (A) = D A (x), P P 1 ;:::;Pn (\ C2fBg[Y C) = P P 1 ;:::;Pn (B) = D B (x).
To prove that D A (x) D B (x), it su¢ ces to show that P P 1 ;:::;Pn (\ C2fAg[Y C) P P 1 ;:::;Pn (\ C2fBg[Y C). This is true because
$\bigcap_{C \in \{A\} \cup Y} C = \bigcap_{C \in \{A, B\} \cup Y} C \subseteq \bigcap_{C \in \{B\} \cup Y} C$,
where the identity holds by an earlier argument.
A.3 Proof of Theorem 4(a)
As explained in the main text, Theorem 4(a) follows from Theorem 1(a) via Lemmas 4 and 5.22 It remains to prove both lemmas. We draw on a known agenda characterization result and a technical lemma.
Proposition 4 (Dietrich 2013) For any agenda X, the following are equivalent: Proof. (a) As X 6 = f ; ?g, we may pick some A 2 Xnf ; ?g. For each x 2 [0; 1] n , there exist (by A 6 = ?; ) opinion functions P 1 ; :::; P n 2 P X such that (P 1 (A); :::; P n (A)) = x, which implies that (P 1 (A c ); :::; P n (A c )) = 1 x and D(x) + D(1 x) = P P 1 ;:::;Pn (A) + P P 1 ;:::;Pn (A c ) = 1.
(b) Given consensus-preservation D(1) = 1. By part (a), D(0) = 1 D(1). So D(0) = 0.
Proof of Lemma 4. Let D be the local pooling criterion of such a pooling function for such an agenda X. Consider any x; y; z 2 [0; 1] n with sum 1. By Proposition 4, there exist A; B; C 2 X such that each of the sets
$A^* := A^c \cap B \cap C$, $B^* := A \cap B^c \cap C$, $C^* := A \cap B \cap C^c$
is non-empty. For all individuals i, since x i + y i + z i = 1 and since A ; B ; C are pairwise disjoint non-empty members of (X), there exists a P i 2 P (X) such that P i (A ) = x i , P i (B ) = y i and P i (C ) = z i . By construction,
$P_i(A^* \cup B^* \cup C^*) = x_i + y_i + z_i = 1$ for all $i$. (3)
Let P i := P i j X for each individual i. For the pro…le (P 1 ; :::; P n ) 2 P n X thus de…ned, we consider the collective opinion function P P 1 ;:::;Pn . We complete the proof by proving two claims.
Claim 1. P (A ) + P (B ) + P (C ) = P (A [ B [ C ) = 1 for some P 2 P (X) extending P P 1 ;:::;Pn .
The …rst identity holds for all extensions P 2 P (X) of P , by pairwise disjointness of A ; B ; C . For the second identity, note that each P i has an extension P i 2 P (X) for which P i (A [ B [ C ) = 1, so that by consensus compatibility P P 1 ;:::;Pn also has such an extension.
Consider any individual i. We de…ne D i : [0; 1] ! [0; 1] by D i (t) = D(0; :::; 0; t; 0; :::; 0), where t occurs at position i in (0; :::; 0; t; 0; :::; 0). By (7), D i (s + t) = D i (s) + D i (t) for all s; t 0 with s + t 1. As one can easily check, D i can be extended to a function D i : [0; 1) ! [0; 1) such that D i (s + t) = D i (s) + D i (t) for all s; t 0, i.e., such that D i satis…es the non-negative version of Cauchy's functional equation. So, there is some w i 0 such that D i (t) = w i t for all t 0 (by Theorem 1 in [START_REF] Aczél | Lectures on Functional Equations and their Applications[END_REF]. Now, for all x 2 [0; 1] n , D(x) = X n i=1 D i (x i ) (by repeated application of ( 7)), and so (as
D i (x i ) = D i (x i ) = w i x i ) D(x) = X n i=1 w i x i . Applying the latter with x = 1 yields D(1) = X n i=1 w i , hence X n i=1 w i = 1.
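As a quick consistency check of the linear form just derived (a restatement of the argument, not an additional assumption): if $D(x) = \sum_{i=1}^n w_i x_i$ with $w_i \geq 0$ and $\sum_{i=1}^n w_i = 1$, then for any $x, y, z \in [0,1]^n$ with $x + y + z = 1$,
$$D(x) + D(y) + D(z) = \sum_{i=1}^n w_i (x_i + y_i + z_i) = \sum_{i=1}^n w_i = 1,$$
so the recovered pooling criterion indeed satisfies the identity from which the argument started.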
A.4 Proof of Theorem 5(a)
As explained in the main text, Theorem 5(a) follows from Theorem 2(a) via Lemmas 6 and 5. 23 It remains to prove Lemma 6.
Proof of Lemma 6. Let D be the local pooling criterion of a neutral and conditionalconsensus-compatible pooling function for a non-simple agenda X. Consider any x; y; z 2 [0; 1] n with sum 1. As X is non-simple, there is a (countable) minimal inconsistent set Y X with jY j 3. Pick pairwise distinct A; B; C 2 Y . Let
$A^* := \bigcap_{E \in Y \setminus \{A\}} E$, $B^* := \bigcap_{E \in Y \setminus \{B\}} E$, $C^* := \bigcap_{E \in Y \setminus \{C\}} E$.
As (X) is closed under countable intersections, A ; B ; C 2 (X). For each i, as x i + y i + z i = 1 and as A ; B ; C are (by Y 's minimal inconsistency) pairwise disjoint non-empty members of (X), there exists a P i 2 P (X) such that
$P_i(A^*) = x_i$, $P_i(B^*) = y_i$, $P_i(C^*) = z_i$.
By construction,
$P_i(A^* \cup B^* \cup C^*) = x_i + y_i + z_i = 1$ for all $i$. (8)
Now let P i := P i j X for each individual i, and let P := P P 1 ;:::;Pn . We derive four properties of P (Claims 1-4), which then allow us to show that D(x) + D(y) + D(z) = 1 (Claim 5).
Claim 1. P (\ E2Y nfA;B;Cg E) = 1 for all extensions P 2 P (X) of P .
For all E 2 Y nfA; B; Cg, we have E A [ B [ C , so that by (8) P 1 (E) = ::: = P n (E) = 1, and hence P (E) = 1 (by consensus preservation, which follows from conditional consensus compatibility by Proposition 1(a)). So, for any extension P 2 P (X) of P , we have P (E) = 1 for all E 2 Y nfA; B; Cg. Thus P (\ E2Y nfA;B;Cg E) = 1, as 'countable intersections preserve probability one'.
Claim 2. P (A c [ B c [ C c ) = 1 for all extensions P 2 P (X) of P .
Let P 2 P (X) be an extension of P . Since A \ B \ C is disjoint from \ E2Y nfA;B;Cg E, which has P -probability one by Claim 1, P (A \ B \ C) = 0. This implies Claim 2, since
A c [ B c [ C c = (A \ B \ C) c . Claim 3. P ((A c \ B \ C) [ (A \ B c \ C) [ (A \ B \ C c )) = 1 for some extension P 2 P (X) of P . As A c \ B c is disjoint with each of A ; B ; C , it is disjoint with A [ B [ C ,
which has P i -probability of one for all individuals i by (8). So, P i (A c \ B c ) = 0, i.e., P i (A c nB) = 0, for all i. Analogously, P i (A c nC) = 0 and P i (B c nC) = 0 for all i. Since, as just shown, each P i has an extension P i which assigns zero probability to A c nB, A c nC and B c nC, by conditional consensus compatibility (and Proposition 3) the collective opinion function P also has an extension P 2 P (X) assigning zero probability to these three events, and hence, to their union
(A c nB)[(A c nC)[(B c nC) = (A c \B c )[(A c \C c )[(B c \C c ).
In other words, with P -probability of zero at least two of A c ; B c ; C c hold. Further, with P -probability of one at least one of A c ; B c ; C c holds (by Claim 2). So, with P -probability of one exactly one of A c ; B c ; C c holds. This is precisely what had to be shown. A.5 Proof of Theorem 6(a)
As explained in the main text, Theorem 6(a) follows from Theorem 3(a) via Lemmas 7 and 5 (while applying Lemma 11(b)). It remains to prove Lemma 7.
Proof of Lemma 7. Let D be the local pooling criterion for such a pooling function for a partitional agenda X. Consider any x; y; z 2 [0; 1] n with sum 1. Since X is partitional, some countable Y X partitions into at least three non-empty events. Choose distinct A; B; C 2 Y . For each individual i, since x i + y i + z i = 1 and since A, B and C are pairwise disjoint and non-empty, there is some P i 2 P X such that P i (A) = x i ; P i (B) = y i ; P i (C) = z i .
Let P be the collective opinion function for this pro…le. Since Y is a countable partition of and P can be extended to a ( -additive) probability function,
P E2Y P (E) = 1. Now,
A.6 Proof of parts (b) of all theorems
Parts (b) of three of the six theorems will be proved by reduction to results in the companion paper. To prepare this reduction, we …rst relate opinion pooling on a general agenda X to premise-based opinion pooling on a -algebra agenda, as analysed in the companion paper. Consider any agenda X and any -algebra agenda of which X is a subagenda. (A subagenda of an agenda is a subset which is itself an agenda, i.e., a non-empty subset closed under complementation.) For instance, could be (X). We can think of the pooling function F for X as being induced by a pooling function F for the larger agenda . Formally, a pooling function F : P n ! P for agenda induces the pooling function F : P n X ! P X for (sub)agenda X if F and F generate the same collective opinions within X, i.e., F (P 1 j X ; :::; P n j X ) = F (P 1 ; :::; P n )j X for all P 1 ; :::; P n 2 P :
(Strictly speaking, we further require that P X = fP j X : P 2 P g, but this requirement holds automatically in standard cases, e.g., if X is …nite or (X) = . 24 ) We call F the inducing pooling function, and F the induced one. Our Lemma 13 Consider an agenda X and the corresponding -algebra agenda = (X). Any pooling function for X is (a) induced by some pooling function for agenda ; (b) independent (respectively, neutral, linear) if and only if every inducing pooling function for agenda is independent (respectively, neutral, linear) on X, where 'every'can further be replaced by 'some'; (c) consensus-preserving if and only if every inducing pooling function for agenda is consensus-preserving on X, where 'every' can further be replaced by 'some'; (d) consensus-compatible if and only if some inducing pooling function for agenda is consensus-preserving; (e) conditional-consensus-compatible if and only if some inducing pooling function for agenda is conditional-consensus-preserving on X (where in (d) and (e) the 'only if'claim assumes that X is …nite).
Proof of Lemma 13. Consider an agenda X, the generated -algebra = (X), and a pooling function F for X.
(a) For each P 2 P X , …x an extension in P denoted P . Consider the pooling function F for de…ned by F (P 1 ; :::; P n ) = F (P 1 j X ; :::; P n j X ) for all P 1 ; :::; P n 2 P .Clearly, F induces F (regardless of how the extensions P of P 2 P X were chosen).
(b) We give a proof for the 'independence'case; the proofs for the 'neutrality' and 'linearity'cases are analogous. Note (using part (a)) that replacing 'every'by 'some'strengthens the 'if'claim and weakens the 'only if'claim. It thus su¢ ces to prove the 'if'claim with 'some', and the 'only if'claim with 'every'. Clearly, if some inducing F is independent on X, then F inherits independence. Now let F be independent with pooling criteria D A ; A 2 X. Consider any F : P n ! P n inducing F . Then F is independent on X with the same pooling criteria as for F because for all A 2 X and all P 1 ; :::; P n 2 P we have F (P 1 ; :::; P n )(A) = F (P 1 j X ; :::; P n j X )(A) as F induces F = D A (P 1 j X (A); :::; P n j X (A)) by F 's independence = D A (P 1 (A); :::; P n (A)).
(c) As in part (b), it su¢ ces to prove the 'if'claim with 'some', and the 'only if' claim with 'every'. Clearly, if some inducing F is consensus-preserving on X, F inherits consensus preservation. Now let F be consensus-preserving and induced by F . Then F is consensus-preserving on X because, for all A 2 X and which is either (X) or, if X is …nite, any -algebra which includes X. Our proof of Lemma 13 can be extended to this generalized statement (drawing on Lemma 15 and using an argument related to the 'Claim'in the proof of Theorem 1(b) of the companion paper).
Lemma 14 If a pooling function for a -algebra agenda is independent on a subagenda X (where X is …nite or (X) = ), then it induces a pooling function for agenda X.
The proof draws on a measure-theoretic fact in which the word '…nite'is essential:
Lemma 15 Every probability function on a …nite sub--algebra of -algebra can be extended to a probability function on .
Proof. Let 0 be a …nite sub--algebra of -algebra , and consider any P 0 2 P 0 . Let A be the set of atoms of 0 , i.e., ( -)minimal events in 0 nf?g. As 0 is …nite, A must partition . So, X A2A P 0 (A) = 1. For each A 2 A, let Q A be a probability function on such that Q A (A) = 1. (Such functions exist, since each Q A could for instance be the Dirac measure at some ! A 2 A.) Then P := X A2A P 0 (A)Q A de…nes a probability function on , because (given the identity X A2A:P 0 (A)6 =0 P 0 (A) = 1) it is a convex combination of probability functions on . Further, P extends P 0 , because it agrees with P 0 on A, hence on 0 .
Proof of Lemma 14. Suppose the pooling function F for -algebra agenda is independent on subagenda X, and that X is …nite or (X) = . Let 0 := (X).
If X is …nite, so is 0 . Each P 2 P X can by de…nition be extended to a function in P 0 , which (by Lemma 15 in case 0 is a …nite -algebra distinct from ) can be extended to a function in P . For any Q 2 P X , pick an extension Q 2 P . De…ne a pooling function F 0 for X by F 0 (Q 1 ; :::; Q n ) := F (Q 1 ; :::; Q n )j X for all Q 1 ; :::; Q n 2 P X . Now F induces F 0 for two reasons. First, for all P 1 ; :::; P n 2 P , F 0 (P 1 j X ; :::; P n j X ) = F (P 1 j X ; :::; P n j X )j X = F (P 1 ; :::; P n )j X , where the second '='holds as F is independent on X. Second, P X = fP j X : P 2 P g, where ' 'is trivial and ' 'holds because each P 2 P X equals P j X .
Proof of parts (b) of Theorems 1-6.
By their construction, the numbers p 1 ; :::; p 4 given by ( 10)-( 12) satisfy condition (b) and equation p 1 + ::: + p 4 = 1. To complete the proof of conditions (a)-(b), it remains to show that p 1 ; :::; p 4 0. We do this by proving two claims.
Claim 1. $p_4 \geq 0$, i.e., $\frac{t_{12} + t_{13} + t_{23}}{2} \leq 1$.
We have to prove that $T(q_{12}) + T(q_{13}) + T(q_{23}) \leq 2$. Note that $q_{12} + q_{13} + q_{23} = q_1 + q_2 + q_1 + q_3 + q_2 + q_3 = 2(q_1 + q_2 + q_3) \leq 2$.
We distinguish three cases.
Case 1: $q_{12}$, $q_{13}$, $q_{23}$ are all at least 1/2. Then, by (i)-(iii), $T(q_{12}) + T(q_{13}) + T(q_{23}) \leq q_{12} + q_{13} + q_{23} \leq 2$, as desired.
Case 2: At least two of $q_{12}$, $q_{13}$, $q_{23}$ are below 1/2. Then, again using (i)-(iii), $T(q_{12}) + T(q_{13}) + T(q_{23}) < 1/2 + 1/2 + 1 = 2$, as desired.
Case 3: Exactly one of $q_{12}$, $q_{13}$, $q_{23}$ is below 1/2. Suppose $q_{12} < 1/2 \leq q_{13} \leq q_{23}$ (otherwise just switch the roles of $q_{12}$, $q_{13}$, $q_{23}$). For all $\epsilon \geq 0$ such that $q_{23} + \epsilon \leq 1$, the properties (i)-(iii) of T imply that $T(q_{13}) + T(q_{23}) \leq T(q_{13} - \epsilon) + T(q_{23} + \epsilon)$. (13)
Since the graphical intuition for (13) is clear, let us only give an informal proof, stressing visualisation. Dividing by 2, we have to show that the average value $a_1 := \frac{1}{2}T(q_{13}) + \frac{1}{2}T(q_{23})$ is at most the average value $a_2 := \frac{1}{2}T(q_{13} - \epsilon) + \frac{1}{2}T(q_{23} + \epsilon)$.
One might wonder why the pooling function constructed in this proof violates conditional consensus compatibility. (It must do so, because otherwise pooling would be linear -hence neutral -by Theorem 5(a).) Let and X be as in the proof, and consider a pro…le with complete unanimity: all individuals i assign probability 0 to ! 1 , 1/4 to ! 2 , 1/4 to ! 3 , and 1/2 to ! 4 . As f! 1 g is the di¤erence of two events in X (e.g. f! 1 ; ! 2 gnf! 2 ; ! 3 g), implication preservation (which is equivalent to conditional consensus compatibility) would require ! 1 's collective probability to be 0 as well. But ! 1 's collective probability is (in the notation of the proof) given by p 1 = t 12 + t 13 t 23 2 = T (q 12 ) + T (q 13 ) T (q 23 ) 2 .
Here, q kl is the collective probability of f! k ; ! l g under a linear pooling function, so that q kl is the probability which each individual assigns to f! k ; ! l g. So
$p_1 = \frac{T(1/4) + T(1/4) - T(1/2)}{2} = T(1/4) - \frac{T(1/2)}{2}$,
which is strictly positive as T is strictly concave on $[0, 1/2]$ with $T(0) = 0$.
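A worked instance may help: with the example transformation $T(x) = 4(x - 1/2)^3 + 1/2$ offered in the proof, $T(1/4) = 4(-1/4)^3 + 1/2 = 7/16$ and $T(1/2) = 1/2$, so
$$p_1 = \frac{2\,T(1/4) - T(1/2)}{2} = \frac{7/8 - 1/2}{2} = \frac{3}{16} > 0,$$
which makes the violation explicit: the world $\omega_1$, unanimously assigned probability 0, receives positive collective probability.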
Lemma 4 If $D : [0,1]^n \to [0,1]$ is the local pooling criterion of a neutral and consensus-compatible pooling function for a non-nested agenda X with $|X \setminus \{\Omega, \emptyset\}| > 4$, then $D(x) + D(y) + D(z) = 1$ for all $x, y, z \in [0,1]^n$ with $x + y + z = 1$. (2)
Lemma 5 If a function $D : [0,1]^n \to [0,1]$ with $D(0) = 0$ satisfies (2), then it takes the linear form $D(x_1, \ldots, x_n) = \sum_{i=1}^n w_i x_i$ for some non-negative weights $w_1, \ldots, w_n$ with sum 1.
(c) Let A; B 2 Xnf?; g. If A B, then A B due to a direct connection, because A; B are neither exclusive (as A \ B = A 6 = ?) nor exhaustive (as A [ B = B 6 = ).
Claim 2 .
2 Each class contains exactly one member of any pair A; A c 2 Xnf?; g. Proof of Claim 1. For a contradiction, let A; B; C 2 Xnf?; g be pairwise not ( -)equivalent. By A 6 B, either A \ B = ? or A [ B = . We may assume the former case, because in the latter case we may consider A c ; B c ; C c instead of A; B; C. (Note that A c ; B c ; C c are again pairwise non-equivalent by (b) and A c \ B c = (A [ B) c = c = ?.) Now, since A \ B = ?, we have B A c , whence A c B by (c). By A 6 C, there are two cases: either A \ C = ?, which implies C A c , whence C A c by (c), so that C B (as A c B and is transitive by (a)), a contradiction; or A [ C = , which implies A c C, whence A c C by (c), so that again we derive the contradiction C B, which completes the proof of Claim 1.
(a) X is non-nested with jXnf ; ?gj > 4; (b) X has a (consistent or inconsistent) subset Y with jY j 3 such that (Y nfAg) [ fA c g is consistent for each A 2 Y ; (c) X has a (consistent or inconsistent) subset Y with jY j = 3 such that (Y nfAg) [ fA c g is consistent for each A 2 Y . Lemma 11 If D : [0; 1] n ! [0; 1] is the local pooling criterion of a neutral pooling function for an agenda X (6 = f ; ?g), then (a) D(x) + D(1 x) = 1 for all x 2 [0; 1] n , (b) D(0) = 0 and D(1) = 1, provided the pooling function is consensus preserving.
Claim 2 .
2 D(x) + D(y) + D(z) = 1. Consider an extension P 2 P (X) of P P 1 ;:::;Pn of the kind in Claim 1. As P (A [ B [ C ) = 1, and as the intersection of A c with A [ B [ C is A , P (A c ) = P (A ):(4)Since A c 2 X, we further have P (A c ) = P P 1 ;:::;Pn (A c ) = D(P 1 (A c ); :::; P n (A c )), whereP i (A c ) = P i (A c ) = x i for each individual i. So, P (A c ) = D(x).This and (4) imply that P (A ) = D(x). Analogously, P (B ) = D(y) and P (C ) = D(z). So, Claim 2 follows from Claim 1. Proof of Lemma 5. Consider any D : [0; 1 n ] ! [0; 1] such that D(0) = 0 and D(x) + D(y) + D(z) = 1 for all x; y; z 2 [0; 1] n with x + y + z = 1: (5) We have D(1) = 1 (since D(1) + D(0) + D(0) = 1 where D(0) = 0) and D(x) + D(1 x) = 1 for all x 2 [0; 1] (6) (since D(x) + D(1 x) + D(0) = 1 where D(0) = 0). Using (5) and then (6), for all x; y 2 [0; 1] n with x + y 2 [0; 1] n , 1 = D(x) + D(y) + D(1 x y) = D(x) + D(y) + 1 D(x + y). So, D(x + y) = D(x) + D(y) for all x; y 2 [0; 1] n with x + y 2 [0; 1] n :
Claim 4 .
4 P (A )+P (B )+P (C ) = P (A [B [C ) = 1 for some extension P 2 P (X) of P . Consider an extension P 2 P (X) of P of the kind in Claim 3. The …rst identity follows from the pairwise disjointness of A ; B ; C . Regarding the second identity, note that A [ B [ C is the intersection of the events \ E2Y nfA;B;Cg E and (A c \ B \ C) [ (A \ B c \ C) [ (A \ B \ C c ), each of which has P -probability of one by Claims 1 and 3. So P (A [ B [ C ) = 1. Claim 5. D(x) + D(y) + D(z) = 1. Consider an extension P 2 P (X) of P of the kind in Claim 4. As P (A [ B [ C ) = 1 by Claim 4, and as the intersection of A c with A [ B [ C is A , P (A c ) = P (A ):(9)Since A c 2 X, we also have P (A c ) = P P 1 ;:::;Pn (A c ) = D(P 1 (A c ); :::; P n (A c )),where P i (A c ) = P i (A c ) = x i for all individuals i. So P (A c ) = D(x). This and (9) imply that P (A ) = D(x). Similarly, P (B ) = D(y) and P (C ) = D(z). So Claim 5 follows from Claim 4.
for each E 2 Y nfA; B; Cg, we have P (E) = 0 by consensus preservation (as P i (E) = 0 for all i). So P (A) + P (B) + P (C) = 1. Hence D(x) + D(y) + D(z) = 1 because P (A) = D(P 1 (A); :::; P n (A)) = D(x); P (A) = D(P 1 (B); :::; P n (B)) = D(y); P (A) = D(P 1 (C); :::; P n (C)) = D(z).
First, Theorems 2(b) and 6(b) follow directly from Theorems 1(b) and 3(b), respectively, since consensus compatibility implies conditional consensus compatibility (by Proposition 1) and as non-neutrality implies non-linearity. Second, we derive Theorems 1(b), 3(b) and 5(b) from the corresponding results in the companion paper, namely Theorems 1(b), 3(b), and 5(b), respectively. The matrix of our three-equation system into triangular form:
An agenda X (with jXnf ; ?gj > 4) is non-nested if and only if it has at least one subset Y with jY j 3 such that (Y nfAg) [ fA c g is consistent for each A 2 Y . (b) An agenda X (with jXnf ; ?gj > 4) is non-simple if and only if it has at least one inconsistent subset Y (of countable size) with jY j 3 such that
Recalling that $p_4 = 1 - (p_1 + p_2 + p_3)$, we also have $p_4 = 1 - \frac{t_{12} + t_{13} + t_{23}}{2}$.
$$\left(\begin{array}{ccc|c} 1 & 1 & 0 & t_{12}\\ 1 & 0 & 1 & t_{13}\\ 0 & 1 & 1 & t_{23} \end{array}\right) \rightarrow \left(\begin{array}{ccc|c} 1 & 1 & 0 & t_{12}\\ 0 & -1 & 1 & t_{13} - t_{12}\\ 0 & 1 & 1 & t_{23} \end{array}\right) \rightarrow \left(\begin{array}{ccc|c} 1 & 1 & 0 & t_{12}\\ 0 & -1 & 1 & t_{13} - t_{12}\\ 0 & 0 & 2 & t_{23} + t_{13} - t_{12} \end{array}\right)$$
The system therefore has the following solution:
$p_3 = \frac{t_{23} + t_{13} - t_{12}}{2}$ (10)
$p_2 = t_{12} - t_{13} + \frac{t_{23} + t_{13} - t_{12}}{2} = \frac{t_{12} + t_{23} - t_{13}}{2}$ (11)
$p_1 = t_{12} - \frac{t_{12} + t_{23} - t_{13}}{2} = \frac{t_{12} + t_{13} - t_{23}}{2}$ (12)
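Substituting back confirms that (10)-(12), together with $p_4 = 1 - \frac{t_{12} + t_{13} + t_{23}}{2}$, indeed solve the system:
$$p_1 + p_2 = \frac{(t_{12} + t_{13} - t_{23}) + (t_{12} + t_{23} - t_{13})}{2} = t_{12}, \qquad p_1 + p_3 = t_{13}, \qquad p_2 + p_3 = t_{23},$$
and $p_1 + p_2 + p_3 = \frac{t_{12} + t_{13} + t_{23}}{2}$, so $p_1 + \cdots + p_4 = 1$.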
This assumes that the -algebra contains more than four events.
Note that A ! B ('if A then B') is best interpreted as a non-material conditional, since its negation, unlike that of a material conditional, is consistent with the negation of its antecedent, A (i.e., A c \ (A ! B) c 6 = ?). (A material conditional is always true when its antecedent is false.) The only assignment of truth-values to the events A, A ! B, and B that is ruled out is (1; 1; 0). If we wanted to re-interpret ! as a material conditional, we would have to rule out in addition the truth-value assignments (0; 0; 0), (0; 0; 1), and (1; 0; 1), which would make little sense in the present example. The event A ! B would become A c [ B (= (A \ B c ) c ), and the agenda would no longer be free from conjunctions or disjunctions. However, the agenda would still not be a -algebra. For a discussion of non-material conditionals, see, e.g.,[START_REF] Priest | An Introduction to Non-classical Logic[END_REF].
Whenever X contains A and B, then(X) contains A [ B, (A [ B) c , (A [ B) c [ B,and so on. In some cases, all events may be constructible from events in X, so that (X) = 2 .
For instance, if X contains k = 2 logically independent events, say A and B, then X includes a partition A of into 2 k = 4 non-empty events, namely A = fA \ B; A \ B c ; A c \ B; A c \ B c g, and hence X includes the set f[ C2C C : C Ag containing 2 2 k = 16 events.
When X is a -algebra,[START_REF] Mcconway | Marginalization and Linear Opinion Pools[END_REF] shows that independence (his weak setwise function property) is equivalent to the marginalization property, which requires aggregation to commute with the operation of reducing the -algebra to some sub--algebra X. A similar result holds for general agendas
X.7 When the agenda is a -algebra, independence con ‡icts with the preservation of unanimously held judgments of probabilistic independence, assuming non-dictatorial aggregation[START_REF] Genest | Further Evidence against Independence Preservation in Expert Judgement Synthesis[END_REF][START_REF] Bradley | Aggregating Causal Judgments[END_REF]. Whether this objection also applies in the case of general agendas depends on the precise nature of the agenda. Another objection is that independence is not generally compatible with external Bayesianity, the requirement that aggregation commute with Bayesian updating of probabilities in light of new information.
An interesting fourth variant is the requirement obtained by combining the antecedent of implicit consensus preservation with the conclusion of consensus compatibility. This condition weakens both implicit consensus preservation and consensus compatibility, while still strengthening the initial consensus preservation requirement.
As a consequence, full neutrality follows even for nested agendas if independence is slightly strengthened by requiring that D A = D A c for some A 2 Xnf?; g.
A generalized de…nition of partitionality is possible in Theorem 6: we could de…ne X to be partitional if there are …nite or countably in…nite subsets Y; Z X such that the set fA \ C : A 2 Y g, with C = \ B2Z B, partitions C into at least three non-empty events. This de…nition generalizes the one in the main text, because if we take Z = ?, then C becomes (= \ B2? B) and Y simply partitions . But since we do not know whether this generalized de…nition renders partitionality logically minimal in Theorem 6, we use the simpler de…nition in the main text.
In this proposition, we assume that the underlying set of worlds satis…es j j 4.
To see that X K is non-simple if jKj > 2, choose three distinct alternatives x; y; z 2 K and note that the three events x y; y z; and z x in X K are mutually inconsistent, but any pair of them is consistent, so that they form a minimal inconsistent subset of X K .
A model in which individuals and the collective specify probabilities of selecting each of the alternatives in K (as opposed to probability assignments over events of the form 'x is ranked above y') has been studied, for instance, by[START_REF] Intriligator | A Probabilistic Model of Social Choice[END_REF], who has characterized a version of linear averaging in it. Similarly, a model in which individuals have vague or fuzzy preferences has been studied, for instance, by[START_REF] Billot | Aggregation of preferences: The fuzzy case[END_REF] and more recently by Piggins and Perote-Peña (2007) (see also[START_REF] Sanver | Sophisticated Preference Aggregation[END_REF].
Formally, a binary opinion function is a function f : X ! f0; 1g that is extendible to a probability function on (X), or equivalently, to a truth-function on (X) (i.e., a f0; 1g-valued function on (X) that is logically consistent).
Speci…cally, a binary opinion function f : X ! f0; 1g corresponds to the consistent and complete judgment set fA 2 X : f (A) = 1g.
In the binary case in part (a), partionality can be weakened to even-number negatability or non-a¢ neness. SeeDietrich and List (2007a) and[START_REF] Dokow | Aggregation of binary evaluations[END_REF].
Of course, one could also state uni…ed versions of Theorems 1 to 3 on neutral opinion pooling, by combining these theorems with existing results on binary judgment aggregation. We would simply need to replace the probabilistic opinion pooling function F : P n X ! P X with a binary or probabilistic such function.
This uses Lemma 11(b) below, where consensus preservation holds by consensus compatibility.
This uses Lemma 11(b), where consensus preservation holds by conditional consensus compatibility.
In these cases, each opinion function in P X is extendable not just to a probability function on (X), but also to one on . In general, extensions beyond (X) may not always be possible,
pooling on general agendas'(September 2007). Dietrich was supported by a Ludwig Lachmann Fellowship at the LSE and the French Agence Nationale de la Recherche (ANR-12-INEG-0006-01). List was supported by a Leverhulme Major Research Fellowship (MRF-2012-100) and a Harsanyi Fellowship at the Australian National University, Canberra. 1
axiomatic requirements on the induced pooling function F -i.e., independence and the various consensus requirements -can be related to the following requirements on the inducing pooling function F for the agenda (introduced and discussed in the companion paper): Independence on X. For each A in subagenda X, there exists a function D A : [0; 1] n ! [0; 1] (the local pooling criterion for A) such that, for all P 1 ; :::; P n 2 P , P P 1 ;:::;Pn (A) = D A (P 1 (A); :::; P n (A)).
Consensus preservation. For all A 2 and all P 1 ; :::; P n 2 P , if P i (A) = 1 for all individuals i then P P 1 ;:::;Pn (A) = 1.
Consensus preservation on X. For all A in subagenda X and all P 1 ; :::; P n 2 P , if P i (A) = 1 for all individuals i then P P 1 ;:::;Pn (A) = 1.
Conditional consensus preservation on X. For all A; B in subagenda X and all P 1 ; :::; P n 2 P , if, for each individual i, P i (AjB) = 1 (provided P i (B) 6 = 0), then P P 1 ;:::;Pn (AjB) = 1 (provided P P 1 ;:::;Pn (B) 6 = 0). 25 The following lemma establishes some key relationships between the properties of the induced and the inducing pooling functions:
Lemma 12 Suppose a pooling function F for a -algebra agenda induces a pooling function F for a subagenda X (where X is …nite or (X) = ). Then:
F is independent (respectively, neutral, linear) if and only if F is independent (respectively, neutral, linear) on X; F is consensus-preserving if and only if F is consensus-preserving on X;
This lemma follows from a more general result on the correspondence between opinion pooling on general agendas and on -algebra agendas. 26 as is well-known from measure theory. For instance, if = R, X consists of all intervals or complements thereof, and = 2 R , then (X) contains the Borel-measurable subsets of R, and it is well-known that measures on (X) may not be extendable to = 2 R (a fact related to the Banach-Tarski paradox). 25 If one compares this requirement with that of conditional consensus compatibility for a general agenda X, one might wonder why the new requirement involves only a single conditional certainty (i.e., that of A given B), whereas the earlier requirement involves an entire set of conditional certainties (which must be respected simultaneously). The key point is that if each P i is a probability function on , then the simpli…ed requirement as stated here implies the more complicated requirement from the main text.
26 More precisely, Lemma 12 is a corollary of a slightly generalized statement of Lemma 13, in P 1 ; :::; P n 2 P such that P 1 (A) = = P n (A) = 1, we have F (P 1 ; :::; P n )(A) = F (P 1 j X ; :::; P n j X )(A) as F induces F = 1 as F is consensus preserving.
(d) First, let F be consensus-compatible and X …nite. We de…ne F as follows. For any P 1 ; :::; P n 2 P , consider the event A in which is smallest subject to having probability one under each P i . This event exists and is constructible as A = \ A2 (X):P 1 (A)= =P n (A)=1 A, drawing on …niteness of = (X) and the fact that intersections of …nitely many events of probability one have probability one. Clearly, A is the union of the supports of the functions P i . We de…ne F (P 1 ; :::; P n ) as any extension in P of F (P 1 j X ; ::::; P n j X ) assigning probability one to A . Such an extension exists because F is consensuscompatible and each P i j X is extendable to a probability function (namely P i ) assigning probability one to A . Clearly, F induces F . It also is consensuspreserving: for all P 1 ; :::; P n 2 P and A 2 , if P 1 (A) = = P n (A) = 1, then A includes the above-constructed event A , whence F (P 1 ; :::; P n )(A) = 1 as F (P 1 ; :::
Conversely, let some inducing pooling function F be consensus-preserving. To see why F is consensus-compatible, consider P 1 ; :::; P n 2 P X and A 2 such that each P i has an extension P i 2 P for which P i (A) = 1. We show that some extension P 2 P of F (P 1 ; :::; P n ) satis…es P (A) = 1. Simply let P be F (P 1 ; :::; P n ) and note that P is indeed an extension of F (P 1 ; :::; P n ) (as F induces F ) and P (A) = 1 (as F is consensus-preserving).
(e) First, let F be conditional-consensus-compatible, and let X be …nite. We de…ne F as follows. For a pro…le (P 1 ; :::; P n ) 2 P n , consider the (…nite) set S of pairs (A; B) in X such that P i (AjB) = 1 for each i with P i (B) 6 = 0 (equivalently, such that P i (BnA) = 0 for each i). Since F is conditional-consensus-compatible (and since in the last sentence we can replace each 'P i 'with 'P i j X '), there is an extension P 2 P of F (P 1 j X ; :::; P n j X ) such that P (AjB) = 1 for all (A; B) 2 S for which P (B) 6 = 0. Let F (P 1 ; :::; P n ) := P . Clearly, F induces F and is conditional-consensus-preserving on X.
Conversely, let some inducing F be conditional-consensus-preserving on X.
To check that F is conditional-consensus-compatible, consider P 1 ; :::; P n 2 P X and a …nite set S of pairs (A; B) in X such that each P i can be extended to P i 2 P with P i (AjB) = 1 (provided P i (B) 6 = 0). We require an extension P 2 P of F (P 1 ; :::; P n ) such that P (AjB) = 1 for all (A; B) 2 S for which P (B) 6 = 0. Now P := F (P 1 ; :::; P n ) is such an extension, since F induces F and is conditional-consensus-preserving on X.
Which pooling functions for induce ones for X? Here is a su¢ cient condition: derivations are similar for the three results; we thus spell out the derivation only for Theorem 1(b). Consider a nested agenda X 6 = f ; ?g. By the companion paper's Theorem 1(b) (see also the footnote to it), some pooling function F for agenda := (X) is independent on X, (globally) consensus preserving and nonneutral on X. By Lemma 14, F induces a pooling function for (sub)agenda X, which by Lemma 12 is independent, consensus-compatible, and non-neutral.
Finally, we prove Theorem 4(b) directly rather than by reduction. Consider an agenda X 6 = f?; g which is nested or satis…es jXnf?; gj 4. If X is nested, the claim follows from Theorem 1(b), since non-neutrality implies non-linearity. Now let X be non-nested and jXnf?; gj 4. We may assume without loss of generality that ?; 6 2 X (as any independent, consensus-compatible, and nonneutral pooling function for agenda X 0 = Xnf?; g induces one for agenda X). Since jXj 4, and since jXj > 2 (as X is non-nested), we have jXj = 4, say X = fA; A c ; B; B c g. By non-nestedness, A and B are logically independent, i.e., the events A \ B, A \ B c , A c \ B, and A c \ B c are all non-empty. On P n X , consider the function F : (P 1 ; ::; P n ) 7 ! T P 1 , where T (p) is 1 if p = 1, 0 if p = 0, and 1 2 if p 2 (0; 1). We complete the proof by establishing that (i) F maps into P X , i.e., is a proper pooling function, (ii) F is consensus-compatible, (iii) F is independent, and (iv) F is non-linear. Claims (iii) and (iv) hold trivially.
Proof of (i): Let P 1 ; :::; P n 2 P X and P := F (P 1 ; :::; P n ) = T P 1 . We need to extend P to a probability function on (X). For each atom C of (X) (i.e., each C 2 fA \ B; A \ B c ; A c \ B; A c \ B c g), let P C be the unique probability function on (X) assigning probability one to C. We distinguish between three (exhaustive) cases.
Case 1 : P 1 (E) = 1 for two events E in X. Without loss of generality, let P 1 (A) = P 1 (B) = 1, and hence, P 1 (A c ) = P 1 (B c ) = 0. It follows that P (A) = P (B) = 1 and P (A c ) = P (B c ) = 0. So P extends (in fact, uniquely) to a probability function on (X), namely to P A\B .
Case 2 : P 1 (E) = 1 for exactly one event E in X. Without loss of generality, assume P 1 (A) = 1 (hence, P 1 (A c ) = 0) and P 1 (B); P 1 (B c ) 2 (0; 1). Hence, P (A) = 1, P (A c ) = 0 and P (B) = P (B c ) = 1 2 . So P extends (again uniquely) to a probability function on (X), namely to 1 2 P A\B + 1 2 P A\B c . Case 3 : P 1 (E) = 1 for no event E in X. Then P 1 (A); P 1 (A c ); P 1 (B); P 1 (B c ) 2 (0; 1), and so P (A) = P (A c ) = P (B) = P (B c ) = 1 2 . Hence, P extends (nonuniquely) to a probability function on (X), e.g., to
. Proof of (ii): Let P 1 ; :::; P n 2 P X and consider any C 2 (X) such that each P i extends to some P i 2 P (X) such that P i (C) = 1. (It only matters that P 1 has such an extension, given the de…nition of F .) We have to show that P := F (P 1 ; :::; P n ) = T P 1 is extendable to a P 2 P (X) such that P (C) = 1. We verify the claim in each of the three cases considered in the proof of (i). In Cases 1 and 2, the claim holds because the (unique) extension P 2 P (X) of P has the same support as P 1 . (In fact, in Case 1 P = P 1 .) In Case 3, C must intersect with each event in X (otherwise some event in X would have zero probability under P 1 , in contradiction with Case 3) and include more than one of the atoms A \ B, A \ B c , A c \ B, and A c \ B c (again by Case 3). As is easily checked, C (A\B)[(A c \B c ) or C (A\B c )[(A c \B). So, to ensure that the extension P or P satis…es P (C) = 1, it su¢ ces to specify P as 1 2 P A\B + 1 2 P A c \B c in the …rst case, and as 1 2 P A\B c + 1 2 P A c \B in the second case.
A.7 Proof of Propositions 1 and 2
Proof of Proposition 1. Consider an opinion pooling function for an agenda X.
We …rst prove part (b), by showing that conditional consensus compatibility is equivalent to the restriction of consensus compatibility to events A expressible as ([ (C;D)2S (CnD)) c for …nite S X X. This fact follows from the equivalence of conditional consensus compatibility and implication preservation (Proposition 3) and the observation that, for any such set S, an opinion function is consistent with zero probability of all CnD with (C; D) 2 S if and only if it is consistent with zero probability of [ (C;D)2S (CnD), i.e., probability one of ([ (C;D)2S (CnD)) c . We now prove part (a) The claims made about implicit consensus preservation and consensus compatibility have already been proved (informally) in the main text. It remains to show that conditional consensus compatibility implies consensus preservation and is equivalent to it if X = (X). As just shown, conditional consensus compatibility is equivalent to the restriction of consensus compatibility to events A of the form ([ (C;D)2S (CnD)) c for some …nite set S X X. Note that, for any A 2 X, we may de…ne S as f(A c ; A)g, so that ([ (C;D)2S (CnD)) c = (A c nA) c = A. So, conditional consensus compatibility implies consensus preservation and is equivalent to it if X = (X).
Proof of Proposition 2. Assume j j 4. We can thus partition into four nonempty events and let X consist of any union of two of these four events. The set X is indeed an agenda since A 2 X , A c 2 X. Since nothing depends on the sizes of the four events, we assume without loss of generality that they are singleton, i.e., that = f! 1 ; ! 2 ; ! 3 ; ! 4 g and X = fA : jAj = 2g.
Step 1. We here show that X is path-connected and non-partitional. Nonpartitionality is trivial. To establish path-connectedness, we consider events A; B 2 X and must construct a path of conditional entailments from A to B. This is done by distinguishing between three cases.
Case 1 : A = B. Then the path is trivial, since A ` A (take Y = ?).
Case 2 : A and B have exactly one world in common. Call it !, and let ! 0 be the unique world in n(A [ B). Then A ` B in virtue of Y = ff!; ! 0 gg. Case 3 : A and B have no world in common. We may then write
Step 2. We now construct a pooling function (P 1 ; :::; P n ) 7 ! P P 1 ;:::;Pn that is independent (in fact, neutral), consensus-preserving, and non-linear. As an ingredient of the construction, consider …rst a linear pooling function L : P n X ! P X (for instance the dictatorial one given by (P 1 ; :::; P n ) 7 ! P 1 ). We shall transform L into a non-linear pooling function that is still neutral and consensus-preserving. First, …x a transformation T : [0; 1] ! [0; 1] such that:
(Such a T exists; e.g., $T(x) = 4(x - 1/2)^3 + 1/2$ for all $x \in [0,1]$.) Now, for any $P_1, \ldots, P_n \in P_X$ and $A \in X$, let $P_{P_1,\ldots,P_n}(A) := T(L(P_1, \ldots, P_n)(A))$. We must prove that, for any $P_1, \ldots, P_n \in P_X$, the function $P_{P_1,\ldots,P_n}$, as just defined, can indeed be extended to a probability function on $\sigma(X) = 2^\Omega$. This completes the proof, as it establishes that we have defined a proper pooling function and this pooling function is neutral (since L is neutral), consensus-preserving (since L is consensus-preserving and $T(1) = 1$), and non-linear (since L is linear and T a non-linear transformation).
To show that P P 1 ;:::;Pn can be extended to a probability function on (X) = 2 , we consider any probability function Q on 2 and show that T Qj X extends to a probability function on 2 (which completes our task, since Qj X could be L(P 1 ; :::; P n ) for P 1 ; :::; P n 2 P X ). It su¢ ces to prove that there exist real numbers p k = p Q k , k = 1; 2; 3; 4, such that the function on 2 assigning p k to each f! k g is a probability function and extends T Qj X , i.e., such that (a) p 1 ; p 2 ; p 3 ; p 4 0 and
For all k 2 f1; 2; 3; 4g, let q k := Q(f! k g); and for all k; l 2 f1; 2; 3; 4g with k < l, let q kl := Q(f! k ; ! l g). In order for p 1 ; :::; p 4 to satisfy (b), they must satisfy the system p k + p l = T (q kl ) for all k; l 2 f1; 2; 3; 4g with k < l.
Given p 1 + p 2 + p 3 + p 4 = 1, three of these six equations are redundant. Indeed, consider k; l 2 f1; 2; 3; 4g, k < l, and de…ne k 0 ; l 0 2 f1; 2; 3; 4g, k 0 < l 0 , by fk 0 ; l 0 g = f1; 2; 3; 4gnfk; lg. As p k + p l = 1 p k 0 p l 0 and T (q kl ) = T (1 q k 0 l 0 ) = 1 T (q k 0 l 0 ), the equation p k + p l = T (q kl ) is equivalent to p k 0 + p l 0 = T (q k 0 l 0 ). So (b) reduces (given p 1 + p 2 + p 3 + p 4 = 1) to the system p 1 + p 2 = T (q 12 ), p 1 + p 3 = T (q 13 ), p 2 + p 3 = T (q 23 ). This is a system of three linear equations in three variables p 1 ; p 2 ; p 3 2 R. To solve it, let t kl := T (q kl ) for all k; l 2 f1; 2; 3; 4g, k < l. We …rst bring the coe¢ cient Let SL be the straight line segment in R 2 joining the points (q 13 ; T (q 13
)) and (q 23 + ; T (q 23 + )), and let SL be the straight line segment joining the points (q 13 ; T (q 13 )) and (q 23 ; T (q 23 )). Since a 1 and a 2 are, respectively, the second coordinates of the points on SL and SL with the …rst coordinate 1 2 q 13 + 1 2 q 23 , it su¢ ces to show that SL is 'below'SL. This follows once we prove that T 's graph is 'below'SL (as T is convex on [1=2; 1] and SL joins two points on T 's graph on [1=2; 1]). If q 13 1=2, this is trivial by T 's convexity on [1=2; 1]. Now let q 13 < 1=2. Let SL 0 be the straight line segments joining the points (q 13 ; T (q 13 )) and (1 (q 13 ); T (1 (q 13 ))), and let SL 00 be the straight line segment joining the points (1 (q 13
); T (1 (q 13 ))) and (q 23 + ; T (q 23 + )). Check using T 's properties that LS 0 passes through the point (1=2; 1=2). This implies that (*) T 's graph is 'below'SL 0 on [1=2; 1], and that (**) SL 00 is steeper than SL 0 (by T 's convexity on [1=2; 1]). Also, (***) T 's graph is 'below' SL 00 (again by T 's convexity on [1=2; 1]). In sum, on [1=2; 1], T 's graph is (by (*) and (***)) 'below'both SL 0 and SL 00 which are both 'below'SL by (**). So, still on [1=2; 1], T 's graph is 'below'SL. This proves (13). Applying (13) with = 1 q 23 , we obtain T (q 13 ) + T (q 23 ) T (q 13 (1 + q 23 )) + T (1):
On the right side, T (1) = 1 and (as q 13 (1 + q 23 ) 1 q 12 and as T is increasing) T (q 13 (1 + q 23 )) T (1 q 12 ) = 1 T (q 12 ). So T (q 13 ) + T (q 23 ) 1 + 1 T (q 12 ), i.e., T (q 12 ) + T (q 13 ) + T (q 23 ) 2, as claimed.
Claim 2. p k 0 for all k = 1; 2; 3.
We only show that $p_1 \geq 0$, as the proofs for $p_2$ and $p_3$ are analogous. We have to prove that $t_{13} + t_{23} - t_{12} \geq 0$, i.e., that $T(q_{13}) + T(q_{23}) \geq T(q_{12})$, or equivalently, that $T(q_1 + q_3) + T(q_2 + q_3) \geq T(q_1 + q_2)$. As T is increasing, it suffices to establish that $T(q_1) + T(q_2) \geq T(q_1 + q_2)$. We again consider three cases.
Case 1: $q_1 + q_2 \leq 1/2$. Suppose $q_1 \leq q_2$ (otherwise swap the roles of $q_1$ and $q_2$). For all $\epsilon \geq 0$ such that $q_1 - \epsilon \geq 0$, we have $T(q_1) + T(q_2) \geq T(q_1 - \epsilon) + T(q_2 + \epsilon)$, as T is concave on $[0, 1/2]$ and $0 \leq q_1 - \epsilon \leq q_1 \leq q_2 \leq q_2 + \epsilon \leq 1/2$. So, for $\epsilon = q_1$, $T(q_1) + T(q_2) \geq T(0) + T(q_2 + q_1) = T(q_1 + q_2)$.
Case 2: $q_1 + q_2 > 1/2$ but $q_1, q_2 \leq 1/2$. By (i)-(iii), $T(q_1) + T(q_2) \geq q_1 + q_2 \geq T(q_1 + q_2)$.
Case 3: $q_1 > 1/2$ or $q_2 > 1/2$. Suppose $q_2 > 1/2$ (otherwise swap $q_1$ and $q_2$ in the proof). Then $q_1 < 1/2$, since otherwise $q_1 + q_2 > 1$. Let $y := 1 - q_1 - q_2$. As $y < 1/2$, an argument analogous to that in Case 1 yields $T(q_1) + T(y) \geq T(q_1 + y)$, i.e., $T(q_1) + T(1 - q_1 - q_2) \geq T(1 - q_2)$. So, by (i), $T(q_1) + 1 - T(q_1 + q_2) \geq 1 - T(q_2)$, i.e., $T(q_1) + T(q_2) \geq T(q_1 + q_2)$.
"6630"
] | [
"15080",
"301309",
"328453"
] |
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485803/file/978-3-642-41329-2_10_Chapter.pdf | Karl Hribernik
Thorsten Wuest
Klaus-Dieter Thoben
Towards Product Avatars Representing Middle-of-Life Information for Improving Design, Development and Manufacturing Processes
Keywords: PLM, Product Avatar, BOL, Intelligent Products, Digital Representation, Information, Data 1
In today's globalized world, customers increasingly expect physical products and related information of the highest quality. New developments bring the entire product lifecycle into focus. Accordingly, an emphasis must be placed upon the need to actively manage and share product lifecycle information. The so-called Product Avatar represents an interesting approach to administrate the communication between intelligent products and their stakeholders along the product lifecycle. After its initial introduction as a technical concept, the product avatar now revolves around the idea individualized digital counterparts as targeted digital representations of products enabling stakeholders to benefit from value-added services built on product lifecycle information generated and shared by Intelligent Products. In this paper, first the concept of using a Product Avatar representation of product lifecycle information to improve the first phases, namely design, development and manufacturing will be elaborated on. This will be followed by a real life example of a leisure boat manufacturer incorporating these principles to make the theoretical concept more feasible.
INTRODUCTION
In today's globalized world, customers increasingly expect physical products and related information of the highest quality. New developments bring the entire product lifecycle into focus, such as an increased sensibility regarding sustainability. Accordingly, an emphasis must be placed upon the need to actively manage and share product lifecycle information.
The so-called Product Avatar [START_REF] Hribernik | The product avatar as a product-instance-centric information management concept[END_REF]] represents an interesting approach to administrate the communication between intelligent products and their stakeholders. After its initial introduction as a technical concept, the Product Avatar now revolves around the idea individualized digital counterparts as targeted digital representations of products enabling stakeholders to benefit from value-added services built on product lifecycle information generated and shared by Intelligent Products [START_REF] Wuest | Can a Product Have a Facebook? A New Perspective on Product Avatars in Product Lifecycle Management[END_REF].
During the middle of life phase (MOL) of a product a broad variety of data and consequently information can be generated, communicated and stored. The ready availability of this item-level information creates potential benefits for processes throughout the product lifecycle. Specifically in the beginning-of-life phase (BOL) of the product lifecycle, opportunities are created to continuously improve future product generations by using item-level MOL information in design, development and manufacturing processes. However, in order to make use of the information, its selection and presentation has to be individualized, customized and presented according to the stakeholders' requirements. For example, in the case of design processes, this means taking the needs of design engineers into account, and during manufacturing, the production planner.
In this paper, first the concept of using a product avatar representation of product lifecycle information to improve the first phases, namely design, development and manufacturing will be elaborated on. This will be followed by a real life example of a leisure boat manufacturer incorporating these principles to make the theoretical concept more feasible.
PRODUCT LIFECYCLE MANAGEMENT AND INTELLIGENT PRODUCTS
The theoretical foundation for the Product Avatar concept is on the one hand Product Lifecycle Management. This can be seen as the overarching data and information source from which the product Avatar retrieves the bits and pieces according to the individual needs of a stakeholder. On the other hand, this depends on Intelligent Products being able to gather and communicate data and information during the different lifecycle phases. In the following both areas are introduced as a basis for the following elaboration on the Product Avatar concept
Product Lifecycle Management
Every product has a lifecycle. Manufacturers are increasingly becoming aware of the benefits inherent in managing those lifecycles [START_REF] Sendler | Das PLM-Kompendium. Referenzbuch des Produkt-Lebenszyklus-Managements[END_REF]. Today's products are becoming increasingly complicated. For example, the amount of component parts is increasing. Simultaneously, development, manufacturing and usage cycles are accelerating [START_REF] Sendler | Das PLM-Kompendium. Referenzbuch des Produkt-Lebenszyklus-Managements[END_REF] and production is being distributed geographically. These trends highlight the need for innovative concepts for structuring and handling product related information efficiently throughout the entire lifecycle. On top that, customer demand for more customisation and variation stresses the need for a PLM at item and not merely type-level [Hribernik, Pille, Jeken, Thoben, Windt & Busse, 2010]. Common graphical representations of the product lifecycle encompass three phases beginning of life (BOL), Middle of Life (MOL) and End of Life (EOL) -arranged either in a circle or in a linear form (see Figure 1). The linear form represents the product lifecycle "from the cradle to the grave".
The social web offers a number of opportunities for item-level PLM. For example, Web 2.0-based product information acquisition could contribute to the improvement of the quality of future products [START_REF] Merali | Web 2.0 and Network Intelligence[END_REF][START_REF] Gunendran | Methods for the capture of manufacture best practice in product lifecycle management[END_REF]].
Intelligent Products
Intelligent Products are physical items, which may be transported, processed or used and which comprise the ability to act in an intelligent manner. McFarlane et al.
[McFarlane, Sarma, Chirn, Wong, Ashton, 2003] define the Intelligent Product as "...a physical and information based representation of an item [...] which possesses a unique identification, is capable of communicating effectively with its environment, can retain or store data about itself, deploys a language to display its features, production requirements, etc., and is capable of participating in or making decisions relevant to its own destiny."
The degree of intelligence an intelligent product may exhibit varies from simple data processing to complex pro-active behaviour. This is the focus of the definitions in [McFarlane, Sarma, Chirn, [START_REF] Mcfarlane | Auto ID systems and intelligent manufacturing control[END_REF]] and [START_REF] Kärkkäinen | Intelligent products -a step towards a more effective project delivery chain[END_REF]. Three dimensions of characterization of Intelligent Products are suggested by [START_REF] Meyer | Intelligent Products: A Survey[END_REF]]: Level of Intelligence, Location of Intelligence and Aggregation Level of Intelligence. The first dimension describes whether the Intelligent Product exhibits information handling, problem notification or decision making capabilities. The sec-ond shows whether the intelligence is built into the object, or whether it is located in the network. Finally, the aggregation level describes whether the item itself is intelligent or whether intelligence is aggregated at container level. Intelligent Products have been shown to be applicable to various scenarios and business models. For instance, Kärkkäinen et al. describe the application of the concept to supply network information management problems [START_REF] Kärkkäinen | Intelligent products -a step towards a more effective project delivery chain[END_REF]. Other examples are the application of the Intelligent Products to supply chain [START_REF] Ventä | Intelligent and Systems[END_REF], manufacturing control [McFarlane, Sarma, Chirn, Wong, Ashton, 2003], and production, distribution, and warehouse management logistics [Wong, McFarlane, Zaharudin, Agrawal, 2009]. A comprehensive overview of fields of application for Intelligent Products can be found in survey paper by Meyer et al [START_REF] Meyer | Intelligent Products: A Survey[END_REF].
Thus, an Intelligent Product is more than just the physical productit also includes the enabling information infrastructure. Up to now, Intelligent Products are not "socially intelligent" [START_REF] Erickson | Social systems: designing digital systems that support social intelligence[END_REF] in that they could create their own infrastructure to communicate with human users over or store information in. However, Intelligent Products could make use of available advanced information infrastructures designed by socially intelligent users, consequently enhancing the quality of information and accessibility for humans who interact with them.
PRODUCT AVATAR
One approach to representing the complex information flows connected to item-level PLM of an intelligent product is the Product Avatar. This concept describes a digital counterpart of the physical Intelligent Product which exposes functionality and information to stakeholders of the product's lifecycle via a user interface [
Concept behind the Product Avatar
The concept of the Product Avatar describes a distributed and de-centralized approach to the management of relevant, item-level information throughout a product's lifecycle [Hribernik, Rabe, Thoben, & Schumacher, 2006]. At its core lies the idea that each product should have a digital counterpart by which it is represented towards the different stakeholders involved in its lifecycle. In the case of Intelligent Products, this may also mean the implementation of digital representations towards other Intelligent Products. Consequently, the Avatar concept deals with establishing suitable interfaces towards different types of stakeholder. For Intelligent Products, the interfaces required might be, for example services, agents or a common messaging interfaces such as QMI. For human stakeholders, such as the owner, producer or designer, these interfaces may take the shape, e.g., of dedicated desktop applications, web pages or mobile "apps" tailored to the specific information and interaction needs. This contribution deals with the latter.
Example for Product Avatar Application during the MOL Phase
In order to make the theoretical concept of a Product Avatar more feasible, a short example based on a real case will be given in this section.
The authors successfully implemented a Product Avatar application for leisure boats in the usage (MOL) phase of the lifecycle for the stakeholder group "owner" by using the channel of the popular Social Network Service (SNS) Facebook. The goal was to create additional benefits for users by providing services (e.g. automatic logbook with location based services). The rationale behind using the popular SNS Facebook was that users are already familiar with the concept and that the inherent functions of the SNS expanded by the PLM based services increase the possibilities for new services around the core product of a leisure boat (see Figure 3). The Product Avatars main function in this phase was, to provide pre-defined (information) services to users. The PLM information needed were either based on a common data base where all PLM data and information for the individual product were stored or derived through a mediating layer, e.g. the Semantic Mediator (Hribernik, Kramer, Hans, Thoben, 2010), from various available databases. Among the services implemented was a feature to share the current location of the boat including an automatic update of the weather forecast employing Google maps and Yahoo weather. Additionally, information like the current battery load or fuel level were automatically shared on the profile (adjustable by the user for data security reasons). (see Figure 4)
PRODUCT AVATAR APPLICATION IN THE BOL PHASE
In this section the practical example of a Product Avatar for a leisure boat, shortly introduced with a focus on MOL in the section before, will be described in more detail focusing on the BOL phase. First the stakeholders with an impact on the BOL phase will be presented and discussed briefly. Afterwards, some insights on MOL data capturing through sensor application and the different existing prototypes are introduced. The last sub-section will then give three examples of how MOL data can be applied during the BOL phase in a beneficial way for different stakeholders.
Stakeholders
The stakeholders having an impact on BOL processes can be clustered in two main groups: data producing (MOL) and data exploiting (BOL).
The group of data producing stakeholders during the MOL phase is fairly large and diverse. The main stakeholders with the biggest impact are: Users (owners): This stakeholder controls what data will be communicated (data security). Furthermore, they are responsible for the characteristic of the data captured through the way they use the boat. Producers: This group has an impact on updates (software), what sensors are implemented and what services available all influencing the data availability and quality. Maintenance: This group on the one hand produces relevant data themselves when repairing the boat but also ensures the operation readiness of the sensors etc.
In the BOL phase, the stakeholders are more homogenious as all have a common interest of building the boat. However, they have different needs towards possible MOL data application. There are two main groups to be identified:
OEMs: This stakeholder is responsible for the overall planning and production of the boat and later the contact towards the customer. He has the strongest interest in learning about the "real" usage of the boat based on MOL data. Suppliers: This group is mostly integrated in the planning process through the OEM. However, even so indirectly included in planning activities, MOL data can be of high value for their operations.
Depending on the Customer Order Decoupling Point, the user might also fall into this category of important stakeholders during the BOL phase. However, at this stage the user is mostly considered not to be directly involved in the product development activities and has to rely on the OEM's communication.
Capturing of MOL Data
Today's technological development, especially in the field of sensor technology, presents almost unlimited possibilities for data capturing. Of course, this is limited by common sense and economic considerations.
To capture MOL data of a leisure boat, the development included three stages of prototypes.
The first stage of the so-called Universal Marine Gateway (UMG) consisted of three sensors (humidity, pressure and temperature) connected to a processing unit (here: BeagleBone) and mounted in an aquarium. In this lab prototype (see Figure 5) first hands-on experience was gathered and the software interface with the PLM data infrastructure was tested. The next stage, the UMG Prototype Mk. II (see Figure 6), incorporated the findings of the first lab prototype on a miniature model of a boat in order to learn about the effects of a mobile application and wireless communication on data quality and capturing, still in a secure environment. The final stage, the UMG Prototype Mk. III (see Figure 7), consists of a fully functional, life-size boat in which a set of sensors, based on the findings of the earlier stage tests, is implemented. This prototype will be tested under realistic settings and different scenarios. The practical implementation of the sensor equipment implies a series of challenges. For example, the sensors need to be protected against damage caused by impacts when coming ashore. On the other hand, they have to be "open" to the surrounding environment to measure correctly. Other challenges include how the captured data is communicated "in the wild" to the database.
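To illustrate what the gateway loop on such a processing unit could look like, here is a minimal Python sketch. Everything in it is an assumption made for illustration: the driver functions, the JSON field names and the backend URL are placeholders, not the interfaces of the actual UMG prototypes.

```python
import json
import time
import random
import urllib.request

# Placeholder drivers: on a real gateway these would read the humidity,
# pressure and temperature sensors over I2C/SPI instead of returning noise.
def read_humidity() -> float:    return random.uniform(40, 90)
def read_pressure() -> float:    return random.uniform(990, 1030)
def read_temperature() -> float: return random.uniform(5, 30)

BACKEND_URL = "http://example.invalid/boma/measurements"  # hypothetical endpoint

def sample(boat_id: str) -> dict:
    """One timestamped measurement record destined for the PLM database."""
    return {
        "boat_id": boat_id,
        "timestamp": time.time(),
        "humidity_pct": read_humidity(),
        "pressure_hpa": read_pressure(),
        "temperature_c": read_temperature(),
    }

def push(record: dict) -> None:
    """Send the record as JSON; failures are only reported so the loop keeps running."""
    req = urllib.request.Request(
        BACKEND_URL,
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError as exc:
        print("upload failed (a real gateway would buffer locally):", exc)

if __name__ == "__main__":
    for _ in range(3):            # the real gateway would loop indefinitely
        push(sample("UMG-MK3"))
        time.sleep(1)
```

A real gateway would also buffer readings while the wireless link is down, which is only hinted at in the exception handler.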
Application, Limitation and Discussion
In this sub-section, three exemplary cases of utilization and application of MOL data of a leisure boat by the BOL stakeholders through the Product Avatar are presented. The use cases are deliberately kept short, as describing them in detail would exceed the scope of this paper. The first use case of MOL data is based on the Product Avatar supplying data on location, temperature and humidity, in combination with a timestamp, to boat designers.
Ideally, they can derive information not only on suitable materials (e.g. what kind of wood can withstand high humidity and sun) and on the dimensioning of certain details (e.g. a sunroof is more likely to be used in tropical environments), but also on the equipment needed under the given circumstances (e.g. a heating system or air conditioning).
Whereas the benefit of the first use case could also be realized using other methods, the second one is more technical. The Product Avatar provides information directly to the suppliers of the boat OEM, namely the engine manufacturer. Through aggregated data on, on the one hand, the engine itself (e.g. rpm or heat curve) and, on the other hand, the conditions under which it is used (e.g. frequency of use, runtime, but also outside temperature), the engine designers can reduce the risk of over-engineering. When a boat is used only a few times a year, the durability of the engine module might not be as critical.
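The aggregation step described here can be sketched in a few lines of Python; the record format, the field names and the season length are illustrative assumptions and would in practice be agreed between the OEM and the engine manufacturer.

```python
from statistics import mean

# Hypothetical MOL log: one entry per engine run (hours, mean rpm, outside temperature in C).
runs = [
    {"hours": 2.5, "rpm": 2400, "temp_c": 22},
    {"hours": 0.8, "rpm": 3100, "temp_c": 27},
    {"hours": 4.0, "rpm": 1900, "temp_c": 18},
]

def usage_profile(runs: list, season_days: int = 180) -> dict:
    """Condense raw engine runs into the figures a designer would look at."""
    total_hours = sum(r["hours"] for r in runs)
    return {
        "runs_per_season": len(runs),
        "total_hours": total_hours,
        "mean_rpm": mean(r["rpm"] for r in runs),
        "duty_cycle_pct": 100 * total_hours / (season_days * 24),
    }

print(usage_profile(runs))
# A very low duty cycle would be the argument against over-engineering durability.
```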
The third use case lies in between the former two. While it is unlikely that MOL data can influence manufacturing processes directly, it can definitely influence them indirectly through process planning. For example, through location-based data and the accompanying legal information for that location, both provided by the Product Avatar, the production planner can change the processes. It may, for instance, be necessary to apply a shark-inspired surface structure to the hull instead of using toxic paint, because toxic paint is illegal in the region where the boat is mostly used. This could also be an application for the MOL phase again, notifying boat users not to enter a certain area (e.g. a coral reef) where they might inflict damage on the environment, which might be valued by environmentally conscious users.
CONCLUSION AND OUTLOOK
This paper presented an introduction to the basic principles of PLM and Intelligent Products as a basis for the concept of a Product Avatar as a digital representation of a physical product. After introducing the theoretical concept and giving an example of the application of PLM data during the MOL phase, the usage of MOL data during the BOL phase was elaborated. To do so, the main stakeholders of both phases were derived and the process towards data capturing on leisure boats was briefly introduced. This was followed by three hypothetical use cases on how MOL data provided by a Product Avatar can be beneficial for the stakeholders.
In conclusion, the Product Avatar can only be as good as the existing data and information and, very importantly, the knowledge of what information and data are needed, in what form (e.g. format), through which channel, and by which individual stakeholder.
In the next steps the Product Avatar concept will be expanded and evaluated further through scenarios as described in the use cases.
Fig. 1. -Phases of the Product Lifecycle
Fig. 2. -Digital, stakeholder-specific representation of a product through a Product Avatar
Fig. 3. -Screenshot of the Facebook Product Avatar for the usage phase (MOL)
Fig. 4. -Screenshot of an excerpt of information provided by the Product Avatar
Fig. 5. -Lab prototype "Universal Marine Gateway" (UMG) with example sensors
Fig. 6. -UMG Prototype Mk. II "in action"
Fig. 7. -Prototype boat for sensor integration and testing
ACKNOWLEDGEMENT
This work has partly been funded by the European Commission through the BOMA "Boat Management" project in FP7 SME-2011-1 "Research for SMEs". The authors gratefully acknowledge the support of the Commission and all BOMA project partners. The results presented are partly based on a student project at the University of Bremen. The authors would like to thank the participating students for their significant contributions: Anika Conrads, Erdem Galipoglu, Rijad Merzic, Anna Mursinsky, Britta Pergande, Hanna Selke and Mustafa Severengiz.
"996300",
"991770",
"989864"
] | [
"217679",
"217679",
"217679"
] |
https://inria.hal.science/hal-01485806/file/978-3-642-41329-2_13_Chapter.pdf
Steve Rommel
email: steve.rommel@ipa.fraunhofer.de
Andreas Fischer
email: andreas.fischer@ipa.fraunhofer.de
Additive Manufacturing -A Growing Possibility to Lighten the Burden of Spare Parts Supply
Keywords: Additive Manufacturing, Spare Parts, Spare Parts Management
INTRODUCTION
Considering the global as well as the local market and the competition of corporations within these markets, constant improvement of products and processes is required in order to find more cost-effective solutions for manufacturing products. At the same time, services and products need to offer growing possibilities and ways for customers to individualize, specialize and improve these products. This also holds true for spare parts and their market importance.
Today's spare parts industry is characterized by high-volume production of sometimes specialized products, long-distance transportation and extensive warehousing, resulting in huge inventories of spare parts. These spare parts even carry the risk of being outdated or unusable at the time of need, so that they are often scrapped afterwards. On an industrial level, reacting to this burden, companies are competing or collaborating with OEMs to provide a variety of maintenance services and products, which in turn are limited with regard to the broadness and flexibility of their service solutions, especially when design or feature changes to spare parts (copies or OEM-manufactured) are required.
Additive Manufacturing offers new, sometimes unimagined possibilities for manufacturing a product, which have the potential to change logistical and business requirements and therefore to lighten this burden. Being a new possibility, the following aspects of business need to be further developed with a focus on additive manufacturing:
- standardization of manufacturing processes
- logistics
- product and process management
- certification processes
- product and business management
The goal of funded and private research projects is to develop a model which incorporates the old and new requirements of manufacturing and the market demands, in order to assist companies in competing better in their markets.
ADDITIVE MANUFACTURING AND SPARE PARTS MANAGEMENT
ADDITIVE MANUFACTURING TECHNOLOGIES
Additive Manufacturing and its technologies involve all technologies used to manufacture a product by adding (placing and bonding) layers of the specific material to each other in a predetermined way. These so-called layers are, generally speaking, 2D cross-sections of the product's 3D model. AM therefore creates the geometry as well as the material characteristics during the build, predetermined by the material selected. The contour is created in the x-y direction. The z-direction creates the volume and therefore the third dimension.
Additive Manufacturing offers the possibility to optimize products after each run of parts being built, based on the lessons learned. Generally speaking, there are few to no limitations on the freedom of design given by this process. Complex shapes and functional parts can be realized by these innovative processes directly from CAD data. Two examples of such technologies are Selective Laser Sintering (SLS), as shown in Figure 1, and Fused Deposition Modeling (FDM), as shown in Figure 2. In order to hold a finished real product in hand, two main process steps need to be performed [START_REF] Gebhardt | Generative Fertigungsverfahren: Rapid Prototyping -Rapid Tooling -Rapid Manufacturing[END_REF]:
1. Developing the cross-sections (layers) of the 3D model (a minimal slicing sketch is given below).
2. Fabricating the physical product.
In order to stand the test of being a true addition or even an alternative to conventional manufacturing technologies, additively manufactured products are expected to possess the same mechanical and technological characteristics as comparable conventionally manufactured products. This does not mean that their material characteristics have to be exactly the same as the ones of conventional technologies. Such a view can be misleading and a limiting factor for the use of additive manufacturing, because the new freedom of design also offers the possibility to create new products which may look completely different but perform the required function equally well or better.
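As a purely illustrative aside on step 1, slicing boils down to intersecting each triangle of the 3D model with a series of horizontal planes. The Python sketch below shows only this geometric core under simplifying assumptions (triangles given as vertex tuples, no contour ordering, no hatching, no supports); it is not taken from any slicing software discussed here.

```python
def slice_triangle(tri, z):
    """Return the 2D segment where one triangle crosses the plane z = const, or None."""
    below = [p for p in tri if p[2] < z]
    above = [p for p in tri if p[2] >= z]
    if not below or not above:
        return None                      # triangle lies entirely on one side of the plane
    points = []
    for a in below:
        for b in above:
            t = (z - a[2]) / (b[2] - a[2])           # linear interpolation along the edge
            points.append((a[0] + t * (b[0] - a[0]),
                           a[1] + t * (b[1] - a[1])))
    return points[:2]                    # two intersection points form one contour segment

def slice_mesh(triangles, layer_thickness, height):
    """Collect the contour segments of every layer from the first layer up to 'height'."""
    layers = []
    n_layers = int(round(height / layer_thickness))
    for i in range(1, n_layers + 1):
        z = i * layer_thickness
        segments = [s for s in (slice_triangle(t, z) for t in triangles) if s]
        layers.append((z, segments))
    return layers

# Toy "mesh": a single tetrahedron with 10 mm height, sliced into 2 mm layers.
tetrahedron = [
    [(0, 0, 0), (10, 0, 0), (0, 10, 0)],
    [(0, 0, 0), (10, 0, 0), (0, 0, 10)],
    [(0, 0, 0), (0, 10, 0), (0, 0, 10)],
    [(10, 0, 0), (0, 10, 0), (0, 0, 10)],
]
for z, segments in slice_mesh(tetrahedron, layer_thickness=2.0, height=10.0):
    print(f"layer at z = {z:4.1f} mm: {len(segments)} contour segment(s)")
```

A production slicer would additionally join the segments into closed contours and generate the scan or extrusion paths for each layer.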
The thinking has to shift from "get exactly the same product and its material characteristics with another technology, so I can compare it" to "get the same performance and functionality of the product regardless of the manufacturing technology" used.
Besides the benefit of using the 3D model data directly for the manufacturing process of a product, there are additional benefits, listed below [START_REF] Gebhardt | Generative Fertigungsverfahren: Rapid Prototyping -Rapid Tooling -Rapid Manufacturing[END_REF][START_REF] Hopkinson | Rapid Manufacturing: An Industrial Revolution for the Digital Age: The Next Industrial Revolution[END_REF]:
Table 1. Benefits of Additive Manufacturing
- Integration of functions, increase in the complexity of products and components, and inclusion of internal structures for the stability of the product
- Manufacturing of products that are very difficult or impossible to manufacture with traditional/conventional manufacturing technologies (e.g. undercuts)
- Variation of spare products: the ability to adapt the same products to "local" requirements and therefore supply local markets with the product and its expected features, with low effort and no true impact on manufacturing
- Customization: two forms of customization are possible, manufacturer customization (MaCu) and client customization (CliCu)
- No tooling, and a reduction in process steps
- One-piece or small-volume series manufacturing is possible (product on demand)
- Alternative logistics strategies based on the current requirements give AM enormous flexibility with regard to the strategy of the business model
SPARE PARTS MANAGEMENT
Spare parts nowadays have become a sales and production factor, especially for any manufacturing company with a highly automated, complex and linked machinery park and production setup. Any production down-time caused by the failure of a component of any equipment within production lines will lead not only to capacity issues but also to monetary losses. These losses may come from [START_REF] Biedermann | Ersatzteilmanagement: Effiziente Ersatzteillogistik für Industrieunternehmen (VDI-Buch)[END_REF]:
Distribution: sales lost due to products not being manufactured.
Production: unused material, increased material usage due to reduced capacity, additional overtime for employees to balance the inventory and make up for the lost production, and additional maintenance costs of 2-30% of overall production costs.
Purchasing and supply chain: increased storage for spare parts and costs incurred due to the purchase of spare parts.
These points alone illustrate the need for each corporation to choose the right spare parts strategy, in order to reduce the risk to the business to a minimum by determining the right balance between a minimum inventory and the ability to deliver spare parts in time to prevent production downtime or dissatisfied customers. When choosing this strategy, one important aspect is the type of spare part. According to DIN 13306, a spare part is an "item intended to replace a corresponding item in order to retain or maintain the original required function of the item". Biedermann defines the items in the following way [START_REF] Biedermann | Ersatzteilmanagement: Effiziente Ersatzteillogistik für Industrieunternehmen (VDI-Buch)[END_REF]:
Spare part: an item, group of items or complete product intended to replace damaged, worn-out or missing items, item groups or products.
Reserve/back-up item: an item which is allocated to one or more machines (installations) and therefore not used individually, held in disposition and stored for the purpose of maintenance. Back-up items are usually expensive and are characterized by a low inventory level with a high monetary value.
Consumable item: an item which, due to its nature, will be consumed during use and for which there is no economically sound way of maintenance.
Another aspect in determining the strategy is the type of maintenance the company is choosing or offering. There are three basic strategies:
Total Preventive Maintenance (TPM): characterized by the performance of inspections, maintenance work and the replacement of components prior to the failure of the equipment.
Scheduled Maintenance or Reliability Centered Maintenance: a strategy where the replacement of an item is, as the term says, planned ahead of time.
Corrective Maintenance or Repair, also called Risk Based Maintenance: an item fails and is replaced in order to bring the installations or equipment back into production mode.
Besides the mentioned type of spare part and the maintenance strategy, the following aspects play an equally important role when selecting the strategy: the failure behavior and failure rate of the item, reliability requirements, the level of information available and obtainable, back-up solutions, and possible alternatives, amongst others.
The decision on the strategy and the type of spare parts determines the logistics and supply chain model to be chosen and therefore the cost for the logistics portion.
SPARE PARTS LOGISTICS
Current spare parts logistics strategies typically focus on the procurement of spare parts from an already established supplier. In many cases this supplier is responsible for manufacturing the initial primary products. This brings the benefits of an already established business relationship, a defined and common understanding of the requirements for the products and services offered, a clear definition of responsibilities, established logistics, established payment modalities and a common understanding of the expectations of either party. On the other hand, there are some drawbacks such as a lack of innovative ideas, an unwanted mutual dependence, or an increase in logistics costs, to name a few.
The logistics strategy itself is determined by two groups of factors:
NEW PROCESS DESIGN AND BUSINESS MODEL
PROPOSED PROCESS FLOW
Derived from the limitations and effects of the current spare parts supply strategies, from customer feedback gathered through questionnaires, and from the AM processes in combination with the product, Figure 5 illustrates a generic conventional process flow model and Figure 6 the proposed preliminary process flow model.
As with standard process models the proposed process flow model covers all the process steps starting with the input from the market in the form of customer orders and customer feedback up until the delivery of the finished product to the customer.
Fig. 5. Conventional process flow
Fig. 6. Preliminary Process Model
The process model will be in a permanent updating stage for some time due to the development stage of the technologies. Producing parts using Additive Manufacturing technologies has an impact on multiple levels and multiple areas of a business' operation.
IMPACT OF AM TECHNOLOGIES ON SPARE PARTS MANUFACTURING
The impact of using AM technologies to manufacture spare parts is described in the following subsections, which present only an overview of the main benefits.
REDUCTION OR ELIMINATION OF TOOLING.
Conventional manufacturing like injection molding requires various tools in order to fabricate a product from start to finish. This results not only in costs for the tool build and tooling material, but also in time for the tool build, setup procedures during production periods and maintenance activities in order to keep the tools and therefore production running. Additionally tooling often has to be stored for a defined time after the end of production (EOP) to be able to produce spare parts when needed.
There are two possible alternatives to the conventional way of manufacturing spare parts being proposed: one being the fabrication of products including its spare parts strictly using Additive Manufacturing technologies from the start, thus eliminating tooling completely. The other alternative is to manufacture primary products using conventional technologies including tooling but manufacturing spare parts using Additive Manufacturing technologies.
In order to decide which alternative is to be preferred, it is suggested to analyze the spare part in question to determine the potentials and risks of using Additive Manufacturing. Depending on the spare part's characteristics and the spare parts strategy, the following benefits can be achieved:
- Reduction or elimination of tooling
- Freeing up storage space for tooling that is no longer needed
- Freeing up storage space for already produced products
- Reduction in logistics costs
- Freeing up production time otherwise used to produce parts ahead of time after EOP
- Reduction of obsolete or excessive spare parts being produced at EOP and disposed of if not required
In the case of spare parts an additional benefit is that product failures causing the need for spare parts can be examined and corrective actions can be implemented into the product design without the need to also change or update tooling data, tooling and processes.
REDUCING COMPLEXITY.
The manufacturing of spare parts directly from 3D CAD data significantly reduces the complexity in organizational and operational processes e.g. reduction of data transfers and conversion for the various tools and equipment.
On the other hand, handling data is much more convenient than handling real parts, but it also requires a secured loop in order to ensure correct data handling and storage. Within the mega-trend of customization/individualization of products, it is very easy to produce many different versions and personalized products with very little additional effort, both short-term and long-term. Handling the data of all these versions will be the limitation.
MANUFACTURING "ON DEMAND" AND "ON LOCATION".
The main advantage of Additive Manufacturing spare parts is the possibility to produce these parts on demand. Two alternative models of this process are possible. First the spare parts will be kept on stock in very small numbers and the customer demand will trigger the delivery of the parts from the stock and the immediate production of the desired number of parts to refill the stock. Second is to eliminate the stock and produce directly the number and version of parts that the customer demands. The timing demand will be longer but no capital will be tied up in the spare parts sitting in storage. Another advantage is the future production on location: Production on location envisions sending the 3D-CAD part data with additional information regarding building process, materials and tolerances to a production site close to the customer. The parts could be manufactured in independent or dependent production facilities that have clearly defined and certified Additive Manufacturing capacities. This model could have a large impact on the logistics that will be evaluated. The impact of a production on demand, on location and with local material is recapped in the following Staying competitive using traditional business model concepts is becoming more and more difficult. Customization and the response time to customer needs are two critical factors of being successful. 21st century companies have to focus on moving physical products as well as their information quickly through retail, distribution, assembly, manufacture and supply. This is part of the value proposition manufacturing and service providers offer to their customers. Using Additive Manufacturing can provide a significant competitive advantage to a company.
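The two fulfilment alternatives for production on demand described above can be written down as a small decision routine. The policy names and the replenishment rule in the following Python sketch are illustrative assumptions, not a prescription from the project.

```python
def fulfil_order(part_id: str, qty: int, stock: dict, policy: str = "on_demand") -> list:
    """Return the actions triggered by a spare-part order under the two AM policies."""
    actions = []
    if policy == "small_stock":
        on_hand = stock.get(part_id, 0)
        shipped = min(on_hand, qty)
        if shipped:
            actions.append(f"ship {shipped} x {part_id} from stock")
            stock[part_id] = on_hand - shipped
        # immediately rebuild what was taken, plus any shortfall
        actions.append(f"start AM build of {qty} x {part_id} to refill stock")
    else:  # pure production on demand: nothing is kept on hand
        actions.append(f"start AM build of {qty} x {part_id}, ship on completion")
    return actions

stock = {"impeller-3d": 2}
print(fulfil_order("impeller-3d", 3, stock, policy="small_stock"))
print(fulfil_order("impeller-3d", 1, stock, policy="on_demand"))
```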
Business models which consist of a deeper cooperation between suppliers and receivers of on-demand parts, possibly within virtual networks, will have to be developed. The stakeholders involved vary depending on the type of spare part and the setup between the manufacturer and the user. Depending on the business model, each player has a different level of involvement and therefore a different level of value creation it adds to the overall product.
CONCLUSIONS AND OUTLOOK
Implementing and using Additive Manufacturing to manufacture spare parts offers a viable option not only to become familiar with a new, emerging manufacturing technology, but also presents opportunities to offer products and services to the customer which fit their desires and requirements regarding time- and cost-effective delivery.
It is however important to take into account that Additive Manufacturing also has its current limitations, such as size, surface finish quality and production volume (number of parts). Additive Manufacturing and its benefits have the potential for an enormous economic impact by reducing inventory levels to an absolute minimum as well as reducing logistics costs significantly.
Fig. 1. Schematic diagram of SLS [following VDI 3404]
Fig. 3. Two main process steps of AM
Fig. 4. Process Steps of AM
Fig. 7. Key stakeholders of an Additive Manufactured Spare Parts logistics
1. Exogenous factors: social and political environment and settings, market situation and competition, type of spare part, and customer requirements and expectations [Michalak 2009].
2. Endogenous factors: company-internal factors, in this case inbound, production and outbound logistics.
With the focus on additive manufacturing, the parameters for spare parts supplied using Additive Manufacturing are shown in Tables 2-4:

Table 2. Parameter Selection inbound logistics | Rommel, Fraunhofer IPA (following Michalak 2009)
Sourcing - place of spare parts production: internal / external
Sourcing - location of spare parts manufacturer: local / domestic / global
Sourcing - number of possible spare parts manufacturers: single / multiple
Vertical production integration: components / modular
Allocation concepts: stock / JIT / postponement

Table 3. Parameter Selection outbound logistics | Rommel, Fraunhofer IPA (following Michalak 2009)
Outbound logistics structure, vertical (steps of distribution): single-step / multiple-step
Outbound logistics structure, horizontal (number of distribution units): single / multiple
Sales strategy: intensive / selective / exclusive
Storage location structure (if needed): central / local

Table 4. Parameter table for selecting the storage location strategy | Rommel, Fraunhofer IPA (following Schulte 2005)
(per parameter: trending towards centralized storage / trending towards decentralized storage)
Assortment: broad / limited
Delivery time: sufficient / fastest delivery (specific time ...)
Product value: high / low
Level of concentration of manufacturing sites: one source / multiple sources
Customer structure: few big-size companies / many small-size companies
Specific storage requirements: yes / no
Specific national/regional requirements: few / many

Table 5. Impact of Additive Manufacturing on Spare Parts
On demand: no more warehousing for spare parts, including space, building maintenance, energy for climate control, workers ...; no more logistics for scrapping unused old spare parts; no more time limitations for spare parts support
On location: worldwide service without limitations; no more logistics for end products; faster response time over long distances; social benefits of job creation in the local area; cultural adaptation
Local material: reaction to local requirements; environmentally friendly; much less raw material logistics
3.2.4 BUSINESS MODEL OPPORTUNITIES.
"1003698",
"1003699"
] | [
"443235",
"443235"
] |
https://inria.hal.science/hal-01485808/file/978-3-642-41329-2_15_Chapter.pdf
Karlheinz J Hoeren
email: karlheinz.hoeren@uni-due.de
Gerd Witt
email: gerd.witt@uni-due.de
Karlheinz P J Hoeren
Design-Opportunities and Limitations on Additive Manufacturing Determined by a Suitable Test-Specimen
Keywords: Additive Manufacturing, Laser Beam Melting, Fused Layer Modeling, Laser Sintering, test-specimen
INTRODUCTION
Additive manufacturing can be described as a direct, tool-less and layer-wise production of parts based on 3D product model data. This data can be based on image-generating measuring procedures like CT (computed tomography), MRI (magnetic resonance imaging) and 3D scanning, or, in the majority of cases, on a 3D CAD construction. Due to the layer-wise and tool-less build-up principle, additive manufacturing offers a huge amount of freedom for designers compared to conventional manufacturing processes. For instance rear sections, lightweight constructions or inner cavities can be built up without a significant rise in manufacturing costs. However, there are some specific limitations on the freedom of construction in additive manufacturing. These limitations can partly be attributed to the layer-wise principle of build-up, which all additive manufacturing technologies have in common, but also to the individual restrictions that come along with every single manufacturing technology. [START_REF] Wohlers | Wohlers Report 2011 -Annual Worldwide Progress Report[END_REF] [2] [START_REF] Gebhardt | Generative Fertigungsverfahren[END_REF] In the following, after a short description of the additive manufacturing technologies Laser Beam Melting (LBM), Laser Sintering (LS) and Fused Layer Modeling (FLM), the geometry of a test-specimen that has been designed by the chair of manufacturing technologies of the University Duisburg-Essen will be introduced. Based on this geometry, the design opportunities and limitations of the described technologies will be evaluated.
LASER BEAM MELTING (LBM)
Besides Electron Beam Melting, LBM is the only way of directly producing metal parts in a powder-based additive manufacturing process. [START_REF]VDI-Guideline 3404: Additive fabrication -Rapid technologies (rapid prototyping) -Fun-damentals, terms and definitions, quality pa-rameter, supply agreements[END_REF] [START_REF] Wiesner | Selective Laser Melting -Eine Verfahrensvariante des Strahlschmelzens[END_REF] In the course of the LBM process, parts are built up by repeatedly lowering the build-platform, recoating the build-area with fresh powder from the powder-supply and selectively melting the metal powder by means of a laser beam. For the schematic structure of an LBM machine see Figure 1. [START_REF] Sehrt | Möglichkeiten und Grenzen der generativen Herstellung metallischer Bauteile durch das Strahlschmelzverfahren[END_REF] For melting metal powder, a lot of energy is needed. Therefore a huge amount of thermal energy is led into the build-area. In order to lead this process heat away from the build-plane, and in order to keep the parts in place, supports (or support-structures) are needed in Laser Beam Melting. Support-structures have to be placed underneath every surface that is inclined by less than about 45° towards the build-platform. They are built up simultaneously with the part and consist of the same material. Thus parts have to be mechanically separated from their supports after the LBM process, for example by sawing, milling or snapping off. As a result, the surface quality in supported areas is significantly reduced and LBM parts are often reworked with processes like abrasive blasting, barrel finishing, or electrolytic polishing. [7] [8] Fig. 1. -schematic illustration of the LBM process
LASER SINTERING (LS)
In Laser Sintering, unlike LBM, plastic powder is used as the basic material. Regarding the procedure, LBM and LS are very similar (see Figure 2). One of the main differences is that in LS no support-structures are necessary. This is because in LS the powder bed is heated to a temperature just below the melting point of the powder, so the energy that has to be introduced by the laser for melting the powder is very low. Therefore only little additional heat has to be led away from the build-area. On the one hand, this amount of energy can be compensated by the powder; on the other hand, due to the smaller temperature gradient, the curl-effect is less pronounced. The curl-effect causes a part to bend inside the powder bed and become deformed or even collide with the recoating-unit; the latter would lead to a process breakdown.
FUSED LAYER MODELING (FLM)
In FLM, slices of the part are built up by extruding an ABSplus wire through the heated nozzles of a movable printing head (see Figure 3). The printing head is moved in the x-, y-plane of the FLM machine to build up a layer. When a layer is finished, the build-platform is lowered by a layer-thickness and the next layer is built up. [START_REF]VDI-Guideline 3404: Additive fabrication -Rapid technologies (rapid prototyping) -Fun-damentals, terms and definitions, quality pa-rameter, supply agreements[END_REF] [1] Fig. 3. -schematic illustration of the FLM process
Since FLM is not a powder-based procedure like LS or LBM, there is no powder to prevent the heated ABSplus from bending up or down inside the build-chamber. Therefore, in FLM, supports are needed. In contrast to the supports used in LBM, these supports only have the function of holding the part in place. One special feature of supports in FLM is that a second, soluble material is extruded through a second nozzle in order to build up the supports. This way, when the FLM process is finished, the part can be put into an alkaline solution and the supports are dissolved. As a consequence, supports are not the main reason for the low finish quality of parts produced by FLM. However, the high layer-thickness, which is one of the factors that make FLM cheap compared to other additive manufacturing technologies, impacts the finish quality in a negative way.
DESIGN OF THE TEST-SPECIMEN
The chair of manufacturing technologies of the University Duisburg-Essen has developed a test-specimen to convey the limits of additive manufacturing technologies. This specimen is designed to illustrate the smallest buildable wall-thicknesses, gap-widths and cylinder and bore diameters depending on their orientation inside the build-chamber. Thus, diameters/thicknesses between 0.1 and 1 mm are built up at intervals of 0.1 mm and diameters/thicknesses between 1 and 2 mm are built up at intervals of 0.25 mm (see Figure 4). In addition, the test specimen contains walls with different angles towards the build-platform (x-, y-plane), in order to show the change in surface quality of downskin surfaces with increasing/decreasing angles. Furthermore, a bell-shaped geometry is built up in order to give a visualisation of the so-called stair-effect. This effect characterises the lack of reproduction accuracy, due to the fact that parts are built up layer-wise, depending on the orientation of a surface towards the x-, y-plane. For a further evaluation of the test-specimen, besides visual inspection, the distances between individual test-features are made large enough to enable the use of a coordinate measuring machine. However, the chief difference in the design of this test specimen compared to other test-specimens of the chair of manufacturing technologies [START_REF] Wegner | Design Rules For Small Geometric Features In Laser Sintering[END_REF] [START_REF] Reinhardt | Ansätze zur Qualitäts-bewertung von generativen Fertigungsverfahren durch die Einführung eines Kennzahlen-systems[END_REF] is that this specimen is designed to suit the special requirements that come along with additive manufacturing by technologies using supports (especially LBM). Besides the features described earlier, these special requirements result in the following problems:
CURL-EFFECT
The curl-effect, which was already mentioned in the description of LBM, needs special attention. Since the production of large surfaces inside the x-y-plane is directly connected with a stronger occurrence of the curl-effect, this has to be avoided. Therefore the test-specimen is divided into eleven platforms containing the individual test-features. The platforms are connected by small bridges, positioned at a z-level below the upskin-surfaces of the platforms. This way, a large test-specimen can be produced without melting up large surfaces, especially at a z-level that may have an influence on the features to be evaluated.
SUPPORTS
As described before, in some additive manufacturing processes supports have to be built up with the parts for several reasons. Since supports often have to be removed manually, the geometry of the test-specimen should require few and less massive supports. This way, production costs for support material and post-processing requirements can be kept at a low level.
In most cases the critical angle between the build-platform and a surface to be built up is about 45 degrees. Thus the downskin surface of each platform of the test specimen is equipped with a 60-degree groove. This way the amount of support that has to be placed under the platforms is significantly reduced without lowering process stability (see Figure 5). Additionally, there are two kinds of test-features on the test-specimen which require supports. Since walls and cylinders that are oriented parallel to the build platform cannot be built without supports, the platforms containing these features are placed at the outside of the test specimen. By this means, the features are accessible for manual post-processing and visual inspection.
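The 45-degree rule mentioned above can be expressed as a simple geometric check on each downward-facing facet of a part. The Python sketch below is a generic illustration, not the build-preparation software actually used; it assumes triangle vertices in build-chamber coordinates with z as the build direction and a vertex order that makes the facet normal point outwards.

```python
import math

def facet_needs_support(v0, v1, v2, critical_angle_deg: float = 45.0) -> bool:
    """True if a downward-facing triangle is inclined by less than the critical
    angle towards the build platform (x-y plane) and therefore needs support."""
    ux, uy, uz = (v1[i] - v0[i] for i in range(3))
    wx, wy, wz = (v2[i] - v0[i] for i in range(3))
    # facet normal from the cross product of the two edge vectors
    nx, ny, nz = uy * wz - uz * wy, uz * wx - ux * wz, ux * wy - uy * wx
    norm = math.hypot(nx, ny, nz)
    if norm == 0 or nz >= 0:
        return False                      # degenerate, vertical or upward-facing facet
    inclination = math.degrees(math.acos(-nz / norm))   # 0 deg = parallel to platform
    return inclination < critical_angle_deg

# Two downskin facets (vertex order chosen so the normal points downwards):
steep   = ((0, 0, 2.0), (0, 1, 0.0), (1, 0, 2.0))   # about 63 deg to the platform
shallow = ((0, 0, 0.5), (0, 1, 0.0), (1, 0, 0.5))   # about 27 deg to the platform
print(facet_needs_support(*steep))    # False - no support needed
print(facet_needs_support(*shallow))  # True  - must be supported
```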
RECOATING
In powder- or liquid-based additive manufacturing processes, different recoating-systems are used to supply the build-platform with fresh material. One thing most recoating-systems have in common is some kind of blade that pushes the material from one side of the build-chamber to the other. Especially in LBM, the surfaces that have just been built up tend to have elevations. In the majority of cases, these elevations are the edges of the part, slightly bending up as a result of the melting process. The combination of these circumstances can cause scratching between the recoating-unit and the part to varying extents, depending on the build-material and the build-parameters used. In order to keep this phenomenon from affecting the features of the test-specimen, all platforms of the test specimen are oriented at an angle of 45 degrees towards the recoating-unit. This way, harsh contacts between the recoating-unit and the long edges of the platforms can be avoided. However, there is another problem connected with the recoating-direction that may have an influence on the results when small features are built up. As the test-specimen is designed to show the most filigree features that can be produced with each additive manufacturing process, the diameters and wall-thicknesses have to be decreased to the point where they cannot be built up anymore. At this point, the features are either snapped by the recoating-unit, or they cannot be built up as a connected object anymore. In both cases, fragments of the test-features are pushed across the powder-bed by the recoating-unit. In order to prevent these fragments from influencing the build-process, by getting stuck between the recoating-unit and another area of the part or by snapping other test-features, the platforms and test-features are arranged in a suitable way. For instance, all diameters and wall-thicknesses decrease along the recoating-direction. Additionally, platforms with gaps are placed behind platforms with filigree features, so the space above them can be used as an outlet zone.
PRODUCTION OF TEST-SPECIMENS WITH LBM, LS AND FLM
In the following, the results of visual inspections and measurements on test-specimens built of Hastelloy X (LBM with an EOSINT M270 Laser Beam Melting system), glass-filled polyamide (LS with a FORMIGA P 100 Laser Sintering system) and ABSplus (FLM with a Stratasys Dimension 1200es Fused Layer Modeling system) are discussed. For inspections and measurements, the test-specimens made of Hastelloy X and glass-filled polyamide were freed from powder adhesions by blasting with glass beads (LS), respectively corundum and glass beads (LBM). On the test-specimen produced by FLM, only the supports were removed by putting it into an alkaline bath.
WALL-THICKNESSES
A look at the minimum producible wall-thicknesses shows that in LBM the most filigree walls can be produced. Additionally, walls in LBM show the slightest deviation from the specified dimensions (see Figure 7). However, in LBM there is a considerable difference between walls oriented parallel to the recoating-unit and walls oriented orthogonal to the recoating-unit. The walls oriented parallel to the recoating-unit can only be built down to a thickness of 0.7 mm; thinner walls were snapped by the recoating-unit (see Figure 6). Especially in FLM, but also in LS, one can observe that from a certain threshold, in spite of decreasing nominal dimensions, the measured wall-thicknesses do not become any thinner. In FLM this can be explained by the fact that an object that is built up at least consists of its contours. Taking into account the diameter of an ABSplus wire and the fact that it is squeezed onto the former layer, it is clear that the minimum wall-thickness is situated at about 1 mm. In LS, the explanation is very similar; however, the restricting factor is not the thickness of a wire, but the focus diameter of the LS system in combination with the typical powder adhesions. The wall-thicknesses along the z-axis in powder-based manufacturing technologies (LBM and LS) are always slightly thicker than the nominal size (see Figure 8). This is explained by the fact that, when melting the first layers of the walls, especially in LS, excess energy is led into the powder underneath the walls and melts additional powder particles. In LBM this effect is observed less intensely, since the manual removal of supports affects the results.
Fig. 8. -results of measuring minimal wall-thicknesses along the z-axis
In FLM, the course of the measured wall-thicknesses is erratic within the range from 0.25 to 1.0 mm. This can be explained by considering the layer-thickness in FLM, which is 0.254 mm. That way, a nominal thickness of 0.35 mm, for example, can be represented by either one or two layers (0.254 mm or 0.508 mm). Since the resolution in FLM is very coarse, this effect can also be seen by a visual inspection of the walls (see Figure 9).
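The staircase in the measured values follows directly from the wall being an integer number of layers thick. The short Python calculation below only illustrates this quantization; rounding to the nearest layer count is an assumption made here for illustration, and the slicer's actual rule may differ.

```python
LAYER = 0.254  # FLM layer thickness in mm

for nominal in (0.25, 0.35, 0.50, 0.60, 0.75, 0.90, 1.00):
    layers = max(1, round(nominal / LAYER))        # at least one deposited layer
    print(f"nominal {nominal:.2f} mm -> {layers} layer(s) = {layers * LAYER:.3f} mm")
```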
CYLINDERS
The test-specimen contains cylinders with a polar angle of 0 degrees and cylinders with a polar angle of 45 and 90 degrees, each in the negative x- and y-direction. The orientation along the x-axis has been chosen since the process stability in LBM is a lot higher if the unsupported cylinders with a polar angle of 45 degrees do not grow against the recoating-direction.
Fig. 10. -results of measuring minimal cylinder-diameters
Comparing the cylinders with a polar angle of 0 degrees shows again that in LBM the most filigree features can be built up with the best accuracy (see Figure 10). However, breaks are visible at a height of about 5 mm in cylinders with a diameter of less than 0.9 mm (see Figure 11). These breaks are a result of the scratching between the cylinders and the recoater-blade. This time the blade did not snap the cylinders, since the geometry is more flexible. Thus the cylinders were able to flip back into their former positions and be built on. The results in LS are comparable to those of the minimal wall-thicknesses along the x- and y-axis (see Figure 10). The smallest possible diameter in LS is 0.5 mm. In FLM, however, only cylinders with a diameter of 2 mm can be built up.
Fig. 12. -form deviation of FLM cylinders
The results concerning cylinders with a polar angle of 45 and 90 degrees in LBM and LS correlate with the results of cylinders with a polar angle of 0 degrees regarding their accuracy. In FLM it is striking that smaller diameters can be built with increasing polar angles (see Figure 12). At a polar angle of 90 degrees, even cylinders with a diameter of 0.1 mm can be built. However, with increasing polar angles the form deviation in FLM becomes more visible. Due to the coarse resolution in FLM, caused by thick layers and a thick ABSplus wire, the deviation in form and diameter for small cylinders becomes so large that inspecting their diameter is not possible anymore from 0.9 mm downwards (see Figure 13).
GAPS AND BORES
The evaluation of gaps and bores is reduced to a visual inspection. This is due to the fact that the accuracy of such filigree bores cannot be usefully inspected with a coordinate measuring machine, since the diameter of the measurement tip would be on the same scale as the diameter of the bores and the irregularities that are to be inspected. The results of the visual inspection are summarised in Table 1. One striking aspect concerning bores in LS is their quality, which is worse compared to the other manufacturing technologies. This becomes clear not only by inspecting the smallest depictable diameters, but also by taking a look at the huge form deviation of bores in LS (see Figure 14). The explanation for both form deviation and resolution is found in the way energy is introduced in LS. As described above, in LS less energy is necessary to melt the powder compared to LBM. Thus the threshold between melting the powder and not melting the powder is much smaller. Consequently, if excess energy is led into the part, surrounding powder is melted and form deviations will occur.
ANGLES TOWARD BUILD-PLATFORM
The test-specimen contains five walls, inclined from 80 to 40 degrees towards the build-platform in steps of 10 degrees (see Figures 15-17). These walls serve as a visualisation of the decreasing surface quality with decreasing angles towards the build-platform. Again the walls are inclined in the negative x-direction in order to raise process stability and avoid process aborts. If possible, these walls should be built without support-structures, so that deviations in form and surface quality can be displayed within the critical area. In LS, the surface quality appears hardly affected by different angles towards the build-platform (see Figure 15). Even at an angle of 40 degrees, the stair-effect (visibility of layers on strongly inclined walls) is not visible. Taking a look at the walls built by FLM, it becomes clear that the stair-effect in FLM is visible right from the beginning (see Figure 16). This is due to the coarse resolution of FLM. Additionally, the wall inclined by 40 degrees even has a worse surface quality than the other walls. In FLM, supports are created automatically; therefore users are not able to remove supports from the build job before starting an FLM process. The wall inclined by 40 degrees was built up with supports. Thus the lack of surface quality results from the connection between supports and part.
Fig. 17. -Angles towards build-platform in LBM
The walls in LBM convey the strong influence of the angle between part and build-platform on the surface quality of downskin surfaces (see Figure 17). A first discoloration of the surface can be seen on the wall inclined by 60 degrees. This discoloration is a result of process heat not being able to leave the part, due to the fact that these walls do not have support-structures. At an inclination of 50 degrees, a serious deterioration of the surface quality becomes visible. This deterioration becomes even stronger at an inclination of 40 degrees. Additionally, the edge of the wall inclined by 40 degrees appears frayed. The reason for this can be found in the fact that, with decreasing angle toward the build-platform and increasing heat accumulation inside the part, the curl-effect becomes stronger. In this case, the recoater-unit starts scratching the curled edge. This is a first sign that, at this angle of inclination, process aborts may occur depending on the orientation of the part towards the recoating-unit.
STAIR-EFFECT
The bell-shaped feature on the test-specimen serves as a visualisation of the stair-effect. Comparing the built-up test-specimens, a clear difference in surface quality can be recognised.
In LBM, steps are only slightly visible at an angle of 10 to 15 degrees towards the build-platform (see Figure 18). Due to the thin layer-thickness in LBM, the whole bell-profile appears very fine and smooth. Taking a look at the LS bell-profile, it becomes clear that the surfaces are a bit rougher than in LBM. The stair-effect is already visible at an angle of 20 degrees. In FLM, as mentioned above, single layers are always visible due to the coarse resolution of the technology. In spite of this, the bell-profile conveys that, using the FLM technology, angles of less than about 20 degrees inevitably lead to a loss of shape.
CONCLUSIONS
Comparing the different test-specimens built by LBM, LS and FLM, the first thing to be recognised is that in LBM the most filigree structures can be produced with the best accuracy. However, it also becomes clear that the LBM process is much more complex than, for example, the FLM process. Both designers and operators have to be aware of the typical constraints that are connected with the process-specific characteristics of LBM. This becomes particularly obvious considering the huge influence that part orientation and supports have on process stability and part quality. As mentioned above, LS is very similar to LBM concerning the course of the procedure. This similarity can also be seen when comparing the test-specimens. In LS, most features are just slightly less filigree than in LBM. Due to the fact that support-structures are not needed for LS, a lot of time and money can be saved in pre- and, as a consequence, post-processing. In addition, the process handling is easier and process aborts are a lot less likely.
[START_REF] Rechtenwald | Funk-tionsprototypen aus Peak[END_REF] [10][START_REF]VDI-Guideline 3404: Additive fabrication -Rapid technologies (rapid prototyping) -Fun-damentals, terms and definitions, quality pa-rameter, supply agreements[END_REF]
Fig. 2. -schematic illustration of the LS process
Fig. 4. -test-specimen made of glass-filled polyamide 12 by LS
Fig. 5. -downskin-surface of the test-specimen produced by LBM after support removal
Fig. 6. -snapped walls, orientated parallel to the recoating-unit in LBM
Fig. 7. -results of measuring minimal wall-thicknesses along the y-axis (parallel to the recoating-unit in LBM)
Fig. 9. -minimal wall-thicknesses along the z-axis in FLM
Fig. 11. -breaks in LBM cylinders
Fig. 13. -results of measuring cylinders with a polar angle of 45 and 90 degrees manufactured by FLM
Fig. 14. -form deviation of bores along the y-axis in LS
Fig. 15. -angles towards build-platform in LS
Fig. 16. -angles towards build-platform in FLM
Fig. 18. -comparison of bell-shaped features on the test-specimens built by LBM, LS and FLM
Table 1. -smallest depictable bores and gaps determined by visual inspection
Taking a look at the FLM process, it is obvious that this technology is far less complex and filigree than LBM and LS. Fine features often cannot be displayed, and deviations in form and dimension can often be recognised. However, the FLM process is very easy to handle. Supports are constructed automatically and, when the part is built up, they can be removed in an alkaline bath. Additionally, no precautions have to be taken and no cleaning effort is required, since no powder is handled. The FLM technology is much cleaner than LBM and LS and therefore much more suitable for an office environment. The last thing to be taken into account for this comparison is process costs: the FLM technology is a lot cheaper than LBM (which is the most expensive technology) and LS.
"1003700",
"1003701"
] | [
"300612",
"300612"
] |
https://inria.hal.science/hal-01485809/file/978-3-642-41329-2_16_Chapter.pdf
Stefan Kleszczynski
email: stefan.kleszczynski@uni-due.de
Joschka Zur Jacobsmühlen
Jan T Sehrt
Gerd Witt
email: gerd.witt@uni-due.de
Mechanical Properties of Laser Beam Melting Components Depending on Various Process Errors
Keywords: Additive Manufacturing, Laser Beam Melting, process errors, mechanical properties, High Resolution Imaging
Additive Manufacturing processes are constantly gaining more influence. The layer-wise creation of solid components by joining formless materials allows tool-free generation of parts with very complex geometries. Laser Beam Melting is one possible Additive Manufacturing process which allows the production of metal components with very good mechanical properties suitable for industrial applications. These are for example located in the field of medical technologies or aerospace. Despite this potential a breakthrough of the technology has not occurred yet. One of the main reasons for this issue is the lack of process stability and quality management. Due to the principle of this process, mechanical properties of the components are strongly depending on the process parameters being used for production. As a consequence incorrect parameters or process errors will influence part properties. For that reason possible process errors were identified and documented using high resolution imaging. In a next step tensile test specimens with pre-defined process errors were produced. The influence of these defects on mechanical properties were examined by determining the tensile strength and the elongation at break. The results from mechanical testing are validated with microscopy studies on error samples and tensile specimens. Finally this paper will give a summary of the impact of process errors on mechanical part quality. As an outlook the suitability of high resolution imaging for error detection is discussed. Based on these results a future contribution to quality management is aspired.
Introduction
Additive Manufacturing (AM) offers many advantages for manufacturing of complex and individual parts. It provides a tool-free production, whereby physical parts are created from virtual solid models in a layer by layer fashion [START_REF] Gibson | Additive Manufacturing Technologies -Rapid Prototyping to Direct Digital Manufacturing[END_REF][START_REF] Gebhardt | Generative Fertigungsverfahren -Rapid Prototyping -Rapid Tooling -Rapid Manufacturing[END_REF]. In a first step the layer data is gained by slicing virtual 3D CAD models into layers of a certain thickness. Layer information could also be gained by slicing data from 3D scanning or CT scanning. In the following build process the layer information is converted into physical parts by creating and joining the respective layers. The principle of layer creation classifies the AM process [START_REF]VDI-Guideline 3404: Additive fabrication -Rapid technologies (rapid prototyping) -Fundamentals, terms and definitions, quality parameter, supply agreements[END_REF]. Laser Beam Melting (LBM) as an AM process offers the opportunity of small volume production of metal components. Here a thin layer of metal powder is deposited onto the build platform. In a next step the powder is molten into solid material by moving a laser beam (mostly Nd-or Yb-fibre laser source) across the current cross-section of the part. After this, the build platform is lowered and the two process stages are repeated iteratively until the solid metal part is fully produced (figure 1). As a result of the process the produced components show very good mechanical properties, which are widely comparable to conventionally processed materials [START_REF] Spierings | Designing material properties locally with Additive Manufacturing technology SLM[END_REF][START_REF] Wohlers | Wohlers Report 2011 -Annual Worldwide Progress Report[END_REF] or in some cases even better [START_REF] Wohlers | Wohlers Report 2011 -Annual Worldwide Progress Report[END_REF]. The density of components reaches approximately 100 % [START_REF] Spierings | Designing material properties locally with Additive Manufacturing technology SLM[END_REF], [6 -8]. Potential applications for LBM components are located in the domain of medical implants, FEM optimized lightweight components or the production of turbine blades with internal cooling channels [START_REF] Gibson | Additive Manufacturing Technologies -Rapid Prototyping to Direct Digital Manufacturing[END_REF][START_REF] Gebhardt | Generative Fertigungsverfahren -Rapid Prototyping -Rapid Tooling -Rapid Manufacturing[END_REF][START_REF] Wohlers | Wohlers Report 2011 -Annual Worldwide Progress Report[END_REF].
There are about 158 factors of process influences [START_REF] Sehrt | Möglichkeiten und Grenzen bei der generativen Herstellung metallischer Bauteile durch das Strahlschmelzverfahren[END_REF] from which the parameters of laser power, scanning velocity, hatch distance (distance of melt traces) and layer thickness have been reported as the most influencing ones [START_REF] Spierings | Designing material properties locally with Additive Manufacturing technology SLM[END_REF][START_REF] Meiners | Direktes Selektives Laser Sintern einkomponentiger metallischer Werkstoffe[END_REF][START_REF] Sehrt | Möglichkeiten und Grenzen bei der generativen Herstellung metallischer Bauteile durch das Strahlschmelzverfahren[END_REF]. These main process parameters mentioned are often set into connection by means of the magnitude of volume energy density E_v [START_REF] Spierings | Designing material properties locally with Additive Manufacturing technology SLM[END_REF][START_REF] Meiners | Direktes Selektives Laser Sintern einkomponentiger metallischer Werkstoffe[END_REF][START_REF]VDI-Guideline 3405 -2. Entwurf. Additive manufacturing processes, Rapid Manufacturing -Beam melting of metallic parts -Qualification, quality assurance and post processing[END_REF] which is defined as:
E_v = P_l / (v_s · h · d)    (1)
Fig. 1. Schematic process principle of LBM
where P_l stands for Laser Power, h stands for the hatch distance, v_s stands for the scanning velocity and d stands for the powder layer thickness. Since the process of layer creation determines the resulting part properties [START_REF] Gibson | Additive Manufacturing Technologies -Rapid Prototyping to Direct Digital Manufacturing[END_REF][START_REF] Gebhardt | Generative Fertigungsverfahren -Rapid Prototyping -Rapid Tooling -Rapid Manufacturing[END_REF][START_REF] Spierings | Designing material properties locally with Additive Manufacturing technology SLM[END_REF][START_REF] Meiners | Direktes Selektives Laser Sintern einkomponentiger metallischer Werkstoffe[END_REF] wrong process parameters or technical defects in certain machine components could also cause process errors which deteriorate mechanical properties. Spierings et al. [START_REF] Spierings | Designing material properties locally with Additive Manufacturing technology SLM[END_REF] show that the resulting part porosity mainly depends on the used process parameters and significantly affects the mechanical properties. In addition a correlation between volume energy density and the respective part properties is inves-tigated with the result that volume energy density can be considered as the parameter determining part porosity. Yasa et al. [START_REF] Yasa | Application of Laser Re-Melting on Selective Laser Melting parts[END_REF] investigate the influence of double exposure strategies on resulting part properties. It is noted that the application of re-melting is able to improve surface quality and reduce part porosity.
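For a quick feel for Eq. (1), the following Python snippet evaluates it with illustrative parameter values; the numbers are chosen for this example and are not taken from the paper.

```python
def volume_energy_density(p_laser_w, v_scan_mm_s, hatch_mm, layer_mm):
    """Volume energy density E_v in J/mm^3 according to Eq. (1)."""
    return p_laser_w / (v_scan_mm_s * hatch_mm * layer_mm)

# e.g. 195 W laser power, 1000 mm/s scan speed, 0.10 mm hatch distance, 0.03 mm layer thickness
print(volume_energy_density(195, 1000, 0.10, 0.03), "J/mm^3")   # 65.0
```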
Due to the high safety requirements of some potential application domains and ongoing standardisation efforts, a demand for suitable quality control for LBM technologies has been reported [START_REF]VDI-Guideline 3405 -2. Entwurf. Additive manufacturing processes, Rapid Manufacturing -Beam melting of metallic parts -Qualification, quality assurance and post processing[END_REF][START_REF] Lott | Design of an Optical system for the In Situ Process Monitoring of Selective Laser Melting (SLM)[END_REF][START_REF] Kruth | Feedback control of selective laser melting[END_REF]. Thus far, several approaches for process control and process monitoring have been described in the literature. Kruth et al. monitor the current melt pool using a coaxial imaging system and control the laser power to keep the size of the melt pool constant [START_REF] Kruth | Feedback control of selective laser melting[END_REF]. As the thermal conductivity of metal powder is about three orders of magnitude lower than that of solid metal [START_REF] Meiners | Direktes Selektives Laser Sintern einkomponentiger metallischer Werkstoffe[END_REF], this system can improve part quality for overhanging structures by lowering the laser power when the size of the melt pool fluctuates in these regions. Lott et al. [START_REF] Lott | Design of an Optical system for the In Situ Process Monitoring of Selective Laser Melting (SLM)[END_REF] improve this approach by adding additional lighting to resolve melt pool dynamics at higher resolution. In [START_REF] Craeghs | Online Quality Control of Selective Laser Melting[END_REF], images of the deposited powder layers are additionally taken using a CCD camera, which enables the detection of coating errors caused by a damaged coater blade. Doubenskaia et al. [START_REF] Doubenskaia | Optical System for On-Line Monitoring and Temperature Control in Selective Laser Melting Technology[END_REF] use an optical system consisting of an infra-red camera and a pyrometer for visualisation of the build process and online temperature measurements. All approaches mentioned above are implemented in the optical components or the machine housing of the respective LBM system, which makes it laborious and expensive to retrofit existing LBM machines with these systems. Moreover, the coaxial monitoring systems are limited to the inspection of the melt pool; the result of melting remains uninspected. The CCD camera used in [START_REF] Craeghs | Online Quality Control of Selective Laser Melting[END_REF] is restricted to the inspection of the powder layer, so possible errors within the compound of melt traces cannot be resolved.
In this work the influence of process errors on the resulting part properties is investigated. First, selected process errors are provoked and documented using a high resolution imaging system, which is able to detect errors at the scale of single melt traces. A further description of the imaging system is given in paragraph 2.1 and in [START_REF] Kleszczynski | Error Detection in Laser Beam Melting Systems by High Resolution Imaging[END_REF][START_REF] Jacobsmühlen | High Resolution Imaging for Inspection of Laser Beam Melting systems[END_REF]. In general, process errors can influence process stability and part quality [START_REF] Kleszczynski | Error Detection in Laser Beam Melting Systems by High Resolution Imaging[END_REF]. Therefore, error samples are built by manipulating the main exposure parameters and exposure strategies. Next, tensile specimens with selected errors are built and tested. The results are validated by microscopy studies on the tested tensile specimens. Finally, the correlation between tensile strength, elongation at break, porosity and error type is discussed.
Method
LBM and high resolution system
For the experiments in this work an EOSINT M 270 LBM system (EOS GmbH, Germany) is used. Hastelloy X powder, a nickel-base superalloy suitable for applications such as gas turbine blades, is used as material. The documentation of process errors is carried out with an imaging system consisting of a monochrome 29 megapixel CCD camera (SVS29050 by SVS-VISTEK GmbH, Germany). A tilt and shift lens (Hartblei Macro 4/120 TS Superrotator by Hartblei, Germany) helps to reduce perspective distortion by shifting the camera back and allows placing the focal plane on the build platform using its tilt ability. A 20 mm extension tube reduces the minimum object distance of the lens. The imaging system is mounted in front of the LBM system using a tube construction which provides adjustable positioning in height and distance from the machine window (figure 2). Two orthogonally positioned LED line lights provide lighting for the build platform. Matt reflectors on the machine back and the recoater are used to obtain diffuse lighting from a close distance, which was found to yield the best surface images. The field of view is limited to a small substrate platform (10 cm x 10 cm) to enable the best possible resolving power (25 µm/pixel to 35 µm/pixel) [START_REF] Jacobsmühlen | High Resolution Imaging for Inspection of Laser Beam Melting systems[END_REF]. Image acquisition after powder deposition and laser exposure is triggered automatically using limit switches of the machine's coater blade and the laser hourmeter.
Determination of mechanical properties
Test specimens for tensile testing are built as cylindrical raw parts by LBM. The final specimen shape is produced by milling the raw parts into the standardised shape according to DIN 50125 - B 5x25 [START_REF]DIN 50125 -Testing of metallic materials -Tensile test pieces[END_REF]. Tensile tests are performed according to the specifications of DIN 50125 on a Galdabini Quasar 200 machine. The fragments of the test specimens are used for further microscopy studies, for which unstressed material from the specimens' thread heads is prepared into metallographic grinding samples. The microscopy studies are carried out using Olympus and Novex microscopes. The porosity of the error samples is determined using an optical method according to [START_REF] Yasa | Application of Laser Re-Melting on Selective Laser Melting parts[END_REF] and [START_REF]VDI-Guideline 3405 -2. Entwurf. Additive manufacturing processes, Rapid Manufacturing -Beam melting of metallic parts -Qualification, quality assurance and post processing[END_REF], in which the acquired images are converted to black-and-white images using a constant threshold value; finally, the ratio of black pixels, representing the porosity, is measured.
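The optical porosity determination described above can be sketched in a few lines of Python. This is a generic implementation assuming a grayscale micrograph and a manually chosen constant threshold; it is not the exact software used by the authors, and the file name and threshold value are placeholders.

```python
import numpy as np
from PIL import Image

def optical_porosity(image_path, threshold=128):
    """Binarise a grayscale micrograph with a constant threshold and return
    the ratio of dark (pore) pixels in percent."""
    img = np.asarray(Image.open(image_path).convert("L"), dtype=np.uint8)
    pore_pixels = img < threshold            # dark pixels are treated as pores
    return 100.0 * pore_pixels.sum() / pore_pixels.size

# Hypothetical usage:
# porosity_percent = optical_porosity("cross_section.png", threshold=100)
```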
Documentation of process errors
Process errors
An overview of typical process errors has been given in previous work [START_REF] Kleszczynski | Error Detection in Laser Beam Melting Systems by High Resolution Imaging[END_REF]. In this paper the main focus is on errors that influence part quality and in particular on errors that affect mechanical properties. As mentioned in paragraph 1, mechanical properties strongly depend on the process parameters, which define the energy input for melting the powder and, in consequence, the ratio of distance to width of the single melt traces. Technical defects of the laser source or the choice of wrong process parameter sets could therefore weaken the compound of layers and melt traces, leading to porous regions. On the other hand, too much energy input can lead to heat accumulation; in this case the surface tension of the melt induces the formation of superelevated regions, which can endanger process stability by causing collisions with the recoating mechanism. However, higher energy inputs have been reported to improve mechanical properties due to a better compound of melt traces and lower porosity [START_REF] Yasa | Application of Laser Re-Melting on Selective Laser Melting parts[END_REF]. To provoke errors of both kinds, the main process parameters laser power, hatch distance and scanning velocity are varied by 20 % and 40 % around the standard values, which were found by systematic qualification experiments (see [START_REF] Sehrt | Anforderungen an die Qualifizierung neuer Werkstoffe für das Strahlschmelzen[END_REF]). Additionally, the layer thickness is doubled from 20 µm to 40 µm for one sample while keeping the other process parameters constant. As illustrated by equation (1), these variations directly affect the energy input. Another sample is built using a double exposure strategy, resulting in a higher energy input. For the experiments the stripe exposure strategy is used, whereby the cross-sections of the parts are divided into stripes of 5 mm length. The overlap value of these stripes is another process parameter which could affect the compound of melt traces; therefore, one sample is built with the lowest possible stripe overlap value of 0.01 mm.
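Because equation (1) is a simple product of reciprocals, the relative change of the volume energy density for each provoked error sample follows directly from the varied parameter, independently of the absolute standard values. The short sketch below lists these factors; treating double exposure as simply twice the energy input is an idealisation that neglects the different heat flow discussed later.

```python
# Relative volume energy density E_v / E_v,reference for the provoked error samples.
# With E_v = P_l / (v_s * h * d), a +40 % laser power gives a factor of 1.4, while
# a +40 % scanning velocity, hatch distance or layer thickness gives 1/1.4, etc.
variations = {
    "reference":       1.0,
    "P_l +40 %":       1.4,
    "P_l -40 %":       0.6,
    "v_s +40 %":       1 / 1.4,
    "v_s -40 %":       1 / 0.6,
    "h +40 %":         1 / 1.4,
    "h -40 %":         1 / 0.6,
    "double layer":    0.5,   # d doubled from 20 um to 40 um
    "double exposure": 2.0,   # idealised: energy deposited twice
}
for name, factor in variations.items():
    print(f"{name:>15s}: {factor:.2f} x E_v,ref")
```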
Error samples
Figure 3 shows an image recorded during the build process of the error samples using the high resolution imaging system. The samples are arranged in a matrix. The first three rows of the matrix represent the process parameters scanning velocity (vs), laser power (Pl) and hatch distance (h); these values are varied across the columns from -40 % to +40 % in steps of 20 %. The last row of the matrix contains a reference sample (left), a sample built with double layer thickness (mid left), a sample built with double exposure (mid right) and a sample built with the lowest possible stripe overlap value (right). As can be seen from figure 3, samples representing higher volume energy densities (reduced scanning velocity/hatch, increased laser power or double exposure) appear much brighter and smoother than the samples representing low volume energy density (increased scanning velocity/hatch, reduced laser power or double layer thickness). A closer look at the documented error samples (figure 4) shows that surface irregularities and coarse particles are visible on the sample built with double layer thickness.
The sample with 40 % enlarged hatch distance shows a poor connection of melt traces, which could indicate increased porosity. The sample with 40 % increased laser power shows a strong connection of melt traces, although some superelevated regions are visible at the edges. A comparison of the high resolution images with images taken by microscopy confirms these impressions: here it is clearly visible that there is no connection between the melt traces of the sample built with enlarged hatch distance. The error samples with parameters increased or decreased by 40 % show the strongest deviation from the reference sample, and these parameter sets are used for the production of tensile test specimens. Additionally, tensile test specimens representing the standard, double layer, double exposure and reduced stripe overlap parameter sets are built.
Mechanical Properties
Tensile strength
For each type of error six specimens are produced as described in paragraph 2.2 to ensure a sufficient level of statistical certainty. The determined values for tensile strength and the associated standard deviations are presented in figure 5; additionally, the calculated values for volume energy density are added to the chart. As can be seen, the respective bars representing tensile strength and volume energy density show a similar trend for almost all specimens. In the case of the "double layer" specimen this trend is not applicable: higher values for the mean tensile strength are determined (comparing "double layer" to "H + 40 %" and "Vs + 40 %"), although this specimen has the lowest value for the volume energy density. It is also remarkable that specimen "P -40 %" shows a tensile strength which is about 14 % (117 MPa) lower than that of specimen "double layer", while the values for volume energy density are at the same level. Specimens produced using higher energy input parameter sets show tensile strength values within the range reported in the literature for conventionally produced material [START_REF] Sehrt | Anforderungen an die Qualifizierung neuer Werkstoffe für das Strahlschmelzen[END_REF]. At this point it has to be stated that the maximum value (1110 MPa) is achieved after heat treatment.
Elongation at break
Figure 6 shows the determined values for the elongation at break compared to the calculated values of volume energy density. Unlike the results for tensile strength, there seems to be no connection between the elongation at break and the volume energy density. Furthermore, there are no significantly divergent trends recognizable between high energy input and low energy input parameter sets. It is remarkable that three different levels of values are recognizable in the chart. First, there is the level of about 30 % elongation at break, which is determined for most of the specimens (reference, double exposure, stripe overlap, double layer, Vs + 40 %). Second, there is the level of about 25 % to 28 % elongation at break, which is detected for four specimens (Vs -40 %, H -40 %, P + 40 %, H + 40 %); here it is remarkable that the three high energy input parameter sets (Vs -40 %, H -40 %, P + 40 %) show the lowest standard deviation compared to all other specimens. Finally, the lowest value for elongation at break is measured for specimen "P -40 %", representing the parameter set with the lowest energy input. In the literature the values for elongation at break for Hastelloy X lie in the range of 22-60 % [START_REF] Sehrt | Anforderungen an die Qualifizierung neuer Werkstoffe für das Strahlschmelzen[END_REF], depending on the respective heat treatment. With the exception of specimen "P -40 %" all determined values are within this range; however, the determined values are at least 50 % lower than the maximum values reported.
Fig. 6. Results from determination of elongation at break compared to calculated values of volume energy density
Porosity
After mechanical testing, selected specimens are used for the determination of porosity using microscopy according to the procedure described in paragraph 2.2. For the reference specimen the porosity is determined to be 0.04 %, which is comparable to results from previous studies [4 -7] emphasising that LBM components achieve up to 99 % density. Specimen "Pl -40 %", which has the lowest value for volume energy density, shows the highest porosity; the determined value is 3.94 %. The results from the porosity analysis (as presented in figure 7) underline previously published statements that porosity grows with decreasing energy input. It has to be stated that, in general, the "high energy input" specimens show very similar porosity values (0.020 % to 0.027 %, see figure 7). The determined porosity values are higher for the low energy input specimens (0.227 % to 3.938 %), which confirms the assumption that porosity is strongly dependent on the energy input. The porosity values of the "reference" and the "reduced stripe overlap" specimen differ from each other by 0.02 %, with the reduced stripe overlap specimen showing the lower porosity value. This is remarkable because the reduced stripe overlap was suspected to increase part porosity. One explanation for this result might be found in the exposure strategy: as mentioned in paragraph 3.1, the cross-sections of the parts are subdivided into stripes of a certain width, and since the stripe orientation is rotated after exposure of each layer, gaps in the compound of melt traces could be closed during exposure of the next layer. On the other hand, it has to be stated that the details from the photomicrographs used for the analysis show only one certain area of the whole cross-section. Moreover, pores are distributed stochastically, which makes it difficult to make a statement with an accuracy of a hundredth of a percent. Figure 8 shows photomicrographs of the reference specimen (middle), the reduced scanning speed specimen (top, highest tensile strength) and the reduced laser power specimen (bottom, lowest tensile strength). Specimen "vs -40 %" shows few and small pores; the porosity value is 0.025 %. The same appearance is visible in the photomicrographs of the reference sample, which shows slightly more but still small pores. Specimen "Pl -40 %", in contrast, shows clearly more and bigger pores, which seem to be distributed stochastically (figure 8, bottom).
Discussion
The results presented in the previous sections of this paper show that the mechanical properties strongly depend on the process parameters. In general, it can be stated that increasing the energy input improves the tensile strength and reduces the porosity. It is to be expected that porosity affects tensile strength, since irregularities like pores induce crack formation at a certain mechanical load. The elongation at break, on the other hand, is not systematically affected by the different energy input parameter sets.
Here, there are some groups of parameter sets which show values at similar levels. However, there is no general connection between energy input and elongation at break for the investigated material; the exposure strategy seems to have more influence in this case. As can be seen from figure 6, the three high energy input specimens "Vs -40 %", "H -40 %" and "P + 40 %" show similar values for elongation at break. The "double exposure" specimen has a calculated volume energy density which is comparable to that of the other high energy specimens. Nevertheless, the elongation at break of this sample lies in the same region as that of the "reference" sample and some "low energy input" samples. One possible explanation could be that the "double exposure" sample was built using two different energy input parameter sets: one for melting the powder and another for re-melting the produced layer. Thus the heat flow was different from that of the "high energy input parameter" samples, which evidently induced different mechanical properties. The "high energy input" specimens show improved values for tensile strength but lower values for elongation at break compared to the reference sample. In contrast, the "double exposure" sample shows an improved value for tensile strength at constant ductility. Figure 9 compares the results of the tensile and porosity studies depending on the volume energy density. For this purpose the respective specimen numbers are plotted into the chart; for identification see the explanation in the chart. Comparing the two logarithmic interpolations shows that they run contrary to each other, and both quantities seem to approach horizontal asymptotes for high values of the volume energy density. The tendencies in the tensile tests underline the results from the porosity determination (specimen 7: double layer, Rm = 833 MPa, porosity 0.227 %; specimen 8: hatch distance plus 40 %, Rm = 813 MPa, porosity 1.633 %). In this case specimen 8 shows a higher porosity and a lower tensile strength. Comparing these results with the images from figure 4 allows the conclusion that a poor connection of melt traces causes higher tensile strength values than no connection of melt traces. Specimen 7 shows that the previously mentioned correlation between tensile strength, volume energy density and porosity is not applicable to every kind of error: here the low value for the volume energy density does not correlate with the interpolation for tensile strength and porosity of the other specimens.
Fig. 9. Connection between porosity, tensile strength and volume energy density
This shows that the volume energy density is suitable rather for estimating tendencies in tensile strength and porosity. A more significant influence is exerted by the type of error, i.e. the kind of energy input or the exposure strategy.
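The logarithmic interpolations shown in figure 9 can be reproduced with a least-squares fit of the form y = a + b·ln(E_v). The sketch below illustrates the procedure only; the arrays are placeholders that would have to be filled with the measured (E_v, tensile strength) and (E_v, porosity) pairs, which are not fully tabulated in the text.

```python
import numpy as np

def log_fit(e_v, y):
    """Least-squares fit of y = a + b * ln(E_v); returns (a, b)."""
    b, a = np.polyfit(np.log(np.asarray(e_v, dtype=float)),
                      np.asarray(y, dtype=float), 1)
    return a, b

# Placeholder arrays: to be replaced by the measured values behind figure 9.
# e_v = [...]   # volume energy densities of the specimens, J/mm^3
# rm  = [...]   # tensile strengths, MPa
# a, b = log_fit(e_v, rm)
```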
Conclusions
In this paper a brief demonstration of documenting possible process errors in the area of LBM using a high resolution imaging system was given. The results and their validation via microscopy show a good correlation between the recorded images and the microscopy findings. High resolution imaging might be an alternative and more pragmatic approach for process monitoring and quality management in the area of LBM, because the system is easy to implement and compatible with every LBM system that features a window for the inspection of the process.
In a second step the impact of process errors on tensile strength, porosity and elongation at break was investigated. It could be shown that a higher energy input mostly induces higher values for tensile strength and lower porosities; conversely, the lower the volume energy density, the lower the determined tensile strength and the higher the porosity. For some error samples it was found that the volume energy density does not correlate directly with the resulting part properties. This was noticed in detail by comparing the tensile strengths of samples with similar values for volume energy density, which nevertheless differed by about 117 MPa; here the nature of the melt trace connection seems to have the bigger influence. The mentioned disagreement between volume energy density and resulting part properties was especially noticeable in the determination of the elongation at break. Some samples that were built with "high energy parameter sets" showed a reduced elongation at break, which suggests that the higher energy input embrittles the material compared to the reference specimen. At the same time another specimen, with a comparably high volume energy density resulting in a higher tensile strength, showed higher values for the elongation at break, which were at the same level as those of specimens produced with low energy input parameters.
Nevertheless, all determined mean values for tensile strength and elongation at break were in the range of known values from conventionally produced samples. Only the sample with the lowest tensile strength, lowest elongation at break and highest porosity, which was produced by reducing the laser power by 40 %, showed values at the lower end of the known range. The elongation at break, which is a measure of the ductility of a material, did not reach more than 50 % of the known maximum value from the literature. This means that for some applications, where high elongation at break values are required, heat treatments are still necessary to improve this particular part property.
For future work, further investigation of the influence of varying process parameters is necessary for different materials and different machine systems, which might use other laser sources or inert gases for flooding the process chamber. Especially in the case of the elongation at break it would be interesting to analyse the influence of different exposure strategies. Using high resolution imaging systems for collecting data on different error types and materials could be a useful way to create a knowledge database which links process parameters, the resulting surface images and the resulting mechanical part properties. In a next step an automated image analysis could detect significant differences in the structure of melt traces and might therefore also be applicable to quality management and production documentation.
Fig. 2. Camera setup in front of LBM system EOSINT M 270
Fig. 3. Documentation of error samples using high resolution imaging
Fig. 5. Results from determination of tensile strength compared to calculated values of volume energy density
Fig. 7. Summary of determined porosity values
Acknowledgment
The IGF project 17042 N initiated by the GFaI (Society for the Promotion of Applied Computer Science, Berlin, Germany), has been funded by the Federal Ministry of Economics and Technology (BMWi) via the German Federation of Industrial Research Associations (AiF) following a decision of the German Bundestag. | 31,408 | [
"1003702",
"1003701"
] | [
"300612",
"303510",
"300612",
"300612"
] |
01485810 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485810/file/978-3-642-41329-2_17_Chapter.pdf | Dipl.-Des. Andreas Fischer, M.Sc.
Dipl.-Ing. (FH) Steve Rommel, MBE
email: steve.rommel@ipa.fraunhofer.de
Prof. Dr. Thomas Bauernhansl
New Fiber Matrix Process With 3D Fiber Printer -A Strategic In-process Integration of Endless Fibers using Fused Deposition Modeling (FDM)
Keywords: Additive Manufacturing, Fused Deposition Modeling, Fibers
Product manufacturers are faced with a constant decrease in product development time, short product life cycles and an increasing complexity of products. At the same time, it is becoming more and more difficult to deliver the expected product features while satisfying the requirements of trends such as eco-design, resource efficiency, lightweight design and individualization. These developments are key characteristics which favor the growing role and importance of additive manufacturing (AM). Additive manufacturing techniques such as Fused Deposition Modeling (FDM), Selective Laser Sintering (SLS) or Selective Laser Melting (SLM) are transitioning from building prototypes to manufacturing series products within a short time and with the highest requirements regarding material characteristics as well as product performance. Product performance is one of the main drivers for the creation of multi-material composites. These composites, consisting of a matrix and embedded fibers, are used more and more in various industries, not just aerospace and automotive. They offer a solution to lightweight requirements, among others. Yet, so far, the manufacturing of composites is only partially automated; it requires a lot of manual labor and typically requires tools, resulting in added costs. A combination or integration of additive and composite manufacturing is very limited or non-existent, which is an indicator of the difficulty of bringing these two technologies together. To meet this challenge, the 3D Fiber Printer has been developed. It is an AM method using Fused Deposition Modeling (FDM) that integrates fibers into the process while manufacturing a product. The products are still manufactured layer-wise, with each layer containing the composite on demand. The newly developed and patented print head is designed in such a way that fiber and matrix material are applied in the right setup and ratio through a nozzle-style print head so that the fiber is completely embedded. This paper presents the setup and possibilities of this new application.
INTRODUCTION TO ADDITIVE MANUFACTURING
Additive manufacturing is a technology that is considered to have the potential to start a new industrial revolution. Mostly known as rapid prototyping, a variety of terms is used for "additive manufacturing" depending on the purpose of the product:
Rapid Prototyping (RP): building sample parts and prototypes
Rapid Tooling (RT): building tools for other manufacturing methods
"Generative Manufacturing" or Additive Manufacturing (AM): end-products in small to medium quantities.
The various technologies within additive manufacturing differ from conventional manufacturing technologies in the strategy by which parts are produced. Contrary to CNC milling, for example, which is a subtractive manufacturing method and therefore produces a part by removing material from a block, an additively manufactured product is built by placing material layer upon layer until the final part shape is completed. Each layer is solidified individually, but also adhered to the adjacent layers to form a continuous shape. Adhering these layers to each other is achieved by introducing energy, typically in the form of heat or light. This principle of building a product, with all its advantages, is also its biggest disadvantage.
The boundary created between these layers is a potential weak point within the product when it comes to mechanical or thermal stresses during use. One way to overcome this issue is to take precautions in the design and construction of a product following the stress analysis and to strengthen the part where needed; this approach is in itself common to conventional design methods. Additionally, the placement of the product in the build space of the machine, and therefore its orientation, is another way to reduce the risk of failure.
It can be argued that the advantages as well as the disadvantages of additive manufacturing result from the reduction of a three-dimensional manufacturing problem to a two-dimensional, layer-wise one. The selection of a manufacturing strategy or method and the corresponding planning process of conventional manufacturing are no longer applicable when it comes to additive manufacturing; the planning process can be considered lean. Benefits of this circumstance are savings in time and cost when product complexity increases. Regarding time and cost it is also true that an increase in detail by reducing the layer thickness, or an increase in product size, typically results in a significant increase in production or build time.
Besides the classification of additive manufacturing according to DIN 8580, the following criteria can be used to classify additive manufacturing: the physical state of the material, the material the product is made of, the usage of the products within the product development process, and the manufacturing principle. One of the best describing classifications is probably the manufacturing principle, as shown in table 1. Considering the production of serial end-use products, Laminated Object Manufacturing (LOM) as well as 3D Printing (3DP) are less well suited. The strength of 3DP, for instance, lies in the possibility to produce colored samples and prototypes by coloring the binding material. The material of choice is modified gypsum for full-colored prototypes or silica sand for sand forming tools. The gypsum products have to be infiltrated with an epoxy or wax in order to gain mechanical strength; building finely detailed products is not possible due to the low mechanical strength of the binding process. There are also some systems using metal powder for applications in tooling; these systems, however, are very limited and harder to find. Laminated Object Manufacturing (LOM) uses foil, paper or metal sheets to manufacture a product, with each sheet representing one layer. The principle uses laser cutting or stamping tools to contour the sheets, which are then placed on top of each other and adhered. The current significance of this principle is limited. On the other hand, combining this principle with other additive manufacturing principles like Fused Deposition Modeling (FDM) could result in new applications, due to the advantage of LOM in generating relatively big surfaces in a short amount of time. The biggest disadvantage of LOM is the amount of scrap.
All the other principles mentioned in the table currently play a more important role with regard to additively manufactured products. The main reason is the combination of fine detail, handling and the mechanical strength of their products. The most significant criteria for the quality of these products are:
Surface quality (in the direction of the layer as well as along the layer build-up)
Mechanical strength within a layer
Mechanical strength between two layers
Density of the additively applied material (e.g. air pockets)
This paper will in particular focus on Fused Deposition Modeling (FDM) as the principle of choice for the application.
STATE-OF-THE-ART OF FUSED DEPOSITION MODELING (FDM)
Fused Deposition Modeling (FDM) uses liquefied or melted thermoplastic material which is applied in layers using at least one nozzle. The fast cooling of the thermoplastic material in the applied contour solidifies the product. Depending on the part shape and complexity, a support structure made of support material may be required; this is typically the case when the angle of the overhanging contour is bigger than 45°. This support structure is removed in a post-processing step using either mechanical or thermal force or a bath in which the support material dissolves. The support material is applied by a nozzle just like the product material; this can be either the same nozzle or a second one. All applied material is supplied as a filament strand run off from a spool. In general, every thermoplastic material can be used for FDM. The current material selection is, however, limited to a few; these include ABS, ABSi, PLA, PC and PPSU. Professional FDM equipment is currently capable of handling a maximum of ten (10) different materials, which can be combined layer-wise or in various mixtures. These base materials are also available in a variety of colors, increasing the application field. The possibility of layer-wise building with different materials and different colors within the same build job or product makes colored single-material or multi-material products possible.
The size limit of products produced as a single part with no post-gluing of segments is determined by the build volume of the equipment. The largest build volume is currently 914 mm x 610 mm x 914 mm.
The surface quality or roughness is defined by the extruded layers. Individual layers vary in thickness between 0.178 mm and 0.254 mm and are also visible. If this is an issue from an aesthetic standpoint, the surface can be smoothed afterwards using acetone steaming; the new surface is close to that of an injection molded part. Alternatively, FDM products can also be surface coated in order to achieve higher resistance or to add other functions to the product.
Tests have shown that the mechanical strength of an FDM product is approximately 80-85 % of that of a comparable injection molded product.
FDM has one advantage compared to other additive manufacturing technologies: it is easy to stop the process at any given point in order to place other products or components into the part and then continue and finish the building process. Successfully tested examples include metal components, as shown in figure 1, and even electronics, as shown in figure 2. This expands the functionality of additively manufactured products, e.g. by a lighting function.
When trying to feed a fiber together with a thermoplastic material through an extruder, the basic difference between the two becomes evident immediately. The thermoplastic filament, usually supplied in thicknesses of either 1.75 mm or 3.0 mm, is able to handle pull and push forces. This circumstance allows the filament to be fed without major issues by using either two gears or one gear and a ball bearing to restrain the filament. By turning the gear a continuous feed rate is applied, which in turn enables the material to be transported all the way through the heating area and the nozzle to the nozzle tip. It is important that the filament is fed through the nozzle with as little friction as possible, and it is also beneficial for the filament to move as straight as possible through the print head unit.
Contrary to the filament, the fibers are unable to handle push forces; in order to move a fiber forward, pull forces have to be applied. From a mechanical point of view, fibers act like a rope, being able to handle only pull forces. Under these circumstances it became clear that the mechanism used to feed the filament forward will not work for the fibers in the same way.
WEIGHT AND MASS RATIO OF FIBER TO THERMOPLASTIC MATERIAL
The diameter of the fiber, e.g. a carbon fiber or a carbon fiber bundle, is much smaller than the diameter of the thermoplastic filament; the thickness of a 1k carbon fiber bundle, for instance, is only 190 µm. This is also the reason why the carbon fiber cannot be transported by the molten thermoplastic material simply pulling it along. Assuming a round cross-section of the fiber, its cross-sectional area amounts to approximately A_Fiber = 28,300 µm². The cross-sectional area of a 0.5 mm diameter nozzle is, in comparison, A_nozzle = 196,000 µm², and the cross-sectional area of a 1.75 mm thick filament is A_Filament = 2,405,000 µm².
In order to completely fill the nozzle with fiber and thermoplastic material, the latter has to fill the remaining cross-sectional area of A_nozzle - A_Fiber = 167,700 µm².
Assuming the same feed rate for the fiber and the filament, the nozzle would receive filament material corresponding to 2,405,000 µm² of cross-section, which is more than 14 times the remaining cross-sectional area. The effect would be excess material, resulting in a clogged nozzle.
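The quoted cross-sectional areas follow from simple circle geometry. The sketch below reproduces the numbers and the resulting mismatch factor; the final feed-rate relation is our reading of the underlying volume balance and not an explicit formula from the text.

```python
import math

def circle_area(diameter_um):
    # Area of a circular cross-section in um^2
    return math.pi * (diameter_um / 2) ** 2

a_fiber    = circle_area(190)    # 1k carbon fiber bundle:  ~28,300 um^2
a_nozzle   = circle_area(500)    # 0.5 mm nozzle:           ~196,000 um^2
a_filament = circle_area(1750)   # 1.75 mm filament:        ~2,405,000 um^2

a_free   = a_nozzle - a_fiber    # ~167,700 um^2 left for the polymer
mismatch = a_filament / a_free   # ~14.3 -> equal feed rates would clog the nozzle

# Volume balance: the filament feed rate v_f must satisfy
#   v_f * a_filament = v_print * a_free,
# i.e. the filament has to be fed roughly 14 times slower than the deposition speed.
feed_ratio = a_free / a_filament  # ~0.07
print(round(a_fiber), round(a_nozzle), round(a_filament),
      round(mismatch, 1), round(feed_ratio, 3))
```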
This basic calculation brought up the idea of embedding the fibers before they enter the extruder. This would have been a much easier task to complete, since the fibers could have been evenly combined with the thermoplastic material in an external apparatus.
In order to solve the problem stated above, a filament strand with the exact diameter of the nozzle would need to be manufactured; for the example mentioned, a 0.5 mm thick filament strand of thermoplastic material already containing the fiber would be needed. Using such a filament, although theoretically and practically possible, would result in the same problems as the fibers themselves: the filament would be too weak to handle the push forces required for the FDM process and the nozzle to work.
SOLUTION
The problems discussed under points 3.1.1 and 3.1.2 can be reduced and simplified to the problem of the fiber feeding process. The fiber(s) and the filament have to be run at different feed rates, and therefore it is impossible to feed the fibers and the filament using the same transport unit. As mentioned above, transporting the filament is not an issue and can be considered state of the art; the challenge is to feed the fiber(s) at a rate matching that of the filament. To reduce the complexity of finding and running the matching feed rate of the fiber, one idea was to fix the endless fiber directly onto the build platform prior to starting the build process. This procedure would automatically ensure that the fiber is applied according to the feed rate of the extruder head. Proceeding this way, however, causes other problems.
One is that the fiber would need to be attached manually; the other is that the fiber, and with it the matrix already produced, would be under constant tension, putting additional stress on the part during the build process. This issue comes into effect especially when the head has to move along a curve or into a corner (depending on the geometry of the product) instead of in a straight line. Describing a curve or running into a corner applies strong shear forces to the fiber matrix, so the risks of delamination or pull-off increase. Additionally, the strong bending of the fiber at the nozzle tip poses the risk of damaging the fibers as well as increased wear of the nozzle itself. These two risks are especially high due to the dynamic pull forces with which the fibers are pulled through the nozzle.
With these issues and risks in mind, the search for a solution ended in a modified application of the water-jet pump principle.
This principle is characterized by the fact that a flowing medium is guided through a pipe and, within that pipe, along an opening. The flow of the medium creates suction and thereby draws in the medium from the opening. This principle cannot be transferred one-to-one to fiber and filament but requires adjustments: on the one hand, a very slowly moving medium like the molten thermoplastic material does not create suction; on the other hand, such a thin fiber would not be moved by suction. These issues are solved by the design of the 3D Fiber Printer nozzle. The fiber is fed through a side channel into the main channel in which the filament is fed. The meeting point of the two channels has to be located exactly where the diameter of the nozzle has reached 0.5 mm. The extruder nozzle is designed in such a way that the molten filament is fed over several millimetres through the channel with a diameter of 0.5 mm. Within these few millimetres the fiber is introduced to the molten filament and "adheres" to it, forming a mechanical bond. This bond then pulls the fiber along with the molten material, achieving the right mixing ratio of thermoplastic material to fiber for the best embedding of the fiber as well as the correct feed rate of both.
DESIGN OF THE 3D FIBER PRINT NOZZLE
The basis for the design of this special nozzle is the standard design of a conventional brass nozzle, in order to retain the same simple exchange mechanism for the nozzles within the print head. An additional side channel is designed which meets the main channel at a specific point; the fibers are introduced through this channel. The size and geometry of this channel are too small to be manufactured with conventional technologies. The solution is to manufacture this nozzle by the additive manufacturing technology of Selective Laser Melting (SLM). The side channel also forms a curve (see figure 3). This curve is needed to smooth the entrance of the fiber into the main channel and thereby reduce the wear of the nozzle at the meeting point during use. Carbon fibers are known to act like sandpaper on any burr or hard edge, resulting in the destruction of the nozzle in the long run.
FIRST TEST RESULTS
The first test runs were performed using the designed and built nozzle together with a 3K carbon fiber and a thermoplastic elastomer from the urethane group from Bayer MaterialScience. The starting point was the application of a straight line to test the complete embedding of the fiber in the molten filament (see figure 4). The base plate for the test was a heated aluminum plate with the temperature set to 110 °C. For adhesion purposes a Kapton adhesive tape was applied to the aluminum plate; Kapton tape is highly heat-resistant and also shows good adhesion to the urethane material. The first test results showed that both materials could be applied and fed in the right setup, and the visual inspection showed a good embedding of the fiber within the product.
NEXT STEPS AND OUTLOOK -3D FIBER PRINTER
The industrial implementation of the developed technology requires further optimization and tests; these include, e.g., pull tests to test the matrix. It was, however, shown that fibers can be embedded into products manufactured with the Fused Deposition Modeling (FDM) technology.
Besides testing the adhesion forces between the thermoplastic material and the fibers, which has to include chemical testing, the next planned steps are long-term testing of the nozzle and its design with regard to wear, as well as of the feeding mechanism. The latter tests are also intended to determine whether SLM is the right technology to manufacture the nozzles and whether the SLM-manufactured nozzles are suited for industrial use.
Fig. 1. Embedded wrench | Fraunhofer IPA
Fig. 3. FDM 3D Fiber Printer nozzle design proposal including a curved fiber channel | Fraunhofer IPA
Fig. 4. First printing test with new nozzle and fiber included | Fraunhofer IPA
Table 1. Classification of Additive Manufacturing according to the manufacturing principle
Principle group | AM principle | Abbreviation | Principle
Sintering | Selective Laser Sintering | SLS | Localized powder melting
Sintering | Selective Laser Melting | SLM | Localized powder melting
Sintering | Electron Beam Melting | EBM | Localized powder melting
Extrusion | Fused Deposition Modeling | FDM | Application of liquefied polymers using nozzles
Extrusion | Multi-Jet-Modeling | MJM | Application of liquefied polymers using nozzles
UV-Hardening | Stereolithography | SLA | Localized co-polymerisation
Binder Technology | 3-Dimensional Printing | 3DP | Localized application of binder
Laminating | Laminated Object Manufacturing | LOM | Contouring and applying of sheet material
LITERATURE/REFERENCES | 19,996 | [
"1003699",
"1003698"
] | [
"443235",
"443235",
"443235"
] |
01485812 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485812/file/978-3-642-41329-2_19_Chapter.pdf | Ines Dani
email: ines.dani@iws.fraunhofer.de
Aljoscha Roch
Lukas Stepien
Christoph Leyens
Moritz Greifzu
Marian Von Lukowicz
Energy Turnaround: Printing of Thermoelectric Generators
Introduction
In developed countries, ca. 40 % of the total fuel consumption is used for heating. Of this, about one third is wasted due to insufficient technologies for transforming heat into electricity. This wasted heat may be used by exploiting the Seebeck effect, which enables special materials to generate electrical energy when exposed to a temperature gradient. Thermoelectric generators (TEGs) built up from these materials are a durable means of energy supply without any moving components. Drawbacks of state-of-the-art materials are their low efficiency, the limited availability or toxicity of the raw materials, and high costs.
Polymer materials in combination with printing techniques offer the possibility to manufacture flexible generators from non-toxic and easily available raw materials. The operation temperature of these materials ranges from room temperature up to 100 °C; small temperature gradients of 1 K can be exploited.
Material development
PEDOT:PSS (poly(3,4-ethylenedioxythiophene) poly(styrenesulfonate)) is an intrinsically p-type conductive polymer and possesses a high conductivity, good stability and flexibility. All printing experiments presented here are based on the printing paste SV3 (Heraeus) with a viscosity of 1 Pa·s (at a shear rate of 250 s⁻¹). The solids content is about 2.3 wt%.
Due to the high viscosity of the PEDOT:PSS printing paste, a dispenser printer (Asymtec) was used (figure 1). To increase the charge carrier concentration of the polymer, PEDOT:PSS was doped with dimethyl sulfoxide (DMSO). For characterisation of the thermoelectric properties the pastes were printed on glass substrates and dried at 50 °C for 2 days. The resulting films with a thickness of 22 µm were tested with a modified van der Pauw method [2]. The electrical conductivity increased from 8 S/cm without DMSO addition to 84 S/cm with 6 wt% DMSO. The Seebeck coefficient of about 15 µV/K was not influenced by DMSO doping.
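For orientation, the conversion from a van der Pauw measurement to a conductivity value can be sketched as follows. This uses the simplified formula for a symmetric, homogeneous film (correction factor close to 1) rather than the modified method of [2], and the resistance values in the example are hypothetical.

```python
import math

def conductivity_van_der_pauw(r_horizontal_ohm, r_vertical_ohm, thickness_cm):
    """Conductivity in S/cm of a homogeneous film from two van der Pauw
    resistances, using the symmetric approximation
    R_sheet = pi / ln(2) * (R_h + R_v) / 2."""
    r_sheet = math.pi / math.log(2) * (r_horizontal_ohm + r_vertical_ohm) / 2
    return 1.0 / (r_sheet * thickness_cm)

# Example: a 22 um (2.2e-3 cm) thick film; the resistances are hypothetical values.
print(conductivity_van_der_pauw(1.2, 1.3, 2.2e-3))  # ~80 S/cm
```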
Fig. 1. Scheme of the dispenser printing process
Printing process and results
Until now, no n-type conductive polymer for thermoelectric applications has been commercially available. Therefore, only p-type conductive PEDOT:PSS was used to print a unileg thermoelectric generator (figure 2). Polyimide film was chosen as substrate due to its temperature stability up to 400 °C and its flexibility. In a first step, silver paste (Heraeus) was printed as contact material and sintered at 200 °C. PEDOT:PSS with 6 wt% DMSO serves as the active thermoelectric material; it was printed on top of the silver contacts and dried at 60 °C for 24 h. A multilayer structure of the active material reduces the internal resistance by half, enabling a higher number of thermocouple legs at the same internal resistance.
Using a dispenser printing process enables a fast adaptation of the unileg geometry; standard parameters are a length of 1 mm and a height of 10 mm for one unileg. A TEG with 60 unilegs corresponds to a total length of about 30 cm (figure 3).
Fig. 3. Dispenser printing process of a TEG with 60 unilegs
For analyzing the thermoelectric performance of the device, the printed unileg film was wrapped around an adapter made from two aluminum parts and a PEEK connector with low heat conductivity (figure 4). The cold side was held constant at 20 °C while the temperature of the hot side was increased stepwise. Increasing the temperature gradient leads to a linear increase in the measured voltage (figure 5); for a temperature difference of 90 K a voltage of 37 mV was determined.
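As a rough plausibility check (our own estimate, not part of the paper), the open-circuit voltage of an ideal unileg chain is N·S·ΔT. Comparing this upper bound with the measured 37 mV indicates how much of the applied gradient effectively drops across the printed legs; the shortfall may reflect thermal contact resistance in the adapter and the contribution of the silver interconnects.

```python
def ideal_unileg_voltage_mV(n_legs, seebeck_uV_per_K, delta_t_K):
    """Idealised open-circuit voltage in mV, assuming the full temperature
    difference drops across every leg and neglecting the metal interconnects."""
    return n_legs * seebeck_uV_per_K * delta_t_K * 1e-3

v_ideal = ideal_unileg_voltage_mV(60, 15.0, 90.0)  # ~81 mV upper bound
v_measured = 37.0                                   # mV, reported value
print(v_ideal, v_measured / v_ideal)                # ratio ~0.46
```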
Summary
A printed unileg generator consisting of only a p-conductive material was realised by dispenser printing of DMSO-doped PEDOT:PSS on polyimide, to demonstrate the manufacturing of flexible thermoelectric generators from non-toxic and easily available raw materials. A TEG with 60 legs and silver contacts was characterized. By using a multilayer design the internal resistance was decreased by more than 50 %. With a cold side temperature of 20 °C and a temperature difference of 90 K, a voltage of 37 mV was generated.
Fig. 2. Scheme of a unileg TEG: p-type conductive thermoelectric material (blue) and silver contacts (grey)
Fig. 4. CAD model of the adapter for winding up the unileg film (left) and complete TEG with contacts (right)
Fig. 5. Voltage in dependence of the temperature gradient in the TEG, cold side at 20 °C
"1003704"
] | [
"488104",
"488104",
"488104",
"488104",
"96520",
"96520",
"96520"
] |
01485813 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485813/file/978-3-642-41329-2_1_Chapter.pdf | Dr Erastos Filos
email: erastos.filos@ec.europa.eu
Keywords: crisis, innovation, R&D, investment
Introduction
Manufacturing is the activity of making goods, usually on a large scale, through processes involving raw materials, components or assemblies, with different operations divided among different workers. Manufacturing encompasses equipment for materials handling and quality control and typically includes extensive engineering activity such as product and system design, modelling and simulation, as well as tools for planning, monitoring, control, automation and simulation of processes and factories.
It is increasingly seen as a priority area of economic activity especially for economies that have been hit by the recent financial and economic crisis.
The paper aims to identify issues pertaining to manufacturing innovation in the framework of Horizon 2020. Smart policies will be required to link up R&D and innovation strategies at regional, national and European levels and to offer needed incentives for growth and differentiation as well as to leverage investments towards the ambitious goal of countering Europe's de-industrialisation.
Addressing the Challenges of Europe's Manufacturing
The challenges for manufacturing in Europe can be summarised as in fig. 1.
Threats: de-industrialisation; manufacturing/engineering unpopular with large parts of society; outsourcing trends; low investment in R&D&I; lack of support for exporting SMEs; lack of incentives for skills development in STEM.
Fig. 1. SWOT analysis of European manufacturing
Manufacturing accounts for around 16% of Europe's GDP. It remains a key driver of R&D [1], innovation, productivity growth [2], job creation and exports. 80% of innovations are made by industry, 75% of EU exports are in manufactured products, and each job in manufacturing generates two jobs in services. Since the beginning of the financial and economic crisis, however, EU employment in manufacturing has fallen by 11%. The recent moderation of the recovery has kept employment levels in manufacturing low and the prospects of a fast rebound remain rather weak. Over the last decade the share of manufacturing in EU employment and value added has been declining. In 2011 it accounted for over 14% of employment and over 15% of total value added, representing 23% and 22% respectively in the non-financial business economy. As shown in fig. 2, the crisis has had severe consequences for manufacturing in Europe. However, high-technology manufacturing [3] has performed better than the rest of the industry.
Recent econometric research [4] points to the fact that a country's or a region's capacity to export products that only few others can make is based on an accumulation of manufacturing knowledge and capability, leading to its competitive advantage over others.
A recent World Economic Forum report on "The Future of Manufacturing" [5] highlights the key role of enabling technologies and infrastructures in allowing "manufacturing to flourish and contribute to job growth". As these are growing in importance and sophistication, it becomes challenging to develop and maintain them.
At the beginning of this decade the European Union put forward Europe 2020, a ten-year growth strategy that aims at more than just overcoming the crisis that continues to afflict many of Europe's economies. It aims to address the shortcomings of the current growth model by supporting "smart, sustainable and inclusive growth" along five objectives [6]: more employment, more investment in R&D and innovation, attaining the triple-20 climate change/energy targets, better education and higher skills levels and, finally, eliminating poverty and social exclusion.
The next EU Framework Programme for research and innovation, called Horizon 2020 [7], directly supports the policy framework of Europe 2020. It is due to start on 1 January 2014 and to run over seven years with a total budget of around EUR 80 billion, according to the European Commission's proposal. The proposal is currently under debate in the European Parliament and the Council. Horizon 2020 is structured along three priorities: (1) excellent science, (2) industrial leadership and (3) societal challenges (see fig. 3).
Compared to previous EU Framework Programmes, Horizon 2020 will bring together research and innovation in a single programme; it will focus on the multidisciplinary societal challenges European citizens face, and it will aim to simplify the participation of companies, universities and other institutes in all EU countries and beyond.
The programme's activities related to manufacturing are mostly concentrated in the activity area called "Leadership in enabling and industrial technologies". Key activities will focus on roadmap-based research and innovation involving relevant industry and academic research stakeholders.
In 2012 the Commission put forward an update of its Industrial Policy Communication [8], which supports a further 2020 objective aimed at reversing Europe's de-industrialisation: industry is to account for 20% of EU GDP by 2020.
Towards Manufacturing Innovation
The Factories of the Future PPP
The Factories of the Future initiative was launched as a public-private partnership (PPP) in 2009, constituting a EUR 1.2 bn part of the European Economic Recovery Plan [9]. In total 98 projects were launched after three calls for proposals [10], and 52 projects from the final call of 2012 are currently under grant agreement negotiations. The projects represent research, technology development and innovation related activities that cover the full spectrum of manufacturing, from the processing of raw materials to the delivery of manufactured products to customers, across many sectors, covering both large-volume and small-scale production. They deal with matters such as supply chain configurations, virtual factories, material processing and handling, programming and planning, customer-driven design, energy efficiency, emissions reductions, new processing technologies, new materials, and upgrading of existing machines and technologies, and include many projects involving elements of, or wholly focused on, various uses of Information and Communication Technologies (ICT).
High Performance Manufacturing
There is also a diverse range of projects clustered under the theme of High Performance Manufacturing. Projects here focus on: production of high precision plastic parts; development of photonic device production capabilities; development of precision glass moulding processes; enhancing tool-making technologies for high-precision micro-forming; development of new manufacturing routes for micro- and nano-scale feature manufacturing; zero defect manufacturing; control of milling processes for thin-walled work-pieces; production of graphene, etc. There are also projects investigating additive manufacturing and robotics for maintenance tasks applied to large scale structures. Consideration is also being given in some projects to modular reconfigurable production systems and to the development of capabilities to customise products and to put in place the necessary manufacturing capabilities needed for customisation.
Impacts for this theme consist of improved dynamics for cutting processes, higher precision, and improved reliability during changing process conditions. But also evident are impacts in terms of lowering commissioning and ramp-up times, and avoiding investments in new production machinery through enhancing re-usability of manufacturing technologies and systems, and extending the life of machinery through add-ons and upgrades. Some developments are evidently also enabling, in the sense that once the processes and associated technologies are developed they will enable those who purchase them to engage in their own innovation (additive manufacturing is one example of this).
Sustainable Manufacturing
Under the theme Sustainable Manufacturing, projects, also diverse in character and focus, are dealing with matters that have relevance to creating a more environmentally responsible approach to manufacturing. Here there are projects that are also addressing customisation issues, with one example being customised clothing that allows the customer to choose more environmentally benign materials. There are also other projects dealing with customisation with an eco-dimension, delivering environmental assessment tools while also providing web-based access to support the customisation process. Work is also being undertaken on the development of the supporting manufacturing and supply chain infrastructure necessary for a shift away from large volume standardised products, to small scale production of customised products. Other projects are working on topics such as: environmental footprint reduction for metal formed products; resource efficient manufacturing systems; waste energy recovery; eco-efficient firing processes; use of predictive maintenance to achieve optimal energy use; condition and energy consumption monitoring; harmonisation of product, process and factory lifecycle management, etc.
For projects with a sustainability focus, impacts tend to concern reductions in costs, energy consumption, emissions and material wastage, but also increased availability of processes and machines and improved plant efficiencies. Also evident are impacts such as better understanding of customer needs, reduced time to market, reductions in delivery times, reduced transportation costs, and more time- and cost-effective customisation, as well as improved decision-making as a result of a more in-depth understanding of environmental impacts. Potentially, the capability to produce customised products cost-effectively on a small scale could result in a shift away from large, expensive, centralised production facilities (with their associated significant transportation-driven carbon emissions) to more local production closer to the point of sale.
ICT for Manufacturing
In the domain of ICT there are projects that focus on platforms, which can be considered as the hardware, system architectures and software, necessary to undertake a range of related tasks. Typically these platform projects are focused on specific interests such as: manufacturing information and knowledge management; supply chain configuration; creation of virtual factories assembled from multiple independent factories; and collaborative engineering. But there are also ICT projects addressing simulation, game-based training, support for end-of-life material recovery and remanufacturing, advanced robotics, energy efficiency monitoring, laser welding, etc.
Impacts of these projects include reduced costs, accelerated product and process engineering, greater flexibility, improved quality, better equipment availability, reduced use of consumables, lower energy consumption, etc. In many of these projects ICT is more than just a technology: it is an enabling technology, in that it has the potential to bring about changes in methods and procedures or to open up new possibilities. Some of the targeted impacts in this respect include developing software targeted specifically at SMEs to take account of their particular constraints (for example, limited time and expertise). Another enabling impact is greater flexibility, both in terms of the capabilities of specific machines and processes and in the capability to redeploy these in new configurations (factory layouts) to meet changing demands. In some of the ICT projects, consideration is also being given to add-ons and upgrades to existing machines and systems, which will help to improve the performance of existing equipment and to extend its useful life.
Lessons from Four Years of Operation of the PPP
In a report from a recent workshop [11] involving all Factories of the Future projects launched under the first three years of the PPP's operation, the following observations were made.
Overall, the Factories of the Future PPP has been a good initiative for generating a family of industry-related projects. Industry participation has increased to over 50% (with 30% SME participation). The PPP also helps to create a broader spectrum of research projects, addressing higher levels of technology readiness than basic research projects. For SMEs, who need solutions designed to fit their constraints (time, money, expertise), the PPP proves to be very valuable if sufficient consideration is given to projects that address these constraints.
Academia and research institutions participate in these projects, and for them there is an added value of being able to provide students with research projects driven by a clear industrial need, and also to provide contacts for these students with industry personnel and facilities. However, greater emphasis needs to be given to the research training potential of those PPP projects, to maximise the benefits for young researchers that come from working with industry. Development of manufacturing oriented Living Labs may have a role to play here in enhancing industry-academia interactions.
Projects address a wide range of technologies and issues relevant to enhancing the competitiveness of European manufacturing, and many potential impacts are evident in these projects. The challenge is how to turn impacts into tangible business benefits in the marketplace, which requires more focus on business and market issues. Projects are beginning to realise that a much stronger business and market focus is needed within the PPP, and they have identified activities in this area that could form the basis for clustering.
One of the great benefits of clustering is bringing different perspectives and expertise together, which points towards clustering based on diversity rather than similarity; however, some balance between the two does need to be achieved. Some clustering activities are relatively easy to do, such as joint dissemination or collective contributions to standardisation. Others are more challenging, such as addressing business- and market-related issues and sharing results, which raises confidentiality and IPR issues. Yet it is this hard-to-do clustering that is likely to have the most impact for the projects and the PPP as a whole. Projects have identified that clustering which addresses business- and market-related matters can be of great importance in helping to take results to the market. Difficulties such as coping with IPR issues will arise, and guidance on how best to undertake challenging clustering activities will be needed, but none of the problems that people raise regarding this type of clustering are insurmountable.
Conclusions
R&D and innovation activities under a public-private partnership scheme are a way forward for Europe's industry to address effectively the challenges of manufacturing innovation.
It will be essential to address new and promising technologies such as 3D-printing, model-based controls, composites and nano-based materials as well as industrial ICT to stimulate innovation in manufacturing and offer a competitive advantage to European enterprises.
Develop and implement 'new manufacturing'
Manufacturing is likely to evolve along three paths [12]:
On-demand manufacturing: Fast-changing demand from internet-based customers requires mass-customised products. The increasing trend towards last-minute purchases and online deals requires European manufacturers to deliver products rapidly and on demand to customers. This will only be achievable through flexible automation and effective collaboration between suppliers and customers.
Optimal (and sustainable) manufacturing: Producing competitively priced products with superior quality, environmental consciousness, high security and durability, and envisaging product lifecycle management for optimal and interoperable product design, including value-added after-sales services and take-back models.
Human-centric manufacturing: Moving away from a production-centric towards a human-centric activity, with greater emphasis on generating core value for humans and better integration with life, e.g. production in cities. Future factories have to be more accommodating towards the needs of the European workforce and facilitate real-time manufacturing based on machine data and simulation. 'Assisted working' should help an ageing workforce to leverage skills and knowledge effectively for the creation of innovative products.
Continue the structuring of Europe's industrial landscape
The introduction of the Recovery Plan PPPs has contributed to this structuring effect by encouraging strategic thinking in terms of roadmapping, financial commitment and 'impact-driven' thinking among European industrial players. The formation of industrial research associations such as EFFRA [13], in the case of the Factories of the Future PPP, alongside the four annual calls for proposals in this domain, has had a significant structuring effect in synergy with the European Technology Platforms, which in the case of MANUFUTURE has also led to the creation of national Technology Platforms in almost every EU Member State and beyond.
Leverage the benefits of advanced manufacturing across the EU
Smart specialisation in advanced manufacturing is a way forward, through identification and strengthening of the competitive advantages existing in EU regions (e.g. skills, R&D capability, industrial output, ICT infrastructure and complementarities with neighbouring regions). While national and European R&D programmes in the past have focused on developing new and powerful technologies, they lacked the incentives and the institutional setup to encourage their dissemination and take-up across sectors, countries and regions. Roadmap-driven R&D and innovation partnerships (PPPs) could play a significant role in bringing together the relevant stakeholders (industrial firms, academia, standardisation bodies, funding bodies including VC investors, and public administrations at all levels) to generate market impact and thus strengthen Europe's industrial capability. In Europe, entrepreneurial skills appear to be required to drive results exploitation forward, from EU-funded projects, via national/regional-level demonstrators (pilot plants), to cross-fertilisation and uptake in different industrial sectors [14].
Fig. 2. Index of production for total industry and main technology groups in manufacturing, EU27, 2005-2012, seasonally adjusted (2005=100). Source: Eurostat
Fig. 3. The Horizon 2020 Framework Programme
Strengths:
European mechanical engineering (machines, equipment & systems industry)
Industrial software (embedded software including factory automation + robotics, ERP + analytics, PLM software)
High & flexible automation equipment enabling mass customisation
Sustainability as a driver
Weaknesses:
Despite strengths in industrial ICT, Europe's ICT industry is globally insignificant & fragmented
Lack of large integrators (e.g. OEMs offering system-level products rather than components)
Incoherent trade/competition/industrial policies
Opportunities:
Concerns over climate change & nuclear risks drive demand for sustainability-conscious European technology
Growing wealth of global consumers drives demand for customised quality products
Manufacturing increasingly popular with policymakers
Local capabilities enhanced by 3D printing, photonics-based equipment & methods, micro-, nano- & bio-based materials, personal robotics, ICT infrastructures (e.g. Clouds)
[1] 80% of private investment.
[2] 3% on average vis-à-vis 1% of the economy between 2000 and 2007.
[3] High-technology manufacturing comprises the aerospace, pharmaceuticals, computers, office machinery, electronics/communications and scientific instruments sectors, with relatively high R&D expenditure on value added.
[4] See "Complexity and the Wealth of Nations" in: http://harvardmagazine.com/print/26610?page=all and "The Art of Economic Complexity" in: http://www.nytimes.com/interactive/2011/05/15/magazine/art-of-economiccomplexity.html.
[5] "The Future of Manufacturing. Opportunities to Drive Economic Growth", World Economic Forum Report, April 2012; "Special Report on Manufacturing and Innovation", The Economist, 19 April 2012; "Why Manufacturing Still Matters", New York Times, 10 February 2012; "Future Factories", Financial Times, 11 June 2012.
[6] Europe 2020 strategy, see relevant documents under: http://ec.europa.eu/europe2020/index_en.htm
[7] Horizon 2020: The EU Framework Programme for research and innovation, see relevant documents under: http://ec.europa.eu/research/horizon2020
[8] "A stronger European industry for growth and economic recovery", COM(2012) 582 final.
[9] A European Economic Recovery Plan, COM(2008) 800 final, 26 November 2008, electronically available under: http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2008:0800:FIN:en:PDF
[10] Interim Assessment of the Research PPPs in the European Economic Recovery Plan, Brussels, 2011, electronically available: http://ec.europa.eu/research/industrial_technologies/pdf/research-ppps-interim-assessment_en.pdf
[11] Impact of the Factories of the Future Public Private Partnership, Workshop Report, 22 April 2013, available electronically: http://ec.europa.eu/research/industrial_technologies/pdf/fof-workshop-report-11-12-032013_en.pdf
[12] http://www.actionplant-project.eu/public/documents/vision.pdf
[13] The European Factories of the Future Research Association (EFFRA), http://www.effra.eu
[14] José Carlos Caldeira, From Research to Commercial Exploitation - The Challenges Covering the Innovation Cycle, presentation at Advanced Manufacturing Workshop, Brussels, 27 May 2013, to become electronically available here:
Disclaimer
The views outlined in this publication are the views of the author alone and do not necessarily reflect the official position of the European Commission on this matter.
| 22,087 | [ "1003705" ] | [ "309933" ] |
01485814 | en | [ "info" ] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485814/file/978-3-642-41329-2_20_Chapter.pdf | Steffen Nowotny
email: steffen.nowotny@iws.fraunhofer.de
Sebastian Thieme
David Albert
Frank Kubisch
Robert Kager
Christoph Leyens
Generative Manufacturing and Repair of Metal Parts through Direct Laser Deposition using Wire Material
In the field of Laser Additive Manufacturing, modern wire-based laser deposition techniques offer advantageous solutions for combining the high quality level of layer-by-layer fabrication of high-value parts with the industry's economic requirements regarding productivity and energy efficiency. A newly developed coaxial wire head allows omni-directional welding operation and thus the use of wire even for complex surface claddings as well as for the generation of three-dimensional structures. Currently, several metallic alloys such as steel, titanium, aluminium and nickel are available for the generation of defect-free structures. Even cored wires containing carbide hardmetals can be used for the production of extra wear-resistant parts. Simultaneous heating of the wire using efficient electric energy significantly increases the deposition rate and the energy efficiency. Examples of application are light-weight automotive parts, turbine blades of nickel super alloys, and complex inserts of injection moulds.
Introduction
Laser buildup welding is a well-established technique in industrial applications of surface cladding and direct fabrication of metallic parts. As construction materials, powders are widely used because of the large number of available alloys and the simple matching with the shaped laser beam. However, the powder utilization is always less than 90%; characteristic values are in the range of 60%, so a substantial part of the material is lost. Additionally, the hazardous metal dust poses a risk for the machine, the operators and the environment. The alternative use of wires as buildup material offers a number of advantages: the material utilization is always 100%, entirely independent of the part's shape and size. The process is clean and safe, so the effort for protecting personnel and environment is much lower. The wire feed is also completely independent of gravity, which is of great advantage especially in applications of three-dimensional material deposition.
The main challenge compared to powder is the realization of an omni-directional welding operation with stable conditions of the wire supply. The only feasible solution is to feed the wire coaxially along the centre axis of the laser beam. This requires a complex optical system which permits the integration of the wire material into the beam axis without any shadowing of the laser beam itself. Accordingly, the work presented here was focused on the development of a new optics system for the practical realization of the centric wire supply as well as the related process development for the defect-free manufacturing of real metallic parts.
Laser wire deposition head
Based on test results of previous multi beam optics [START_REF] Nowotny | Laser Cladding with Centric Wire Supply[END_REF], a new optical system suitable for solid-state lasers (slab, disk, fiber) has been developed. The optical design of the head shown in Figure 1 is based on reflective optical elements and accommodates a power range of up to 4 kW. The laser beam is symmetrically split into three parts so that the wire can be fed along the centre axis without blocking the beam. The partial beams are then focused into a circular spot with a diameter ranging from 1.8 to 3 mm. The setup enables a coaxial arrangement for beam and wire, which makes the welding process completely independent of weld direction. The coaxial alignment is even stable in positions that deviate from a horizontal welding setup.
The wire is fed to the processing head via a hose package that contains the wire feeder, coolant supply and protection gas delivery. Wire feeders from all major manufacturers can be easily adapted to this setup and are selected based on wire type, feed rate and operation mode. Typical wire diameters range from 0.8 to 1.2 mm; however, the new technology is in principle also suitable for finer wires of about 300 µm in diameter. The wires can be used in cold- and hot-wire setups to implement energy source combinations. The new laser wire processing head is useful for large-area claddings as well as for additive multilayer depositions to build three-dimensional metallic structures.
Fig. 1. Laser processing optic with coaxial wire supply
For process monitoring, the wire deposition head may be equipped with a camera-based system which measures the dimensions and temperature of the melt bath simultaneously during the running laser process. Optionally, an optical scanning system controls the shape and dimensions of the generated material volume in order to correct the build-up strategy if necessary [START_REF] Hautmann | Adaptive Laser Welding (in german) REPORT[END_REF].
Deposition process and results
Fig. 2 shows a typical laser wire cladding process during a multiple-track deposition.
The process shows a stable behaviour with extremely low emissions of splashes and dust compared to powder-based processes. The integrated on-line temperature regulation keeps the temperature of the material at a constant level throughout the manufacturing process.
Fig. 2. Process of laser wire deposition of a metal volume
The part is built from a large number of single tracks, which are placed according to a specific build-up strategy. This strategy is designed by computer calculation prior to the laser generative process. Normally, intermediate machining between the tracks and layers is not necessary. The primary process parameters laser power, wire feeding rate and welding speed have to be adapted to each other to enable a continuous melt flow of the wire into the laser-induced melt pool. For given primary parameters, the process stability depends on the heat flow regime during the build-up process. Besides the temperature regulation mentioned above, interruptions between selected layers may also be useful to cool down the material. If necessary, active gas cooling of the material can also be applied [START_REF] Beyer | High-Power Laser Materials Processing Proceedings of the 31st International Congress on Applications of Lasers and Electro-Optics[END_REF].
In Fig. 3 the cross section of a generated structure of the nickel super alloy INCONEL718 is shown. The solidified structure is defect-free and each layer is metallurgically bonded to the next. Through optimization of the process parameters, even the crack-sensitive IN718 structure is crack-free. An optimized build-up strategy allows a minimal surface roughness of RZ = 63 µm. A layer thickness of 1.4 mm and a build-up rate of 100 cm³/h can be achieved with 3.0 kW laser power and 2.0 m/min welding speed. Simultaneous heating of the wire using efficient electric energy (hot-wire deposition) significantly increases the deposition rate, up to about 160 cm³/h [START_REF] Pajukoski | Laser Cladding with Coaxial Wire Feeding Proceedings of the 31st International Congress on Applications of Lasers and Electro-Optics[END_REF].
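As a quick plausibility check of these figures, the volumetric build-up rate determines the required wire feed speed once a wire diameter is fixed. The short Python sketch below assumes a 1.2 mm wire and 100% material utilization (the wire diameter used for this particular experiment is not stated in the text) and only illustrates the order of magnitude of the corresponding feed rates.

```python
import math

def wire_feed_m_per_min(build_rate_cm3_per_h: float, wire_diameter_mm: float) -> float:
    """Wire feed speed needed to supply a given volumetric build-up rate."""
    wire_area_cm2 = math.pi * (wire_diameter_mm / 20.0) ** 2  # wire radius in cm, squared
    feed_cm_per_h = build_rate_cm3_per_h / wire_area_cm2
    return feed_cm_per_h / 100.0 / 60.0  # cm/h -> m/min

# 100 cm³/h (cold wire) and 160 cm³/h (hot wire) with an assumed 1.2 mm wire
for rate in (100.0, 160.0):
    print(f"{rate:5.1f} cm³/h -> {wire_feed_m_per_min(rate, 1.2):.2f} m/min wire feed")
```

Under these assumptions, the cold-wire rate corresponds to roughly 1.5 m/min of wire and the hot-wire rate to about 2.4 m/min.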
Fig. 3. Cross-section of a laser generated wall of INCONEL718
Figure 4 illustrates two examples of layer-by-layer generated parts. Figure 4a shows a turbine blade out of INCONEL718. The blade's height is 100 mm, and inside it has a hollow structure. The height of the inlet tube of Figure 4b is 85 mm, and it consists of the light-weight alloy AlMg5. The height of the single layers is 0.4 mm for the Ni alloy and 0.7 mm for the Al alloy.
Fig. 4. (a) Turbine blade out of INCONEL718; (b) inlet tube out of AlMg5
Summary
The current state of laser wire deposition shows the wide range of potential applications of this new technique. In addition to the well-established powder welding and powder-bed melting techniques, wires represent an advantageous alternative for high-quality laser deposition. A specially developed laser head with coaxial wire supply permits omni-directional welding operation and thus opens new dimensions in additive manufacturing. Equipment for on-line process regulation is also available and can be used for quality management; in particular, the regulation concerns the melt bath's dimensions and its surface temperature.
As construction material, commercially available welding feedstock wires can be used. The material utilization is always 100%, the welding process is clean, and the variant of hot-wire cladding advantageously increases productivity and energy efficiency. The generated metal structures are completely dense, an important precondition for a high mechanical strength of the final parts. The surface roughness is typically lower than RZ 100 µm, and the model-to-part accuracy lies in the range of a few tenths of a millimetre.
Examples of application are corrosion protection coatings on cylinders, turbine parts of Nickel and Titanium [START_REF] Brandl | Deposition of Ti-6Al-4V using laser and wire Surface & Coatings[END_REF] alloys as well as light-weight parts for automobile use.
| 9,161 | [ "1003706" ] | [ "488104", "488104", "488104", "488104", "488104", "96520" ] |
01485816 | en | [ "info" ] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485816/file/978-3-642-41329-2_22_Chapter.pdf | Kesheng Wang
email: kesheng.wang@ntnu.no
Quan Yu
email: quan.yu@ntnu.no
Product Quality Inspection Combining with Structure Light System, Data Mining and RFID Technology
Keywords: Quality inspection, Structure Light System, Data mining, RFID
INTRODUCTION
Product quality inspection is a nontrivial procedure during the manufacturing process, both for semi-finished and end products. 3D vision inspection has developed rapidly and is increasingly applied in product quality inspection, with the advantages of high precision and applicability compared with commonly used 2D vision approaches. 3D vision is superior in the inspection of multi-feature parts because it provides height information. 3D vision techniques comprise approaches based on different working principles [START_REF] Barbero | Comparative study of different digitization techniques and their accuracy[END_REF]. Among these approaches, the Structure Light System (SLS) is a cost-effective technique for industrial production [START_REF] Pernkopf | 3D surface acquisition and reconstruction for inspection of raw steel products[END_REF][START_REF] Xu | Real-time 3D shape inspection system of automotive parts based on structured light pattern[END_REF][START_REF] Skotheim | Structured light projection for accurate 3D shape determination[END_REF]. By projecting specific patterns onto the inspected product, the camera captures corresponding images. The 3D measurement information of the product is retrieved in the form of a point cloud on the basis of these images. With the generated 3D point cloud, automated quality inspection can be performed with less human interference, where data mining approaches are commonly used [START_REF] Ravikumar | Machine learning approach for automated visual inspection of machine components[END_REF][START_REF] Lin | Measurement method of three-dimensional profiles of small lens with gratings projection and a flexible compensation system[END_REF].
However, although the quality can be decided on the basis of the SLS and data mining approaches, this information only becomes truly valuable when it is used to improve the production process and to achieve real-time data access and quality traceability.
In this paper, Radio Frequency Identification (RFID) technology is used to attach the quality information to the product. RFID uses a wireless non-contact radio system to identify objects and transfer data from tags attached to movable items to readers; it is fast and reliable and does not require line of sight or physical contact between the reader/scanner and the tagged objects. By assigning an RFID tag to each inspected product, it is possible to identify the product type and to query the quality inspection history. An assembly quality inspection problem is selected as a case study to test the feasibility of the proposed system. The proposed approach is an attractive alternative for SMEs, considering the fast product type updates driven by a fast-changing market.
The paper is organized as follows: Section 1 introduces the general applications of 3D vision in manufacturing and the importance of combining SLS, data mining and RFID technology. Section 2 introduces the architecture of the combined system and the working process at each system level. Section 3 presents a case study of the feasibility of the system combining the three techniques. Section 4 concludes on the applicability of the combined system.
QUALITY INSPECTION SYSTEM ARCHITECTURE ON THE BASIS OF STRUCTURE LIGHT SYSTEM, DATA MINING AND RFID
The RFID 3D quality inspection system combines the functions of product quality inspection and RFID tracing and tracking. With an RFID tag attached to the product, the system takes pictures of the inspected product as inputs, generates the 3D point cloud and finally writes the quality-related information of the product to the RFID tag. Thus, it is possible to monitor the product quality along the production line and achieve real-time quality control.
2.1 System architecture
The quality inspection system comprises 4 levels as shown in Figure 1: the 3D vision level, the data processing level, the computational intelligence level and the RFID level. Within each level of the system, the data is converted in sequence into the point cloud, the feature vector, the quality information and the writable RFID data. Each level is introduced as follows:
1. The 3D vision level consists of the Structured Light System, which uses a camera together with a projector to generate the point cloud of the inspected product.
2. The data processing level comprises quality-related feature determination and extraction. The product quality is quantified on the basis of the point cloud according to the design requirements. The feature vector generated after this processing is the input of the next level.
3. The computational intelligence level uses data mining approaches to achieve automated quality classification on the basis of the feature vector.
4. The RFID level comprises the RFID hardware and software, which achieves product tracking and control by writing and reading the RFID tag attached to the product.
Introduction of the Structured Light System
The Structured Light System (SLS) is one of the typical 3D vision techniques. An SLS accomplishes point cloud acquisition by projecting specific patterns onto the measured object and capturing the corresponding images. The point cloud of the object surface can then be generated with image analysis approaches.
The hardware of an SLS consists of a computer, an image capture device and a projector. Figure 2 shows the working process of a typical SLS, which can be divided into 4 steps:
1. Pattern projection. A coded light pattern is projected onto the scene by the projector. The pattern can be either a single one or a series, depending on the type of code.
2. Image recording. The inspected object is captured by the camera, and the captured images are stored in sequence if pattern series are used. The scene is also captured beforehand, without the object, as a reference. Comparing the images with the inspected object to those without it, the pattern is observed to be distorted by the presence of the object, which encodes the height information.
3. Phase map generation. The images captured in step 2 are analysed by the computer with fringe analysis techniques on the basis of the pattern encoding rule. The wrapped phase maps are obtained first and are then unwrapped to obtain maps with a continuous phase distribution.
4. Transformation from phase to height. The height value of each image pixel is derived from its phase by phase calibration or phase-height mapping, comparing with the reference obtained in step 2. After the calibration, the pixels in the image are transformed to points in metric units and the height value of each pixel is calculated, so that the 3D point cloud of the inspected object is formed.
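The paper does not specify which pattern code is used; purely as an illustration of steps 3 and 4, the sketch below assumes a standard four-step phase-shifting code (four patterns shifted by 90°) and a linear phase-to-height calibration factor obtained from the system calibration.

```python
import numpy as np
from skimage.restoration import unwrap_phase

def height_map(i1, i2, i3, i4, ref_phase, k_phase_to_height):
    """Height map from four phase-shifted fringe images (float arrays of equal shape).

    ref_phase is the unwrapped phase of the empty reference scene and
    k_phase_to_height an assumed linear phase-to-height calibration factor.
    """
    wrapped = np.arctan2(i4 - i2, i1 - i3)               # step 3: wrapped phase map
    continuous = unwrap_phase(wrapped)                   # step 3: unwrapping
    return k_phase_to_height * (continuous - ref_phase)  # step 4: phase -> height
```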
Decision support using data mining approaches
To achieve automated product quality inspection, it is important to use computational intelligence approaches that classify the product quality with little manual interference. Data mining methods are often used to analyze the massive data and extract the knowledge needed to ascertain the product quality [START_REF] Wang | Applying data mining to manufacturing: the nature and implications[END_REF]. Data mining based quality inspection requires input variables related to product quality. Although the point cloud can be acquired using the SLS in the form of 3D coordinates, it generally comprises a huge number of points and cannot be fed directly to data-driven classification methods. In this case, the point cloud generated by the SLS has to be processed according to the specifics of the product and converted into a vector containing the most useful quality-related parameters. It is efficient to focus on the part of the point cloud that best represents the product quality, which can be seen as the Region of Interest (ROI) of the point cloud. Further, a vector
X = {x1, x2, …, xn}
is extracted from the mass of points, comprising feature values xi calculated from the point cloud. Thus, the point cloud is converted to a single vector of geometrical features which best represent the product quality information. With this simplification, it becomes feasible to select the most suitable data mining approach on the basis of the extracted feature vectors to perform the quality classification. For example, three typical data mining approaches are commonly used for classification problems: Artificial Neural Networks (ANN), Decision trees and Support Vector Machines (SVM).
Decision Tree
A decision tree is one of the data mining approaches applied in many real-world applications as a solution to classification problems. A decision tree is a flowchart-like tree structure, where each internal node denotes a test on an attribute, each branch represents an outcome of the test, and each leaf node holds a class label. The construction of decision tree classifiers does not require any domain knowledge or parameter setting and is therefore appropriate for exploratory knowledge discovery.
C4.5 is a classic algorithm for decision tree induction. Its successor, C5.0, is available in IBM SPSS Modeler®. Using this software, it is easy to accomplish decision tree induction and testing.
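C5.0 itself is proprietary; as an open-source stand-in (not the tool used in the paper), a CART-style tree from scikit-learn can be trained on the extracted feature vectors in a few lines, where X holds one feature vector per inspected wheel and y the class labels.

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def evaluate_tree(X, y):
    """Fit a decision tree on the feature vectors and report its cross-validated accuracy."""
    clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
    accuracy = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold cross-validation
    return clf.fit(X, y), accuracy
```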
Artificial Neural Networks
As another effective data mining approach, an Artificial Neural Network consists of layers, each containing a number of neurons. Several parameters of an ANN are adjustable, such as the number of hidden layers and neurons, the transfer functions between layers and the training method. A powerful ANN toolbox is available in Matlab® and can be highly customized by the user to obtain the best result.
Support Vector Machines (SVM)
A Support Vector Machine (SVM) is a supervised learning method for data analysis and pattern recognition. The standard SVM is designed for binary classification. Given a set of training examples, each marked as belonging to one of two categories, several SVM training algorithms are available to build a model that assigns examples to the corresponding category. New examples are then predicted to belong to a category based on the constructed model. For multi-class classification problems, a commonly used approach is to construct K separate SVMs, in which the k-th model yk(x) is trained using the data from class Ck as the positive examples and the data from the remaining K - 1 classes as the negative examples; this is known as the one-versus-the-rest approach.
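The one-versus-the-rest construction can be sketched as follows with scikit-learn; the kernel and its parameters are placeholders that would have to be tuned for the actual feature vectors and are not values taken from the paper.

```python
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_ovr_svm(X_train, y_train):
    """Train K binary SVMs, one per class, each against the remaining K - 1 classes."""
    model = make_pipeline(
        StandardScaler(),  # scale the geometric features before the SVM
        OneVsRestClassifier(SVC(kernel="rbf", C=1.0, gamma="scale")),
    )
    return model.fit(X_train, y_train)

# predicted = train_ovr_svm(X_train, y_train).predict(X_new)
```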
RFID level
Radio Frequency Identification (RFID) is one of numerous technologies grouped under the term of Automatic Identification (Auto ID), such as bar code, magnetic inks, optical character recognition, voice recognition, touch memory, smart cards, biometrics etc. Auto ID technologies are a new way of controlling information and material flow, especially suitable for large production networks [START_REF] Elisabeth | The RFID Technology and Its Current Applications[END_REF]. RFID is the use of a wireless non-contact radio system to transfer data from a tag attached to an object, for the purposes of identification and tracking. In general terms, it is a means of identifying a person or object using a radio frequency transmission. The technology can be used to identify, track, sort or detect a wide variety of objects [START_REF] Lewis | A Basic Introduction to RFID technology and Its Use in the Supply Chain[END_REF]. RFID system can be classified by the working frequency, i.e. Low Frequency (LF), High Frequency (HF), Ultra High Frequency (UHF) and Microwave. Different frequency works for various media, e.g. UHF is not applicable to metal but HF is metal friendly. Thus, the working frequency has to be used on the basis of tracked objects.
The hardware of an RFID system includes RFID tags, RFID readers and RFID antennas. An RFID tag is an electronic device, either read-only or read-write, that can store and transmit data to a reader in a contactless manner using radio waves. Tag memory can be factory or field programmed, partitionable and optionally permanently locked, which enables users to save customized information in the tag and read it anywhere, or to kill the tag when it will not be used anymore. Bytes left unlocked can be rewritten more than 100,000 times, which gives a long useful life. Moreover, tags can be classified by powering method, i.e. passive tags without a power source, semi-passive tags with a battery, and active tags with a battery, processor and I/O ports. A power supply increases the cost of the tag but enhances the reading performance. Furthermore, a middleware is required as a platform for managing acquired RFID data and routing it between tag readers and other enterprise systems. Recently, RFID has become an increasingly interesting technology in many fields such as agriculture, manufacturing and supply chain management.
CASE STUDY
In this paper, a wheel assembly problem is proposed as the case study for the implementation of the combination of SLS, data mining approaches and RFID technology. In the first step, the assembly quality classification is introduced. Secondly, the point cloud of the object is acquired using the SLS and converted into the feature vector, which is defined according to the assembly requirements and provided to the data mining classifier as input. Finally, the quality is decided by the classifier and converted to RFID data, which is saved in the RFID tag attached to the object.
Problem description
To verify the feasibility of the proposed 3D vision based quality inspection, LEGO® wheel assembly inspection is taken as the example in this paper. In this wheel assembly inspection, the objective is to check the assembly quality. A wheel consists of 2 components, the rim and the tire. Possible errors occurring during the assembly process, as shown in Figure 3, are divided into 5 classes according to the relative position of the rim and tire:
1. Wheel assembly without errors
2. Tire is compressed
3. There exists an offset for one side
4. Rim is detached from the tire
5. Rim is tilted
Fig. 3. Wheel assembly classification
Each class has a corresponding inner layout, which is shown respectively in Figure 4.
The section views show the differences among the classes.
Fig. 4. Inner layout of the wheel
2D vision is not applicable to distinguish some cases because of the similarity of pictures, as shown in Figure 5.
Fig. 5. Similarity from the top of the view
It is noticeable that there is not much difference between the two wheels seen from the top view in the image; however, the height differs when seen from the side. The Structured Light System (SLS) is an effective solution which can acquire the 3D point cloud of the inspected part, so that the real height values of the parts are obtained and errors can be recognized directly from the metric information.
Feature extraction for the classification
The hardware configuration is shown in Figure 6. The image capture device is a SONY XCG-U100E industrial camera with UXGA resolution (1600×1200 pixels) and Gigabit Ethernet (GigE) interface, together with a Fujinon 9 mm lens. A BenQ MP525 digital projector is employed to project the patterns. The hardware control and the image processing are performed with the commercial software Scorpion®. After calibration, the accuracy of the measurement reaches 0.01 mm in this case study. Regarding the 5 classes of the wheel assembly problem, it is important to extract the features most related to the assembly status from the point cloud. The extracted features describe the pose of a wheel, i.e. its height and inclination. For each profile, 5 feature values are extracted. Thus, a vector XS = {X1, X2, …, X6} is used to denote a wheel and serves as the input of the data mining approaches.
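The exact definition of the individual features belongs to Figs. 7-9 and is not reproduced in the text; the sketch below is therefore only a hypothetical illustration of how pose-type features (heights and an inclination estimate) could be computed from the point-cloud ROI of one wheel.

```python
import numpy as np

def pose_features(points: np.ndarray) -> np.ndarray:
    """Hypothetical pose features for one wheel ROI.

    points: (N, 3) array of x, y, z coordinates in mm, z being the height direction.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Least-squares plane fit z = a*x + b*y + c; a and b capture the inclination.
    coeffs, *_ = np.linalg.lstsq(np.column_stack([x, y, np.ones_like(x)]), z, rcond=None)
    a, b, _ = coeffs
    tilt_deg = np.degrees(np.arctan(np.hypot(a, b)))
    return np.array([z.mean(), z.min(), z.max(), z.max() - z.min(), tilt_deg])
```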
3.3 Quality information embedded using the RFID tag
After the assembly quality has been determined using the SLS and the data mining based decision support system, the quality information is written into the RFID tag placed in the wheel. Thus, the quality information stays with the product for later checks. In this wheel assembly quality inspection problem, the tag access is performed with the Reader Test Tool of the RFID reader, as shown in Figure 10.
Fig. 10. RFID reader test tool
In this case study, the OMNI-ID RFID tag, the SIRIT RFID reader and the IMPINJ near-field antenna are used to construct the RFID system, as shown in Figure 11. Then the tagged tire and a rim are assembled. Because the EPC code of each RFID tag is set to be unique, each wheel is thereby given a unique identity. Finally, the assembly quality of the wheel is inspected using the SLS and the data mining based decision support system. After the classification, the quality information is written into the tag and kept with the product, as shown in Figure 13.
Fig. 13. Quality inspection and information writing
The memory of the OMNI-ID RFID tag is divided into three parts, which are allocated to the EPC code of 96 bits, the user data of 512 bits and the TID data of 64 bits. The 512 bits of user data are reserved for customized information, which corresponds to 64 ASCII characters. In this case, the quality inspection related information is written in the user memory of the RFID tag. The information comprises the classified assembly quality, the inspection time and the inspection date, in the form "QUALITY=X TIME=HH:MM DATE=DD.MM.YY". Because the tag memory is written in hexadecimal form, the text has to be converted to HEX before writing. Supposing the quality is classified as 1 and the inspection is taken at 14:05 on 30.04.13, the information is converted from ASCII characters to hexadecimal as shown in Figure 14. The information writing is completed using the reader test tool, as shown in Figure 15.
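The ASCII-to-hexadecimal conversion of the user-memory payload is easy to reproduce in Python; the actual write command depends on the reader vendor's tooling and is therefore not shown here.

```python
def payload_to_hex(quality: int, time_str: str, date_str: str) -> str:
    """Encode the quality record as the hexadecimal string written to the tag's user memory."""
    text = f"QUALITY={quality} TIME={time_str} DATE={date_str}"
    if len(text) > 64:  # 512 bits of user memory correspond to 64 ASCII characters
        raise ValueError("payload exceeds the 512-bit user memory")
    return text.encode("ascii").hex().upper()

def hex_to_payload(hex_string: str) -> str:
    """Decode the stored hexadecimal content back to readable text for a later check."""
    return bytes.fromhex(hex_string).decode("ascii")

print(payload_to_hex(1, "07:27", "21.05.13"))
# -> 5155414C4954593D312054494D453D30373A323720444154453D32312E30352E3133
```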
Fig. 15. RFID tag information control
After the quality information is written in the user memory of the tag, it is able to be read out with any other RFID reader. Using the same decoding approach, the hexadecimal can be restored to characters again for later check.
Result and analysis
During the quality inspection test, after the classification, the inspection-related information is written in the RFID tag.
The inspection time and date are available using the reader test tool, as shown in Figure 16.
Fig. 16. Acquiring the RFID tag information
Combining with the quality classification result, the text to be written is QUALITY=1 TIME=07:27 DATE=21.05.13, which is converted to HEX as follows: 0x5155414C4954593D312054494D453D30373A323720444154453D32312E30352E3133
The converted HEX is written in the user memory of the tag in the command line of the reader test tool, as shown in Figure 17.
CONCLUSIONS
The combination of the Structured Light System, the data mining approach and RFID technology is tested in this paper. The SLS is applicable to the proposed wheel assembly quality classification problem. The feature definition on the basis of the point cloud, made according to the assembly requirements, is suitable for similar product types. The feature vector extraction provides the SVM classifier with suitable inputs and achieves 95.8% Correctly Classified Instances. Meanwhile, the RFID system successfully converts the quality inspection result to an acceptable data format for the tag and writes the information into it. This step improves the traceability of the product quality. Supposing multiple SLS inspection stations are assigned along the assembly line, the quality inspection results are saved in the RFID tag at each of them. The earlier inspection results are then available to the system before the product enters the following processing station. Since the tag-embedded information does not require remote database access, interruptions due to product quality issues can be avoided. In future work, the middleware for integrating the three systems will be developed.
Fig. 1. System architecture
Fig. 2. General working process of a Structured Light System
Fig. 6. SLS hardware
Fig. 7. Point cloud acquired with SLS
Fig. 9. Feature extraction
Fig. 11. RFID system hardware
Fig. 12. Attach a RFID tag to a tire
Fig. 14. Conversion from the ASCII characters to the Hexadecimal
Fig. 17. Information writing using RFID system
Table 1. Results of SVM
Six measures are obtained for the classifier on the basis of 4 outcomes: True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN), which construct a confusion matrix:
TP (correctly accepted)   FN (incorrectly refused)
FP (incorrectly accepted) TN (correctly refused)
For each validation, the confusion matrices of each class are constructed respectively, and the final confusion matrix for each validation contains the average values for all classes combined. The 6 measures are defined as follows:
1. Correctly Classified Instances (CCI): percentage of samples correctly classified.
2. Incorrectly Classified Instances (ICI): 100% - CCI.
| 21,868 | [ "1003708", "1003709" ] | [ "50794", "50794" ] |
01485817 | en | [ "info" ] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485817/file/978-3-642-41329-2_23_Chapter.pdf | Philipp Sembdner
Stefan Holtzhausen
Christine Schöne
email: christine.schoene@tu-dresden.de
Ralph Stelzer
Additional Methods to Analyze Computer Tomography Data for Medical Purposes and Generatively Produced Technical Components
Keywords: Computer tomography, Reverse Engineering, 3D-Inspection
INTRODUCTION
In the context of industrial manufacturing and assembly, continuous quality analysis is absolutely necessary to guarantee the production guidelines predefined in an operation. Optical 3D measuring systems for contactless measurement of component geometries and surfaces are increasingly being applied in the production process. These measuring techniques are also being used more and more often in combination with automated manufacturing supervision processes to maintain consistently high standards of quality [START_REF] Bauer | Handbuch zur industriellen Bildverarbeitung -Qualitätssicherung in der Praxis[END_REF].
One disadvantage of such systems is that it is only possible to inspect visible regions of the manufactured object. It is impossible to check inner areas of components or joints, such as welded, soldered or adhesive joints, by means of these nondestructive measuring techniques. Here, it makes sense to inspect the formation of blowholes or inclusions in pre-series manufacturing to optimise the production process or to safeguard the quality standards during series manufacturing [START_REF] Zabler | Röntgen-Computertomographie in der industriellen Fertigung (Kraftfahrzeug-Zulieferer) -Anwendungen und Entwicklungsziele[END_REF].
Computer tomography (CT) is an imaging technology that provides a proven solution to this problem. State of the art in the medical environment, this technology has become more and more established in other technological fields. However, in mechanical engineering, we are faced with other requirements that must be fulfilled by the procedure, both in terms of the definition of the measuring task and strategy and with consideration of the issue of measuring uncertainty.
Because high accuracy is needed, micro CT systems are frequently used, resulting in huge data volumes in the form of high-resolution slice images. However, increases in the capacities of computer systems in recent years make image analysis, as well as 3D modelling, based on these slices images a promising technology. Consequently, it is necessary to develop efficient analysis strategies for data gathered by means of imaging techniques to find new strategies for quality assurance and process optimisation. The Reverse Engineering team at the Chair of Engineering Design and CAD of the Dresden University of Technology has been studying the analysis and screening of CT data, at first mainly from medicine [START_REF] Schöne | Individual Contour Adapted Functional Implant Structures in Titanium[END_REF][START_REF] Sembdner | Forming the interface between doctor and designing engineeran efficient software tool to define auxiliary geometries for the design of individualized lower jaw implants[END_REF], for several years. An example is given in Fig. 1, in which a discrete 3D model is generated from CT data. In this process, the calculation of the iso-surfaces is performed by means of the Marching Cubes Algorithm [START_REF] Seibt | Umsetzung eines geeigneten Marching Cubes Algorithmus zur Generierung facettierter Grenzflächen[END_REF]. In the next step, the segmented model of the lower jaw bone is used for operation planning and the design of an individual implant for the patient. Due to industry demand, processing of CT image data in the technical realm is becoming more and more important. The paper elucidates opportunities for component investigation from CT data by means of efficient image processing strategies and methods using the example of a soldered tube joint.
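For the iso-surface extraction mentioned above, a minimal open-source sketch (not the implementation used at the Chair) could rely on the Marching Cubes routine of scikit-image; here, volume is the stacked gray-value array and iso_value an assumed segmentation threshold, e.g. for bone.

```python
import numpy as np
from skimage import measure

def isosurface(volume: np.ndarray, iso_value: float, voxel_size=(1.0, 1.0, 1.0)):
    """Triangulated iso-surface of a CT volume: vertices in mm and faces as vertex indices."""
    verts, faces, normals, values = measure.marching_cubes(
        volume, level=iso_value, spacing=voxel_size
    )
    return verts, faces
```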
FUNDAMENTALS
Computer tomography (CT) is an imaging technique. As a result of its application, we obtain stacks of slice images. As a rule, these data are available in the DICOM format, which has been established in medical applications. Apart from the intrinsic image data, it includes a file header incorporating the most essential information about the generated image. Relevant information here includes patient data, image position and size, pixel or voxel distance and colour intensity. In the industrial realm, the images are frequently saved as raw data (RAW) or in a standardised image format such as TIFF. For ongoing processing of the image data in the context of the three-dimensional object, additional geometric data (pixel distance, image position etc.) must be available separately. The colour intensity values of the image data, which depend on density, are saved at various bit depths (8 bit and higher). In medical applications, a 12 bit scale is often used. Evaluation of CT data is often made more challenging by measuring noise and the formation of artefacts due to outshining. It is impossible to solve these problems simply by using individual image filters. For this reason, noise and artefact reduction are discussed in many publications [START_REF] Hahn | Verfahren zur Metallartefaktreduktion und Segmentierung in der medizinischen Computertomographie[END_REF].
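Reading such a DICOM slice stack into a gray-value volume can be sketched with the pydicom package as follows; the RAW/TIFF case would differ only in how the pixel data and the separately stored geometry are loaded, and the fallback slice thickness of 1.0 mm is an assumption for files that omit that attribute.

```python
import glob

import numpy as np
import pydicom

def load_dicom_volume(folder: str):
    """Stack all DICOM slices of a folder into one volume and return it with the voxel spacing."""
    slices = [pydicom.dcmread(path) for path in glob.glob(f"{folder}/*.dcm")]
    # Sort by position along the scan axis so the stack order matches the geometry.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    row_mm, col_mm = (float(v) for v in slices[0].PixelSpacing)
    slice_mm = float(getattr(slices[0], "SliceThickness", 1.0))
    return volume, (slice_mm, row_mm, col_mm)
```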
Characteristics
In the following, the authors elucidate the CT data analysis methods implemented as program modules to read, process and display CT data developed at the Chair.
INSPECTION OF A SOLDERED TUBE JOINT
The task was to inspect two soldered flange joints on a pipe elbow in co-operation with a manufacturer and supplier of hydraulic hose pipes (see Fig. 2). The goal was to dimension the tube joint to withstand higher pressure values. It was first necessary to demonstrate impermeability. What this means, in practical terms, is that the quantity and size of blowholes or inclusions of air (area per image, volume in the slice stack) in the soldering joints have to be inspected in order to guarantee that a closed soldering circle of about 3…4 mm can be maintained. It is possible to execute the measurements using pre-series parts, sample parts from production, or parts returned due to complaints.
Methods for blowholes
A slice image resulting from the CT record is shown in Fig. 3. On the right side, one may clearly see inclusions in the region of the soldering joint. It is necessary to detect these positions and to quantify their area. If this is done using several images in the slice stack, we can draw conclusions regarding the blowholes' volume.
Fig. 3. -Slice image with blowholes in the soldering region
To guarantee that only the zone of the soldering joint is considered for the detection of air inclusions, instead of erroneously detecting inclusions in the tubes themselves, the tube's centre point in the cross section is determined first. This approach only works if the slices are perpendicular to the tube centre line, so that the internal contour of the tube forms a circle. The centre point is determined from the slice image (Fig. 4). An object filter then detects each separate object in the image; the objects are marked with polygons (in our example, rectangles). The identified objects lying in the soldering region are then selected as a function of the calculated centre point and the given soldering circle diameter (= outer diameter of the inner tube). In this search, a tolerance is added to the soldering circle diameter (in our example: ±10%); the goal is to examine only the soldering joint rather than to detect inclusions in the tube itself. Finally, the bright pixels (in our example, gray value 4095) are identified inside the rectangles. They represent the air inclusions we are searching for. It is possible to quantify the area of the blowholes in the image using the known pixel width in both image directions.
The result of blowhole detection is especially dependent on the choice of an adequate threshold. This threshold has to be predefined by the user. If this threshold must be used over several slice images, the value has to be adjusted again if necessary.
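A simplified variant of this procedure can be implemented with NumPy/SciPy by masking an annulus around the previously determined centre point, thresholding the bright (air) pixels and labelling the connected regions; centre point, soldering circle diameter, threshold and pixel size are user-supplied inputs, as in the description above.

```python
import numpy as np
from scipy import ndimage

def blowhole_areas(image, centre, solder_diameter_px, threshold, pixel_size_mm, tol=0.10):
    """Areas (mm²) of air inclusions inside the soldering region of one slice image."""
    yy, xx = np.indices(image.shape)
    radius = np.hypot(yy - centre[0], xx - centre[1])
    r_nominal = solder_diameter_px / 2.0
    solder_zone = (radius > r_nominal * (1 - tol)) & (radius < r_nominal * (1 + tol))

    blowholes = (image >= threshold) & solder_zone   # bright pixels = air inclusions
    labels, n = ndimage.label(blowholes)             # one label per separate inclusion
    pixel_counts = ndimage.sum(blowholes, labels, index=range(1, n + 1))
    return np.asarray(pixel_counts) * pixel_size_mm[0] * pixel_size_mm[1]
```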
Generation of 3D freeform cross sections
Analysis of planar (2D) cross sections of the CT data record does not allow for a complete analysis of cylindrical or freeform inner structures in one view. Especially when evaluating a soldered joint, alternative slice images provide a way to represent the soldered area rolled out onto a plane. To do this, slice images are generated which follow Spline surfaces according to their mathematical representation. The basis for this approach is that all slice images with their local, two-dimensional co-ordinate systems <st> are transformed into a global co-ordinate system <xyz> (Fig. 6). Consequently, each image pixel of a slice image k can be represented as a three-dimensional voxel V by its co-ordinates [n,m,k]. This voxel is thus defined in three-dimensional space. In this reference system, one may define any Spline surface of the type F(u,v) (Bezier, Hermite, etc.). In the case investigated here, we used Hermite surface patches, which are described by defining a mass point matrix Gab. It is possible to calculate discrete points Pi,j on the patch surface. These points are also defined in the global reference system. The quantity of points on the Spline surface in the u- or v direction determines the resolution of the desired slice image. The gray intensity values for one patch point can be determined by trilinear or tricubic interpolation of the gray intensity values of the adjacent voxels.
The procedure to create a freeform slice through the slice image stack can be described as follows:
1. In one or more CT slice images, the soldered joint is marked by a Spline curve (shown in red colour in Fig. 7). The quantity of defined curves is arbitrary, but the quantity of supporting points per layer has to be the same in order to establish the mass point matrix Gab.
2. Now the Spline surface is calculated with the help of the mass points. Subsequently, discrete points are calculated on this surface by iterating the <uv> co-ordinates. As a result, a point cloud of three-dimensional points is created, which can be visualised both in 2D as a curved slice image (Fig. 9) and in 3D space as a triangulated object (Fig. 8).
Fig. 9. - Rolled out cross section of a tube joint
Thus, it is now possible to make qualitative statements about the joint's impermeability. Furthermore, one may perform an analysis to measure, for example, the size of a blowhole on the generated slice image. However, if this cross section is executed repeatedly in the region of the soldering position, concentric to the originally defined cross section, then we obtain a number of these three-dimensional panorama views of the soldered joint. Visual inspection of these views in their entirety, even without further evaluation strategies, may provide an initial estimate of impermeability.
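Once the spline surface has been evaluated into discrete points P(u,v) given in fractional voxel coordinates, the rolled-out gray-value image can be sampled from the slice stack by trilinear interpolation, for example with SciPy's map_coordinates; the evaluation of the Hermite patches from the mass point matrix G is assumed to have happened beforehand and is not shown.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def freeform_slice(volume: np.ndarray, surface_points_kmn: np.ndarray) -> np.ndarray:
    """Sample a CT volume on a freeform surface.

    volume:             gray values indexed as [k, m, n] (slice, row, column).
    surface_points_kmn: (U, V, 3) array of fractional voxel coordinates of the
                        evaluated spline surface points P(u, v).
    Returns a (U, V) image, i.e. the soldered joint rolled out onto a plane.
    """
    coords = surface_points_kmn.reshape(-1, 3).T     # shape (3, U*V) for map_coordinates
    gray = map_coordinates(volume, coords, order=1)  # order=1 -> trilinear interpolation
    return gray.reshape(surface_points_kmn.shape[:2])
```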
SUMMARY
The use of computer tomography in industry offers a great potential for contactless and nondestructive recording of non-visible component regions. Since it is possible in this context to apply a significantly higher radiation level than for medical CTs, measuring uncertainty may be clearly reduced and data volumes concomitantly increased. Additionally, since the test objects are mostly stationary, as a rule, movement blurs can also be avoided [START_REF] Zabler | Röntgen-Computertomographie in der industriellen Fertigung (Kraftfahrzeug-Zulieferer) -Anwendungen und Entwicklungsziele[END_REF].
The results provided by computer tomography can be used equally effectively for various tasks. In the narrower sense of Reverse Engineering, it is possible to use the data for modelling, for example. However, the most common applications come from measuring analyses, such as wall thickness analyses and test methods within the context of quality assurance. The example discussed in the paper shows that the implementation of efficient analysis strategies is essential for process monitoring and automation. The option of generating arbitrary freeform cross sections by means of a slice image stack particularly opens up new strategies for component investigation.
Fig. 1. - Application of medical CT for operation planning [START_REF] Sembdner | Forming the interface between doctor and designing engineeran efficient software tool to define auxiliary geometries for the design of individualized lower jaw implants[END_REF]
Fig. 2. - Tube joint with two soldered flanges
Fig. 4. - Determination of the tube centre point in the slice image
Fig. 5.
Fig. 6. - Representation of the slice images in a compound of slice images. Left: In a global reference co-ordinate system, all slice images are unambiguously defined. This way, indexing of a voxel V is possible by indexing n, m, k. Right: In this global reference system, one may define an arbitrary Spline surface, whose discrete surface points P may be unambiguously transformed into the reference system.
Fig. 7. - Marking of the soldered joint in the CT image
Fig. 8. - 3D cross section through a tube joint in the region of the soldering position
                        | Medical CT of a skull     | Industrial CT of a tube joint
Image format            | DICOM                     | RAW, TIFF
Image size              | 512 x 512 pixels          | 991 x 991 pixels
Pixel size              | 0.44 x 0.44 mm            | 0.08 x 0.08 mm
Distance between images | 1 mm                      | 0.08 mm
Image number            | 238                       | 851
Data volume             | 121 MB                    | 1550 MB
Measuring volume        | about 225 x 225 x 238 mm  | about 80 x 80 x 68 mm
Table 1. - Comparison of a medical and an industrial CT data record

Table 1 offers an example outlining the difference between a medical CT and an industrial micro-CT based on two typical data records. In this representation, the differences in accuracy can be seen. It is possible for high-resolution industrial CTs to achieve measuring inaccuracy values of less than 80 µm. This results in a clearly higher data volume, which is frequently many times that of a medical CT (in the example shown in Table 1, it is approximately 13 times greater). It is very difficult to handle such a flood of data. Consequently, either we have to have available powerful computer systems capable of handling huge data volumes, or the data stock has to be reduced, which, in turn, leads to losses in accuracy. For the latter option, one solution is to remove individual layers off the slice stack, thereby reducing image resolution or colour intensity.
| 14,650 | ["1003710"] | ["96520", "96520", "96520", "96520"] |
01485818 | en | ["info"] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485818/file/978-3-642-41329-2_24_Chapter.pdf | György Gyurecz
email: gyurecz.gyorgy@bgk.uni-obuda.hu
Gábor Renner
email: renner@vision.sztaki.hu
Correction of Highlight Line Structures
Keywords: Highlight lines, highlight line structure
Introduction
Most important class A surfaces can be found on cars, airplanes, ship hulls, household appliances, etc. Beyond functional criteria, the design of class A surfaces involves aspects concerning style and appearance. Creating tools supporting the work of a stylist is a challenging task in CAD and CAGD.
A highlight line structure is a series of highlight lines, representing visually the reflection and the shape error characteristics of the surface. They are calculated as the surface imprint of the linear light source array placed above the surface [START_REF] Beier | Highlight-line algorithm for real time surface quality assessment[END_REF].
The structures are evaluated by the pattern and the individual shape of the highlight lines. A comprehensive quality inspection can be carried out by the comparison of the highlight line structures of different light source and surface position settings. The uniform or smoothly changing highlight line pattern is essential for the high quality highlight line structures.
Following the inspection, the defective highlight curve segments are selected and corrected. Based on the corrected highlight curves, the parameters of the surface producing the new highlight line structure can be calculated [START_REF] Gyurecz | Correcting Fine Structure of Surfaces by Genetic Algorithm[END_REF].
In our method the correction of highlight line structure is carried out in two steps. First, sequences of evaluation points are defined to measure the error in terms of distance and angle functions. Next, these functions are smoothed and based on the new function values, new highlight line points are calculated. New highlight curve curves are constructed using these points. The outline of the method is summarized in Figure 1. For a point on the highlight line d(u,v)=0 holds, which must be solved for the control points of S(u,v). To design high quality surfaces, this relation has to be computed with high accuracy. We developed a robust method for computing points on highlight lines, which is described in detail in [START_REF] Gyurecz | Robust computation of reflection lines[END_REF].
The highlight lines are represented by curves constructed by interpolation in B-Spline form. For the calculation of the Pi control points of the C(t) curves, a system of equations is solved in which the unknowns are the control points [START_REF] Pigel | The NURBS Book[END_REF]:

Q_k = C(t_k) = Σ_i N_i,r(t_k) · P_i      (2)
The parameter values tk of the highlight points Qk are set proportional to the chord distance between highlight points. To ensure C2 continuity of the curves, the degree r of the basis function N is set to 3.
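One possible realization of this interpolation step is sketched below, using SciPy's cubic B-spline interpolator as a stand-in for solving the system of equations (2); the highlight points are synthetic and purely illustrative.

import numpy as np
from scipy.interpolate import make_interp_spline

# Highlight points Q_k computed on the surface (synthetic 3D points here).
Q = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.1], [2.0, 0.4, 0.2],
              [3.0, 0.9, 0.1], [4.0, 1.2, 0.0]])

# Parameter values t_k proportional to the chord distance between the points.
chords = np.linalg.norm(np.diff(Q, axis=0), axis=1)
t = np.concatenate(([0.0], np.cumsum(chords)))
t /= t[-1]

# Degree r = 3 gives C2-continuous interpolating B-spline curves C(t).
C = make_interp_spline(t, Q, k=3)
print(C(0.5))   # point on the highlight curve at parameter 0.5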
Selection of the defective highlight curve segments
Selection identifies the location of the correction by fitting a sketch curve on the surface around the defective region. This is carried out with the interactive tools of the CAD system. For the identification of the affected highlight curve segments Ci, i=0...N, and their endpoints Ai and Bi, intersection points are searched. The identification is carried out by an algorithm utilizing an exhaustive search method [START_REF] Deb | Optimization for Engineering Design: Algorithms and Examples[END_REF]. The tangents Ti1 and Ti2 corresponding to the endpoints are also identified; they are utilized in the subsequent process of correction.
In Figure 2, the defective curve segments are shown in bold; the endpoints are marked by solid squares. The dashed curve represents the user drawn sketch curve.
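The exhaustive search over intersection candidates can be sketched as below, with both curves represented by dense point samples; the sampling density and the closeness tolerance are assumptions, not values from the implementation described here.

import numpy as np

def closest_pair(curve_a, curve_b):
    """Exhaustively compare two sampled curves and return the indices and
    distance of their closest point pair (treated as an intersection when the
    distance is below a tolerance)."""
    best = (None, None, np.inf)
    for i, p in enumerate(curve_a):
        d = np.linalg.norm(curve_b - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < best[2]:
            best = (i, j, float(d[j]))
    return best

sketch    = np.array([[t, 0.5] for t in np.linspace(0.0, 1.0, 200)])
highlight = np.array([[0.4, s] for s in np.linspace(0.0, 1.0, 200)])
i, j, dist = closest_pair(sketch, highlight)
print(i, j, dist)   # the segment endpoint A_i lies near highlight[j]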
Evaluation of the highlight line pattern
The structure of the selected highlight curve segments is evaluated on sequences sj, j=0...M of highlight points E0,0, ..., Ei,j, ..., EN,M spanning the defective segment in crosswise direction. The sequences include correct highlight curve points E0,j, E1,j and EN-1,j, EN,j, needed to ensure the continuity of the corrected highlight segments with the adjoining unaffected region. We evaluate the structure error by dj distance and αj angle functions defined on the sj sequences. The distance function represents the inequalities of the structure in crosswise direction; the angle function characterizes the structure error along the highlight curves. The evaluation points are located in the surroundings of the corresponding highlight points, in the direction perpendicular to the highlight curves.
Definition of distance and angle error functions
The distance error function is defined by the di,j distances between the consecutive sequence elements:

di,j = ||Hi,j|| = ||Ei+1,j - Ei,j||      (5)

The angle error function is defined by the αi,j angles between the consecutive Hi vectors:

αi,j = arccos( (Hi-1,j · Hi,j) / (||Hi-1,j|| ||Hi,j||) )      (6)
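A small sketch of how the two error functions could be evaluated along one crosswise sequence is given below; the point data are synthetic, and the exact published formulas may differ in detail from this reading.

import numpy as np

def error_functions(E):
    """Distance and angle errors along one crosswise sequence of points E[i]."""
    H = np.diff(E, axis=0)                        # H_i = E_{i+1} - E_i
    d = np.linalg.norm(H, axis=1)                 # distance errors d_i
    cos_a = np.einsum('ij,ij->i', H[:-1], H[1:]) / (
        np.linalg.norm(H[:-1], axis=1) * np.linalg.norm(H[1:], axis=1))
    alpha = np.arccos(np.clip(cos_a, -1.0, 1.0))  # angle errors alpha_i
    return d, alpha

E = np.array([[0, 0, 0], [1, 0.1, 0], [2, 0.5, 0], [3, 0.4, 0], [4, 0.6, 0]], float)
d, alpha = error_functions(E)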
In Figure 5, an error function constructed from points Ei,j, i=0..N, N=5 is presented. The i=2...N-2 sequence of the error functions corresponds to points on defective highlight curves; the rapid and irregular changes represent the defects in the highlight curve structure. The function values at i=0,1 and N-1,N correspond to points on highlight curves of the adjoining correct pattern. The error functions are then smoothed (Figure 6), and based on the new function values, points for the new highlight curves are obtained.
Calculation of the new highlight curve points
The new function values are calculated by a least squares approximation method applied to the original functions. Continuity with the highlight line structure of the adjoining region is ensured by constraints on the function end tangents Ti1 and Ti2. The tangents are calculated as Tj1 = E0,j - E1,j and Tj2 = EN-1,j - EN,j.
Figure 7 shows calculation of new R_(i,j) points (indicated by solid squares).
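One way to realize this constrained least squares step is sketched below, approximating the endpoint and end-tangent constraints by heavily weighting the samples that belong to the correct adjoining pattern; the polynomial degree and the weight value are assumptions only.

import numpy as np

def smooth_error_function(values, degree=3, end_weight=1e3):
    """Least-squares smoothing of one error function.

    The first and last two samples belong to the correct adjoining pattern,
    so they are weighted heavily to approximate the endpoint and tangent
    constraints.
    """
    x = np.arange(len(values), dtype=float)
    w = np.ones_like(x)
    w[:2] = end_weight           # pin the two samples at the start
    w[-2:] = end_weight          # pin the two samples at the end
    coeffs = np.polyfit(x, values, deg=degree, w=w)
    return np.polyval(coeffs, x)  # new, smoothed function values

d_raw = np.array([0.9, 1.0, 1.6, 0.4, 1.8, 0.7, 1.1, 1.0])
d_new = smooth_error_function(d_raw)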
Construction of the corrected highlight curve segments
The new Ci highlight curve segments are cubic B-Splines constructed from the new R (i,j) points by constrained least squares curve fitting method [START_REF] Pigel | The NURBS Book[END_REF]. The points to be approximated are R(i,0) …R(i,j)…R(i,M) new highlight curve points, arranged by Ci, highlight curves. The constraints are Ai and Bi segment endpoints and the T i 1 and T i 2 endpoint tangents. The u Ai and u Bi parameter values of the new segments correspond to Ai and Bi segment endpoints. For the calculation of Pi control points, system of equations is solved. The uk, parameters of the curve points Qk are defined on u k = u Ai …u Bi . The parameter values are set proportional to the chord distance between the highlight curve points.
Application and Examples
The method is implemented in Rhino 4 NURBS modeler. The calculation of new highlight curve points and the construction of corrected highlight curve segments is written in C++ code, the calculation and selection of highlight curves is realized in VBA. We tested our method on several industrial surfaces. In Fig. 9 and Fig. 10, two highlight curve structures before and after corrections are presented. The defective surface area is selected interactively, the evaluation and correction of highlight lines is automated. The parameters of the automatic correction can be adjusted by the designer.
The method is successfully implemented in the surface modeling software (Rhino 4) widely used in industrial shape design. The method is applicable to surfaces with uniform or changing highlight line pattern, and wide range of highlight line errors. The applicability of the method was proved on number of industrial surfaces.
Fig. 1. Block diagram of the highlight line structure improvement method
Fig. 2. Selection of the defective highlight curve segments
Fig. 3. Definition of the evaluation point sequences
Fig. 4. Calculation of the evaluation points
Fig. 5. Error function example
Fig. 6. The error function after smoothing
Fig. 7. Calculation of points for new highlight curve segments
Fig. 8. New highlight curve segment
Fig. 9. Car body element before and after correction
Fig. 10.

| 8,364 | ["1003711", "1003712"] | ["461402", "306576"] |
01485819 | en | ["info"] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485819/file/978-3-642-41329-2_25_Chapter.pdf | George L Kovács
email: kovacs.gyorgy@sztaki.mta.hu
Imre Paniti
email: paniti.imre@sztaki.mta.hu
Re-make of Sheet Metal Parts of End of Life Vehicles -Research on Product Life-Cycle Management
Keywords: end of life vehicle, sheet-metal, incremental sheet forming, sustainability, product life-cycle
Some definitions and abbreviations
The motivation of the example used in this study comes from the EU Directive 2000/53/EC [1a], according to which by 2015 the portion of each vehicle that has to be recycled or reused must increase to 95%. To avoid being too abstract, in this paper we use the management of sheet metal parts of worn-out or crashed cars as example. We show a possible way of value evaluation and measurement of values during PLCM. On the other hand we deal with the complexity and problems of decision making during the processing of used sheet-metal parts if the main goal is remake (reuse). The decisions are about dismantling and remake, e.g. re-cycling or re-use with or without repair, how to repair, etc. Some definitions and abbreviations are given first to help better understand some expressions in the context of worn-out or broken, or simply End of Life (EOL), vehicles. Remake, re-use and re-cycling should not be mixed up with waste management, as our goal is to use everything instead of producing waste.
─ ELV (End of Life Vehicle, EOL vehicle): cars after collision or worn out cars
─ Shredder (S): a strong equipment breaking, tearing everything into small pieces, to shreds, almost like a mill.
Introduction and state-of-the-art
Management of EOV (EOL vehicles) is a rather complicated task, which needs to process several technical and legal steps (paperwork). All parts of all vehicles have to be under control during their whole Life-Cycle, including permissions to run, licence plates, permissions to stop, informing local authorities, etc. In this paper we deal only with technical aspects of a restricted set of parts, namely with sheet-metal parts.
2.1 Some sources (for example [START_REF]Depollution and Shredder Trial on End of Life Vehicles in Ireland[END_REF]) simplify the procedure to the following 3 steps:
Depollution; Dismantling; Shredding
The main goal is achieved, but several points (decision points) remain open and several questions unanswered, and nobody really knows what to do and how to do and what are the consequences of certain activities. What happens with the different parts?? Is there anything worth repairing, etc. ? There are others, who claim that the process is a little more complicated. The following EU suggestion does not go into details; it is a straightforward average procedure including paper work, which is crucial if someone deals with EOV.
2.2
According to [2] the procedure of dismantling should be the following:
─ Delivery of EOL vehicle ─ Papers and documents are fixed, permissions issued (or checked if issued by others) ─ Put the car onto a dry bed: remove dangerous materials and liquids, store everything professionally ─ Select useful parts, take them out and store under roof ─ Sell/offer for selling the tested parts ─ Press the body to help economic delivery ─ Re-use raw materials A little more precise description of the removal sequence of different important parts/materials with more details, however still without decision points is the following:
─ Remove battery and tanks filled with liquid gas ─ Remove explosive cartridges of airbag and safety bells ─ Remove gasoline, oils, lubricants, cooling liquid, anti-freezing, break liquid, airconditioner liquid ─ Most careful removal of parts containing quick silver ─ Remove catalysts ─ Remove metal parts containing copper, aluminium, magnesium ─ Remove tyres and bigger size plastic parts (fender/bumper, panel, containers for liquids) ─ Remove windshields and all glass products
2.3
The IDIS web site [3] has the following opinion:
The International Dismantling Information System (IDIS) was developed by the automotive industry to meet the legal obligations of the EU End of Life Vehicle (ELV) directive and has been improved to an information system with vehicle manufacturer compiled information for treatment operators to promote the environmental treatment of End-of-Life-Vehicles, safely and economically. The system development and improvement is supervised and controlled by the IDIS2 Consortium formed by automotive manufacturers from Europe, Japan, Malaysia, Korea and the USA, covering currently 1747 different models and variants from 69 car brands.
The access to and the use of the system is free of charge. The basic steps of dismantling suggested by IDIS2 are as follows:
Batteries --Pyrotechnics --Fuels AC( Air Conditioner) -Draining -Catalysts Controlled Parts to be removed -Tires --Other Pre-treatment Dismantling
2.4
At GAZ Autoschool [4] in the UK the following are underlined as the most important steps to follow:
1. Removing vehicle doors, bonnet, boot, hatch. Removing these items early in the dismantling process enables easier access to vehicle interior, reduces restriction in work bays and minimises the risk of accident damage to potentially valuable components. 2. Removing interior panels, trim, fittings and components. This is a relatively clean and safe operation which maximises the resale opportunities available for items whose value depends on appearance/condition and which may be damaged if left on the vehicle. Components to be removed include dashboard, instrument panel, heater element, control stalks, steering column. 3. Remove light clusters: An easy process but one which needs care to avoid damage.
Once removed items need to be labeled and stored to enable potential re-sale. 4. Removal of wiring harness: The harness should be removed without damage, meaning that all electrical components are unclipped and the wires pulled back through into the interior of the car so that it can be removed complete and intact.
Harness should be labeled and then stored appropriately. 5. Removal of Engine and Gearbox: This will involve the use of an engine hoist, trolley jacks and axle stands, and will often necessitate working under the vehicle for a short period to remove gear linkages etc. Often the dirtiest and most physical task. Engine and gearbox oil together with engine coolant will need to be drained and collected for storage. 6. Engine dismantling: Engines are kept for resale where possible. 7. Gearbox dismantling: Gearboxes are kept for resale where possible. 8. Brakes and shock absorbers: Brake components are checked and offered for resale where they are serviceable.
2.5
Finally we refer to [START_REF] Kazmierczak | Lersř Parkallé: A Case Study of Serial-Flow Car Disassembly: Ergonomics, Productivity and Potential System Performance[END_REF], which is a survey and case study on serial flow car disassembly. The suggested technology can be represented by Fig. 1., where one can see, that the system used five stations and four buffers.
At stations 1-3 glass, rubber, and interior are removed. At station 4 the "turning machine" rotated cars upside down to facilitate engine and gearbox unfastening At station 5 the engine and gearbox are removed .
The procedure is the following:
1. Take a lot of pictures before you begin the disassembly process, including pictures of the interior. This important issue is rarely mentioned by other authors 2. Get a box of zip lock plastic bags in each size available to store every nut, bolt, hinge, clip, shim, etc. Make color marks to all. 3. Make sure you have a pen and a notebook by your side at all times to document any helpful reminders, parts in need of replacement and to take inventory 4. Remove the fenders, hood and trunk lid with the assistance of at least one able body to avoid damage and personal injury 5. Remove the front windshield and the rear window by first removing the chrome molding from the outside of the car, being careful not to scratch the glass. 6. This would be a good point to gut the interior. Remove the seats, doors and interior panels, carpeting and headliner. 7. Clear the firewall and take all the accessories off the engine.
8. Go through your notebook and highlight all the parts that need to be replaced and make a separate "to do" list for ordering them.
The most characteristic in the above study is a practical approach: be very careful and document everything, use bags and color pens and notebook to keep track of all parts and activities. This non-stop bookkeeping may hinder effective and fast work, however surely helps in knowing and tracing everything, if needed.
Sheet-metal parts' management
It can be seen from the previous examples that practically nobody deals with the special problems of sheet metal parts of EOL vehicles. On the other hand it is clear that sheet metal parts are only a certain percentage of an EOL vehicle. But it is clear, that almost every EOL vehicle has several sheet metal parts, which could be re-used with or without corrections, with or without re-paint. This makes us to believe that it is worthwhile to deal with sheet metal parts separately, moreover in the following part of our study this will be our only issue. To think and to speak about re-use (re-make, re-shape) as a practical issue, we need as minimum:
(a) proper dismantling technology to remove sheet-metal parts without causing damages to them (b) a measurement technology to evaluate the dismantled part and a software to compare the measured values to requested values, to define whether the dismantled part is appropriate or needs correction and to decide its applicability to be used for another vehicle, and finally (c) a technology to correct slightly damaged sheets, based on CAD/CAM information. This information may come from design (new or requested parts) or through measurements by a scanner (actual, dismantled part).
Our staff using ISF technology and the robotic laboratory of SZTAKI is able to perform the requested operations on our machines and on the software
A sheet-metal decision sequence
Our approach needs to follow a rather complicated decision sequence; it will be detailed only for sheet metal (SM) parts, supposing the available Incremental Sheet Forming (ISF) facilities. It is the following, using the above defined abbreviations and emphasizing decision types and points (the full sequence is listed step by step at the end of the paper, after Fig. 1). This decision sequence - naturally - can be taken into account as a small and timely short period of the PLCM, namely of the EOL cars' sheet-metal parts. Each move and activity has certain actual prices, which are commonly accepted; however, we know that they are not really correct, they do not support sustainability, and on the other hand they often increase negative effects.
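A compact sketch of the sheet-metal branch of this decision sequence is given below; the threshold values, field names and helper functions are illustrative assumptions only, not values or tools used in our laboratory.

def process_sheet_metal_part(part, min_thickness=0.6, max_deviation=1.5):
    """Condensed sheet-metal branch of the decision sequence (roughly steps 6-10).

    part is assumed to provide thickness_mm, border_ok and the two shapes;
    in practice the thresholds come from the customer or an accepted average.
    """
    if part["thickness_mm"] < min_thickness:
        return "shredder"                                   # thickness TH too small
    if not part["border_ok"]:
        return "shredder"                                   # border SB damaged beyond repair
    deviation = shape_deviation(part["measured_shape"],     # deviation from standard DS
                                part["standard_shape"])
    if deviation > max_deviation:
        return "shredder"                                   # Decision 4
    if deviation > 0.0:
        repair_by_isf(part)                                 # Decision 5, ISF repair
    return "shop"                                           # Decisions 6 and step 10

def shape_deviation(measured, standard):
    # placeholder: e.g. maximum point-to-point distance between the two shapes
    return max(abs(m - s) for m, s in zip(measured, standard))

def repair_by_isf(part):
    # placeholder for sending the part and its CAD/CAM data to the ISF centre
    pass

print(process_sheet_metal_part({"thickness_mm": 0.8, "border_ok": True,
                                "measured_shape": [1.0, 2.0],
                                "standard_shape": [1.0, 2.4]}))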
3.2
Evaluation of costs and advantages of the re-use and re-make of sheetmetal parts of EOVs After such a long, or long-looking decision procedure we need a methodology for evaluation of all, what we do or do not perform to have re-usable sheet-metal parts from parts of EOL vehicles.
The simplest way would be simply compare costs and prices of all involved parts (good, to be corrected, etc.) and services (scanning, ISF, manual work, painting, etc.) and all conditions (shredder or repair, etc.).
Today that is the only way some people follow, if any. Generally a very fast view at the EOV is enough to send it to the shredder, as this is the simplest decision with the smallest risk. To be more precise, the risk is there, but rather hidden.
The cost/value estimations and comparisons can be performed relatively easily; however, the results correspond only to the present economical-political situation and to the actual financial circumstances, and would not say anything about the future, which is embedded into "sustainability", "footprint" and "side-effects" (see later on). We believe that there exist appropriate tools and means for "real", "future centric" evaluations, thus we need to find and use them.
Our choice are the KILT model and the TYPUS metrics, which will be explained and used for our study, for details see [START_REF] Michelini | Integrated Design for Sustainability: Intelligence for Eco-Consistent Products and Services[END_REF], [START_REF] Michelini | Knowledge Enterpreneurship and Sustainable Growth[END_REF], [START_REF] Michelini | Knowledge Society Engineering, A Sustainable Growth Pledge[END_REF] and [8a].
The main goal of the above tools is to model and quantify the complete delivery (all products, side-products, trash and effects of them, i.e. all results) of a firm, and to model all interesting and relevant steps of the LC (or LCM). It is clear from the definitions (see later and the references) that any production steps can be evaluated and can be understood as cost values. If we speak about car production, the input is row material, machining equipment and design information, and people, who work, etc. Output (delivery) is the car. Sheet metal production is a little part of car making, generally prior to body assembly. For our study we take into consideration only sheetmetal parts.
We consider and make measurements, comparisons, re-make by using ISF and repaint, and other actions. These can hardly be compared with the "simple" processes used in new car manufacturing. Every step of the decision sequence below can be investigated one by one, taking into account all effects and side-effects. For the sake of simplicity only the input (sheet metal part to be measured and perhaps corrected) and the output (sheet-metal part ready to be used again) may be enough.
Fig. 2. shows some qualitative relationships, which cannot be avoided if environmental issues, sustainability, re-use and our future are important points.
Fig. 2 gives a general picture of our main ideas, and it needs some explanation. See [8a] for more details. It is a rather simplified view of some main players in the production/service arena, however it still shows quite well certain main qualitative relationships. We believe that these can be used to understand what is going on in our (engineering-manufacturing-sustainable) world.
Fig. 2. Re-use, PLCM, ecology, sustainability and KILT
The TYPUS/KILT metrics, methodology and model give us a possibility to better understand and evaluate production results and their components in terms of the K, I, L, T values. They give us a method of calculations and comparisons based on realistic values. The side effects and 2nd and 3rd order effects, etc. mean the following: let us consider a simple and simplified example: to produce a hybrid car (today) means (among others) to produce and build in two engines, two engines need more metal than one (side effect), to produce more metal we need more electrical energy and more ores (2nd order side effect), to produce more electricity more fuel is necessary and to produce more ores needs more miners' work (3rd order side effect), etc., and it could be continued. It is a hard task to know how deep and how broad we should go with such calculations. And if we take a look at our example there are several other viewpoints that could be taken into account. Just one example: the increased water consumption during mining. We have to confess that in the recent study on sheetmetal parts of EOL cars we do not deal with the side-effects at all. The reason is simply that we are in the beginning of the research and only try to define what should we do in this aspect.
Today the whole world, all at least most countries understand the importance of natural resources, environment, and based on this understanding reuse and recycling are getting more and more important in everyday life, as well as the decrease of CO2 emission, etc. These all request to keep energy, water, natural resources, manpower, etc. consumption in a moderate, sustainable level. This leads to sustainable development, or even to sustainability.
Fig. 2 elements: ecological footprint; life-cycle and PLCM; re-use, re-cycling, sorting, disassembly; 1st, 2nd, 3rd order effects; sustainability and sustainable development; TYPUS/KILT metrics and methodology.
3.3
The KILT model and the TYPUS metrics.
Just to remember we repeat some main points of the KILT model and the TYPUS, which are properly explained in [6,7,8 and 8a].TYPUS metrics means Tangibles Yield per Unit of Service. It is measured in moneyon ecological basis. It reflects the total energy and material consumption of (all) (extended) products of a given unit, e. g. of an enterprise. But it can be applied for bigger (e.g. virtual enterprises) or any smaller units (e.g. workshop or one machine) or for any selected actions (e.g painting, bending, cutting, etc.) of any complexity. In this study it is all only about sheet-metal management of EOV, however due to the complexity of the problems and due to the status of the research we still do not make real value calculations.
The metrics assumes several things, as : life-cycle function; material and energy provisions during manufacturing, operation, repair, reuse or dismissal, etc.
KILT is an arbitrarily, but properly chosen implementation of TYPUS,
we could imagine other realizations as well. However, the given definitions currently seem to be the best to manage the requested goals, as far as the authors believe. The related TYPUS metrics is discussed further later on. In earlier models and considerations the delivered quantity (all outputs), Q, was assumed to depend only on the contributed financial (I) and human (L) capitals; in the KILT model the know-how (K, innovation) and the tangibles (T) also have non-negligible effects.
The relationship still works as a multiplication and looks as follows:
Q = f (K, I, L, T)
Summarizing the different factors, we get some content for all of them as capital, knowledge, activity, material, etc. at the same time:
K: Technical capital - knowledge, technology, know-how, etc. (intangibles)
I: Financial capital - investment, capital, etc.
L: Human capital - labour, traditional labour, human efforts, welfare charges, etc.
T: Natural capital - tangible resources: material, consumables, ecologic fees, utilities, commodities, etc.
All the contributed technical K, financial I, human L and natural T capitals are included, and there is a tetra-linear dependence, which assumes to operate nearby equilibrium assets. The KILT models reliably describe the delivered product quantities, Q. Lacking one contribution (any of the above factors has a value of 0), the balance is lame, and the reckoned productivity figures, untruthful or meaningless.
The tetra-linear dependence means the equivalence of assets alone, and their synergic cumulated action. The company return is optimal, when the (scaled) factors are balanced; the current scaling expresses in money the four capitals (the comparison of non-homogeneous quantities is meaningless; the output Q has proper value, with the four inputs homogeneity). The return vanishes or becomes loss, if one contribution disappears. The loss represents the imbalance between constituent (know-how, money, work out-sourcing, bought semi-finished parts, etc.) flows.
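A small numerical sketch of this multiplicative reading of the model is given below; the scaling constant and the capital values are invented for illustration only.

def kilt_output(K, I, L, T, c=1.0):
    """Tetra-linear KILT reading: delivered quantity Q = c * K * I * L * T,
    with all four capitals expressed in money on a common scale."""
    return c * K * I * L * T

balanced   = kilt_output(K=2.0, I=2.0, L=2.0, T=2.0)   # balanced mix, total = 8
unbalanced = kilt_output(K=5.0, I=1.0, L=1.0, T=1.0)   # same total, skewed mix
lame       = kilt_output(K=4.0, I=2.0, L=2.0, T=0.0)   # one capital missing

print(balanced, unbalanced, lame)   # 16.0  5.0  0.0

For a fixed total of scaled capitals, the balanced mix gives the largest Q, and any vanishing capital drives Q to zero, which reflects the qualitative behaviour described above.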
The TYPUS metrics.
TYPUS, tangibles yield per unit service: the measurement plot covers the materials supply chain, from procurement, to recovery, so that every enjoyed product-service has associated eco-figures, assembling the resources consumption and the induced falls-off requiring remediation. The results are expressed in money. The point is left open, but, it needs to be detailed, to provide quantitative (legal metrology driven) assessment of the "deposit-refund" balance.
The metrics is an effective standard, aiming at the natural capital intensive exploitation. The supply chain lifecycle visibility needs monitoring and recording the joint economic/ecologic issues, giving quantitative assessment of all input/output materials and energy flows.
We have to apply these considerations to the decision sequence of "remakeno remake" of sheet metal parts of EOV, taking into account all technological steps and human actions including their side effects, if we can understand, define, measure and quantify them. It will take some ore time and a lot of researchers' efforts. To be a little positive we are convinced that the above discussed metrics and model can be used between any two points of the PLC, i.e. all costs, outputs, results, effects and side effects can be measured, calculated and evaluated in the decision sequence of sheet metal parts of EOL cars in our case.
Some technological issues of ISF
There are several open issues concerning the ISF technology and 3D measurements.
We have to make several experiments with the scanner system to have an exact view of the measured sheet, and with the software systems which compare the different surfaces with each other and with the accepted shape's data. These may be results of scanning, but more often results from the design (CAD/CAM) processes. Finally the software dictates - and the humans generally accept - what should be done using ISF.
On the ISF side we still have problems with accurate shape and thickness measurements. These works are running recently with high efforts. Fig. 3 presents an ISF experiment with an industrial robot using a 50 cm x 50 cm frame for the sheet. For car parts we can use the same robot, but a larger, and different frame will be needed. It is on the design table already.
Fig. 3. ISF experiment with a FANUC robot
From the technological point of view, the ISF consists of the gradual plastic deformation of a metal sheet by the action of a spherical forming tool whose trajectory is numerically controlled. The interest in evolution of ISF is rather old; it started in 1967 with the patent of Leszak [START_REF] Leszak | Apparatus and Process for Incremental Dieless Forming[END_REF]. This idea and technology is still active today in the field of producing sheet metal and polystirol parts in small batch and one-of-a-kind production, rapid prototypes, in medical aid manufacturing and in architectural design. A specific forming tool is mounted on the machine spindle or on a robot, and it is moved according to a well-defined tool path to form the sheet into the desired shape. Several ISF strategies have been developed which mainly differ in equipment and forming procedure. In particular, the process can be divided into:
─ Single Point Incremental Forming (SPIF)
Here the sheet metal is shaped by a single tool (with a faceplate supporting the initial level of the sheet).
─ Two Points Incremental Forming (TPIF),
where the sheet metal shaping is ensured by: a) two counter tools or b) a local die support that is a sort of partial / full die.
In Full Die Incremental Forming the tool shapes the sheet alongside a die; this die could be produced from cheap materials such as wood, resin or low cost steel; the use of a die ensures a better and more precise shape of the final piece.
As repair tool only SPIF or TPIF with synchronised counter tools can be considered because the manufacturing of full or partial dies needs more time and money. On the other hand SPIF has some drawbacks compared to TPIF with two counter tools.
Experimental Investigations and Numerical Analysis were carried out by Shigekazu Tanaka et al. [START_REF] Tanaka | Residual Stress In Sheet Metal Parts Made By Incremental Forming Process[END_REF] to examine the residual stress in sheet metal parts obtained by incremental forming operations because distortion were observed after removing the outer portion of the incremental formed sheet metal part. Results showed that "tension residual stress is produced in the upper layer of the sheet and compression stress in the lower", furthermore the stress is increasing with the increase of the tool diameter [START_REF] Tanaka | Residual Stress In Sheet Metal Parts Made By Incremental Forming Process[END_REF].
Crina Radu [START_REF] Radu | Analysis of the Correlation Accuracy-Distribution of Residual Stresses in the Case of Parts Processed by SPIF, Mathematical Models and Methods in Modern Science[END_REF] analysed the correlation between the accuracy of parts processed by SPIF using different values of process parameters and the distribution of the residual stresses induced in the sheets as results of cold incremental forming.
The hole drilling strain gauge method was applied to determine the residual stresses distribution through the sheet thickness. Experiments showed that the increase of tool diameter and incremental step depth increased residual stresses, which led to higher geometrical deviations [START_REF] Radu | Analysis of the Correlation Accuracy-Distribution of Residual Stresses in the Case of Parts Processed by SPIF, Mathematical Models and Methods in Modern Science[END_REF].
J. Zettler et al. stated in their work that SPIF indicate "great residual stresses to sheet during the forming which lead to geometrical deviations after releasing the fixation of the sheet". They introduced a spring back compensation procedure in which an optical measurement system is used for measuring the part geometry after the forming [START_REF] Zettler | Springback Compensation for Incremental Sheet Metal Forming Applications, 7. LS-DYNA Anwenderforum[END_REF].
Silva [START_REF] Silva | Revisiting single-point incremental forming and formability/failure diagrams by means of finite elements and experimentation[END_REF] et al. made some Experimental Investigations and Numerical Analysis to evaluate the applicability and accuracy of their analytical framework for SPIF of metal sheets. They stated that "plastic deformation occurs only in the small radial slice of the component being formed under the tool. The surrounding material experiences elastic deformation and, therefore, it is subject of considerably lower stresses."
In order to compensate springback more effectively (in-process) online residual stress measurements are suggested. Residual Stress Measurement Methods can be characterized according to the length detection over the stresses balance.
Feasible Non-Destructive Testing (NDT) Methods based on a summary of Withers et al. [START_REF] Withers | Residual stress part 2-nature and origins[END_REF] are Ultrasonic and Magnetic Barkhausen noise (MBN) measurements. By comparing these methods we can say that ultrasonic solutions can be used for nonferromagnetic materials too, but for the evaluation of Multiple Residual Stress Components the Barkhausen noise (BN) testing is preferable. The work of Steven Andrew White showed that "BN testing is capable of providing near-surface estimates of axial and hoop stresses in feeder piping, and could likely be adapted for in situ feeder pipe inspection or quality assurance of stress relief during manufacture" [16].
By adapting the MBN solution of Steven Andrew White to Incremental Forming of sheet metals we can realize an enhanced concept of J. Zettler et al. [START_REF] Zettler | Springback Compensation for Incremental Sheet Metal Forming Applications, 7. LS-DYNA Anwenderforum[END_REF] where the optical measurement system is replaced/extended by a MBN measurement device integrated into a forming tool. This solution may allow finishing the manufacturing/repairing of a part with high geometrical accuracy however, without releasing the fixation.
Conclusions and further plans
Our real goal is to give some means and tools to calculate different values which correspond to different phases of the life-cycle of a product (PLC). We specially emphasize re-use and re-cycling as important LC phases, due to the approaching water-, energy-and raw material-shortages. Generally on product we mean anything which is used by simple users (a car, a cup, a bike, or a part of them, etc.), or which are used by dedicated users to produce or manage other products (a machine tool, a robot, a house, a test environment, etc.), or which are used to manage everything else (a firm, a factory, a ministry, etc.). We differentiate between simple products and extended products (as traditional and extended enterprise) and between tangible and intangible parts (aspects) and service is taken into account as a product, too.
In the recent study we restrict ourselves to a very narrow part of the PLCM of cars: to evaluate EOL cars' sheet-metal parts, then to decide whether to re-make (repair, use) them or let them go to the shredder to be dismissed.
During our research to assist re-use of sheet metal parts we had several problems to solve to make waste as little as possible, and to prefer re-use, or re-make. There were several machine-and human decisions, which need support. An important assistance is 3D modelling and visualisation to help human decision making if a simple view is not enough, as it cannot be exact enough. The set and methods of decision making drove us to the way of cognitive info-communication. This way should be extended and explained in more details in the future.
We showed that the above explained simple multiplications forms of KILT cannot yet be used for economically useful calculations, they contain only several ideas and qualitative relationships to go on a right way. We plan to find proper relationships to use our ideas and formulae for real world situations to assist not only designers and engineers in their work, but politicians and other decision makers as well. These studies and their resulting calculations, values and suggestions how to proceed will be in a following study. Specific applications to ISF technology may mean simplifications and easier understanding and using of the metrics and the model.
Fig. 1. Serial-Flow Car Disassembly
The sheet-metal decision sequence referred to in Section 3.1:
1. Car arrives: on wheels or on a trailer; papers and documents are fixed, permissions issued (or checked if issued by others)
2. It goes or is taken to the dismantling bed (dry bed)
3. Remove liquids and dangerous materials (unconditional)
4. Decision 1: Shredder (S) or dismantling (DM), or delayed decision after the beginning of dismantling (DD). The decision is made basically by a human, eventually assisted by measurements or even by 3D part modelling
(a) if S: no more work to do; the car goes to the shredder and then to burial (dismissal)
(b) if DM: disassembly starts, parts are taken off one by one until the last one, based on a given protocol for all car types
(c) if DD: disassembly starts, parts are taken off one by one, sorted and stored, until the next decision can be made
5. Decision 2: S or DM - made at any time. The decision may be partial shredder (PS) and partial dismantling (PD) after a while
(a) if PS&PD: certain parts are taken apart, the rest goes to the shredder
(b) if DM, or DD and PD, are done: now we have a lot of parts, organised somehow
6. Decision 3: select sheet metal parts automatically, manually or in a hybrid way; keep the SMs, put away the rest
(a) examine all SM parts; first a thickness (TH) measurement is done
(b) if TH is too small, the part goes to S. The rest goes to border measurement (SB), as the borders of a sheet may easily be damaged during dismantling
(c) SB may be done by optics and AI and/or by a human, or both after each other
(d) if SB is repairable or good, make a shape measurement (SHM)
(e) compare the measured sheet (MS) to the standard shape (SH). SH can be taken by measuring a failure-free sample or from an appropriate catalogue. For processing we need CAD data in both cases
(f) compare SH with MS and calculate the differences; this is the deviation from standard (DS)
7. Decision 4: if DS is small enough (defined by the customer who will need the part, or an average value generally accepted), the part goes to repair. The rest goes to the shredder
8. Decision 5: repair by hand, by ISF or combined; any sequence is possible
(a) if ISF: the part goes to the ISF centre together with its CAD/CAM code and is processed
(b) if manual or combined: the part goes to a worker, when needed, before or after ISF
(c) if ISF is done, a final measurement (SHM) is needed
9. Decision 6: if accepted, the part goes to the shop, or to a workshop for painting and then to a shop
(a) if rejected, it goes back to step 6
10. The part is accepted, sent to the shop or to business again

| 32,260 | ["1003713", "1003714"] | ["306576", "488112", "306576"] |
01485820 | en | ["info"] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485820/file/978-3-642-41329-2_26_Chapter.pdf | Martin Hardwick
email: hardwick@steptools.com
David Loffredo
Joe Fritz
Mikael Hedlind
Enabling the Crowd Sourcing of Very Large Product Models
Keywords: Data exchange, Product Models, CAD, CAM, STEP, STEP-NC
INTRODUCTION
Part 21 is a specification for how to format entities describing product data [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 21: Implementation methods: Clear text encoding of the exchange structure[END_REF]. The format is minimal to maximize upward compatibility and simple to allow for quick implementation. It was invented before XML, though not before SGML, and it makes no special allowance for URL's [START_REF]Uniform Resource Identifiers (URI): Generic Syntax[END_REF].
Several technical data exchange standards use Part 21. They include STEP for mechanical products, STEP-NC for manufacturing products and IFC for building products. Over twenty years, substantial installed bases have been developed for all three with many Computer Aided Design (CAD) systems reading and writing STEP, a growing number of Computer Aided Manufacturing (CAM) systems reading and writing STEP-NC, and many Building Information Management (BIM) systems reading and writing IFC.
The data described by STEP, STEP-NC and IFC is continually growing [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 1: Overview and fundamental principles[END_REF]. STEP was first standardized as ISO 10303-203 for configuration controlled assemblies, and as ISO 10303-214 for automotive design. Both protocols describe the same kinds of information, and have taken turns at the cutting edge. Currently they are being replaced by ISO 10303-242 which will add manufacturing requirements, such as tolerances and surface finishes, to the product data [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 242: Application protocol: Managed Model-based 3D Engineering[END_REF].
STEP-NC is a related standard for manufacturing process and resource data. It has been tested by an industry consortium to verify that it has all the features necessary to replace traditional machining programs. They recently determined that it is ready for implementation and new interfaces are being developed by the Computer Aided Manufacturing (CAM) system vendors [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 238: Application Protocols: Application interpreted model for computerized numerical controllers[END_REF].
IFC describes a similar set of standards for building design and construction. IFC has made four major releases with the most recent focused on enabling the concurrent modeling of building systems. These include the systems for electric power, plumbing, and Heating, Ventilation and Air Conditioning (HVAC). The building structural elements such as floors, rooms and walls were already covered by previous editions. With the new release, different contractors will be able to share a common model during the construction and maintenance phases of a building [START_REF]ISO 16739: Industry Foundation Classes for data sharing in the construction and facility management industries[END_REF].
All three models are being used by a large community to share product data but Part 21 has been showing its age for several years. In the last ten years there have been six attempts to replace it with XML [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 28: Implementation methods: XML representations of EXPRESS schemas and data, using XML schemas[END_REF]. To date, none has succeeded but there is a growing desire for a more powerful and flexible product data format.
This paper describes an extension to Part 21 to enable the crowd sourcing of very large product models. Extending the current format has the advantage of continuing to support the legacy which means there will be a large range of systems that can already read and write the new data. The new edition has two key new capabilities:
1. The ability to distribute data between model fragments linked together using URI's.
2. The ability to define intelligent interfaces that assist the user in linking, viewing and running the models.
The next section describes the functionalities and limitations of Part 21 Editions 1 and 2. The third section describes how the new format enables massive product databases. The fourth section describes how the new format enables crowdsourcing. The fifth section outlines the applications being used to test the specification. The last section contains some concluding remarks.
EDITIONS 1 AND 2 OF PART 21
STEP, STEP-NC and IFC describe product data using information models. Each information model has a schema described in a language called EXPRESS that is also one of the STEP standards [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 1: Overview and fundamental principles[END_REF]. EXPRESS was defined by engineers for engineers. Its main goal was to give clear, concise definitions to product geometry and topology.
An EXPRESS schema defines a set of entities. Each entity describes something that can be exchanged between two systems. The entity may describe something simple such as a Cartesian point or something sophisticated such as a boundary representation. In the latter case the new entity will be defined from many other entities and the allowed data structures. The allowed data structures are lists, sets, bags and arrays. Each attribute in an entity is described by one of these data structures, another entity or a selection of the above. The entities can inherit from each other in fairly advanced ways including AND/OR combinations. Finally EXPRESS has rules to define constraints: a simple example being a requirement for a circle radius to be positive; a more complex example being a requirement for the topology of a boundary representation to be manifold. Part 21 describes how the values of EXPRESS entities are written into files. A traditional Part 21 file consists of a header section and a data section. Each file starts with the ISO part number (ISO-10303-21) and begins the header section with the HEADER keyword. The header contains at least three pieces of information a FILE_DESCRIPTION which defines the conformance level of the file, a FILE_NAME and a FILE_SCHEMA. The FILE_NAME includes fields that can be used to describe the name of the file, a time_stamp showing the time when it was written, the name and organization of the author of the file. The FILE_NAME can also include the name of the preprocessing system that was used to write the file, and the name of the CAD system that created the file. One or more data sections follow the header section. In the first edition only one was allowed and this remains the case for most files. The data section begins with the keyword DATA, followed by descriptions of the data instances in the file. Each instance begins with an identifier and terminates with a semicolon ";". The identifier is a hash symbol "#" followed by an unsigned integer. Every instance must have an identifier that is unique within this file but the same identifier can be given to another instance in another file. This includes another version of the same file.
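The example file that the following discussion refers to did not survive extraction; a minimal Part 21 exchange structure of the kind being described might look as follows (the header values, instance numbers and geometry are illustrative only, not the original example).

ISO-10303-21;
HEADER;
FILE_DESCRIPTION((''), '2;1');
FILE_NAME('bracket.stp', '2013-03-04T10:00:00', ('author'), ('organization'),
          'preprocessor', 'originating CAD system', '');
FILE_SCHEMA(('AUTOMOTIVE_DESIGN'));
ENDSEC;
DATA;
#10 = ORIENTED_EDGE('', *, *, #44820, .T.);
#20 = EDGE_LOOP('', (#10));
#30 = FACE_BOUND('', #20, .T.);
ENDSEC;
END-ISO-10303-21;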
The identifier is followed by the name of the entity that defines the instance. The names are always capitalized because EXPRESS is case insensitive. The name of the instance is then followed by the values of the attributes listed between parentheses and separated by commas. Let's look at instance #30. This instance is defined by an entity called FACE_BOUND. The entity has three attributes. The first attribute is an empty string, the second is a reference to an EDGE_LOOP and the third is a Boolean with the value True. The EXPRESS definition of FACE_BOUND is shown below. FACE-BOUND is an indirect subtype of representation item. The first attribute of FACE-BOUND (the string) is defined by this super-type. Note also that the "bound" attribute of face_bound is defined to be a loop entity so EDGE_LOOP must be a subtype of LOOP. The first goal was met by requiring the files to be encoded in simple ASCII, by requiring all of the data to be in one file, and by the requiring every identifier to be an unsigned integer that only has to be unique within the context of one file. The latter condition was presumed to make it easier for engineers to write parsers for the data.
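The EXPRESS text referred to here was also lost in extraction; the definition in the STEP integrated resources is essentially the following, with the name attribute (the string) inherited from representation_item further up the supertype chain.

ENTITY face_bound
  SUBTYPE OF (topological_representation_item);
  bound       : loop;
  orientation : BOOLEAN;
END_ENTITY;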
The second design goal was met by minimizing the number of keywords and structural elements (nested parentheses). In most cases, the only keyword is the name of the entity and unless there are multiple choices there are no other keywords listed. The multiple choices are rare. They happen if an attribute can be defined by multiple instances of the same type. An example would be an attribute which could be a length or a time. If both possibilities are represented as floating point numbers then a keyword is necessary to indicate which has been chosen.
Making the Part 21 format simple was helpful in the early years as some users developed EXPRESS models and hand populated those files. However as previously mentioned there are thousands of definitions in STEP, STEP-NC and IFC and to make matters worse the definitions are normalized to avoid insertion and deletion anomalies. Consequently, it quickly became too difficult for engineers to parse the data by hand and a small industry grew up to manage it using class libraries. This industry then assisted the CAD, CAM and BIM vendors as they implemented their data translators.
The two design goals conflict when minimizing the number of keywords makes the files harder to read. This was dramatically illustrated when XML became popular. The tags in XML have allowed users to create many examples of relatively easy to understand, self-describing web data. However, for product models contain thousands of definitions the tags are less helpful. The following example recodes the first line of the data section of the previous example in XML.
<data-instance ID="i10">
  <type IDREF="oriented_edge">
    <attribute name="name" type="label"></attribute>
    <attribute name="edge_start" type="vertex" value="derived"/>
    <attribute name="edge_end" type="vertex" value="derived"/>
    <attribute name="edge_element" type="edge"><instance-ref IDREF="i44820"/></attribute>
    <attribute name="orientation" type="BOOLEAN">TRUE</attribute>
  </type>
</data-instance>

Six XML data formats have been defined for STEP in two editions of a standard known as Part 28 [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 28: Implementation methods: XML representations of EXPRESS schemas and data, using XML schemas[END_REF]. The example above shows the most verbose format called the Late Binding which tried to enable intelligent data mining applications. Other formats were more minimal though none as minimal as Part 21. In practice the XML tags add small value to large product models because anyone that wants to parse the data needs to process the EXPRESS. Plus they cause occasional update problems because in XML adding new choices means adding new tags and this can mean old data (without the tags) is no longer valid.
The relative failure of Part 28 has been mirrored by difficulties with Part 21 Edition 2. This edition sought to make it easier for multiple standards to share data models. At the time STEP was moving to an architecture where it would be supporting tens or hundreds of data exchange protocols each tailored to a specific purpose and each re-using a common set of definitions. Edition 2 made it possible to validate a STEP file in all of its different contexts by dividing the data into multiple sections each described by a different schema. In practice, however, the applications have folded down to just three that are becoming highly successful: STEP for design data, STEP-NC for manufacturing data and IFC for building and construction data.
The failures of XML and Edition 2 need to be balanced against the success of Edition 1. This edition is now supported by nearly every CAD, CAM and BIM system. Millions of product models are being made by thousands of users. Consequently there is an appetite for more, and users would like to be able to create massive product models using crowd sourcing.
MASSIVE PRODUCT MODELS
The following subsections describe how Edition 3 enables very large product models. The first two subsections describe how model fragments can be linked by URI's in anchor and reference sections. The third subsection describes how the transport of collections of models is enabled using ZIP archives. The fourth subsection describes how the population of a model is managed using a schema population.
Anchor section
The syntax of the new anchor section is simple. It begins with the keyword ANCHOR and ends with the keyword ENDSEC. Each line of the anchor section gives an external name for one of the entities in the model. The external name is a reference that can be found using the fragment identifier of a URL. For example, the URL www.server.com/assembly.stp#front_axle_nauo references the front_axle in the following anchor section.
ANCHOR;
<front_axle_nauo> = #123;
<rear_axle_nauo> = #124;
<left_wheel_nauo> = #234;
<right_wheel_nauo> = #235;
ENDSEC;
Unlike the entity instance identifiers of Edition 1, anchor names are required to be unique and consistent across multiple versions of the exchange file. Therefore, although the description of the front_axle in chasis.stp may change, the front_axle anchor remains constant.
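As a rough illustration of how a consuming application might resolve such a name, the following sketch parses an anchor section into a map and looks up a URI fragment. The helper name and the shape of the returned data are assumptions made for this example only, not part of the specification.

// Hypothetical helper: turns the text of an ANCHOR section into a Map from
// anchor name (e.g. "front_axle_nauo") to entity instance id (e.g. "#123").
function parseAnchorSection(text) {
  const anchors = new Map();
  const pattern = /<([^>]+)>\s*=\s*(#[^;]+);/g;   // lines of the form <name> = #id;
  let m;
  while ((m = pattern.exec(text)) !== null) {
    anchors.set(m[1], m[2]);
  }
  return anchors;
}

// Resolve the fragment of a URL such as "assembly.stp#front_axle_nauo".
function resolveFragment(anchors, url) {
  const fragment = url.split('#')[1];
  return anchors.get(fragment);   // e.g. "#123", or undefined if not exported
}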
Reference section
The reference section follows the anchor section and enables references into another file. Together the reference and anchor sections allow very large files to be split into fragments.
The reference section begins with the keyword REFERENCE and ends with the keyword ENDSEC. Each line of the reference section gives a URI for an entity instance defined in an external file. In this example, the external file contains references to the anchors given in the previous example. The file defines a manufacturing constraint on an assembly [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 28: Implementation methods: XML representations of EXPRESS schemas and data, using XML schemas[END_REF]. The example uses names for the entity identifiers. This is another new feature of Edition 3. Instead of requiring all entity identifiers to be numbers, they can be given names to make it easier for casual users to code examples, and for systems to merge data sets from multiple sources. Numbers are used for the entities #124, #125 and #126 because it is traditional, but the rest of the content has adopted the convention of giving each instance a name to indicate its function and a qualifier to indicate its type. Thus "chasis_pd" indicates that this instance is the product_definition entity of the chasis.stp file.
ZIP Archives and Master Directories
The anchor and reference sections allow a single earlier-edition file to be split into multiple new files, but this can result in management problems. The old style led to files that were large and difficult to edit outside of a CAD system, but all of the data was in one file, which was easier to manage. ZIP archives allow Part 21 Edition 3 to split the data and continue the easy data management. A ZIP archive is a collection of files that can be e-mailed as a single attachment. The contents of the archive can be any collection, including another archive. A ZIP archive is compressed and may reduce the volume by as much as 70%. Many popular file formats such as ".docx" are ZIP files and can be accessed using ZIP tools (sometimes only after changing the file extension to .zip). Edition 3 allows any number of STEP files to be included in an archive. Each file in the archive can be linked to the other files in the ZIP using relative addresses and to other files outside of the ZIP using absolute addressing. Relative addresses to files outside the ZIP are not allowed so that applications can deploy the zipped data at any location in a file system.
References into the ZIP file are allowed but only via a master directory stored in the root. This Master directory describes where all the anchors in the ZIP can be found using the local name of the file. Outside the archive the only visible name is that of the archive itself. If there is a reference to this name then a system is required to open the master directory and look for the requested anchor.
In Figure 1 the file ISO-10303-21.txt is the master directory. It contains the forwarding references to the other files and it is the file that can be referenced from outside of the archive using the name of the archive.
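To make the indirection concrete, the following sketch shows how a reader might resolve a reference of the form archive.zip#front_axle_nauo. The helper readZipEntry stands in for whatever ZIP library an implementation uses, and the simplified master-directory line format assumed here is an illustration only, not the format defined by the specification.

// Hypothetical: readZipEntry(zipPath, entryName) returns the text of one member
// of the archive; it stands in for a real ZIP library call.
function resolveArchiveAnchor(zipPath, anchorName, readZipEntry) {
  // The master directory in the root forwards each anchor to a member file.
  const master = readZipEntry(zipPath, 'ISO-10303-21.txt');
  // Assumed (simplified) line format: <anchor> = member-file.stp#local-anchor;
  const pattern = new RegExp('<' + anchorName + '>\\s*=\\s*([^#;]+)#([^;]+);');
  const m = pattern.exec(master);
  if (!m) return null;                              // anchor not exported by this archive
  const memberName = m[1].trim();
  const memberText = readZipEntry(zipPath, memberName);
  // The local anchor is then looked up in the member file's ANCHOR section
  // (see the parseAnchorSection sketch above).
  return { member: memberName, anchor: m[2].trim(), text: memberText };
}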
Schema population with time stamps
The support for data distribution in Part 21 Edition 3 gives rise to a problem for applications that want to search a complete data set. If the data is distributed and normalized then there may be "orphan" files that contain outbound references but no inbound ones. For example, Figure 2 shows how a file may establish a relationship between a workplan and a workpiece by storing URI's to those items, while containing nothing that needs to be referenced from anywhere else. Edition 3 addresses this with the notion of a schema population. The schema population includes all the entity instances in all the data sections of the file.
If there is a reference section, then the schema population also includes the schema populations of all the files referenced by URIs. If the header has a schema_population definition then the schema population also includes the schema population of each file in the set of external_file_locations.
The last inclusion catches the "orphan files". The following code fragment gives an example. In this example the STEP file shown is referencing two other files. There are two other attributes in each reference. An optional time stamp shows when the reference was last checked. An optional digital signature validates the integrity of the referenced file. The time stamp and signature enable better data management in contractual situations. Clearly if the data is distributed there will be opportunities for mistakes and mischief.
SCHEMA_POPULATION( ('http
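As a rough illustration of the traversal such a declaration implies, the following sketch collects a schema population by walking the data-section instances, the files referenced by URIs, and any files listed in the header's schema_population. The loader function and the object shapes are assumptions made for this example, not the API of any particular toolkit.

// Collect the (transitive) schema population of an exchange file.
// loadFile(uri) is a hypothetical loader returning an object with
//   dataInstances  : entity instances found in the data sections
//   referencedUris : URIs appearing in the REFERENCE section
//   externalFiles  : URIs listed in a header schema_population, if any
function schemaPopulation(uri, loadFile, seen = new Set()) {
  if (seen.has(uri)) return [];                     // avoid cycles between files
  seen.add(uri);
  const file = loadFile(uri);
  let population = [...file.dataInstances];
  for (const ref of file.referencedUris || []) {
    const refFile = ref.split('#')[0];              // drop the anchor fragment
    population = population.concat(schemaPopulation(refFile, loadFile, seen));
  }
  for (const ext of file.externalFiles || []) {     // this inclusion catches "orphan" files
    population = population.concat(schemaPopulation(ext, loadFile, seen));
  }
  return population;
}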
INTELLIGENT INTERFACES
The following subsections describe how Edition 3 enables crowdsourcing using intelligent interfaces. The first subsection describes how JavaScript has been added to the model. The second subsection describes how some of the data rules have been relaxed for easier programming. The third subsection describes how application specific programming is enabled using data tags. The last subsection summarizes the options for making the programming more modular.
JavaScript
The goal of adding JavaScript to Part 21 Edition 3 is to make managing the new interfaces easier by encapsulating the tasks that can be performed on those interfaces as methods. In Edition 3 a file can include a library of JavaScript functions to operate on an object model of the anchors and references but not necessarily the data sections. In many cases the volume of the data sections will overwhelm a JavaScript interpreter. For example, the following code checks that a workpiece is valid before linking it to a workplan.
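A minimal sketch of such a check, assuming that the P21.Model object created for each exchange structure exposes its anchors and cached tag values as plain properties; the property names (anchors.workpiece_shape, tags.status) and the accepted value 'approved' are illustrative assumptions, not taken from the draft specification:

// Sketch only: validate a workpiece model before it may be linked to a workplan.
function checkWorkpiece(workpiece) {
  // The workpiece must export the shape anchor the linker wants to reference.
  if (!workpiece.anchors || !workpiece.anchors.workpiece_shape) {
    return false;                 // no shape anchor exported
  }
  // Illustrative readiness tag cached on the interface (see the tag section below).
  if (workpiece.tags && workpiece.tags.status !== 'approved') {
    return false;                 // not yet released for manufacturing
  }
  return true;
}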
The following three step procedure is used for the conversion:
1. Read the exchange structure and create a P21.Model object with anchor and reference properties.
2. Read the JavaScript program definitions listed in the header section.
3. Execute the JavaScript programs with the "this" variable set to the P21.Model object.
For more details see Annex F of the draft specification at www.steptools.com/library/standard/. The procedure creates one object for each exchange file and gives it the behaviour defined in the JavaScript. The execution environment then uses those objects in its application. For example, in the code above workplan and workpiece are object models for two exchange structures and the application is checking for compatibility before linking them.
Data relaxation
In order to be reusable a product model needs to be flexible and extensible. The information models defined by the STEP, STEP-NC and IFC standards have been carefully designed over many releases to achieve these qualities. Interface programming is different because an interface can be created as a contingent arrangement of anchors and references for a specific purpose. The information model has not changed so application programming for translation systems stays the same, but for interface programming the requirements are different. Therefore, two relaxations have been applied to the way data is defined for interfaces in the new edition.
1. The instance identifiers are allowed to be alphanumeric. 2. The values identified can be lists and literals.
Editions 1 and 2 of Part 21 restricted the format so that every identifier had to be an unsigned integer. This helped emphasize that the identifiers would not be consistent across files, and at the time it was thought to make it easier for parsers to construct symbol tables. The symbol table argument does not hold: every modern system uses a hash table for its symbols, and these tables are agnostic with respect to the format of the identifier.
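As a small illustration of this point, a symbol table keyed by the identifier text accepts numeric and alphanumeric identifiers alike; the entity names used here are only placeholders:

// A string-keyed table does not care whether the identifier is numeric or named.
const symbols = new Map();
symbols.set('#10', { entity: 'oriented_edge' });                 // classic numeric identifier
symbols.set('#front_axle_nauo', { entity: 'assembly_usage' });   // Edition 3 named identifier
console.log(symbols.get('#front_axle_nauo').entity);             // -> "assembly_usage"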
Requiring numbers for identifiers has always made hand editing harder than necessary. Therefore, Edition 3 supports alphanumeric names. The following example of unit definition illustrates the advantage. Each unit definition follows a pattern. Unit definitions also show why a more flexible approach to defining literals is desirable. The following is a file that defines some standard constants. The new identifiers can be used in the data section as well as the anchor and reference sections.
Tags for fast data caching
In many cases the JavaScript functions operating on the interfaces need additional data to be fully intelligent. Therefore, the new edition allows additional values to be tagged into the reference and data sections. Each tag has a category and a value. The category describes its purpose and the value is described by a literal. The following example shows the data checked by the JavaScript function of the previous example. The tag data may be initialized by a pre-processor or created by other means. In the above example two pieces of data are being linked and it is important to know that the workpiece is ready for use by manufacturing.
Another role for tags is as a place to cache links to visualization information. Again this data may be summarized from the STEP, STEP-NC or IFC information model and the tags allow it to be cached at a convenient place for rapid display.
The last example shows tags being used to document the STEP ARM to AIM mapping. Those who have worked on STEP and STEP-NC know that they have two definitions: a requirements model describing the information requirements; and an interpreted model that maps the requirements into a set of extensible resources [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 1: Overview and fundamental principles[END_REF]. The tags can become a way to represent the mapping between these two models in the data.
REFERENCE;
#1234 {x_axis:(#3091,#3956)}{y_axis:(#2076)} = <#machine_bed>;
#4567 {z_axis:(#9876,#5273)} = <#tool_holder>;
ENDSEC;
A quick summary of the new data options in Edition 3 is that it allows URL's to be placed between angular brackets ("<>") and application specific data to be placed between curly brackets ("{}").
Modularity options
Part 21 Edition 3 has three options for modularizing STEP, STEP-NC and IFC data.
1. Continue using traditional files, but surround those files with interfaces referencing into the data.
2. Create a product data web linked by URL's.
3. Create a ZIP archive of replaceable components.
The traditional approach to STEP implementation creates a massive symbol table using an EXPRESS compiler and then reads the exchange data into objects described by the table. This is expensive both with respect to processing time and software investment, but efficient if all of the data is being translated into a CAD system.
The new edition allows the Part 21 data to be arranged into interfaces for light weight purposes such as checking tolerances, placing subsystems and running processes. Therefore, alternate implementation paradigms are possible. Three being considered include:
1. A JavaScript programming environment can process just the data in an interface. In this type of implementation the Part 21 files are rapidly scanned to create the objects required for the anchor and reference sections.
2. A web browser environment can be activated by including an "index.html" file in the ZIP archive along with code defining the P21 object model of the interface. This type of implementation will be similar to the previous one but with streaming used to read the Part 21 data.
3. The third type of implementation is an extended Standard Data Access Interface (SDAI). The SDAI is an application programming interface for Edition 1 and 2 data that can be applied to Edition 3 because of upward compatibility.
One option for an SDAI is to merge all the Edition 3 data into one large file for traditional CAD translation processing. Another option is to execute the JavaScript and service web clients.
APPLICATIONS
PMI Information for Assemblies
The first in-progress application is the management of Product Manufacturing Information (PMI) for assemblies. Figure 3 shows a flatness tolerance on one of the bolts in an assembly. In the data, a usage chain is defined to show which of the six copies of the bolt has the tolerance.
Fig. 3. -Assembly tolerances
The data for the example can be organized in many ways. One is the traditional single file, which works well for small data sets. For large assemblies the new specification enables a three-layer organization. The lowest layer is the components in the model. The second layer is the assemblies and sub-assemblies. The third layer is the PMI necessary to manufacture the assembly.
In order for this organization to work the product components must expose their product coordinate systems to the assembly modules and their faces to the PMI modules. Similarly the assembly modules must expose their product structure to the PMI modules. The following shows the resulting anchor and reference sections of the PMI module. This code is then referenced in the data sections of the PMI modules.
Data Model Assembly
STEP and STEP-NC are related standards that share definitions. The two models can be exported together by integrated CAD/CAM systems, but if different systems make the objects then they must be linked outside of a CAD system. In STEP-NC a workplan executable is linked to the shape of the workpiece being machined. The two entities can be exported as anchors in the two files and an intelligent interface can link them on-demand. The following code shows the interface of a linker file with open references to the two items that must be linked. The linker JavaScript program given earlier sets these references after checking the validity of the workpiece and workplan data sets. It also sets the name of the reference to indicate whether the workpiece represents the state of the part before the operation (as-is) or after the operation (to-be).
REFERENCE;
#exec = $;
#shape = $;
#name = $;
ENDSEC;
DATA;
#10=PRODUCT_DEFINITION_PROCESS(#name,'',#exec,'');
#20=PROCESS_PRODUCT_ASSOCIATION('','',#shape,#10);
ENDSEC;
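Continuing the validity-check sketch shown earlier, the linking step itself might look as follows; the reference property names, the anchor names and the URI handling are all illustrative assumptions, not the object model defined by the specification:

// Sketch only: fill in the open references (#exec, #shape, #name) of the linker file.
// workplan and workpiece are P21.Model objects for the two exchange structures.
function linkProcessToShape(linker, workplan, workpiece, state) {
  if (!checkWorkpiece(workpiece)) {
    throw new Error('workpiece is not ready to be linked');
  }
  linker.references.exec  = workplan.uri + '#main_workplan';     // assumed anchor name
  linker.references.shape = workpiece.uri + '#workpiece_shape';  // assumed anchor name
  linker.references.name  = state;   // "as-is" or "to-be", as described in the text
  return linker;
}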
A number of CAM vendors are implementing export interfaces for STEP-NC [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 238: Application Protocols: Application interpreted model for computerized numerical controllers[END_REF]. They export the process data which needs to be integrated with workpiece data to achieve interoperability. The workpieces define the cutting tools, fixtures and machines as well as the as-is and to-be removal volumes.
Next Generation Manufacturing
The last application is the control of manufacturing operations. Today manufacturing machines are controlled using Gcodes generated by a CAM system [START_REF] Hardwick | A roadmap for STEP-NCenabled interoperable manufacturing[END_REF]. Each code describes one or more axis movements. A part is machined by executing millions of these codes in the right order with the right setup and the right tooling. Change is difficult, which makes manufacturing inflexible and causes long delays while engineers validate incomplete models. Part 21 Edition 3 can replace these codes with JavaScript running STEP-NC. The broad concept is to divide the STEP-NC program into modules, each describing one of the resources in the program. For example, one module may define a toolpath and another module may define a workingstep. The modules can be put into a ZIP archive so the data volume will be less and the data management easier. The JavaScript defined for each module makes the module intelligent. For example, for a workingstep, the script can control the tool selection and set tool compensation parameters.
Before a script is started other scripts may be called to make decisions. For instance an operation may need to be repeated because insufficient material was removed, or an operation may be unnecessary because a feature is already in tolerance. Such functionalities can be programmed in today's Gcode languages but only with difficulty.
The JavaScript environment is suited to manufacturing because it is event driven. Performance should not be an issue because in practice machine controls operate by running look-ahead programs to predict future movements. Changing the look-ahead to operate on JavaScript instead of Gcode is probably a better use of resources.
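The decision logic described above might be sketched as follows; the workingstep object, the probing calls and the machine interface are hypothetical and only illustrate the idea, they are not part of STEP-NC or of any controller API:

// Sketch only: event-driven execution of one workingstep with simple decision logic.
async function runWorkingstep(ws, machine) {
  if (await machine.probeFeatureInTolerance(ws.feature)) {
    return 'skipped';                         // feature is already within tolerance
  }
  machine.selectTool(ws.tool);
  machine.setCompensation(ws.tool.lengthOffset, ws.tool.radiusOffset);
  do {
    await machine.executeToolpaths(ws.toolpaths);
  } while (!(await machine.probeStockRemoved(ws.removalVolume)));  // repeat if material remains
  return 'done';
}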
For an example of how such a system might operate see Figure 4, which is a screen capture of the following WebGL application: http://www.steptools.com/demos/nc-frames.html?moldy/

The new edition adds URI's and JavaScript to enable the crowdsourcing of massive product models. Other supporting features include a schema population to keep track of all the components, ZIP archives to enable better data management, data relaxation to enable easier interface programming, and data tags to allow application specific programming.
The new specification is not yet finished. Additional extensions are still being considered. They include allowing the files in a ZIP archive to share a common header section, merging the anchor and reference sections into an interface section and loosening the syntax to allow for lists of URI's.
The current specification can be accessed at: http://www.steptools.com/library/standard/p21e3_dis_preview.html
Implementation tools are being developed to enable testing. They include libraries to read and write the data in a standalone JavaScript system called NodeScript, libraries to stream the data into web browsers, and libraries to run the scripts in an SDAI. The NodeScript implementation is available as open source at the following location: http://www.steptools.com/library/standard/
The next steps (pun intended) are to:
1. Complete the prototype implementations so that they can be used to verify the specification.
2. Submit the specification to ISO for review as a Draft International Standard (DIS).
3. Respond to the international review with additional enhancements for the additional requirements.
4. Begin the development of common object models for design and manufacturing applications. For example, object models for the execution of machine processes, and object models for the definition of assembly tolerances.
5. Create applications to demonstrate the value of the specification. The applications will include attention-grabbing ones that use kinematics to show the operation of products and machines, value-added ones that use the JavaScript to link data sets and create massive product models, and manufacturing ones to verify tolerances while processes are running.
ISO-10303-21;
HEADER;
/* Exchange file generated using ST-DEVELOPER v1.5 */
FILE_DESCRIPTION(
/* description */ (''),
/* implementation_level */ '2;1');
FILE_NAME(
/* name */ 'bracket1',
/* time_stamp */ '1998-03-10T10:47:06-06:00',
/* author */ (''),
/* organization */ (''),
/* preprocessor_version */ 'ST-DEVELOPER v1.5',
/* originating_system */ 'EDS - UNIGRAPHICS 13.0',
/* authorisation */ '');
FILE_SCHEMA (('CONFIG_CONTROL_DESIGN')); /* AP203 */
ENDSEC;
DATA;
#10 = ORIENTED_EDGE('',*,*,#44820,.T.);
#20 = EDGE_LOOP('',(#10));
#30 = FACE_BOUND('',#20,.T.);
#40 = ORIENTED_EDGE('',*,*,#44880,.F.);
#50 = EDGE_LOOP('',(#40));
#60 = FACE_BOUND('',#50,.T.);
#70 = CARTESIAN_POINT('',(-1.31249999999997,14.594,7.584));
#80 = DIRECTION('',(1.,0.,3.51436002694883E-15));
…
ENDSEC;
END-ISO-10303-21;
REFERENCE;
/* assembly definitions for this constraint */
#outer_seal_nauo = <assembly.stp#outer_seal>;
#outer_bearing_nauo = <assembly.stp#outer_bearing>;
#right_wheel_nauo = <assembly.stp#right_wheel>;
#rear_axle_nauo = <assembly.stp#rear_axle>;
/* Product definitions */
#seal_pd = <assembly.stp#seal_pd>;
#bearing_pd = <assembly.stp#bearing_pd>;
#wheel_pd = <assembly.stp#wheel_pd>;
#axle_pd = <assembly.stp#axle_pd>;
#chasis_pd = <assembly.stp#chasis_pd>;
ENDSEC;
Fig. 1. -Zip Archive
Fig. 2. -Link files for massive databases
REFERENCE;
#meter = <http://www.iso10303.org/part41/si_base_units.stp#METRE>;
#kilogram = <http://www.iso10303.org/part41/si_base_units.stp#KILOGRAM>;
#second = <http://www.iso10303.org/part41/si_base_units.stp#SECOND>;
ENDSEC;
DATA;
/* Content extracted from part 41:2013 */
#5_newton=DERIVED_UNIT_ELEMENT(#meter,1.0);
#15_newton=DERIVED_UNIT_ELEMENT(#kilogram,1.0);
#25_newton=DERIVED_UNIT_ELEMENT(#second,-2.0);
#newton=SI_FORCE_UNIT((#5_newton,#15_newton,#25_newton),*,$,.NEWTON.);
#5_pascal=DERIVED_UNIT_ELEMENT(#meter,-2.0);
#25_pascal=DERIVED_UNIT_ELEMENT(#newton,1.0);
#pascal=SI_PRESSURE_UNIT((#5_pascal,#25_pascal),*,$,.PASCAL.);
REFERENCE;
#bolt = <bolt.stp#bolt>;
#nut = <nut.stp#nut>;
#rod = <rod.stp#rod>;
#plate = <plate.stp#plate>;
#l-bracket = <l-bracket.stp#l-bracket>;
#bolt_shape = <bolt.stp#bolt_shape>;
#nut_shape = <nut.stp#nut_shape>;
#rod_shape = <rod.stp#rod_shape>;
#plate_shape = <plate.stp#plate_shape>;
#l-bracket_shape = <l-bracket.stp#l-bracket_shape>;
#bolt_wcs = <bolt.stp#bolt_wcs>;
#nut_wcs = <nut.stp#nut_wcs>;
#rod_wcs = <rod.stp#rod_wcs>;
#plate_wcs = <plate.stp#plate_wcs>;
#l-bracket_wcs = <l-bracket.stp#l-bracket_wcs>;
#bolt_top_face = <bolt.stp#bolt_top_face>;
ENDSEC;
Fig. 4. -WebGL Machining
"1003715"
] | [
"33873",
"488114",
"488114",
"366312",
"469866"
] |
01485824 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485824/file/978-3-642-41329-2_2_Chapter.pdf | Fumihiko Kimura
email: fumihiko.kimura@hosei.ac.jp
IT Support for Product and Process Development in Japan and Future Perspective
Keywords: Product development, Process development, IT support, CAD, CAM
Due to the globalization of markets and manufacturing activity, the manufacturing industries of industrially advanced countries are facing difficult problems, such as severe competition with low-cost production in developing countries and radical changes of customer requirements in industrially mature countries. For coping with these problems, it is important to identify hidden or potential customer expectations, and to develop systematized design and manufacturing technology to augment human expertise for innovative product development. It is known that the strength of Japanese manufacturing industry comes from the intimate integration of sophisticated human expertise and highly efficient production capability. For keeping the competitiveness of the Japanese industry, it is strongly required to systematize product and process development throughout the total product life cycle, and to introduce IT methods and tools for supporting creative and intelligent human activities and for automating well understood engineering processes. In this paper, current issues in manufacturing industry are generally reviewed. Future directions of manufacturing industry are described, and important technological issues and their IT support solutions are discussed. Finally, the future perspective for advanced IT support is investigated.
INTRODUCTION
Manufacturing is a basic discipline for sustaining the economies of advanced industrialized countries, such as Japan, the USA and Europe. However, due to the recent trends of globalization in manufacturing activities, difficult issues are arising, such as severe competition with the very low-cost production of developing countries, risk management for the global distribution of product development and production activities, environmental problems for achieving sustainable manufacturing, etc. In this paper, the recent advances of product and process development technology, current issues and future perspectives are reviewed from the standpoint of IT support, and in particular Japanese activities are discussed with a view to keeping their competitiveness in the future.
It is well known that the strength of Japanese manufacturing industry comes from the intimate integration of sophisticated human expertise and highly efficient production technology. For keeping the competitiveness of the Japanese industry, it is essential to maintain the quality and volume of the expert human work force, but it is predicted that the population of Japanese working-age people will decrease by about half by the year 2050. Therefore it is strongly required to systematize the product and process development technology throughout the total product life cycle, and to introduce IT methods and tools for supporting creative and intelligent human activities and for automating well understood engineering processes. This new IT-supported way of product and process development will rationalize the current human-dependent processes, and achieve efficient global collaboration among industrially developed countries and developing countries. The underlying research and development activities are discussed in this paper.
In the next section, current issues in manufacturing industry are generally reviewed. Future directions of manufacturing industry are described in section 3. Then important technological issues and their IT support solutions are discussed in sections 4 to 7. Finally future perspective for advanced IT support is shown in section 8.
CURRENT ISSUES IN MANUFACTURING
Technological, economic, and social situations for manufacturing have been changing rapidly in recent years. There are many issues the manufacturing industry is facing today, especially in industrially advanced countries. Major issues are reviewed, and specific problems of Japanese industry are discussed.
Energy and resource constraints
Needless to say, the industrial technology of advanced countries cannot be fully diffused to developing countries, because it consumes too much energy and too many resources. For example, today's standard automobiles cannot be spread directly, and small, lightweight, energy-efficient vehicles should be developed for mass use in the developing countries. Technology innovation is required for such new developments. Japan and other advanced countries import large amounts of energy sources and other resources. There are always risks of disruption of those supplies due to natural disasters and political problems.
Competition in global market
A big consumer market is emerging in developing countries, and completely new categories of very cheap products are required to adapt to global market needs. It is fairly hard for Japanese industry to change its technology from high-end products to commodity products. This change requires not only a new product strategy but also new technology.
Radical changes of world market
In industrially advanced countries, most products have already spread among consumers and, generally speaking, consumer products do not sell well. How to awaken hidden potential demand for new products is a big problem.
Service design for product production
Fundamentally, it is very important for the manufacturing industry to capture potential social expectations for the future, and to propose a vision and scenario for approaching them. For example, it is an urgent issue to realize a society of higher resource efficiency. Maintenance of social infrastructure, such as roads and bridges, is a good example, and new technology is urgently desired for efficient life cycle management.
Population problem
In Japan, and similarly in other advanced countries, the labour force in the manufacturing industry will decrease rapidly in the coming 50 years. To sustain the industrial level, it is very important to amplify human intellectual ability by IT support, and to automate production processes as much as possible.
FUTURE DIRECTION FOR MANUFACTURING TECHNOLOGY INNOVATION
For analyzing the issues explained in the previous section, the role of design and manufacturing engineering is investigated, and the issues are classified according to customer expectation and technology systematization.
Figure 1 shows the key role of design and manufacturing engineering. For enhancing the quality of life (QOL), social expectation for technology development and new products is expressed in the form of explicit demands, social scenarios or a general social vision, through market mechanisms and other social/political mechanisms. Depending on the level of industrialization of the society, the expectation is either expressed very clearly or not expressed explicitly. Depending on such expectation, or independently of it, design and manufacturing engineering tries to develop and propose various attractive technology options in the form of new systems and products to the society, based on the contribution from basic science and engineering.
Traditionally, what customers wanted was clear and known both to the customers themselves and to the engineers. In such cases, according to the social expectation, appropriate technology options can be identified, and the developed systems and products are accepted by customers for a better QOL. This case corresponds to the Class I and Class II problems in Figure 2 and is further explained below.
Today, especially in industrially advanced societies, customer demand or social expectation is not explicitly expressed, but only potentially noticed. Then, even though advanced technology and products are available, they may not be accepted by the society, and the customers are frustrated. There seems to be a big discrepancy between customer wish and producer awareness. This case corresponds to the Class III problem in Figure 2 and is further explained below.

Class I Problem: Customer expectation is clearly expressed, and the surrounding conditions for manufacturing activity are well known to producers. Therefore the problems are well understood, and systematized technology can be effectively applied.
Class II Problem: Customer expectation is clearly expressed, but the surrounding conditions are not well captured or not known. In this case, product and production technology must be adapted to changing and unknown situations. Integration of human expertise is required for problem solving.
Class III Problem: Customer expectation is not explicitly expressed or is not known. In this case, the problems are not clearly described, and tight coupling exists between the identification of customer expectation and the corresponding technology for problem solving. A co-creative approach between customers and producers is mandatory.

The current situation of Japanese manufacturing industry is explained by use of the above classification. Figure 3 shows the manufacturing problem classification with two coordinates: customer expectation and adaptability to surrounding conditions. Here, adaptability to surrounding conditions is considered to depend on technology systematization, and it can be characterized by technology innovativeness and maturity.
Fig. 3. -Current trend of product development
In situations where the target is clearly set, Japanese industry has traditionally been very strong, even under varying conditions and with unknown technology issues, at adapting to difficult technological problems by applying leading-edge technology and very sophisticated human involvement, and at producing very high-quality, attractive products. This is a Class II problem, and this Japanese approach is called "Suriawase", which means the sophisticated integration of human expertise into problem solving.
If the problem is well understood, and the technology to solve the problem is well matured, the whole product development and production process can be systematized, and possibly automated. This is a Class I problem. Products which belong to this class tend to be mass production products, and their competitiveness mainly depends on a cheap price. The current difficulty of Japanese manufacturing industry is the technological and organizational inflexibility in adapting to this type of problem. Simply applying the sophisticated Class II problem-solving method to Class I problems results in very expensive products with excessive quality.
If we look at the world market situation, two expanding or emerging market areas are recognized, as shown in Figure 3. One area is a mass production commodity product area, where products belong to the Class I problem, and the price is the most critical competitive factor. Another area is an innovative product area, where products belong to the Class III problem. The most important factor for Class III products is to identify customers' potential or hidden expectation, and to incorporate appropriate advanced knowledge from basic science and engineering toward product innovation.
Based on the above discussion, important future directions for manufacturing technology development are considered.
Identification of customer expectation
As product technology is advancing so rapidly, customers are normally unable to capture the vision of future society, and tend to be frustrated with the products and systems proposed by the producers. It is important to develop a methodology to search for hidden and potential social expectations, and to identify the various requirements of customers for daily life products and social infrastructure. It is effective to utilize IT methods to promote observation, prediction and information sharing for a mass-population society. This issue is discussed in Section 4.
Systematization of technology
Manufacturing problems are becoming complex, with large system problems, complexity problems due to multi-disciplinary engineering, and extremely difficult requirements toward product safety and energy efficiency, etc. Traditional human-dependent approaches alone cannot cope with these problems effectively, and it is mandatory to introduce advanced IT support, and to systematize and integrate various engineering disciplines into design and engineering methods. These issues are discussed in Sections 5, 6 and 7.
POTENTIAL CUSTOMER EXPECTATION
It is often argued that recent innovative products, such as a smart phone or a hybrid vehicle, could not have been commercialized by conventional market research activity, because the impact of such innovative products is difficult for normal consumers to imagine due to their technological difficulty. Many customers have a vague expectation for new products, but they cannot express their wish correctly, and therefore their expectation cannot be satisfied. Manufacturers can offer new products based on their revolutionary technology, but it is not easy to match their design intention with the customers' real wish. It is very important to develop a methodology to capture hidden or potential customer expectation. In recent years, Japanese industry and the research community have heavily discussed this issue, and proposed various practical methods for observing the potential social expectation [START_REF]Discovery of Social Wish through Panoramic Observation[END_REF]. It is still premature to develop systematic methods, but several useful existing methods are discussed:
─ systematic survey of existing literature,
─ multi-disciplinary observation,
─ observation of social dilemma and trade-off,
─ collection of people's intuitive concern,
─ deep analysis of already known social concern,
─ re-examination of past experiences.
Modelling, simulation, and social experiments are useful tools for prediction and information sharing. IT support is very effective for data mining and bibliometrics. The combination of information networks and sensor capabilities has a big potential for extracting unconsciously hidden social wishes. Promising approaches are advocated as Cyber-Physical Systems [START_REF] Lee | Computing Foundations and Practice for Cyber-Physical Systems: A Preliminary Report[END_REF]. A huge number of sensors are distributed into the society, and various kinds of information are collected and analysed. There are many interesting trials in Japan, such as energy consumption trends of supermarkets, combination of contents and mobility information, zero-emission agriculture, healthcare information, etc. Important aspects are the collection of demand-side information and the combination of different kinds of industrial activities. It is expected that, by capturing latent social or customer wishes, social infrastructure and individual QOL can be better servicified by the manufacturing industry.
LARGE SYSTEM AND COMPLEXITY PROBLEMS
Owing to the diversity and vagueness of customer requirements, industrial products and systems tend to become large in scale and complicated. Large system problems typically occur when designing social infrastructure or complicated products like a space vehicle, etc. Complexity problems are often related to multi-disciplinary engineering, such as designing mechatronic products.
For coping with large system problems, various system engineering methods are already well developed, but these methods are not fully exploited in product design and manufacturing practice. Those methods include the following system synthesis steps [START_REF]Towards Solving Important Social Issues by System-Building Through System Science and Technology[END_REF]:
─ common understanding via system modelling,
─ subsystem decomposition and structuring,
─ quantitative analysis of subsystem behaviour,
─ scenario setting and system validation.
There are important viewpoints for system design, such as harmonization of local optimization and global optimization, multi-scale consideration, structural approach to sensitivity analysis, etc. Standardization and modular concept are essential for effective system decomposition. The V-Model approach in system engineering is valid, but decomposition, verification and validation processes become very complicated with multi-disciplinary engineering activities in product design and manufacturing.
Various kinds of model-based approaches have been proposed, and standardization of model description schemes and languages is progressing. For coping with large-scale and complexity problems, it is important to take into account the following aspects:
─ modelling and federation at various abstraction levels and granularity,
─ modular and platform approach,
─ multi-disciplinary modelling.
For system decomposition and modularization, it is effective to utilize the concept of function modelling as a basis, instead of physical building blocks. Product development processes are modelled, starting from requirement modelling, via function modelling and structure modelling, to product modelling. This modelling should be performed in multi-disciplinary domains, and appropriately federated. Many research works are being performed, but industrial implementations are not yet fully realized.
UPSTREAM DESIGN PROBLEMS
It is argued that inappropriate product functionality and product defects are often caused upstream in the product development processes. It is very expensive and time-consuming to remedy such problems at the later stages of product development, because many aspects of the product have already been fixed. It is therefore very effective to spend more time and effort at the stages of product requirement analysis, concept design and function design.
In Japan, it is currently a big problem that products tend to have excessive functions and become so-called "Galapagos" products, as shown in Figure 4. "Galapagos" products are products designed to incorporate as much of the available leading technology as possible for product differentiation, which results in very expensive products. Now the big market is expanding into developing countries. As "Galapagos" products cannot be sold well in such markets, it is necessary to eliminate excessive functions and to make the products cheaper. But it is difficult to compete with cheap products designed from scratch especially for such markets.
Fig. 4. -Identification of essential requirement
This problem stems from the ambiguity of product requirement identification. The following approach is important for coping with this problem:
─ identification of essential product requirements,
─ realization of essential functions with science-based methods,
─ rationalization and simplification of traditional functions and processes,
─ minimization of required resources.
The above approach cannot be implemented with conventional technology only, but requires dedicated advanced technology specifically tailored to the target products, such as extremely lightweight materials, highly energy-efficient processes, etc. This is a way that Japanese industry can take to stay competitive.
A systematic approach to upstream design is a very keen research issue in Japan. An interesting approach is advocated under the name 1DCAE [START_REF] Ohtomi | The Challenges of CAE in Japan; Now and Then, Keynote Speech[END_REF]. There are many IT tools available today for precise engineering simulation. However, it is very cumbersome to use those tools for upstream conceptual design activity. Those tools are also inconvenient for thinking about and understanding product functional behaviour in an intuitive way. 1DCAE tries to establish a methodology to systematically utilize any available methods and tools to enhance the true engineering understanding of the product characteristics to be designed, and to support the conceptualization of the products. 1DCAE aims to exhaustively represent and analyse product functionality, performance and possible risks at an early stage of design, and to provide a methodology for visualizing the design process and promoting the designer's awareness of possible design problems.
Figure 5 shows a 1DCAE approach for mechatronics product design. For conceptual design, various existing IT tools are used based on mathematical analysis and physics, such as dynamics, electronics and control engineering. Through a good understanding of the product concept and functional behaviour, detailed product models are developed.

Fig. 5. -1DCAE approach for mechatronics product design [START_REF] Ohtomi | The Challenges of CAE in Japan; Now and Then, Keynote Speech[END_REF]

Figure 6 represents a 1DCAE approach for large-scale system design, such as a spacecraft. In this case, system optimization and risk mitigation are the most important design targets. The design process is optimized by using the DSM (Design Structure Matrix) method, and every necessary technological aspect of the product is modelled at an appropriate granularity for simulation.
IMPORTANCE OF BASIC TECHNOLOGY
In addition to the various system technologies, the basic individual technologies are also important. In recent years, remarkable progress in product and process development technology has been achieved by the use of high-speed computing capability. As indicated in Figure 1, there are many interesting and useful research results in basic science and engineering which could be effectively applied to practical design and manufacturing. However, many of those are not yet utilized. A science-based approach enables generally applicable, reliable technology. Quite often the so-called expert engineering know-how or "Suriawase" technology can be rationalized by such science-based developments. Through these developments, traditional manufacturing practices relying on veteran expert engineers and workers can be replaced by comprehensively automated systems, as discussed in the next section. By sophisticated engineering simulation with supercomputers, extreme technologies have been developed, such as lightweight high-strength materials, low-friction surfaces, nano-machining, low-energy consumption processes, etc. Advanced modelling technology has been developed, which can represent volumetric information including graded material properties and various kinds of defects in the materials. Powerful measurement methods, such as neutron imaging, are becoming available to visualize the internal structure of components and assemblies. By such precise modelling, the accuracy of computer simulation is very much enhanced, and delicate engineering phenomena can be captured which are difficult to observe in physical experiments.
Ergonomic modelling and robotics technology have evolved, and the behaviour of human-robot interaction can be simulated precisely by computer. This is a basis for designing comprehensively automated production systems, as discussed in the next section.

There are several critical issues for realizing future-oriented IT support systems, as shown in Figure 7. One of the important problems is to integrate well developed practical design methods into IT support systems. Many such methods exist: Quality Function Deployment (QFD), Functional Modelling, First Order Analysis (FOA), Design for X (DfX), Design for Six Sigma (DFSS), Design Structure Matrix (DSM), Optimization Design, Design Review, Failure Mode and Effect Analysis (FMEA), Fault Tree Analysis (FTA), Life Cycle Assessment (LCA), etc. For implementing those methods in digital support systems, it is necessary to represent pertinent engineering information, such as qualitative/quantitative product behaviour, functional structure, tolerances, errors and disturbances, etc. It is still premature to install such engineering concepts into practical IT support systems. Figure 9 shows an example of a product model representation which can accommodate various kinds of disturbances arising during production and product usage, and can support the computerization of practical reliability design methods. Further theoretical work and prototype implementations are desired for practical use.
Fig. 1. -Importance of design and manufacturing engineering
Fig. 2. -Classification of design and manufacturing problems [START_REF] Ueda | Value Creation and Decision-Making in Sustainable Society[END_REF]
Fig. 6. -1DCAE approach for large system design [START_REF] Ohtomi | The Challenges of CAE in Japan; Now and Then, Keynote Speech[END_REF]
Fig. 8. -Digital Pre-Validation of Production Lines [6]
Fig. 9. -Model based engineering with disturbances [7]
FUTURE PERSPECTIVE FOR ADVANCED IT SUPPORT
Various kinds of CAD/CAM systems are effectively utilized in industry today, and they have already become indispensable tools for daily product and process development work. However, their functionality is not satisfactory with respect to the future requirements for IT support discussed in the previous sections. Two important aspects of advanced IT support for product and process development are identified. One is the comprehensive support of intelligent human engineers for creative product design, and the other is the systematic rationalization and automation of well developed engineering processes.
The possible configuration of advanced IT support for product and process development is shown in Figure 7. "Virtual Product Creation" deals with intelligent human support for product design, and "Error-Free Manufacturing Preparation" performs comprehensive support and automation of well developed engineering processes. A core part of the system is "Integrated Life Cycle Modelling", which represents all the necessary product and process information for intelligent support and automation. Technologies discussed in Sections 5 to 7 are somehow integrated in these system modules.
Fig. 7. -IT support for product and process development
In Japanese industry, some of those system functionalities are implemented individually as in-house applications, and some are realized as commercially available IT support systems. Figure 8 shows an example of the digital pre-validation of production lines for electronics components. Recently, Japanese companies have been operating such factories in foreign countries. By using digital pre-validation, most of the required engineering work can be done in Japan before the actual installation of the factory equipment in foreign countries. The line design workload of human experts can be radically reduced by this support system. This system incorporates much sophisticated modelling and evaluation engineering know-how, and exhibits characteristics that differentiate it from conventional factory simulation systems.
SUMMARY
With the globalization of markets and manufacturing activity, the manufacturing industries of industrially advanced countries are facing difficult problems, such as severe competition with low-cost production in developing countries and radical changes of customer requirements in industrially mature countries, etc. For coping with these problems, it is important to identify hidden or potential customer expectations, and to develop systematized design and manufacturing technology to augment human expertise for innovative product development. As Japan expects a radical decrease of its population in the coming 50 years, it is very important to systematize the product and process development technology throughout the total product life cycle, and to introduce IT methods and tools for supporting creative and intelligent human activities, and for automating well understood engineering processes. In this paper, current issues in manufacturing industry are generally reviewed. Future directions of manufacturing industry are described, and important technological issues and their IT support solutions are discussed from the viewpoints of potential customer expectation identification, large system and complexity problems, upstream design problems and important basic technology. Finally, the future perspective for advanced IT support is investigated.
"1003717"
] | [
"375074"
] |
01485826 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485826/file/978-3-642-41329-2_31_Chapter.pdf | M Borzykh
U Damerow
C Henke
A Trächtler
W Homberg
Modell-Based Approach for Self-Correcting Strategy Design for Manufacturing of Small Metal Parts
Keywords: Metal parts, punch-bending process, control strategies, modelbased design, manufacturing engineering 1
The compliance of increasing requirements on the final product often constitutes a challenge in manufacturing of metal parts. The common problem represents the precise reproduction of geometrical form. The reasons for form deviation can be e.g. varying properties of the semi-finished product as well as wear of the punch-bending machine or the punch-bending tool themself. Usually the process parameters are manually adjusted on the introduction of new production scenario or after the deviation between the actual form of produced pieces and the designed form become clear. The choice of new process parameters is normally based on the experience of the machine operators. It leads to a time-consuming and expensive procedure right on the early stages of production scenarios as well as during the established production process. Furthermore, the trend of miniaturization of part sizes along with narrowing tolerances and increase in the strengths of materials drastically pushes up the requirements on the production process. Aiming at reduction of scrap rate and setup-time of production scenarios, a model-based approach is chosen to design a self-correcting control strategy. The strategy is designed by modeling the bending process. In the first step the bending process has to be analyzed on the model by varying of process variables influencing the process significantly. It is done by corresponding simulations. After that, the correlations between significant variables and geometrical deviation were defined and different self-correcting control strategies were designed and tested. In order to identify and validate the simulation and to test the quality of the self-correcting control strategies, a special experimental tool was built up. The experimental tool is equipped with an additional measurement de-vice and can be operated on a universal testing machine. Finally, the selfcorrecting control strategies were tested under real production conditions on the original tool in order to address further influences of the punch-bending machine on the manufacturing process.
INTRODUCTION
The increasing international competition on the one hand and the trend toward miniaturization of components on the other hand represent the challenges for manufacturers of electrical connection technology. To meet these challenges, the new production technologies with smart tools should be developed.
Complex metal parts e.g. plug contacts being used in the electrical connection technology are currently produced on cam disc based punch-bending machines. These machines are mechanically working and use the same adjustments for all production steps. Due to the on-going trend of reduction in size of produced parts with simultaneous decreasing tolerances and use of high strength materials, geometrical deviations of the final product appear increasingly. The use of punch-bending machines with NC-controlled axis allows a more flexible set up in comparison to cam disk based machines.
The figure 1 presents the active structure of the conventional bending process using a punch-bending machine with two NC-controlled axes. Material flow runs from the feed/punch through bending and correction punch down to the chute. The advantage here is that the operator selects a product to be manufactured and the movements of the each axis are automatically generated. In this case, for the production two punches are used: bending and correction punch. The operator thereby receives the status and the information about the machine, but not about the manufacturing process. Today, when undesirable geometrical deviations appear, the new process parameters have to be set by the operator based on his or her personal experience. These targeted interventions are only possible when the punch-bending machine is stopped. Hence, this procedure is very time consuming especially when it is necessary to perform it more than once. Besides that, frequent leaving of the tolerances leads to high scrap rate. The failure to reproduce form of the element within allowable tolerances is caused by varying shape or strength of the semi-finished material (flat wire) as well as the thermal and dynamical behavior and wears phenomena of the punch-bending machine itself or of the punch-bending tool.
OBJECTIVE
The aim of a project at the Fraunhofer Institute for Production Technology (IPT) in cooperation with the University of Paderborn is to develop a punch-bending machine being able to react adaptively on changing properties of the process as well as on variability of the flat wire properties. This aim is targeted in implementation of a selfcorrecting control strategy. Figure 2 shows the enhancement of a controlled process keeping the nominal dimension within the tolerances compared with the current noncontrolled situation.
Fig. 2. -Non-controlled and controlled processes
The short-circuit bridge (Fig. 3) was employed as a basic element for the process of control strategy development. The geometrical shape is created in the first two bending steps and with the last bending step the opening dimension is adjusted. In order to keep the opening dimension of the short circuit bridge within the tolerances, it is necessary to detect a leaving of the allowable interval first and then to take appropriate corrective action by a punch in the next step. Development of a self-correcting control strategy needs all components of the process to be taken into account. Therefore, machine behavior has to be analyzed as well as the behavior of the tool, the flat wire and workpiece shape. Additional measurement devices have to be developed in order to measure process variables such as the opening dimension of the short circuit bridge and the punch force. In a self-correcting control strategy, the measured process variables are used to calculate the corrected punch movement by an algorithm in a closed-loop mode. Furthermore, the position accuracy of the punchbending machine axis as well as of the punches of the tool is analyzed by means of displacement transducers.
The desired approach is similar to the VDI guidelines for the design methodology for mechatronic systems [VDI-Guideline 2206 ( 2004)]. The objective of this guideline is to provide methodological support for the cross-domain development of mechatronic systems. In our case, these domains are bending process, modeling and control engineering.
ANALYZING THE INITIAL PROCESS
Gaining a basic understanding of the current process flow, the process design, the tool design, the behavior of the punch bending machine as well as of the material used for the flat wire are to be analyzed. The punch-bending tool is used to produce the short circuit bridge with three bending steps. The punches used for the single bending steps are driven by the NC-axis of the punch-bending machine. It could be observed that the geometrical deviations of the workpiece occur within short time and therefore wear phenomena are unlikely to be responsible for problems with shape of the final product and their influence can be neglected. Geometrical deviations could also result from the positioning accuracy of NC-axis or from varying properties of the semifinished material. The positioning accuracy as given by the machine manufacturer is within 0.02 mm tolerance what was proved by additional measurements with a laser interferometer. This accuracy is sufficient for the bending process. Finally, the deviations of part's geometry are most probably caused by the changes of the flat band properties. To investigate the properties of the flat band, the model-based approach with the further identification and validation on an experimental tool was chosen.
MODEL-BASED ANALYSIS OF BENDING PROCESS
TEST ON AN EXPERIMENTAL TOOL
In order to investigate the properties of the real flat wire, an experimental tool representing the significant bending operations of the production process was built. The tool is operated on a universal testing machine, which allows the punch movement and force to be measured during the whole bending process. The experimental tool is used to investigate the effect of changes in the thickness and width of the flat wire. Reducing the thickness t of the flat wire at a constant width w clearly decreases the punch force (Figure 6a), whereas reducing the width w at a constant thickness t decreases the punch force significantly less (Figure 6b). The same behavior was observed when measuring the punch force during the manufacturing of the short-circuit bridge in the production tool. There, the punch is always moved to a fixed end position, so the changing thickness of the flat wire can be detected indirectly by means of the punch force. The thickness of the flat wire varies by +/- 0.015 mm but remains within the admissible production tolerance set by the manufacturer. Furthermore, it was observed that the change in thickness significantly affects the opening dimension of the workpiece.
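The observed asymmetry between thickness and width sensitivity is consistent with common textbook estimates of the bending force, in which the force grows roughly quadratically with the material thickness but only linearly with its width. The short sketch below illustrates this with a generic air-bending force formula; the formula itself, the die width, the correction factor k_f and the tensile strength R_m are not taken from the paper and are used here only as illustrative assumptions, not as the authors' process model.

```python
# Illustrative only: generic air-bending force estimate, not the authors' bending model.
# F ~ k_f * w * t^2 * R_m / W_die  (k_f, R_m and W_die are assumed values)

def bending_force(t_mm: float, w_mm: float,
                  r_m_mpa: float = 600.0,   # assumed tensile strength of the flat wire
                  w_die_mm: float = 8.0,    # assumed die opening width
                  k_f: float = 1.3) -> float:
    """Rough air-bending punch force in newtons (MPa * mm^2 = N)."""
    return k_f * w_mm * t_mm ** 2 * r_m_mpa / w_die_mm

f_ref = bending_force(t_mm=1.0, w_mm=5.0)
f_thinner = bending_force(t_mm=0.9, w_mm=5.0)    # 10 % thinner wire
f_narrower = bending_force(t_mm=1.0, w_mm=4.5)   # 10 % narrower wire

print(f"reference force:   {f_ref:7.1f} N")
print(f"10 % thinner wire: {f_thinner:7.1f} N ({100*(f_thinner/f_ref-1):+.1f} %)")
print(f"10 % narrower wire:{f_narrower:7.1f} N ({100*(f_narrower/f_ref-1):+.1f} %)")
```

With these assumed numbers, a 10 % reduction in thickness lowers the estimated force by about 19 %, whereas a 10 % reduction in width lowers it by only 10 %, matching the trend reported in Figure 6.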
MEASUREMENT DEVICE
The opening dimension is a decisive parameter for the functioning of the short-circuit bridge and has to be checked in quality assurance procedures. In order to check and adjust the opening dimension in a defined way, it has to be measured at runtime during the manufacturing process by means of a contact or contactless measurement method.
Because the short-circuit bridge is formed inside the tool and access to it is restricted, a contactless measurement is required. To keep the opening dimension of the short-circuit bridge within the tolerance range of 1.2 mm, a measurement accuracy of about 0.02 mm is indispensable. The measurement device also has to be fast enough to detect the opening dimension of each workpiece at a production speed of 60 parts per minute. Consequently, an optical measurement device was found to be the most appropriate solution.
For testing the measurement method, a self-developed setup was chosen; its schematic design is shown in Fig. 7. A flat LED backlight casts a shadow of the short-circuit bridge at the level of the opening dimension, which avoids perspective errors [START_REF] Hentschel | Feinwerktechnik, Mikrotechnik[END_REF], [Demant, C. (2011)]. The shadow is received through an objective and produces dark areas on a CCD linear image sensor, which detects the transition between light and dark. Knowing the size of the pixels and their position in the line, it is possible to calculate the opening dimension. The information from the CCD sensor is processed via a real-time IO system manufactured by dSpace GmbH and transferred to MatLab/Simulink, where the opening dimension is calculated. This self-developed setup was chosen for the first investigations to prove the functioning of the measurement method and to keep the costs low. The measurement result is influenced by the relative position of the measured object with respect to the CCD sensor on the one hand, and by vibration, shock and contamination in the punch-bending machine on the other. First, the width of the measurement object was varied over the range in which the current opening dimension varies. A very good linear relationship between the width B of the measurement object and the number of dark pixels was observed. The measurement accuracy is about 0.02 mm per pixel, including measurement tolerances, which is accurate enough to recognize a departure from the tolerance early. When the distance A is varied at a constant width B, the measurement becomes inaccurate; however, observation of the real process has shown that possible movement in direction A is negligible because the short-circuit bridge is fixed in the tool during the bending operations.
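As a rough illustration of this evaluation, the following sketch converts one line scan of the CCD sensor into an opening dimension by thresholding the intensity profile and counting the dark (shadowed) pixels. The intensity threshold and the calibration offset are assumptions of this sketch, and the 0.02 mm/pixel scale is taken from the accuracy figure quoted above; the actual processing runs in MatLab/Simulink on the dSpace system and is not reproduced here.

```python
import numpy as np

MM_PER_PIXEL = 0.02      # approximate scale corresponding to the accuracy quoted above
DARK_THRESHOLD = 0.35    # assumed relative intensity threshold (backlight = 1.0)

def opening_dimension(line_scan: np.ndarray, offset_mm: float = 0.0) -> float:
    """Estimate the opening dimension from one CCD line scan.

    line_scan -- 1D array of normalized pixel intensities (0 = dark, 1 = bright)
    offset_mm -- assumed calibration offset determined with a reference part
    """
    dark = line_scan < DARK_THRESHOLD           # pixels shadowed by the bridge
    n_dark = int(np.count_nonzero(dark))
    return n_dark * MM_PER_PIXEL + offset_mm    # linear pixel-to-mm relationship

# Synthetic example: 512-pixel line with a 60-pixel shadow in the middle
scan = np.ones(512)
scan[226:286] = 0.1
print(f"estimated opening dimension: {opening_dimension(scan):.2f} mm")  # ~1.20 mm
```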
In order to investigate vibrations and shocks in the process, an acceleration sensor was attached to the optical measurement device. With the punch-bending machine running at 60 RPM, accelerations of about 0.2 m/s² were detected, which do not affect the measurement.
Further investigations showed that the change in thickness of the flat wire affects the opening dimension of the short-circuit bridge. The thickness of the flat wire can be estimated indirectly by measuring the punch force in the production tool. This method gave reliable results and keeps costs low because an already existing force sensor can be used.
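This indirect estimate can be pictured as a simple linear calibration around the nominal operating point: deviations of the maximum punch force from a reference value are mapped to a thickness deviation. The sensitivity and the reference force below are placeholder values, not numbers from the paper.

```python
# Illustrative linear calibration only; sensitivity and reference force are assumed.
F_REF_N = 1850.0               # assumed maximum punch force for nominal wire thickness
SENSITIVITY_N_PER_MM = 3.5e3   # assumed dF/dt of the first bending step near the operating point

def thickness_deviation_mm(f_max_n: float) -> float:
    """Estimate the deviation of the wire thickness from its nominal value."""
    return (f_max_n - F_REF_N) / SENSITIVITY_N_PER_MM

print(f"{thickness_deviation_mm(1902.0):+.3f} mm")  # roughly +0.015 mm for this assumed force
```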
SELF-CORRECTING STRATEGY
To build up a self-correcting strategy, the opening dimension has to be detected for each workpiece, especially when it starts to drift from the desired value towards one of the tolerance limits. In the next step, the punch movement has to be adapted by a defined value to correct the opening dimension. Because there is only very little time between the measurement and the correcting step, a closed-loop control for trend correction is used: the information on the current opening dimension is used to correct the opening dimension of the next short-circuit bridge. This is possible because the change in the opening dimension from part to part is small enough.
In addition, the punch force of the first bending step is used to capture the influence of the flat wire thickness. The information given by the punch force can be used for the same part, because there is enough time between this measurement and the correcting bending step. For the self-correcting strategy, the opening dimension of the previous short-circuit bridge (yi-1) and the maximum punch forces of the first bending step of the previous and the current short-circuit bridge (Fi-1 and Fi) are therefore used, together with two constant coefficients (k1, k2). Figure 8 shows the schematic structure of the process control. The calculation of the control input ui for the punch actuator is given in equation (1); the resulting control law corresponds to a discrete I-controller [Shinners, Stanley M. (1998)].
ui = k1 · (ydesired − yi-1) + k2 · (Fi − Fi-1)    (1)
where i is the part number.
The coefficient k1 is calculated from the relationship between the plastic change of the opening dimension and the position of the punch actuator, using the bending model. The term (Fi − Fi-1) is a discrete differentiator of the maximum punch force of the first bending step, and the coefficient k2 is calculated, likewise with the bending model, from the relationship between the change in wire thickness and the maximum punch force.
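A minimal sketch of the correction law in equation (1) is given below. The coefficient values and the desired opening dimension are placeholders; in the real system they come from the bending model and from the operator settings, and the correction ui is applied to the adjustment path of the correction punch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrendCorrector:
    """Correction law of equation (1); the coefficient values here are assumptions."""
    y_desired: float                 # desired opening dimension in mm (operator setting)
    k1: float = 0.8                  # assumed coefficient derived from the bending model
    k2: float = 2.0e-4               # assumed coefficient in mm per N of force change
    f_prev: Optional[float] = None   # maximum punch force of the previous part

    def update(self, y_prev: float, f_max: float) -> float:
        """Return the correction u_i from the previous opening y_(i-1) and the
        maximum punch forces F_(i-1), F_i of the first bending step."""
        df = 0.0 if self.f_prev is None else (f_max - self.f_prev)
        self.f_prev = f_max
        return self.k1 * (self.y_desired - y_prev) + self.k2 * df

corrector = TrendCorrector(y_desired=3.00)
print(corrector.update(y_prev=3.04, f_max=1850.0))  # first part: force term is zero
print(corrector.update(y_prev=3.03, f_max=1902.0))  # thicker wire detected via the force
```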
Figure 9 illustrates the extended active structure of the new self-correcting bending tool on the punch-bending machine with two NC-controlled axes. The conventional bending tool is extended by two components. The first new component is the integrated measuring equipment, i.e. the camera system and the force sensor. The second new component is located in the information processing: the information about the current part is collected and processed, it is determined whether the opening dimension lies within the tolerance or not, and the new adjustment path for the correction punch is calculated. In this way, an online process regulation is realized. Additionally, the operator has to set the tolerance and the desired value of the opening dimension. A first verification of the self-correcting strategy was carried out on the real short-circuit bridge using the experimental tool. These tests showed very good results with a stable behavior of the closed-loop control. Nevertheless, the experimental tool could not be used to test the self-correcting strategy under production conditions. By implementing the optical measurement device in the production tool and the algorithm of the self-correcting control strategy in the controller of the punch-bending machine, a test under production conditions could be carried out. At a production speed of 60 RPM, the opening dimension as well as the punch force could be measured reliably, the closed-loop control showed a stable behavior, and the opening dimension of the short-circuit bridge could be kept within the tolerances (Fig. 10).
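The per-part cycle of this online regulation can be sketched as follows. The three interface functions stand in for the dSpace I/O and the NC axis of the correction punch and are purely hypothetical placeholders, as are the numeric settings; the correction term follows equation (1).

```python
# Minimal per-part regulation cycle; interface functions are placeholders only.

def read_max_punch_force_n() -> float:
    return 1850.0            # placeholder: value from the existing force sensor

def read_opening_dimension_mm() -> float:
    return 3.02              # placeholder: value from the CCD evaluation

def move_correction_punch_by(delta_mm: float) -> None:
    print(f"  correction punch adjusted by {delta_mm:+.3f} mm")  # placeholder for the NC axis

def run_regulation(y_desired: float = 3.00,    # assumed operator setting
                   tol_band: float = 1.2,      # tolerance range quoted in the text
                   k1: float = 0.8, k2: float = 2.0e-4,   # assumed coefficients
                   n_parts: int = 3) -> None:
    lo, hi = y_desired - tol_band / 2, y_desired + tol_band / 2
    y_prev = f_prev = None
    for i in range(n_parts):
        f_i = read_max_punch_force_n()               # first bending step of part i
        if y_prev is not None and f_prev is not None:
            u_i = k1 * (y_desired - y_prev) + k2 * (f_i - f_prev)   # equation (1)
            move_correction_punch_by(u_i)            # new adjustment path, last bending step
        y_i = read_opening_dimension_mm()            # camera measurement of part i
        if not (lo <= y_i <= hi):
            print(f"part {i}: opening {y_i:.2f} mm outside tolerance [{lo:.2f}, {hi:.2f}]")
        y_prev, f_prev = y_i, f_i

run_regulation()
```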
CONCLUSIONS
The trend in electrical connection technology is towards smaller metal parts and narrower tolerances. Because of the unavoidably varying properties of the high-strength materials, such small tolerances can currently only be kept at the cost of a high scrap rate and a large expenditure of time. In this work, the production process of a short-circuit bridge was used to reduce the scrap rate and the setup time. For this purpose, a self-correcting strategy based on a closed-loop control was built up. This self-correcting strategy uses geometrical dimensions of the workpiece measured during the bending process to keep the opening dimension of the short-circuit bridge within the tolerances over the whole production period by means of a correcting bending step.
Fig. 1. - Active structure of the conventional bending process
Fig. 3. - Short-circuit bridge
Fig. 4. - Setup of the MBS model: a) Model of the bending process; b) Modelling of the workpiece
Fig. 6. - Influence of the thickness and width of the flat wire on the punch force
Fig. 7. - Setup of the optical measurement device
Fig. 8. - Schematic design of the closed-loop control for the trend correction
Fig. 9. - Active structure of the self-correcting bending process
Fig. 10. - Measured trend of the opening dimension without (a) and with (b) the self-correcting strategy
ACKNOWLEDGEMENTS
We express our deep gratitude to the AiF/ZIM for funding this project. We would like to gratefully acknowledge the collaborative work and support by our project partners Otto Bihler Maschinenfabrik GmbH & Co. KG and Weidmüller Interface GmbH & Co. KG.
"1003720",
"1003721",
"1003722",
"1003723",
"1003724"
] | [
"446543",
"74348",
"446543",
"74348",
"74348"
] |