A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> 3) TYPE OF REFLECTARRAY <s> Dielectric reflectarray antennas are studied in this paper as a possible low-loss and low-cost solution for high gain THz antennas. Variable height dielectric slabs are proposed for the reflectarray elements which allow for the use of low dielectric-constant materials for the design. A 3-D printing technology is utilized to fabricate the antenna, and both numerical and experimental results are presented for a prototype operating at 100 GHz. This study shows that the proposed design approach is well suited for high gain THz antennas. <s> BIB001 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> 3) TYPE OF REFLECTARRAY <s> Conformal metal reflectarray antennas on the surface of a cylinder are designed for millimetre-wave applications. All depths of metal grooves in the reflectarrays are specially manipulated for high-gain conformal reflectarray antennas. There is a good agreement between simulated and experimental radiation performance results for two different ‘sagittas’. The proposed conformal reflectarray antennas can be a helpful choice for applications requiring high-gain antennas on curved platforms. <s> BIB002 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> 3) TYPE OF REFLECTARRAY <s> A novel metal-only reflectarray antenna is proposed in this letter. By using a unique unified slot structure, the dielectric substrate commonly applied in conventional reflectarray antennas can be avoided. Various slot elements are investigated, and a prototype reflectarray antenna working at 12.5 GHz is studied for experimental verification. The simulation and measurement results show that good radiation characteristics are achieved by the proposed design. The measured gain is 32.5 dB with 1-dB gain bandwidth of 8.3%, which is comparable to reflectarrays consisting of conventional patch elements. 
The metal-only structure provides an innovative reflectarray configuration to better withstand the extreme outer space environment and effectively reduce the antenna cost. <s> BIB003 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> 3) TYPE OF REFLECTARRAY <s> This letter presents the design and results of low-loss discrete dielectric flat reflectarray and lens for E-band. Using two different kinds of feed, 3-D-pyramidal (wideband) horn and 2 × 2 planar microstrip array (narrowband) antenna, the radiation performances of the two collimating structures are investigated. The discrete lens is optimized to cover the frequencies 71–86 GHz (71–76- and 81–86-GHz bands), while the discrete reflectarray is optimized to cover the 71–76-GHz band. The presented designs utilize the principle of perforated dielectric substrate using a square lattice of drilled holes of different radii and can be fabricated using standard printed circuit board (PCB) technology. The discrete lens has 41 × 41 unit cells and thickness of 6.35 mm, while the reflectarray has 40 × 40 unit cells and thickness of 3.24 mm. A good impedance matching (|S11| < −10 dB) and peak gain of 34 ± 1 dB with maximum aperture efficiency of 44.6% are achieved over 71–86 GHz for the lens case. On the other hand, reflectarray with peak gain of 32 ± 1 dB and aperture efficiency of 41.9% are achieved for 71–76-GHz band. <s> BIB004
The gain of a reflectarray can also be optimized based on its type. In this sub-section, types of reflectarray antenna other than the conventional microstrip reflectarray are discussed for gain enhancement. Dielectric reflectarrays and metallic reflectarrays are the two types most commonly used for gain enhancement at higher frequencies. Both types possess three-dimensional structures and require very high fabrication accuracy. A dielectric reflectarray with a metallic ground was proposed in BIB001 for high gain operation at 100 GHz. The progressive phase was obtained by varying the height of the dielectric surface, as depicted in Figure 8(a), and a high-precision 3D printing technology was used to fabricate the structure. A gain of 24.7 dB was achieved with 20×20 elements.
FIGURE 9. Metallic reflectarrays (a) Metallic grooves reflectarray BIB002 (b) unified slot element for metal only reflectarray BIB003 .
A dielectric resonator antenna (DRA) reflectarray was also reported in for high gain millimeter wave operation. The DRA reflectarray is shown in Figure 8(b), where a narrow metallic strip on top of the DRA controls the reflection phase. This reflectarray offered a 28.3 dBi gain at 31 GHz. A discrete dielectric reflectarray BIB004 was also recommended for E-band operation. It was a perforated-surface reflectarray with drilled air holes of different radii to control the reflection phase, as shown in Figure 8(c); 40×40 such elements were combined to form a full reflectarray with 32 dB gain. The main problem with dielectric reflectarrays is their limited efficiency due to the lack of conducting material in the resonant structure. A metallic reflectarray with grooves formed on a curved platform BIB002 , as shown in Figure 9(a), was also proposed for high gain and high frequency operation.
The absence of dielectric material in this design eliminates dielectric loss, which is essential for high gain performance. This reflectarray achieved a maximum gain of 32.4 dBi at 95 GHz. However, its miniaturized design with a curved surface makes fabrication difficult at high frequencies. Another type of metallic reflectarray was proposed in BIB003 with unified slot elements operating at 12.5 GHz. The slots were cut into a square patch element to form a unit cell, as shown in Figure 9(b); 1380 such elements were used to form a circular-aperture reflectarray with a ground plane separated by an air gap. The measured gain of the proposed reflectarray was 32.5 dB. Metallic reflectarrays offer good gain performance with negligible dielectric losses, but their fabrication complexity is much higher than that of their counterparts.
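The variable-height phase-shifting principle used by the dielectric reflectarray in BIB001 can be sketched with a first-order ray model: a wave that makes a round trip through a grounded dielectric slab of height h picks up roughly 2k0(√εr − 1)h of extra phase relative to the same path in air, so the slab height needed for each element's compensation phase can be estimated directly. The snippet below is a simplified illustration only; multiple internal reflections are ignored and the permittivity and frequency values are assumptions, not the design values of BIB001.

```python
import math

C = 3e8  # free-space speed of light (m/s)

def slab_height_for_phase(delta_phi_deg, eps_r, freq_hz):
    """First-order height of a grounded dielectric slab that adds the
    desired extra round-trip reflection phase relative to air.
    Multiple internal reflections are ignored (simple ray model)."""
    k0 = 2 * math.pi * freq_hz / C            # free-space wavenumber (rad/m)
    delta_phi = math.radians(delta_phi_deg)
    # round trip through the slab adds ~2*k0*(sqrt(eps_r) - 1)*h of phase
    return delta_phi / (2 * k0 * (math.sqrt(eps_r) - 1))

# Illustrative values (not from BIB001): 180 deg at 100 GHz, eps_r = 2.2
h = slab_height_for_phase(180, 2.2, 100e9)    # ~1.55 mm
```

Because the required height scales linearly with the desired phase, low-permittivity materials need taller slabs, which is why the 3-D printed elements in BIB001 are pronounced three-dimensional structures.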
A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. DIFFERENT ELEMENTS WITH HIGH EFFICIENCY REFLECTARRAY OPERATION <s> The microstrip reflectarray is rapidly becoming an attractive alternative solution to the traditional parabolic reflector antenna due to its various advantages, such as its high gain, narrow beam with low side lobes, light weight, and smaller volume. In this article, a microstrip reflectarray consisting of a hexagonal patch with crossed slots as the cell elements printed on a conductor-backed substrate is proposed to replace the traditional parabolic reflector antenna. Using this type of cell element, the simulated results of the microstrip reflectarray demonstrate a radiation efficiency of over 60% at 12.5 GHz and a gain of over 26 dBi at 12–13.7 GHz. The experimental results show good agreement with the simulated ones, confirming the validity of the approach presented herein. © 2012 Wiley Periodicals, Inc. Microwave Opt Technol Lett 54:2383–2387, 2012; View this article online at wileyonlinelibrary.com. DOI 10.1002/mop.27095 <s> BIB001 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. DIFFERENT ELEMENTS WITH HIGH EFFICIENCY REFLECTARRAY OPERATION <s> This letter presents a reflectarray antenna composed of a combination of concentric open rings and an I-shaped dipole on a conductor-backed substrate. Two classes of elements with different variable dimensional parameters are introduced to realize linear phase-frequency characteristic in a wide frequency band and a phase range of over 360° . It is worth noting that the wideband performance is figured by the proposed reflectarray with a low profile of 0.065λ0 ( λ0 is the free-space wavelength at the center frequency), rather than thick substrate employed in common wideband designs. A prototype is fabricated and measured to verify the availability of the elements, and the measured results show good agreement with simulations. 
<s> BIB002 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. DIFFERENT ELEMENTS WITH HIGH EFFICIENCY REFLECTARRAY OPERATION <s> In this article, a novel method is introduced to design a high efficiency and broadband reflectarray. Several techniques will be introduced for designing wideband reflectarray. First, by using fractal ring unit cell with 700° phase range; second, by smoothing phase tapering on reflectarray surface; and finally, by finding the optimum free reference phase and f/D ratio. Using these ideas, a 625 element array prototype of 25 × 25 cm2 dimensions with 1 dB gain bandwidth of 12% and radiation efficiency close to 66% at 10 GHz is simulated and tested. © 2012 Wiley Periodicals, Inc. Microwave Opt Technol Lett 55:747–753, 2013; View this article online at wileyonlinelibrary.com. DOI 10.1002/mop.27427 <s> BIB003 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. DIFFERENT ELEMENTS WITH HIGH EFFICIENCY REFLECTARRAY OPERATION <s> An offset-fed reflectarray antenna composed of single-layer elements based on the combination of circular and square concentric rings of variable size is presented in this paper. The characteristics of the elementary cells have been optimized providing a linear phase curve with a phase range higher than 360°. The 297 × 297 mm reflectarray has been designed, fabricated, and measured to operate at 16 GHz and radiates the main beam in the direction given by θ = 10° and φ = 0°. In spite of utilizing a thin substrate, measured results of the linearly polarized reflectarray demonstrate a 1 dB gain bandwidth of 15.48% and radiation efficiency of 52.36% with a peak gain of 32.21 dB. Also, the measurements exhibit a cross-polar discrimination better than 27 dB in the working frequency band. <s> BIB004 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A.
DIFFERENT ELEMENTS WITH HIGH EFFICIENCY REFLECTARRAY OPERATION <s> A broadband reflectarray cell made of three parallel dipoles printed on a dielectric layer is presented. A 33% bandwidth is achieved for the cell made of dipoles, which is larger than that obtained for a reference cell consisting of three stacked square patches (26%). Using this cell, a 41-cm reflectarray antenna has been designed to produce a collimated beam at 9.5 GHz. The numerical results obtained for the reflectarray antenna made of parallel dipoles show a 1-dB bandwidth of 19%, a 65% efficiency, 0.2 dB of losses, and low levels of cross polarization (25 dB below the maximum). These results demonstrate a high performance for the proposed reflectarray antenna made of cells with three printed dipoles. <s> BIB005 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. DIFFERENT ELEMENTS WITH HIGH EFFICIENCY REFLECTARRAY OPERATION <s> A broadband single-layer reflectarray antenna composed of multiresonance square-rings elements is presented. The element is optimized to provide a linear phase curve and wide-enough phase variation. To avoid feed blockage, an offset feed configuration is used, and wave incident angle is considered to determine the appropriate element parameters that provide desirable phase shift on the antenna surface. The aperture efficiency of the antenna is maximized by optimizing feed position and characteristics. A 480-element antenna is fabricated and measured. The measured results show 1-dB gain bandwidth of 17% and radiation efficiency of 66% at 13.5 GHz center frequency. <s> BIB006 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. DIFFERENT ELEMENTS WITH HIGH EFFICIENCY REFLECTARRAY OPERATION <s> A Circularly Polarized (CP) high efficiency wide band Reflectarray (RA) antenna is designed for Ka-band using cross bow-tie elements. 
The reflected wave phase curve is obtained by anti-clockwise bow-tie rotation. The linear phase curve with a complete 360° range is obtained when a left-hand circularly polarized (LHCP) wave is normally incident in the unit cell environment. The proposed method provides high gain, high aperture efficiency, and wideband axial ratio (AR) in a circularly polarized bow-tie RA using multiple copies of the unit cell to form a 25 × 25 antenna array. Before designing the RA, the unit cell is analyzed for oblique incidence to predict its bandwidth. The proposed antenna provided good performance in terms of Half-Power Beamwidth (HPBW), Side Lobe Level (SLL), cross polarization, gain bandwidth and AR bandwidth. A 25 × 25 bow-tie RA antenna provides the highest aperture efficiency of 57%, HPBW of 9.0 degrees, SLL −19 dB, cross polarization −27 dB. A 1-dB gain bandwidth of 32.5%, 3-dB gain bandwidth of 51.4% and 1.5-dB AR bandwidth of 32.9% while 3-dB AR bandwidth of 48.7% is achieved in simulation. These results are validated through the fabricated cross bow-tie RA, and the measurements show good agreement with simulation results. <s> BIB007 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. DIFFERENT ELEMENTS WITH HIGH EFFICIENCY REFLECTARRAY OPERATION <s> Based on the idea of a single-layer multiresonance structure for a linearly polarized reflectarray, a novel slotted hollow ring element is presented for Ku-band high-efficiency circularly polarized reflectarrays. The appropriate geometry parameters are determined by studying the effects of incident angle on the element reflection coefficient. Compared to the conventional microstrip ring element, it can provide a broader CP bandwidth and more accurate phase compensation. Then, by using the angular rotation technique, an offset 137-element reflectarray operating at 12 GHz is designed and fabricated to demonstrate the superior performance of the element.
The measured result shows that the reflectarray offers a maximum efficiency of 66.13% at 12.3 GHz. Meanwhile, the efficiency is better than 60% over a 16.8% bandwidth (11.1–13.1 GHz). The overlapping 1-dB gain bandwidth and 3-dB axial-ratio bandwidth can reach 20%. <s> BIB008 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. DIFFERENT ELEMENTS WITH HIGH EFFICIENCY REFLECTARRAY OPERATION <s> A single-layer dual-band circularly polarized reflectarray antenna has been designed, fabricated, and tested for Ka-band satellite communications. The reflectarray antenna phasing element is composed of a concentric split-ring, in combination with a modified Malta Cross, where the technique of varying rotation angle and element size has been utilized to compensate the phase delay at 20 and 30 GHz, respectively. All the element configurations have been optimized to reduce cross-polar reflection. The aperture field method has been used to predict the reflectarray radiation performance, such as gain, radiation pattern, and cross-polarization level. A reflectarray with a circular aperture of 420 mm in diameter has been designed, manufactured, and tested for verification. A planar near-field measurement setup has been utilized to measure the reflectarray radiation characteristics. The measured results demonstrate that this dual-band reflectarray has achieved aperture efficiency of 66.5% at 20 GHz and 50% at 30 GHz, respectively. <s> BIB009
The progressive phase distribution of the unit cell element of a reflectarray antenna is responsible for its optimized gain and efficiency performance. Some resonant elements are unable to achieve a full 360° phase swing, which increases phase errors when designing a full reflectarray antenna; a wide phase range is therefore essential for efficient reflectarray operation. In this section, various wide-phase-range unit cell elements used in high efficiency reflectarrays are discussed; they are summarized in Table 2. In each element, a different technique was used to enhance the phase range. It can be seen from Table 2 that a wide phase range is one reason behind the good efficiency (≥50%) of a reflectarray antenna. The other main contributors to the high efficiency of the listed designs, namely the aperture size and f/D ratio, are also summarized in Table 2. All listed reflectarray designs were fed by horn antennas except the reflectarray containing fractal elements BIB003 , which used a low gain rectangular waveguide; this explains its very low f/D ratio, chosen so that the wide-beam feed illuminates the whole aperture. As mentioned earlier, the aperture size and feed type can affect the aperture efficiency of a reflectarray. However, analyzing the total efficiency also requires a thorough investigation of the unit cell element. The hexagonal element with crossed slots BIB001 was selected for a full 360° reflection phase swing obtained by varying its dimensions; the slope of the reflection phase was controlled by the separation between the crossed slots. The wideband bow-tie element BIB007 could be used for dual circular polarization with high efficiency, with its counter-clockwise rotation providing the progressive phase distribution.
In a different scenario, two concentric open rings were used with an I-shaped element BIB002 for the same purpose. Its reflection phase was controlled by varying the distance between the open rings, while the width of the I-shaped element was also optimized for better phase performance. The combination of two or more elements can also be utilized to improve reflectarray efficiency, as reported in BIB004 and BIB005 . In BIB004 , a circular ring was used with a concentric square ring, and the dimensions of both structures were varied to remove phase errors. In the second design BIB005 , three parallel dipole elements were used as a single unit cell, and their lengths were varied to obtain an optimized reflection phase range with high efficiency. The efficiency of a reflectarray antenna can be improved even without achieving a full 360° reflection phase swing, as seen in BIB008 and , where a slotted hollow ring element and a fragmented element were exploited for efficiency enhancement. The slotted hollow ring element had a reflection phase range of 333°, achieved through different angular rotations of the element. Because of the angular rotation, the overall dimensions of the elements remained the same over the surface of the fabricated reflectarray, a tactic used to reduce the cross polarization level while maintaining high efficiency. On the other hand, a single reflection phase value could be obtained with many different shapes of the fragmented element . Through this method, the variations in the shape of the fragmented element were minimized over the surface of the reflectarray, with improved efficiency even with a reflection phase range of 300°. Efficiency enhancement can also be performed with multi-resonant elements like the fractal element BIB003 and concentric square rings BIB006 , as mentioned in Table 2.
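The angular rotation technique mentioned above exploits a well-known property of circularly polarized elements: mechanically rotating an element by ψ shifts the phase of the reflected circularly polarized wave by 2ψ (with the sign set by the handedness), so a 180° rotation range covers a full 360° of phase while the element footprint never changes. A minimal sketch of this mapping:

```python
def rotation_for_phase(phase_deg):
    """Element rotation angle (deg) that realises the desired CP reflection
    phase via the angular rotation technique (phase shift = 2 * rotation).
    The sign convention depends on the handedness of the incident wave."""
    return (phase_deg % 360.0) / 2.0

# 333 deg of compensation phase needs a 166.5 deg element rotation
rot = rotation_for_phase(333.0)
```

This is why rotation-based designs such as the slotted hollow ring in BIB008 keep identical element dimensions across the aperture, which in turn helps suppress cross polarization.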
The fractal shape was used to obtain a 700° reflection phase range, which was enough to reduce the phase errors and improve the efficiency up to 66%. The three concentric square rings offered the same efficiency with a 500° phase range. Dual band elements such as the one proposed in BIB009 can also enhance efficiency at two different frequencies by enhancing their respective phase ranges. The dual band element shown in Table 2 was used to operate at 20 GHz and 30 GHz with 360° and 300° of reflection phase range, respectively. A modified Malta cross element was used with a surrounding split ring element. The rotation angle of the split ring was varied to provide the phase variations at 20 GHz, whereas the variable size of the Malta cross element provided the phase variations at 30 GHz. Each element was optimized to obtain a lower cross polarization for the full reflectarray. Efficiencies of 66.5% and 50% were achieved at the lower and upper bands, respectively; the reduced efficiency at the upper band was due to its narrower reflection phase range.
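The phase errors discussed throughout this sub-section arise when an element cannot realize the phase demanded by the standard reflectarray design equation: each element must cancel the spherical path delay from the feed and add the linear phase of the desired beam direction, wrapped into the element's available range. The sketch below evaluates that equation; the feed position, frequency, and beam direction are illustrative assumptions, not values from any cited design.

```python
import math

C = 3e8  # speed of light (m/s)

def required_phase(x, y, feed=(0.0, 0.0, 0.3), theta0=0.0, phi0=0.0,
                   freq_hz=10e9):
    """Required reflection phase in degrees, wrapped to [0, 360), for an
    element at (x, y) on the aperture, from the standard phase equation
        phi_i = k0 * (d_i - sin(theta0) * (x*cos(phi0) + y*sin(phi0)))
    where d_i is the feed-to-element distance and (theta0, phi0) is the
    desired beam direction."""
    k0 = 2 * math.pi * freq_hz / C
    xf, yf, zf = feed
    d = math.sqrt((x - xf) ** 2 + (y - yf) ** 2 + zf ** 2)
    phase = k0 * (d - math.sin(theta0) * (x * math.cos(phi0)
                                          + y * math.sin(phi0)))
    return math.degrees(phase) % 360.0

# Phase demanded by an element 5 cm off centre, broadside beam
p = required_phase(0.05, 0.0)
```

An element family with less than 360° of phase range simply cannot hit some of these wrapped values exactly, and the residual is the phase error that degrades gain and efficiency.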
A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> FIGURE 12. <s> The objective of this paper is to carry out analysis and design for X/Ka dual-band, frequency selective surface (FSS)-backed reflectarrays. In this design, a novel ring structure has been developed as the cell element for both the X- and Ka-bands to achieve dual-band performance. A single loop was used for the construction of the FSS-backed reflectarray in the Ka-band and a reflectarray with a solid ground plane was designed for the other band. Prototypes of these reflectarrays were fabricated and tested. The measurements demonstrated that for an optimum design, the gain of the FSS-backed reflectarray is almost the same as its counterpart backed by a solid ground plane. Characterisation of the out-of-band performance of these antennas demonstrated a close to 0.6-dB insertion loss. <s> BIB001 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> FIGURE 12. <s> A wideband perforated rectangular dielectric resonator antenna (RDRA) reflectarray is presented. The array of RDRA are formed from one piece of material. Air-filled holes are drilled into the material around the RDRA. This technique of fabricating RDRA reflectarray using perforations eliminates the need to position and bond individual elements in the reflectarray and makes the fabrication of the RDRA reflectarray feasible. The ground plane below the reflectarray elements is folded to form a central rectangular concave dip so that an air-gap is formed between the RDRA elements and the ground plane in order to increase the bandwidth. Full-wave analysis using the finite integration technique is applied. Three cases are studied. In the first one, the horn antenna is placed at the focal point to illuminate the reflectarray and the main beam is in the broadside direction. 
In the second one, the horn antenna is placed at the focal point and the main beam is at ±30 degrees off broadside direction. In the third one, an offset feed RDRA reflectarray is considered. A variable length RDRA provides the required phase shift at each cell on the reflectarray surface. The normalized gain patterns, the frequency bandwidth, and the aperture efficiency for the above cases are calculated. <s> BIB002 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> FIGURE 12. <s> Design and implementation of a dual-band single layer microstrip reflectarray are presented in this communication. The proposed reflectarray operates in two separated broad frequency-bands within X and K bands. Each element in the reflectarray consists of a circular patch with slots, and two phase delay lines attached to the patch. The required phase shifts in X and K bands are obtained by varying the lengths of the phase delay lines. The proposed element has more than 500 and 800 degrees linear phase range within 9.2 ~ 11.2 GHz (X-band) and 21 ~ 23 GHz (K-band), respectively. Measurement results show the maximum gain of 26.2 dB at 10.2 GHz with 16% 1-dB gain bandwidth and 29.7 dB at 22 GHz with 9.1% 1-dB gain bandwidth. With proper arrangement of the elements in the array, the cross-polarization is reduced. The measured efficiency is 47% at 10.2 GHz and 25% at 22 GHz. <s> BIB003 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> FIGURE 12. <s> A reflectarray element using a conductor cell with variable height is proposed. It is found that the reflection phase can be tuned by adjusting the height of the conductor cells arranged on the spatially discretised reflector plane. A complete linearised reflection phase thus can be achieved when the proposed conductor cell is used as a unit element. 
A millimetre-wave antenna has been designed using the proposed method and it shows 50% aperture efficiency with a half-wavelength cell height variation. <s> BIB004
Dual layer reflectarray with Ka/X-band of operation BIB001 .
compared to conventional design. The same approach for cross polarization reduction was also applied to a dual band design BIB003 , where a circular element with a cross slot and two phase tuning stubs was proposed for X-band and K-band operation, as depicted in Figure 11(b). It offered 47% efficiency at 10.2 GHz with a cross polarization level of −25 dB. However, the K-band operation at 22 GHz was less efficient (25%) due to its higher cross polarization level of −16 dB. Another dual band design for X-band and Ka-band operation, with two reflectarray layers and dual feeds, was proposed in BIB001 . The Ka-band reflectarray, placed above the X-band reflectarray and separated by an air gap, is shown in Figure 12. The FSS-backed Ka-band reflectarray attained an efficiency of 42%, compared to 60% for the solid-grounded X-band reflectarray. The FSS ground was used in the Ka-band reflectarray to reduce the blockage of signals reflected from the X-band reflectarray; however, the FSS ground plane could also introduce some back radiation, which was the main reason for the lower efficiency of the Ka-band reflectarray. Some other types of reflectarrays, like the full conductor reflectarray BIB004 and the dielectric resonator reflectarray BIB002 , were also investigated for high efficiency, with the main aim of reducing reflection losses. The conductor cell reflectarray offered a higher efficiency of 50% compared to 47% for the DRA reflectarray. Another advantage of the conductor reflectarray was its millimeter wave operation at 95 GHz, while the DRA reflectarray was designed at a lower frequency of 12 GHz.
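The efficiency figures compared throughout this section follow from the standard aperture-efficiency definition, η = Gλ²/(4πA_phys): once the gain and the physical aperture are measured, the efficiency is fixed. A quick sketch with hypothetical numbers (the dimensions below are assumptions for illustration, not those of any cited design):

```python
import math

def aperture_efficiency(gain_dbi, freq_hz, area_m2):
    """Aperture efficiency eta = G * lambda^2 / (4 * pi * A_phys)."""
    lam = 3e8 / freq_hz                 # free-space wavelength (m)
    g = 10 ** (gain_dbi / 10)           # gain as a linear power ratio
    return g * lam ** 2 / (4 * math.pi * area_m2)

# Hypothetical: 32 dB gain at 75 GHz over an 8 cm x 8 cm aperture -> ~32%
eta = aperture_efficiency(32.0, 75e9, 0.08 * 0.08)
```

The quadratic wavelength term is why a fixed physical aperture needs roughly 6 dB more gain per octave of frequency just to hold the same efficiency, which makes the millimeter-wave efficiency figures quoted above harder to achieve than their microwave counterparts.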
A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> FIGURE 13. <s> In this article, a novel method is introduced to design a high efficiency and broadband reflectarray. Several techniques will be introduced for designing wideband reflectarray. First, by using fractal ring unit cell with 700° phase range; second, by smoothing phase tapering on reflectarray surface; and finally, by finding the optimum free reference phase and f/D ratio. Using these ideas, a 625 element array prototype of 25 × 25 cm2 dimensions with 1 dB gain bandwidth of 12% and radiation efficiency close to 66% at 10 GHz is simulated and tested. © 2012 Wiley Periodicals, Inc. Microwave Opt Technol Lett 55:747–753, 2013; View this article online at wileyonlinelibrary.com. DOI 10.1002/mop.27427 <s> BIB001 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> FIGURE 13. <s> A new cell element is introduced for broadband reflectarray applications. The presented unit cell exhibits linear phase response which makes it a suitable candidate for broadband X-Ku band applications. This cell element consists of three concentric rectangular loops etched on a two-layer grounded substrate. The dimensions of the cell element have been optimized to achieve linear phase response in the operation band. A square offset-fed reflectarray of 40 cm × 40 cm was designed and fabricated based on this unit cell with wideband performance at X-Ku band. Considering three different feed positions, the whole reflectarray was simulated in CST and good agreement between simulated and measured results was observed. A maximum gain of 32 dBi was obtained which is equivalent to 58% aperture efficiency. Also, a remarkable value of 36%, 1.5-dB gain bandwidth was measured which is higher compared to previously reported designs in the literature.
Another investigation that is carried out in this development through theory and simulation is determination of the effect of feed movement along the focal axis on the operating band of the reflectarray. It is shown for the first time that changing the feed location leads to a considerable shift in the operation bandwidth and maximum gain of the designed broadband reflectarray. © 2012 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2013. <s> BIB002 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> FIGURE 13. <s> In this study, we propose a novel design method to significantly reduce the volume of reflectarray antennas. Unlike the commonly used approaches, the distance ( F ) between a source antenna and a reflectarray is largely narrowed in this work, which is no longer than $0.3\lambda$ . Accordingly, the total area occupied by the reflectarray can also be reduced with almost no decline in performance. Instead of a directional antenna, a simple omnidirectional dipole antenna is used as a feeding source, which has traditionally been expected to lower aperture efficiency of the proposed antenna. To solve this problem, an additional condition to maximize antenna gain in a target direction is suggested, which applies to both waves reflected from the reflectarray and radiated directly from the source. Moreover, by assuming that an incident plane wave coming from a far-field region, ambiguity in the polarization and incidence angle of the incident field impinging onto the reflectarray can clearly be removed. As a result, a relatively high aperture efficiency and antenna gain can be maintained in spite of the extremely reduced volume that is more than 700 times electrically smaller than the conventional ones. Good agreement between the experiment and the prediction confirms the validity of our approach. <s> BIB003
Gain and efficiency with respect to frequency of the reflectarray with λ/2 element size BIB001 . As it has been mentioned earlier, the feed mechanism can also affect the gain and hence the efficiency performance of reflectarray antenna. In a proposed design of a reflectarray antenna feed distance from reflectarray was varied BIB001 along with some other amendments to get an optimized gain and efficiency performance. The resonant patch elements were also tested with two different sizes of λ/2 and λ/3 at 10 GHz frequency. The reflection phase value of the center element with respect to a center feed was taken as a phase reference value. That phase reference value was varied between 0 • , 60 • and 120 • with a variable f /D ratio of 0.25 to 0.5 for the investigations. As it is depicted in Figure 13 , the maximum gain performance with an efficiency of 60% was obtained at an element size of λ/2 with 60 • phase reference and 0.33 f /D ratio. The stated work actually demonstrated that, the gain and efficiency performance of a reflectarray antenna can be evaluated based on its element configuration, reflection phase and feeding mechanism. Another useful technique regarding the feeding of a reflectarray antenna was proposed in BIB003 where a dipole antenna was used instead of a conventional feed horn antenna. Due to the Omni-directional characteristics of dipole antenna the f /D ratio was drastically reduced to 0.3λat a frequency of 1.87 GHz. But this reduction in the f /D ratio also reduced the efficiency performance of the reflectarray antenna. As described in Figure 14 , the efficiency of the proposed reflectarray antenna was improved by combining the reflected waves of reflectarray and radiated waves of the dipole antenna together. Through this tactic a gain of 11.2 dB was obtained with an efficiency of 52.6%. The feed distance variations can also be performed in an offset feed reflectarray antenna as reported in BIB002 . The feed was moved FIGURE 14. 
The combination of radiated and reflected waves for efficiency improvement of reflectarray BIB003 .
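The gain and aperture-efficiency figures quoted in this section are tied together by the standard aperture relation G = η(πD/λ)². The following is a minimal sketch of that relation; the function name and the example dimensions are illustrative and are not taken from the cited designs.

```python
import math

def aperture_gain_db(diameter_m, freq_hz, efficiency):
    """Gain of a circular aperture in dB: G = efficiency * (pi * D / lambda)^2."""
    wavelength = 3e8 / freq_hz
    gain_linear = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10 * math.log10(gain_linear)

# A one-wavelength aperture (D = 3 cm at 10 GHz) at 100% efficiency gives
# G = pi^2, i.e. about 9.9 dB; dropping the efficiency to 60% costs
# 10*log10(0.6), i.e. about 2.2 dB.
print(aperture_gain_db(0.03, 10e9, 1.0))
print(aperture_gain_db(0.03, 10e9, 0.6))
```

In the same way, the 52.6% efficiency reported for the dipole-fed design above corresponds to a loss of roughly 2.8 dB relative to a 100%-efficient aperture of the same size.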
Survey on Reversible Watermarking <s> Higher embedding capacity : <s> An undesirable side effect of many watermarking and data-hiding schemes is that the host signal into which auxiliary data is embedded is distorted. Finding an optimal balance between the amount of information embedded and the induced distortion is therefore an active field of research. With the rediscovery of Costa's (1983) seminal paper entitled Writing on dirty paper, there has been considerable progress in understanding the fundamental limits of the capacity versus distortion of watermarking and data-hiding schemes. For some applications, however, no distortion resulting from auxiliary data, however small, is allowed. In these cases the use of reversible data-hiding methods provide a way out. A reversible data-hiding scheme is defined as a scheme that allows complete and blind restoration (i.e. without additional signaling) of the original host data. Practical reversible data-hiding schemes have been proposed by Fridrich et al. (2002), but little attention has been paid to the theoretical limits. It is the purpose of this paper to repair this situation and to provide some first results on the limits of reversible data-hiding. Admittedly, the examples provided in this paper are toy examples, but they are indicative of more practical schemes that will be presented in subsequent papers. <s> BIB001 </s> Survey on Reversible Watermarking <s> Higher embedding capacity : <s> In 2002, Lee, Ryu, and Yoo proposed a fingerprint-based remote user authentication scheme using smart cards. The scheme makes it possible for authenticating the legitimacy of each login user without any password table. In addition, the authors claimed that the scheme can withstand message replay attack and impersonation. In this paper, we shall point out a security flaw in this scheme, that is, n legitimate users can conspire to forge 2^n − n − 1 valid IDs and PWs for successfully passing the system authentication.
Furthermore, we also show that the authentication equation is incorrect. Thus, the scheme is unworkable. <s> BIB002 </s> Survey on Reversible Watermarking <s> Higher embedding capacity : <s> A reversible watermarking algorithm with very high data-hiding capacity has been developed for color images. The algorithm allows the watermarking process to be reversed, which restores the exact original image. The algorithm hides several bits in the difference expansion of vectors of adjacent pixels. The required general reversible integer transform and the necessary conditions to avoid underflow and overflow are derived for any vector of arbitrary length. Also, the potential payload size that can be embedded into a host image is discussed, and a feedback system for controlling this size is developed. In addition, to maximize the amount of data that can be hidden into an image, the embedding algorithm can be applied recursively across the color components. Simulation results using spatial triplets, spatial quads, cross-color triplets, and cross-color quads are presented and compared with the existing reversible watermarking algorithms. These results indicate that the spatial, quad-based algorithm allows for hiding the largest payload at the highest signal-to-noise ratio. <s> BIB003 </s> Survey on Reversible Watermarking <s> Higher embedding capacity : <s> In lossless watermarking, it is possible to completely remove the embedding distortion from the watermarked image and recover an exact copy of the original unwatermarked image. Lossless watermarks found applications in fragile authentication, integrity protection, and metadata embedding. It is especially important for medical and military images. Frequently, lossless embedding disproportionately increases the file size for image formats that contain lossless compression (RLE BMP, GIF, JPEG, PNG, etc...). This partially negates the advantage of embedding information as opposed to appending it.
In this paper, we introduce lossless watermarking techniques that preserve the file size. The formats addressed are RLE encoded bitmaps and sequentially encoded JPEG images. The lossless embedding for the RLE BMP format is designed in such a manner to guarantee that the message extraction and original image reconstruction is insensitive to different RLE encoders, image palette reshuffling, as well as to removing or adding duplicate palette colors. The performance of both methods is demonstrated on test images by showing the capacity, distortion, and embedding rate. The proposed methods are the first examples of lossless embedding methods that preserve the file size for image formats that use lossless compression. <s> BIB004 </s> Survey on Reversible Watermarking <s> Higher embedding capacity : <s> We propose a new reversible (lossless) watermarking algorithm for digital images. Being reversible, the algorithm enables the recovery of the original host information upon the extraction of the embedded information. The proposed technique exploits the inherent correlation among the adjacent pixels in an image region using a predictor. The information bits are embedded into the prediction errors, which enables us to embed a large payload while keeping the distortion low. A histogram shift at the encoder enables the decoder to identify the embedded location. <s> BIB005 </s> Survey on Reversible Watermarking <s> Higher embedding capacity : <s> Recently, the development of data hiding techniques to hide annotations, confidential data, or side information into multimedia attracts the attention of researchers in various fields, especially in digital library. One of the essential tasks in digital library is the digitization of arts together with the corresponding textural descriptions. The purpose of data hiding is to embed relating textural description into the image to form an embedded image instead of two separate files (text file and image file).
The hidden textural description and the original host image can be extracted and reconstructed from the embedded image in the reverse data extraction process. However, the reconstructed host image will more or less be distorted by utilizing traditional data hiding methods. In this paper, we propose a novel lossless data hiding method based on pixel decomposition and pair-wise logical computation. In addition to the lossless reconstruction of original host images, the results generated by the proposed method can also obtain high data hiding capacity and good visual quality. Furthermore, the task of tampering detection can also be achieved in the proposed method to ensure content authentication. Experimental results demonstrate the feasibility and validity of our proposed method. <s> BIB006 </s> Survey on Reversible Watermarking <s> Higher embedding capacity : <s> This paper presents a reversible data hiding method based on wavelet spread spectrum and histogram modification. Using the spread spectrum scheme, we embed data in the coefficients of integer wavelet transform in high frequency subbands. The pseudo bits are also embedded so that the decoder does not need to know which coefficients have been selected for data embedding, thus enhancing data hiding efficiency. Histogram modification is used to prevent the underflow and overflow. Experimental results on some frequently used images show that our method has achieved superior performance in terms of high data embedding capacity and high visual quality of marked images, compared with the existing reversible data hiding schemes. <s> BIB007 </s> Survey on Reversible Watermarking <s> Higher embedding capacity : <s> Secret sharing is to send shares of a secret to several participants, and the hidden secret can be decrypted only by gathering the distributed shares. This paper proposes an efficient secret sharing scheme using Lagrange's interpolation for generalized access structures.
Furthermore, the generated shared data for each qualified set is 1/(r-1) smaller than the original secret image if the corresponding qualified set has r participants. In our sharing process, a sharing circle is constructed first and the shared data is generated from the secret images according to this sharing circle. The properties of the sharing circle not only reduce the size of generated data between two qualified sets but also maintain the security. Thus the actual ratio of the shared data to the original secret images is reduced further. The proposed scheme offers a more efficient and effective way to share multiple secrets. <s> BIB008
The capacity of a digital watermark refers to the amount of information that can be embedded within the media. The embedding capacity of reversible watermarking is much higher than that of conventional watermarking schemes. The embedding capacity should not be low, as it affects the accuracy of the extracted watermark and the recovered image. The general procedures of conventional and reversible watermarking are illustrated in Figure 1. The two procedures are similar except for one step: in reversible watermarking, an extra function recovers the original image from the suspected image. This makes reversible watermarking suitable for applications where high-quality images are required, such as military and medical applications. Two research fields are closely connected with digital watermarking: data hiding (steganography) BIB008 and image authentication BIB002 . The purpose of data hiding is to hide secret information in a cover image, while the purpose of image authentication is to verify whether a received image has been tampered with. To achieve these goals, a data hiding scheme should have a large capacity to carry more secret information, and the hidden information must be imperceptible so that it remains secure. An image authentication scheme also embeds information in the protected image and must maintain imperceptibility between the original and processed images. The goal of reversible watermarking is to assure ownership and to recover the original image. Imperceptibility, blind and efficient embedding and retrieval, and robustness are the main criteria for reversible watermarking. A number of schemes for digital images have already been proposed; this paper focuses on some of these reversible watermarking schemes.
Several reversible watermarking schemes have been proposed BIB003 BIB004 BIB001 BIB005 BIB006 BIB007 .
Survey on Reversible Watermarking <s> II. REVERSIBLE WATERMARKING SCHEME BY APPLYING DATA COMPRESSION <s> The need for reversible or lossless watermarking methods has been highlighted to associate information with losslessly processed media or to enable their authentication. The paper first analyzes the specificity and the application scope of lossless watermarking methods. An original circular interpretation of a bijective transformation is then proposed to implement a method that fulfill all quality and functionality requirements. <s> BIB001 </s> Survey on Reversible Watermarking <s> II. REVERSIBLE WATERMARKING SCHEME BY APPLYING DATA COMPRESSION <s> We present a novel reversible (lossless) data hiding (embedding) technique, which enables the exact recovery of the original host signal upon extraction of the embedded information. A generalization of the well-known LSB (least significant bit) modification is proposed as the data embedding method, which introduces additional operating points on the capacity-distortion curve. Lossless recovery of the original is achieved by compressing portions of the signal that are susceptible to embedding distortion, and transmitting these compressed descriptions as a part of the embedded payload. A prediction-based conditional entropy coder which utilizes static portions of the host as side-information improves the compression efficiency, and thus the lossless data embedding capacity. <s> BIB002 </s> Survey on Reversible Watermarking <s> II. REVERSIBLE WATERMARKING SCHEME BY APPLYING DATA COMPRESSION <s> A novel framework is proposed for lossless authentication watermarking of images which allows authentication and recovery of original images without any distortions. This overcomes a significant limitation of traditional authentication watermarks that irreversibly alter image data in the process of watermarking and authenticate the watermarked image rather than the original. 
In particular, authenticity is verified before full reconstruction of the original image, whose integrity is inferred from the reversibility of the watermarking procedure. This reduces computational requirements in situations when either the verification step fails or the zero-distortion reconstruction is not required. A particular instantiation of the framework is implemented using a hierarchical authentication scheme and the lossless generalized-LSB data embedding mechanism. The resulting algorithm, called localized lossless authentication watermark (LAW), can localize tampered regions of the image; has a low embedding distortion, which can be removed entirely if necessary; and supports public/private key authentication and recovery options. The effectiveness of the framework and the instantiation is demonstrated through examples. <s> BIB003 </s> Survey on Reversible Watermarking <s> II. REVERSIBLE WATERMARKING SCHEME BY APPLYING DATA COMPRESSION <s> We present a novel lossless (reversible) data-embedding technique, which enables the exact recovery of the original host signal upon extraction of the embedded information. A generalization of the well-known least significant bit (LSB) modification is proposed as the data-embedding method, which introduces additional operating points on the capacity-distortion curve. Lossless recovery of the original is achieved by compressing portions of the signal that are susceptible to embedding distortion and transmitting these compressed descriptions as a part of the embedded payload. A prediction-based conditional entropy coder which utilizes unaltered portions of the host signal as side-information improves the compression efficiency and, thus, the lossless data-embedding capacity. <s> BIB004
To extract the original image from the watermarked image, recovery information is embedded into the original image. Along with this recovery information, the watermark data must also be embedded, so the capacity required for embedding is larger. A solution for embedding more data is therefore to compress the embedded data: applying data compression reduces its size. There are various techniques related to this BIB004 BIB001 BIB003 BIB002 . The embedding procedure is as follows:
1. L-level scalar quantization is applied to each pixel, and the remainders are generated.
2. A lossless coder is used to compress the remainders. In the example below, it is assumed that 16 remainders are compressed to 12 digits, denoted {x0, x1, ..., x11}, which can be losslessly decompressed back to the original 16 remainders.
3. The watermark is converted to L-ary digits and concatenated with the compressed data. For the same example, watermark W is converted from {10 0010 1011}2 to {4 2 1 0}5, and the payload becomes {x0, x1, x2, ..., x10, x11, 4, 2, 1, 0}.
4. The compressed data and watermark digits are added to the quantized image, producing the watermarked image.
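The four-step procedure above can be sketched as follows. This is a simplified binary (L = 2) illustration: the cited schemes use an L-ary representation and a prediction-based conditional entropy coder, whereas here zlib stands in as the lossless coder, and all function names are illustrative assumptions.

```python
import zlib

def bytes_to_bits(data):
    # MSB-first bit expansion of a byte string.
    return [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]

def bits_to_bytes(bits):
    # Inverse of bytes_to_bits; a short final chunk is zero-padded.
    return bytes(
        sum(b << (7 - i) for i, b in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

def embed(pixels, watermark):
    """Reversibly hide `watermark` (bytes) in the LSBs of `pixels` (L = 2)."""
    # Step 1: 2-level scalar quantization -- zero the LSBs, keep the remainders.
    remainders = bytes(p % 2 for p in pixels)
    quantized = [p - p % 2 for p in pixels]
    # Step 2: losslessly compress the remainders to free embedding capacity.
    payload = bytes_to_bits(zlib.compress(remainders) + watermark)
    if len(payload) > len(pixels):
        raise ValueError("not enough capacity for remainders + watermark")
    # Steps 3-4: write compressed remainders followed by the watermark digits
    # into the freed LSBs; untouched trailing pixels keep an LSB of 0.
    return [q + b for q, b in zip(quantized, payload)] + quantized[len(payload):]

def extract(marked, wm_len):
    """Recover (original_pixels, watermark) exactly from the marked image."""
    stream = bits_to_bytes([p % 2 for p in marked])
    decoder = zlib.decompressobj()
    remainders = decoder.decompress(stream)   # compressed part of the stream
    watermark = decoder.unused_data[:wm_len]  # watermark digits follow it
    original = [(p - p % 2) + r for p, r in zip(marked, remainders)]
    return original, watermark
```

Round-tripping a smooth test image (all-even pixel values, so the remainders compress well) returns the watermark and restores every pixel exactly; on real images, the prediction-based coders described in BIB002 and BIB004 achieve far better remainder compression than this stand-in.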
The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Introduction <s> Computer vision technology is a sophisticated inspection technology that is in common use in various industries. However, it is not as widely used in aquaculture. Application of computer vision technologies in aquaculture, the scope of the present review, is very challenging. The inspected subjects are sensitive, easily stressed and free to move in an environment in which lighting, visibility and stability are generally not controllable, and the sensors must operate underwater or in a wet environment. The review describes the state of the art and the evolution of computer vision in aquaculture, at all stages of production, from hatcheries to harvest. The review is organized according to inspection tasks that are common to almost all production systems: counting, size measurement and mass estimation, gender detection and quality inspection, species and stock identification, and monitoring of welfare and behavior. The objective of the review is to highlight areas of research and development in the field of computer vision which have made some progress, but have not matured into a useful tool. There are many potential applications for this technology in aquaculture which could be useful for improving product quality or production efficiency. There have been quite a few initiatives in this direction, and a tight collaboration between engineers, fish physiologists and ethologists could contribute to the search for, and development of solutions for the benefit of aquaculture. <s> BIB001 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Introduction <s> With increased use of precision agriculture techniques, information concerning within-field crop yield variability is becoming increasingly important for effective crop management. 
Despite the commercial availability of yield monitors, many crop harvesters are not equipped with them. Moreover, yield monitor data can only be collected at harvest and used for after-season management. On the other hand, remote sensing imagery obtained during the growing season can be used to generate yield maps for both within-season and after-season management. This paper gives an overview on the use of airborne multispectral and hyperspectral imagery and high-resolution satellite imagery for assessing crop growth and yield variability. The methodologies for image acquisition and processing and for the integration and analysis of image and yield data are discussed. Five application examples are provided to illustrate how airborne multispectral and hyperspectral imagery and high-resolution satellite imagery have been used for mapping crop yield variability. Image processing techniques including vegetation indices, unsupervised classification, correlation and regression analysis, principal component analysis, and supervised and unsupervised linear spectral unmixing are used in these examples. Some of the advantages and limitations on the use of different types of remote sensing imagery and analysis techniques for yield mapping are also discussed. <s> BIB002 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Introduction <s> The existing state-of-the-art in wireless sensor networks for agricultural applications is reviewed thoroughly.The existing WSNs are analyzed with respect to communication and networking technologies, standards, and hardware.The prospects and problems of the existing framework are discussed with case studies for global and Indian scenarios.Few futuristic applications are presented highlighting the factors for improvements for the existing scenarios. The advent of Wireless Sensor Networks (WSNs) spurred a new direction of research in agricultural and farming domain. 
In recent times, WSNs are widely applied in various agricultural applications. In this paper, we review the potential WSN applications, and the specific issues and challenges associated with deploying WSNs for improved farming. To focus on the specific requirements, the devices, sensors and communication techniques associated with WSNs in agricultural applications are analyzed comprehensively. We present various case studies to thoroughly explore the existing solutions proposed in the literature in various categories according to their design and implementation related parameters. In this regard, the WSN deployments for various farming applications in the Indian as well as global scenario are surveyed. We highlight the prospects and problems of these solutions, while identifying the factors for improvement and future directions of work using the new age technologies. <s> BIB003 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Introduction <s> We presented a review on the representative vision schemes for harvesting robots.We reviewed hand-eye coordination techniques and their applications in harvesting robots.We presented some fruit or vegetable harvesting robots and their vision control techniques.We described and discussed the challenges and feature trends for robotic harvesting. Although there is a rapid development of agricultural robotic technologies, a lack of access to robust fruit recognition and precision picking capabilities has limited the commercial application of harvesting robots. On the other hand, recent advances in key techniques in vision-based control have improved this situation. These techniques include vision information acquisition strategies, fruit recognition algorithms, and eye-hand coordination methods. 
In a fruit or vegetable harvesting robot, vision control is employed to solve two major problems in detecting objects in tree canopies and picking objects using visual information. This paper presents a review on these key vision control techniques and their potential applications in fruit or vegetable harvesting robots. The challenges and feature trends of applying these vision control techniques in harvesting robots are also described and discussed in the review. <s> BIB004 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Introduction <s> Display Omitted We mould the concept Software Ecosystems to the agricultural domain.We propose a reference architecture for Farm Software Ecosystems.Our reference architecture describes an organizational and technical infrastructure.We motivate that our reference architecture can improve farm enterprise integration.Our reference architecture is used to review some existing initiatives. Smart farming is a management style that includes smart monitoring, planning and control of agricultural processes. This management style requires the use of a wide variety of software and hardware systems from multiple vendors. Adoption of smart farming is hampered because of a poor interoperability and data exchange between ICT components hindering integration. Software Ecosystems is a recent emerging concept in software engineering that addresses these integration challenges. Currently, several Software Ecosystems for farming are emerging. To guide and accelerate these developments, this paper provides a reference architecture for Farm Software Ecosystems. This reference architecture should be used to map, assess design and implement Farm Software Ecosystems. A key feature of this architecture is a particular configuration approach to connect ICT components developed by multiple vendors in a meaningful, feasible and coherent way. 
The reference architecture is evaluated by verification of the design with the requirements and by mapping two existing Farm Software Ecosystems using the Farm Software Ecosystem Reference Architecture. This mapping showed that the reference architecture provides insight into Farm Software Ecosystems as it can describe similarities and differences. A main conclusion is that the two existing Farm Software Ecosystems can improve configuration of different ICT components. Future research is needed to enhance configuration in Farm Software Ecosystems. <s> BIB005 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Introduction <s> Abstract Sugarcane is an important crop for tropical and sub-tropical countries. Like other crops, sugarcane agricultural research and practice is becoming increasingly data intensive, with several modeling frameworks developed to simulate biophysical processes in farming systems, all dependent on databases for accurate predictions of crop production. We developed a computational environment to support experiments in sugarcane agriculture and this article describes data acquisition, formatting, storage, and analysis. The potential to support creation of new agricultural knowledge is demonstrated through joint analysis of three experiments in sugarcane precision agriculture. Analysis of these case studies emphasizes spatial and temporal variations in soil attributes, sugarcane quality, and sugarcane yield. The developed computational framework will aid data-driven advances in sugarcane agricultural research. 
<s> BIB006 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Introduction <s> A WSN for extended monitoring of beehive activity and condition has been developed.Data collected from a beehive were analysed from a multi-disciplinary perspective.A decision tree algorithm describing hive/colony status was proposed and evaluated.An algorithm for predicting short term rainfall local to the hive was also proposed.The algorithms were deployed in network with a minimal energy increase (5.35%). United Nations reports throughout recent years have stressed the growing constraint of food supply for Earth's growing human population. Honey bees are a vital part of the food chain as the most important pollinator for a wide range of crops. It is clear that protecting the population of honey bees worldwide, as well as enabling them to maximise their productivity, is an important concern. In this paper heterogeneous wireless sensor networks are utilised to collect data on a range of parameters from a beehive with the aim of accurately describing the internal conditions and colony activity. The parameters measured were: CO2, O2, pollutant gases, temperature, relative humidity, and acceleration. Weather data (sunshine, rain, and temperature) were also collected to provide an additional analysis dimension. Using a data set from a deployment at a field-deployed beehive, a biological analysis was undertaken to classify ten important hive states. This classification led to the development of a decision tree based classification algorithm which could describe the beehive using sensor network data with 95.38% accuracy. Finally, a correlation between meteorological conditions and beehive data was observed. This led to the development of an algorithm for predicting short term rain based on the parameters within the hive. 
Envisioned applications of this algorithm include agricultural and environmental monitoring for short term local forecasts (95.4% accuracy). Experimental results shows the low computational and energy overhead (5.35% increase in energy consumption) of the classification algorithm when deployed on one network node, which allows the node to be a self-sustainable intelligent device for smart bee hives. <s> BIB007 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Introduction <s> The next decade of competitive advantage revolves around the ability to make predictions and discover patterns in data. Data science is at the center of this revolution. Data science has been termed the sexiest job of the 21st century. Data science combines data mining, machine learning, and statistical methodologies to extract knowledge and leverage predictions from data. Given the need for data science in organizations, many small or medium organizations are not adequately funded to acquire expensive data science tools. Open source tools may provide the solution to this issue. While studies comparing open source tools for data mining or business intelligence exist, an update on the current state of the art is necessary. This work explores and compares common open source data science tools. Implications include an overview of the state of the art and knowledge for practitioners and academics to select an open source data science tool that suits the requirements of specific data science projects. 
<s> BIB008 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Introduction <s> We present the physical node virtualization model for agricultural applications.We justify the advantages of sensor-cloud framework over traditional WSN-based framework.We formulate the sensor node utilization model targeting agricultural applications.We present a model for providing cost effective agro-computing services to large number of farmers.The proposed model is suitable for setups with multiple organizations, users, and applications. The advent of the sensor-cloud framework empowers the traditional wireless sensor networks (WSNs) in terms of dynamic operation, management, storage, and security. In recent times, the sensor-cloud framework is applied to various real-world applications. In this paper, we highlight the benefits of using sensor-cloud framework for the efficient addressing of various agricultural problems. We address the specific challenges associated with designing a sensor-cloud system for agricultural applications. We also mathematically characterize the virtualization technique underlying the proposed sensor-cloud framework by considering the specific challenges. Furthermore, the energy optimization framework and duty scheduling to conserve energy in the sensor-cloud framework is presented. The existing works on sensor-cloud computing for agriculture does not specifically define the specific components associated with it. We categorize the distinct features of the proposed model and evaluated its applicability using various metrics. Simulation-based results show the justification for choosing the framework for agricultural applications. 
<s> BIB009 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Introduction <s> Smart Farming is a development that emphasizes the use of information and communication technology in the cyber-physical farm management cycle. New technologies such as the Internet of Things and Cloud Computing are expected to leverage this development and introduce more robots and artificial intelligence in farming. This is encompassed by the phenomenon of Big Data, massive volumes of data with a wide variety that can be captured, analysed and used for decision-making. This review aims to gain insight into the state-of-the-art of Big Data applications in Smart Farming and identify the related socio-economic challenges to be addressed. Following a structured approach, a conceptual framework for analysis was developed that can also be used for future studies on this topic. The review shows that the scope of Big Data applications in Smart Farming goes beyond primary production; it is influencing the entire food supply chain. Big data are being used to provide predictive insights in farming operations, drive real-time operational decisions, and redesign business processes for game-changing business models. Several authors therefore suggest that Big Data will cause major shifts in roles and power relations among different players in current food supply chain networks. The landscape of stakeholders exhibits an interesting game between powerful tech companies, venture capitalists and often small start-ups and new entrants. At the same time there are several public institutions that publish open data, under the condition that the privacy of persons must be guaranteed. 
The future of Smart Farming may unravel in a continuum of two extreme scenarios: 1) closed, proprietary systems in which the farmer is part of a highly integrated food supply chain or 2) open, collaborative systems in which the farmer and every other stakeholder in the chain network is flexible in choosing business partners as well for the technology as for the food production side. The further development of data and application infrastructures (platforms and standards) and their institutional embedment will play a crucial role in the battle between these scenarios. From a socio-economic perspective, the authors propose to give research priority to organizational issues concerning governance issues and suitable business models for data sharing in different supply chain scenarios. <s> BIB010
The Food and Agriculture Organization of the UN (FAO) predicts that the global population will reach 9.2 billion by 2050, and that food production must increase by 70 percent to keep pace . Income distribution in the world is uneven and sharply divided: in one part of the world prosperity exists, and there is constant demand for high-quality food, while in another part hunger and war exist, and the demand is for food in large quantities. With limited farmland and freshwater resources, this quality and quantity crisis in food can only be addressed by the application of ICT in agriculture. Both small- and large-scale farming can benefit from introducing ICT into the agricultural value chain, increasing productivity, improving quality, extending services, and reducing costs. Furthermore, ICT facilitates an information- and knowledge-based approach rather than a focus on input-intensive agriculture alone. As a result, agriculture becomes more networked, and decision making and resource utilization can be significantly improved. Depending on the context, ICT in agriculture is interchangeably referred to as e-agriculture, smart agriculture, precision agriculture (PA), or IoT (internet of things) in agriculture. Modern agriculture is highly automated, controlled, and constantly monitored. Sensors are at the heart of ICT, and the various sensing devices used for this purpose continuously generate large volumes of data. The application of data analytics helps consolidate research in agriculture: it provides insights into issues such as weather prediction, crop and livestock disease, irrigation management, and the supply and demand of agricultural inputs and outputs, and helps in solving those problems. It can also provide valuable information for optimal resource utilization and boosting production. Our work reviews research articles focused on agricultural data and provides insights into several agricultural issues.
A wide variety of review literature covers sensors and ICT in agriculture. Ojha et al. BIB003 reviewed the state-of-the-art of wireless sensor networks (WSNs) in agriculture, covering the applications, design, standards, and technologies of WSNs used in agriculture. In another article, the same authors BIB009 reviewed and proposed a sensor-cloud framework for efficiently addressing various agricultural problems and applications. Another review article covered key vision control techniques and their potential applications in fruit and vegetable harvesting robots BIB004 , looking in particular at various vision schemes and recognition approaches for harvesting robots. Similarly, Zion BIB001 reviewed the use of computer vision technologies in aquaculture, highlighting the measurement, stock identification, and monitoring of different genders and species of aquatic animals. Other reviews included the keywords "ICT" and "agriculture" but focused more on models and architectures in agriculture that absorb ICT BIB005 . A recent article by Wolfert et al. BIB010 reviewed the state-of-the-art of big data applications in smart farming and identified the socio-economic challenges associated with it; the article only briefly touched on the technological part and largely focused on socio-economic and governance issues for the design of suitable business models. Another recent article by Lan et al. reviewed the state-of-the-art in precision agricultural aviation technology, highlighting remote sensing, aerial spraying, and ground verification technologies. Likewise, a large number of research articles combine the use of artificial intelligence (AI), databases, and advanced statistical tools in agriculture BIB006 BIB002 BIB007 . However, review articles focused on sensors and data analytic techniques in agriculture remain scarce in the literature.
This article aims to review the use of ICT, especially sensors and data analytic techniques, in the area of agriculture. Agriculture is used here in a broad sense and covers research in crop cultivation, horticulture, animal husbandry, apiculture, and aquaculture. Our classification closely resembles the scientific classification of agriculture. By breaking agriculture into different subfields and reviewing the sensors and data analytics applied in each, we intend to complement existing reviews. The objective of this paper is to review research and development in the area of agriculture from a technological perspective, highlighting its various subfields, and to make it easy for readers to compare one subfield with the others.

1.1. Associated Technical Terminologies. We briefly explain certain terms that appear frequently in discussions of ICT, sensors, and data analytics. Sensors are electronic devices that measure a physical change in their environment and convert it into a suitable electrical form. Sensors such as environmental sensors, airflow sensors, location sensors, electrochemical sensors, mechanical sensors, and optical sensors are used to acquire various kinds of agricultural data. Smart sensors are capable of not only acquiring data but also storing, processing, and integrating it. Sensor fusion is the combination of two or more sensor outputs to gain additional insight or to overcome the weaknesses of a single sensor. Wireless sensor networks (WSNs) are networks of such sensors connected wirelessly. An embedded system is a microprocessor or microcontroller embedded into an electro-mechanical system to perform a particular task; it is programmable and has limited memory and processing power. Most embedded systems are sensing systems consisting of sensors and actuators. The internet of things (IoT) is a complex interconnected network of things that continuously exchange data.
Here, "things" refers to any physical device connected to the internet, such as sensors, cameras, wearables, vehicles, cell phones, and houses. Cloud computing is an internet-based computing service through which one can store, manipulate, and retrieve data and utilize resources from anywhere without actually owning the required hardware or software. Data analytics techniques analyze and process large datasets acquired from the field and provide meaningful information that any interested party may utilize in their future work. Such large datasets are called big data, and the analytic technique is called data mining. A separate field of study called data science has emerged recently, combining computer algorithms and statistical methods for data analytics BIB008 .
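The sensor-fusion concept defined above can be illustrated with a minimal sketch: combining two noisy readings of the same quantity by inverse-variance weighting, one of the simplest static fusion rules. The sensor values and variances below are purely illustrative, not taken from any study in this review.

```python
# Minimal sensor-fusion sketch: inverse-variance weighting of readings.
# A more precise sensor (lower variance) gets proportionally more weight,
# and the fused estimate has lower variance than any single sensor.

def fuse(readings):
    """Fuse a list of (value, variance) pairs into one (value, variance)."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / total
    return value, 1.0 / total

# e.g. a cheap air-temperature sensor (variance 4.0) and a better one (1.0)
fused_value, fused_var = fuse([(21.0, 4.0), (23.0, 1.0)])
print(fused_value, fused_var)  # → 22.6 0.8
```

The fused variance (0.8) is below that of either sensor alone, which is the practical payoff of fusing sensors rather than trusting the best single one; Kalman filtering extends this same idea to measurements arriving over time.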
The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> Abstract This paper proposed an agricultural application of wireless sensor network. The main work is to implement two types of nodes and building sensor network. The hardware platform is constituted by data process unit, radio module, sensor control matrix, data storage flash, power supply unit, analog interfaces and extended digital interfaces. The software system adopts TinyOS which is composed of system kernel, device drivers and applications. Energy-saving algorithm is implemented in the software system. The monitoring network adopts two networking protocols. The Collection Tree Protocol is a tree-based collection protocol which consists in collecting the data generated in the network into a base station. The dissemination is the complementary operation to collection. The goal of a dissemination protocol is to reliably deliver a piece of control and synchronization instructions to every node in the network. The experimental results show us that the monitoring system is feasible for applications in precision agriculture. <s> BIB001 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> With increased use of precision agriculture techniques, information concerning within-field crop yield variability is becoming increasingly important for effective crop management. Despite the commercial availability of yield monitors, many crop harvesters are not equipped with them. Moreover, yield monitor data can only be collected at harvest and used for after-season management. On the other hand, remote sensing imagery obtained during the growing season can be used to generate yield maps for both within-season and after-season management.
This paper gives an overview on the use of airborne multispectral and hyperspectral imagery and high-resolution satellite imagery for assessing crop growth and yield variability. The methodologies for image acquisition and processing and for the integration and analysis of image and yield data are discussed. Five application examples are provided to illustrate how airborne multispectral and hyperspectral imagery and high-resolution satellite imagery have been used for mapping crop yield variability. Image processing techniques including vegetation indices, unsupervised classification, correlation and regression analysis, principal component analysis, and supervised and unsupervised linear spectral unmixing are used in these examples. Some of the advantages and limitations on the use of different types of remote sensing imagery and analysis techniques for yield mapping are also discussed. <s> BIB002 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> Using a wireless sensor network, the authors developed an online microclimate monitoring and control system for greenhouses. They field-tested the system in a greenhouse in Punjab, India, evaluating its measurement capabilities and network performance in real time. <s> BIB003 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> Photosynthesis is considered the most important physiological function because it constitutes the main biomass entrance for the planet and consequently it permits the continuance of life on earth. Therefore, accurate photosynthesis measurement methods are required to understand many photosynthesis-related phenomena and to characterize new plant varieties. 
This project has been carried out to cover those necessities by developing a novel FPGA-based photosynthesis smart sensor. The smart sensor is capable of acquiring and fusing the primary sensor signals to measure temperature, relative humidity, solar radiation, CO2, air pressure and air flow. The measurements are used to calculate net photosynthesis in real time and transmit the data via wireless communication to a sink node. Also it is capable of estimating other response variables such as: carbon content, accumulated photosynthesis and photosynthesis first derivative. This permits the estimation of carbon balance and integrative and derivative variables from net photosynthesis in real time due to the FPGA processing capabilities. In addition, the proposed smart sensor is capable of performing signal processing, such as average decimation and Kalman filters, to the primary sensor readings so as to decrease the amount of noise, especially in the CO2 sensor while improving its accuracy. In order to prove the effectiveness of the proposed system, an experiment was carried out to monitor the photosynthetic response of chili pepper (Capsicum annuum L.) as case of study in which photosynthetic activity can successfully be observed during the excitation light periods. Results revealed useful information which can be utilized as new tool for precision agriculture by estimating the aforementioned variables and also the derivative and integrative new indexes. These indexes can be utilized to estimate carbon accumulation over the crop cycle and fast derivative photosynthesis changes in relation to the net photosynthesis measurement which can be utilized to detect different stress conditions in the crops, permitting growers to apply a correction strategy with opportunity.
<s> BIB004 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> today a major problem in Kerala is its heavy dependency on neighboring states for food products. One of the main reasons for decline in agriculture in our state is the lack of availability of cheap labour in our state. This problem can be overcome by automation in agriculture (1). The introduction of "AUTOMATED GREENHOUSE MONITORING SYSTEM" can bring a green revolution in agriculture. Introducing this system can help in increasing the cultivation in a controlled environment. Greenhouse environment, used to grow plants under controlled climatic conditions for efficient production, forms an important part of the agriculture and horticulture sectors. Appropriate environmental conditions are necessary for optimum plant growth, improved crop yields, and efficient use of water and other resources. Automating the data acquisition process of the soil conditions and various climatic parameters that govern plant growth allows information to be collected with less labor requirements. Existing EMSs are bulky, very costly, difficult to maintain and less appreciated by the technologically less skilled work-force. This project is designed using world's most powerful microcontroller PIC 16F877A where the temperature, humidity, soil moisture and illumination conditions are analysed (2). <s> BIB005 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> The concept of "Plant factory" could realize the multiple targets of high yield, high quality, high efficiency and security. It had become the trend of agricultural development. It solved the growing contradiction between people's increasing demand for green, organic food and the diminishing agricultural arable area in China. 
According to the research on the key technologies of plant factory, a small simulated environment for crop growth (i.e., a growth cabinet) was designed. The growth cabinet used the light-emitting diode (LED) light source as crop growth light and simulated ecological environment artificially based on the requirement of crop growth and development. The crop can obtain suitable environmental conditions for growth and development in anti-season and non-suitable environmental conditions by using the sensor and embedded technology. The results of experiments showed that the crop growth cabinet’s structure design was reasonable and had the advantages such as reliable performance, low-carbon, intelligence and security. <s> BIB006 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> Stock theft is a major problem in the agricultural sector in South Africa and threatens both commercial and the emerging farming sectors in most of the country. Although there have been several techniques to identify cattle and combat stock theft, the scourge has not been eradicated in the farming sector. This paper investigates how we can model cow behaviour using global positioning wireless nodes to get the expected position of a cow. The objective of this research is to model the typical behaviour of a cow to determine anomalies in behaviour that could indicate the presence of the thieves. A wireless sensor node was designed to sense the position and speed of a cow. The position and the speed of the cow are collected for analysis. A random walk model is applied to the cow's position in order to determine the probability of the boundary condition where we assume there is an increased probability of a cow on the boundary position being stolen. 
The Continuous Time Markov Processes (CTMP) is applied to the movement pattern of an individual cow in order to find the probability that the cow will be at the boundary position. The value of 2.5 km/h has been found as our threshold to detect any agitation of the animal. The cow has less probability to be at the boundary position. The predictive model allows us to prevent stock theft in farms especially in South Africa and Africa in general. <s> BIB007 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> Wireless sensor network (WSN) is most challenging area to be worked with low cost applications in diversified field developed for military as well as public. The current trend for research in WSN can be in the area of agricultural, where the concept of typical wireless communication with real time sensor nodes provides an approach for a low cost monitoring of a crop in an agricultural area that leads to effective utilization of resources as though WSN supports a very vast application we have chosen agriculture field with WSN because of various drawbacks that are discussed in this paper thus providing a solution to these. The objective of our proposed work is to analyze the behavior of sensor network being used for monitoring the crop in a defined area. For the prototype we have implemented the simulation in qualnet simulator as test scenario. Initially nodes are deployed in a simulated area, we have worked for two strategies such as grid and random topology as, where sensor are placed at different positions are dealt with collecting the data. Simulation shows that the readings obtained in qualnet are much more satisfactory with respect to the application employed that is similar to real time sensors.
<s> BIB008 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> The existing state-of-the-art in wireless sensor networks for agricultural applications is reviewed thoroughly. The existing WSNs are analyzed with respect to communication and networking technologies, standards, and hardware. The prospects and problems of the existing framework are discussed with case studies for global and Indian scenarios. Few futuristic applications are presented highlighting the factors for improvements for the existing scenarios. The advent of Wireless Sensor Networks (WSNs) spurred a new direction of research in agricultural and farming domain. In recent times, WSNs are widely applied in various agricultural applications. In this paper, we review the potential WSN applications, and the specific issues and challenges associated with deploying WSNs for improved farming. To focus on the specific requirements, the devices, sensors and communication techniques associated with WSNs in agricultural applications are analyzed comprehensively. We present various case studies to thoroughly explore the existing solutions proposed in the literature in various categories according to their design and implementation related parameters. In this regard, the WSN deployments for various farming applications in the Indian as well as global scenario are surveyed. We highlight the prospects and problems of these solutions, while identifying the factors for improvement and future directions of work using the new age technologies. <s> BIB009 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> Abstract A greenhouse is an enclosed structure that provides micro-climate for the plant growth.
This paper presents the design of a wireless sensor network that provides real-time monitoring of temperature, humidity and soil moisture of a greenhouse. An automated control system for managing these micro-climate parameters is developed to optimize the parameters and use of water. The sensor node developed handles the data from the sensors and triggers actuators based on the threshold algorithm programmed into the microcontroller. The gateway receives the sensor data and control information through Zigbee and transmits the data to the web application for remote monitoring. The monitor software provides network view with nodes and their information. Information management system is also designed to monitor the data at any required time. <s> BIB010 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> Plant yield and productivity are significantly affected by abiotic stresses such as water or nutrient deficiency. Automated timely detection of plant stress can mitigate stress development, thereby maximizing productivity and fruit quality. A multi-modal sensing system was developed and evaluated to identify the onset and severity of plant stress in young apple trees (cultivar ‘Gale Gala’) under five different water treatments in a greenhouse. The multi-modal sensors include a multispectral camera, an NDVI sensor, a digital camera, an ultrasonic range finder, and a thermal imager. Photosynthesis measurements for each water treatment group were recorded to determine photosynthesis reduction due to water stress and compared with multi-modal sensor responses. Data analysis determined that spectral signature (NDVI) and canopy temperature are highly correlated to plant water stress. The highest correlation to photosynthesis reduction was found for canopy temperature (r2 = 0.83), followed by GreenSeeker NDVI (r2 = 0.76) and multispectral camera (r2 = 0.64).
<s> BIB011 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> Abstract Sugarcane is an important crop for tropical and sub-tropical countries. Like other crops, sugarcane agricultural research and practice is becoming increasingly data intensive, with several modeling frameworks developed to simulate biophysical processes in farming systems, all dependent on databases for accurate predictions of crop production. We developed a computational environment to support experiments in sugarcane agriculture and this article describes data acquisition, formatting, storage, and analysis. The potential to support creation of new agricultural knowledge is demonstrated through joint analysis of three experiments in sugarcane precision agriculture. Analysis of these case studies emphasizes spatial and temporal variations in soil attributes, sugarcane quality, and sugarcane yield. The developed computational framework will aid data-driven advances in sugarcane agricultural research. <s> BIB012 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> We conduct a greenhouse phenotyping study on two maize genotypes with two water regimes. Plant projected area accurately predicts shoot fresh weight, dry weight, and leaf area. Daily water consumption is derived and found to be determined by water treatments. Water use efficiency is derived and determined by plant genotype. Leaf spectra from hyperspectral images accurately predicts plant leaf water content. Automated collection of large scale plant phenotype datasets using high throughput imaging systems has the potential to alleviate current bottlenecks in data-driven plant breeding and crop improvement.
In this study, we demonstrate the characterization of temporal dynamics of plant growth and water use, and leaf water content of two maize genotypes under two different water treatments. RGB (Red Green Blue) images are processed to estimate projected plant area, which are correlated with destructively measured plant shoot fresh weight (FW), dry weight (DW) and leaf area. Estimated plant FW and DW, along with pot weights, are used to derive daily plant water consumption and water use efficiency (WUE) of the individual plants. Hyperspectral images of plants are processed to extract plant leaf reflectance and correlate with leaf water content (LWC). Strong correlations are found between projected plant area and all three destructively measured plant parameters (R2 > 0.95) at early growth stages. The correlations become weaker at later growth stages due to the large difference in plant structure between the two maize genotypes. Daily water consumption (or evapotranspiration) is largely determined by water treatment, whereas WUE (or biomass accumulation per unit of water used) is clearly determined by genotype, indicating a strong genetic control of WUE. LWC is successfully predicted with the hyperspectral images for both genotypes (R2=0.81 and 0.92). Hyperspectral imaging can be a very powerful tool to phenotype biochemical traits of the whole maize plants, complementing RGB for plant morphological trait analysis.
In this paper, we have designed a prototype of a smart greenhouse using Arduino microcontroller, simple yet improved in feedbacks and algorithms. Only three important microclimatic parameters namely moisture level, temperature and light are taken into consideration for the design of the system. Signals acquired from the sensors are first isolated and filtered to reduce noise before it is processed by Arduino. With the help of LabVIEW program, Time domain analysis and Fast Fourier Transform (FFT) of the acquired signals are done to analyze the waveform. Especially, for smoothing the outlying data digitally, a moving average algorithm is designed. With the implementation of this algorithm, variations in the sensed data which could occur from rapidly changing environment or imprecise sensors, could be largely smoothed and stable output could be created. Also, actuators are controlled with constant feedbacks to ensure desired conditions are always met. Lastly, data is constantly acquired by the use of Data Acquisition Hardware and can be viewed through PC or Smart devices for monitoring purposes. <s> BIB014 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human errors. In recent years, research therefore has focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity and to reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature.
A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images were used within a multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects is gathered using the classification results obtaining the quantity of grape bunches, berries and the berry diameter. <s> BIB015 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> We presented a review on the representative vision schemes for harvesting robots. We reviewed hand-eye coordination techniques and their applications in harvesting robots. We presented some fruit or vegetable harvesting robots and their vision control techniques. We described and discussed the challenges and feature trends for robotic harvesting. Although there is a rapid development of agricultural robotic technologies, a lack of access to robust fruit recognition and precision picking capabilities has limited the commercial application of harvesting robots. On the other hand, recent advances in key techniques in vision-based control have improved this situation. These techniques include vision information acquisition strategies, fruit recognition algorithms, and eye-hand coordination methods. In a fruit or vegetable harvesting robot, vision control is employed to solve two major problems in detecting objects in tree canopies and picking objects using visual information.
This paper presents a review on these key vision control techniques and their potential applications in fruit or vegetable harvesting robots. The challenges and feature trends of applying these vision control techniques in harvesting robots are also described and discussed in the review. <s> BIB016 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> Management of poultry farms in China mostly relies on manual labor. Since such a large amount of valuable data for the production process either are saved incomplete or saved only as paper documents, making it very difficult for data retrieve, processing and analysis. An integrated cloud-based data management system (CDMS) was proposed in this study, in which the asynchronous data transmission, distributed file system, and wireless network technology were used for information collection, management and sharing in large-scale egg production. The cloud-based platform can provide information technology infrastructures for different farms. The CDMS can also allocate the computing resources and storage space based on demand. A real-time data acquisition software was developed, which allowed farm management staff to submit reports through website or smartphone, enabled digitization of production data. The use of asynchronous transfer in the system can avoid potential data loss during the transmission between farms and the remote cloud data center. All the valid historical data of poultry farms can be stored to the remote cloud data center, and then eliminates the need for large server clusters on the farms. Users with proper identification can access the online data portal of the system through a browser or an APP from anywhere worldwide. 
Keywords: cloud-based data management system (CDMS), egg production, intensified laying-hen farms, asynchronous data transmission, metadata. DOI: 10.3965/j.ijabe.20160904.2488. Citation: Chen H Q, Xin H W, Teng G H, Meng C Y, Du X D, Mao T T, et al. Cloud-based data management system for automatic real-time data acquisition from large-scale laying-hen farms. Int J Agric & Biol Eng, 2016; 9(4): 106-115. <s> BIB017 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Sensors and Data Analytics in Agriculture <s> Maximum light use efficiency ( ${\text{LUE}}_{\rm{max}}$ ) is an important parameter in biomass estimation models (e.g., the Production Efficiency Models (PEM)) based on remote sensing data; however, it is usually treated as a constant for a specific plant species, leading to large errors in vegetation productivity estimation. This study evaluates the feasibility of deriving spatially variable crop ${\text{LUE}}_{\rm{max}}$ from satellite remote sensing data. ${\text{LUE}}_{\rm{max}}$ at the plot level was retrieved first by assimilating field measured green leaf area index and biomass into a crop model (the Simple Algorithm for Yield estimate model), and was then correlated with a few Landsat-8 vegetation indices (VIs) to develop regression models. ${\text{LUE}}_{\rm{max}}$ was then mapped using the best regression model from a VI. The influence factors on ${\text{LUE}}_{\rm{max}}$ variability were also assessed. Contrary to a fixed ${\text{LUE}}_{\rm{max}}$ , our results suggest that ${\text{LUE}}_{\rm{max}}$ is affected by environmental stresses, such as leaf nitrogen deficiency.
The strong correlation between the plot-level ${\text{LUE}}_{\rm{max}}$ and VIs, particularly the two-band enhanced vegetation index for winter wheat (Triticum aestivum) and the green chlorophyll index for maize (Zea mays) at the milk stage, provided a potential to derive ${\text{LUE}}_{\rm{max}}$ from remote sensing observations. To evaluate the quality of ${\text{LUE}}_{\rm{max}}$ derived from remote sensing data, biomass of winter wheat and maize was compared with that estimated using a PEM model with a constant ${\text{LUE}}_{\rm{max}}$ and the derived variable ${\text{LUE}}_{\rm{max}}$. Significant improvements in biomass estimation accuracy were achieved (by about 15.0% for the normalized root-mean-square error) using the derived variable ${\text{LUE}}_{\rm{max}}$. This study offers a new way to derive ${\text{LUE}}_{\rm{max}}$ for a specific PEM and to improve the accuracy of biomass estimation using remote sensing. <s> BIB018 </s> Assessment of soil health involves determining how well a soil is performing its biological, chemical, and physical functions relative to its inherent potential. Due to high cost, labor requirements, and soil disturbance, traditional laboratory analyses cannot provide high resolution soil health data. Therefore, sensor-based approaches are important to facilitate cost-effective, site-specific management for soil health. In the Central Claypan Region of Missouri, USA, visible and near-infrared (VNIR) diffuse reflectance spectroscopy has successfully been used to estimate biological components of soil health as well as Soil Management Assessment Framework (SMAF) scores. In contrast, estimation models for important chemical and physical aspects of soil health have been less successful with VNIR spectroscopy.
The primary objective of this study was to apply a sensor fusion approach to estimate soil health indicators and SMAF scores using VNIR spectroscopy in conjunction with soil apparent electrical conductivity (ECa), and penetration resistance measured by cone penetrometer (i.e., cone index, CI). Soil samples were collected from two depths (0–5 and 5–15 cm) at 108 locations within a 10-ha research site encompassing different cropping systems and landscape positions. Soil health measurements and VNIR spectral data were obtained in the laboratory, while CI and ECa data were obtained in situ. Calibration models were developed with partial least squares (PLS) regression and model performance was evaluated using coefficient of determination (R2) and root mean square error (RMSE). Models integrating ECa and CI with VNIR reflectance data improved estimates of the overall SMAF score (R2 = 0.78, RMSE = 7.21%) relative to VNIR alone (R2 = 0.69, RMSE = 8.41%), reducing RMSE by 14%. Improved models were also achieved for estimates of the individual chemical, biological, and physical soil health scores, demonstrating reductions in RMSE of 2.8, 5.4, and 10.0%, respectively. The results of this study illustrate the potential for rapid quantification of soil health by fusing VNIR sensor data with auxiliary data obtained from complementary sensors.
<s> BIB019 </s> A novel method was proposed to assess lameness of broilers. 93% of numbers of lying were correctly classified by the proposed 3D vision camera system. The correlation between proposed and reference methods was found very high. Measurements can be made continuously, in a fully automated and non-invasive way. In this study, a new and non-invasive method was developed to automatically assess the lameness of broilers. For this aim, images of broiler chickens were recorded by a 3D vision camera, which has a depth sensor, as they walked along a test corridor. Afterwards, an image-processing algorithm was applied to detect the number of lying events (NOL) based on the information of the distance between the animal and the depth sensor of the 3D camera. In addition to that, latency to lie down (LTL) of broilers was detected by the 3D camera. Later on, the data obtained by the proposed system were compared with visually assessed manual labelling data (reference method) and the relation between these measures and lameness was investigated. 93% of NOL were correctly classified by the proposed 3D vision camera system when compared to manual labelling using a data set collected from 250 broiler chickens. Furthermore, the results showed a significant correlation between NOL and gait score (R2=0.934) and a significant negative correlation between LTL and gait score level of broiler chickens (R2=0.949).
Because strong correlations were found between NOL, LTL and gait score level of broilers on the one hand, and between the results obtained by the 3D system and manual labelling on the other hand, the results indicate that this 3D vision monitoring method can be used as a tool for assessing lameness of broiler chickens. <s> BIB020 </s>
3.1. Agronomy/Crop Farming. The general use of wireless sensor networks in crop monitoring and data acquisition can be found in various research and review articles BIB001 BIB008. Modern agronomical research and practice are becoming more and more data intensive. Data are continually collected, analyzed, and simulated to understand and predict crop growth and behavior under various circumstances. Driemeier et al. BIB012 proposed a computational environment to support research in sugarcane precision agriculture. The work presented a data analysis workflow model for data acquisition, formatting, and verification. The model was employed to analyze three joint experiments comprising soil attributes, sugarcane quality, and sugarcane yield. Yang et al. BIB002 reported the use of airborne multispectral and hyperspectral imagery and high-resolution satellite imagery for monitoring growth and estimating crop yield. They presented several application examples to demonstrate the advantages and limitations of different remote sensing and imagery analysis techniques. Similarly, Dong et al. BIB018 studied the feasibility of deriving spatially variable crop maximum light use efficiency (LUEmax) from satellite remote sensing data to improve crop biomass estimation. This study offered a new way to derive LUEmax for specific production efficiency models (PEM) and to improve the accuracy of biomass estimation using remote sensing. Equations (1), (2), and (3) represent an accuracy assessment model based on three statistical criteria:
RMSE = $\sqrt{\frac{1}{n}\sum_{i=1}^{n}(E_i - M_i)^2}$ (1)
nRMSE = $\frac{\text{RMSE}}{\bar{M}} \times 100$ (2)
d = $1 - \frac{\sum_{i=1}^{n}(E_i - M_i)^2}{\sum_{i=1}^{n}(|E_i - \bar{M}| + |M_i - \bar{M}|)^2}$ (3)
Here, RMSE, nRMSE, and d-index represent the root-mean-square error, the normalized RMSE, and the index of agreement, respectively. Smaller values of RMSE and nRMSE indicate higher estimation accuracy. The parameters inside the equations, $n$, $M_i$, $E_i$, and $\bar{M}$, represent the number of observations, the measured value, the estimated value, and the mean of all measured values, respectively. Similarly, Ge et al.
BIB013 reported a study to characterize the temporal dynamics of maize plants' growth and water use through RGB (red, green, blue) images and automated pot weights. These methods proved helpful in quantifying plant leaf water content. Figure 2 represents the hyperspectral image analysis used to extract leaf pixels and the average leaf reflectance to predict plant leaf water content. Kristen et al. BIB019 reported sensor-based approaches to facilitate cost-effective and site-specific management for soil health. The authors applied a sensor fusion approach (partial least squares analysis) to estimate soil health indicators and soil management assessment framework (SMAF) scores using visible and near-infrared (VNIR) spectra in conjunction with apparent electrical conductivity (ECa) sensor data. The fusion of ECa and cone index (CI) data with VNIR improved estimation of the physical category and subsequently the overall SMAF soil health score. However, chemical and fertility-related soil properties were not well estimated by this sensor fusion combination. 3.2. Horticulture/Plant Farming. Horticulture can sometimes be considered as a branch of agronomy concerned with the cultivation of plants, fruits, and vegetables rather than crops. The sensing technologies used in both areas are of a similar nature. Various papers report the design of greenhouse horticulture monitoring and control systems with WSN and commercially available embedded systems or IoT prototyping platforms BIB010 BIB003 BIB005. Kim and Glenn BIB011 reported the development of a multimodal sensing system to identify the onset and severity of plant stress in young apple trees under different water treatments in a greenhouse. The data analysis determined that the spectral signature and canopy temperature were highly correlated to plant water stress. Figure 3 represents thermal images of five different apple trees in the temperature range of 22.3°C to 40.7°C.
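The feature-level sensor fusion with PLS calibration described above for BIB019 can be sketched with a minimal NIPALS PLS1 regression. This is an illustrative sketch, not the authors' implementation: the data below are random mock values, and the band counts and variable names are hypothetical.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal NIPALS PLS1 regression: returns coefficients B and intercept b0
    such that predictions are X @ B + b0."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                      # weight: covariance direction
        w = w / np.linalg.norm(w)
        t = Xc @ w                         # latent scores
        tt = t @ t
        p = Xc.T @ t / tt                  # X loadings
        qa = (yc @ t) / tt                 # y loading
        Xc = Xc - np.outer(t, p)           # deflate X
        yc = yc - qa * t                   # deflate y
        W.append(w); P.append(p); q.append(qa)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)    # coefficients in original feature space
    return B, y_mean - x_mean @ B

# Mock fused feature matrix: VNIR reflectance bands plus ECa and cone index (CI).
rng = np.random.default_rng(0)
vnir = rng.normal(size=(60, 8))            # 8 mock VNIR bands
eca = rng.normal(size=(60, 1))             # mock apparent electrical conductivity
ci = rng.normal(size=(60, 1))              # mock cone index
X = np.hstack([vnir, eca, ci])             # sensor fusion: concatenate features
y = X @ rng.normal(size=10) + 5.0          # mock SMAF score, linear in features
B, b0 = pls1_fit(X, y, n_components=10)    # with all components, PLS matches OLS
```

In practice the number of components would be chosen by cross-validation, well below the feature count, which is where PLS gains over ordinary least squares on collinear spectra.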
The rectangle in each image indicates the region of interest (ROI) for the calculation of canopy temperature. In a similar manner, Tian et al. BIB006 reported the design of a growth cabinet using an LED light source for hydroponic cultivation of rape plants. The work's focus was to design a light source made up of blue and red LEDs to predict and provide the energy required by plants for photosynthesis at different growth stages. The designed system could also control microclimatic parameters like temperature, humidity, and light intensity. The authors of BIB014 applied a moving average algorithm for smoothing out variations in sensed data in a greenhouse automation system. The algorithm could greatly help in stabilizing fluctuations caused by rapid changes in the environment or imprecise sensors, thus producing a stable output. Equation (4) represents the mathematical expression of the moving average algorithm presented in the work; in its standard form, the smoothed value is the mean of the last $k$ readings, $\bar{y}_t = \frac{1}{k}\sum_{j=0}^{k-1} y_{t-j}$. Rose et al. BIB015 researched the collection, classification, and quantification of phenotypic data of multiple vine rows using commercial multi-view-stereo software. Using a moving sensor platform (a track-driven vehicle, camera, GPS, and data acquisition hardware), morphological data of multiple vine rows were acquired. The authors claimed to complement existing 2D research with a 3D solution based on image processing and demonstrated different data processing stages for predicting yields. Similarly, Zhao et al. BIB016 reviewed key techniques in vision-based control for harvesting fruits and vegetables. Their work presented an overview of various vision schemes (binocular, spectral, thermal, laser, etc.) and image processing algorithms (AdaBoost, Bayesian, fuzzy neural, etc.) for fruit recognition systems. Jesus et al. BIB004 proposed a field programmable gate array- (FPGA-) based wireless smart sensor for a real-time photosynthesis monitoring system. A case study to monitor the photosynthetic response of chili pepper (Capsicum annuum L.)
is made where the smart sensor acquires and fuses the primary sensor signals to measure temperature, relative humidity, solar radiation, CO2, air pressure, and air flow. The measurements are used to calculate net photosynthesis in real time and transmit the data via wireless communication to a sink node. In addition, the proposed smart sensor applied signal processing, such as average decimation and Kalman filtering, to the primary sensor readings so as to decrease the amount of noise in them. In animal farming, sensing systems and data analytics likewise help in the decision-making process and provide solutions to various problems. Aydin BIB020 reported a study to automatically assess the lameness of broilers by observing locomotion behaviors with the use of a 3D vision camera and an image processing algorithm. The work is presented as a first attempt at assessing the lameness of broiler chickens with the use of 3D cameras having depth sensors, as shown in Figure 5. Experiments were conducted to determine the number of lying events and the latency to lie down of broiler chickens. Gait scores from 0 to 4 were chosen to rank lameness in the chickens. The proposed system showed a high correlation between its output parameters and the manual labeling. The authors therefore asserted their work would be useful for developing an automatic animal monitoring and behavior analysis system to assess the health and welfare of broilers. Hongqian et al. BIB017 proposed a cloud-based data management system (CDMS) for automatic data collection on laying-hen farms. The CDMS facilitated asynchronous data transmission (Kafka-based), file distribution (Hadoop-based), and information collection and management (MySQL-based) for farms located in different areas. The system was set up with 8 networking nodes and tested at a commercial egg farm.
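The noise-reduction stage reported for the FPGA smart sensor above (average decimation followed by Kalman filtering) can be sketched for a single scalar channel. This is a minimal sketch, not the authors' FPGA implementation; the noise variances and the sample values are assumed for illustration.

```python
def decimate_average(samples, factor):
    """Average decimation: each block of `factor` raw samples becomes one mean
    value, lowering the data rate while suppressing high-frequency noise."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples) - factor + 1, factor)]

def kalman_smooth(measurements, q=1e-4, r=0.25):
    """Scalar Kalman filter for a slowly varying signal.
    q: process-noise variance, r: measurement-noise variance (assumed values)."""
    x, p = measurements[0], 1.0            # initial state estimate and variance
    estimates = []
    for z in measurements:
        p = p + q                          # predict: uncertainty grows
        k = p / (p + r)                    # Kalman gain
        x = x + k * (z - x)                # correct with the new reading
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Decimate a noisy CO2 trace by a factor of 2, then smooth the result.
raw = [400.0, 402.0, 401.0, 399.0, 420.0, 398.0, 400.0, 401.0]
smoothed = kalman_smooth(decimate_average(raw, 2))
```

The decimation stage trades temporal resolution for noise suppression before the filter runs, which keeps the per-sample cost low on embedded hardware.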
The work was expected to support modern poultry farming with efficient big-data management and real-time record-keeping. Nkwari et al. BIB007 presented an article where cow behavior is modeled using data collected from GPS sensors to predict the expected position of a cow. The authors used continuous-time Kalman filters for this prediction.
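A continuous-time formulation is beyond a short sketch, but the discrete constant-velocity Kalman filter below illustrates the same idea of predicting a cow's expected position from GPS fixes. All parameters, and the fixes themselves, are hypothetical illustrative values, not those of the cited work.

```python
import numpy as np

dt = 1.0                                   # time between GPS fixes (s), assumed
F = np.array([[1, 0, dt, 0],               # constant-velocity state transition
              [0, 1, 0, dt],               # state: [x, y, vx, vy]
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # GPS observes position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3                       # process-noise covariance (assumed)
R = np.eye(2) * 0.5                        # GPS measurement noise (assumed)

def step(x, P, z):
    """One predict/correct cycle of the Kalman filter for a new GPS fix z."""
    x = F @ x                              # predict state forward one interval
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # correct with the GPS fix
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
fixes = [np.array([1.0, 0.5]), np.array([2.0, 1.0]), np.array([3.0, 1.5])]
for z in fixes:                            # cow moving steadily north-east
    x, P = step(x, P, z)
predicted = (F @ x)[:2]                    # expected position at the next fix
```

After a few consistent fixes, the filter has learned the velocity, so the one-step prediction extrapolates ahead of the last measured position.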
The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Journal of Sensors <s> Computer vision technology is a sophisticated inspection technology that is in common use in various industries. However, it is not as widely used in aquaculture. Application of computer vision technologies in aquaculture, the scope of the present review, is very challenging. The inspected subjects are sensitive, easily stressed and free to move in an environment in which lighting, visibility and stability are generally not controllable, and the sensors must operate underwater or in a wet environment. The review describes the state of the art and the evolution of computer vision in aquaculture, at all stages of production, from hatcheries to harvest. The review is organized according to inspection tasks that are common to almost all production systems: counting, size measurement and mass estimation, gender detection and quality inspection, species and stock identification, and monitoring of welfare and behavior. The objective of the review is to highlight areas of research and development in the field of computer vision which have made some progress, but have not matured into a useful tool. There are many potential applications for this technology in aquaculture which could be useful for improving product quality or production efficiency. There have been quite a few initiatives in this direction, and a tight collaboration between engineers, fish physiologists and ethologists could contribute to the search for, and development of, solutions for the benefit of aquaculture. <s> BIB001 </s> In Central Western France, as in many other areas, traditional apiculture has been replaced by more intensive practices to compensate for colony losses and current decreasing honey yields.
One neglected aspect concerns the choice by professional beekeepers of apiary sites in intensive agrosystems, with regard to landscape features, a choice which appears to be largely empirical. ECOBEE is a colony monitoring scheme specifically intended to provide beekeepers and researchers with basic ecological data on honeybees in intensive agrosystems, as well as colony population dynamics. ECOBEE was launched in 2008 as a long-term ecological project with three specific aims: 1. to monitor seasonal and inter-annual population dynamic parameters of honeybee colonies in a heterogeneous farming system; 2. to provide relevant and robust datasets to test specific hypotheses about bees such as the influence of landscape planning, agricultural inputs or human pressure; and 3. to offer opportunities for assessing the e... <s> BIB002 </s> Honey bees have held a critical role in agriculture and nutrition from the dawn of human civilisation. The most crucial role of the bee is pollination; the value of pollination-dependent crops is estimated at € 155 billion per year, with honey bees identified as the most important pollinator insect. It is clear that honey bees are a vitally important part of the environment which cannot be allowed to fall into decline. The project outlined in this paper uses Wireless Sensor Network (WSN) technology to monitor a beehive colony and collect key information about activity/environment within a beehive as well as its surrounding area. This project uses low power WSN technologies, including novel sensing techniques, energy neutral operation, and multi-radio communications, together with cloud computing to monitor the behaviour within a beehive.
The insights gained through this activity could reduce long term costs and improve the yield of beekeeping, as well as providing new scientific evidence for a range of honey bee health issues. WSN is an emerging modern technology, key to the novel concept of the Internet of Things (IoT). Comprised of embedded sensing, computing and wireless communication devices, they have found applications in nearly every aspect of daily life. Informed by biologists' hypotheses, this work used existing, commercially available WSN platforms together with custom built systems in an innovative application to monitor honey bee health and activity in order to better understand how to remotely monitor the health and behaviour of the bees. Heterogeneous sensors were deployed, monitoring the honey bees in the hive (temperature, CO2, pollutants etc.). Weather conditions throughout the deployment were recorded and a relationship between the hive conditions and external conditions was observed. A full solution is presented including a smart hive, communication, and data aggregation and visualisation tools. Future work will focus on improving the energy performance of the system, introducing a more specialised set of sensors, implementing a machine learning algorithm to extract meaning from the data without human supervision, and securing additional deployments of the system. <s> BIB003 </s> In recent years monitoring of beehives through technology has become increasingly frequent in research and industry. This is due to a decline in beekeeping and stagnant honey bee populations across the globe, due to, among other factors, pests and disease. Recent advances in the area of low power wireless sensor network technology can be applied to the beehive for a better understanding of the colony's condition.
This combination of engineering and beekeeping has led to the emergence of Precision Beekeeping. One of the key metrics of the strength of a beehive is the weight of the colony. Changes in weight can accurately reflect the productivity of the colony, as well as its health and condition. This paper describes the development of a wireless platform weighing scale, for implementation as part of a smart beehive. A single point impact load cell was selected as the most appropriate load sensor and was integrated into the design of the scale. The final weighing system was interfaced via a high precision analogue to digital converter to an off-the-shelf processing platform enabled with a low power Zigbee radio, to allow for data transfer to the base station. An initial simulation of the scale's ability was carried out, using standard weights to simulate the brood chamber of a beehive and varying weight to mimic the production and consumption of honey. The results showed that the initial platform scale has a linear output characteristic. The analogue to digital converter was evaluated and the system was found to be able to detect changes in weight in the order of tens of grams. A power analysis of the system was also undertaken to confirm that the solution was suitable for remote, battery powered deployments. <s> BIB004 </s> A multi-parameter monitoring system based on a wireless network was set up to achieve remote real-time monitoring of aquaculture water quality, in order to improve the quality of aquaculture products and solve such problems as difficulty in wiring and high costs in current monitoring systems. In the system, solar cells and lithium cells were used for power supply.
The YCS-2000 dissolved oxygen sensor, pH electrode, Pt1000 temperature sensor and ammonia nitrogen sensor were used to monitor the parameters of aquaculture water quality; an STM32F103 chip was used for data processing; Zigbee and GPRS modules were used for data transmission to the remote monitoring center, where the data were stored and displayed. The system was connected with an aerator to realize automatic control of dissolved oxygen concentration. The test results showed a high confidence level of data transmission with a packet loss rate of 0.43%. Therefore, the system could fulfill the real-time remote monitoring of aquaculture water quality and had great practical significance in reduction of labor intensity, improvement of quality of aquatic products and protection of the water environment. <s> BIB005 </s> Predictive analytics can be used to make smarter decisions in farming by collecting real-time data on weather, soil and air quality, crop maturity and even equipment and labor costs and availability. This is known as precision agriculture. Big data is expected to play an important role in precision agriculture for managing real-time data analysis with massive streaming data. The data analysis efficiency and throughput would be a challenge with the massive increase in size of big data. The unstructured streaming data received from different agricultural sources would contain multiple dimensions, and not the entire content is needed for performing analysis.
The core data, which is small but alone enough to represent the entire content, should be extracted. This paper explains how to systematically reduce the size of big data by applying a tensor based feature reduction model. The data decomposition and core value extraction is done with the help of the IHOSVD algorithm. This way it reduces the overall file size by eliminating unwanted data dimensions. The time involved in data analysis and CPU usage will be significantly reduced when dimensionality reduced data is used in place of raw (unprocessed) data. <s> BIB006 </s> Poultry behavior monitoring is an important basis for poultry disease warning. Manual monitoring is mostly used nowadays. In this work, an automatic monitoring system for assisting manual monitoring was examined. Sophisticated data mining techniques were used to leverage the data collected by RFID devices. Specifically, (1) weighing sensors and wireless networks of multiple RFID-tag-collector groups were used to monitor the poultry behavior; (2) RFID tags were put on individual poultry so that the moving time of the poultry between two RFID-tag-collectors could be recorded. Thus, the characteristic functions of poultry behaviors such as speed, ability to snatch food and resting time could be extracted based on the distance between two RFID-tag-collectors and the relevant time parameters; (3) the sick, normal, active and other poultry groups were categorized by using the K-means method, utilizing the behavior characteristics and poultry weight data in data mining. The results demonstrated that accurate classifications could be obtained according to the poultry characteristics, and the clustering results matched the results obtained by the manual method to identify the poultry groups.
Consequently, the technique in this paper has great potential for large-scale poultry disease warning and poultry classification. <s> BIB007 </s> A WSN for extended monitoring of beehive activity and condition has been developed. Data collected from a beehive were analysed from a multi-disciplinary perspective. A decision tree algorithm describing hive/colony status was proposed and evaluated. An algorithm for predicting short term rainfall local to the hive was also proposed. The algorithms were deployed in-network with a minimal energy increase (5.35%). United Nations reports throughout recent years have stressed the growing constraint of food supply for Earth's growing human population. Honey bees are a vital part of the food chain as the most important pollinator for a wide range of crops. It is clear that protecting the population of honey bees worldwide, as well as enabling them to maximise their productivity, is an important concern. In this paper heterogeneous wireless sensor networks are utilised to collect data on a range of parameters from a beehive with the aim of accurately describing the internal conditions and colony activity. The parameters measured were: CO2, O2, pollutant gases, temperature, relative humidity, and acceleration. Weather data (sunshine, rain, and temperature) were also collected to provide an additional analysis dimension.
Using a data set from a deployment at a field-deployed beehive, a biological analysis was undertaken to classify ten important hive states. This classification led to the development of a decision tree based classification algorithm which could describe the beehive using sensor network data with 95.38% accuracy. Finally, a correlation between meteorological conditions and beehive data was observed. This led to the development of an algorithm for predicting short term rain based on the parameters within the hive. Envisioned applications of this algorithm include agricultural and environmental monitoring for short term local forecasts (95.4% accuracy). Experimental results show the low computational and energy overhead (5.35% increase in energy consumption) of the classification algorithm when deployed on one network node, which allows the node to be a self-sustainable intelligent device for smart bee hives. <s> BIB008 </s> Bees are very important for terrestrial ecosystems and, above all, for the subsistence of many crops, due to their ability to pollinate flowers. Currently, honey bee populations are decreasing due to colony collapse disorder (CCD). The reasons for CCD are not fully known, and as a result, it is essential to obtain all possible information on the environmental conditions surrounding the beehives. On the other hand, it is important to carry out such information gathering as non-intrusively as possible to avoid modifying the bees' work conditions and to obtain more reliable data. We designed a wireless-sensor network to meet these requirements. We designed a remote monitoring system (called WBee) based on a hierarchical three-level model formed by the wireless node, a local data server, and a cloud data server.
WBee is a low-cost, fully scalable, easily deployable system with regard to the number and types of sensors and the number of hives and their geographical distribution. WBee saves the data in each of the levels if there are failures in communication. In addition, the nodes include a backup battery, which allows for further data acquisition and storage in the event of a power outage. Unlike other systems that monitor a single point of a hive, the system we present monitors and stores the temperature and relative humidity of the beehive in three different spots. Additionally, the hive is continuously weighed on a weighing scale. Real-time weight measurement is an innovation in wireless beehive-monitoring systems. We designed an adaptation board to facilitate the connection of the sensors to the node. Through the Internet, researchers and beekeepers can access the cloud data server to find out the condition of their hives in real time. <s> BIB009 </s> A Structured Light (SL) emission sensor was used to monitor multiple fish activities. The introduced tracking system is able to track multiple fish in three dimensions (3D). Accuracy assessment of results showed reliable trajectories. The introduced system can be used to monitor fish behaviours as a welfare indicator. Image-based monitoring using video tracking has been showing potential in aquaculture behavioural studies during the past decade. It provides higher spatial and temporal resolution in comparison to most conventional methods such as hand scoring, tagging or telemetry. It also permits more quantitative environmental data to be collected than do other methods. Studies about trajectory are usually based on tracking in two-dimensional (2D) environments; however, most aquatic organisms move in a three-dimensional (3D) environment, which greatly influences ecological interactions.
Furthermore, in most 2D image analysis, occlusion of fish is a frequent problem for analysis of tracking and ultimately evaluating their behaviour. Recently, sensors based on 3D single point imaging technology, which can provide geometric information of a 3D environment at a high frame rate in real time, have been developed. These sensors provide the opportunity to develop a practical and affordable tracking system to study movements of multiple fish in real time. This study aims to develop a multiple fish tracking system in 3D space based on a currently available structured-light sensor. A Kinect I, as a low-cost available structured-light sensor, was used to record a 10-min video of four Nile tilapia (Oreochromis niloticus) which were freely swimming in an aquarium. The video was processed to identify the position of each fish in 3D space (x, y, and z) within each frame so as to create a trajectory. The system accurately (98%) tracked multiple tilapia in an aquarium. Another objective of this study was comparing the trajectory of the introduced system with stereo vision as a conventional method for monitoring in 3D space. This study contributes to the feasibility of a new sensor for monitoring fish behaviours in 3D space. <s> BIB010 </s> Piglet crushing can be detected online using vocalisation analysis and context information. Fundamental crushing context information can be obtained by tracking the posture of the sows. Spatial event filtering is a prerequisite for a high precision of the crushing detection. Active measures against piglet crushing could replace passive measures like the farrowing cage. Fatal piglet crushing by the mother sow is a pervasive economic and animal welfare issue in piglet production. To keep the mother sow in a farrowing cage is the established countermeasure.
This facility is a compromise that results in an impairment of the sows welfare to the benefit of her piglets and the farmer. A natural behaviour pattern which is demonstrated by most but not all sows is to free the trapped piglet by a posture change. Promoting this behaviour through aversive stimulations is an alternative approach to reduce piglet mortality. This approach requires an identification and localisation of ongoing piglet trapping in real-time. The present study investigates the online analysis of piglet vocalisation for this purpose. The results show, that trapping related stress articulations are outnumbered by other stress related articulations by a factor of 1:140 in a farrowing compartment with only 4 sows. Theoretical calculations for larger compartments indicate that this ratio becomes even worse due to an increasing influence of vocalisation from neighbouring pens. However, the specificity could be increased to more than 95% and precision to approximately 30% while maintaining a sensitivity of approximately 70% by retrospectively applying context based event filters. This specificity would be sufficient to limit the average number of erroneous trapping detections to one detection per sow within 3days without a substantial loss of sensitivity. Effective parameters for filtering were the age of the piglets and the sows body posture history. Calculations with hypothetical spatial event filters showed that this classification performance could be maintained even in much larger farrowing compartments. Combined with an aversive stimulation principle that can be applied to a whole region, this detection technology could be useful to reduce piglet mortality in loose farrowing applications. An already known and effective stimulation principle of this type is floor vibration. Such an active piglet rescue system would allow limiting the impairment of welfare to only those sows that actually crush piglets and to the time when piglets are being crushed. 
<s> BIB011 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Journal of Sensors <s> Smart Farming is a development that emphasizes the use of information and communication technology in the cyber-physical farm management cycle. New technologies such as the Internet of Things and Cloud Computing are expected to leverage this development and introduce more robots and artificial intelligence in farming. This is encompassed by the phenomenon of Big Data, massive volumes of data with a wide variety that can be captured, analysed and used for decision-making. This review aims to gain insight into the state-of-the-art of Big Data applications in Smart Farming and identify the related socio-economic challenges to be addressed. Following a structured approach, a conceptual framework for analysis was developed that can also be used for future studies on this topic. The review shows that the scope of Big Data applications in Smart Farming goes beyond primary production; it is influencing the entire food supply chain. Big data are being used to provide predictive insights in farming operations, drive real-time operational decisions, and redesign business processes for game-changing business models. Several authors therefore suggest that Big Data will cause major shifts in roles and power relations among different players in current food supply chain networks. The landscape of stakeholders exhibits an interesting game between powerful tech companies, venture capitalists and often small start-ups and new entrants. At the same time there are several public institutions that publish open data, under the condition that the privacy of persons must be guaranteed. 
The future of Smart Farming may unravel in a continuum of two extreme scenarios: 1) closed, proprietary systems in which the farmer is part of a highly integrated food supply chain or 2) open, collaborative systems in which the farmer and every other stakeholder in the chain network is flexible in choosing business partners as well for the technology as for the food production side. The further development of data and application infrastructures (platforms and standards) and their institutional embedment will play a crucial role in the battle between these scenarios. From a socio-economic perspective, the authors propose to give research priority to organizational issues concerning governance issues and suitable business models for data sharing in different supply chain scenarios. <s> BIB012
Markov process (CTMP) in order to model the random movement pattern (stochastic process). Here, $P_{ij}$ is the total probability that the cow moves from location i to location j, and $P_{ij}^{(n)}$ represents the probability $P_{ij}$ after the cow has taken n steps. $p_1$, $p_2$, $p_3$, and $p_4$ represent the probabilities that the cow is within four different boundaries, and $a_1$ and $b_1$ represent the point limits of each boundary. By calculating which cow has a greater probability of being stolen, the work aims to help prevent cattle rustling on farms. Manteuffel et al. BIB011 presented a study which can help in preventing fatal piglet-crushing events by the mother sow. The work focuses on an extensive study of distress-specific vocalization to detect crushing events and thereby trigger a mechanism to induce posture changes. In another study, conducted by Feiyang et al. BIB007 , a wireless network with RFID tag collectors and weight sensors was employed to monitor chickens on a farm. The system detected sick chickens on the farm and classified them by studying their behaviors and parameters such as the ability to snatch food, resting time, moving speed, and weight. The resulting classification and the extracted information are expected to facilitate precise husbandry and epidemic warning. Equation (6) represents the formula for the chickens' resting time and the Euclidean distance formula applied in the K-means clustering method for the recognition of the chickens' disease and quality, respectively. Here, $\mathrm{Stay}_i$ represents the total resting time of chicken i, accumulated from the first period of stay, $\mathrm{Stay}_i^{\mathrm{first}}$, to the last period of stay, $\mathrm{Stay}_i^{\mathrm{last}}$. In the same manner, $i = (x_{i1}, x_{i2}, \ldots, x_{ip})$ and $j = (x_{j1}, x_{j2}, \ldots, x_{jp})$ are two p-dimensional objects with Euclidean distance $D(i, j)$ between them. The parameters $x_{i1}, x_{i2}, \ldots, x_{ip}$ are the ability to snatch food, weight, speed, and resting time of chicken i. The same concept applies to another chicken j.
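As a minimal illustration of the distance computation above (the feature values below are hypothetical and not taken from BIB007), the Euclidean distance between two chickens' p-dimensional feature vectors can be sketched in Python as:

```python
import math

def euclidean_distance(a, b):
    """D(i, j) between two p-dimensional feature vectors."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Hypothetical normalized features: [food-snatching ability, weight, speed, resting time]
chicken_i = [0.8, 0.7, 0.9, 0.2]  # active profile
chicken_j = [0.2, 0.5, 0.3, 0.8]  # sluggish profile

print(round(euclidean_distance(chicken_i, chicken_j), 3))  # → 1.058
```

K-means then assigns each such feature vector to the cluster whose centroid is nearest under this distance.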
A smaller Euclidean distance represents chickens with similar characteristics. 3.4. Apiculture/Beekeeping. Apiculture, or beekeeping, is a branch of agriculture where bee colonies are maintained in hives for harvesting honey. Various ICT technologies, especially WSNs, have previously been used to monitor beehives and gather environmental data through sensors BIB011 BIB002 BIB003 BIB004 . Due to various environmental changes, the bee population is reported to be decreasing rapidly, and interdisciplinary research is ongoing to understand this phenomenon and to provide possible solutions. Murphy et al. BIB008 proposed a threshold-based algorithm and decision tree algorithms, based on a biological study of bees, to detect important hive changes and alert the beekeeper in a WSN for monitoring bee health. The system classified hives as being in one of ten possible states, ranging from "normal" to "dead," which may or may not require an immediate response from the beekeeper. With the knowledge acquired, established beekeeping knowledge can automatically be applied to the collected data, allowing early identification of poor health for improved colony health as well as analysis of behavior. Figure 6 represents a decision tree algorithm to classify the hive states of bee colonies. In [38], a hierarchical three-level model called WBee, consisting of a wireless node, a local data server, and a cloud data server, is designed for monitoring honey bee colonies. The main distinguishing feature presented in the paper is a design in which the system acquires synchronized samples from all the hives and is able to save data at each level in case of a communication failure. 3.5. Aquaculture/Fishkeeping. Aquaculture products are in high demand in the global food market, and the application of ICT has helped increase their quality and production in recent years.
Water quality is the most important parameter in aquaculture since the health, appetite, growth, and other activities of aquatic animals depend on it. Various subparameters such as dissolved oxygen (DO), pH level, temperature, salinity, turbidity, and ammonia nitrogen content affect the quality of water. Monitoring such parameters and recirculating the water periodically is essential to maintaining water quality. Lebrero et al. BIB009 reported the design of a multiparameter monitoring system in which a DO sensor, a pH electrode, a Pt1000 temperature sensor, and an NH3-N sensor were used to monitor aquaculture water quality. The system triggers an aerator ON or OFF when the sensor reads below or above the DO threshold (4 to 5.5 mg/L). Similar work by Hongpin et al. BIB005 reported an aquaculture monitoring and control system based on virtual instruments, with additional features for power management and networking. The work implemented sensor network nodes (dissolved oxygen, temperature, water level, and pH sensors) in fish ponds to maximize monitoring, control, and recording of the aquaculture system. With these capabilities, the work reported effectively reducing the risk of fish mortality, increasing economic benefit, consumer confidence, and safety, and lowering energy consumption. The working concept of the designed system is presented in the flowchart given in Figure 7 . Simbeye et al. presented a multiple-fish tracking system in 3D space with a structured-light (SL) sensor for the acquisition of the detailed information required for behavioral studies. Similarly, Saberioon and Cisar BIB010 used a near-infrared imaging technique to observe the feeding process and behavior of fish. Their work BIB010 can help quantify such fish behavior and can assist in developing an automatic feeding system in the future. Figure 8 represents the use of the infrared imaging technique to observe the feeding process of fish.
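The aerator switching rule described above for the system of Lebrero et al. BIB009 can be sketched as follows; the threshold band (4 to 5.5 mg/L) comes from the text, while the function and variable names are illustrative:

```python
DO_LOW = 4.0    # mg/L: below this the aerator is switched ON
DO_HIGH = 5.5   # mg/L: above this the aerator is switched OFF

def aerator_state(do_mg_per_l, currently_on):
    """Return the new aerator state for a dissolved-oxygen reading."""
    if do_mg_per_l < DO_LOW:
        return True           # oxygen too low: aerate
    if do_mg_per_l > DO_HIGH:
        return False          # oxygen sufficient: stop aerating
    return currently_on       # inside the band: keep the current state

print(aerator_state(3.2, False))  # → True
```

Keeping the state unchanged inside the band gives this sketch hysteresis, so the aerator does not toggle rapidly around a single threshold value.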
Zion reviewed the use of computer vision technologies in aquaculture and reported a satisfactory level of work in the area of edible and ornamental fish farms BIB001 . However, in the case of sea cage farms, the author points out challenges in the application of the technology because of various parameters like the deep water level. 3.6. Chapter Conclusion. In this chapter, we reviewed various sensor-based technologies and data analytics techniques used in the field of agriculture. As mentioned in the previous chapters, agriculture is a very vast field, and we do not attempt to review the application and development of different technologies in every process, step, field, and subfield of agriculture. We have covered only in-field applications. Other types of technological advancements have also revolutionized the field of agriculture, notably advances in chemical fertilizers and pesticides, genetic engineering, soil and irrigation technology, agricultural tools and machinery, and production and distribution technology. However, a review of such technologies is beyond the scope of this article. Table 1 shows the state-of-the-art use of the different sensors and data analytics techniques presented in this work. Tables 2 and 3 show the state-of-the-art technology of different sensing system platforms and big data applications in smart farming, and the key issues, respectively. Companies? Or the government? The problem should be addressed, but if it is addressed too strictly, it can slow down innovation. Also, it is important to improve the understanding of big data usage. It is necessary to systematically promote the concept, its practical use, necessity, and value by expanding education and awareness of big data utilization BIB012 . 5.5. Cost and Investment. Investing in the technology should not only make things easier to do but also help increase the return.
Reduced cost increases the willingness to embrace the technology. Therefore, the cost of sensing systems and ICT needs to be reduced, and their use needs to be financially sustainable. Farmers should be briefed about the economic consequences before and after the use of the technology BIB006 . 5.6. Multidiscipline Collaboration. Many issues in agriculture can be approached not only through sensor technology and ICT but also from various other disciplines. These disciplines can offer better solutions, enhance productivity, and provide insights that agricultural scientists and other concerned parties might have overlooked. Collaboration and cooperation among experts from different fields will help improve the agricultural industry.
The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Korean Scenario <s> The concept of "Plant factory" could realize the multiple targets of high yield, high quality, high efficiency and security. It had become the trend of agricultural development. It solved the growing contradiction between people's increasing demand for green, organic food and the diminishing agricultural arable area in China. According to the research on the key technologies of plant factory, a small simulated environment for crop growth (i.e., a growth cabinet) was designed. The growth cabinet used the light-emitting diode (LED) light source as crop growth light and simulated ecological environment artificially based on the requirement of crop growth and development. The crop can obtain suitable environmental conditions for growth and development in anti-season and non-suitable environmental conditions by using the sensor and embedded technology. The results of experiments showed that the crop growth cabinet’s structure design was reasonable and had the advantages such as reliable performance, low-carbon, intelligence and security. <s> BIB001 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Korean Scenario <s> We conduct a greenhouse phenotyping study on two maize genotypes with two water regimes.Plant projected area accurately predicts shoot fresh weight, dry weight, and leaf area.Daily water consumption is derived and found to be determined by water treatments.Water use efficiency is derived and determined by plant genotype.Leaf spectra from hyperspectral images accurately predicts plant leaf water content. Automated collection of large scale plant phenotype datasets using high throughput imaging systems has the potential to alleviate current bottlenecks in data-driven plant breeding and crop improvement. 
In this study, we demonstrate the characterization of temporal dynamics of plant growth and water use, and leaf water content of two maize genotypes under two different water treatments. RGB (Red Green Blue) images are processed to estimate projected plant area, which are correlated with destructively measured plant shoot fresh weight (FW), dry weight (DW) and leaf area. Estimated plant FW and DW, along with pot weights, are used to derive daily plant water consumption and water use efficiency (WUE) of the individual plants. Hyperspectral images of plants are processed to extract plant leaf reflectance and correlate with leaf water content (LWC). Strong correlations are found between projected plant area and all three destructively measured plant parameters (R2 > 0.95) at early growth stages. The correlations become weaker at later growth stages due to the large difference in plant structure between the two maize genotypes. Daily water consumption (or evapotranspiration) is largely determined by water treatment, whereas WUE (or biomass accumulation per unit of water used) is clearly determined by genotype, indicating a strong genetic control of WUE. LWC is successfully predicted with the hyperspectral images for both genotypes (R2 = 0.81 and 0.92). Hyperspectral imaging can be a very powerful tool to phenotype biochemical traits of the whole maize plants, complementing RGB for plant morphological trait analysis. <s> BIB002 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Korean Scenario <s> We mould the concept Software Ecosystems to the agricultural domain. We propose a reference architecture for Farm Software Ecosystems. Our reference architecture describes an organizational and technical infrastructure. We motivate that our reference architecture can improve farm enterprise integration. Our reference architecture is used to review some existing initiatives.
Smart farming is a management style that includes smart monitoring, planning and control of agricultural processes. This management style requires the use of a wide variety of software and hardware systems from multiple vendors. Adoption of smart farming is hampered because of a poor interoperability and data exchange between ICT components hindering integration. Software Ecosystems is a recent emerging concept in software engineering that addresses these integration challenges. Currently, several Software Ecosystems for farming are emerging. To guide and accelerate these developments, this paper provides a reference architecture for Farm Software Ecosystems. This reference architecture should be used to map, assess design and implement Farm Software Ecosystems. A key feature of this architecture is a particular configuration approach to connect ICT components developed by multiple vendors in a meaningful, feasible and coherent way. The reference architecture is evaluated by verification of the design with the requirements and by mapping two existing Farm Software Ecosystems using the Farm Software Ecosystem Reference Architecture. This mapping showed that the reference architecture provides insight into Farm Software Ecosystems as it can describe similarities and differences. A main conclusion is that the two existing Farm Software Ecosystems can improve configuration of different ICT components. Future research is needed to enhance configuration in Farm Software Ecosystems. <s> BIB003 </s> The State-of-the-Art of Knowledge-Intensive Agriculture: A Review on Applied Sensing Systems and Data Analytics <s> Korean Scenario <s> Delaunay Triangulation was applied to the extraction of behavioral characteristics.Support Vector Machine was used to classify the reflective frame.Serious reflection frames were removed and new data were fitted.The linear correlation coefficient between FIFFB and human expert can reach 0.945. 
In aquaculture, fish feeding behavior under culture conditions holds important information for the aquaculturist. In this study, near-infrared imaging was used to observe feeding processes of fish as a novel method for quantifying variations in fish feeding behavior. First, images of the fish feeding activity were collected using a near-infrared industrial camera installed at the top of the tank. A binary image of the fish was obtained following a series of steps such as image enhancement, background subtraction, and target extraction. Moreover, to eliminate the effects of splash and reflection on the result, a reflective frame classification and removal method based on the Support Vector Machine and Gray-Level Gradient Co-occurrence Matrix was proposed. Second, the centroid of the fish was calculated by the order moment, and then, the centroids were used as a vertex in Delaunay Triangulation. Finally, the flocking index of fish feeding behavior (FIFFB) was calculated to quantify the feeding behavior of a fish shoal according to the results of the Delaunay Triangulation, and the FIFFB values of the removed reflective frames were fitted by the Least Squares Polynomial Fitting method. The results show that variations in fish feeding behaviors can be accurately quantified and analyzed using the FIFFB values, for which the linear correlation coefficient versus expert manual scoring reached 0.945. This method provides an effective method to quantify fish behavior, which can be used to guide practice. <s> BIB004
South Korea is a highly industrialized country and hosts some of the world's leading tech giants, such as Samsung, LG, and Hyundai. The Korean government has initiated steps to transfer the technology used in the industrial sector to the development of the agricultural sector as well. A report published by the Korean Rural Economic Institute emphasized the role of Korean agriculture in the nation's economic growth for the past 70 years BIB004 . It also reported the country's policy of investing in new technologies, including ICT, to cope with global warming, lack of resources, and changes in human consumption patterns. Similarly, the Electronics and Telecommunications Research Institute (ETRI) reported emerging ICT technologies to combine agricultural products at each stage in smart farms [44]. Koo et al. reviewed Korean and international research trends related to ICT-based horticultural facilities. With the keywords precision agriculture, smart farm, ICT, and IoT, the paper thoroughly investigated the technologies used in the Korean agricultural scenario, as shown in Table 4 . It also provided various case studies on failures and gave directions and solutions from several perspectives. It emphasizes BIB002 BIB001 the development of an intelligent service system for the management and control of all the processes of agricultural production. Yeo reported the development of a movable monitoring and control system for crops in the greenhouse. The movable sensing units gather continuous data, and the connection is made through Wi-Fi, with the data saved and processed on the server as shown in Figure 9 . The movable sensing units consist of a high-resolution IP camera, environmental sensors, and a Wi-Fi repeater, while the controlling units contain an embedded PC, a programmable logic controller (PLC), and BLDC motors. This work is expected to provide a better solution for the monitoring and management of plants and crops in a greenhouse.
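The data flow of such a movable sensing unit, in which a periodic reading is packaged and pushed over Wi-Fi to the server, can be sketched as below; the field names, unit ID, and sensor values are hypothetical, not taken from Yeo's system:

```python
import json
import time

def read_sensors():
    """Placeholder for the drivers of the environmental sensors on the unit."""
    return {"temperature_c": 24.1, "humidity_pct": 61.0}

def package_reading(reading, unit_id="unit-01"):
    """Serialize one reading as the JSON payload posted to the server."""
    payload = {"unit": unit_id, "timestamp": int(time.time()), **reading}
    return json.dumps(payload)

print(package_reading(read_sensors()))
```

In a real deployment the serialized payload would be sent over HTTP or MQTT to the greenhouse server for storage and processing.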
Another recent article, published by Kim [47], investigated the estimation of the factors influencing the future prices of corn and wheat through Bayesian model averaging BIB003 . With the application of probabilistic factors, the results of the study improve the ability to forecast grain futures prices. Here, $\Pr(A)$ and $\Pr(B)$ represent the probabilities of the occurrence of the events A and B, respectively, whereas $\Pr(A \mid B)$ and $\Pr(B \mid A)$ represent the conditional probabilities of the occurrence of the event A given that B is true, and vice versa.
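Bayes' rule, which underlies the Bayesian model averaging used by Kim [47], can be illustrated numerically; the events and probabilities below are invented purely for demonstration:

```python
def posterior(pr_b_given_a, pr_a, pr_b):
    """Pr(A | B) = Pr(B | A) * Pr(A) / Pr(B)."""
    return pr_b_given_a * pr_a / pr_b

# Hypothetical events: A = "corn futures price rises", B = "a predictor signal fires".
pr_a, pr_b = 0.30, 0.20
pr_b_given_a = 0.50

print(posterior(pr_b_given_a, pr_a, pr_b))  # → 0.75
```

In model averaging, the same rule weights each candidate model by its posterior probability given the observed data.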
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> C. Why Is Data Link Important in UAVs? <s> Unmanned Aircraft System networks are a special type of networks where high speeds of the nodes, long distances and radio spectrum scarcity pose a number of challenges. In these networks, the strength of the transmitted/received signals varies due to jamming, multipath propagation, and the changing distance among nodes. High speeds cause another problem, Doppler Effect, which produces a shifting of the central frequency of the signal at the receiver. In this paper we discuss a modular system based on cognitive radio to enhance the reliability of UAS networks. <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> C. Why Is Data Link Important in UAVs? <s> Command and Control (C2) Data Link performance is essential for maintaining safe command and control of the Unmanned Aircraft System (UAS). The tolerance of the automatic flight guidance and control system (AFGCS) to the degradation in C2 Data Link performance depends on the phase of flight and the AFGCS mode(s) of operation. This paper will discuss the tolerance and recommend limits for the C2 Data Link to maintain safe AFGCS operation. The paper will also present a recommended AFGCS notional architecture to enable safe operation with the available C2 Data Link technology. <s> BIB002
UASs come with many challenges, including the need for reliable data links and autonomous controls. Although this is not a general assumption, in the case of a collision, the kinetic energy stored in a 25 kg UAV would cause severe damage. That means even small UAVs need a reliable data link to guarantee safe flights. Many currently popular manned aviation applications use long-range satellite communications, which are expensive, and their large antennas are sometimes impossible to deploy on a small UAV. Employing any type of data link comes with specific advantages and disadvantages for the UAV's functionality regarding range, altitude, and payload. It is also important to emphasize that the demands on the communication system of a UAS are highly dependent on the application and the mission that the system will be used for. Thus, the requirements on the data link will vary accordingly. Some of the popular civilian applications of UASs are shown in Fig. 2 . Removing the onboard pilot from the aircraft in a UAS reduces the pilot's awareness of the surroundings and the aircraft's condition. Therefore, the level of flight safety could decrease significantly. Even in manned aircraft, the automatic control modes need pilots to assist in providing the required level of performance and reliability. Another issue related to UAVs' data links concerns their integration into the NAS. The performance differences between UAS communication and other traffic types must be considered. These include the differences in speed, range, and other flight aspects, which complicate the ATC's responsibility of managing the coexistence of manned and unmanned aircraft BIB002 . Hence, for ATC safety analysis, a calculated balance is needed for an unmanned aircraft compared to a manned aircraft. The FAA Air Traffic Organization (ATO) is in charge of providing the Certificates of Authorization or Waiver (COA) for commercial UASs (either small or large) to guarantee safe flights.
However, there is no dominant communication standard or technology for UASs, so ensuring compatibility among different UAS platforms is difficult BIB001 . Moreover, there are no specific standards for UASs to use satellite or cellular communication as a data link. Defining a standard framework for beyond line of sight (BLOS) operations would boost the current interest in unmanned aviation even more than what has already been predicted. In this paper, we focus on the current trends in UAS data links with the hope of aiding standardization processes.
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> A. Spectrum Considerations <s> This paper presents a summary of a measurement campaign on radio propagation channels in air-to-ground (A2G) links based on a usage scenario of unmanned aircraft services (UASs). In order to reveal their propagation characteristics, a measurement setup has been established that transmits a designed FMCW signal continuously from a small manned airplane with frequency bandwidth of 20 MHz at center frequency of 5060 MHz, and records the transmitted signals on ground as IQ waveforms by using a vector signal analyzer. Metrics on radio propagation characteristics including received signal strength (RSS) and channel impulse responses (CIRs) are obtained from the recorded data. The results on obtained RSS have shown that the trends of RSS change in the A2G links is expressed by free-space pathloss with a shadowing component encountered in mobile radio communications. Regarding the CIR in A2G links, it is observed that the radio propagation channels are expressed by a direct path and a ground reflection with scatter components; however, the direct path has margin of more than 20 dB in signal level against the other components. <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> A. Spectrum Considerations <s> Friday is Fly Day at 3D Robotics, a maker of small robotic aircraft. So here we are, on a windswept, grassy landfill with a spectacular view of San Francisco's Golden Gate Bridge, looking up at a six-prop copter with a gleaming metal frame. It's like a spiffy toy from the future. Buzzing like a swarm of bees, it lifts off smartly, hovers, then pinwheels. "Jason's making the hex twirl," says CEO Chris Anderson, a trim man in jeans and an untucked oxford shirt. "That's just for show-a human pilot couldn't do that." That's because Jason, the flight tester, did nothing more than figuratively push a button. 
The hexarotor-technically, the 3DR Y-6-is on autopilot, which it demonstrates by zooming off on a preprogrammed route. The Y-6 sells for US $619. That's a lot for a toy, but it's chicken feed for a capital investment. These mini unmanned aerial vehicles, a.k.a. UAVs, a.k.a. drones-are changing from toys into tools, as businesses worldwide awaken to their importance. <s> BIB002 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> A. Spectrum Considerations <s> This paper describes the radio propagation characteristics in urban environment for a fixed-wing unmanned aircraft (UA) system. In the Great East-Japan Earthquake in 2011, the even the latest mobile phones became almost unavailable due to the breakdown of base stations, electricity outage, and traffic congestion. In addition, many areas in mountains or islands were isolated due to the damage of roads, harbors, and communication infrastructures. In such a situation, a UA system has a potential to provide temporal communication links to the isolated areas while monitoring the situation in the disaster area. Therefore, we have conducted a measurement campaign in order to characterize ground-to-air radio channels for small UA at several locations, including urban and non-urban environments. Some measurement results of ground-to-air link in urban environment are reported in this paper. It is demonstrated that there is a suitable flight altitude for long-range ground-to-air channel in terms of radio propagation. <s> BIB003 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> A. Spectrum Considerations <s> The article deals with theoretical and practical directions of Public Flying Ubiquitous Sensor Networks (FUSN-P) research. Considered the distinctive features of this type of networks from the existing ones. 
A wide range of issues is covered: from the methods of calculation FUSN to the new types of testing and model network structure for such networks. Presented a model network for full-scale experiment and solutions for the Internet of Things. <s> BIB004 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> A. Spectrum Considerations <s> The communication link integrity of an Unmanned Aerial System (UAS) is influenced by a number of factors, the most relevant being antenna radiation pattern and gain, receiver sensitivity, output power, terrain relief, aircraft's attitude and trajectory, and frequency band. The constraints are especially severe in the context of beyond line-of-sight and low altitude flight plan. This work presents the modeling and simulation of a typical scenario of power line inspection using UAS in order to investigate the impacts of relevant factors in the communication link through the UAV's mission on a terrain follow path. In this perspective, a case study is conducted and the simulation platform developed enables the analysis of the mission beforehand to prevent loss of link due to inappropriate flight plan. Elsewhere, alternative scenarios are considered to expand communication range. <s> BIB005 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> A. Spectrum Considerations <s> This paper discusses the results of exploratory research in analyzing the electromagnetic compatibility (EMC) of commercially available radio frequency transceivers co-located within the chassis of an Unmanned Air System (UAS). Tests were performed on a UAS with multiple communication systems onboard encompassing frequency bands with center frequencies of 915 MHz, 2.4 GHz, and 5.8 GHz. These tests were performed in a normal operational environment a.k.a free space and also inside a multipath environment where the UAS was subjected to performance evaluation i.e. 
the status of the communication systems of the UAS was monitored both while no external EM threat was present and while applying an external EM field. <s> BIB006 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> A. Spectrum Considerations <s> To support the development of high-capacity air-to-ground links for range extension, measurements of the low-altitude air-to-ground channel were made at 915 MHz. Two transmit antennas were mounted on an unmanned aerial vehicle (UAV), which was flown in loops at an altitude of approximately 200 m above ground level. The received signals were recorded at each of eight antenna elements mounted on a van at locations outside and inside the flight loop. The analysis of the measurements shows that there are regions where the spatial diversity is significant, despite the sparse multipath environment, indicating spatial decorrelation at both the ground and air terminals. The variations in spatial correlation across the receiver array indicate the presence of nonplanar wavefronts produced by the signals' interaction with objects in the array near field, in particular the measurement vehicle. A similar effect is probable at the UAV, and it is expected that more significant near-field effects would arise on a more conventional air platform. These support significant reductions in outage probability at both receiver locations: With appropriate signaling strategies, an airborne platform could provide a viable relay or broadcast node for high-capacity communications using a multiple-input–multiple-output (MIMO) system. <s> BIB007
The most commonly used frequency bands in UAS data links are the K, Ku, X, C, S, and L bands; we discuss each of these briefly next. The K band (18 to 27 GHz) is a wide band that can carry a large amount of data, but it consumes a lot of transmission power and is highly affected by environmental interference. The K, Ku (12 to 18 GHz), and Ka (27 to 40 GHz) bands have mostly been used for high-speed links and beyond line-of-sight (BLOS) communication. Non-line-of-sight (NLOS) communication occurs when the path between the receiver and the transmitter is partially or completely obstructed by a physical object, or the endpoints are simply not within sight of each other. BLOS communication implies that the transmitter and the receiver are either too distant, often as far as thousands of km, or fully obscured, mostly because of the curvature of the Earth's surface, so the pilot must use cellular or satellite links. The X band (8 to 12 GHz) is reserved for military usage, which is out of the scope of this paper. The C band (4 to 8 GHz) is the most popular band for line-of-sight (LOS) data links. Weather conditions affect this band less than the other bands; however, due to its relatively short wavelengths and high frequency, the signal attenuation is relatively high, which leads to considerable power consumption. Frequency channel measurements for the C band as a UAV data link are studied in BIB001 , considering metrics such as received signal strength (RSS) and channel impulse responses (CIRs). The S band (2 to 4 GHz) and L band (1 to 2 GHz) can provide communication links with data rates above 500 kbps; their long-wavelength signals can penetrate buildings while transferring a large amount of data. The transmitter also requires less power over the same distance compared to higher-frequency bands such as the K band. S band radio propagation characteristics and measurements for UAS were studied in BIB003 .
Recently, there has been tremendous interest in moving to lower frequency bands for civilian UAS data links. For wireless data transfer, the 433 MHz and 868 MHz bands in many regions of the world and the 915 MHz band in the United States are dedicated to sending telemetry data and can be utilized for UAV communications BIB004 . These region-specific allocations were determined by the International Telecommunication Union (ITU) so that the industrial, scientific and medical (ISM) band can be used without requiring a license. Moving to the 915 MHz band is an efficient option for several civilian UAS applications, such as goods delivery, in which the UAV must traverse a long path. Further, the frequency-hopping spread-spectrum technique is generally implemented in this band. IEEE 802.15.4 is the basis of many protocols, including ZigBee, that utilize the 915 MHz frequency band. This band is also used in SiK radios for autopilot drone products. These radios were first developed by 3D Robotics (3DR) on open-source platforms; 3DR is a company active in manufacturing commercial drones using 915 MHz data links BIB002 . SiK radio links can achieve a bit error rate up to 25% lower than other currently popular UAV links, with data latency as low as 33 ms . Another advantage of these radios is their small size and light weight, which makes them suitable for sUAV applications. A UAS communication model and a simulation to analyze link quality are presented in BIB005 . As expected, UAVs operating at low frequencies such as 915 MHz show better performance and suffer less from free-space loss. Performance tests of a UAS with data links at 915 MHz, 2.4 GHz, and 5.8 GHz in both an outdoor environment and a complex multipath environment are reported in BIB006 , which provides a detailed comparison of these links.
Test results on measuring and modeling the 915 MHz channel for a low-altitude (about 200 m above ground level) UAV are presented in BIB007 . That work empirically demonstrates the capability of the 915 MHz band to provide a high-capacity communication link between the UAV and the remote pilot. Table I summarizes the features of each frequency band.
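The attenuation differences between these bands follow directly from the free-space path loss (FSPL) relation. The sketch below uses the standard Friis FSPL formula; the 10 km link distance and band center frequencies are illustrative choices, not values from the cited measurements.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss (Friis): 20*log10(4*pi*d*f / c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Loss over a 10 km link for representative band center frequencies.
for band, f in [("915 MHz", 915e6), ("S (2.4 GHz)", 2.4e9),
                ("C (5.8 GHz)", 5.8e9), ("Ku (15 GHz)", 15e9),
                ("K (22 GHz)", 22e9)]:
    print(f"{band:>12}: {fspl_db(10e3, f):6.1f} dB")
```

At a fixed distance, each doubling of the carrier frequency adds about 6 dB of loss; moving from 22 GHz down to 915 MHz saves roughly 28 dB, which is why the low bands need far less transmit power for the same range.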
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> C. SWaP and Resource Allocation Considerations <s> The proven value of DOD Unmanned Aerial Vehicles (UAVs) will ultimately transition to National and Homeland Security missions that require real-time aerial surveillance, situation awareness, force protection, and sensor placement. Public services first responders who routinely risk personal safety to assess and report a situation for emergency actions will likely be the first to benefit from these new unmanned technologies. ‘Packable’ or ‘Portable’ small class UAVs will be particularly useful to the first responder. They require the least amount of training, no fixed infrastructure, and are capable of being launched and recovered from the point of emergency. All UAVs require wireless communication technologies for real-time applications. Typically on a small UAV, a low bandwidth telemetry link is required for command and control (C2), and systems health monitoring. If the UAV is equipped with a real-time Electro-Optical or Infrared (EO/Ir) video camera payload, a dedicated high bandwidth analog/digital link is usually required for reliable high-resolution imagery. In most cases, both the wireless telemetry and real-time video links will be integrated into the UAV with unity gain omni-directional antennas. With limited on-board power and payload capacity, a small UAV will be limited with the amount of radio-frequency (RF) energy it transmits to the users. Therefore, ‘packable’ and ‘portable’ UAVs will have limited useful operational ranges for first responders. This paper will discuss the limitations of small UAV wireless communications.
The discussion will present an approach of utilizing a dynamic ground based real-time tracking high gain directional antenna to provide extended range stand-off operation, potential RF channel reuse, and assured telemetry and data communications from low-powered UAV deployed wireless assets. <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> C. SWaP and Resource Allocation Considerations <s> Unmanned Aerial Vehicles (UAVs) have traditionally been used for short duration missions involving surveillance or military operations. Advances in batteries, photovoltaics and electric motors though, will soon allow large numbers of small, cheap, solar powered unmanned aerial vehicles (UAVs) to fly long term missions at high altitudes. This will revolutionize the way UAVs are used, allowing them to form vast communication networks. However, to make effective use of thousands (and perhaps millions) of UAVs owned by numerous disparate institutions, intelligent and robust coordination algorithms are needed, as this domain introduces unique congestion and signal-to-noise issues. In this paper, we present a solution based on evolutionary algorithms to a specific ad-hoc communication problem, where UAVs communicate to ground-based customers over a single wide-spectrum communication channel. To maximize their bandwidth, UAVs need to optimally control their output power levels and orientation. Experimental results show that UAVs using evolutionary algorithms in combination with appropriately shaped evaluation functions can form a robust communication network and perform 180% better than a fixed baseline algorithm as well as 90% better than a basic evolutionary algorithm. <s> BIB002 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> C. SWaP and Resource Allocation Considerations <s> Data link is an important part of unmanned aerial vehicles system.
The key of design of data link for multi-UAVs is the effective use and control of the transport channel. Firstly, an analysis is given on the link characteristics between the GCS and the UAV. Secondly, in order to reduce conflict in multi-UAVs transmission, we realize the polling and broadcast scheme through redesigning the control protocol and message protocol on the basis of former data link. Lastly, some comparative data on multi-UAVs communication is given to show improvement of data link for multi-UAVs. <s> BIB003 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> C. SWaP and Resource Allocation Considerations <s> The application areas for Unmanned Aircraft (UA) Systems (UAS) are constantly expanding. Aside from providing an attractive alternative in applications that are risky for humans, smaller UAS become highly attractive for applications where use of larger aircraft is not practical. This paper presents the UAS Collaboration Wireless Network (UAS-CWN), a secure and reliable UAS communication mesh-network. This solution is proposed for the circumstances where a large number of UAS are deployed to cooperatively accomplish a mission such as surveillance in hostile environments. The proposed UAS-CWN system provides high fault-tolerance through use of information dispersal algorithm and meanwhile reduces the risk of information exposure to the adversaries via security-enhancing mechanisms. Our evaluation shows promising results. Especially, a UAS-CWN with high security-level settings can withstand losing 30% of the total number of unmaned aircrafts while steadily achieving above 96% data recovery rate. <s> BIB004 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> C. SWaP and Resource Allocation Considerations <s> Utilizing unmanned aerial vehicle (UAV) as the relay is an effective technical solution for the wireless communication between ground terminals faraway or obstructed. 
In this letter, the problems of UAV node placement and communication resource allocation are investigated jointly for a UAV relaying system for the first time. Multiple communication pairs on the ground, with one rotary-wing UAV serving as relay, are considered. Transmission power, bandwidth, transmission rate, and UAV’s position are optimized jointly to maximize the system throughput. An optimization problem is formulated, which is non-convex. The global optimal solution is achieved by transforming the formulated problem to be a monotonic optimization problem. <s> BIB005 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> C. SWaP and Resource Allocation Considerations <s> In this paper, we investigate resource allocation algorithm design for multiuser unmanned aerial vehicle (UAV) communication systems in the presence of UAV jittering and user location uncertainty. In particular, we jointly optimize the two-dimensional position and the downlink beamformer of a fixed-altitude UAV for minimization of the total UAV transmit power. The problem formulation takes into account the quality-of-service requirements of the users, the imperfect knowledge of the antenna array response (AAR) caused by UAV jittering, and the user location uncertainty. Despite the non-convexity of the resulting problem, we solve the problem optimally employing a series of transformations and semidefinite programming relaxation. Our simulation results reveal the dramatic power savings enabled by the proposed robust scheme compared to two baseline schemes. Besides, the robustness of the proposed scheme with respect to imperfect AAR knowledge and user location uncertainty at the UAV is also confirmed. <s> BIB006
Size, weight and power (SWaP) of the aircraft are further design considerations that help determine which data link should be used in the system. Data link technologies that provide long range and high reliability without increasing the size, weight, or power consumption of the system are always preferable. SWaP considerations are most crucial for small UAVs compared to the other UAV classes. For example, the limited onboard power of small UAVs lowers the payload capacity, and the useful operational range is limited by the power available for RF transmission BIB001 . A popular approach to the problem of limited onboard power is to employ a large number of small, low-cost UAVs that cooperate to form a large-scale network. This design is referred to as a "multi-UAV network." It is especially useful in cases such as natural disasters, where access to power may be very limited. Further, the approach makes the system robust against hardware failures and software malfunctions. It can also be self-sustaining: the UAVs store data and send it to the base station whenever a connection is established, so the system does not depend on a real-time external communication link. This framework has proven useful in mission-critical situations (e.g., natural disasters) BIB004 , BIB002 . In a UAV-based network, the pilot must manage critical responsibilities such as resource allocation, which addresses problems such as transmission conflicts among the UAVs (e.g., through polling techniques) and the distribution of resources among them BIB003 . Resource allocation is a joint optimization problem with goals such as minimizing the total transmission power and maximizing the throughput. The on-demand flexibility and mobility of UAVs come at the price of SWaP limitations, and resource allocation techniques specifically for UAVs have been studied to manage these limitations BIB005 - BIB006 .
It is important to ensure that an optimal resource allocation does not sacrifice other performance metrics such as transmission rate, spectrum usage, optimal UAV placement, and user QoS. However, as the existing works note, there is not yet enough research covering all aspects of resource allocation in UAV-based networks.
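As a toy illustration of such a joint optimization, the sketch below places a fixed-altitude UAV relay by grid search so as to maximize the minimum user SNR. This is a simplified stand-in, not the monotonic-optimization method of BIB005 or the semidefinite relaxation of BIB006; the free-space channel model, transmit power, noise floor, and ground-terminal positions are all hypothetical.

```python
import math
from itertools import product

def snr_db(uav, user, tx_power_dbm=20.0, freq_hz=915e6, noise_dbm=-100.0):
    """Received SNR under free-space loss between a UAV at (x, y, z)
    and a ground user at (x, y, 0). All positions in meters."""
    d = math.dist(uav, (*user, 0.0))
    fspl = 20 * math.log10(4 * math.pi * d * freq_hz / 3e8)
    return tx_power_dbm - fspl - noise_dbm

users = [(0, 0), (400, 0), (200, 300)]  # hypothetical ground terminals (m)

# Max-min placement: search a 20 m grid at a fixed 150 m altitude.
best = max(
    ((x, y, 150.0) for x, y in product(range(0, 401, 20), range(0, 301, 20))),
    key=lambda p: min(snr_db(p, u) for u in users),
)
print("best UAV position:", best)
```

The same max-min objective generalizes to jointly searching over power and bandwidth splits, which is where the non-convexity discussed in the cited works comes from.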
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. Signal Propagation Considerations <s> The key challenges in the design of datalinks for UAS systems compared to other wireless links is the long range of distances and speeds that need to be covered. The amount of spectrum available in the L-Band is not sufficient to support video applications common in UASs and so dual-band designs using both L-Band and C-Band are being considered. For LBand, two projects funded by EUROCONTROL L-Band Digital Aeronautical Communications Systems 1 and 2 (L-DACS1 and L-DACS2) are often mentioned for use in UAS also. We briefly discuss issues with their use for UAS. Then we discuss several issues in UAS datalink design including availability, networking, preemption, and chaining. We also propose ways to mitigate interference with other systems in the L-Band. <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. Signal Propagation Considerations <s> Two key challenges in the design of datalinks for unmanned aircraft (UAS) systems compared to other wireless links are the long range of distances and speeds that need to be covered. The 960 - 1164 MHz part of the IEEE L band has been identified as a candidate spectrum for future manned and unmanned aircraft datalinks. The amount of spectrum available in the L-Band is not sufficient to support video applications common in UASs and so dual-band designs using both L-Band and C-Band are being considered. For L-Band, two projects funded by EUROCONTROL L-Band Digital Aeronautical Communications Systems 1 and 2 (L-DACS1 and L-DACS2) are often mentioned for use in UAS also. We briefly discuss issues with their use for UAS. We compare the two proposals in terms of their scalability, spectral efficiency, and interference resistance. Then we discuss several issues in UAS datalink design including availability, networking, preemption, and chaining. 
We also propose ways to mitigate interference with other systems in the L-Band. <s> BIB002 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. Signal Propagation Considerations <s> This paper presents a performance evaluation on path diversity introduced in a wireless network established by unmanned aerial vehicles or unmanned aircrafts (UAs). One of usage scenarios of the wireless network established by UAs is establishment of a temporal communications instead of disrupted terrestrial networks due to a massive disaster. In order to evaluate system performance based on this usage scenario, we have conducted the following simulations; first, radio propagation characteristics between UAs flying over different two conditions have been simulated, after that, system simulations have been carried out over radio channels derived by simulations. Results regarding radio propagation reveal that characteristics including path loss and RMS delay are highly depend on not only altitude of the deployed UAs, but also its polarization of radio signal. System simulations based on the channel impulse responses obtained by propagation simulations, in which IEEE802.11g is utilized as specifications of the radio system, show that path diversity introduced by deploying multiple UAs brings remarkable improvement on achievable throughputs. <s> BIB003 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. Signal Propagation Considerations <s> Orthogonal Frequency Division Multiplexing (OFDM) can be a good candidate for wideband communications to transmit payload data from an Unmanned Aerial Vehicle (UAV) to the ground station in an Unmanned Aerial System (UAS). However, OFDM systems are prone to inter-channel interference caused by the Doppler spread. Furthermore, because of possible high speed of UAVs, the Doppler spread can be large. 
In order to design a proper OFDM system for a UAS, it is essential to have an appropriate air-to-ground channel model that accurately models the multipath and Doppler properties of the wideband channel from the UAV to the ground station. Six different channel models are proposed based on various scenarios of the altitude of the UAV (very low, low, and high) and the type of the environment that they are flying over (low-density suburban areas and high-density urban areas). Since no measurement data has been published for wideband signaling from UAVs to a ground station, these models are created by combining parameters of narrowband aeronautical channel models with downlink channel models of wideband terrestrial systems, including HiperLAN, LTE and IEEE 802.16 systems. These channel models were used to evaluate the performance of an OFDM for UAV-to-ground communications. Simulation results show that for high-speed UAVs, the number of sub-channels in an OFDM should be relatively small in order to have reliable communications. <s> BIB004
Due to the mobile nature of UASs, several challenges arise in signal propagation, including Doppler frequency shift, dynamic connectivity, antenna power, losses due to signal attenuation, multipath fading, interference, and jamming. The Doppler frequency shift is one of the important challenges in designing UAS data links. It is caused by the movement of the aircraft, which makes the frequency received at the ground station (GS) differ from the transmitted frequency. The difference is positive or negative depending on whether the aircraft is moving toward or away from the GS. The performance of the data link is strongly affected by the Doppler spread, which limits the UAV speed. Furthermore, since UAVs are mobile and their connectivity is dynamic, their communication channel status changes more frequently than in traditional wireless networks. Propagation loss degrades the signal-to-noise ratio (SNR) at the receiver; the loss due to the large distance between the UAV and the GS affects the throughput and error performance of the data link. All of these effects depend on the properties of the communication channel, which highlights the vital role of proper channel modeling for these systems. On the other hand, in the future integrated airspace, data and ground platforms will need to be shared among manned and unmanned aircraft, so UAVs might not have full access to the required bandwidth resources at all times . As a result, compatibility and co-existence with manned aircraft must be considered BIB001 , BIB002 . Vahidi and Saberinia BIB004 proposed six different channel models for high-frequency UAV data communications. The scenarios are built upon different types of UAVs and the environments in which they operate, and the channel models are defined by their Doppler properties and delay profiles.
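The magnitude of the shift grows linearly with both UAV speed and carrier frequency, f_d = v f_c / c. A minimal sketch (the 30 m/s speed and the carrier frequencies are illustrative values, not taken from the cited models):

```python
def doppler_shift_hz(speed_mps: float, carrier_hz: float) -> float:
    """Maximum Doppler shift f_d = v * f_c / c for relative speed v."""
    c = 3e8  # speed of light, m/s
    return speed_mps * carrier_hz / c

# A UAV closing on the ground station at 30 m/s.
for f in (915e6, 5.8e9):
    print(f"{f/1e9:.3f} GHz carrier -> {doppler_shift_hz(30, f):.0f} Hz shift")
```

The same speed that produces a ~92 Hz shift at 915 MHz produces a 580 Hz shift at 5.8 GHz, which is one reason high-band UAV links are more sensitive to platform speed.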
They conclude that Orthogonal Frequency-Division Multiplexing (OFDM) systems with a small number of subcarriers provide the best performance in high-frequency UAV applications, due to the large Doppler shifts. Several diversity techniques are used in aviation to overcome signal degradation on data links. Frequency diversity, the most popular technique, uses multiple channels at different frequencies to transmit the same signal. In time diversity, the same signal is transmitted multiple times. Finally, in path diversity, multiple antennas are employed on the receiver side, the transmitter side, or both, to send multiple copies of the same signal. The physical separation between these antennas must be large enough that the signal experiences different channel properties on each path. However, employing diversity in a system is complex and costly . The performance of path diversity using multiple UAVs with the OFDM modulation of the IEEE 802.11g protocol has been tested in BIB003 , showing that in practice path diversity improves the UAS system's throughput significantly.
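The benefit of path diversity can be sketched with a Monte Carlo estimate of outage probability under selection combining over independent Rayleigh-faded paths. This is a textbook fading model, not the measured channels of BIB003; the SNR threshold, mean SNR, and trial count are illustrative.

```python
import random

def outage_prob(n_paths: int, snr_threshold: float = 1.0,
                mean_snr: float = 4.0, trials: int = 50_000) -> float:
    """Monte Carlo outage probability with selection combining over
    independent Rayleigh-faded paths (instantaneous SNR is exponential).
    Outage occurs when even the best path falls below the threshold."""
    rng = random.Random(0)  # fixed seed for reproducibility
    outages = 0
    for _ in range(trials):
        best = max(rng.expovariate(1 / mean_snr) for _ in range(n_paths))
        outages += best < snr_threshold
    return outages / trials

for n in (1, 2, 4):
    print(f"{n} path(s): outage ≈ {outage_prob(n):.3f}")
```

Analytically the outage is (1 − e^(−γ_th/γ̄))^n, so each extra independent path multiplies the outage probability by another factor of about 0.22 under these parameters, matching the throughput gains reported for multi-UAV path diversity.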
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> E. Routing <s> We consider a single Unmanned Aerial Vehicle (UAV) routing problem where there are multiple depots and the vehicle is allowed to refuel at any depot. The objective of the problem is to find a path for the UAV such that each target is visited at least once by the vehicle, the fuel constraint is never violated along the path for the UAV, and the total fuel required by the UAV is a minimum. We develop an approximation algorithm for the problem, and propose fast construction and improvement heuristics to solve the same. Computational results show that solutions whose costs are on an average within 1.4% of the optimum can be obtained relatively fast for the problem involving 5 depots and 25 targets. <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> E. Routing <s> In this letter, the efficient deployment of multiple unmanned aerial vehicles (UAVs) acting as wireless base stations that provide coverage for ground users is analyzed. First, the downlink coverage probability for UAVs as a function of the altitude and the antenna gain is derived. Next, using circle packing theory, the 3-D locations of the UAVs is determined in a way that the total coverage area is maximized while maximizing the coverage lifetime of the UAVs. Our results show that, in order to mitigate interference, the altitude of the UAVs must be properly adjusted based on the beamwidth of the directional antenna as well as coverage requirements. Furthermore, the minimum number of UAVs needed to guarantee a target coverage probability for a given geographical area is determined. Numerical results evaluate various tradeoffs. <s> BIB002 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> E. 
Routing <s> A small group of Unmanned Aerial Vehicles (UAV), each equipped with a communications payload, offers a possible means of providing broadband services over disaster regions. The UAVs are power limited so the number of mobile sub-scribers that can be supported by each UAV depends on its proximity to clusters of mobiles. One way of maximising the total number of mobiles supported within the available RF power is to periodically relocate each of the UAVs in response to the movement of the mobiles. This paper compares two approaches for optimally locating the UAVs. One approach employs a non cooperative game (NCG) as the mechanism to plan the next flying strategies for the group. The other uses evolutionary algorithms (EA) to evolve flying manoeuvres in a collaborative manner. Exemplar comparison results show that although both approaches are able to provide sufficient network coverage adaptively, they exhibit different flying behaviours in terms of flightpath, separation and convergence time. The non cooperative game is found to fly all aerial vehicles in a similar, balanced and conservative way, whilst the evolutionary algorithms enable the emergence of flexible and specialised flying behaviours for each member in the flying group which converge faster to a sufficient global solution. <s> BIB003 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> E. Routing <s> This paper presents the multi-UAVs cooperative target observation and tracking considering the communication interference and the propagation loss. UAVs are assigned to the appropriate target group which maximises information defined by the Fisher information matrix (FIM). We propose multi-UAVs cooperative target optimal measurement and tracking models considering the communication factors. Using the rolling horizon optimization method for optimal solution. The main contributions of this paper are threefold. 
Firstly, this paper proposes a new method of target tracking by considering the communication interference and the propagation loss. Secondly, UAVs are assigned to the appropriate target group which maximises information defined by the FIM. Lastly, using Information Consensus Filter (ICF) to solve the convergence problem and improve topology. <s> BIB004 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> E. Routing <s> Unmanned aerial vehicles (UAVs) have attracted significant interest recently in wireless communication due to their high maneuverability, flexible deployment, and low cost. This paper studies a UAV-enabled wireless network where the UAV is employed as an aerial mobile base station (BS) to serve a group of users on the ground. To achieve fair performance among users, we maximize the minimum throughput over all ground users by jointly optimizing the multiuser communication scheduling and UAV trajectory over a finite horizon. The formulated problem is shown to be a mixed integer non-convex optimization problem that is difficult to solve in general. We thus propose an efficient iterative algorithm by applying the block coordinate descent and successive convex optimization techniques, which is guaranteed to converge to at least a locally optimal solution. To achieve fast convergence and stable throughput, we further propose a low-complexity initialization scheme for the UAV trajectory design based on the simple circular trajectory. Extensive simulation results are provided which show significant throughput gains of the proposed design as compared to other benchmark schemes. <s> BIB005 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> E. Routing <s> Nowadays, mini-drones, officially called unmanned aerial vehicles, are widely used in many military and civilian fields. 
Compared to traditional ad hoc networks, the mobile ad hoc networks established by UAVs are more efficient in completing complex tasks in harsh environments. However, due to the unique characteristics of UAVs (e.g., high mobility and sparse deployment), existing protocols or algorithms cannot be directly used for UAVs. In this article, we focus on the routes designed for UAVs, and aim to present a somewhat complete survey of the routing protocols. Moreover, the performance of existing routing protocols is compared in detail, which naturally leads to a great number of open research problems that are outlined afterward. <s> BIB006
Route planning is a critical step in every application of UAVs. The scheduled route must be low-risk and low-cost while maintaining the mission goals. In a multi-UAV network, planning becomes even more complicated: conflicts among the UAVs must be avoided, the minimum number of UAVs should cover the route and finish the specified task, and the assignment of each UAV to a specific part of the route while others cover the rest must be time-optimized. An efficient routing scheme for a multi-UAV network must also solve the trajectory optimization problem; further work on this issue is still needed, as only a few preliminary studies exist . A comprehensive survey focusing on routing protocols for UAVs is given in BIB006 , where routing in single-UAV and multi-UAV networks is studied and compared, and the performance of popular existing routing protocols is reviewed in detail. As mentioned before, small UAVs suffer the most from resource constraints. In BIB001 , the problem of optimal routing is studied while fuel constraints are taken into account. In the designed scenario, which matches most UAV applications, the aircraft must visit several target points during its mission, with refueling depots positioned along its way. The goal of an optimal routing schedule is therefore to ensure that the UAV never runs out of fuel while fulfilling its mission. Several works research the joint optimization of user scheduling and UAV trajectory to increase the minimum average rate and throughput per user; their main objective is to minimize the number of UAVs required to cover a specific area with a multi-UAV network BIB005 , BIB002 . Autonomous flying approaches for optimally locating the UAVs in a multi-UAV network are described in BIB003 .
In BIB004 , the propagation loss and the interference caused by all UAVs in a multi-UAV network have been studied.
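The fuel-constrained routing problem can be illustrated with a simple greedy heuristic. This is only a sketch; BIB001 develops an approximation algorithm with construction and improvement heuristics, not this rule. The rule here: fly to the nearest unvisited target, but detour to a depot whenever completing the leg would strand the UAV beyond the reach of every depot. Coordinates and capacity below are made up.

```python
import math

def plan_route(start, targets, depots, fuel_capacity):
    """Greedy sketch: visit the nearest unvisited target, refueling at a
    depot whenever the leg would leave the UAV unable to reach any depot.
    Fuel is consumed 1:1 with distance flown."""
    pos, fuel, route = start, fuel_capacity, [start]
    remaining = list(targets)
    while remaining:
        nxt = min(remaining, key=lambda t: math.dist(pos, t))
        leg = math.dist(pos, nxt)
        # After the leg we must still be able to reach some depot.
        reserve = min(math.dist(nxt, d) for d in depots)
        if fuel < leg + reserve:
            depot = min(depots, key=lambda d: math.dist(pos, d))
            if depot == pos:  # already refueled here: capacity too small
                raise ValueError("fuel capacity too small for this target set")
            route.append(depot)
            pos, fuel = depot, fuel_capacity
            continue
        route.append(nxt)
        remaining.remove(nxt)
        pos, fuel = nxt, fuel - leg
    return route

# Two targets, two depots; the capacity forces one refueling stop at (4, 0).
print(plan_route((0, 0), [(5, 0), (10, 0)], [(4, 0), (10, 2)], 8.0))
```

Greedy routes like this can be far from the minimum-fuel optimum, which is why BIB001 reports heuristics that stay within about 1.4% of the optimal cost.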
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> B. Disadvantages <s> The integration of unmanned aircraft systems (UAS) into the National Airspace System (NAS) presents many challenges including airworthiness certification. As an alternative to the time consuming process of modifying the Federal Aviation Regulations (FARs), guidance materials may be generated that apply existing airworthiness regulations toward UAS. This paper discusses research to assist in the development of such guidance material. The results of a technology survey of command, control, and communication (C3) technologies for UAS are presented. Technologies supporting both line-of-sight and beyond line-of-sight UAS operations are examined. For each, data link technologies, flight control, and air traffic control (ATC) coordination are considered. Existing protocols and standards for UAS and aircraft communication technologies are discussed. Finally, future work toward developing the guidance material is discussed. <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> B. Disadvantages <s> Although the concept was born within military use, in recent years we have witnessed an impressive development of unmanned aerial vehicles (UAVs) for civil and academic applications. Driving this growth is the myriad of possible scenarios where this technology can be deployed, such as: fire detection, search and rescue operations, surveillance, police operations, building and engineering inspections, aerial photography and video for post-disaster assessment, agricultural monitoring, remote detection (radiation, chemical, electromagnetic), weather services, UAV photogrammetry, airborne relay networks, and more [1]. 
Undoubtedly, the increased use of UAVs has been sustained through the research and development of multiple low-cost solutions for the control of aerial vehicles, the evolution in microelectronics with multiple off-the-shelf components and sensors, and also through a growing global developers community with several UAV related open source projects. <s> BIB002
Despite all the great benefits of satellite communication, this technology is an expensive data link, and it becomes cost-effective only for high-altitude or at most medium-altitude UAVs. Hence, SATCOM has not been used for small UASs so far. One of the main challenges with all satellite communications is latency, due to the long distance that each data packet has to travel. Latency can be defined in two ways: one-way or round-trip latency (RTL). One-way latency is the time a data packet takes to travel from the sender to the receiver. RTL is the time required for the packet to reach the receiver plus the time for a response to go back to the sender. Due to the high latency of SATCOM data links, real-time remote piloting becomes less practical. In this case, the complete flight plan can be programmed in a chip and the UAV is guided by an autopilot. Meanwhile, a remote pilot may still monitor the aircraft (though not in real time) through a control link with a data rate of about 10 kbps BIB001 . It is important to note that distance is not the only factor affecting the latency of SATCOM services. Bandwidth, the load on the network, and the constellation's capacity are examples of other factors that affect latency. Another disadvantage of SATCOM is the high level of propagation loss. Signal attenuation caused by several environmental factors (e.g., free-space losses, atmospheric losses, signal absorption, and dish misalignment) worsens as the distance between the transmitter and receiver increases. This requires strong high-power amplifiers to be deployed on the satellites. SATCOM often suffers from gaps in communication: a constellation of satellites may not cover the whole of the Earth's surface. At high geographical latitudes (including the poles), most satellite constellations are not visible. This is because the motion of the Earth makes launching a satellite into a polar orbit more difficult than launching it into an equatorial orbit.
Further, sometimes the satellites are not in view of their ground stations, and the useful bandwidth drops to about two-thirds. In Table II , the advantages and disadvantages of SATCOM are summarized, along with its suitability or unsuitability for several UAV applications. However, depending on the available resources (e.g., cost, computational capability of the GS, LOS or BLOS situations, etc.), the constraints might change. Even though cellular communication for UAS applications sounds promising, it comes with several disadvantages. Cellular networks have been used for various applications, and with the increase in mobile application use, the allocated bands get congested. This situation gets worse in crowded areas. Therefore, specialized bands need to be allocated for UAS applications to meet their requirements. On the contrary, as discussed before, SATCOM bandwidth availability is higher and the bands are less congested. Another disadvantage of cellular networks compared to SATCOM services is that SATCOM services offer longer-range coverage. Cellular network towers cover a short-range area and need several handovers during UAV missions. Moreover, they are not available in some rural or remote areas. Thus, in some cases, providing BLOS communication will be limited to SATCOM services due to their larger coverage. Different weather conditions may affect the quality of cellular service as well. The usual remedy for this problem is to increase the transmission power or to send redundant copies of the data, as already explained under diversity techniques. Increasing the transmitted power is usually considered a waste of power and causes interference with others; it also strains the SWaP limitations. One other critical challenge is that the cellular infrastructure is not designed for aviation communications. Most of the antennas transmit signals towards the ground and not upwards.
This can cause loss of connection even when the UAV is flying at high altitudes BIB002 . To support UAVs, the signal transmission patterns will have to be changed so that some of the side lobes point upwards. However, this change will not happen until there is significant UAV traffic to justify the investment. Regardless of the primary purpose of cellular networks, they can still be used as a data link for UAVs, shared with the other cellular users. Different aspects must be considered based on the end nodes of the communication while using cellular aviation for the UAS. These include resource allocation, latency requirements, bit rate, etc. For instance, to design data links for communications among UAVs, resource allocation (e.g., bandwidth, fairness requirements, and transmission time) should be considered. For the data link between the remote pilot and the UAV, constraints such as latency, bit rate, and loss rate are the main factors, depending upon the degree of autonomy. High bandwidths might be required if these constraints are tight.
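The latency and propagation-loss disadvantages discussed above follow largely from geometry. A rough sketch of the lower bounds set by the speed of light and free-space path loss, assuming a bent-pipe relay directly overhead (real systems add processing, queuing, and gateway hops, so measured latencies sit above these floors; the altitudes are nominal values):

```python
from math import log10, pi

C = 3.0e8  # speed of light, m/s

def round_trip_ms(altitude_m):
    """Light-speed round trip through a bent-pipe relay straight overhead:
    ground -> satellite -> ground for the command, and the same path back
    for the acknowledgement (4 one-way hops)."""
    return 4 * altitude_m / C * 1e3

def fspl_db(distance_m, freq_hz):
    """Free-space path loss, FSPL = 20*log10(4*pi*d*f/c)."""
    return 20 * log10(4 * pi * distance_m * freq_hz / C)

orbits = {"LEO (Iridium, ~780 km)": 780e3,
          "MEO (O3b, ~8,000 km)": 8_000e3,
          "GEO (InmarSAT, ~35,786 km)": 35_786e3}

for name, alt in orbits.items():
    print(f"{name}: RTL floor {round_trip_ms(alt):.0f} ms, "
          f"FSPL at 1.6 GHz {fspl_db(alt, 1.6e9):.0f} dB")
```

The GEO floor comes out near 480 ms, consistent with the ~500 ms average GEO latency quoted later, and the MEO floor of roughly 107 ms sits just under O3b's quoted 132.5 ms round trip; the extra tens of milliseconds are processing and routing overhead.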
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. Available SATCOM Services <s> UAVs have been proposed as a platform for providing theater communications for the warfighter. UAVs have the advantage of being rapidly deployable and the potential to provide high capacity over-the-horizon communications. One potential application for UAV hosted communications is to provide personal communication services (PCS) to the warfighter. The key to this service is the use of low cost, battery powered, handheld terminals, while meeting the warfighter's unique requirements. This paper describes an adaptation of the IRIDIUM system satellite and ground terminal equipment to provide this capability to the warfighter. The paper describes the advantages of leveraging a satellite based versus a terrestrial based PCS system for a UAV application. Finally, the paper describes how the IRIDIUM system can be adapted to enhance its utility to the warfighter through new services such as netted voice and enhanced protection. <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. Available SATCOM Services <s> From the Publisher: ::: The move toward worldwide wireless communications continues at a remarkable pace, and the antenna element of the technology is crucial to its success. With contributions from more than 30 international experts, the Handbook of Antennas in Wireless Communications brings together all of the latest research and results to provide engineering professionals and students with a one-stop reference on the theory, technologies, and applications for indoor, hand-held, mobile, and satellite systems.Beginning with an introduction to wireless communications systems, it offers an in-depth treatment of propagation prediction and fading channels. 
It then explores antenna technology with discussion of antenna design methods and the various antennas in current use or development for base stations, hand held devices, satellite communications, and shaping beams. The discussions then move to smart antennas and phased array technology, including details on array theory and beamforming techniques. Space diversity, direction-of-arrival estimation, source tracking, and blind source separation methods are addressed, as are the implementation of smart antennas and the results of field trials of systems using smart antennas implemented. Finally, the hot media topic of the safety of mobile phones receives due attention, including details of how the human body interacts with the electromagnetic fields of these devices. Its logical development and extensive range of diagrams, figures, and photographs make this handbook easy to follow and provide a clear understanding of design techniques and the performance of finished products. Its unique, comprehensive coverage written by top experts in their fields promises to make the Handbook of Antennas in Wireless Communications the standard reference for the field. <s> BIB002 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. Available SATCOM Services <s> Survivability, the ability of a system to minimize the impact of a finite-duration disturbance on end-user value delivery, is increasingly recognized beyond military contexts as an enabler of maintaining system performance in operational environments characterized by dynamic disturbances. Seventeen general design principles are proposed to inform concept generation of survivable system architectures. Six of these design principles focus on a survivability strategy of susceptibility reduction: (1.1) prevention, (1.2) mobility, (1.3) concealment, (1.4) deterrence, (1.5) preemption, and (1.6) avoidance.
Eleven of the principles focus on vulnerability reduction: (2.1) hardness, (2.2) redundancy, (2.3) margin, (2.4) heterogeneity, (2.5) distribution, (2.6) failure mode reduction, (2.7) fail-safe, (2.8) evolution, (2.9) containment, (2.10) replacement, and (2.11) repair. In this paper, the completeness, taxonomic precision, and domain-specific applicability of the design principle framework is empirically tested through case applications to survivability features of the F-16C combat aircraft and Iridium satellite system. Integrating results of these two tests with previous tests (e.g., UH-60A Blackhawk helicopter, A-10A aircraft), the validity of the design principle framework for aerospace systems is demonstrated. <s> BIB003 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. Available SATCOM Services <s> This paper describes a system based on cognitive radio technology to improve the reliability and security of wireless communications of unmanned aerial systems and vehicles (UAS/UAV) networks. UAS/UAV networks can experience problems with connectivity and thus with data reception and delivery. Since UAS/UAV are mobile, their connectivity is dynamic; thus, link status changes are more frequent than for traditional networks. Specifically, link losses due to jamming, interference, fading, and multipath are common problems. Another factor is the way the radio spectrum is used at each specific location. The availability of specific spectrum frequency bands can vary from one location to another, thus making it crucial for aircraft to be frequency agile to maintain connectivity. <s> BIB004 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. 
Available SATCOM Services <s> Introduction.- Disaster Management and the Emergency Management Culture.- Organizing for Disasters.- Space Systems for Disaster Management.- Space Remote Sensing Fundamentals and Disaster Applications.- Precision Navigation and Timing Systems .- Geographic Information Systems.- Major International and Regional Players.- The Emerging World of Crowd Sourcing, Social Media, Citizen Science, and Remote Support Operations in Disasters.- International Treaties, Non-Binding Agreements, and Policy and Legal Issues.- Future Directions and the Top Ten Things to Know About Space Systems and Disasters.- Appendix A: Key Terms and Acronyms.- Appendix B: Selected Bibliography.- Appendix C: Selected Websites. <s> BIB005
An increasing number of companies are providing satellite communication and are trying to test their services for UAVs. Unlike land-based or terrestrial communication, which has been provided by just a few companies offering relatively similar services, there are many different types of SATCOM services. Picking the best service among the various available options depends on the application's constraints and the requirements of the data link. 1) InmarSAT: InmarSAT was the only SATCOM provider for a long time. After the digital revolution, many other satellite companies providing various types of services appeared. For unmanned applications, InmarSAT offers a machine-to-machine (M2M) communication service in the L band. This service is a member of InmarSAT's Broadband Global Area Network (BGAN) M2M family, provided by three GEO satellites and started in January 2012 [57] . The BGAN service provides a throughput of about 492 kbps per user. InmarSAT also provides critical safety services for UAVs and their associated applications. Some possible UAV applications using the InmarSAT service are data reporting for pipelines, environmental and wildlife monitoring, and electricity consumption data. InmarSAT also offers a hybrid service called Global Xpress (GX) in combination with BGAN. This service is useful for applications that require no interruption, high availability, and seamless connectivity. The GX satellites offer Ka-band services (in the range of 20-30 GHz) for high throughput, and BGAN through its L-band service provides high availability. This high level of performance and flexibility makes the total throughput of each GX satellite around 12 Gbps. GX can supply downlink speeds up to 50 Mbps and up to 5 Mbps over the uplink per user, and both the downlink and uplink of BGAN offer data rates up to 492 kbps per user . The typical latency for the streaming service in the BGAN system is about 1-1.6 s round trip. Hence, the one-way latency is about 800 ms at most.
However, only 72 of 89 GX satellite spot beams are available at any time (81%), as the rest are over the ocean. Many customers would therefore be periodically and unexpectedly limited to the older, very-high-latency FleetBroadband service. FleetBroadband is a maritime global satellite Internet and telephony system built by InmarSAT. The total latency of the FleetBroadband network is in the range of 900 ms to 1150 ms, and the average latency of a GEO satellite is about 500 ms , . Therefore, by weighted average, the total average latency of a GX system is expected to be around 600 ms. However, there is no official document reporting the user-experienced GX system latency. Recently, InmarSAT introduced a new service called InmarSAT SwiftBroadband UAV (SB-UAV) satellite communications, in coordination with Cobham SATCOM. This service can be implemented on low-altitude UAVs to provide a satellite communication link for BLOS applications. However, it suffers from large latency. Conclusion: InmarSAT GX offers services suitable for UAV missions in which seamless communication is essential, through the hybrid GX/BGAN service. Applications such as surveillance and delivery are two examples. The BGAN service operating in the L band is a proper data link with respect to supporting mobility, given the Doppler frequency challenge. On the contrary, latency is a disadvantage of InmarSAT that makes it unsuitable for applications requiring low-latency communication, such as real-time monitoring. 2) Iridium NEXT: Iridium's first goal was to build a space-based counterpart of cellular network stations through 66 satellites. As with all SATCOM providers, Iridium has a lapse in coverage about 4% of the time. Iridium NEXT is the second generation of Iridium telecommunications satellites, providing worldwide narrowband voice and data services. The constellation offers services in the L band for mobile users, supplying data rates up to 128 kbps per user .
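The ~600 ms weighted-average estimate for the GX system quoted above can be reproduced directly from the stated figures; a minimal sketch (the FleetBroadband midpoint is our own simplifying assumption):

```python
# Figures quoted above: 72 of 89 spot beams usable, ~500 ms average GEO
# latency for GX, and a 900-1150 ms FleetBroadband fallback.
gx_share = 72 / 89                    # ~0.81 of the time on GX
gx_latency_ms = 500
fbb_latency_ms = (900 + 1150) / 2     # midpoint of the quoted range

avg_ms = gx_share * gx_latency_ms + (1 - gx_share) * fbb_latency_ms
print(f"Weighted-average GX system latency: {avg_ms:.0f} ms")  # ~600 ms
```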
Iridium NEXT recently (July 2018) extended its current services by launching another 10 satellites. In Iridium NEXT, the throughput increased compared to the first-generation constellation; however, there is no official document reporting the exact throughput , . This set of satellites supplies users with fast, secure, and comparatively lower-latency communication links , . According to BIB003 , packet delays in the first Iridium generation averaged 178 ms. With the enhanced performance of Iridium NEXT, the delay is below 40 ms . Iridium services have been used widely to provide satellite communications for UAV-hosted personal communication services (PCS) for warfighters using low-cost, battery-powered handsets. Each UAV acts as a relay station to extend the coverage. A UAV provides BLOS data services to about 1,000 handsets in its coverage area BIB001 . These data links are suitable for low- and medium-endurance UAVs BIB004 . Conclusion: Iridium's satellite services operating in the L band would help with the Doppler shift challenge in UASs, due to their lower frequency compared to the Ka band. This service has been widely used for military handset communications and has shown good performance over large coverage areas. The benefit of the LEO satellites in this constellation is their relatively low-latency communications. However, this comes with higher cost and bigger antennas, which make it unsuitable for small UAV applications. 3) Globalstar: Globalstar is an LEO satellite constellation that operates in both the S and L bands. The second generation of the Globalstar constellation has 24 LEO satellites; its launch started in 2010 and was finished early in 2013 . Globalstar is known as an Iridium-like service and has a delay of about 40 ms . There is no official document reporting the throughput per satellite. The average data rate provided by the system is approximately 7.2 kbps per user .
Globalstar and ADS-B (Automatic Dependent Surveillance-Broadcast) Technologies have been cooperating for several years on aviation communication services. They provide a simple and low-cost satellite-based ADS-B system called the ADS-B Link Augmentation System (ALAS). The main goal is that when the aircraft is not in LOS of the ground station, the Globalstar satellites provide an NLOS communication link for the ADS-B signals. This system guarantees a highly reliable NLOS air traffic management (ATM) system. In other words, this service extends the ADS-B coverage into BLOS areas with almost no performance degradation compared to non-satellite-based communications . Also, it does not add any interference to other aircraft's normal transmissions. ADS-B is discussed further in Section VI. Recently, these two companies, Globalstar and ADS-B Technologies, in coordination with the NASA Langley Research Center, integrated the ALAS service for UAV applications. A Cirrus SR22 aircraft was used as a test vehicle and flown remotely from the ground. The test results indicated that the system delivers a constant-rate communication link between the UAV and the satellite with only a few breaks and quick reconnections . Conclusion: Similar to Iridium services, this SATCOM system also provides data links robust against Doppler shift, operating in the S and L bands. With high mobility and relatively low-latency services, Globalstar is a potential data link for a wide range of UAV applications. 4) Orbcomm Generation 2: Orbcomm Generation 2 (OG2) is the second generation of the Orbcomm constellation. The constellation uses the very high frequency (VHF) band and frequency hopping to avoid interference in this crowded band. The average satellite latency has been reported as under 1 minute in almost all ground operations . OG2 is dedicated to M2M communications. This constellation consists of 18 satellites, with a total throughput of 57 kbps per satellite BIB002 .
The data rate per user has not been reported. Orbcomm's services are mostly designed to work in unmanned environments for remote tracking and monitoring of oil and gas extraction and distribution . Orbcomm also provides low-power Internet of Things (IoT) services and M2M communications that can be used in multi-UAV networks. It has established a combined robust network consisting of satellite service and a terrestrial cellular network, along with dual-mode network access. This helps provide a flexible communication system to accommodate users' demands. Conclusion: OG2 provides low-power communication links, which is desirable especially considering the SWaP limitations of UAVs. Its hybrid service in combination with the cellular network can satisfy a wide range of service requirements based on a UAV's specific task; however, it is then not a pure SATCOM service. The low operating frequency makes these services suitable for a wide range of UAV applications: the VHF band supports a high level of mobility without facing the Doppler shift challenge. Even though their LEO satellites involve a relatively smaller distance, a 1-minute latency is not appropriate for real-time applications. 5) OneWeb: The main motivation for founding OneWeb was to provide affordable Internet services in currently under-developed regions . The satellite network provided by OneWeb, formerly known as WorldVu, will consist of 648 LEO satellites to provide a broadband global Internet service by the end of 2019. The satellites will operate in the Ku band, the 12-18 GHz range of the radio spectrum. The throughput of each satellite is anticipated to be about 6 Gbps. OneWeb has suggested a new technique called "progressive pitch" to be implemented in its constellation. In this method, the satellites will be slightly turned occasionally to avoid interference with other Ku-band satellites in GEO. This is shown in Fig. 4 , where the lobe has been moved to the left by an angle α.
OneWeb will support variable data rates, depending on the instantaneous modulation and coding scheme. It is expected to offer at least a 10 Mbps data rate per user; however, the exact data speed is not available yet , . OneWeb's constellation benefits from the main latency advantage of LEO satellites, which have a low RTL compared to higher orbits. OneWeb services could have latency as low as 50 ms, while the latency of a typical office LAN or ADSL (asymmetric digital subscriber line) connection is in the range of 15-100 ms. OneWeb will support UAV operations over the Arctic, an area recently opened to maritime lanes but beyond the reach of GEO satellites. The Arctic is the polar region located in the northernmost part of the Earth. Conclusion: OneWeb services would suffer less from interference due to the progressive pitch technique, which is a bonus for SATCOM data links used in UASs. Further, with their relatively low-latency communication links, this SATCOM service would be suitable for most critical satellite-based UAV missions, such as network coverage for remote areas. However, since it operates in the Ku band, it is not able to support applications that require high mobility, due to the Doppler shift problem. Also, it is expected that this SATCOM service will come with high costs due to the number of satellites and the maintenance complexity. 6) O3b Networks: O3b Networks offers SATCOM services deploying high-speed and medium-latency satellites that deliver Internet services to remote areas such as Africa, South America, and Asia. The company was founded in 2007 . O3b Networks introduced its latest product, the "O3b satellites" constellation containing 12 satellites, with plans to extend it to 20 satellites by 2021. The O3b service has a round-trip latency of approximately 132.5 ms for data services . High-performance satellite terminals support service rates up to 24 Mbps . This constellation operates in medium Earth orbit (MEO).
MEO satellites operate at altitudes between 2,000 km and 35,000 km. The total throughput is 16 Gbps per satellite . For UAV applications, the O3b network can be used as an IP-based optimized satellite system solution BIB005 . SES Government Solutions, the company that now owns the O3b network, offers robust communication capabilities for remotely piloted aircraft using O3b satellites. It also provides flexible operations for advanced remotely controlled sensor platforms . Conclusion: O3b can help UAV applications by providing network coverage in remote areas. Even though the service is very reliable and robust, it suffers from a high level of latency. Its operating frequency band does not support a high level of mobility either. As a result, it is suitable only for low-mobility UAS applications with no latency constraints, where a high level of reliability is required, such as secure data collection. 7) SpaceX: SpaceX has declared that it will build Internet services from space by implementing a network of 4,000 small and low-cost LEO satellites in the Ku-band spectrum, promised to be fully functional in 2020. SpaceX is cooperating with Google to construct an LEO satellite constellation, which will provide low-latency and high-capacity Internet services worldwide . The total promised throughput is up to 200 Tbps. SpaceX plans to improve latency by placing the satellites in a lower Earth orbit at 650 km and also by having space links among the satellites . With this strategy, the latency would decrease from 150 ms to 20 ms, which is about the average latency of fiber-optic home Internet service in the United States , . Conclusion: Even though SpaceX services have not been employed in civilian UAVs, they have great potential. Due to the promised low-latency service (if it is successfully implemented), it can be tested for near real-time monitoring.
UAV applications such as weather services, agriculture, and delivery operations can benefit from a SpaceX data link. However, claiming to cover all Internet users with the promised throughput might not be very practical considering the drastic future growth in the number of users. Some UAV missions demand very high throughput, especially when video streaming is needed.
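Several of the conclusions above (VHF, L, and S bands are robust to Doppler shift; Ku and Ka bands are problematic for high-mobility platforms) rest on the fact that the worst-case Doppler shift scales linearly with carrier frequency. A minimal sketch; the relative-velocity figure is an illustrative assumption (a typical LEO ground-track speed), not a value from the survey:

```python
C = 3.0e8  # speed of light, m/s

def max_doppler_hz(rel_velocity_ms, carrier_hz):
    """Worst-case Doppler shift f_d = (v / c) * f_c for radial velocity v."""
    return rel_velocity_ms / C * carrier_hz

# Assumed worst-case radial velocity between an LEO satellite and a UAV
# (illustrative; the actual value depends on pass geometry).
v_rel = 7_500.0  # m/s

for band, fc in [("VHF (150 MHz)", 150e6), ("L (1.6 GHz)", 1.6e9),
                 ("S (2.5 GHz)", 2.5e9), ("Ku (14 GHz)", 14e9),
                 ("Ka (30 GHz)", 30e9)]:
    print(f"{band}: up to {max_doppler_hz(v_rel, fc) / 1e3:.1f} kHz")
```

A Ka-band link must track a shift roughly 19 times larger than an L-band link under the same geometry, which is why the lower bands are repeatedly favored above for high-mobility UAS applications.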
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> E. Related Research <s> Rapidly increasing growth and demand in various Unmanned Aircraft Systems (UAS) have pushed governmental regulation development and numerous technology research activities toward integrating unmanned and manned aircraft into the same civil airspace. Safety of other airspace users is the primary concern; thus, with the introduction of UAS into the National Airspace System (NAS), a key issue to overcome is the risk of a collision with manned aircraft. The challenge of UAV integration is global. In this paper, the authors present the operational concept and benefits of an airborne collision system for UAS based on a modified automatic dependent surveillance — broadcast (ADS-B) system, a promising technology under development globally. With the affordable Universal Access Transceiver (UAT) Beacon Radio developed by The MITRE Corporation and the hybrid estimation approach for resource-limited UAS, the ADS-B radar concept appears to be an economically viable solution to detect both cooperative and non-cooperative targets using a single avionics package. <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> E. Related Research <s> Collaboration indicates the action of help or support someone in the performance of any activity, contributing to the achievement of a goal. In particular, when teams working collaboratively can obtain greater resources as, recognition and reward when facing competition for finite resources. The ability to establish a collaborative mechanism between unmanned sensors platforms can bring benefits through improved situational awareness (SA). In the design of future unmanned vehicle systems, coordination of heterogeneous teams is one of the key issues that should be solved. <s> BIB002 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> E. 
Related Research <s> The National Institute of Information and Communications Technology (NICT) has been developing an on-board Ka-band tracking antenna system which realizes the communication link between unmanned aircraft (UA) and remote pilot via satellite with the increased use of UA research and standardization of the control communication system for UA. The tracking on-board antenna designed for UA has various functions such as low-profile and broadband. In this paper, we explain the results of the development and the background of the tracking antenna so far. <s> BIB003 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> E. Related Research <s> Toward increased use of unmanned aircrafts (UA), researches and standardizations of the control communication system for UA using satellites in ITU-R and the like have been discussed recently. The National Institute of Information and Communications Technology (NICT) has been developing an on-board tracking antenna system for UA which realizes a communication link between UA and remote pilot through satellite. The tracking on-board antenna is designed for UA and has several features such as low-profile and broadband. In this paper, we explain the results to date and background of this development. <s> BIB004 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> E. Related Research <s> Live aerial observations from field to control and operations centers, in form of photos and video for visual situational awareness, are valuable in several mission-critical operations, such as disaster management, search and rescue, border control, police operations, security and safety. The use of small UAVs to obtain these observations is attractive, but often challenged by lack of suitable solutions to get live images back to decision makers. 
In mission-critical operations, detailed observations may be required in real-time, shared beyond the location of a pilot and payload operator. For UAV flights anytime/anywhere, potentially beyond radio line-of-sight, one cannot depend on terrestrial communications alone. Satellite communications is required either in the UAV itself or as a relay via ground to secure the observations can be shared. High definition photos and video have given high costs and long delay, often needing more capacity than available. We present a novel concept for obtaining live mission-critical visual information from UAVs, that combats these traditional barriers for operations. <s> BIB005 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> E. Related Research <s> In order to provide a control and non-payload communication (CNPC) link for civil-use unmanned aircraft systems (UAS) when operating in beyond-line-of-sight (BLOS) conditions, satellite communication links are generally required. The International Civil Aviation Organization (ICAO) has determined that the CNPC link must operate over protected aviation safety spectrum allocations. Although a suitable allocation exists in the 5030–5091 MHz band, no satellites provide operations in this band and none are currently planned. In order to avoid a very lengthy delay in the deployment of UAS in BLOS conditions, it has been proposed to use existing satellites operating in the Fixed Satellite Service (FSS), of which many operate in several spectrum bands. Regulatory actions by the International Telecommunications Union (ITU) are needed to enable such a use on an international basis, and indeed Agenda Item (AI) 1.5 for the 2015 World Radiocommunication Conference (WRC) was established to decide on the enactment of possible regulatory provisions. As part of the preparation for AI 1.5, studies on the sharing FSS bands between existing services and CNPC for UAS are being contributed by NASA and others. 
These studies evaluate the potential impact of satellite CNPC transmitters operating from UAS on other in-band services, and on the potential impact of other in-band services on satellite CNPC receivers operating on UAS platforms. Such studies are made more complex by the inclusion of what are essentially moving FSS earth stations, compared to typical sharing studies between fixed elements. Hence, the process of determining the appropriate technical parameters for the studies meets with difficulty. In order to enable a sharing study to be completed in a less-than-infinite amount of time, the number of parameters exercised must be greatly limited. Therefore, understanding the impact of various parameter choices is accomplished through selectivity analyses. In the case of sharing studies for AI 1.5, identification of worst-case parameters allows the studies to be focused on worst-case scenarios with assurance that other parameter combinations will yield comparatively better results and therefore do not need to be fully analyzed. In this paper, the results of such sensitivity analyses are presented for the case of sharing between UAS CNPC satellite transmitters and terrestrial receivers using the Fixed Service (FS) operating in the same bands, and the implications of these analyses on sharing study results. <s> BIB006 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> E. Related Research <s> In order to provide for the safe integration of unmanned aircraft systems into the National Airspace System, the control and non-payload communications (CNPC) link connecting the ground-based pilot with the unmanned aircraft must be highly reliable. A specific requirement is that it must operate using aviation safety radiofrequency spectrum. The 2012 World Radiocommunication Conference (WRC-12) provided a potentially suitable allocation for radio line-of-sight (LOS), terrestrial based CNPC link at 5030–5091 MHz. 
For a beyond radio line-of-sight (BLOS), satellite-based CNPC link, aviation safety spectrum allocations are currently inadequate. Therefore, the 2015 WRC will consider the use of Fixed Satellite Service (FSS) bands to provide BLOS CNPC under Agenda Item 1.5. This agenda item requires studies to be conducted to allow for the consideration of how unmanned aircraft can employ FSS for BLOS CNPC while maintaining existing systems. Since there are terrestrial Fixed Service systems also using the same frequency bands under consideration in Agenda Item 1.5, one of the required studies considered spectrum sharing between earth stations on-board unmanned aircraft and Fixed Service station receivers. Studies carried out by NASA have concluded that such sharing is possible under parameters previously established by the International Telecommunications Union. As the preparation for WRC-15 has progressed, additional study parameters for Agenda Item 1.5 have been proposed, and some studies using these parameters have been added. This paper examines the study results for the original parameters as well as results considering some of the more recently proposed parameters to provide insight into the complicated process of resolving WRC-15 Agenda Item 1.5 and achieving a solution for BLOS CNPC for unmanned aircraft. <s> BIB007 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> E. Related Research <s> Small satellites and autonomous vehicles have greatly evolved in the last few decades. Hundreds of small satellites have been launched with increasing functionalities in the last few years. Likewise, numerous autonomous vehicles have been built, with decreasing costs and form-factor payloads. Here we focus on combining these two multifaceted assets in an incremental way, with an ultimate goal of alleviating the logistical expenses in remote oceanographic operations.
The first goal is to create a highly reliable and constantly available communication link for a network of autonomous vehicles, taking advantage of the small satellite's lower cost, with respect to conventional spacecraft, and its higher flexibility. We have developed a test platform as a proving ground for this network, by integrating a satellite software defined radio on an unmanned air vehicle, creating a system of systems, and several tests have been run successfully over land. As soon as the satellite is fully operational, we will start to move towards a cooperative network of autonomous vehicles and small satellites, with application in maritime operations, both in-situ and remote sensing. <s> BIB008
Finding a proper solution to the problem of sharing the same spectrum with FSS in the Ku/Ka bands has been studied in BIB007 , BIB006 . In these works, the primary focus is to achieve an efficient BLOS satellite data link for UAVs. However, as concluded, a specific regulation for SATCOM is still needed. From other perspectives, there is further ongoing research in this area. For instance, designing a satellite-based antenna system operating in the Ka band between the UAV and the remote pilot has been studied in BIB003 , BIB004 . The proposed onboard satellite antennas for UAVs are low-profile, broadband antennas that are very small in dimension and operate over a wide range of the frequency spectrum. These two features are essential for designing small UAVs. Improving UAV situational awareness has been studied in BIB002 . The proposed solution is based on establishing a collaborative mechanism between UAVs using satellite communication. In the paper, UAVs are called unmanned satellite vehicles (USVs). The positive aspects of using a swarm of collaborative USVs in a small area are analyzed; for instance, the USVs are able to finish their missions autonomously without any human interaction. Several situational awareness missions, such as resource searching, fire detection, critical infrastructure surveillance, and warning detection, are considered. For collision avoidance, utilizing satellite-based radar for UASs has been studied in BIB001 . A modified Automatic Dependent Surveillance-Broadcast (ADS-B) system is used. The main objective of the research is to ensure the safety of the NAS through the co-existence of UAVs with other aircraft. The proposed ADS-B satellite radar satisfies this objective by sharing situational awareness information among all aircraft. This area of research is essential due to the requirements of UAS integration into the NAS and for faster improvements in future ADS-B systems.
Skinnemoen BIB005 has studied the challenges of using SATCOM for UAVs; the main focus of this work is live photo and video sharing using satellite communication, employing the Inmarsat BGAN service. Some preliminary studies on using small satellite antennas have recently been tested on UAVs in BIB008 . Simulated SATCOM service providers are used to mimic the behavior and features of a real satellite data link. However, in the built testbed, the simulated small SATCOM antenna is able to communicate only in half-duplex mode, so it might not be a proper representation of a real-world case. Hence, even though these preliminary studies are a significant step, more research work is required in this area. This section is divided into two parts. First, we discuss the research works that consider UAVs as users of cellular networks. Next, we highlight research case studies focusing on deploying UAVs as flying BSs to assist cellular networks.
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> C. Cellular Aviation, 4G, and 5G <s> This letter studies a wireless system consisting of distributed ground terminals (GTs) communicating with an unmanned aerial vehicle (UAV) that serves as a mobile base station (BS). The UAV flies cyclically above the GTs at a fixed altitude, which results in a cyclical pattern of the strength of the UAV-GT channels. To exploit such periodic channel variations, we propose a new cyclical multiple access (CMA) scheme to schedule the communications between the UAV and GTs in a cyclical time-division manner based on the flying UAV's position. The time allocations to different GTs are optimized to maximize their minimum throughput. It is revealed that there is a fundamental tradeoff between throughput and access delay in the proposed CMA. Simulation results show significant throughput gains over the case of a static UAV BS in delay-tolerant applications. <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> C. Cellular Aviation, 4G, and 5G <s> This paper proposes a method of finding the optimal position of unmanned aerial vehicles (UAVs) functioning as a communication relay node to improve the network connectivity and communication performance of a team of ground nodes/vehicles. A three-dimensional complex urban environment containing many buildings is considered where the line-of-sight between ground nodes is often blocked. The particle swarm optimisation is used to find the optimal UAV position using three different communication performance metrics depending on the requirement. Numerical simulations are performed to show the advantage of using relay UAVs and the specific metric in sample scenarios. An indoor proof-of-concept experiment is also performed to show the feasibility of the proposed approach in a real time. <s> BIB002
As mobile communication evolved to the fourth generation (4G), data rate, latency, throughput, and interference management improved significantly. Also, 4G/LTE-enabled data links are dynamic and can be configured based on UAV requirements. Thus, they are considered a potential candidate for several UAS applications. As the fifth generation (5G) approaches, many promises have been made to improve on 4G services by delivering ultra-high reliability, ultra-high availability, extremely low latency, and strong end-to-end security. Some of the significant features provided by 5G are summarized in Fig. 5 . 5G promises to increase efficiency by enabling all these features at a lower cost over a wider area. It is planned to use 5G services to support public safety using bands above 24 GHz and to employ UAVs or robot-based surveillance systems for remote monitoring . In the following, we discuss important features of 4G and 5G services that make them suitable for cellular-based UAV applications. 1) Availability: The data links used in UASs require high availability so that the remote pilot has constant access to the UAV. The coverage range of a single cellular tower is limited; hence, in missions where a UAV must travel a long distance, it will be served by multiple cell towers. Properly optimized handover mechanisms need to be planned to extend the coverage range with no lapse in communication. One of the exceptional characteristics of 4G LTE is Coordinated Multi-Point (CoMP) base-station technology. In this technique, two or more base stations coordinate transmissions and receptions to the user to improve availability, especially at the cell edge. Having a high-quality data link to the base station even at the cell edge can improve availability and performance and help avoid collisions caused by poor communication.
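To make the cell-edge benefit of CoMP concrete, the following sketch compares the SINR a UAV would see with and without idealized joint transmission from two base stations. All numbers (transmit power, path-loss model, distances, noise floor) are illustrative assumptions, not values from the works surveyed here.

```python
import math

# Illustrative sketch of the CoMP cell-edge benefit described above.
# Every parameter below is an assumption chosen for illustration.
TX_DBM = 43.0        # base-station transmit power (dBm)
PL0_DB = 34.0        # assumed path loss at the 1 m reference distance (dB)
N_EXP = 3.5          # assumed path-loss exponent
NOISE_DBM = -100.0   # receiver noise floor over the signal bandwidth (dBm)

def rx_mw(distance_m):
    """Linear received power (mW) under a log-distance path-loss model."""
    pl_db = PL0_DB + 10 * N_EXP * math.log10(distance_m)
    return 10 ** ((TX_DBM - pl_db) / 10)

d_serving, d_neighbor = 500.0, 600.0   # UAV near the edge between two cells
noise_mw = 10 ** (NOISE_DBM / 10)

# Conventional operation: the neighbor cell is co-channel interference.
sinr_single = rx_mw(d_serving) / (rx_mw(d_neighbor) + noise_mw)

# CoMP joint transmission: both cells carry the UAV's signal (idealized:
# perfect backhaul and synchronization assumed), so powers add as signal.
sinr_comp = (rx_mw(d_serving) + rx_mw(d_neighbor)) / noise_mw

print(f"cell-edge SINR without CoMP: {10 * math.log10(sinr_single):5.1f} dB")
print(f"cell-edge SINR with CoMP:    {10 * math.log10(sinr_comp):5.1f} dB")
```

Under these assumptions the cell-edge SINR improves by more than 10 dB, because the strongest interferer is converted into a second signal source.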
CoMP is important for achieving the required SINR (signal-to-interference-plus-noise ratio) of the system. Further, CoMP will enable QoS features and enhance the spectrum efficiency [99] . On the other hand, in cellular communication, coverage and availability also depend on the node density of the particular area. Hence, cellular networks might not be an efficient choice in highly dense or technologically advanced areas . Another important factor in cellular networks is the reuse distance, meaning that the network can reuse frequencies at specific distances based on the interference level. This feature increases both the availability and the capacity of the network. The reuse distance depends on the tower's cell radius and the number of cells per cluster in a specific area. However, as capacity increases, the reuse distance becomes very short and co-channel cells start to overlap with each other, causing interference and thereby decreasing the SINR significantly. 2) Throughput and Data Rate: The throughput provided by the 4G network is roughly ten times that of 3G technology, which is relatively sufficient for video services. However, 5G is promised to offer a much higher level of throughput that would be uniform with no lapse in connection. This will improve UAV video-based applications even further. The Federal Communications Commission (FCC) has planned for 5G mobile networks to operate in specific frequency bands in the tens of GHz, called "millimeter-wave" bands. Millimeter wave would enable 5G to reach gigabit-per-second data rates, which supply UAVs with the previously mentioned ultra-high-resolution video communications. However, because the frequency is high, signal propagation becomes a challenge and should be addressed by the carrier providers. Also, these frequencies do not penetrate buildings as easily as the lower bands .
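The reuse-distance trade-off described above can be illustrated with the classic hexagonal-cluster relations: co-channel reuse distance D = R·sqrt(3N) for cluster size N, and the first-tier approximation SIR ≈ (sqrt(3N))^n / 6. The cell radius and path-loss exponent below are assumed for illustration only.

```python
import math

# Textbook hexagonal-cluster relations (not survey data): the co-channel
# reuse distance is D = R * sqrt(3 * N) for cluster size N, and a common
# first-tier approximation for the signal-to-interference ratio is
# SIR ~ (sqrt(3N))^n / 6, assuming 6 equidistant co-channel interferers
# and path-loss exponent n.
def reuse_distance(cell_radius_m, cluster_size):
    return cell_radius_m * math.sqrt(3 * cluster_size)

def sir_db(cluster_size, n=4.0):
    q = math.sqrt(3 * cluster_size)   # co-channel reuse ratio D/R
    return 10 * math.log10((q ** n) / 6)

# Smaller clusters give more capacity (shorter reuse distance) but a
# worse SIR -- the capacity/interference trade-off noted above.
for N in (3, 7, 12):
    print(f"N={N:2d}: D={reuse_distance(1000, N):7.1f} m, "
          f"first-tier SIR={sir_db(N):5.1f} dB")
```

For a 1 km cell radius, moving from a 12-cell to a 3-cell cluster shortens the reuse distance from about 6 km to 3 km while costing roughly 12 dB of SIR, which is the effect the paragraph above describes qualitatively.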
Exploiting 5G as the UAS data link, with its increased throughput, will also enhance direct UAV-to-UAV communication in multi-UAV networks. UAVs with high-data-rate links, employed in a mesh network as flying relays to help the data exchange between terrestrial users, have been studied in BIB001 - BIB002 . 3) Latency: Through ultra-low-latency 5G networks, new mission-critical services will become possible in the UAV application domain. This means 5G communication capabilities may be expanded beyond human constraints in latency and reliability. In UAV-based mission-critical applications, it is crucial to have seamless connectivity, and failure is not an option . The latency of 4G networks, at about 50 ms, is roughly half that of 3G technology. The expected latency of 5G is promised to be less than 1 ms. This level of ultra-low latency enables designing proper data links for unmanned and automated technologies while guaranteeing mission safety.
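A quick back-of-the-envelope calculation shows why the 4G-to-5G latency gap matters for UAV control: at a given airspeed, the one-way latency bounds how far the aircraft moves before a command takes effect. The latency figures follow the nominal values quoted above; the 20 m/s airspeed is an assumption.

```python
# How far a UAV drifts before a pilot command takes effect, using the
# nominal one-way latencies quoted above and an assumed 20 m/s airspeed.
def drift_m(speed_mps, one_way_latency_s):
    # A full telemetry->decision->command loop would at least double
    # this, so treat the figure as an optimistic lower bound.
    return speed_mps * one_way_latency_s

for name, latency_s in (("3G ~100 ms", 0.100), ("4G ~50 ms", 0.050), ("5G <1 ms", 0.001)):
    print(f"{name}: {100 * drift_m(20.0, latency_s):6.1f} cm of travel at 20 m/s")
```

Under these assumptions, a command over 4G arrives after the UAV has already moved about a meter, while sub-millisecond 5G latency shrinks that to centimeters, which is what makes tight automated control loops plausible.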
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. UAVs for Cellular Services <s> This paper presents an overview of unmanned aircraft systems developed at the University of Colorado and the development of a new software defined radio sensor that will be integrated into the existing systems. The architecture and performance of the new sensor is discussed. Potential research applications enabled by the radio sensor are covered. <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. UAVs for Cellular Services <s> Novel air traffic management (ATM) strategies are proposed through the Next Generation Air Transportation and Single European Sky for ATM Research projects to improve the capacity of the airspace and to meet the demands of the future air traffic. The implementation of the proposed solutions leads to increasing use of wireless data for aeronautical communications. Another emerging trend is the unmanned aerial vehicles. The unmanned aerial systems (UASs) need reliable wireless data link and dedicated spectrum allocation for its operation. On-board broadband connectivity also needs dedicated spectrum to satisfy the quality of service requirements of the users. With the growing demand, the aeronautical spectrum is expected to be congested. However, the studies revealed that the aeronautical spectrum is underutilized due to the static spectrum allocation strategy. The aeronautical communication systems, such as air–air and air–ground communication systems, inflight infotainment systems, wireless avionics intra-communications, and UAS, can benefit significantly from the introduction of cognitive radio-based transmission schemes. This paper summarizes the current trends in aeronautical spectrum management followed by the major applications and contributions of cognitive radio in solving the spectrum scarcity crisis in the aeronautical domain. 
Also, to cope with the evolving technological advancement, researchers have prioritized the issues in the case of cognitive radio that need to be addressed depending on the domain of operation. The proposed cognitive aeronautical communication systems should also be compliant with the Aeronautical Radio Incorporated and Aerospace Recommended Practice standards. An overview of these standards and the challenges that need immediate attention to make the solution feasible for large-scale operation, along with future avenues of research, is also furnished. <s> BIB002 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. UAVs for Cellular Services <s> In any emergency situation, it is paramount that communication be established between those affected by an emergency and the emergency responders. This communication is typically initiated by contacting an emergency service number such as 9-1-1 which will then notify the appropriate responders. The communication link relies heavily on the use of the public telephone network. If an emergency situation causes damage to, or otherwise interrupts, the public telephone network then those affected by the emergency are unable to call for help or warn others. A backup emergency response communication system is required to restore communication in areas where the public telephone network is inoperable. The use of unmanned aerial vehicles is proposed to act as mobile base stations and route wireless communication to the nearest working public telephone network access point. This thesis performs an analysis based on wireless attributes associated with communication in this type of network such as channel capacity, network density and propagation delay. <s> BIB003
UAVs can also be very useful for cellular services (e.g., in case of emergency or disaster) by performing as flying base stations (BSs). In cellular communications, a wireless connection to the public telephone network is established through the local cell tower. During a disaster, these towers may lose their functionality. This leads to loss of communication in the affected regions, which could lead to further disasters. In such situations, constant coverage and communication are vital for public safety. In this case, UAVs can establish instant connectivity by implementing cognitive radio in a multi-UAV network to form a wireless mesh network with devices in the affected area. The cognitive radio concept is based on dynamically changing spectrum access for opportunistic utilization of licensed and unlicensed frequency bands in a specific area. Since cognitive radio networks are infrastructure-less and spontaneous, they are very suitable in disaster situations. There are detailed investigations in papers - BIB001 on employing the cognitive radio technique in UAS networks. A comprehensive survey of cognitive radio for aeronautical communications is provided in BIB002 . It also discusses the significant performance improvement that cognitive radio brings to UASs. Cognitive radio would also help with the spectrum scarcity faced by the increasing number of civilian UAVs. UAVs can also form an ad-hoc network to replace a malfunctioning tower in the cellular network. Once set up, the UAVs can act as mobile base stations and start routing traffic to and from the cell tower. However, due to their limited power sources, using UAVs is a temporary solution while trying to restore communication through the permanent networks BIB003 . In addition, a similar UAV-based BS concept can be applied to provide temporary cellular connectivity and Internet access in remote or rural areas that lack cellular towers.
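As a rough illustration of sizing such a temporary UAV-BS deployment, the sketch below estimates the ground coverage disk of one UAV from its altitude and a minimum user elevation angle, then counts the UAVs needed for a given area. The altitude, elevation threshold, and overlap factor are hypothetical assumptions, not parameters from the cited works.

```python
import math

# Back-of-the-envelope sizing of a temporary UAV base-station deployment.
# Altitude, minimum elevation angle, and overlap factor are hypothetical.
def coverage_radius_m(altitude_m, theta_min_deg):
    """Ground radius at which a user's elevation angle drops to theta_min."""
    return altitude_m / math.tan(math.radians(theta_min_deg))

def uavs_needed(area_km2, altitude_m, theta_min_deg, overlap=0.30):
    r_km = coverage_radius_m(altitude_m, theta_min_deg) / 1000
    disk_km2 = math.pi * r_km ** 2
    # Inflate the count to allow for the overlap that disk packing requires.
    return math.ceil(area_km2 / (disk_km2 * (1 - overlap)))

r = coverage_radius_m(200, 30)   # 200 m altitude, 30-degree minimum elevation
print(f"per-UAV coverage radius: {r:.0f} m")
print(f"UAVs for a 10 km^2 affected area: {uavs_needed(10, 200, 30)}")
```

The endurance constraint noted above then determines how many spare airframes must rotate through these positions to keep the service continuous.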
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> 1) UAVs as Users: <s> Recent advances in interference cancellation and signal processing techniques can enable full-duplex radios and multi-packet reception (MPR) capability, which will have significant impacts on the medium access control (MAC) design. In this paper, we study the MAC design in UAV ad-hoc networks with full-duplex radios and MPR. To efficiently handle the highly mobile environment of a UAV ad-hoc network, a token-based technique is used for updating information in the network. The MAC scheme in the presence of perfect and imperfect channel state information are formulated as a combinatorial optimization problem and a discrete stochastic optimization problem, respectively. Simulation results are presented to show the effectiveness of the proposed MAC. <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> 1) UAVs as Users: <s> In this paper, the deployment of an unmanned aerial vehicle (UAV) as a flying base station used to provide the fly wireless communications to a given geographical area is analyzed. In particular, the coexistence between the UAV, that is transmitting data in the downlink, and an underlaid device-to-device (D2D) communication network is considered. For this model, a tractable analytical framework for the coverage and rate analysis is derived. Two scenarios are considered: a static UAV and a mobile UAV. In the first scenario, the average coverage probability and the system sum-rate for the users in the area are derived as a function of the UAV altitude and the number of D2D users. In the second scenario, using the disk covering problem, the minimum number of stop points that the UAV needs to visit in order to completely cover the area is computed. Furthermore, considering multiple retransmissions for the UAV and D2D users, the overall outage probability of the D2D users is derived. 
Simulation and analytical results show that, depending on the density of D2D users, the optimal values for the UAV altitude, which lead to the maximum system sum-rate and coverage probability, exist. Moreover, our results also show that, by enabling the UAV to intelligently move over the target area, the total required transmit power of UAV while covering the entire area, can be minimized. Finally, in order to provide full coverage for the area of interest, the tradeoff between the coverage and delay, in terms of the number of stop points, is discussed. <s> BIB002 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> 1) UAVs as Users: <s> The communication link between the unmanned aerial vehicle and its ground station is a critical part of the complete unmanned aerial system. Nowadays, there are several kinds of the communication interfaces with different parameters such as operational frequency, data throughput, transmitting power etc. This article deals with the possibility to use the latest generations of mobile telecommunication systems, specifically 3th generation and 4th generation, for the purpose of the unmanned aerial vehicle communication interface. <s> BIB003 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> 1) UAVs as Users: <s> For the safe control of Unmanned Aerial Vehicle (UAV), a reliable communication link is very important, so a Control and Non-Payload Communication (CNPC) standard is progressing in Radio Technical Commission for Aeronautics (RTCA) [1][2]. The CNPC standard of RTCA only includes the physical layer specification, so a network architecture and upper layer protocols are required for the CNPC network deployment. In this paper, we propose the LTE based network architecture and the LTE based upper layer protocols. In addition, we propose a security architecture for the CNPC. 
<s> BIB004 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> 1) UAVs as Users: <s> Although the concept was born within military use, in recent years we have witnessed an impressive development of unmanned aerial vehicles (UAVs) for civil and academic applications. Driving this growth is the myriad of possible scenarios where this technology can be deployed, such as: fire detection, search and rescue operations, surveillance, police operations, building and engineering inspections, aerial photography and video for post-disaster assessment, agricultural monitoring, remote detection (radiation, chemical, electromagnetic), weather services, UAV photogrammetry, airborne relay networks, and more [1]. Undoubtedly, the increased use of UAVs has been sustained through the research and development of multiple low-cost solutions for the control of aerial vehicles, the evolution in microelectronics with multiple off-the-shelf components and sensors, and also through a growing global developers community with several UAV related open source projects. <s> BIB005 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> 1) UAVs as Users: <s> UA (Unmanned Aircraft) technology has rapidly advanced and its civilian applications such as public safety, research operation, and agricultural operation have been employed. To securely integrate the UAs into the general airspace, a reliable link to control and monitor those UAs is essentially required. The link between a UA and its GCS (Ground Control Station) is referred to as the CNPC (Control and Non-Payload Communication) link and the network including UAs and GCS is referred to as the integrated CNPC network. The CNPC link may include a terrestrial network such as the LTE (Long Term Evolution) and the terrestrial network connects a UA to a GCS. 
In this paper, we propose a communication architecture to integrate the LTE technology into the integrated CNPC network and define the security requirements of the communication architecture. Then, we modify an authentication and key agreement protocol and handover key management protocols for the network. We compare the modified protocols with LTE counterpart protocols. Our comparison shows that the modified protocols outperform the LTE counterpart protocols in terms of security while inducing almost the same volume of communication overhead. Our future work is to do a series of simulations to evaluate the performance and security of our protocols rigorously. <s> BIB006 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> 1) UAVs as Users: <s> The purpose of this article is to bestow the reader with a timely study of UAV cellular communications, bridging the gap between the 3GPP standardization status quo and the more forward-looking research. Special emphasis is placed on the downlink command and control (C&C) channel to aerial users, whose reliability is deemed of paramount technological importance for the commercial success of UAV cellular communications.
Through a realistic side-by-side comparison of two network deployments -- a present-day cellular infrastructure versus a next-generation massive MIMO system -- a plurality of key facts are cast light upon, with the three main ones summarized as follows: (i) UAV cell selection is essentially driven by the secondary lobes of a base station's radiation pattern, causing UAVs to associate to far-flung cells; (ii) over a 10 MHz bandwidth, and for UAV heights of up to 300 m, massive MIMO networks can support 100 kbps C&C channels in 74% of the cases when the uplink pilots for channel estimation are reused among base station sites, and in 96% of the cases without pilot reuse across the network; (iii) supporting UAV C&C channels can considerably affect the performance of ground users on account of severe pilot contamination, unless suitable power control policies are in place. <s> BIB007 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> 1) UAVs as Users: <s> In this paper, we investigate resource allocation algorithm design for multiuser unmanned aerial vehicle (UAV) communication systems in the presence of UAV jittering and user location uncertainty. In particular, we jointly optimize the two-dimensional position and the downlink beamformer of a fixed-altitude UAV for minimization of the total UAV transmit power. The problem formulation takes into account the quality-of-service requirements of the users, the imperfect knowledge of the antenna array response (AAR) caused by UAV jittering, and the user location uncertainty. Despite the non-convexity of the resulting problem, we solve the problem optimally employing a series of transformations and semidefinite programming relaxation. Our simulation results reveal the dramatic power savings enabled by the proposed robust scheme compared to two baseline schemes. 
Besides, the robustness of the proposed scheme with respect to imperfect AAR knowledge and user location uncertainty at the UAV is also confirmed. <s> BIB008 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> 1) UAVs as Users: <s> Using base stations mounted on an unmanned aerial vehicle (UAV-BSs) is a promising new evolution of wireless networks for the provision of on-demand high data rates. While many studies have explored deploying UAV-BSs in a green field—no existence of terrestrial BSs, this letter focuses on the deployment of UAV-BSs in the presence of a terrestrial network. The purpose of this letter is twofold: 1) to provide supply-side estimation for how many UAV-BSs are needed to support a terrestrial network so as to achieve a particular quality of service and 2) to investigate where these UAV-BSs should hover. We propose a novel stochastic geometry-based network planning approach that focuses on the structure of the network to find strategic placement for multiple UAV-BSs in a large-scale network. <s> BIB009 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> 1) UAVs as Users: <s> There are two main questions regarding the interaction of drones with wireless networks: first, how wireless networks can support personal or professional use of drones, and second, how drones can support wireless network performance (i.e., boosting capacity on demand, increasing coverage range, enhancing reliability and agility as an aerial node). From a communications perspective, this article categorizes drones in the first case as mobile-enabled drones (MEDs) and drones in the second case as wireless infrastructure drones (WIDs). At the dawn of 5G Release-16, this study investigates both the MED and WID cases within the realistic constraints of 5G. 
Furthermore, we discuss potential solutions for highlighted open issues, either via application of current standards or by providing suggestions toward further enhancements. Although integrating drones into cellular networks is a rather complicated issue, 4G LTE-A and the 5G Rel-15 standards seem to have significant accomplishments in building fundamental mechanisms. Nevertheless, finetuning future releases by studying existing methods from the aspects of MEDs and WIDs, and bridging the gaps with new techniques are still needed. <s> BIB010 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> 1) UAVs as Users: <s> Enabling high-rate, low-latency and ultra-reliable wireless communications between UAVs and their associated ground pilots/users is of paramount importance to realize their large-scale usage in the future. To achieve this goal, cellular- connected UAV, whereby UAVs for various applications are integrated into the cellular network as new aerial users, is a promising technology that has drawn significant attention recently. Compared to conventional cellular communication with terrestrial users, cellular-connected UAV communication possesses substantially different characteristics that present new research challenges as well as opportunities. In this article, we provide an overview of this emerging technology, by first discussing its potential benefits, unique communication and spectrum requirements, as well as new design considerations. We then introduce promising technologies to enable the future generation of 3D heterogeneous wireless networks with coexisting aerial and ground users. Last, we present simulation results to corroborate our discussions and highlight key directions for future research. 
<s> BIB011 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> 1) UAVs as Users: <s> The use of flying platforms such as unmanned aerial vehicles (UAVs), popularly known as drones, is rapidly growing. In particular, with their inherent attributes such as mobility, flexibility, and adaptive altitude, UAVs admit several key potential applications in wireless systems. On the one hand, UAVs can be used as aerial base stations to enhance coverage, capacity, reliability, and energy efficiency of wireless networks. On the other hand, UAVs can operate as flying mobile terminals within a cellular network. Such cellular-connected UAVs can enable several applications ranging from real-time video streaming to item delivery. In this paper, a comprehensive tutorial on the potential benefits and applications of UAVs in wireless communications is presented. Moreover, the important challenges and the fundamental tradeoffs in UAV-enabled wireless networks are thoroughly investigated. In particular, the key UAV challenges such as 3D deployment, performance analysis, channel modeling, and energy efficiency are explored along with representative results. Then, open problems and potential research directions pertaining to UAV communications are introduced. Finally, various analytical frameworks and mathematical tools, such as optimization theory, machine learning, stochastic geometry, transport theory, and game theory are described. The use of such tools for addressing unique UAV problems is also presented. In a nutshell, this tutorial provides key guidelines on how to analyze, optimize, and design UAV-based wireless communication systems. <s> BIB012 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> 1) UAVs as Users: <s> In this paper, a novel concept of three-dimensional (3D) cellular networks, that integrate drone base stations (drone-BS) and cellular-connected drone users (drone-UEs), is introduced. 
For this new 3D cellular architecture, a novel framework for network planning for drone-BSs and latency-minimal cell association for drone-UEs is proposed. For network planning, a tractable method for drone-BSs’ deployment based on the notion of truncated octahedron shapes is proposed, which ensures full coverage for a given space with a minimum number of drone-BSs. In addition, to characterize frequency planning in such 3D wireless networks, an analytical expression for the feasible integer frequency reuse factors is derived. Subsequently, an optimal 3D cell association scheme is developed for which the drone-UEs’ latency, considering transmission, computation, and backhaul delays, is minimized. To this end, first, the spatial distribution of the drone-UEs is estimated using a kernel density estimation method, and the parameters of the estimator are obtained using a cross-validation method. Then, according to the spatial distribution of drone-UEs and the locations of drone-BSs, the latency-minimal 3D cell association for drone-UEs is derived by exploiting tools from an optimal transport theory. The simulation results show that the proposed approach reduces the latency of drone-UEs compared with the classical cell association approach that uses a signal-to-interference-plus-noise ratio (SINR) criterion. In particular, the proposed approach yields a reduction of up to 46% in the average latency compared with the SINR-based association. The results also show that the proposed latency-optimal cell association improves the spectral efficiency of a 3D wireless cellular network of drones. <s> BIB013 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> 1) UAVs as Users: <s> Agile networking can reduce over-engineering, costs, and energy waste. Toward that end, it is vital to exploit all degrees of freedom of wireless networks efficiently, so that the service quality is not sacrificed. 
In order to reap the benefits of flexible networking, we propose a spatial network configuration (SNC) scheme, which can result in efficient networking; both from the perspective of network capacity and profitability. First, the SNC utilizes the drone-base-stations (drone-BSs) to configure access points. Drone-BSs are shifting paradigms of heterogeneous wireless networks by providing radically flexible deployment opportunities. On the other hand, their limited endurance and potential high cost increase the importance of utilizing drone-BSs efficiently. Therefore, second, user mobility is exploited via user-in-the-loop (UIL), which aims at influencing users’ mobility by offering incentives. The proposed uncoordinated SNC is a computationally efficient method, yet, it may be insufficient to exploit the synergy between the drone-BSs and UIL. Hence, we propose a joint SNC, which increases the performance gain along with the computational cost. Finally, the semi-joint SNC combines the benefits of the joint SNC with computational efficiency. The numerical results show that the semi-joint SNC is two orders of magnitude faster than the joint SNC, and a profit of more than 15% can be obtained compared to conventional systems. <s> BIB014
The applicability of 3G and 4G mobile communications for UAV data links has been studied in BIB003. The results show that Long-Term Evolution (LTE) and UMTS networks provide secure, low-latency, and high-throughput data exchange, features that are critical in UAV applications. The level of readiness of the 5G cellular network for drones, together with its open issues, is examined in BIB010. In other research, an LTE-based control and non-payload communication (CNPC) network for UAVs is investigated BIB004. Security aspects are also studied, since security is highly important for command and control data links: any failure or malicious attack can jeopardize the whole mission. UAS civil applications using cellular communication networks have been studied in BIB005. Different technologies such as EDGE, UMTS, HSPA+ (High-Speed Packet Access Plus), LTE, and LTE-A (LTE Advanced) have been investigated, and some experiments on radio propagation are presented as well. Wide radio coverage, high throughput, reduced latency, and the wide availability of radio modems are mentioned as advantages of using cellular communication for UASs. An integrated UAS CNPC network architecture with an LTE cellular data link is discussed in BIB006, where a new authentication mechanism, key agreement protocol, and handover key management protocol are also proposed. Providing such an authentication security policy is very useful in sensitive UAV applications such as delivery, industrial inspection, monitoring, and surveillance, since it is important to ensure that only authorized users are able to access the data. The potential and challenges of integrating UAVs into cellular networks as aerial users are studied in BIB011, which also presents some preliminary studies for different UAV heights. Using cellular data links for UAVs while simultaneously serving ground users is the main focus of BIB007.
A comprehensive analysis of current and next-generation cellular networks is provided. The current traditional topology is based on a single-user mode, meaning that one user is served per frequency-time resource at a time, whereas next-generation networks will feature multi-user massive MIMO (multiple-input multiple-output) BSs, in which multiple users are served per frequency-time resource at each time. 2) UAVs as BSs: The performance of a UAV acting as a mobile base station is investigated in BIB002. The main objective of the paper is to improve coverage and connectivity in a specific area in which users cooperate using a device-to-device (D2D) protocol. There is a comprehensive study on medium access control (MAC) design for UAV-based ad hoc networks using full-duplex, multi-packet reception (MPR)-enabled antennas BIB001. In their scheme, each UAV uses a code division multiple access (CDMA) technique to model the MPR. The simulation results show that combining these two capabilities (full-duplex radios along with MPR) significantly enhances the performance of UAV-based ad hoc networks. In a comprehensive tutorial paper, BIB012, the potentials and advantages of using UAVs as aerial base stations in cellular networks are studied. The authors discuss the challenges, future research directions, and statistical methods for analyzing and improving their performance. They also present their results, BIB008, on energy-efficient operation with respect to the optimal localization and number of deployed drones. As mentioned before, cellular networks were not originally designed for flying users. In BIB013, the focus is on tackling this challenge with a cellular network specifically designed to serve drone applications.
Through optimized three-dimensional placement and frequency planning, the proposed system shows a significant improvement in communication delay for flying drone users. In BIB014, a dense urban scenario is considered in which the existing cellular network falls short of satisfying the users' requested QoS. A UAV-assisted BS is added to the network to handle the users that cannot be served by the main network due to overload. A joint configuration scheme is proposed, based on placing the drone at a proper three-dimensional location and on the optimal incentive offered to each user. It is worth mentioning that an older study, BIB009, initiated this idea of using UAVs to assist existing terrestrial BSs by optimizing the number of deployed UAVs and their locations.
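As a toy illustration of the 3D placement problem discussed in this subsection, the sketch below brute-forces a single drone-BS position that minimizes the worst-case distance to a set of ground users, with distance standing in as a crude proxy for latency and path loss. The grid search, user coordinates, and candidate altitudes are illustrative assumptions, not the actual optimization methods of BIB013 or BIB014.

```python
import itertools
import math

def place_drone_bs(users, altitudes=(50, 100, 150), grid_step=50, extent=500):
    """Brute-force search for the drone-BS position minimizing the
    worst-case 3D distance to ground users (a crude latency proxy).

    users: list of (x, y) ground coordinates in meters.
    Returns ((x, y, z), worst_distance_m).
    """
    best_pos, best_cost = None, float("inf")
    coords = range(0, extent + 1, grid_step)
    for x, y, z in itertools.product(coords, coords, altitudes):
        # Worst-case link length from this candidate position to any user
        worst = max(math.dist((x, y, z), (ux, uy, 0)) for ux, uy in users)
        if worst < best_cost:
            best_pos, best_cost = (x, y, z), worst
    return best_pos, best_cost

# Three hypothetical ground users in a 500 m x 500 m area
users = [(0, 0), (400, 0), (200, 300)]
pos, cost = place_drone_bs(users)
```

Real deployments would replace the distance proxy with an air-to-ground channel model (see BIB005) and add backhaul and interference constraints, but the structure of the search is the same.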
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> Aeronautical <s> The potential benefits and challenges of applications of IEEE 802.16j-based relays in AeroMACS networks are discussed at the outset. Perhaps the most important advantage of application of multihop relays in AeroMACS networks is the flexible and cost effective radio range extension that it may allow for airport areas shadowed by large constructions and natural obstacles with virtually no increase in the required network power levels. With respect to PHY layer RSs may be classified as Transparent Relays (TRS) and Non-Transparent Relays (NTRS). While a TRS essentially functions as a repeater and bears no logical connection to the subscriber station (SS), a NTRS operates as a “mini base station (BS)” and is physically and logically connected to the SSs that it serves. Regarding MAC sublayer functionalities, RSs may operate in centralized or distributed modes. Distributed mode means that the RS is capable of scheduling network resources in coordination with multihop relay base station (MR-BS); otherwise the RS is in centralized mode. The RS can be in distributed or centralized mode with respect to security arrangements as well. The NTRS relays may further be divided into two categories; time-division transmit and receive relays (TTR) and simultaneous transmit and receive (STR) relays; both of which are supported by IEEE 802.16j standard. The TTR relay communicates with its subordinate and superordinate nodes using the same radio channel. The employment of relays in an AeroMACS network requires no alteration in the subscriber system. The key concept of “multihop gain”, which explains how the application of multihop relay enables performance enhancement in AeroMACS networks, is introduced. Under a reasonable set of assumptions and using a simple analysis, multihop gain is quantified in the form of an equation that provides a raw measure of this gain in Decibel. 
<s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> Aeronautical <s> We present in this survey new technologies proposed for the evolution of the aeronautical communication infrastructure. Motivated by studies that estimate the growth of air traffic flow, it was decided to develop a future communication infrastructure (FCI) adapted to the future aeronautical scenario. The FCI development involves researchers, industrials, and aeronautical authorities from many countries around the world, and started in 2004. The L-band Digital Aeronautical Communication System (L-DACS) is the part of the FCI that will be in charge of continental communication. The L-DACS is being developed in Europe since 2007 and two candidates were preselected: L-DACS1 and L-DACS2. In this paper, we first describe the motivations of the FCI. We then give an overview of its development activities from 2004 to 2009. After that, we provide some insights about both preselected L-DACS candidates, at their physical and medium access layers. Finally, we address the challenges on the development of the FCI/L-DACS. <s> BIB002 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> Aeronautical <s> As the Aeronautical Mobile Airport Communications System (AeroMACS) has evolved from a technology concept to a deployed communications network over major US airports, it is now time to contemplate whether the existing capacity of AeroMACS is sufficient to meet the demands set forth by all fixed and mobile applications over the airport surface given the AeroMACS constraints regarding bandwidth and transmit power. The underlying idea in this article is to present IEEE 802.16j-based WiMAX as a technology that can address future capacity enhancements and therefore is most feasible for AeroMACS applications. 
The principal argument in favor IEEE 802.16j technology is the flexible and cost effective extension of radio coverage that is afforded by relay fortified networks, with virtually no increase in the power requirements and virtually no rise in interference levels to co-allocated applications. The IEEE 802.16j-based multihop relay systems are briefly described. The focus is on key features of this technology, frame structure, and its architecture. Next, AeroMACS is described as a WiMAX-based wireless network. The two major relay modes supported by IEEE 802.16j amendment, i.e., transparent and non-transparent are described. The benefits of employing multihop relays are listed. Some key challenges related to incorporating relays into AeroMACS networks are discussed. The selection of relay type in a broadband wireless network affects a number of network parameters such as latency, signal overhead, PHY and MAC layer protocols, consequently it can alter key network quantities of throughput and QoS. <s> BIB003 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> Aeronautical <s> Aeronautical Mobile Airport Communications System (AeroMACS) is a WiMAX-based cellular technology that enables the access of Subscriber Stations (SS) to support ATC, AOC and airport applications on the airport surface. SS can be fixed stations, aircraft or vehicle embedded radios, or handheld devices. The AeroMACS access service network is provided by a number of Base Stations (BS) that operate in dedicated 5 MHz bandwidth channels. The BS manages the access of the SS to the common channel by accessing configured channels in radio cells. The connectivity to the service network is enabled through an ASN gateway that establishes the data path between the SSs and the ground network. 
<s> BIB004 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> Aeronautical <s> In recent years, there has been a dramatic increase in the use of unmanned aerial vehicles (UAVs), particularly for small UAVs, due to their affordable prices, ease of availability, and ease of operability. Existing and future applications of UAVs include remote surveillance and monitoring, relief operations, package delivery, and communication backhaul infrastructure. Additionally, UAVs are envisioned as an important component of 5G wireless technology and beyond. The unique application scenarios for UAVs necessitate accurate air-to-ground (AG) propagation channel models for designing and evaluating UAV communication links for control/non-payload as well as payload data transmissions. These AG propagation models have not been investigated in detail when compared to terrestrial propagation models. In this paper, a comprehensive survey is provided on available AG channel measurement campaigns, large and small scale fading channel models, their limitations, and future research directions for UAV communication scenarios. <s> BIB005
Mobile Airport Communications System (AeroMACS) is based on the WiMAX standard, IEEE 802.16 BIB005. This standard defines the physical and MAC layers of ground-to-aircraft and aircraft-to-aircraft communications at airports . AeroMACS was developed by the Radio Technical Commission for Aeronautics (RTCA) and then proposed at WRC-2007 (World Radiocommunication Conference 2007). It can provide different QoS levels based on various network constraints such as error rate, throughput, time delay, and resource management. The standard also scales flexibly to both large and small areas, with cell sizes up to 3 km. It operates in the C band of the protected AM(R)S spectrum (5091-5150 MHz) and provides data rates up to 54 Mbps per system. Standardization of AeroMACS by RTCA is complete, and it is being used in public trials in the United States . The FAA considers AeroMACS an important element of the future communication system. Current applications of AeroMACS include airline operational communications (AOC) messaging, ground traffic control, controller-pilot data link communication (CPDLC) messaging, weather forecast information, ATM, and airport operations. AeroMACS bandwidth may need to grow over time, and the RF spectrum allocated to AeroMACS will have to increase to satisfy unmanned aviation needs . In , a large number of flight tests were conducted to provide seamless connections through smooth handovers among three future data links: AeroMACS, VDL Mode 2, and BGAN. The project is called SANDRA, and its main goal is to achieve flexible and scalable network connectivity. Some AeroMACS services can benefit from a flexible, asymmetric ratio of the number of OFDMA symbols assigned to the downlink (DL) and uplink (UL) channels. This concept is studied in BIB004.
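The asymmetric TDD symbol split can be sketched with a back-of-the-envelope calculation. The 54 Mbps aggregate rate is the figure quoted above; the 47-symbol frame and 10% overhead are assumed, illustrative values, not parameters from the AeroMACS profile.

```python
def tdd_throughputs(total_rate_mbps, frame_symbols, dl_symbols, overhead=0.1):
    """Split a shared TDD channel's capacity between downlink and uplink
    according to the per-frame OFDMA symbol allocation.

    total_rate_mbps: aggregate channel rate (e.g. up to ~54 Mbps).
    frame_symbols:   OFDMA symbols per frame, dl_symbols of which carry DL.
    overhead:        fraction lost to preambles/guards (assumed value).
    """
    usable = total_rate_mbps * (1 - overhead)
    dl = usable * dl_symbols / frame_symbols
    ul = usable - dl
    return dl, ul

# A UAV video-surveillance mission is uplink-heavy, so most symbols go to UL:
dl, ul = tdd_throughputs(54, frame_symbols=47, dl_symbols=12)
```

Sweeping `dl_symbols` over the feasible range is how one would compare candidate DL/UL ratios against a mission's data-rate requirements.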
Their research builds on the fact that AeroMACS's TDD framework supports different shares of throughput between DL and UL, which can be beneficial for many UAV applications. They provide a comprehensive analysis of different DL/UL symbol ratios, chosen according to cell constraints and data-rate requirements, with a focus on real-time applications such as video surveillance and sensors. Preliminary studies on applying IEEE 802.16j multi-hop relays to the AeroMACS prototype to enhance its capacity and flexibility are discussed in BIB003 - BIB001. The proposed method increases ground-station capacity, provides both transparent and non-transparent relay modes, and reduces interference. Conclusion: AeroMACS is a good candidate for UAV communications due to its scalability and flexibility. Such a flexible scheme can be adapted to the UAV mission requirements, and supporting different shares of data rates for the DL and UL would compensate for resource limitations. However, its limited bandwidth may need to grow to support both manned and unmanned aircraft. Further, its limited coverage area must be extended using several GSs for UAV applications requiring larger coverage. Another minor point is that, since the standard is primarily designed for fixed or stationary users, its mobility support is limited.
L-DACS (L-band Digital Aeronautical Communication System) has two preselected candidates BIB002:
• L-DACS1, a broadband OFDM-based scheme employing adaptive coding and modulation.
• L-DACS2, which is similar to GSM, is based on the All-purpose Multichannel Aviation Communication System (AMACS) standard and the L-band Data Link (LDL), using GMSK (Gaussian Minimum Shift Keying) modulation. L-DACS1 offers interoperability among services, so that they can share the same hardware that provides navigation and surveillance. L-DACS1 performs more efficiently than L-DACS2 and is considered an almost mature technology .
Data transmission in L-DACS1 is full-duplex, meaning transmissions occur in both directions (i.e., downlink and uplink) simultaneously, whereas in L-DACS2 uplink and downlink transmissions take place alternately, in a half-duplex manner. We refer the readers to BIB002 for more information on these two techniques and their frame structures.
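The duplexing difference can be captured in a two-line model. This is illustrative only; actual L-DACS throughputs depend on bandwidth, modulation, and coding, and the rate of 1.0 (normalized) is an assumption.

```python
def effective_rates(link_rate, duplex, ul_share=0.5):
    """Effective simultaneous DL/UL rates for the two L-DACS duplexing styles.

    'full' (L-DACS1-like): DL and UL run concurrently on paired channels,
           so each direction gets the full link rate.
    'half' (L-DACS2-like): DL and UL alternate in time on one channel,
           so each direction only gets its share of airtime.
    """
    if duplex == "full":
        return link_rate, link_rate
    if duplex == "half":
        return link_rate * (1 - ul_share), link_rate * ul_share
    raise ValueError("duplex must be 'full' or 'half'")

dl_fd, ul_fd = effective_rates(1.0, "full")   # both directions at full rate
dl_hd, ul_hd = effective_rates(1.0, "half")   # airtime split between them
```

The model makes explicit why full-duplex operation matters for command-and-control links: the uplink (telemetry) never has to wait for downlink (command) airtime.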
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> B. L-DACS <s> The demand for air transportation is continuously growing. Likewise, unmanned aircraft systems (UAS) are proliferating at a tremendous pace. Both current and future air-to-ground (AG) communications systems will be deployed in the L-band (960-1164 MHz), and possibly in other bands. Aiming toward modernization, Eurocontrol recently defined two L-Band Digital Aeronautical Communication Systems (L-DACS). Primary goals of LDACS are high data rate transmission and high reliability. There are two L-DACS technology candidates, LDACS1 and LDACS2. The LDACS1 scheme employs a fairly broadband (0.5 MHz) transmission using Orthogonal Frequency-Division Multiplexing (OFDM) together with adaptive coding and modulation. The LDACS2 scheme follows a more traditional approach which is based on GSM (Global System for Mobile Communications), i.e., on second generation cellular mobile radio technology. In this paper, we investigate and compare the physical layer characteristics of L-DACS1 and L-DACS2 and then via simulations we illustrate the performance of these two communication systems in an air-to-ground channel. The air-to-ground channel we employ is one based upon a recent extensive measurement campaign. Results for error probability vs. signal to noise ratio are the focus. We also propose a new filterbank multicarrier (FBMC) based air-ground communication air interface which is consistent with previous requirements of L-DACS. We compare the FBMC performance with that of the LDACS schemes and show that FBMC has higher spectral efficiency via better time-frequency localized prototype subcarrier filters. This enables use of some guard subcarriers as data carrying subcarriers, increasing throughput. The BER results of our FBMC based L-DACS system are equivalent to those of L-DACS1 and better than those of L-DACS2. 
The simulation results also show the sensitivity of L-DACS2 systems to channel phase shifts and show the necessity of channel equalization for L-DACS2 receivers. <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> B. L-DACS <s> Aeronautical vehicle use, and consequently, air-to-ground communication systems, are growing rapidly. A growing portion of these vehicles are unmanned aerial vehicles (UAVs) or unmanned aerial systems (UAS) operating in civil aviation systems. As a consequence of this growth, air traffic volume for these vehicles is increasing dramatically, and it is estimated that traffic density will at least double by 2025. This traffic growth has led civil aviation authorities to explore development of future communication infrastructures (FCI). The L-band digital aeronautical communication system one (L-DACS1) is one of the air-ground (AG) communication systems proposed by Eurocontrol. L-DACS1 is a multicarrier communication system whose channels will be deployed in between Distance Measurement Equipment (DME) channels in frequency. DME is a transponder-based radio navigation technology, and its channels are distributed in 1 MHz frequency increments in the L-band spectrum from 960 to 1164 MHz. In this paper we investigate the effect of DME as the main interference signal to AG FCI systems. Recently we proposed a new multicarrier L-band communication system based on filterbank multicarrier (FBMC), which has some significant advantages over L-DACS1. In this paper we briefly describe these systems and compare the performance of L-DACS1 and FBMC communication systems in the coverage volume of one cell of an L-band communication cellular network working in the area of multiple DME stations. We will show the advantage and robustness of the L-band FBMC system in suppressing the DME interference from several DME ground stations across a range of geometries. 
In our simulations we use a channel model proposed for hilly/suburban environments based on the channel measurement results obtained by NASA Glenn Research Center. We compare bit error ratio (BER) results, power spectral densities for L-DACS1 and FBMC communication systems, and show the advantages of FBMC as a promising candidate for FCI systems. <s> BIB002
Several research papers exist on L-DACS1, such as BIB001 - BIB002. These papers investigate a new L-band multicarrier communication system based on filter-bank multicarrier (FBMC) modulation that improves on L-DACS1, notably in spectral efficiency and in robustness to DME interference. Of the two versions, only L-DACS1 is now active. It is being considered as part of NextGen as a multi-purpose aviation technology for CNS. L-DACS1 also offers interoperability among ATM services (e.g., navigation and surveillance) . Conclusion: L-DACS1 is an aviation standard being considered for UAVs. The main advantage of this standard is its operating frequency, which helps the system support a high level of mobility.
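To see why FBMC's ability to reclaim guard subcarriers matters (as reported in BIB001), the sketch below compares the raw throughput of a hypothetical 64-subcarrier L-band carrier when 8 guards are left idle (OFDM-like) versus only 2 (FBMC-like, thanks to better spectral containment). All parameter values are invented for illustration and are not L-DACS1 parameters.

```python
def subcarrier_throughput(total_sc, guard_sc, bits_per_sc_symbol, symbol_rate_hz):
    """Raw throughput (bits/s) when guard_sc of total_sc subcarriers
    are left unused as spectral guards."""
    data_sc = total_sc - guard_sc
    return data_sc * bits_per_sc_symbol * symbol_rate_hz

# Hypothetical 64-subcarrier carrier, QPSK (2 bits/symbol), 10 kHz symbol rate:
ofdm = subcarrier_throughput(64, guard_sc=8, bits_per_sc_symbol=2,
                             symbol_rate_hz=10_000)
fbmc = subcarrier_throughput(64, guard_sc=2, bits_per_sc_symbol=2,
                             symbol_rate_hz=10_000)
gain = fbmc / ofdm - 1  # relative gain from the reclaimed guard subcarriers
```

Even this crude count shows a double-digit percentage gain, which is consistent with the qualitative claim that better time-frequency localization increases throughput.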
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> C. ADS-B <s> Air traffic is continuously increasing worldwide, with both manned and unmanned aircraft looking to coexist in the same airspace in the future. Next generation air traffic management systems are crucial in successfully handling this growth and improving the safety of billions of future passengers. The Automatic Dependent Surveillance Broadcast (ADS-B) system is a core part of this future. Unlike traditional radar systems, this technology empowers aircraft to automatically broadcast their locations and intents, providing enhanced situational awareness. This article discusses important issues with the current state of ADS-B as it is being rolled out. We report from our OpenSky sensor network in Central Europe, which is able to capture about 30 percent of the European commercial air traffic. We analyze the 1090 MHz communication channel to understand the current state and its behavior under the increasing traffic load. Furthermore, the article considers important security challenges faced by ADS-B. Our insights are intended to help identify open research issues, furthering new interest and developments in this field. <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> C. ADS-B <s> The National Airspace System (NAS) has experienced rapid growth in unmanned aircraft system (UAS) use and demand for airspace access. Expanding UAS military and civilian applications result in increasing demand for NAS access. It is driving the need for researches on UAS operations and air traffic management in a safe and effective manner. The ability for an unmanned aircraft to “sense and avoid” is the primary and pressing technical challenge. Sense and Avoid Systems (SAAS) have been perceived as potential solution for conflict detection and air traffic collisions avoidance, but the technical implementations for UAS have been far from satisfaction. 
One possible method for the SAAS to continuously localize surrounding aircraft and obtain traffic information necessary for integrating UAS may be through Automatic Dependent Surveillance-Broadcast (ADS-B). It is one of the most potential tracking technologies in Next Generation Air Transportation System (NextGen) to improve situational awareness for pilots. This paper examines the use of ADS-B system in future SAAS, possible implementations for UAS towards infrastructures, data link communication capabilities and satellite communication abilities. The technical challenges and safety concerns associated with UAS integration are presented. Finally, a practical approach of ADS-B unit for UAS based on soft defined radio in support with satellite relay and dynamic data link switch is discussed and developed. <s> BIB002
Automatic Dependent Surveillance-Broadcast (ADS-B) is a standard developed by the FAA. Current ADS-B systems will evolve into a next generation, called "ADS-B Next." The current systems operate at 978/1090 MHz, but the next generation will initially be centered at 1030 MHz for more robustness and better efficiency. Full maturity and deployment of ADS-B Next systems are expected by 2025 as part of the FAA's NextGen program . Upon deploying ADS-B Next, additional bandwidth will be provided through spectrum reallocation of the 1030 MHz band. In ADS-B sense-and-avoid systems, there are two communication links at different frequencies: aircraft operating below 6 km use the 978 MHz Universal Access Transceiver (UAT), and aircraft operating above that height use the 1090 MHz Extended Squitter (1090ES) data link. Currently, ADS-B systems mostly operate on a single frequency link, because operating on two different frequencies causes compatibility issues for communication between different aircraft, and addressing this problem may not be cost-efficient given SWaP and budget limitations. In manned aircraft, however, dual-frequency ADS-B systems are widely used; the dual-frequency scheme can be implemented in a switching manner BIB002 . ADS-B systems suffer from data loss at distances above 280 km between the aircraft and the ground station, and this data loss starts increasing almost linearly after 50 km; hence, the aircraft cannot move too far from the nearest base station BIB001 . UAT is planned to be implemented on all aviation aircraft operating at or below Class A altitudes in the NAS. UAT supplies the aircraft with traffic information, called Traffic Information Service-Broadcast (TIS-B) , and with weather and aeronautical information, called Flight Information Service-Broadcast (FIS-B) . This multi-purpose data link architecture significantly reduces operating costs.
In addition, it increases flight safety by providing traffic situational awareness, conflict detection, and alerts. An ADS-B-enabled aircraft can send circumstantial information to other aircraft, which allows unique situational awareness . As stated before, UAS civil aviation is still in the regulation process. ADS-B Next is part of the FAA's NextGen program, and ADS-B will replace radars on all aircraft by 2020. ADS-B systems employ existing GPS hardware and software together with real-time satellite-based communications. Conclusion: This automatic satellite-based standard provides a great coverage range, and the services it provides, such as situational awareness, are a bonus for UASs. Even though ADS-B does not offer as much flexibility and adaptability as other standards, it has great potential for use in UAV communication systems.
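A minimal sketch of the altitude-based link split and the range-dependent loss behaviour described above. The 6 km, 50 km, and 280 km thresholds are the ones quoted in this section; the function itself and its labels are hypothetical.

```python
def adsb_link(altitude_m, range_km):
    """Pick the ADS-B data link the way the survey describes:
    UAT (978 MHz) at or below ~6 km altitude, 1090ES above it.
    Also flag ranges where data loss becomes significant
    (growing roughly linearly past ~50 km, severe beyond ~280 km).
    """
    link = "UAT-978MHz" if altitude_m <= 6000 else "1090ES"
    if range_km > 280:
        quality = "severe-loss"
    elif range_km > 50:
        quality = "increasing-loss"
    else:
        quality = "nominal"
    return link, quality

link, quality = adsb_link(altitude_m=120, range_km=10)        # small UAV near GS
hi_link, hi_quality = adsb_link(altitude_m=9000, range_km=300)  # far, high flyer
```

A dual-frequency, switching-style implementation as in BIB002 would re-evaluate this selection as the aircraft climbs or descends through the 6 km boundary.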
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. IEEE 802.11 <s> Advances in control engineering and material science made it possible to develop small-scale unmanned aerial vehicles (UAVs) equipped with cameras and sensors. These UAVs enable us to obtain a bird's eye view of the environment. Having access to an aerial view over large areas is helpful in disaster situations, where often only incomplete and inconsistent information is available to the rescue team. In such situations, airborne cameras and sensors are valuable sources of information helping us to build an "overview" of the environment and to assess the current situation. This paper reports on our ongoing research on deploying small-scale, battery-powered and wirelessly connected UAVs carrying cameras for disaster management applications. In this "aerial sensor network" several UAVs fly in formations and cooperate to achieve a certain mission. The ultimate goal is to have an aerial imaging system in which UAVs build a flight formation, fly over a disaster area such as wood fire or a large traffic accident, and deliver high-quality sensor data such as images or videos. These images and videos are communicated to the ground, fused, analyzed in real-time, and finally delivered to the user. In this paper we introduce our aerial sensor network and its application in disaster situations. We discuss challenges of such aerial sensor networks and focus on the optimal placement of sensors. We formulate the coverage problem as integer linear program (ILP) and present first evaluation results. <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. IEEE 802.11 <s> We consider the problem of mitigating a highly varying wireless channel between a transmitting ground node and receivers on a small, low-altitude unmanned aerial vehicle (UAV) in a 802.11 wireless mesh network. 
One approach is to use multiple transmitter and receiver nodes that exploit the channel's spatial/temporal diversity and that cooperate to improve overall packet reception. We present a series of measurement results from a real-world testbed that characterize the resulting wireless channel. We show that the correlation between receiver nodes on the airplane is poor at small time scales so receiver diversity can be exploited. Our measurements suggest that using several receiver nodes simultaneously can boost packet delivery rates substantially. Lastly, we show that similar results apply to transmitter selection diversity as well. <s> BIB002 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. IEEE 802.11 <s> We analyze unmanned aerial vehicle (UAV)-to-ground links for an 802.11a-based small quadrotor UAV network with two on-board antennas via a set of field experiments. The paper presents our first results toward modeling the uplink and downlink channel and provide the path loss exponents for an open field and a campus scenario. We illustrate the impact of antenna orientation on the received signal strength and UDP throughput performance for different heights, yaws, and distances. When both antennas are horizontal (parallel to the flight direction plane), yaw differences can be handled, whereas a vertical antenna can assist against signal loss due to tilting of the UAV during acceleration/deceleration. Further work is required to analyze fading as well as UAV-UAV links in a multi-UAV network. <s> BIB003 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. IEEE 802.11 <s> Like most advances, wireless LAN poses both opportunities and risks. The evolution of wireless networking in recent years has raised many serious security issues. These security issues are of great concern for this technology as it is being subjected to numerous attacks. 
Because of the free-space radio transmission in wireless networks, eavesdropping becomes easy and consequently a security breach may result in unauthorized access, information theft, interference and service degradation. Virtual Private Networks (VPNs) have emerged as an important solution to security threats surrounding the use of public networks for private communications. While VPNs for wired line networks have matured in both research and commercial environments, the design and deployment of VPNs for WLAN is still an evolving field. This paper presents an approach to secure IEEE 802.11g WLAN using OpenVPN, a transport layer VPN solution and its impact on performance of IEEE 802.11g WLAN. <s> BIB004 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. IEEE 802.11 <s> We developed UAVNet, a framework for the autonomous deployment of a flying Wireless Mesh Network using small quadrocopter-based Unmanned Aerial Vehicles (UAVs). The flying wireless mesh nodes are automatically interconnected to each other and building an IEEE 802.11s wireless mesh network. The implemented UAVNet prototype is able to autonomously interconnect two end systems by setting up an airborne relay, consisting of one or several flying wireless mesh nodes. The developed software includes basic functionality to control the UAVs and to setup, deploy, manage, and monitor a wireless mesh network. Our evaluations have shown that UAVNet can significantly improve network performance. <s> BIB005 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. IEEE 802.11 <s> One of the most important design problems for multi-UAV (Unmanned Air Vehicle) systems is the communication which is crucial for cooperation and collaboration between the UAVs. If all UAVs are directly connected to an infrastructure, such as a ground base or a satellite, the communication between UAVs can be realized through the infrastructure.
However, this infrastructure based communication architecture restricts the capabilities of the multi-UAV systems. Ad-hoc networking between UAVs can solve the problems arising from a fully infrastructure based UAV networks. In this paper, Flying Ad-Hoc Networks (FANETs) are surveyed which is an ad hoc network connecting the UAVs. The differences between FANETs, MANETs (Mobile Ad-hoc Networks) and VANETs (Vehicle Ad-Hoc Networks) are clarified first, and then the main FANET design challenges are introduced. Along with the existing FANET protocols, open research issues are also discussed. <s> BIB006 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. IEEE 802.11 <s> Small-scale multicopters operating as autonomous teams in the air are envisioned for aerial monitoring and transport of goods in a variety of applications, including disaster management and environmental monitoring. For such applications to become reality, a high-throughput wireless network is needed. This paper presents experimental performance results with commercially available quadrocopters communicating via IEEE 802.11a. In particular, we compare the infrastructure and mesh modes of 802.11 for one-hop and two-hop communications, thus analyzing network layer versus MAC layer relaying. Results illustrate that changes are required in the mesh mode to support applications demanding high throughput with low jitter. <s> BIB007 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. IEEE 802.11 <s> The commercial availability of small unmanned aerial vehicles (UAVs) opens new horizons for applications in disaster response, search and rescue, event monitoring, and delivery of goods. An important building block is the wireless communication between UAVs and to base stations. Design of such a wireless network may vary vastly from existing networks due to aerial network characteristics such as high mobility of UAVs in 3D space. 
This paper presents experimental performance results with commercially available UAVs. First, we show throughput results for IEEE 802.11ac in a UAV setting. Second, we demonstrate that IEEE 802.11n can have much higher throughput over longer ranges than reported in [1] and [2]. Third, we analyze the fairness in a multi-sender aerial network. Fourth, we test a real-world coverage scenario with two mobile UAVs sending to a single receiver. Performance analysis considers the rate adaptation mechanism in both indoor and outdoor line-of-sight scenarios. <s> BIB008 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> D. IEEE 802.11 <s> Unmanned aerial vehicles (UAVs) have enormous potential in the public and civil domains. These are particularly useful in applications, where human lives would otherwise be endangered. Multi-UAV systems can collaboratively complete missions more efficiently and economically as compared to single UAV systems. However, there are many issues to be resolved before effective use of UAVs can be made to provide stable and reliable context-specific networks. Much of the work carried out in the areas of mobile ad hoc networks (MANETs), and vehicular ad hoc networks (VANETs) does not address the unique characteristics of the UAV networks. UAV networks may vary from slow dynamic to dynamic and have intermittent links and fluid topology. While it is believed that ad hoc mesh network would be most suitable for UAV networks yet the architecture of multi-UAV networks has been an understudied area. Software defined networking (SDN) could facilitate flexible deployment and management of new services and help reduce cost, increase security and availability in networks. Routing demands of UAV networks go beyond the needs of MANETS and VANETS. Protocols are required that would adapt to high mobility, dynamic topology, intermittent links, power constraints, and changing link quality. 
UAVs may fail and the network may get partitioned making delay and disruption tolerance an important design consideration. Limited life of the node and dynamicity of the network lead to the requirement of seamless handovers, where researchers are looking at the work done in the areas of MANETs and VANETs, but the jury is still out. As energy supply on UAVs is limited, protocols in various layers should contribute toward greening of the network. This paper surveys the work done toward all of these outstanding issues, relating to this new class of networks, so as to spur further research in these areas. <s> BIB009
Wireless local area networks (WLANs) are so far the most popular data links used in small UAVs. The reasons for this popularity include easy setup, mobility support, and low cost. IEEE 802.11, commonly known as Wi-Fi, is a set of standards for implementing WLANs in the 2.4, 3.6, 5 and 60 GHz bands BIB004 . The original 802.11 standard and 802.11b are the oldest ones, released in 1997 and 1999, respectively. OFDM and direct sequence spread spectrum (DSSS) are the modulation schemes usually used in 802.11 protocols. A summary of the most popular IEEE 802.11 protocols that have the potential to be utilized as data links for various UAV applications is provided in Table VI. Some IEEE 802.11 amendments target specific network designs or quality constraints. For instance, IEEE 802.11s defines a standard for wireless mesh networks, describing how devices can form a WLAN multi-hop network BIB009 . IEEE 802.11e is a QoS scheme defining standards for Automatic Power Save Delivery (APSD), and it considers the trade-off between energy efficiency and delivery delay for mobile devices at the data link layer. Several research works, such as BIB006 , focus on the design of proper data links for flying ad-hoc networks (FANETs). The proposed systems are multi-hop networks in which the UAVs and the pilot collaborate and exchange data over IEEE 802.11. A prototype of a multi-UAV network, called UAVNet, is suggested in BIB005 ; it is a framework for the autonomous deployment of a flying wireless mesh network based on IEEE 802.11s. A network of small UAVs, wirelessly connected through IEEE 802.11g and carrying cameras and sensors in disaster management applications, has been studied in BIB001 . The primary goal is to achieve efficient cooperation among the UAVs through the network. The proposed aerial imaging system for mission-critical situations is capable of providing services at different desired levels of detail and resolution. Other research studies focus on performance measurements and tests of UASs using IEEE 802.11 as the communication network.
The mitigation of channel fading through spatial diversity, by employing multiple transmitter and receiver nodes, is studied in BIB002 . The scheme considers a small, low-altitude UAV within an 802.11 wireless mesh network. Several field tests to model the UAV radio channel are reported in BIB003 . This study investigates path loss exponents for a small UAV in an 802.11a-based network using UDP (user datagram protocol). The tests are designed to be mission-like: the UAV flies to different waypoints and hovers around them, modeling the behavior of a UAV that needs to gather sensing information from different points. The authors investigate the effect of antenna orientation on the received signal strength (RSS) and on the throughput of the UAV's data link at different altitudes. In BIB008 , BIB007 , the main focus is on improving the communication links between the UAVs and from the UAVs to the pilot by conducting several tests. The authors consider throughput and radio transmission range as performance metrics and use 802.11n and 802.11ac in both infrastructure and mesh topologies among the UAVs. Their results show that 802.11n and 802.11ac provide high throughput along with high data rates. Conclusion: IEEE 802.11 standards have been deployed worldwide, since they can function in a wide range of frequencies. This huge deployment has led to the maturity of these standards, which is an advantage of this family. Other benefits are easy, low-cost setup and support for an adequate level of mobility. From the aeronautical standardization point of view, however, they are short-range wireless LANs. Also, they have not yet been officially tested for aeronautical use by any standardization body. Another downside of these data links is the high level of interference in the license-exempt bands, which might cause problems for mission-critical UAVs.
Moreover, these standards were not initially designed for aeronautical or aviation purposes, although they have been widely employed for unmanned aerial applications.
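The path loss exponents measured in the field tests above fit the standard log-distance model. The sketch below is illustrative only: the exponent values and the 1 m reference distance are our own assumptions for the example, not fitted numbers from any of the surveyed measurement campaigns.

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Friis free-space path loss in dB."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def log_distance_path_loss_db(distance_m, freq_hz, exponent, d0=1.0):
    """Log-distance model: free-space loss at reference distance d0,
    plus 10*n*log10(d/d0) beyond it, where n is the path loss exponent."""
    return (free_space_path_loss_db(d0, freq_hz)
            + 10 * exponent * math.log10(distance_m / d0))

# Illustrative comparison at 2.4 GHz: free space (n = 2) versus an
# assumed campus-like environment (n = 2.7).
f = 2.4e9
for d in (10, 50, 100):
    print(d, round(log_distance_path_loss_db(d, f, 2.0), 1),
          round(log_distance_path_loss_db(d, f, 2.7), 1))
```

With n = 2 the model reduces to free-space loss; larger measured exponents, as in the UAV campus scenario, translate directly into extra loss per decade of distance.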
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> F. CPDLC <s> This paper presents an analysis of the use of Controller-Pilot Data Link Communication (CPDLC) application for Unmanned Aerial System (UAS). A fault model for injecting different types of faults in the communication application was developed, which has been integrated in a federated-simulation framework for the safety analysis of communications within the Aeronautical Telecommunications Network (ATN). Thereby, the ATN Fault Module was used to perform fault injection in communication, in order to analyze the risk posed by interferences in the communication between UAS operations and ground-based air traffic service unit (ATSU), operating in non-segregated airspace. The presented analysis and results offer new elements to guide the discussion on the need of new communication technologies for next generation of aircraft. A simulation scenario was setup with typical characteristics of the aeronautical communication environment, being composed by the UAS dynamics, the digital link equipment, the CPDLC application and the ATN Fault Module. The results so far seem to demonstrate that our model is capable of appropriately representing some of the possible failure modes in digital communication in the UAS operations, allowing the analysis of safety issues related to message contents and transmission times in the interaction with Air Traffic Control (ATC). <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> F. CPDLC <s> A multitude of wireless technologies are used by air traffic communication systems during different flight phases. From a conceptual perspective, all of them are insecure as security was never part of their design and the evolution of wireless security in aviation did not keep up with the state of the art. 
Recent contributions from academic and hacking communities have exploited this inherent vulnerability and demonstrated attacks on some of these technologies. However, these inputs revealed that a large discrepancy between the security perspective and the point of view of the aviation community exists. In this thesis, we aim to bridge this gap and combine wireless security knowledge with the perspective of aviation professionals to improve the safety of air traffic communication networks. To achieve this, we develop a comprehensive new threat model and analyse potential vulnerabilities, attacks, and countermeasures. Since not all of the required aviation knowledge is codified in academic publications, we examine the relevant aviation standards and also survey 242 international aviation experts. Besides extracting their domain knowledge, we analyse the awareness of the aviation community concerning the security of their wireless systems and collect expert opinions on the potential impact of concrete attack scenarios using insecure technologies. Based on our analysis, we propose countermeasures to secure air traffic communication that work transparently alongside existing technologies. We discuss, implement, and evaluate three different approaches based on physical and data link layer information obtained from live aircraft. We show that our countermeasures are able to defend against the injection of false data into air traffic control systems and can significantly and immediately improve the security of air traffic communication networks under the existing real-world constraints. Finally, we analyse the privacy consequences of open air traffic control protocols. We examine sensitive aircraft movements to detect large-scale events in the real world and illustrate the futility of current attempts to maintain privacy for aircraft owners. <s> BIB002
Controller-pilot data link communication (CPDLC) is a message-based service for manned aviation communications between the pilot and the ATC. CPDLC usually runs over satellite communication and VDL2 data links for ATC communications. Until 2016, Iridium was the only satellite network authorized to carry CPDLC, while Inmarsat was seeking certification to tunnel CPDLC over its satellite Internet protocol links . The message elements provided by CPDLC are categorized as "clearance," "information" or "request," and follow the same phraseology used in radiotelephony. The ATC uses CPDLC via a terminal to issue clearances, to exchange information, and to answer messages with the required instructions, advisories, and emergency guidance. The pilot can reply to messages, request clearances, exchange information, and announce or call in an emergency. The communication happens by selecting predefined phrases (e.g., EMERGENCY, REPORT, CLEARANCE, REQUEST, LOG, WHEN CAN WE, etc.). Further, both the pilot and the ATC can exchange free-text messages which do not follow the predefined formats . CPDLC satisfies the communication requirements to meet the CNS demands of future global aviation . CPDLC is secure; thus, it is popular for communicating confidential and critical aviation information BIB002 . A safety analysis of employing CPDLC for unmanned systems has been carried out in BIB001 . A fault-injection communication model was used to examine the risks associated with interference in CPDLC communications. As the results assert, it is practical to use CPDLC for UAS communication, but some adaptation is necessary to guarantee message integrity. Conclusion: CPDLC can assure the required safety for the data link used in UASs. Features such as robustness, ease of use, and efficiency make it useful for sensitive applications where failure is not acceptable.
For CPDLC to be used as the general standard aviation platform for UAVs, however, modifications must be applied. Moreover, the choice of satellite or terrestrial data links to carry CPDLC will affect the coverage range and the financial cost of the system.
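The CPDLC message model described above — a small set of categories, predefined phrases, and an escape hatch for free text — can be sketched as a tiny data structure. The class name and the abbreviated phrase set below are our own illustrative assumptions; the real message set is defined by the applicable ICAO documents, not by this sketch.

```python
from dataclasses import dataclass

# Illustrative subset only; the actual CPDLC message elements and
# categories are standardized by ICAO.
CATEGORIES = {"clearance", "information", "request"}
PREDEFINED = {"EMERGENCY", "REPORT", "CLEARANCE", "REQUEST", "LOG", "WHEN CAN WE"}

@dataclass
class CpdlcElement:
    category: str
    phrase: str          # a predefined phrase, or free text
    free_text: bool = False

    def __post_init__(self):
        # Reject unknown categories and non-standard phrases, unless
        # the element is explicitly marked as free text.
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")
        if not self.free_text and self.phrase not in PREDEFINED:
            raise ValueError(f"not a predefined phrase: {self.phrase}")

    def render(self) -> str:
        return f"[{self.category.upper()}] {self.phrase}"

# A pilot request built from a predefined phrase, and a free-text message.
req = CpdlcElement("request", "WHEN CAN WE")
note = CpdlcElement("information", "light turbulence reported at FL180", free_text=True)
print(req.render())
print(note.render())
```

The validation step mirrors the point made in the safety analysis: constrained, well-formed messages are what make integrity checking tractable, and free text is the exception that needs extra care.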
Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> G. SWIM <s> The NextGen National Airspace System (NAS) has begun to take shape with the functional emergence of En Route Automation Modernization (ERAM) and the Terminal Automation Modernization and Replacement (TAMR). NextGen Programs include: 1) Automatic Dependent Surveillance-Broadcast (ADS-B); 2) Collaborative Air Traffic Management Technologies (CATMT); 3) National Airspace System Voice System (NVS); 4) NextGen Weather; 5) Data Communications (Data Comm); and 6) System Wide Information Management (SWIM). These programs are being implemented to transform the operational precepts of the NAS, as based currently on ground control of aircraft separation. The NextGen programs will fundamentally change the NAS operational framework from a voice-centric to a data-centric model. The new model will rely extensively on air-ground operational integration coupled with ready-access to communication, navigation and surveillance (CNS) data to enable 4-dimensional trajectory-based-operation (4DT) flow management. In addition to these advanced-development NextGen programs, the FAA has sponsored a pathfinder initiative to enable flightcrew access to SWIM data for enhanced shared situational awareness (SSA). This pathfinder, known as Aircraft Access to SWIM (AAtS), has organized itself within a broad framework of public/private collaboration (government & industry partnership) focused on the coupled evolution of air-ground-integration and related enabling technologies. As a pathfinder, the AAtS operational paradigm is consistent with the U.S. vision for SWIM. The solution developed focuses on connecting SWIM to aircraft. Other air-service regions are considering different concepts, architectures and requirements for SWIM-aircraft connectivity and are targeted for later dates for entry into service.
The AAtS initiative has developed an operations concept and a technical implementation framework, and has conducted operational demonstrations. These establish a proof-of-concept benchmark for implementation of a collaborative decision making (CDM) capability. The AAtS pathfinder environment leverages existing systems and new communications infrastructure, including broadband IP and COTS technology (see Figure 1). Demonstration data promoted better understanding of: 1) usage patterns for available products; 2) system-level performance; 3) utility of the demonstrated data links; 4) data protocol efficiency; 5) communication issue mitigation; and 6) potential performance improvements through lessons learned. Phase 2 will demonstrate air-to-ground communications in addition to the ground-to-air exchange from Phase 1. The Phase 2 demonstrations will measure operational systems performance, with an eye toward how best to facilitate CDM among the ATC, flightcrews, and the aircraft operations center (AOC). Data from resulting operational improvements will illustrate system efficiencies to be harvested through CDM. Air-ground integration within the realm of 4DT data exchange schema promises to yield material system efficiencies, especially as they apply to route optimization and accommodation of transient disruptions, e.g. traffic or weather induced flow upsets. The AAtS initiative will shed light on the nature of the expected efficiencies in addition to issues pertaining to system-safety and information security. AAtS, as a pathfinder, is well underway and has spurred significant future-systems architectural development within individual avionics original equipment manufacturers (OEMs) in addition to collaborative development OEM teams. The future of AAtS is to build on a premise that SWIM services can produce a benefit for the operators and FAA thru CATM. As a result, CATM via AAtS will enable improved airspace usage as part of a broader connected aircraft strategy.
The belief is that air traffic management and connected aircraft will benefit from increased situational awareness, combined with their ability to expeditiously formulate operational solutions working with controllers and company dispatchers. To become viable from a business perspective, AAtS must yield material benefit through cost avoidance in the air and during ground operations. <s> BIB001 </s> Potential Data Link Candidates for Civilian Unmanned Aircraft Systems: A Survey <s> G. SWIM <s> This paper evaluates the use of System Wide Information Management (SWIM) to support sharing of safety ATM data link information from Controller Pilot Data Link Communications (CPDLC) and Automatic Dependent Surveillance (ADS), in addition to advisory information. The convergence of both advisory and safety critical information into IP-based (including ATN/IPS) networks makes the use of common infrastructure resources a viable possibility for data link and SWIM service providers in a cost-effective way. <s> BIB002
As mentioned before, the FAA is focusing on its future global plan for aviation standardization, NextGen, planned to roll out by 2025. System Wide Information Management (SWIM) is the main part of this plan. The FAA intends SWIM to provide a secure platform for cooperation among national and international aviation organizations. The SWIM concept was first initiated by ICAO to improve data access for all the elements in the network . It is expected to turn the NAS into a network-enabled system and to use the data exchanged over the ATM networks to improve traffic management, safety, and situational awareness . The Enterprise Service Bus (ESB) is one of the leading parts of the SWIM architecture. The primary role of the ESB is to provide general middleware for basic communication between different service users. The ESB is also expected to provide higher-level functions, including content-based routing, data monitoring for fault management and security functions, password management, authentication, and authorization. Further, secure gateways are needed to communicate with all non-NAS users . SWIM must be able to provide communication links between NAS and non-NAS users. These services will follow the "publish/subscribe" or "request/reply" messaging patterns. They will all use one of four data exchange standards: the Aeronautical Information Exchange Model (AIXM), the Weather Information Exchange Model (WXXM), the Flight Information Exchange Model (FIXM), or the proposed ICAO ATM Information Reference Model (AIRM) BIB001 . The aviation data link of this platform, called Aircraft Access to SWIM (AAtS), manages the communication between the aircraft and the ground station through cellular or satellite communication networks. However, the functionality of this data link is limited to advisory services and does not extend to actually controlling the aircraft. Hence, this data link is not used for command and control, but only for planning and awareness.
Further, an intermediary service called the Data Management Service (DMS) will be implemented in SWIM, providing a data link for piloting the aircraft BIB002 . Conclusion: since SWIM has not been fully implemented, its performance and suitability for UAVs cannot be judged yet. However, the features anticipated by the FAA sound very promising for paving the way toward integrating all manned and unmanned vehicles into the same airspace. This service is expected to provide a safe platform for communications and to offer a higher level of situational awareness.
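The publish/subscribe pattern that SWIM services follow can be illustrated with a minimal dispatcher. This is a toy sketch, not SWIM middleware: the class name and topic strings are invented for the example, and a real ESB layers routing, monitoring, and security on top of this basic mechanism.

```python
from collections import defaultdict
from typing import Callable

class MiniBus:
    """Toy publish/subscribe dispatcher illustrating the messaging
    pattern used by SWIM services (illustrative only)."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]):
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: dict):
        # Deliver to every subscriber of the topic; unknown topics
        # simply have no subscribers.
        for handler in self._subs[topic]:
            handler(message)

bus = MiniBus()
received = []
# Hypothetical topic names; WXXM and FIXM are the SWIM weather and
# flight exchange models mentioned above.
bus.subscribe("weather/wxxm", received.append)
bus.publish("weather/wxxm", {"station": "KJFK", "wind_kt": 12})
bus.publish("flight/fixm", {"id": "UAV42"})  # no subscriber registered
print(received)
```

The decoupling shown here — publishers need not know who consumes their data — is what lets SWIM add new NAS and non-NAS consumers without changing the producing services.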
Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> Presents serial and parallel algorithms for solving a system of equations that arises from the discretization of the Hamilton-Jacobi equation associated to a trajectory optimization problem of the following type. A vehicle starts at a prespecified point x_0 and follows a unit speed trajectory x(t) inside a region in R^m, until an unspecified time T at which the region is exited. A trajectory minimising a cost function of the form ∫_0^T r(x(t))dt + q(x(T)) is sought. The discretized Hamilton-Jacobi equation corresponding to this problem is usually solved using iterative methods. Nevertheless, assuming that the function r is positive, one is able to exploit the problem structure and develop one-pass algorithms for the discretized problem. The first one resembles Dijkstra's shortest path algorithm and runs in time O(n log n), where n is the number of grid points. The second algorithm uses a somewhat different discretization and borrows some ideas from Dial's shortest path algorithm; it runs in time O(n), which is the best possible, under some fairly mild assumptions. Finally, the author shows that the latter algorithm can be efficiently parallelized: for two-dimensional problems and with p processors, its running time becomes O(n/p), provided that p = O(√n / log n). <s> BIB001 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> A fast marching level set method is presented for monotonically advancing fronts, which leads to an extremely fast scheme for solving the Eikonal equation. Level set methods are numerical techniques for computing the position of propagating fronts. They rely on an initial value partial differential equation for a propagating level set function and use techniques borrowed from hyperbolic conservation laws.
Topological changes, corner and cusp development, and accurate determination of geometric properties such as curvature and normal direction are naturally obtained in this setting. This paper describes a particular case of such methods for interfaces whose speed depends only on local position. The technique works by coupling work on entropy conditions for interface motion, the theory of viscosity solutions for Hamilton-Jacobi equations, and fast adaptive narrow band level set methods. The technique is applicable to a variety of problems, including shape-from-shading problems, lithographic development calculations in microchip manufacturing, and arrival time problems in control theory. <s> BIB002 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> The Fast Marching Method is a numerical algorithm for solving the Eikonal equation on a rectangular orthogonal mesh in O(M log M) steps, where M is the total number of grid points. The scheme relies on an upwind finite difference approximation to the gradient and a resulting causality relationship that lends itself to a Dijkstra-like programming approach. In this paper, we discuss several extensions to this technique, including higher order versions on unstructured meshes in R^n and on manifolds and connections to more general static Hamilton-Jacobi equations. <s> BIB003 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> In this paper a fast sweeping method for computing the numerical solution of Eikonal equations on a rectangular grid is presented. The method is an iterative method which uses upwind difference for discretization and uses Gauss-Seidel iterations with alternating sweeping ordering to solve the discretized system. The crucial idea is that each sweeping ordering follows a family of characteristics of the corresponding Eikonal equation in a certain direction simultaneously.
The method has an optimal complexity of O(N) for N grid points and is extremely simple to implement in any number of dimensions. Monotonicity and stability properties of the fast sweeping algorithm are proven. Convergence and error estimates of the algorithm for computing the distance function are studied in detail. It is shown that 2^n Gauss-Seidel iterations are enough for the distance function in n dimensions. An estimation of the number of iterations for general Eikonal equations is also studied. Numerical examples are used to verify the analysis. <s> BIB004 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> In this note we present an implementation of the fast marching algorithm for solving Eikonal equations that in practice reduces the original run-time from O(N log N) to linear. This lower run-time cost is obtained while keeping an error bound of the same order of magnitude as the original algorithm. This improvement is achieved introducing the straight forward untidy priority queue, obtained via a quantization of the priorities in the marching computation. We present the underlying framework, estimations on the error, and examples showing the usefulness of the proposed approach. <s> BIB005 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> A computational study of the fast marching and the fast sweeping methods for the eikonal equation is given. It is stressed that both algorithms should be considered as "direct" (as opposed to iterative) methods. On realistic grids, fast sweeping is faster than fast marching for problems with simple geometry. For strongly nonuniform problems and/or complex geometry, the situation may be reversed. Finally, fully second order generalizations of methods of this type for problems with obstacles are proposed and implemented. <s> BIB006 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I.
INTRODUCTION <s> We present a variational framework that integrates the statistical boundary shape models into a Level Set system that is capable of both segmenting and recognizing objects. Since we aim to recognize objects, we trace the active contour and stop it near real object boundaries while inspecting the shape of the contour instead of enforcing the contour to get a priori shape. We get the location of character boundaries and character labels at the system output. We developed a promising local front stopping scheme based on both image and shape information for fast marching systems. A new object boundary shape signature model, based on directional Gauss gradient filter responses, is also proposed. The character recognition system that employs the new boundary shape descriptor outperforms the other systems, based on well-known boundary signatures such as centroid distance, curvature etc. <s> BIB007 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> A distance field is a representation where, at each point within the field, we know the distance from that point to the closest point on any object within the domain. In addition to distance, other properties may be derived from the distance field, such as the direction to the surface, and when the distance field is signed, we may also determine if the point is internal or external to objects within the domain. The distance field has been found to be a useful construction within the areas of computer vision, physics, and computer graphics. This paper serves as an exposition of methods for the production of distance fields, and a review of alternative representations and applications of distance fields. In the course of this paper, we present various methods from all three of the above areas, and we answer pertinent questions such as How accurate are these methods compared to each other? 
How simple are they to implement?, and What is the complexity and runtime of such methods?. <s> BIB008 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> In this paper, we propose a segmentation method based on the generalized fast marching method (GFMM) developed by Carlini et al. (submitted). The classical fast marching method (FMM) is a very efficient method for front evolution problems with normal velocity (see also Epstein and Gage, The curve shortening flow. In: Chorin, A., Majda, A. (eds.) Wave Motion: Theory, Modelling and Computation, 1997) of constant sign. The GFMM is an extension of the FMM and removes this sign constraint by authorizing time-dependent velocity with no restriction on the sign. In our modelling, the velocity is borrowed from the Chan–Vese model for segmentation (Chan and Vese, IEEE Trans Image Process 10(2):266–277, 2001). The algorithm is presented and analyzed and some numerical experiments are given, showing in particular that the constraints in the initialization stage can be weakened and that the GFMM offers a powerful and computationally efficient algorithm. <s> BIB009 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> In this paper we propose a novel computational technique to solve the Eikonal equation efficiently on parallel architectures. The proposed method manages the list of active nodes and iteratively updates the solutions on those nodes until they converge. Nodes are added to or removed from the list based on a convergence measure, but the management of this list does not entail an extra burden of expensive ordered data structures or special updating sequences. The proposed method has suboptimal worst-case performance but, in practice, on real and synthetic datasets, runs faster than guaranteed-optimal alternatives. 
Furthermore, the proposed method uses only local, synchronous updates and therefore has better cache coherency, is simple to implement, and scales efficiently on parallel architectures. This paper describes the method, proves its consistency, gives a performance analysis that compares the proposed method against the state-of-the-art Eikonal solvers, and describes the implementation on a single instruction multiple datastream (SIMD) parallel architecture. <s> BIB010 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> In clinical practice, renal cancer diagnosis is performed by manual quantifications of tumor size and enhancement, which are time consuming and show high variability. We propose a computer-assisted clinical tool to assess and classify renal tumors in contrast-enhanced CT for the management and classification of kidney tumors. The quantification of lesions used level-sets and a statistical refinement step to adapt to the shape of the lesions. Intra-patient and inter-phase registration facilitated the study of lesion enhancement. From the segmented lesions, the histograms of curvature-related features were used to classify the lesion types via random sampling. The clinical tool allows the accurate quantification and classification of cysts and cancer from clinical data. Cancer types are further classified into four categories. Computer-assisted image analysis shows great potential for tumor diagnosis and monitoring. <s> BIB011 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> In this paper, we outline two improvements to the fast sweeping method to improve the speed of the method in general and more specifically in cases where the speed is changing rapidly. The conventional wisdom is that fast sweeping works best when the speed changes slowly, and fast marching is the algorithm of choice when the speed changes rapidly. 
The goal here is to achieve run times for the fast sweeping method that are at least as fast, or faster, than competitive methods, e.g. fast marching, in the case where the speed is changing rapidly. The first improvement, which we call the locking method, dynamically keeps track of grid points that have either already had the solution successfully calculated at that grid point or for which the solution cannot be successfully calculated during the current iteration. These locked points can quickly be skipped over during the fast sweeping iterations, avoiding many time-consuming calculations. The second improvement, which we call the two queue method, keeps all of the unlocked points in a data structure so that the locked points no longer need to be visited at all. Unfortunately, it is not possible to insert new points into the data structure while maintaining the fast sweeping ordering without at least occasionally sorting. Instead, we segregate the grid points into those with small predicted solutions and those with large predicted solutions using two queues. We give two ways of performing this segregation. This method is a label correcting (iterative) method like the fast sweeping method, but it tends to operate near the front like the fast marching method. It is reminiscent of the threshold method for finding the shortest path on a network, [F. Glover, D. Klingman, and N. Phillips, Oper. Res., 33 (1985), pp. 65-73]. We demonstrate the numerical efficiency of the improved methods on a number of examples. <s> BIB012 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> This paper presents a hardware based least time path fast marching method for seismic traveltime calculation. The hardware implementation uses field programmable gate array (FPGA) reconfigurable computing technology and applies it to a compute intensive seismic problem. 
The algorithm transformation process is described at a system level and at a detailed hardware level. <s> BIB013 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> We have developed a novel hierarchical data structure for the efficient representation of sparse, time-varying volumetric data discretized on a 3D grid. Our “VDB”, so named because it is a Volumetric, Dynamic grid that shares several characteristics with Bptrees, exploits spatial coherency of time-varying data to separately and compactly encode data values and grid topology. VDB models a virtually infinite 3D index space that allows for cache-coherent and fast data access into sparse volumes of high resolution. It imposes no topology restrictions on the sparsity of the volumetric data, and it supports fast (average O(1)) random access patterns when the data are inserted, retrieved, or deleted. This is in contrast to most existing sparse volumetric data structures, which assume either static or manifold topology and require specific data access patterns to compensate for slow random access. Since the VDB data structure is fundamentally hierarchical, it also facilitates adaptive grid sampling, and the inherent acceleration structure leads to fast algorithms that are well-suited for simulations. As such, VDB has proven useful for several applications that call for large, sparse, animated volumes, for example, level set dynamics and cloud modeling. In this article, we showcase some of these algorithms and compare VDB with existing, state-of-the-art data structures. <s> BIB014 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> We compare the computational performance of the Fast Marching Method, the Fast Sweeping Method and of the Fast Iterative Method to determine a numerical solution to the eikonal equation. We point out how the FIM outperforms the other two thanks to its parallel processing capabilities. 
<s> BIB015 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> We presented a new ray tracing technique which is applicable for crosshole radar traveltime tomography. The new algorithm divides the ray tracing process into two steps: First the wavefront propagation times of all grid points in a velocity field are calculated using the multistencils fast marching method (MSFM), and then the ray tracing paths having the minimum traveltime can be easily obtained by following the steepest gradient direction from the receiver to the transmitter. In contrast to traditional fast marching method (FMM) and higher accuracy fast marching method (HAFMM), MSFM algorithm calculates traveltimes using two stencils at the same time, and the information in diagonal direction can be included, thus the calculation accuracy and efficiency can be improved greatly. In order to verify the accuracy and efficiency of the new ray tracing method, we test the proposed scheme on two synthetic velocity models where the exact solutions can be calculated, and we compared our results with the one obtained by a FMM based and a HAFMM based steepest descend ray tracing methods. This comparison indicated that the suggested ray tracing technique can achieve much better results both on accuracy and efficiency compared to the FMM based and the HAFMM based steepest descend ray tracing methods. <s> BIB016 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> The hippocampus is a part of the limbic system and plays an important role in long-term memory and spatial navigation. As part of hippocampal imaging studies, the region of the hippocampus is usually segmented manually, which is a time-consuming process. Here, we describe a comparison of the active contour model (ACM) and the fast marching method (FMM) using magnetic resonance (MR) images of the human brain. 
We determine optimized input parameters for both models to segment the hippocampus using T1-weighted MR images, and we found that the ACM provided superior performance compared with the FMM without significant additional computational expense. <s> BIB017 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> Full waveform inversion (FWI) is a process in which seismic data in time or frequency domain is fit by changing the velocity of the media under investigation. The problem is non-linear, and therefore optimization techniques have been used to find a geological solution to the problem. The main problem in fitting the data is the lack of low spatial frequencies. This deficiency often leads to a local minimum and to non-geologic solutions. In this work we explore how to obtain low frequency information for FWI. Our approach involves augmenting FWI with travel time tomography, which has low-frequency features. By jointly inverting these two problems we enrich FWI with information that can replace low frequency data. In addition, we use high order regularization in a preliminary inversion stage to prevent high frequency features from polluting our model in the initial stages of the reconstruction. This regularization also promote the low-frequencies non-dominant modes that exist in the FWI sensitivity. By applying a smoothly regularized joint inversion we are able to obtain a smooth model than can later be used to recover a good approximation for the true model. A second contribution of this paper involves the acceleration of the main computational bottleneck in FWI--the solution of the Helmholtz equation. We show that the solution time can be significantly reduced by solving the equation for multiple right hand sides using block multigrid preconditioned Krylov methods. <s> BIB018 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. 
INTRODUCTION <s> Traveltime computation is essential for many seismic data processing applications and velocity analysis tools. High-resolution seismic imaging requires eikonal solvers to account for anisotropy whenever it significantly affects the seismic wave kinematics. Moreover, computation of auxiliary quantities, such as amplitude and take-off angle, relies on highly accurate traveltime solutions. However, the finite-difference-based eikonal solution for a point-source initial condition has upwind source singularity at the source position because the wavefront curvature is large near the source point. Therefore, all finite-difference solvers, even the high-order ones, show inaccuracies because the errors due to source-singularity spread from the source point to the whole computational domain. We address the source-singularity problem for tilted transversely isotropic (TTI) eikonal solvers using factorization. We solve a sequence of factored tilted elliptically anisotropic (TEA) eikonal equations iteratively,... <s> BIB019 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> I. INTRODUCTION <s> In this paper, we propose a framework for online quadrotor motion planning for autonomous navigation in unknown environments. Based on the onboard state estimation and environment perception, we adopt a fast marching-based path searching method to find a path on a velocity field induced by the Euclidean signed distance field (ESDF) of the map, to achieve better time allocation. We generate a flight corridor for the quadrotor to travel through by inflating the path against the environment. We represent the trajectory as piecewise Bezier curves by using Bernstein polynomial basis and formulate the trajectory generation problem as typical convex programs. By using Bezier curves, we are able to bound positions and higher order dynamics of the trajectory entirely within safe regions.
The proposed motion planning method is integrated into a customized light-weight quadrotor platform and is validated by presenting fully autonomous navigation in unknown cluttered indoor and outdoor environments. We also release our code for trajectory generation as an open-source package. <s> BIB020
The Fast Marching Method (FMM) has been extensively applied since it was first proposed in 1995 BIB001 as a solution to isotropic control problems using first-order semi-Lagrangian discretizations on Cartesian grids. The first approach was introduced by Tsitsiklis BIB001 , but the most popular solution was given a few months later by Sethian BIB002 using first-order upwind finite differences in the context of isotropic front propagation. The differences and similarities between both works can be found in . The Fast Sweeping Method (FSM) is a more modern iterative algorithm which uses Gauss-Seidel iterations with alternating sweeping orderings to solve the same discretized Eikonal equation on a rectangular grid BIB004 . As long as the same first-order upwind discretization is used in both methods, the solution computed by the Fast Sweeping Method is exactly the same as the one given by the Fast Marching Method. These two methods, commonly known as Fast Methods BIB006 , BIB003 , were originally proposed to simulate wavefront propagation through a regular discretization of the space. (The associate editor coordinating the review of this manuscript and approving it for publication was Bora Onat.) However, many different approaches have since been proposed, extending these methods to other discretizations and formulations. For a more detailed history of Fast Methods, we refer the interested reader to . One of the reasons for the popularity of the Fast Methods is that they can be applied in many different fields, such as: path planning in robotics - BIB020 ; image segmentation BIB009 - BIB017 , shape and surface recognition and segmentation BIB007 , , volumetric data representation BIB014 , and quantification of lesions in contrast-enhanced tomography BIB011 in computer vision; and traveltime computation in geophysical applications such as tomography BIB016 - BIB018 and seismology BIB013 , BIB019 .
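To make the above description concrete, the following is a minimal, illustrative Python sketch of the FMM: a binary-heap wavefront expansion with the standard first-order upwind update on a 2D grid. This is our own didactic sketch, not one of the implementations benchmarked in this survey; all function and variable names are ours.

```python
import heapq
import math

def fast_marching(speed, h=1.0, sources=((0, 0),)):
    """Minimal first-order Fast Marching Method on a 2D grid.

    speed: 2D list of positive propagation speeds F(i, j).
    Returns a grid of arrival times T approximating |grad T| = 1/F.
    """
    rows, cols = len(speed), len(speed[0])
    T = [[math.inf] * cols for _ in range(rows)]
    frozen = [[False] * cols for _ in range(rows)]
    heap = []
    for (i, j) in sources:
        T[i][j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def update(i, j):
        # Upwind neighbours: smallest arrival time along each axis.
        tx = min(T[i - 1][j] if i > 0 else math.inf,
                 T[i + 1][j] if i < rows - 1 else math.inf)
        ty = min(T[i][j - 1] if j > 0 else math.inf,
                 T[i][j + 1] if j < cols - 1 else math.inf)
        a, b = sorted((tx, ty))
        hf = h / speed[i][j]
        # One-sided update when only one axis contributes.
        if b - a >= hf:
            return a + hf
        # Two-sided quadratic update.
        return 0.5 * (a + b + math.sqrt(2.0 * hf * hf - (a - b) ** 2))

    while heap:
        t, i, j = heapq.heappop(heap)
        if frozen[i][j]:
            continue  # stale heap entry (lazy deletion)
        frozen[i][j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols and not frozen[ni][nj]:
                t_new = update(ni, nj)
                if t_new < T[ni][nj]:
                    T[ni][nj] = t_new
                    heapq.heappush(heap, (t_new, ni, nj))
    return T
```

With a constant unit speed and a single source at a corner, the result approximates the Euclidean distance field; the diagonal neighbour of the source, for example, receives 1 + 1/√2 ≈ 1.71 rather than the exact √2, which is the characteristic first-order discretization error.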
Despite the vast amount of literature on Fast Methods, there is a lack of in-depth comparison and benchmarking among the proposed methods. In this paper, nine sequential (mono-thread), isotropic, grid-based Fast Methods are detailed in the following sections: Fast Marching Method (FMM), Fibonacci-Heap FMM (FMMFib), Simplified FMM (SFMM), Untidy FMM (UFMM), Group Marching Method (GMM), Fast Iterative Method (FIM), Fast Sweeping Method (FSM), Locking Sweeping Method (LSM) and Double Dynamic Queue Method (DDQM). All these algorithms provide exactly the same solution except for UFMM and FIM, which have bounded errors. However, the question of which one is best for which application is still open because of this lack of comparison. For example, BIB006 compares only FMM and FSM in spite of the fact that GMM and UFMM had already been published. Survey BIB008 mentions most of the algorithms but only compares FMM and SFMM. A more recent work compares FMM, FSM and FIM in 2D BIB015 . However, FIM was parallelized and implemented in CUDA, resulting in a biased comparison. Fig. 1 schematically shows the comparisons between algorithms carried out in the literature.
FIGURE 1. Comparisons among algorithms. Colors refer to different works: orange BIB012 , gray BIB006 , yellow BIB010 , green BIB015 , black BIB005 , and blue BIB008 .
As an example, UFMM has barely been compared to its counterparts, whereas FMM and FSM are compared in many papers despite the fact that it is well known when each of them performs better: FSM is faster in simple environments with constant speed. In addition, results from one work cannot be directly extrapolated to other works, since the performance of these methods highly depends on their implementation.
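Since FSM and LSM revisit the whole grid instead of expanding an ordered front, a minimal sketch of FSM helps illustrate the contrast with FMM: Gauss-Seidel passes in the four alternating 2D sweep orderings, repeated until no value changes. Again, this is our own didactic Python code, not a benchmarked implementation.

```python
import math

def fast_sweeping(speed, h=1.0, sources=((0, 0),), tol=1e-12):
    """Minimal 2D Fast Sweeping Method with first-order upwind updates."""
    rows, cols = len(speed), len(speed[0])
    T = [[math.inf] * cols for _ in range(rows)]
    for (i, j) in sources:
        T[i][j] = 0.0

    def solve(i, j):
        # Same local solver as FMM: min neighbour per axis, then quadratic.
        tx = min(T[i - 1][j] if i > 0 else math.inf,
                 T[i + 1][j] if i < rows - 1 else math.inf)
        ty = min(T[i][j - 1] if j > 0 else math.inf,
                 T[i][j + 1] if j < cols - 1 else math.inf)
        a, b = sorted((tx, ty))
        if a == math.inf:
            return math.inf  # no known neighbour yet
        hf = h / speed[i][j]
        if b - a >= hf:
            return a + hf
        return 0.5 * (a + b + math.sqrt(2.0 * hf * hf - (a - b) ** 2))

    # The four sweep orderings follow the four families of characteristics.
    orders = [(range(rows), range(cols)),
              (range(rows), range(cols - 1, -1, -1)),
              (range(rows - 1, -1, -1), range(cols)),
              (range(rows - 1, -1, -1), range(cols - 1, -1, -1))]
    changed = True
    while changed:
        changed = False
        for ri, rj in orders:
            for i in ri:
                for j in rj:
                    t_new = solve(i, j)
                    if t_new < T[i][j] - tol:
                        T[i][j] = t_new
                        changed = True
    return T
```

Because this sketch uses the same first-order upwind local solver as FMM, the converged arrival times coincide with those of FMM, so wall-clock time rather than accuracy is what distinguishes the exact methods.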
Fast Methods for Eikonal Equations: An Experimental Survey <s> OTHER ALGORITHMS NOT INCLUDED IN THE SURVEY <s> Recently, fast marching methods (FMM) beyond first order have been developed for producing rapid solutions to the eikonal equation. In this paper, we present imaging results for 3‐D prestack Kirchhoff migration using traveltimes computed using the first‐order and second‐order FMM on several 3‐D prestack synthetic and real data sets. The second order traveltimes produce a much better image of the structure. Moreover, insufficiently sampled first order traveltimes can introduce consistent errors in the common reflection point gathers that affect velocity analysis. First‐order traveltimes tend to be smaller than analytic traveltimes, which in turn affects the migration velocity analysis, falsely indicating that the interval velocity was too low. <s> BIB001 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> OTHER ALGORITHMS NOT INCLUDED IN THE SURVEY <s> We develop a fast sweeping method for the factored eikonal equation. By decomposing the solution of a general eikonal equation as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation. The second factor is a necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. 
Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of iterations for the Gauss-Seidel iterations is independent of the mesh size, (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution directly computed from the original eikonal equation, especially for point sources. <s> BIB002 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> OTHER ALGORITHMS NOT INCLUDED IN THE SURVEY <s> We present an algorithm for solving in parallel the Eikonal equation. The efficiency of our approach is rooted in the ordering and distribution of the grid points on the available processors; we utilize a Cuthill-McKee ordering. The advantages of our approach is that (1) the efficiency does not plateau for a large number of threads; we compare our approach to the current state-of-the-art parallel implementation of Zhao (2007) [14] and (2) the total number of iterations needed for convergence is the same as that of a sequential implementation, i.e. our parallel implementation does not increase the complexity of the underlying sequential algorithm. Numerical examples are used to illustrate the efficiency of our approach. <s> BIB003 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> OTHER ALGORITHMS NOT INCLUDED IN THE SURVEY <s> The use of local single-pass methods (like, e.g., the Fast Marching method) has become popular in the solution of some Hamilton-Jacobi equations. The prototype of these equations is the eikonal equation, for which the methods can be applied saving CPU time and possibly memory allocation. Then, some natural questions arise: can local single-pass methods solve any Hamilton- Jacobi equation? If not, where the limit should be set? This paper tries to answer these questions. 
In order to give a complete picture, we present an overview of some fast methods available in literature and we briefly analyze their main features. We also introduce some numerical tools and provide several numerical tests which are intended to exhibit the limitations of the methods. We show that the construction of a local single-pass method for general Hamilton-Jacobi equations is very hard, if not impossible. Nevertheless, some special classes of problems can be actually solved, making local single-pass methods very useful from the practical point of view. games, and it has a great impact in many areas, such as robotics, aeronautics, electrical and aerospace engineering. In particular, for control/game problems, an approximation of the value function allows for the synthesis of optimal control laws in feedback form, and then for the computation of optimal trajectories. The value function for a control problem (resp., differential game) can be characterized as the unique viscosity solution of the corresponding Hamilton-Jacobi-Bellman (HJB) equation (resp., Hamilton-Jacobi-Isaacs (HJI) equation), and it is obtained by passing to the limit in the well known Bellman's Dynamic Programming (DP) principle. The DP approach can be rather expensive from the computational point of view, but in some situations it gives a real advantage when compared to methods based on the Pontryagin's Maximum Principle, because the latter approach allows one to compute only open-loop controls and locally-optimal trajectories. Moreover, weak solutions to HJ equations are nowadays well understood in the framework of viscosity solutions, which offers the correct notion of solution for many applied problems. <s> BIB004 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> OTHER ALGORITHMS NOT INCLUDED IN THE SURVEY <s> The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience.
This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions, because of a singularity at the source. In this case, the factored eikonal equation is often preferred, and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS in the past, the more recent choice for applying it involves FM methods because of the efficiency in which sensitivities can be obtained using them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation.In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first and second order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the achieved accuracy for computing the travel time. We also demonstrate a recovery of a 2D and 3D heterogeneous medium by travel time tomography using the eikonal equation for forward modeling and inversion by Gauss-Newton. <s> BIB005
There are some variations of these approaches which focus on improving particular characteristics of the solution given by the Fast Methods. For example, in order to reduce the computation time, parallel versions of both the Fast Marching and Fast Sweeping BIB003 methods have been proposed. In , the Heat Method, used for computing geodesic distances in near-linear time, was introduced. Although it outperforms the presented methods in terms of computation time, it only works with constant speed functions, so it does not solve the problems analyzed in the experimental section. Additionally, the accuracy of the computed solution for these methods depends on the chosen grid size; however, higher-order approaches , BIB001 are able to improve the accuracy on the same grid at the cost of more computation time. For applications in which a high-accuracy solution of the Eikonal equation at the source point is needed, the factored Eikonal equation leads to much more accurate solutions by analytically handling the source singularity BIB002 , BIB005 . Different two-scale methods are proposed in : the Fast Marching-Sweeping Method (FMSM), the Heap Cell Method (HCM), and the Fast Heap Cell Method (FHCM). They combine the FMM and FSM in order to obtain the best features of both algorithms, dividing the grid into two different levels and performing marching on a coarser scale and then sweeping on a finer scale. However, these methods have not been included in this analysis for several reasons: 1) the performance of HCM and FHCM depends on the discretization of the coarse grid, where the optimal parameter depends on the speed profile; furthermore, FHCM introduces additional error. 2) the FMSM error is not mathematically bounded. Thus, the comparison with other Fast Methods becomes more complex.
3) they assume that the speed is almost constant on domains of arbitrary size . Although this is not a restriction on the actual speed function, it is a strong assumption for some of the designed experiments. Additionally, the single-pass methods suggested in BIB004 have not been included in this survey because, as the authors conclude, it is not always possible to know in advance which method, among those presented, should be used. This is an important drawback for practical applications such as robotics.
Fast Methods for Eikonal Equations: An Experimental Survey <s> A. N-DIMENSIONAL DISCRETE EIKONAL EQUATION <s> We devise new numerical algorithms, called PSC algorithms, for following fronts propagating with curvature-dependent speed. The speed may be an arbitrary function of curvature, and the front also can be passively advected by an underlying flow. These algorithms approximate the equations of motion, which resemble Hamilton-Jacobi equations with parabolic right-hand sides, by using techniques from hyperbolic conservation laws. Non-oscillatory schemes of various orders of accuracy are used to solve the equations, providing methods that accurately capture the formation of sharp gradients and cusps in the moving fronts. The algorithms handle topological merging and breaking naturally, work in any number of space dimensions, and do not require that the moving surface be written as a function. The methods can be also used for more general Hamilton-Jacobi-type problems. We demonstrate our algorithms by computing the solution to a variety of surface motion problems. <s> BIB001 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> A. N-DIMENSIONAL DISCRETE EIKONAL EQUATION <s> Abstract ::: A fast marching level set method is presented for monotonically advancing fronts, which leads to an extremely fast scheme for solving the Eikonal equation. Level set methods are numerical techniques for computing the position of propagating fronts. They rely on an initial value partial differential equation for a propagating level set function and use techniques borrowed from hyperbolic conservation laws. Topological changes, corner and cusp development, and accurate determination of geometric properties such as curvature and normal direction are naturally obtained in this setting. This paper describes a particular case of such methods for interfaces whose speed depends only on local position. 
The technique works by coupling work on entropy conditions for interface motion, the theory of viscosity solutions for Hamilton-Jacobi equations, and fast adaptive narrow band level set methods. The technique is applicable to a variety of problems, including shape-from-shading problems, lithographic development calculations in microchip manufacturing, and arrival time problems in control theory. <s> BIB002 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> A. N-DIMENSIONAL DISCRETE EIKONAL EQUATION <s> In this paper a fast sweeping method for computing the numerical solution of Eikonal equations on a rectangular grid is presented. The method is an iterative method which uses upwind difference for discretization and uses Gauss-Seidel iterations with alternating sweeping ordering to solve the discretized system. The crucial idea is that each sweeping ordering follows a family of characteristics of the corresponding Eikonal equation in a certain direction simultaneously. The method has an optimal complexity of O(N) for N grid points and is extremely simple to implement in any number of dimensions. Monotonicity and stability properties of the fast sweeping algorithm are proven. Convergence and error estimates of the algorithm for computing the distance function is studied in detail. It is shown that 2 n Gauss-Seidel iterations is enough for the distance function in n dimensions. An estimation of the number of iterations for general Eikonal equations is also studied. Numerical examples are used to verify the analysis. <s> BIB003
In this section the most common first-order discretization of the Eikonal equation is detailed. It is first derived in 2D for better understanding and then an n-dimensional approach is explained. The most common first-order discretization is given in BIB001 , which uses an upwind-difference scheme to approximate the partial derivatives of T(x) (D^{±x}_{ij} represents the one-sided partial difference operator in direction ±x, e.g. D^{-x}_{ij}T = (T_{i,j} - T_{i-1,j})/Δx):

max(D^{-x}_{ij}T, -D^{+x}_{ij}T, 0)^2 + max(D^{-y}_{ij}T, -D^{+y}_{ij}T, 0)^2 = 1/F^2_{ij}   (3)

A simpler but less accurate solution to (3) is proposed in BIB003 :

max((T - T_x)/Δx, 0)^2 + max((T - T_y)/Δy, 0)^2 = 1/F^2_{ij}   (4)

where T = T_{i,j}, Δx and Δy are the grid spacing in the x and y directions, and

T_x = min(T_{i-1,j}, T_{i+1,j}),   T_y = min(T_{i,j-1}, T_{i,j+1})   (5)

Since we are assuming that the speed of the front is positive (F > 0), T must be greater than T_x and T_y whenever the front wave has not already passed over the coordinates (i, j). Therefore, (4) can be simplified as:

((T - T_x)/Δx)^2 + ((T - T_y)/Δy)^2 = 1/F^2_{ij}   (6)

Equation (6) is a regular quadratic equation of the form aT^2 + bT + c = 0, where:

a = Δy^2 + Δx^2,   b = -2(Δy^2 T_x + Δx^2 T_y),   c = Δy^2 T_x^2 + Δx^2 T_y^2 - Δx^2 Δy^2 / F^2_{ij}   (7)

In order to simplify the notation for the n-dimensional case, we assume that the grid is composed of hypercubic cells, that is, Δx = Δy = Δz = ... = h. Let us denote by T_d the generalization of T_x or T_y for dimension d, up to N dimensions, and by F the propagation speed for the point with coordinates (i, j, k, ...). Operating and simplifying terms, the discretization of the Eikonal equation is a quadratic equation with parameters:

a = N,   b = -2 Σ_{d=1}^{N} T_d,   c = Σ_{d=1}^{N} T_d^2 - h^2/F^2   (8)
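A compact way to implement this local update is to solve the quadratic aT^2 + bT + c = 0 directly, discarding any dimension whose neighbour value lies above the candidate solution so that the upwind condition T > T_d holds for every dimension actually used. The following Python sketch of the n-dimensional update is our own illustrative code, with hypothetical names.

```python
import math

def local_update(t_neighbors, f, h):
    """First-order upwind update for one grid point in N dimensions.

    t_neighbors: for each dimension d, T_d = the smaller arrival time
        of the two axis neighbours (math.inf if both are unknown).
    f: local propagation speed (> 0); h: grid spacing.
    Returns the smallest T satisfying the discretized Eikonal equation.
    """
    # Only dimensions whose neighbour time lies below the solution are
    # upwind; start with all finite ones and drop the largest until the
    # candidate respects T >= T_d for every dimension used.
    ts = sorted(t for t in t_neighbors if t < math.inf)
    n = len(ts)
    while n > 0:
        active = ts[:n]
        a = float(n)
        b = -2.0 * sum(active)
        c = sum(t * t for t in active) - (h / f) ** 2
        disc = b * b - 4.0 * a * c
        if disc >= 0.0:
            t_cand = (-b + math.sqrt(disc)) / (2.0 * a)
            if t_cand >= active[-1]:
                return t_cand
        n -= 1
    return math.inf  # no known neighbour: the point cannot be updated yet
```

For example, with h = F = 1 and two axis neighbours both at time 1, the update yields 1 + 1/√2 ≈ 1.71; with neighbours at 0 and 10 the second dimension is dropped and the one-sided value 1 is returned.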
Fast Methods for Eikonal Equations: An Experimental Survey

III. FAST MARCHING METHODS
The Fast Marching Method (FMM) BIB001 is the most common Eikonal solver. It can be classified as a label-setting, Dijkstra-like algorithm . It uses a first-order upwind finite difference scheme, which is described in detail in Section II, to simulate an isotropic front propagation, computing the solution following Bellman's optimality principle :

T_i = min_{x_j ∈ N(x_i)} (T_j + c_ij)

In other words, a node x_i is connected to the parent x_j in its neighborhood N(x_i) which minimizes (or maximizes) the value of the function (in this case T_i), composed of the value of T_j plus the cost of traveling from x_j to x_i, represented as c_ij. This discretization takes into account the spatial representation (i.e., a rectangular grid in two dimensions) and the values of all the causal upwind neighbors. This is the main difference with Dijkstra's algorithm, since Dijkstra is designed to work on graphs, assuming discrete traveling, and the value of a node x_i only depends on one parent x_j. The algorithm labels the cells in three different sets: 1) Frozen: those cells whose value has already been computed and will not change during new iterations, 2) Unknown: cells with no value assigned, to be evaluated, and 3) Narrow band (or just Narrow): the frontier between Frozen and Unknown, containing those cells with a value assigned that may still improve. These sets are mutually exclusive, that is, a cell cannot belong to more than one of them at the same time. The implementation of the Narrow set is a critical aspect of FMM, so a more detailed discussion is carried out in Section III-A. The procedure to compute FMM is detailed in Algorithm 3. Initially, all points in the grid belong to the Unknown set and have an infinite arrival time. The initial points (wave sources) are assigned a value of 0 and inserted in Narrow (lines 2-7).
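As an illustration, the first-order upwind update for a single cell of a 2D grid with spacing h and local speed F can be sketched as follows (a sketch with our own function and variable names, not code from the surveyed implementations; Tx and Ty are the smallest known arrival times among the horizontal and vertical neighbors):

```python
import math

def solve_eikonal_2d(Tx, Ty, f, h=1.0):
    """First-order upwind Eikonal update for one grid cell.

    Tx, Ty: minimum arrival times among the horizontal and vertical
    neighbors (math.inf if none is known). f: local propagation speed F > 0.
    Solves ((T-Tx)/h)^2 + ((T-Ty)/h)^2 = 1/f^2, falling back to the
    one-sided update when only one axis contributes causally.
    """
    Ta, Tb = min(Tx, Ty), max(Tx, Ty)
    if Tb - Ta >= h / f:          # the front arrives from one axis only
        return Ta + h / f
    # two-neighbor solution of the quadratic
    s = Ta + Tb
    disc = s * s - 2.0 * (Ta * Ta + Tb * Tb - (h / f) ** 2)
    return 0.5 * (s + math.sqrt(disc))
```

With a single known neighbor the update degenerates to the Dijkstra-like `Ta + h/f`; with two known neighbors it interpolates the front direction, which is precisely where FMM departs from graph search.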
Then, the main FMM loop starts by choosing the element with the minimum arrival time from Narrow (line 10), and all its non-Frozen neighbors are evaluated: for each of them, the Eikonal equation is solved and the new arrival time is kept if it improves the existing one. If the evaluated cell is in Unknown, it is transferred to Narrow. Finally, the previously chosen point is transferred from Narrow to Frozen (lines 21 and 22) and a new iteration starts, until the Narrow set is empty. The arrival times map T is returned as the result of the procedure.
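The loop above can be sketched as a compact, runnable 2D instance (a sketch under our own naming, assuming unit spacing, 4-connected neighbors and Python's heapq as the Narrow band; stale heap entries stand in for the Increase operation, since heapq has no decrease-key):

```python
import heapq, math

def fmm_2d(speed, sources, h=1.0):
    """Fast Marching Method on a 2D grid (illustrative sketch).

    speed: 2D list of positive propagation speeds F.
    sources: iterable of (row, col) wave-source cells.
    Returns the arrival-time map T.
    """
    rows, cols = len(speed), len(speed[0])
    T = [[math.inf] * cols for _ in range(rows)]
    frozen = [[False] * cols for _ in range(rows)]
    narrow = []                                   # min-heap of (T_i, row, col)
    for (r, c) in sources:
        T[r][c] = 0.0
        heapq.heappush(narrow, (0.0, r, c))

    def update(r, c):
        # first-order upwind update from the causal neighbors
        tx = min(T[r][c-1] if c > 0 else math.inf,
                 T[r][c+1] if c < cols - 1 else math.inf)
        ty = min(T[r-1][c] if r > 0 else math.inf,
                 T[r+1][c] if r < rows - 1 else math.inf)
        ta, tb = min(tx, ty), max(tx, ty)
        hf = h / speed[r][c]
        if tb - ta >= hf:
            return ta + hf
        s = ta + tb
        return 0.5 * (s + math.sqrt(s * s - 2.0 * (ta * ta + tb * tb - hf * hf)))

    while narrow:
        t_min, r, c = heapq.heappop(narrow)       # Narrow top + pop
        if frozen[r][c]:
            continue                              # stale (already frozen) entry
        frozen[r][c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not frozen[nr][nc]:
                t_new = update(nr, nc)
                if t_new < T[nr][nc]:             # Narrow increase/push
                    T[nr][nc] = t_new
                    heapq.heappush(narrow, (t_new, nr, nc))
    return T
```

On a constant-speed grid with a corner source, the axis-aligned values are exact (T grows by h per cell), while diagonal values show the well-known first-order overestimate of the Euclidean distance.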
Algorithm 3 FMM Algorithm
1: procedure FMM(X, X_s)
Initialization:
2: for x_i ∈ X do
3:    T_i ← ∞
4: end for
5: for x_i ∈ X_s do
6:    T_i ← 0
7:    Narrow ← Narrow ∪ {x_i}
8: end for
Propagation:
9: while Narrow ≠ ∅ do
10:    x_min ← arg min_{x_i ∈ Narrow} {T_i}    ▹ Narrow top operation
11:    for x_i ∈ N(x_min) \ Frozen do    ▹ For all neighbors not in Frozen
12:        T̃_i ← SolveEikonal(x_i)
13:        if T̃_i < T_i then
14:            T_i ← T̃_i    ▹ Narrow increase operation if x_i ∈ Narrow
15:            if x_i ∈ Unknown then
16:                Unknown ← Unknown \ {x_i}
17:                Narrow ← Narrow ∪ {x_i}    ▹ Narrow push operation
18:            end if
19:        end if
20:    end for
21:    Narrow ← Narrow \ {x_min}    ▹ Narrow pop operation
22:    Frozen ← Frozen ∪ {x_min}
23: end while
24: return T
25: end procedure
FIGURE 2. Untidy priority queue representation. Top: first iteration, the four neighbors of the initial point are pushed. Middle: the first bucket becomes empty, so the circular array advances one position. Cell c2 is evaluated first because it was the first pushed into the bucket. Bottom: after a few iterations, an entire loop on the queue is about to be completed.
TABLE 1. Summary of amortized time complexities for common heaps used in FMM (n is the number of elements in the heap).

Among all the existing heaps, FMM is usually implemented with a binary heap BIB001 . The Fibonacci heap BIB002 has a better amortized time for the Increase and Push operations, but it carries additional computational overhead with respect to other heaps. For relatively small grids, where the narrow band is composed of few elements and the performance is still far from its asymptotic behavior, the binary heap performs better. Table 1 summarizes the time complexities of these heaps. Note that n is the number of cells in the map, since the worst case is to have all the cells in the heap. Each cell is pushed and popped at most once. In each loop iteration, the top of Narrow is accessed (O(1)), the Eikonal equation is solved for at most 2N neighbors (O(1) for a given N), these cells are pushed or increased (O(log n) in the worst case), and finally the top cell is popped (O(log n)). Therefore each iteration is at most O(log n). Since the loop is executed at most n times, the total worst-case FMM complexity is O(n log n), with n the total number of cells of the grid. Furthermore, as pointed out in , the method has a bad cache locality, since adjacent cells on the FMM heap have no spatial relationship, and this problem worsens as the number of dimensions increases.
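As an aside, general-purpose heaps often lack the Increase operation of Table 1; Python's heapq is one example. A common workaround (our sketch, not prescribed by the survey) is lazy deletion: an improved cell is simply re-pushed and outdated entries are discarded at pop time, preserving the O(log n) amortized bounds for Push/Increase and Pop:

```python
import heapq

class NarrowBand:
    """Binary-heap narrow band with a lazy Increase operation (sketch)."""

    def __init__(self):
        self._heap = []          # entries are (arrival time, cell id)
        self._best = {}          # current best time per cell

    def push_or_increase(self, cell, t):
        # Re-push instead of updating in place; the old entry becomes stale.
        if t < self._best.get(cell, float("inf")):
            self._best[cell] = t
            heapq.heappush(self._heap, (t, cell))

    def pop(self):
        while self._heap:
            t, cell = heapq.heappop(self._heap)
            if self._best.get(cell) == t:      # skip stale entries
                del self._best[cell]
                return cell, t
        raise IndexError("pop from empty narrow band")

    def __bool__(self):
        return bool(self._best)
```

The price is extra stale entries in the heap, which slightly worsens the constants but not the asymptotic complexity.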
C. UNTIDY FAST MARCHING METHOD
The Untidy Fast Marching Method (UFMM) BIB002 , BIB003 follows exactly the same procedure as FMM. However, it uses a special heap structure which reduces the computational complexity of the method to O(n): the untidy priority queue. This untidy priority queue is closer to a look-up table than to a tree. It assumes that the values of F are bounded, and hence the values of T are also bounded. The untidy queue, depicted in Fig. 2, is a circular array which divides the maximum range of T into a set of k consecutive buckets. Each bucket contains an unordered list of cells with similar values of T_i. The low and high threshold values of each bucket evolve with the iterations of the algorithm, trying to maintain a uniform distribution of the elements of Narrow among the buckets. Since the index of the corresponding bucket can be computed analytically, Push is O(1), as are Top and Pop. Besides, as the number of buckets is smaller than the number of cells, the Increase operation is, on average, O(1). Therefore, the total complexity of UFMM is O(n). However, since elements within a bucket are not sorted (a FIFO strategy is applied in each bucket), errors are introduced into the final result. Nevertheless, it has been shown that the accumulated additional error is bounded by O(h), with h being the cell size, which is the same order of magnitude as in the original FMM.
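A minimal bucket queue in this spirit can be sketched as follows (a sketch with our own class and parameter names, under the assumptions stated above: keys are non-decreasing over time and never exceed the current minimum by more than a known span, as holds for Eikonal fronts with bounded F):

```python
class UntidyQueue:
    """Circular-array bucket queue ('untidy' priority queue) sketch.

    Cells inside a bucket are kept FIFO, so Push, Top and Pop are O(1)
    at the price of an O(h)-bounded quantization error in the ordering.
    """
    def __init__(self, num_buckets, span):
        self.k = num_buckets
        self.width = span / num_buckets        # key range covered per bucket
        self.buckets = [[] for _ in range(num_buckets)]
        self.base = 0                          # bucket holding the lowest keys
        self.low = 0.0                         # lower key bound of that bucket
        self.size = 0

    def push(self, cell, t):
        i = int((t - self.low) / self.width)   # bucket index, computed analytically
        i = max(0, min(i, self.k - 1))         # clamp quantization edge cases
        self.buckets[(self.base + i) % self.k].append((t, cell))
        self.size += 1

    def pop(self):
        if self.size == 0:
            raise IndexError("pop from empty untidy queue")
        while not self.buckets[self.base]:     # advance the circular array
            self.base = (self.base + 1) % self.k
            self.low += self.width
        self.size -= 1
        t, cell = self.buckets[self.base].pop(0)  # FIFO within the bucket
        return cell, t
```

Note that two cells falling into the same bucket are served in insertion order regardless of their exact keys, which is precisely the source of the bounded error discussed above.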
IV. FAST SWEEPING METHODS
The Fast Sweeping Method (FSM) BIB002 , BIB001 is an iterative algorithm which computes the time-of-arrival map by successively sweeping (traversing) the whole grid in alternating orders. FSM performs Gauss-Seidel iterations in alternating directions. These directions are chosen so that every possible characteristic curve of the Eikonal solution falls within one of the quadrants (or octants in 3D) of the environment. For instance, a bi-dimensional grid has four possible Gauss-Seidel iterations (the combinations of traversing the x and y dimensions forwards and backwards): North-East, North-West, South-East and South-West, as shown in Fig. 3. The FSM is a simple algorithm: it performs sweeps until no value is improved. In each sweep, the Eikonal equation is solved for every cell. However, generalizing this algorithm to n dimensions is complex and, to the best of our knowledge, only 2D and 3D solutions exist in the literature. In this survey we introduce a novel n-dimensional version, which is detailed in Algorithm 5. We denote the sweeping directions by an array SweepDirs with elements 1 or -1, where 1 (-1) means forwards (backwards) traversal in the corresponding dimension. This array is initialized to all 1 (North-East in the 2D case, North-East-Top in 3D) and the grid is initialized as in FMM (lines 2-5). The main loop updates SweepDirs and then a sweep is performed in the new direction (lines 9-10). The GETSWEEPDIRS() procedure (see Algorithm 6) is in charge of generating the appropriate Gauss-Seidel iteration directions: starting from SweepDirs = [1, 1, 1] in 3D, successive calls generate all eight sign combinations. Note that the literature describes at least three different sequences for the sweep pattern and shows that the optimal sequence depends on the environment BIB006 , BIB001 . The sequence used in this work has been chosen so that it can be computed efficiently in an n-dimensional setting. Besides, it is equally valid, as the same set of directions is visited once all the sweeps are done.
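The set of directions that GETSWEEPDIRS() enumerates — one traversal per orthant — can be reproduced in a few lines (our sketch; the survey's procedure generates the same set incrementally rather than all at once):

```python
from itertools import product

def sweep_dirs(ndim):
    """Yield the 2^ndim Gauss-Seidel sweep directions for an ndim-D grid.

    Each direction is a sign vector: 1 traverses that dimension forwards,
    -1 backwards, so every characteristic orthant is covered once per cycle.
    """
    yield from product((1, -1), repeat=ndim)
```

In 2D this yields the four quadrant sweeps of Fig. 3; in 3D, the eight octant sweeps.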
Finally, the SWEEP() procedure (see Algorithm 7) recursively generates the Gauss-Seidel iterations following the traversal directions specified by the corresponding value of SweepDirs (line 4). Each recursive level traverses the whole corresponding dimension. Note that the extent of dimension n is denoted by X_n. Once the innermost loop is reached, the corresponding cell is evaluated and its value updated if necessary. The FSM carries out as many grid traversals as necessary until the value T_i of every cell has converged. Since no ordering of the data is used, the evaluation of each cell is O(1). As there are n cells and t traversals, the total computational complexity of FSM is O(nt).

Algorithm 5 Fast Sweeping Method
1: procedure FSM(X, T, F, X_s)
Initialization:
2: SweepDirs ← [1, . . . , 1]
3: T ← ∞
4: for x_i ∈ X_s do
5: T_i ← 0
6: end for
Propagation:
7: stop ← False
8: while stop ≠ True do
9: SweepDirs ← GETSWEEPDIRS(X, SweepDirs)
10: stop ← SWEEP(X, T, F, SweepDirs, N)
11: end while
12: return T
13: end procedure

Algorithm 6 Sweep Directions Algorithm
1: procedure getSweepDirs(X, SweepDirs)
2: for i = 1 : N do
3: SweepDirs_i ← −SweepDirs_i
4: if SweepDirs_i = −1 then
5: break  Finish For loop.
6: end if
7: end for
8: return SweepDirs
9: end procedure

However, the complexity constants depend greatly on the speed function F(x). For instance, in the case of a 2D empty map with a constant speed of propagation, four sweeps are enough to cover the entire map, therefore the complexity is O(4n), which is the minimum possible constant (assuming the start point is not in a corner of the map). On the other hand, the more complex the speed function or the environment, the more sweeps the algorithm needs to converge to the final solution, increasing the complexity of the method. Note that, as long as the same first-order upwind discretization is used, the T returned by FSM is exactly the same as that of all the FMM-like algorithms (except UFMM).
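For concreteness, the following is a compact 2D sketch of the sweeping scheme just described (illustrative Python, not the survey's implementation; `fsm_2d` and the first-order upwind solver are our names):

```python
import math

def _solve_eikonal(T, i, j, h, f):
    # First-order upwind update for cell (i, j) with local speed f > 0.
    ny, nx = len(T), len(T[0])
    ta = min(T[i - 1][j] if i > 0 else math.inf,
             T[i + 1][j] if i < ny - 1 else math.inf)
    tb = min(T[i][j - 1] if j > 0 else math.inf,
             T[i][j + 1] if j < nx - 1 else math.inf)
    if math.isinf(ta) or math.isinf(tb) or abs(ta - tb) >= h / f:
        return min(ta, tb) + h / f
    return 0.5 * (ta + tb + math.sqrt(2 * (h / f) ** 2 - (ta - tb) ** 2))

def fsm_2d(speed, sources, h=1.0, tol=1e-9, max_sweeps=100):
    # 2D Fast Sweeping: Gauss-Seidel sweeps in the four quadrant orders
    # (NE, NW, SE, SW) until no arrival time improves.
    ny, nx = len(speed), len(speed[0])
    T = [[math.inf] * nx for _ in range(ny)]
    for i, j in sources:
        T[i][j] = 0.0
    orders = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    for _ in range(max_sweeps):
        improved = False
        for sy, sx in orders:
            rows = range(ny) if sy == 1 else range(ny - 1, -1, -1)
            for i in rows:
                cols = range(nx) if sx == 1 else range(nx - 1, -1, -1)
                for j in cols:
                    t = _solve_eikonal(T, i, j, h, speed[i][j])
                    if t < T[i][j] - tol:
                        T[i][j] = t
                        improved = True
        if not improved:
            break
    return T
```

On a uniform-speed grid with the source in a corner region, this converges after the four quadrant sweeps, matching the O(4n) best case noted above.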
Fast Methods for Eikonal Equations: An Experimental Survey
The LSM procedure is detailed in Algorithm 8. During the initialization, all the cells are labeled as Frozen (here, locked and Frozen have the same meaning). Then, the starting cells X_s are assigned a 0 value and all their neighbors are labeled as Narrow (likewise, unlocked and Narrow have the same meaning). Then, the wave propagation is computed by performing as many grid traversals as necessary until no cell improves its time-of-arrival value. As in FSM, the GETSWEEPDIRS() procedure is in charge of generating the appropriate Gauss-Seidel iteration directions. In every iteration, the recursive locking sweeping algorithm, detailed in Algorithm 9, is performed. Essentially, it is the same procedure as in FSM. However, there are two main differences: 1) the Eikonal equation is computed only for those cells labeled as Narrow, otherwise they are skipped (see line 9), and 2) after every evaluation, if the time-of-arrival value (T_i) of cell x_i is improved, all neighbors of cell x_i which have a higher value than T_i are labeled as Narrow so that they are evaluated in the next iteration. Note that LSM maintains the asymptotic computational complexity of FSM, as well as the number of required sweeps. In practice, however, most of the cells are locked during a sweep, and therefore the time saved during the computation is significant.

Consider a propagating front: at a given time, the Narrow band will be composed of the set of cells belonging to the wavefront. GMM selects a group G out of Narrow composed of the global minimum and the local minima in Narrow. Then, every cell neighboring G is evaluated and added to Narrow. The points in G have to be chosen carefully so that causality is not violated, since GMM does not sort the Narrow set.
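The locking mechanism can be sketched as follows (an illustrative Python version with our own naming, not the survey's code; it mirrors the FSM sweeps but skips locked cells and unlocks improvable neighbors):

```python
import math

def _update(T, i, j, h, f):
    # First-order upwind Eikonal update (same stencil as in the FSM sketch).
    ny, nx = len(T), len(T[0])
    ta = min(T[i - 1][j] if i > 0 else math.inf,
             T[i + 1][j] if i < ny - 1 else math.inf)
    tb = min(T[i][j - 1] if j > 0 else math.inf,
             T[i][j + 1] if j < nx - 1 else math.inf)
    if math.isinf(ta) or math.isinf(tb) or abs(ta - tb) >= h / f:
        return min(ta, tb) + h / f
    return 0.5 * (ta + tb + math.sqrt(2 * (h / f) ** 2 - (ta - tb) ** 2))

def lsm_2d(speed, sources, h=1.0, tol=1e-9):
    # Lock Sweeping sketch: sweep as in FSM, but evaluate only unlocked
    # (Narrow) cells; an improved cell unlocks its higher-valued neighbors.
    ny, nx = len(speed), len(speed[0])
    T = [[math.inf] * nx for _ in range(ny)]
    narrow = [[False] * nx for _ in range(ny)]
    for i, j in sources:
        T[i][j] = 0.0
    for i, j in sources:  # unlock the neighbors of the starting cells
        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= a < ny and 0 <= b < nx and T[a][b] > 0.0:
                narrow[a][b] = True
    orders = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    improved = True
    while improved:
        improved = False
        for sy, sx in orders:
            rows = range(ny) if sy == 1 else range(ny - 1, -1, -1)
            for i in rows:
                cols = range(nx) if sx == 1 else range(nx - 1, -1, -1)
                for j in cols:
                    if not narrow[i][j]:
                        continue  # locked cells are skipped
                    narrow[i][j] = False
                    t = _update(T, i, j, h, speed[i][j])
                    if t < T[i][j] - tol:
                        T[i][j] = t
                        improved = True
                        for a, b in ((i - 1, j), (i + 1, j),
                                     (i, j - 1), (i, j + 1)):
                            if 0 <= a < ny and 0 <= b < nx and T[a][b] > t:
                                narrow[a][b] = True
    return T
```

The returned map coincides with the FSM result; only the amount of work per sweep changes.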
In order to select those values, GMM uses the threshold δτ given in (16). Although √N was used in the work where GMM was originally presented, we have chosen (16) as detailed in BIB002 , since the results for the original δτ are much worse than FMM in most cases, reaching one order of magnitude of difference. If the time difference between two adjacent cells is larger than δτ, their values will barely affect each other, since the wavefront propagation direction is more perpendicular than parallel to the line segment formed by both cells. However, the downwind points (those to be evaluated in future iterations) can be affected by both adjacent cells. Therefore, points in G are evaluated twice to avoid instabilities.

GMM is detailed in Algorithm 10. Its initialization is done in the same way as in FMM. Then, a reverse traversal through the selected points is performed, computing and updating their values (lines 20-24). Next, in lines 28-40 a forward traversal is carried out. The operations used are the same as in the reverse traversal.

Algorithm 9 Recursive Lock Sweeping
[...]
3: if n > 1 then
4: for i ∈ X_n following SweepDirs_n do
5: stop ← LOCKSWEEP(X, T, F, SweepDirs, n − 1)
6: end for
7: else
8: for i ∈ X_1 following SweepDirs_1 do
9: if x_i ∈ Narrow then  x_i is the corresponding cell.
[...]
14: for x_j ∈ N(x_i) do
15: if T_i < T_j then  Add improvable neighbors to Narrow.
16: Narrow ← Narrow ∪ {x_j}
[...]
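The group selection itself is simple. The following sketch illustrates it (names are hypothetical, and δτ is passed in rather than computed via (16)):

```python
def select_group(narrow_T, dtau):
    # G = the global minimum plus all narrow-band cells within dtau of it,
    # so that causality is preserved without sorting the narrow band.
    t_min = min(narrow_T.values())
    return [cell for cell, t in narrow_T.items() if t <= t_min + dtau]

# Toy narrow band: two nearly simultaneous cells and one later cell.
narrow = {(0, 1): 1.00, (1, 0): 1.01, (2, 2): 1.80}
group = select_group(narrow, 0.1)
```

Here the two cells within δτ = 0.1 of the minimum are marched together, while the later cell stays in the narrow band.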
B. DYNAMIC DOUBLE QUEUE METHOD
The Dynamic Double Queue Method (DDQM) BIB001 is inspired by LSM but resembles GMM.

Algorithm 10 Group Marching Method
1: procedure GMM(X, T, F, X_s)
Initialization:
2: Unknown ← X, Narrow ← ∅, Frozen ← ∅
3: for x_i ∈ X_s do
[...]
6: [...]  Adding neighbors of starting points to Narrow.
[...]
13: end if
14: Unknown ← Unknown \ {x_i}
16: end for
17: end for
Propagation:
18: while Narrow ≠ ∅ do
[...]
20: for x_i ∈ (Narrow ≤ T_m) REVERSE do  Reverse traversal.
[...]
26: end for
27: end for
28: for x_i ∈ (Narrow ≤ T_m) FORWARD do  Forward traversal.
29: for x_j ∈ (N(x_i) ∩ X \ Frozen) do
[...]
Unknown ← Unknown \ {x_i}
[...]
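A runnable approximation of the GMM procedure described above is sketched below (simplified Python, not the paper's code; the δτ used here is an ad hoc stand-in for (16)):

```python
import math

def _upwind(T, i, j, h, f):
    # First-order upwind Eikonal update (same stencil as in the FSM sketch).
    ny, nx = len(T), len(T[0])
    ta = min(T[i - 1][j] if i > 0 else math.inf,
             T[i + 1][j] if i < ny - 1 else math.inf)
    tb = min(T[i][j - 1] if j > 0 else math.inf,
             T[i][j + 1] if j < nx - 1 else math.inf)
    if math.isinf(ta) or math.isinf(tb) or abs(ta - tb) >= h / f:
        return min(ta, tb) + h / f
    return 0.5 * (ta + tb + math.sqrt(2 * (h / f) ** 2 - (ta - tb) ** 2))

def gmm_2d(speed, sources, h=1.0, dtau=None):
    # Group Marching sketch: march every narrow cell within dtau of the
    # narrow-band minimum at once, evaluating the group twice (reverse,
    # then forward) before freezing it.
    ny, nx = len(speed), len(speed[0])
    if dtau is None:
        dtau = 0.5 * h / max(max(row) for row in speed)  # ad hoc choice
    T = [[math.inf] * nx for _ in range(ny)]
    frozen = [[False] * nx for _ in range(ny)]
    def nbrs(i, j):
        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= a < ny and 0 <= b < nx:
                yield a, b
    narrow = []
    for i, j in sources:
        T[i][j] = 0.0
        frozen[i][j] = True
    for i, j in sources:
        for a, b in nbrs(i, j):
            if not frozen[a][b] and (a, b) not in narrow:
                T[a][b] = _upwind(T, a, b, h, speed[a][b])
                narrow.append((a, b))
    while narrow:
        t_m = min(T[i][j] for i, j in narrow) + dtau
        group = [c for c in narrow if T[c[0]][c[1]] <= t_m]
        for i, j in reversed(group):  # first (reverse) evaluation
            T[i][j] = min(T[i][j], _upwind(T, i, j, h, speed[i][j]))
        for i, j in group:            # second (forward) evaluation, freeze
            T[i][j] = min(T[i][j], _upwind(T, i, j, h, speed[i][j]))
            frozen[i][j] = True
        narrow = [c for c in narrow if not frozen[c[0]][c[1]]]
        for i, j in group:            # expand the band around the group
            for a, b in nbrs(i, j):
                if not frozen[a][b]:
                    t = _upwind(T, a, b, h, speed[a][b])
                    if t < T[a][b]:
                        T[a][b] = t
                    if (a, b) not in narrow:
                        narrow.append((a, b))
    return T
```

Each iteration freezes at least the minimum cell, so the band always advances; with a sufficiently small δτ the result matches the FMM solution on simple maps.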
The Double Dynamic Queue Method (DDQM) is conceptually simple: the Narrow set is divided into two non-sorted FIFO queues, one with cells to be evaluated sooner and one with cells to be evaluated later. In every iteration, an element from the first queue is evaluated. If its arrival time is improved, the neighboring cells with higher time are unlocked and added to the first or second queue, depending on the value of the updated cell. Once the first queue is empty, the queues are swapped and the algorithm continues. The purpose is to achieve a pseudo-ordering of the cells, so that cells with lower values are evaluated first. Since the queues are not sorted, the arrival time of the same cell may have to be solved many times until its value converges. DDQM dynamically computes the threshold value which sets the division between the two queues, depending on the number of points in each queue, trying to reach an equilibrium. Reference BIB008 includes an in-depth analysis of the update of the threshold in each iteration. In this work, the initial value of the threshold step is increased in every iteration according to (17), where n is the total number of cells in the grid. The step value should have time units, whereas the expression originally suggested for it has [t^{-1}] units (probably an error due to the ambiguity of using the speed F or the slowness f = 1/F). Therefore, (17) is proposed as an alternative in this work. While the algorithm evolves, every time the first queue is emptied UPDATESTEP() (see Algorithm 11) is called, using the value of the current step, the number of cells inserted in the first queue c_1, and the total number of cells inserted c_total as inputs. Then, step is modified so that the number of cells inserted in the first queue is between 65% and 75% of the total inserted cells.
This is a conservative approach: the closer this percentage is to 50%, the faster DDQM is. However, the penalization caused by percentages lower than 50% is much more significant than that of higher percentages. Note that in Algorithm 11 the step is increased by a factor of 1.5 but decreased by a factor of 2. This makes step converge to a value instead of overshooting around the optimal value. Dividing by a larger number causes the first queue to become empty earlier; thus, the next iteration finishes faster and a better step value can be computed.

The method of DDQM is detailed in Algorithm 12. As in LSM, points are divided into the locked (Frozen) or unlocked (Narrow) sets. The initialization labels all the points as Frozen except for the neighbors of the start points, which are added to the first queue. While the first queue is not empty, its front element is extracted and evaluated. If its time value is improved, all its locked neighbors with higher values are unlocked and added to their corresponding queue.

In BIB008, three methods were proposed: 1) single-queue (SQ), which results in a simpler algorithm; 2) two-queue static (TQS), where the step is not updated; and 3) two-queue dynamic (which we call DDQM). SQ and TQS slightly improve on DDQM in some experiments, but when DDQM improves on SQ and TQS (for instance, in environments with noticeable speed changes) the difference can reach one order of magnitude. Therefore, we decided to include DDQM instead of SQ and TQS, since it has shown more adaptive behavior. In any case, all of these methods return the same solution as FMM. Regarding complexity, in the worst case the whole grid is contained in both queues and traversed many times during the propagation. However, since queue insertion and deletion are O(1) operations, the overall complexity is O(n).
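The asymmetric threshold update described above (increase by a factor of 1.5, decrease by a factor of 2, keeping the fraction of insertions into the first queue within the target band) can be sketched as follows. This is our own illustrative code, not the reference implementation of Algorithm 11; the function and parameter names are ours.

```python
def update_step(step, c1, c_total, low=0.65, high=0.75):
    """Adjust the DDQM threshold step so that the fraction of cells
    inserted into the first queue stays between `low` and `high`.

    The asymmetric factors (x1.5 up, /2 down) make the step converge
    toward a stable value instead of oscillating around the optimum.
    """
    if c_total == 0:
        return step  # nothing was inserted this round; keep the step
    fraction = c1 / c_total
    if fraction < low:
        step *= 1.5   # too few cells in the first queue: widen the band
    elif fraction > high:
        step /= 2.0   # too many cells in the first queue: shrink the band
    return step
```

Dividing by the larger factor is what empties the first queue earlier, so a better step estimate becomes available sooner.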
Note that SWAP() can be efficiently implemented in O(1) as a circular binary index, or by updating references (or pointers); therefore, there is no need for a real swap operation. In FIM, new elements are inserted into the list just before the point being currently evaluated, x_i. Finally, this point is removed from Narrow and labeled as Frozen (lines 25 and 26). During the different iterations of the algorithm, a node can be added several times to the Narrow set, since every time an upwind (parent) neighbor is updated the node can improve its value. In the worst case, Narrow contains the whole grid and the loop goes through all the points several times. Operations on the list are O(1); therefore, the overall computational complexity of FIM is O(n).
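The O(1) swap mentioned above can be realized by toggling a binary index into a pair of queues, so that no element is ever copied. A minimal sketch, with names of our own choosing:

```python
from collections import deque


class TwoQueues:
    """Two FIFO queues addressed through a binary index; 'swapping'
    them is an O(1) index toggle rather than a data move."""

    def __init__(self):
        self.queues = [deque(), deque()]
        self.first = 0  # index of the queue currently being emptied

    def push(self, cell, to_first):
        # Insert into the first or second queue, depending on the
        # threshold test performed by the caller.
        self.queues[self.first if to_first else 1 - self.first].append(cell)

    def pop_first(self):
        return self.queues[self.first].popleft()

    def first_empty(self):
        return not self.queues[self.first]

    def swap(self):
        self.first = 1 - self.first  # O(1): no elements are copied
```

Both `append` and `popleft` on a deque are O(1), which is what keeps the overall queue management linear.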
C. FAST ITERATIVE METHOD
The Fast Iterative Method (FIM) BIB002 is based on an earlier iterative method but is inspired by FMM. It also resembles DDQM (concretely, its single-queue variant). It iteratively evaluates every point in Narrow until it converges. Once a node has converged, its neighbors are inserted into Narrow and the process continues. Narrow is implemented as a non-sorted list. The algorithm requires a convergence parameter ε: if T_i is improved by less than ε, it is considered converged. As a result, if a small enough ε (depending on the environment) is chosen, FIM returns the same solution as FMM. However, it can be sped up by allowing small errors bounded by ε. FIM is designed to be efficient for parallel computing, since all the elements in Narrow can be evaluated simultaneously. However, we focus on its sequential implementation in order to have a fair comparison with the other methods. Algorithm 13 details the steps of FIM. Its initialization is the same as that of FMM. Then, for each element in Narrow, its value is updated. If the value difference is less than ε, the neighbors are evaluated and, in case their value is improved, they are added to Narrow. Since Narrow is a list, the new elements are inserted just before the point currently being evaluated.
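The sequential loop described above can be sketched on a 2D grid as follows. This is our own minimal implementation for illustration (not the benchmarked code of this survey): it uses the first-order Godunov upwind update and a plain FIFO active list instead of in-place insertion, which preserves the algorithm's behavior for this sketch.

```python
import math
from collections import deque

def fim(speed, start, h=1.0, eps=1e-9):
    """Sequential Fast Iterative Method sketch on a 2D grid.
    `speed` is a 2D list of propagation speeds F > 0, `start` a
    (row, col) seed. Returns the grid of arrival times T."""
    rows, cols = len(speed), len(speed[0])
    T = [[math.inf] * cols for _ in range(rows)]
    T[start[0]][start[1]] = 0.0
    narrow = deque([start])          # non-sorted active list
    in_narrow = {start}

    def solve(i, j):
        # Godunov upwind discretization of |grad T| = 1/F.
        tx = min(T[i][j - 1] if j > 0 else math.inf,
                 T[i][j + 1] if j < cols - 1 else math.inf)
        ty = min(T[i - 1][j] if i > 0 else math.inf,
                 T[i + 1][j] if i < rows - 1 else math.inf)
        a, b = sorted((tx, ty))
        f = h / speed[i][j]
        if b - a >= f:               # one-sided update
            return a + f
        return 0.5 * (a + b + math.sqrt(2 * f * f - (a - b) ** 2))

    while narrow:
        i, j = narrow.popleft()
        in_narrow.discard((i, j))
        old = T[i][j]
        new = min(old, solve(i, j)) if (i, j) != start else 0.0
        T[i][j] = new
        if abs(old - new) < eps:     # converged: activate improvable neighbors
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if (0 <= ni < rows and 0 <= nj < cols
                        and (ni, nj) not in in_narrow
                        and solve(ni, nj) < T[ni][nj]):
                    narrow.append((ni, nj))
                    in_narrow.add((ni, nj))
        else:                        # not converged: keep it active
            narrow.append((i, j))
            in_narrow.add((i, j))
    return T
```

With ε small enough this fixed-point iteration reaches the same solution as FMM on the same discretization; in a parallel implementation all active cells would be updated simultaneously instead of one at a time.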
1) EMPTY MAP
This experiment is designed to show the performance of the methods in the most basic situation, where most of the algorithms perform best. An empty map with constant speed represents the simplest possible case for the Fast Methods; in fact, an analytical method could be implemented by computing the Euclidean distance from every point to the initial point. However, the experiment is interesting because it shows the performance of the algorithms in open spaces which, in a real application, can be part of large environments. The same environment is divided into different numbers of cells to study how the algorithms behave as the number of cells increases. The setup is composed of empty 2D, 3D and 4D hypercubical environments of size [0, 1]^N, with N = 2, 3, 4. The speed is constant, F_i = 1, on X. The wavefront starts at the center of the grid. This experimental setup can be found in previous publications such as BIB002 and BIB001. The number of cells was chosen so that an experiment has the same (or as close as possible) number of cells in all dimensions. For instance, a 50x50 2D grid has 2500 cells; therefore, the equivalent 3D grid is 14x14x14 (2744 cells) and the equivalent 4D grid is 7x7x7x7 (2401 cells). This way, it is also possible to analyze the performance of the algorithms for different numbers of dimensions, and a set of side lengths was chosen for each dimension accordingly. For the alternating-barriers experiment, an environment represented by a 100x100x200 grid is chosen, with equally distributed alternating barriers (from 0 to 9) along the z-axis; in that case, the wavefront starts close to a corner of the map. Similar experimental setups can be found in the literature BIB003, BIB001. A further setup uses grids of up to 45x45x45x45 cells in 4D; these discretizations are chosen so that a direct comparison with the empty map problem is possible for the corresponding grid sizes, and the wavefront starts in the center of the grid. This setup is inspired by the experiments carried out in BIB002 and BIB001.
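The analytic baseline mentioned above is straightforward: for a constant-speed empty map, the exact arrival time of every cell is simply its Euclidean distance to the seed divided by the speed. A dimension-agnostic sketch (our own helper, not part of the survey's benchmark code):

```python
import math
import itertools

def analytic_arrival_times(shape, start, spacing=1.0, speed=1.0):
    """Exact time-of-arrival field for an empty map with constant speed:
    T(x) = ||x - x0|| / F. Works for any number of dimensions, which is
    useful as an accuracy reference for the 2D/3D/4D empty-map grids."""
    T = {}
    for idx in itertools.product(*(range(s) for s in shape)):
        dist = math.sqrt(sum(((i - s) * spacing) ** 2
                             for i, s in zip(idx, start)))
        T[idx] = dist / speed
    return T
```

Comparing a Fast Method's output against this field gives the discretization error directly; the higher the resolution, the smaller that error.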
Additionally, the maximum speed is increased from 1 to 100 (in steps of 10 units) to analyze how the algorithms behave with increasing speed changes. 2D examples are shown in Fig. 5. An example of the time-of-arrival field computed by FMM is shown in Fig. 10. Note that all algorithms provide the exact same solution in this case. The higher the resolution, the better the accuracy.

The results for the empty map experiment are shown in Fig. 11 for 2D, Fig. 12 for 3D, and Fig. 13 for 4D. In all cases two plots are included: the raw computation times for each algorithm, and the time ratios computed as the time of FMM divided by the time of each method, so that larger ratios represent better performance.

FMM is, as expected, the slowest algorithm in almost all cases, because the rest of the algorithms were proposed as improvements on FMM. Besides, an empty map is the most favorable case for any of the algorithms. As the number of cells increases, FMMFib quickly outperforms FMM, since the number of elements in the narrow band increases exponentially with the number of dimensions and, therefore, the better amortized times of the Fibonacci heap become useful. SFMM, the other O(n log n) Fast Method, is always faster than FMM and most of the time also faster than FMMFib, due to its simpler and faster heap management. However, as the number of cells increases, the tendencies of FMMFib and SFMM are very similar to that of FMM (the ratio remains constant as the number of cells increases). When the number of cells is large enough, it is hard to say whether SFMM or FMMFib is faster.

The sweeping-based methods show behavior similar to that of the FMM-based methods. FSM is only slower than the FMM-based methods for environments with a small number of cells; its linear complexity quickly makes it faster than FMM, FMMFib, and SFMM as the environment becomes larger.
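Assuming FMM is taken as the baseline of the ratio plots (consistent with FMM being the reference method throughout the survey), the metric can be computed with a one-liner; this helper and its names are ours:

```python
def time_ratios(times):
    """Compute performance ratios relative to FMM from a dict of raw
    computation times (seconds). Larger ratios mean better performance;
    FMM itself always maps to 1.0."""
    base = times["FMM"]
    return {method: base / t for method, t in times.items()}
```

For example, a method that runs in half the time of FMM gets a ratio of 2.0, which matches the convention that larger ratios represent better performance.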
In spite of this, when the number of dimensions increases, the number of required sweeps also increases (it doubles), and this penalizes the algorithm, as it evaluates each cell more times. LSM and DDQM improve on FSM by avoiding the recomputation of cells. Therefore, they become the fastest algorithms, as they do not deal with heap operations and they minimize cell recomputation. In this case, LSM is faster in most cases (in 4D it is slower than DDQM but presents a better tendency than DDQM). This happens because DDQM maintains two queues: whereas the operations on these queues are efficient, they still represent some overhead over LSM, which does not update any internal container.

The iterative algorithms, such as GMM and FIM, present moderate results with similar behavior. As they keep simple data structures, they are usually faster than FMM-like methods. However, they do not leverage the brute-force approach followed by the sweeping-based methods, which adds some overhead to their iterations; thus, they are usually slower than the sweeping-based methods. UFMM also provides average computation times for a similar reason: it maintains a heap that is more efficient than those of FMM-like algorithms, but it still performs additional operations in comparison to sweeping-based methods. As the speed is constant all over the grid, UFMM provides the same solution as the other methods.

In a previous comparison between GMM and FMM BIB001, GMM was about 50% faster than FMM in all cases. In the results presented here, GMM is at most 40% faster. We attribute this difference to the implementation, as our heaps for FMM and FMMFib are highly optimized. Therefore, it is worth mentioning that the results shown here are also slightly subject to implementation details and, sometimes, to details that are out of the reach of regular users, such as internal cache management, prefetchers, and other low-level characteristics of the hardware used.
The conclusion drawn from these experiments is that, in the absence of obstacles and propagation speed changes, sweeping-like methods perform best. Although this setup is unlikely to be present in a practical scenario, the results allow us to understand the behavior of the algorithms in ideal situations and their major advantages.
5) PATH PLANNING
Fast Methods are commonly used for low-dimensional path planning problems, given that they produce deterministic results, are complete (they will find a solution if one exists), and make it easy to influence properties of the paths, such as their smoothness or obstacle clearance. A Fast Method is applied from a given start point until the goal point is reached and labeled as Frozen; then, gradient descent is applied from the goal point in order to obtain a path to the global minimum (the start point). The gradient descent step is omitted in these experiments, as it is out of the scope of this paper. Experiments on 2D and 3D maps are included. It is important to remark that, since the Fast Methods are grid-based, they scale exponentially with the number of dimensions. To the best of the authors' knowledge, there are no practical applications of Fast Methods to path planning beyond 3D.

For 2D, the chosen map, shown in Fig. 7, belongs to the Intel Research Laboratory building in Seattle. This map is commonly used in path planning and robot navigation research BIB002, BIB001. The map is square-shaped and contains binary values (1 for free cells, 0 for occupied cells). Similarly to the empty map experiment, different resolutions have been chosen for the sides of the square: {400, 800, 1000, 1500, 2000, 2500, 3000, 4000}.

For the 3D case, binary 3D gridmaps with different resolutions are created from a 3D model of a building used for video games, shown in Fig. 8, which has been taken from the Internet 4 with slight modifications (such as adding more pieces of furniture and more internal rooms). This model is composed of two floors, the first of which is divided into three sections. The initial point for the propagation has been arbitrarily chosen in both cases. However, as the algorithms are run without a goal point, all the reachable cells in the map are evaluated.
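The path extraction step mentioned above (omitted from the timing experiments) amounts to descending the arrival-time field T from the goal back to its global minimum at the start. A minimal discrete sketch of our own, which moves to the lowest-valued 8-connected neighbor instead of following a true interpolated gradient:

```python
def extract_path(T, goal):
    """Follow the steepest descent of the time-of-arrival field T
    (a 2D list) from `goal` down to the global minimum (the start
    point). T has no other local minima, so greedy descent suffices.
    A continuous gradient descent with interpolation yields smoother
    paths; this discrete version is only an illustration."""
    rows, cols = len(T), len(T[0])
    path = [goal]
    i, j = goal
    while True:
        best = (i, j)
        for di in (-1, 0, 1):          # scan the 8-connected neighborhood
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < rows and 0 <= nj < cols
                        and T[ni][nj] < T[best[0]][best[1]]):
                    best = (ni, nj)
        if best == (i, j):             # local (= global) minimum reached
            return path
        i, j = best
        path.append(best)
```

Because the Eikonal solution has a single minimum at the wavefront source, this descent cannot get trapped, which is what makes the combination complete.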
Therefore, although the choice of the initial point can modify the results, the experimental setup guarantees that the impact is negligible. The FMM time-of-arrival field is shown in Fig. 25 for the 2D path planning problem. As the map is composed of binary values, all algorithms provide the exact same solution. Visualization of the results of the 3D path planning problem is not included, as they cannot be clearly depicted with a single figure. The results for the path planning experiments are shown in Fig. 26 for 2D, and Fig. 27 for 3D. In both cases two plots are included: the raw computation times for each algorithm and the time ratios (analogously to the results of the empty map experiment). The results for such a binary and complex map greatly depend on its topology. In this case, the Intel Research Laboratories map has both large clear spaces and cluttered, irregular areas; therefore, the results can be interpreted as a mix between those of the empty map experiment and those from the alternating barriers experiment. For both the 2D and 3D experiments, the results are very similar to the alternating barriers experiments. FMM and FMMFib perform similarly, with SFMM being faster in all cases. FSM and LSM suffer from the complexity of the environments, being the slowest in many of the cases. As the number of cells in the map increases, the ratio of these methods with respect to FMM decreases because the map becomes relatively simpler. In other words, there is a higher density of free cells (especially in open areas) and, therefore, a larger number of cells can be computed in every sweep even if the proportion to the total map remains the same. This is a manifestation of the fact that the total number of cells increases exponentially, but the complexity of such methods increases linearly. However, DDQM shows the fastest computation times since it is rather immune to the environment complexity. Both GMM and FIM are among the slowest methods. 
This behavior has already been observed in other experiments with constant propagation speed (empty map and alternating barriers). However, UFMM is among the fastest algorithms because of its efficient heap structure. In the 3D results there is a bump in the times and ratios at a given resolution. The reason for this irregularity is that in the chosen map, for that given resolution, there are very specific voxels that are difficult to reach, requiring many iterations of the algorithm. This is why it affects only the purely iterative algorithms (FSM, LSM, and FIM). For UFMM, however, these specific voxels cause the untidy heap to be less uniform, and therefore more operations are also required.
Fast Methods for Eikonal Equations: An Experimental Survey <s> 6) VESSEL SEGMENTATION <s> Presents serial and parallel algorithms for solving a system of equations that arises from the discretization of the Hamilton-Jacobi equation associated to a trajectory optimization problem of the following type. A vehicle starts at a prespecified point x_0 and follows a unit speed trajectory x(t) inside a region in R^m, until an unspecified time T at which the region is exited. A trajectory minimising a cost function of the form ∫_0^T r(x(t))dt + q(x(T)) is sought. The discretized Hamilton-Jacobi equation corresponding to this problem is usually solved using iterative methods. Nevertheless, assuming that the function r is positive, one is able to exploit the problem structure and develop one-pass algorithms for the discretized problem. The first one resembles Dijkstra's shortest path algorithm and runs in time O(n log n), where n is the number of grid points. The second algorithm uses a somewhat different discretization and borrows some ideas from Dial's shortest path algorithm; it runs in time O(n), which is the best possible, under some fairly mild assumptions. Finally, the author shows that the latter algorithm can be efficiently parallelized: for two-dimensional problems and with p processors, its running time becomes O(n/p), provided that p = O(√n/log n). <s> BIB001 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> 6) VESSEL SEGMENTATION <s> In this work we compare the performance of a number of vessel segmentation algorithms on a newly constructed retinal vessel image database. Retinal vessel segmentation is important for the detection of numerous eye diseases and plays an important role in automatic retinal disease screening systems. A large number of methods for retinal vessel segmentation have been published, yet an evaluation of these methods on a common database of screening images has not been performed. 
To compare the performance of retinal vessel segmentation methods we have constructed a large database of retinal images. The database contains forty images in which the vessel trees have been manually segmented. For twenty of those forty images a second independent manual segmentation is available. This allows for a comparison between the performance of automatic methods and the performance of a human observer. The database is available to the research community. Interested researchers are encouraged to upload their segmentation results to our website (http://www.isi.uu.nl/Research/Databases). The performance of five different algorithms has been compared. Four of these methods have been implemented as described in the literature. The fifth pixel classification based method was developed specifically for the segmentation of retinal vessels and is the only supervised method in this test. We define the segmentation accuracy with respect to our gold standard as the performance measure. Results show that the pixel classification method performs best, but the second observer still performs significantly better. <s> BIB002 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> 6) VESSEL SEGMENTATION <s> In this paper, we develop a third order accurate fast marching method for the solution of the eikonal equation in two dimensions. There have been two obstacles to extending the fast marching method to higher orders of accuracy. The first obstacle is that using one-sided difference schemes is unstable for orders of accuracy higher than two. The second obstacle is that the points in the difference stencil are not available when the gradient is closely aligned with the grid. We overcome these obstacles by using a two-dimensional (2D) finite difference approximation to improve stability, and by locally rotating the grid 45 degrees (i.e., using derivatives along the diagonals) to ensure all the points needed in the difference stencil are available. 
We show that in smooth regions the full difference stencil is used for a suitably small enough grid size and that the difference scheme satisfies the von Neumann stability condition for the linearized eikonal equation. Our method reverts to first order accuracy near caustics without developing oscillations by using a simple switching scheme. The efficiency and high order of the method are demonstrated on a number of 2D test problems. <s> BIB003 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> 6) VESSEL SEGMENTATION <s> We introduce a novel fast marching approach with curvature regularization for vessel segmentation. Since most vessels have a smooth path, curvature can be used to distinguish desired vessels from short cuts, which usually contain parts with high curvature. However, in previous fast marching approaches, curvature information is not available, so it cannot be used for regularization directly. Instead, usually length regularization is used under the assumption that shorter paths should also have a lower curvature. However, for vessel segmentation, this assumption often does not hold and leads to short cuts. We propose an approach, which integrates curvature regularization directly into the fast marching framework, independent of length regularization. Our approach is globally optimal, and numerical experiments on synthetic and real retina images show that our approach yields more accurate results than two previous approaches. <s> BIB004
Another of the main uses of Fast Methods is computer vision for medical applications, as an intermediate step in more complex algorithms. For example, in BIB004 , FMM is used for 2D vessel segmentation in retina images based on the assumption that vessels usually follow a smooth path. Flórez-Valencia et al. perform a centerline extraction in arteries using FMM, after defining an appropriate speed function and stopping criterion. It is then used to extract 2D contours in cross-sectional planes. The contours are finally used to progressively reconstruct a regularized continuous 3D surface. This section aims to show the performance of the Fast Methods in such applications. More concretely, a vessel segmentation algorithm similar to the one used in BIB004 is implemented. When using a Fast Method for segmentation, the main goal is to define a speed function, F(x), which causes the wavefront to expand faster in those areas which have to be segmented. In this case, the image is first processed with a high-pass filter in order to subtract the smoothly varying background. Then, a speed function, in which the pixels corresponding to vessels have a higher value, is computed. Using this function, the Fast Method is performed using a central point of a vessel as starting point. Finally, gradient descent is used to extract the geodesics, which correspond to centerlines of the vessels. This experiment involves the use of the Fast Methods in grids with slightly structured propagation speed, which can be considered as a mix between the random speed function and checkerboard experiments. In this case, two examples are covered: 2D and 3D vessel segmentation, using as input for the algorithms the images shown in Fig. 9 , taken from the DRIVE database of retinal vessels BIB002 . 
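The speed-function construction just described (a high-pass filter to subtract the smoothly varying background, followed by mapping the response to a speed range so that vessel pixels propagate the front faster) can be sketched as follows. The box-filter high-pass, the parameter names, and the default speed range are illustrative assumptions, not the exact preprocessing of BIB004 .

```python
def vessel_speed_map(image, radius=2, f_min=1.0, f_max=100.0):
    """Build a propagation-speed map for Fast-Marching vessel segmentation.

    The smoothly varying background is removed with a crude high-pass
    filter (pixel value minus a local box-filter mean); the response is
    then rescaled to [f_min, f_max] so that bright, vessel-like pixels
    make the front propagate faster.
    """
    rows, cols = len(image), len(image[0])
    high_pass = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc, n = 0.0, 0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols:
                        acc += image[ni][nj]
                        n += 1
            high_pass[i][j] = image[i][j] - acc / n   # background removed
    lo = min(min(row) for row in high_pass)
    hi = max(max(row) for row in high_pass)
    span = (hi - lo) or 1.0                           # guard flat images
    # Linear rescale of the high-pass response to the speed range.
    return [[f_min + (v - lo) / span * (f_max - f_min) for v in row]
            for row in high_pass]
```

Running a Fast Method with this map as F(x) from a seed point inside a vessel, and then extracting geodesics by gradient descent, completes the segmentation pipeline described above.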
The 2D image has a resolution of 2560x2560 pixels with a range of speed [1.7234, 100] (of the preprocessed image according to the aforementioned segmentation algorithm), whereas the 3D case is composed of a grid of 128x128x128 voxels with a range of speed [1, 100]. As in previous cases, a total of ten runs per algorithm are executed for each case. The starting point in the 3D case is close to one of the bottom corners. In this experiment, as it focuses on a real application, the range of speed values used is smaller than those presented in the experiments above, since these values come directly from the real images. Besides, after the application of the Fast Methods, the necessary steps to complete the segmentation algorithm are not included in the experiment, because they are out of the scope of this study. It is also important to remark that in this kind of application, the Fast Methods employed are usually those based on high-accuracy solutions BIB003 , which are generally slower than the methods included in this paper.
Fast Methods for Eikonal Equations: An Experimental Survey <s> VII. DISCUSSION <s> We derive a Godunov-type numerical flux for the class of strictly convex, homogeneous Hamiltonians that includes $H(p,q)=\sqrt{ap^{2}+bq^{2}-2cpq},$ $c^{2}<ab.$ We combine our Godunov numerical fluxes with simple Gauss--Seidel-type iterations for solving the corresponding Hamilton--Jacobi (HJ) equations. The resulting algorithm is fast since it does not require a sorting strategy as found, e.g., in the fast marching method. In addition, it provides a way to compute solutions to a class of HJ equations for which the conventional fast marching method is not applicable. Our experiments indicate convergence after a few iterations, even in rather difficult cases. <s> BIB001 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> VII. DISCUSSION <s> We propose a new sweeping algorithm which discretizes the Legendre transform of the numerical Hamiltonian using an explicit formula. This formula yields the numerical solution at a grid point using only its immediate neighboring grid values and is easy to implement numerically. The minimization that is related to the Legendre transform in our sweeping scheme can either be solved analytically or numerically. We illustrate the efficiency and accuracy approach with several numerical examples in two and three dimensions. <s> BIB002 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> VII. DISCUSSION <s> Efficient path-planning algorithms are a crucial issue for modern autonomous underwater vehicles. Classical path-planning algorithms in artificial intelligence are not designed to deal with wide continuous environments prone to currents. We present a novel Fast Marching (FM)-based approach to address the following issues. First, we develop an algorithm we call FM* to efficiently extract a 2-D continuous path from a discrete representation of the environment. 
Second, we take underwater currents into account thanks to an anisotropic extension of the original FM algorithm. Third, the vehicle turning radius is introduced as a constraint on the optimal path curvature for both isotropic and anisotropic media. Finally, a multiresolution method is introduced to speed up the overall path-planning process. <s> BIB003 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> VII. DISCUSSION <s> We propose a new image compression method based on geodesic Delaunay triangulations. Triangulations are generated by a progressive geodesic meshing algorithm which exploits the anisotropy of images through a farthest point sampling strategy. This seeding is performed according to anisotropic geodesic distances which force the anisotropic Delaunay triangles to follow the geometry of the image. Geodesic computations are performed using a Riemannian Fast Marching, which recursively updates the geodesic distance to the seed points. A linear spline approximation on this triangulation allows to approximate faithfully sharp edges and directional features in images. The compression is achieved by coding both the coefficients of the spline approximation and the deviation of the geodesic triangulation from an Euclidean Delaunay triangulation. Numerical results show that taking into account the anisotropy improves the approximation by isotropic triangulations of complex images. The resulting geodesic encoder competes well with wavelet-based encoder such as JPEG-2000 on geometric images. <s> BIB004 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> VII. DISCUSSION <s> This article provides a comprehensive view of the novel fast marching (FM) methods we developed for robot path planning. We recall some of the methods developed in recent years and present two improvements upon them: the saturated FM square (FM2) and an heuristic optimization called the FM2 star (FM2*) method. <s> BIB005
Two different sets of experiments have been carried out. The first set contains canonical problems previously included in the literature that cover the best and worst case scenarios for all the methods. This set consists of: empty map, alternating barriers, random speed function and checkerboard experiment. The main hypothesis considered while designing these experiments is that any other environment can be thought of as a combination of free space with obstacles and high-frequency or low-frequency speed changes of different magnitudes. Consequently, the second set aims to test this hypothesis while applying the Fast Methods to problems which represent real-world applications. The second set is composed of the path planning experiment and the vessels experiment, in which the environments are more unpredictable and can contain many of the characteristics of the canonical problems. Note that the different grid sizes used in the experiments vary from extremely small to extraordinarily large grids. The path planning and vessels results can be correlated with those in the canonical problems, therefore supporting the choice of the latter. However, it is important to remark that these results are very sensitive to the environments chosen. Some algorithms, such as FMM, FMMFib, and SFMM, are completely independent of the environment, as their internal data structures behave in the same manner regardless of shape or speed, and therefore show consistent behavior for all the experiments, but they are never among the fastest methods. FIM results can be sped up for non-constant speed problems if larger errors are allowed. UFMM can probably be improved as well. However, our experience is that the configuration of its parameters is complex and requires a deep analysis of the environment to which it is to be applied. The results are also subject to low-level factors out of the scope of this paper. 
These factors are, for example, internal cache levels and memory management, data prefetching, etc. For example, the performance of algorithms that maintain a heap for the narrow band greatly depends on whether the narrow band is small enough that it can be stored completely in microprocessor cache memories. Another example is that some algorithms such as DDQM or FIM make it difficult for the prefetchers to prefetch the appropriate indices of the narrow band that are going to be evaluated in the following iterations since, commonly, grid neighbors are not contiguously stored in memory. Taking into account the results from the conducted experiments, several conclusions for the use of the different types of Fast Methods will now be summarized: • There is no practical reason to use FMM or FMMFib, as SFMM is faster in every case. It shows the same behavior as its counterparts regardless of the characteristics of the environment, since the internal narrow band implementation is more efficient. • If a sweep-based method is to be used, LSM should always be chosen, as it greatly outperforms FSM, since it recomputes cells only if there is a chance of improving their value. In fact, it was not possible to find any case in which FSM performs better than LSM. Therefore, sweep-based methods are recommended only for simple scenarios with constant propagation speed, because these require a lower number of sweeps. • In problems with constant speed and negligible dependence on the environmental complexity, DDQM should be chosen, as it has shown the best performance for the empty map and the alternating barriers environments, given that it combines the advantages of sweep-based and wavefront-based methods. However, for problems with variable speed values, its performance is highly influenced by the distribution of speed changes throughout the environment, as it might require many evaluations of the same cells in order to converge to the final solution. 
• For variable speed functions in simple scenarios, GMM is the algorithm to choose, as it guarantees that at most two evaluations per cell are required. However, complex environments distort the narrow band, requiring more iterations of its main loop. • UFMM is hard to tune and its results include errors. Also, it has been outperformed in most of the cases by DDQM in constant speed scenarios, or by SFMM or FIM in experiments with variable speed. In order to evaluate whether this method should be considered, an in-depth study for the specific problem is required. • There is no clear winner for complex scenarios with variable speed. UFMM can perform well in all cases if tuned properly. Otherwise, SFMM is a safe choice, especially in cases where there is not much information about the environment. If a goal point is selected, cost-to-go heuristics can be applied BIB005 , which would greatly affect the results. Heuristics for FMM, FMMFib and SFMM are straightforward and they can be similarly applied to UFMM. They would improve the results in most of the cases. However, it is not clear if they can be applied to other Fast Methods. Anisotropic extensions of some of the presented methods, which tackle anisotropic problems, are also of interest BIB003 , BIB001 , BIB004 , BIB002 .
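For reference, the sweep-based family discussed in the conclusions above (FSM and its variants) can be condensed into a minimal 2D Fast Sweeping solver: Gauss-Seidel passes over the grid in the four diagonal orderings, repeated until no cell value improves. The function name, unit grid spacing, and convergence tolerance are assumptions made for illustration.

```python
import math

def fsm_arrival_times(speed, start, max_iters=50, tol=1e-9):
    """Minimal 2D Fast Sweeping Method.

    speed[i][j] > 0 is the local propagation speed F(x); the solver
    approximates |grad T| = 1/F with the first-order upwind quadratic
    update, sweeping the grid in the four diagonal orderings until no
    cell improves by more than tol.
    """
    rows, cols = len(speed), len(speed[0])
    INF = float("inf")
    T = [[INF] * cols for _ in range(rows)]
    T[start[0]][start[1]] = 0.0
    orderings = [(range(rows), range(cols)),
                 (range(rows), range(cols - 1, -1, -1)),
                 (range(rows - 1, -1, -1), range(cols)),
                 (range(rows - 1, -1, -1), range(cols - 1, -1, -1))]
    for _ in range(max_iters):
        changed = False
        for row_order, col_order in orderings:
            for i in row_order:
                for j in col_order:
                    if (i, j) == start:
                        continue
                    # Upwind neighbour values along each axis.
                    tx = min(T[i][j - 1] if j > 0 else INF,
                             T[i][j + 1] if j < cols - 1 else INF)
                    ty = min(T[i - 1][j] if i > 0 else INF,
                             T[i + 1][j] if i < rows - 1 else INF)
                    a, b = (tx, ty) if tx < ty else (ty, tx)
                    if a == INF:          # no upwind information yet
                        continue
                    h = 1.0 / speed[i][j]
                    if b - a >= h:        # one-sided update
                        t_new = a + h
                    else:                 # two-sided quadratic update
                        t_new = 0.5 * (a + b + math.sqrt(2.0 * h * h - (a - b) ** 2))
                    if t_new < T[i][j] - tol:
                        T[i][j] = t_new
                        changed = True
        if not changed:
            break
    return T
```

LSM differs from this sketch mainly in that it tracks which cells can still improve and skips the rest, which is consistent with it dominating plain FSM in the experiments.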
Fast Methods for Eikonal Equations: An Experimental Survey <s> VIII. CONCLUSIONS <s> A computational study of the fast marching and the fast sweeping methods for the eikonal equation is given. It is stressed that both algorithms should be considered as "direct" (as opposed to iterative) methods. On realistic grids, fast sweeping is faster than fast marching for problems with simple geometry. For strongly nonuniform problems and/or complex geometry, the situation may be reversed. Finally, fully second order generalizations of methods of this type for problems with obstacles are proposed and implemented. <s> BIB001 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> VIII. CONCLUSIONS <s> We present an algorithm for solving in parallel the Eikonal equation. The efficiency of our approach is rooted in the ordering and distribution of the grid points on the available processors; we utilize a Cuthill-McKee ordering. The advantages of our approach are that (1) the efficiency does not plateau for a large number of threads; we compare our approach to the current state-of-the-art parallel implementation of Zhao (2007) [14] and (2) the total number of iterations needed for convergence is the same as that of a sequential implementation, i.e. our parallel implementation does not increase the complexity of the underlying sequential algorithm. Numerical examples are used to illustrate the efficiency of our approach. <s> BIB002 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> VIII. CONCLUSIONS <s> In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. 
The FMT* algorithm performs a "lazy" dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*, especially in high-dimensional configuration spaces and in scenarios where collision-checking is expensive. <s> BIB003 </s> Fast Methods for Eikonal Equations: An Experimental Survey <s> VIII. CONCLUSIONS <s> We present the first asymptotically optimal feedback planning algorithm for nonholonomic systems and additive cost functionals. 
Our algorithm is based on three well-established numerical practices: (1) positive coefficient numerical approximations of the Hamilton-Jacobi-Bellman equations; (2) the Fast Marching Method, which is a fast nonlinear solver that utilizes Bellman's dynamic programming principle for efficient computations; and (3) an adaptive mesh-refinement algorithm designed to improve the resolution of an initial simplicial mesh and reduce the solution numerical error. By refining the discretization mesh globally, we compute a sequence of numerical solutions that converges to the true viscosity solution of the Hamilton-Jacobi-Bellman equations. In order to reduce the total computational cost of the proposed planning algorithm, we find that it is sufficient to refine the discretization within a small region in the vicinity of the optimal trajectory. Numerical experiments confirm our theoretical findings and establish that our algorithm outperforms previous asymptotically optimal planning algorithms, such as PRM* and RRT*. <s> BIB004
In this paper we have introduced the main Fast Methods in a common mathematical framework, adopting a practical point of view. In addition, an exhaustive comparison of the methods has been performed, which allows users to choose among them depending on the application. The code is publicly available BIB001 , as are the automatic benchmark programs. This code has been thoroughly tested and it can serve as a basis for future algorithm design, as it provides all the tools required to easily implement and compare novel Fast Methods. Future research will focus on three different aspects: developing an analogous review for parallelized Fast Methods BIB002 ; studying the application of these methods to anisotropic problems, as well as to the new Fast Marching-based solutions focused on path planning applications BIB003 , BIB004 ; and, finally, combining UFMM and SFMM, which seems straightforward and would presumably outperform both algorithms.
Control Allocation- A Survey <s> Introduction <s> The performance and computational requirements of optimization methods for control allocation are evaluated. Two control allocation problems are formulated: a direct allocation method that preserves the directionality of the moment and a mixed optimization method that minimizes the error between the desired and the achieved moments as well as the control effort. The constrained optimization problems are transformed into linear programs so that they can be solved using well-tried linear programming techniques such as the simplex algorithm. A variety of techniques that can be applied for the solution of the control allocation problem in order to accelerate computations are discussed. Performance and computational requirements are evaluated using aircraft models with different numbers of actuators and with different properties. In addition to the two optimization methods, three algorithms with low computational requirements are also implemented for comparison: a redistributed pseudoinverse technique, a quadratic programming algorithm, and a fixed-point method. The major conclusion is that constrained optimization can be performed with computational requirements that fall within an order of magnitude of those of simpler methods. The performance gains of optimization methods, measured in terms of the error between the desired and achieved moments, are found to be small on the average but sometimes significant. A variety of issues that affect the implementation of the various algorithms in a flight-control system are discussed. <s> BIB001 </s> Control Allocation- A Survey <s> Introduction <s> In aircraft control, control allocation can be used to distribute the total control effort among the actuators when the number of actuators exceeds the number of controlled variables. The control allocation problem is often posed as a constrained least squares problem to incorporate the actuator position and rate limits. 
Most proposed methods for real-time implementation, like the redistributed pseudoinverse method, only deliver approximate, and sometimes unreliable solutions. We investigate the use of classical active set methods for control allocation. We develop active set algorithms that always find the optimal control distribution, and show by simulation that the timing requirements are in the same range as for two previously proposed solvers. <s> BIB002 </s> Control Allocation- A Survey <s> Introduction <s> The paper considers the objective of optimally specifying redundant actuators under constraints, a problem commonly referred to as control allocation. The problem is posed as a mixed ℓ2-norm optimization objective and converted to a quadratic programming formulation. The implementation of an interior-point algorithm is presented. Alternative methods including fixed-point and active set methods are used to evaluate the reliability, accuracy and efficiency of the primal-dual interior-point method. While the computational load of the interior-point method is found to be greater for problems of small size, convergence to the optimal solution is also more uniform and predictable. In addition, the properties of the algorithm scale favorably with problem size. <s> BIB003 </s> Control Allocation- A Survey <s> Introduction <s> Control allocation problems can be formulated as optimization problems, where the objective is typically to minimize the use of control effort (or power) subject to actuator rate and position constraints, and other operational constraints. Here we consider the additional objective of singularity avoidance, which is essential to avoid loss of controllability in some applications, leading to a nonconvex nonlinear program. We suggest a sequential quadratic programming approach, solving at each sample a convex quadratic program approximating the nonlinear program. 
The method is illustrated by simulated maneuvers for a marine vessel equipped with azimuth thrusters. The example indicates reduced power consumption and increased maneuverability as a consequence of the singularity-avoidance. <s> BIB004 </s> Control Allocation- A Survey <s> Introduction <s> Constrained control allocation is studied, and it is shown how an explicit piecewise linear representation of the optimal solution can be computed numerically using multiparametric quadratic programming. Practical benefits of the approach include simple and efficient real-time implementation that permits software verifiability. Furthermore, it is shown how to handle control deficiency, reconfigurability, and flexibility to incorporate, for example, rate constraints. The algorithm is demonstrated on several overactuated aircraft control configurations, and the computational complexity is compared to other explicit approaches from the literature. The applicability of the method is further demonstrated using overactuated marine vessel dynamic position experiments on a scale model in a basin. <s> BIB005 </s> Control Allocation- A Survey <s> Introduction <s> Control allocation problems for marine vessels can be formulated as optimization problems, where the objective typically is to minimize the use of control effort (or power) subject to actuator rate and position constraints, power constraints as well as other operational constraints. In addition, singularity avoidance for vessels with azimuthing thrusters represent a challenging problem since a non-convex nonlinear program must be solved. This is useful to avoid temporarily loss of controllability in some cases. In this paper, a survey of control allocation methods for overactuated vessels are presented. <s> BIB006
The objective of this survey is to give an overview of control allocation methods. It is not the objective of this paper to give a complete bibliography, but rather to provide a subjective survey with an emphasis on recent developments within a common framework that is independent of the application domains where control allocation is conventionally used. The article is intended to encourage cross-disciplinary transfer of ideas and complement existing overview articles such as those that focus on aerospace applications of control allocation, and BIB006 , which focuses on marine applications. In particular, there has recently been increasing interest in control allocation in the automotive and other industries where mechatronics prevail, which has led to increased research on nonlinear approaches to control allocation. Optimization-based allocation methods are emphasized since their computational complexity is already within the capabilities of today's off-the-shelf embedded computer technology, e.g. BIB001 BIB002 BIB004 BIB005 BIB003 .
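As an illustration of the low-complexity allocation schemes mentioned above, the redistributed pseudoinverse idea can be sketched for the simplest case of a single virtual control v = Bu distributed among m position-limited effectors: compute the minimum-norm (pseudoinverse) solution, freeze saturated effectors at their limits, and reallocate the residual demand among the remaining ones. The single-axis restriction and all names are simplifying assumptions; practical allocators handle several axes and rate limits as well.

```python
def redistributed_pseudoinverse(b, v, u_min, u_max):
    """Redistributed pseudoinverse for a single-axis allocation b.u = v.

    b is the 1 x m effectiveness row, v the commanded virtual control
    (e.g. a moment), u_min/u_max the effector position limits. Saturated
    effectors are frozen at their limits and the remaining demand is
    reallocated through the pseudoinverse of the free columns.
    """
    m = len(b)
    u = [0.0] * m
    free = list(range(m))
    demand = v
    while free:
        bb = sum(b[k] * b[k] for k in free)
        if bb == 0.0:
            break                      # no remaining control authority
        saturated = []
        for k in free:
            # Minimum-norm (pseudoinverse) solution on the free effectors.
            u[k] = b[k] * demand / bb
            if u[k] < u_min[k] or u[k] > u_max[k]:
                saturated.append(k)
        if not saturated:
            return u                   # demand met within all limits
        for k in saturated:
            u[k] = u_min[k] if u[k] < u_min[k] else u_max[k]
            free.remove(k)
        # Residual demand once the frozen effectors are accounted for.
        demand = v - sum(b[k] * u[k] for k in range(m) if k not in free)
    return u
```

When the demand is attainable the loop terminates with Bu = v exactly; otherwise the effectors end up at their limits, which is the graceful-degradation behavior that also motivates the optimization-based formulations surveyed here.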
Control Allocation- A Survey <s> Over-actuated mechanical systems <s> A demonstration that feedback control of systems with redundant controls can be reduced to feedback control of systems without redundant controls and control allocation is presented. It is shown that control allocation can introduce unstable zero dynamics into the system, which is important if input/output inversion control techniques are utilized. The daisy chain control allocation technique for systems with redundant groups of controls is also presented. Sufficient conditions are given to ensure that the daisy chain control allocation does not introduce unstable zero dynamics into the system. Aircraft flight control examples are given to demonstrate the derived results. <s> BIB001 </s> Control Allocation- A Survey <s> Over-actuated mechanical systems <s> Closed-loop stability for dynamic inversion controllers depends on the stability of the zero dynamics. The zero dynamics, however, depend on a generally nonlinear control allocation function that optimally distributes redundant controls. Therefore, closed-loop stability depends on the control allocation function. A sufe cient condition is provided for globally asymptotically stable zero dynamics with a class of admissible nonlinear control allocation functions. It is shown that many common control allocation functions belong to the class of functions that are covered by the aforementioned zero dynamics stability condition. Aircraft e ight control examples are given to demonstrate the utility of the results. <s> BIB002 </s> Control Allocation- A Survey <s> Over-actuated mechanical systems <s> The majority of current mechanical systems used in machinery and especially those which are controlled by microprocessors can be described as equal-actuated. This means that the number of actuators (drives, controls) is equal to the number of degrees-of-freedom. 
The mechanical systems can have directly such property or it can be as such treated during the design and operation. Classical rigid mechanisms can have such property naturally. Generally all flexible mechanisms violate this property as not all flexible degrees-of-freedom can be actuated and thus directly controlled. However, there are important mechanical systems which do not fulfil this criterion at equality of actuators and degrees-of-freedom. The examples of under-actuated systems are bio-mechanical systems during dynamic phase of motion, technical systems of cranes, vehicles, underwater robots, missiles with failed engines, inverted pendulum, and ball on the beam. The examples of over-actuated systems are again biomechanical systems during the contact with ground and recently introduced redundantly actuated parallel robots. The paper first deals with the introduction and definition of the property of under-actuation, equal-actuation and over-actuation. Then the paper deals with the control of under-actuated or over-actuated systems and the challenge of how to design such systems. For the covering abstract see ITRD E125059. <s> BIB003 </s> Control Allocation- A Survey <s> Over-actuated mechanical systems <s> This paper considers actuator redundancy management for a class of overactuated nonlinear systems. Two tools for distributing the control effort among a redundant set of actuators are optimal control design and control allocation. In this paper, we investigate the relationship between these two design tools when the performance indexes are quadratic in the control input. We show that for a particular class of nonlinear systems, they give exactly the same design freedom in distributing the control effort among the actuators. Linear quadratic optimal control is contained as a special case. A benefit of using a separate control allocator is that actuator constraints can be considered, which is illustrated with a flight control example. 
<s> BIB004 </s> Control Allocation- A Survey <s> Over-actuated mechanical systems <s> Mechanical actuators are integral components of many engineered systems. Many of the presently available actuator systems lack the desired stroke, power, controllability and reliability. The hierarchical actuator is a natural extension of the trend toward improving the performance of actuators through increments in geometric complexity and control. The hierarchical concept is to build integrated actuators out of a combination of smaller actuators. The smaller actuators are arranged geometrically and controlled so as to extend the performance of the total actuator into ranges that are not possible with actuators that are based on a few active elements and levels of control. Precision, speed increase, force output, load sharing, efficiency under smooth load/displacement control, smooth motion, stroke amplification/reduction and redundancy are all possible. Mechanics and mechanisms of hierarchical actuators are examined, along with a few experiments to demonstrate the operating principles. <s> BIB005
Motion control systems are used to control the motion of mechanical systems such as vehicles and machines. Effectors are mechanical devices that can be used in order to generate time-varying mechanical forces and moments on the mechanical system, such as rudders, fins, propellers, jets, thrusters, and tires. Actuators are electromechanical devices that are used to control the magnitude and/or direction of forces generated by the individual effectors. By mechanical design, there may be more effectors than strictly needed to meet the motion control objectives of a given application. Hence, in over-actuated mechanical systems, the controllability of the chosen states and outputs could also be achieved with fewer control inputs. An over-actuated mechanical design may be favorable for several reasons: • Need for effector redundancy in order to meet fault tolerance and control reconfiguration requirements. • It may be desirable to choose a particular set of effectors rather than a smaller set of effectors for reasons such as cost, standardization, size, accuracy, dynamic response, flexibility, maintenance and mechanical design (see e.g. BIB005 ). • Certain effectors can be shared among several control systems with different objectives, and therefore be redundant for the given motion control system. For example, a lateral stability control system for a car may use the four individual wheel brake actuators in order to set up a yaw moment while these actuators are primarily designed for the car's brake system to support the driver's control of longitudinal motion, see also BIB003 . The design of control algorithms for over-actuated mechanical systems is often divided into several levels. First, a high-level motion control algorithm is designed to compute a vector of virtual inputs τ_c to the mechanical system.
The virtual inputs are usually chosen as a number of forces and moments that equals the number of degrees of freedom that the motion control system wants to control, m, and such that the basic requirement of controllability is met. For a wide range of mechanical systems, this leads to a dynamic model that is linear in the virtual input ẋ = f(x,t) + g(x,t)τ (1), y = ℓ(x,t) (2), where f, g, ℓ are functions, x ∈ R^n is the state vector, t is time, y ∈ R^m is the vector with the outputs that shall be controlled, and τ ∈ R^m is the virtual input vector that should equal the output command τ_c of the high-level motion control algorithm, i.e. τ = τ_c. It is remarked that the model's linearity with respect to τ is not of importance for the control allocation design, but is convenient for the design of the high-level motion control algorithm, although it still should consider constraints τ ∈ A, where A is the attainable set of virtual controls. Second, a control allocation algorithm is designed in order to map the vector of commanded virtual input forces and moments τ_c into individual effector forces or moments such that the total forces and moments generated by all effectors amount to the commanded virtual input τ_c. This design is usually based on a static effector model in the form τ = h(u, x, t) (3), where h is a function, u ∈ U ⊂ R^p is the control input, and U represents control constraints due to saturation and other physical constraints. Since the system is assumed to be over-actuated, we have p > m, such that the inverse problem of computing u ∈ U given τ = τ_c is ill-posed because its solution is generally not unique. Commonly, effector models are linear in u such that τ = h(u, x, t) = B(x, t)u. Third, there may be a separate low-level controller for each effector that controls its actuators in order to achieve its desired force and moment. The modular structure of the control algorithm is illustrated in the block diagram in Figure 1 .
This modularity allows the high-level motion control algorithm to be designed without detailed knowledge about the effector and actuator system. In addition to coordinating the effect of the different effectors in the system, issues such as effector/actuator fault tolerance, redundancy, and control constraints are typically handled within the control allocation module. Note that the effector model (3) is usually chosen to be static, so the low-level actuator control should handle the dynamic control of the actuators. Although the choice of a static effector model is common, it should be mentioned that it is also common that the control allocation algorithm is designed with actuator rate constraints in mind, and we will in later sections survey extensions where more sophisticated dynamic actuator models are integrated with the control allocation algorithm design. It should also be mentioned that the design of the control allocation algorithm and the high-level motion control algorithm cannot always be independent. For example, it has been illustrated that the zero dynamics of the closed loop may depend on the control allocation design, such that a dynamic inversion type of control design approach may depend on the control allocation design to ensure stable zero dynamics (minimum phase response), see BIB001 BIB002 . On the other hand, it has also been proven that in the framework of optimal control (LQ or nonlinear methods) the high-level motion control and control allocation can be separated (by choosing weight matrices appropriately) with no loss of control performance BIB004 . Moreover, as mentioned above and illustrated in , lack of feasibility of the control allocation should be observed and handled by the high-level motion control algorithm in order to avoid unacceptable degradation of performance in such cases.
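As a concrete sketch of this modular structure, the snippet below implements the middle (allocation) layer for a hypothetical surface vessel with four thrusters and three controlled degrees of freedom (surge, sway, yaw). The B matrix and the commanded virtual input are illustrative assumptions, not values from the survey; the allocator is the simplest unconstrained pseudo-inverse mapping.

```python
import numpy as np

# Hypothetical linear effector model tau = B u for a vessel with p = 4
# thrusters controlling m = 3 virtual inputs (surge force, sway force,
# yaw moment). The entries of B are illustrative, not from the survey.
B = np.array([
    [1.0,  1.0, 0.0,  0.0],   # surge contributions
    [0.0,  0.0, 1.0,  1.0],   # sway contributions
    [-0.5, 0.5, 1.2, -1.2],   # yaw moment contributions (lever arms)
])

def allocate(tau_c):
    """Map a commanded virtual input tau_c to thruster commands u
    via the Moore-Penrose pseudo-inverse (unconstrained allocation)."""
    return np.linalg.pinv(B) @ tau_c

tau_c = np.array([2.0, 1.0, 0.3])  # commanded surge, sway, yaw
u = allocate(tau_c)

# B has full row rank and no constraints are active, so the allocated
# controls reproduce the commanded generalized forces exactly.
assert np.allclose(B @ u, tau_c)
```

Because the allocation layer only sees B and τ_c, the high-level controller that produced τ_c needs no knowledge of the individual thrusters, which is exactly the modularity argued for above.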
Control Allocation- A Survey <s> Perspectives <s> Abstract We refer to Model Predictive Control (MPC) as that family of controllers in which there is a direct use of an explicit and separately identifiable model. Control design methods based on the MPC concept have found wide acceptance in industrial applications and have been studied by academia. The reason for such popularity is the ability of MPC designs to yield high performance control systems capable of operating without expert intervention for long periods of time. In this paper the issues of importance that any control system should address are stated. MPC techniques are then reviewed in the light of these issues in order to point out their advantages in design and implementation. A number of design techniques emanating from MPC, namely Dynamic Matrix Control, Model Algorithmic Control, Inferential Control and Internal Model Control, are put in perspective with respect to each other and the relation to more traditional methods like Linear Quadratic Control is examined. The flexible constraint handling capabilities of MPC are shown to be a significant advantage in the context of the overall operating objectives of the process industries and the 1-, 2-, and ∞-norm formulations of the performance objective are discussed. The application of MPC to non-linear systems is examined and it is shown that its main attractions carry over. Finally, it is explained that though MPC is not inherently more or less robust than classical feedback, it can be adjusted more easily for robustness. <s> BIB001 </s> Control Allocation- A Survey <s> Perspectives <s> This paper provides an overview of commercially available model predictive control (MPC) technology, both linear and nonlinear, based primarily on data provided by MPC vendors. A brief history of industrial MPC technology is presented first, followed by results of our vendor survey of MPC control and identification technology. 
A general MPC control algorithm is presented, and approaches taken by each vendor for the different aspects of the calculation are described. Identification technology is reviewed to determine similarities and differences between the various approaches. MPC applications performed by each vendor are summarized by application area. The final section presents a vision of the next generation of MPC technology, with an emphasis on potential business and research opportunities. <s> BIB002 </s> Control Allocation- A Survey <s> Perspectives <s> Artificial lifting is a costly, but indispensable means to recover oil from high depth reservoirs. Continuous gas-lift works by injecting high pressure gas at the bottom of the production tubing to gasify the oil column, thereby forcing the flow of fluid to surface facilities. The problem consists in deciding which wells should produce and allocating a limited lift-gas rate to the active ones, subject to lower and upper bounds on gas injection, activation precedence constraints, and nonlinearities and discontinuities of the well performance curves. To this end, this paper develops a piecewise linear formulation of the lift-gas allocation problem that allows the application of powerful integer-programming algorithms. More specifically, it analyzes the constraint polyhedron of the piecewise linear formulation and extends cover inequalities of the knapsack polytope to the problem at hand. <s> BIB003 </s> Control Allocation- A Survey <s> Perspectives <s> The information flow used for optimization of an offshore oil production plant is described. The elements in this description include data acquisition, data storage, processing facility model updating, well model updating, reservoir model updating, production planning, reservoir planning, and strategic planning.
Methods for well allocation, gas lift and gas/water injection optimization and updating of the models are reviewed in relationship with the information flow described. Challenges of real time optimization are discussed. <s> BIB004 </s> Control Allocation- A Survey <s> Perspectives <s> This paper addresses the challenge of controlling an overactuated engine thermal management system where two actuators, with different dynamic authorities and saturation limits, are used to obtain tight temperature regulation. A modular control strategy is proposed that combines model predictive control allocation (MPCA) with the use of an inner loop reference model. This results in an inner loop controller that closely matches a dynamic specification for input-output performance while addressing actuator dynamics and saturation constraints. This paper presents the design and implementation strategy and illustrates the effectiveness of the proposed solution through real-time simulation and experimental results. <s> BIB005
This survey focuses on motion control for over-actuated mechanical systems, which is the conventional application area of control allocation. However, the principles of control allocation are general and not limited to motion control systems. Consequently, one does not need to restrict the virtual control input τ to be interpreted as generalized forces (forces and moments), and it may also represent quantities like energy and mass, for example. In particular, process plants are often characterized by excessive degrees of freedom for control. One example is allocation of gas lift rates in offshore oil production where the petroleum producing wells are coupled due to common pipelines and constraints on the available lift gas resources, BIB003 BIB004 . In process control, any excessive degrees of freedom are commonly exploited via model predictive control (MPC) and real-time optimization, BIB002 BIB001 , which are multi-variable optimization-based control strategies where the functionality of control allocation is inherently built into the optimal control formulation that is solved numerically online. Although control allocation usually has less ambitious objectives than MPC (recall the static effector model), we shall in later sections review how MPC can be used to solve the control allocation problem in motion control systems when actuator dynamics should be considered at the control allocation level. However, predictive control allocation has also been proposed in applications like engine management, BIB005 .
Control Allocation- A Survey <s> Unconstrained linear control allocation <s> Linear algebra and matrix theory are fundamental tools in mathematical and physical science, as well as fertile fields for research. This new edition of the acclaimed text presents results of both classic and recent matrix analyses using canonical forms as a unifying theme, and demonstrates their importance in a variety of applications. The authors have thoroughly revised, updated, and expanded on the first edition. The book opens with an extended summary of useful concepts and facts and includes numerous new topics and features, such as: - New sections on the singular value and CS decompositions - New applications of the Jordan canonical form - A new section on the Weyr canonical form - Expanded treatments of inverse problems and of block matrices - A central role for the Von Neumann trace theorem - A new appendix with a modern list of canonical forms for a pair of Hermitian matrices and for a symmetric-skew symmetric pair - Expanded index with more than 3,500 entries for easy reference - More than 1,100 problems and exercises, many with hints, to reinforce understanding and develop auxiliary themes such as finite-dimensional quantum systems, the compound and adjugate matrices, and the Loewner ellipsoid - A new appendix provides a collection of problem-solving hints. <s> BIB001 </s> Control Allocation- A Survey <s> Unconstrained linear control allocation <s> Nonlinear dynamic inversion affords the control system designer a straightforward means of deriving control laws for nonlinear systems. The control inputs are used to cancel unwanted terms in the equations of motion using negative feedback of these terms. In this paper, we discuss the use of nonlinear dynamic inversion in the design of a flight control system for a supermaneuverable aircraft. First, the dynamics to be controlled are separated into fast and slow variables.
The fast variables are the three angular rates and the slow variables are the angle of attack, sideslip angle, and bank angle. A dynamic inversion control law is designed for the fast variables using the aerodynamic control surfaces and thrust vectoring control as inputs. Next, dynamic inversion is applied to the control of the slow states using commands for the fast states as inputs. The dynamic inversion system was compared with a more conventional, gain-scheduled system and was shown to yield better performance in terms of lateral acceleration, sideslip, and control deflections. <s> BIB002 </s> Control Allocation- A Survey <s> Unconstrained linear control allocation <s> The problem of controlling underwater mobile robots in 6 degrees of freedom (DOF) is addressed. Underwater mobile robots where the number of thrusters and control surfaces exceeds the number of controllable DOF are considered in detail. Unlike robotic manipulators underwater mobile robots should include a velocity dependent thruster configuration matrix B(q), which modifies the standard manipulator equation to: Mq̈ + C(q̇)q̇ + g(x) = B(q)u where ẋ = J(x)q̇. Uncertainties in the thruster configuration matrix due to unmodeled nonlinearities and partly known thruster characteristics are modeled as multiplicative input uncertainty. This article proposes two methods to compensate for the model uncertainties: (1) an adaptive passivity-based control scheme and (2) deriving a hybrid (adaptive and sliding) controller. The hybrid controller combines the adaptive scheme where M, C, and g are estimated on-line with a switching term added to the controller to compensate for uncertainties in the input matrix B. Global stability is ensured by applying Barbalat's Lyapunov-like lemma. The hybrid controller is simulated for the horizontal motion of the Norwegian Experimental Remotely Operated Vehicle (NEROV).
<s> BIB003 </s> Control Allocation- A Survey <s> Unconstrained linear control allocation <s> This paper addresses the problem of the allocation of several airplane flight controls to the generation of specified body-axis moments. The number of controls is greater than the number of moments being controlled, and the ranges of the controls are constrained to certain limits. They are assumed to be individually linear in their effect throughout their ranges of motion and independent of one another in their effects. The geometries of the subset of the constrained controls and of its image in moment space are examined. A direct method of allocating these several controls is presented that guarantees the maximum possible moment can be generated within the constraints of the controls. It is shown that no single generalized inverse can yield these maximum moments everywhere without violating some control constraint. A method is presented for the determination of a generalized inverse that satisfies given specifications which are arbitrary but restricted in number. We then pose and solve a minimization problem that yields the generalized inverse that best approximates the exact solutions. The results are illustrated at each step by an example problem involving three controls and two moments. <s> BIB004 </s> Control Allocation- A Survey <s> Unconstrained linear control allocation <s> This paper describes the results of recent research into the problem of allocating several flight control effectors to generate moments acting on a flight vehicle. The results focus on the use of various generalized inverse solutions and a hybrid solution utilizing daisy chaining. In this analysis, the number of controls is greater than the number of moments being controlled, and the ranges of the controls are constrained to certain limits. 
The control effectors are assumed to be individually linear in their effects throughout their ranges of motion and independent of one another in their effects. A standard of comparison is developed based on the volume of moments or moment coefficients a given method can yield using admissible control deflections. Details of the calculation of the various volumes are presented. Results are presented for a sample problem involving 10 flight control effectors. The effectivenesses of the various allocation schemes are contrasted during an aggressive roll about the velocity vector at low dynamic pressure. The performance of three specially derived generalized inverses, a daisy-chaining solution, and direct control allocation are compared. <s> BIB005 </s> Control Allocation- A Survey <s> Unconstrained linear control allocation <s> The control allocation problem is defined in terms of solving or approximately solving a system of linear equations subject to constraints for redundant actuator control variable commands. There is one equation for each controlled axis. The constraints arise from actuation rate and position limits. An axis priority weighting is introduced when the equations cannot be solved exactly because of the constraints. An actuator command preference weighting and preferred values are introduced to uniquely solve the equations when there are more unknowns than equations. Several approaches to this problem are discussed. The approaches are broadly grouped into linear and quadratic programming approaches. While the linear programming approach is well suited to solving the general problem, a computationally faster but approximate solution was found with the quadratic programming approach. Example results based on tailless aircraft flight control are presented to illustrate the key aspects of the approaches to control allocation. Conclusions and recommendations are stated relative to the various approaches. <s> BIB006
The main challenge of inverting the model (11) is that B is not a square matrix. Usually, for an over-actuated system, B will have full row rank (equal to m < p) and we will in general assume it has a non-trivial null space. This means there is an infinite number of vectors u ∈ R^p that satisfy (11) for any given τ ∈ R^m. The common way to deal with such extra freedom is to use generalized inverses (or pseudo-inverses), e.g. BIB001 . Below, we present this approach in the context of minimizing a least-squares cost function. Neglecting any saturation and rate constraints on the input u, and choosing for convenience a quadratic cost function that measures the cost of control action, leads to the control allocation cost function formulation min_u (u − u_p)^T W (u − u_p) subject to Bu = τ_c (12), where W ∈ R^{p×p} is a positive definite weighting matrix, and u_p is the preferred value of u. When B has full rank, this weighted least-squares problem has the following explicit solution u = (I − CB)u_p + Cτ_c (13), where C = W^{-1}B^T(BW^{-1}B^T)^{-1} (14) is a generalized inverse that can be derived from the optimality conditions of (12) using Lagrange multipliers, see e.g. BIB005 BIB006 BIB002 BIB004 BIB003 . For the special case W = I and u_p = 0, the solution u = B^+ τ_c is defined by the Moore-Penrose pseudo-inverse, BIB001 , given by C = B^+ = B^T(BB^T)^{-1}. Rank-deficiency of B means that no force or moment can be generated in certain directions of the space R^m where τ_c belongs. This means that not all commands τ_c can be achieved, even without considering saturation. Although the mechanical design of the effectors and actuators will normally avoid a rank-deficient B-matrix, it might appear in special cases like singularities or effector or actuator failures, so the control allocation algorithm should be able to handle it in some applications. Several regularization methods can be applied, like the damped least-squares inverse C_ε = W^{-1}B^T(BW^{-1}B^T + εI)^{-1}, where ε ≥ 0 is a small regularization parameter that must be strictly positive when B does not have full rank.
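The explicit solution (13) and its generalized inverse (14) can be checked numerically. In the sketch below the matrices B and W, the preferred value u_p, and the command τ_c are all illustrative assumptions; the code verifies that the weighted solution allocates τ_c exactly, and that with W = I and u_p = 0 it reduces to the Moore-Penrose pseudo-inverse.

```python
import numpy as np

# Weighted least-squares allocation (12): minimize (u - u_p)^T W (u - u_p)
# subject to B u = tau_c. All numerical values below are illustrative.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])        # m = 2 virtual inputs, p = 3 effectors
W = np.diag([1.0, 2.0, 4.0])           # penalize the effectors differently
u_p = np.array([0.1, 0.0, -0.1])       # preferred control vector
tau_c = np.array([1.0, 0.5])

Winv = np.linalg.inv(W)
C = Winv @ B.T @ np.linalg.inv(B @ Winv @ B.T)   # generalized inverse (14)
u = (np.eye(3) - C @ B) @ u_p + C @ tau_c        # explicit solution (13)

assert np.allclose(B @ u, tau_c)       # the allocation is exact (B C = I)

# Special case W = I, u_p = 0: (14) reduces to the Moore-Penrose inverse.
assert np.allclose(np.linalg.pinv(B) @ tau_c,
                   B.T @ np.linalg.inv(B @ B.T) @ tau_c)
```

Note that B C = I by construction, which is why the equality constraint in (12) holds exactly whenever B has full row rank.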
Alternatively, a singular value decomposition (SVD) of the matrix BW^{-1}B^T = UΣV^T will characterize the directions where no generalized force can be produced. The matrix Σ = diag(σ_1, σ_2, ..., σ_m) contains the singular values. Inverting only the singular values that are non-zero (with some small margin δ > 0) leads to the reduced rank approximation BW^{-1}B^T ≈ ∑_{i=1}^{r} σ_i u_i v_i^T, where u_i and v_i denote the columns of U and V, and r is the number of singular values larger than the regularization parameter δ, i.e. σ_i ≥ δ. This leads to the approximate inverse C_r = W^{-1}B^T ∑_{i=1}^{r} σ_i^{-1} v_i u_i^T to be used instead of C in (13). The SVD can also be used when B has full rank.
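A minimal sketch of the two regularization routes for a rank-deficient B, under illustrative data (a B whose two rows are dependent, so one generalized-force direction is lost): the damped least-squares inverse stays finite where the plain inverse would fail, and SVD thresholding exposes the effective rank r.

```python
import numpy as np

# Illustrative rank-deficient effector matrix, e.g. after a failure that
# makes the two virtual-input directions dependent (W = I for simplicity).
B = np.array([[1.0, 1.0, 0.0],
              [2.0, 2.0, 0.0]])        # rank 1
eps = 1e-6
tau_c = np.array([1.0, 2.0])           # lies in the attainable range of B

# Damped least-squares inverse: finite even though B B^T is singular.
C_eps = B.T @ np.linalg.inv(B @ B.T + eps * np.eye(2))
u = C_eps @ tau_c
assert np.allclose(B @ u, tau_c, atol=1e-3)   # near-exact for attainable tau_c

# SVD alternative: count singular values above a threshold delta to find
# the effective rank r, then only those directions would be inverted.
U, s, Vt = np.linalg.svd(B @ B.T)
delta = 1e-9
r = int(np.sum(s > delta))
assert r == 1                          # confirms the rank deficiency
```

For a command τ_c outside the attainable range, both regularized inverses return a finite u whose image Bu is only an approximation of τ_c, which is the intended graceful degradation.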
Control Allocation- A Survey <s> Redistributed pseudo-inverse and Daisy chaining <s> A demonstration that feedback control of systems with redundant controls can be reduced to feedback control of systems without redundant controls and control allocation is presented. It is shown that control allocation can introduce unstable zero dynamics into the system, which is important if input/output inversion control techniques are utilized. The daisy chain control allocation technique for systems with redundant groups of controls is also presented. Sufficient conditions are given to ensure that the daisy chain control allocation does not introduce unstable zero dynamics into the system. Aircraft flight control examples are given to demonstrate the derived results. <s> BIB001 </s> Control Allocation- A Survey <s> Redistributed pseudo-inverse and Daisy chaining <s> The performanceand computational requirements ofoptimization methodsfor control allocation areevaluated. Two control allocation problems are formulated: a direct allocation method that preserves the directionality of the moment and a mixed optimization method that minimizes the error between the desired and the achieved momentsaswellasthecontroleffort.Theconstrainedoptimizationproblemsaretransformedinto linearprograms so that they can be solved using well-tried linear programming techniques such as the simplex algorithm. A variety of techniques that can be applied for the solution of the control allocation problem in order to accelerate computations are discussed. Performance and computational requirements are evaluated using aircraft models with different numbers of actuators and with different properties. In addition to the two optimization methods, three algorithms with low computational requirements are also implemented for comparison: a redistributed pseudoinverse technique, a quadratic programming algorithm, and a e xed-point method. 
The major conclusion is that constrained optimization can be performed with computational requirements that fall within an order of magnitude of those of simpler methods. The performance gains of optimization methods, measured in terms of the error between the desired and achieved moments, are found to be small on the average but sometimes significant. A variety of issues that affect the implementation of the various algorithms in a flight-control system are discussed. <s> BIB002 </s> Control Allocation- A Survey <s> Redistributed pseudo-inverse and Daisy chaining <s> Allocation efficiency is an important performance index to measure the quality of the allocation algorithm. In order to compute the efficiency, the volume of the subset of attainable moments must be solved. The efficiency of the redistributed pseudo inverse (RPI) algorithm depends on the choice of the pseudo-inverse matrix. The subset of attainable moments of RPI is a complex non-convex polyhedron. By analyzing two-dimensional and three-dimensional allocation problems with a “micro-element” method, here we propose an approximate calculation algorithm to compute the volume of the non-convex polyhedron. In order to improve the allocation efficiency of RPI, genetic algorithm is used to find the best pseudo-inverse matrix. The simulation results show that the best pseudo-inverse matrix can be easily chosen by the proposed method and the high allocation efficiency is achieved.
The first step of the redistributed pseudo-inverse method (see e.g. BIB003 ) is to solve the unconstrained control allocation problem, such as (12) (or a simpler version). If the solution satisfies the constraints, no further steps are needed. Otherwise, the unconstrained optimal vector u is projected onto the admissible set U (i.e. saturated) to satisfy the constraints: ū = Proj_U(u). In order to reduce the gap between the desired and allocated generalized forces, the unsaturated elements of the control vector u are re-computed by solving a reduced problem using a reduced pseudo-inverse. More specifically, let ū = (ū_C^T, ū_U^T)^T be decomposed into the saturated elements ū_C and unsaturated elements ū_U, and let B = (B_C, B_U) be the associated decomposition of the B-matrix. Then τ̄_C = B_C ū_C is the allocated generalized force due to the saturated controls, and the remaining controls u_U are redistributed by solving the redistribution equation B_U u_U = τ_c − τ̄_C using the pseudo-inverse method. Then new elements of the sub-vector u_U may be saturated, and the redistribution procedure is repeated until either a feasible solution (that gives exact generalized force allocation) is found, or no further improvement can be made. Although the method is simple, and often effective, it neither guarantees that a feasible solution is found whenever possible, nor that the resulting control allocation minimizes the allocation error in some sense. There are examples (see e.g. BIB002 ) that demonstrate that clearly sub-optimal control allocation can result. The daisy chaining method (Adams, Buffington, Sparks & Banda 1994 BIB001 ) offers a very simple alternative, but is often less effective than the above mentioned methods. This method groups the effectors into two or more groups that are ranked such that the control allocation problem is first solved for the highest prioritized group. If one or more effectors in that group saturates, the settings of the whole group are frozen.
The gap between the allocated and required generalized forces is then allocated by the second group. This is repeated if a feasible solution is still not found and there are more than two groups. Depending on the selected groups, this may lead to solutions where several effectors are not fully utilized to minimize the allocation error, and the result can be sub-optimal compared to the redistributed pseudo-inverse.
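As an illustration, the redistribution loop and the daisy-chaining scheme described above can be sketched in NumPy as follows. This is a minimal sketch assuming simple box constraints; the function and variable names are illustrative and not taken from the cited implementations.

```python
import numpy as np

def redistributed_pseudoinverse(B, tau_c, u_min, u_max, max_iter=10):
    """Redistributed pseudo-inverse allocation (sketch).

    Repeatedly saturates violating controls and re-allocates the
    remaining generalized force over the unsaturated controls.
    """
    p = B.shape[1]
    u = np.zeros(p)
    free = np.ones(p, dtype=bool)       # unsaturated controls
    tau_rem = tau_c.copy()              # force still to be allocated
    for _ in range(max_iter):
        if not free.any():
            break
        # unconstrained allocation over the free controls
        u_free = np.linalg.pinv(B[:, free]) @ tau_rem
        sat_lo = u_free < u_min[free]
        sat_hi = u_free > u_max[free]
        if not (sat_lo.any() or sat_hi.any()):
            u[free] = u_free            # feasible: done
            break
        # clip violating controls to their limits and freeze them
        u_free = np.clip(u_free, u_min[free], u_max[free])
        idx = np.where(free)[0]
        newly_sat = idx[sat_lo | sat_hi]
        u[newly_sat] = u_free[sat_lo | sat_hi]
        free[newly_sat] = False
        # force remaining after subtracting saturated contributions
        tau_rem = tau_c - B[:, ~free] @ u[~free]
    return u

def daisy_chain(B, tau_c, u_min, u_max, groups):
    """Daisy-chain allocation over prioritized effector groups (sketch).

    groups: list of index arrays, highest priority first.
    """
    u = np.zeros(B.shape[1])
    tau_rem = tau_c.copy()
    for g in groups:
        u_g = np.linalg.pinv(B[:, g]) @ tau_rem
        u[g] = np.clip(u_g, u_min[g], u_max[g])
        tau_rem = tau_c - B @ u         # residual passed to next group
        if np.allclose(tau_rem, 0.0):
            break                       # feasible allocation found
    return u

# small demos: one overactuated axis, two effectors
u_rpi = redistributed_pseudoinverse(np.array([[2.0, 1.0]]), np.array([3.0]),
                                    np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
u_dc = daisy_chain(np.array([[1.0, 1.0]]), np.array([1.5]),
                   np.array([-1.0, -1.0]), np.array([1.0, 1.0]),
                   [np.array([0]), np.array([1])])
```

Note that the daisy-chain sketch freezes a group simply by moving on to the next one after clipping, which reproduces the prioritized behaviour described above for box constraints.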
Direct allocation
Some constrained control allocation methods are based on a scaling of the unconstrained optimal control allocation, such that the resulting control allocation is projected onto the boundary of the set of attainable generalized forces. In aerospace applications this is commonly referred to as the attainable moment set (AMS), since moments in 3-DOF are normally allocated. Here, the set of attainable generalized forces is denoted A; it is the set of vectors τ ∈ R^m for which the constrained optimization problem (e.g. (8)) has a feasible solution. The direct allocation method BIB001 starts with the unconstrained control allocation computed using some pseudo-inverse, e.g. ũ = B^+ τ_c. If ũ ∈ U (i.e. it satisfies the input constraints), no further steps are needed and we use u = ũ. Otherwise the method searches for another u that preserves the direction of τ_c but leads to an allocated generalized force Bu on the boundary of A:

max_{α,u} α subject to Bu = α τ_c, u ∈ U    (19)

where α ∈ [0, 1] is a scalar. Notice that when the set U is polyhedral, then A is also a polyhedral set. Solving the optimization problem (19) is not trivial for problems where the dimension of u is large, since there will be a significant number of facets and vertices, and it is not straightforward to identify which facet is intersected by the straight line from τ_c to the origin. Different numerical algorithms have been suggested, with different computational complexities. Improvements over the original algorithm BIB001 are based on various data structures, enumerations and representations BIB002 BIB003 BIB005 as well as linear programming BIB004 .
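Since U is polyhedral for box constraints, problem (19) can be posed as a small linear program in the stacked variable (u, α). The following sketch uses SciPy's general-purpose linprog; it is an illustrative reformulation, not one of the specialized geometric algorithms cited above.

```python
import numpy as np
from scipy.optimize import linprog

def direct_allocation(B, tau_c, u_min, u_max):
    """Direct allocation via linear programming (sketch).

    Solves  max alpha  s.t.  B u = alpha * tau_c,
    u_min <= u <= u_max,  0 <= alpha <= 1,
    which preserves the direction of tau_c.
    """
    m, p = B.shape
    # decision vector z = (u, alpha); maximize alpha -> minimize -alpha
    c = np.zeros(p + 1)
    c[-1] = -1.0
    A_eq = np.hstack([B, -tau_c.reshape(-1, 1)])  # B u - alpha tau_c = 0
    b_eq = np.zeros(m)
    bounds = [(lo, hi) for lo, hi in zip(u_min, u_max)] + [(0.0, 1.0)]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    u, alpha = res.x[:p], res.x[-1]
    return u, alpha

# demo: tau_c = 4 is outside the attainable set (max of 2u1+u2 is 3)
B = np.array([[2.0, 1.0]])
u_da, alpha = direct_allocation(B, np.array([4.0]),
                                np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
```

When α = 1 is returned, τ_c is attainable and Bu = τ_c exactly; otherwise the allocated force lies on the boundary of A in the direction of τ_c.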
Error minimization using linear programming
A powerful approach is to explicitly minimize the weighted error between the allocated virtual control input and the desired one. Extending the unconstrained optimization problem formulation (12) with input constraints leads to formulations such as (8). The constraint set U is usually polyhedral, i.e. for some appropriate matrix A and vector b it can be represented as U = {u : Au ≤ b}. Rate constraints C can be formulated as a polyhedral set, too. With the cost function defined using either the 1-norm or the ∞-norm, the resulting problem is a linear program (LP) that can be solved using iterative numerical LP algorithms (e.g. BIB001 BIB002 BIB004 ) after relatively straightforward reformulation into any of the standard LP forms via the introduction of additional variables. As an example, consider the 1-norm control allocation problem

min_{u,s} Σ_i q_i |s_i| + Σ_j w_j |u_j|

subject to

Bu = τ_c + s
u_min ≤ u ≤ u_max

With symmetric effectors and actuators, we have u_min = −u_max and δ_min = −δ_max. Introducing non-negative auxiliary variables, we have s_i = s_i^+ − s_i^− with |s_i| = s_i^+ + s_i^−, and u_j = u_j^+ − u_j^− with |u_j| = u_j^+ + u_j^−. Stacking these variables into vectors s^+, s^−, u^+, u^− and defining w = (w_1, ..., w_p)^T and q = (q_1, ..., q_m)^T, we get the following linear program

min q^T (s^+ + s^−) + w^T (u^+ + u^−)

subject to

B(u^+ − u^−) − (s^+ − s^−) = τ_c
0 ≤ u^+ ≤ u_max, 0 ≤ u^− ≤ u_max, s^+ ≥ 0, s^− ≥ 0    (29)

Other LP standard forms exist, and similar reformulations can be made, e.g. BIB002 . The use of the ∞-norm will minimize the maximum effector use and therefore lead to a balanced use of effectors BIB005 BIB006 BIB004 ; it can also be reformulated into linear programs using similar augmentations with auxiliary variables. The use of slack variables in the above formulations ensures that a feasible solution always exists. It should be mentioned that infeasibility handling can also be included via a two-level approach (e.g. BIB002 ). While this might reduce the computational complexity on average, it may not reduce the worst-case computational complexity, which is usually the main concern in real-time implementations.
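A minimal sketch of the 1-norm reformulation, solved with SciPy's general-purpose linprog rather than a tailored simplex implementation (symmetric limits assumed; names illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def l1_allocation(B, tau_c, u_max, w, q):
    """1-norm control allocation as a linear program (sketch).

    Symmetric limits assumed: -u_max <= u <= u_max.  Decision vector
    z = (u+, u-, s+, s-) >= 0 with u = u+ - u-, s = s+ - s-.
    Minimizes  w^T(u+ + u-) + q^T(s+ + s-)  s.t.  B u - s = tau_c.
    """
    m, p = B.shape
    c = np.concatenate([w, w, q, q])
    A_eq = np.hstack([B, -B, -np.eye(m), np.eye(m)])
    bounds = ([(0.0, ub) for ub in u_max] * 2 +   # u+, u- bounded by limits
              [(0.0, None)] * (2 * m))            # slacks unbounded above
    res = linprog(c, A_eq=A_eq, b_eq=tau_c, bounds=bounds, method="highs")
    z = res.x
    u = z[:p] - z[p:2 * p]
    s = z[2 * p:2 * p + m] - z[2 * p + m:]
    return u, s

# demo: heavy slack penalty, effector 1 cheaper than effector 2
u_l1, s_l1 = l1_allocation(np.array([[1.0, 1.0]]), np.array([1.0]),
                           np.array([1.0, 1.0]),
                           w=np.array([1.0, 2.0]), q=np.array([100.0]))
```

The demo also illustrates the vertex property discussed below: the LP solution uses the single cheapest effector at its limit rather than spreading the load.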
The most common numerical methods for linear programming are the simplex method, active set methods and interior-point methods. The simplex method is studied for control allocation problems in BIB002 , where the main conclusion is that the computational complexity is clearly within the capabilities of current embedded computer hardware technology. The simplex method iterates between vertices of the polyhedral set describing the set of feasible solutions, where at each iteration a system of linear equations corresponding to a basic solution is solved using numerical linear algebra. Since there is a finite number of basic solutions, the simplex algorithm is a combinatorial approach that finds the optimal solution in a finite number of iterations. The simplex algorithm usually beats the combinatorial complexity of the problem by reducing the cost function at each iteration. Many numerically robust implementations of the simplex method exist, including portable C code BIB003 , which makes the approach fairly straightforward to apply on many embedded control platforms. However, there are some issues that may require special attention. Although the simplex method tends to converge to the optimal solution within a number of iterations that is no larger than the number of variables and constraints BIB003 , it is hard to guarantee a limit on the number of iterations. Hence, the control allocation may have to accept some degree of sub-optimality, since only a limited number of iterations may be allowed in a real-time implementation. Degeneracies in the problem are characterized by redundant constraints. They may lead to non-uniqueness and singular linear algebraic inversion problems that, in combination with numerical inaccuracies, may require additional considerations. Due to degeneracies, a change in the basic solution during one iteration may lead to a new basic solution where the cost remains the same.
If particular care is not taken, a phenomenon called cycling may arise, where the solver jumps back and forth between the same set of basic solutions forever without making any progress towards the optimum. As observed in BIB002 , anti-cycling procedures are indeed needed, since symmetric effectors may easily lead to degeneracies in the control allocation problem. Fairly efficient general procedures are available for finding a feasible initial point to start the simplex method. In the control allocation problems above, the use of slack variables s makes it trivial to find a feasible initial point since there are no constraints on s, see also BIB002 . Moreover, in control allocation problems, the solution from the previous sample is often a good initial guess for the current solution, since the problem parameters (including τ_c) often do not change significantly from one sample to the next. Still, one needs to keep in mind that there may be exceptions due to failures or abrupt command changes. The number of iterations needed in the simplex algorithm may be reduced (at least on average) by explicitly exploiting such information for warm start. Optimal solutions of LPs are found at vertices of the feasible set. A consequence of this is that LP-based methods tend to favor the use of a smaller number of effectors, while methods based on a quadratic cost function or the ∞-norm tend to use all effectors, but each to a smaller degree BIB002 . This seems to be the main reason why error minimization approaches are based on quadratic programming in most cases.
Error minimization using quadratic programming
With the common choice of the 2-norm, the control allocation problem leads to a quadratic program (QP) that can be solved using numerical QP methods, e.g. BIB003 BIB004 BIB005 . As an example of such a formulation, consider the control allocation problem

min_{u,s} Σ_j w_j u_j^2 + Σ_i q_i s_i^2

subject to

Bu = τ_c + s
u_min ≤ u ≤ u_max

It can be transformed into a standard QP form without additional variables,

min_z (1/2) z^T H z

subject to

A_eq z = b_eq
G z ≤ h

where z = (u^T, s^T)^T, A_eq = (B, −I), b_eq = τ_c, the inequality G z ≤ h collects the bounds on u, and H = 2 · diag(w_1, ..., w_p, q_1, ..., q_m). When all weights in the cost function are strictly positive (which they normally should be in a control allocation problem), the QP is strictly convex with H positive definite. With the use of slack variables, a feasible solution always exists and the problem always admits a unique optimal solution. It should be noted that several variations of the formulation can be made, including the use of a 1-norm for the slack variables, which may have the advantage that, with appropriate tuning, the slack is zero whenever feasible; a property known as an exact penalty function, e.g. BIB001 . QPs are usually solved using active set methods or interior-point methods, and both have been studied in the context of control allocation BIB003 BIB005 . An iterative fixed-point method has also been proposed and shown to be efficient BIB002 . Active set methods are iterative methods, where at each iteration they improve their guess of the optimal active set. The optimal active set is the set of indices of the inequality constraints that are active (i.e. the inequality holds with equality) at the optimum. At each iteration, a search direction is computed based on the assumption of the working active set. The algorithm then searches for better solutions along this search direction, and either finds the optimum or detects how the working active set needs to be changed in order to make further progress towards the optimum.
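For illustration, note that eliminating s = Bu − τ_c turns the 2-norm problem into a bound-constrained least-squares problem, which can be solved with SciPy's lsq_linear. This is an equivalent reformulation used here for brevity, not the active set or interior-point solvers discussed in the text.

```python
import numpy as np
from scipy.optimize import lsq_linear

def l2_allocation(B, tau_c, u_min, u_max, w, q):
    """2-norm control allocation (sketch).

    With s = B u - tau_c eliminated, the QP
        min  sum_j w_j u_j^2 + sum_i q_i s_i^2,  u_min <= u <= u_max
    becomes the bound-constrained least-squares problem
        min || A u - b ||^2,
    with A = [Q^(1/2) B; W^(1/2)] and b = [Q^(1/2) tau_c; 0].
    """
    Qh = np.diag(np.sqrt(q))
    Wh = np.diag(np.sqrt(w))
    A = np.vstack([Qh @ B, Wh])
    b = np.concatenate([Qh @ tau_c, np.zeros(B.shape[1])])
    res = lsq_linear(A, b, bounds=(u_min, u_max))
    return res.x

# demo: large slack weight forces B u close to tau_c,
# equal control weights spread the load over both effectors
B = np.array([[1.0, 1.0]])
u_l2 = l2_allocation(B, np.array([1.0]),
                     np.array([-1.0, -1.0]), np.array([1.0, 1.0]),
                     w=np.array([1.0, 1.0]), q=np.array([1000.0]))
```

In contrast to the LP example, the quadratic cost distributes the demand over all effectors, consistent with the observation above.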
More specifically, consider the QP in standard form

min_z (1/2) z^T H z + c^T z subject to A z = b, G z ≤ h    (35)

An active set algorithm for QP, as described in BIB003 , can be used to solve the control allocation problem at each sample.

Basic active set quadratic programming algorithm.

Initialization: Let z_0 be a feasible starting point for (35) (possibly based on the solution from the previous sample), and let the working active set W_0 contain the indices of the active inequality constraints at z_0. Set the iteration index k = 0.

Repeat:
(1) Given z_k, find the optimal direction p_k by solving

min_p (1/2)(z_k + p)^T H (z_k + p) + c^T (z_k + p) subject to A p = 0, G_{W_k} p = 0    (36)

where G_{W_k} contains the rows of G indexed by the working active set W_k.
(2) If z_k + p_k is feasible, then set z_{k+1} = z_k + p_k, and compute the vectors of Lagrange multipliers μ_k and λ_k associated with the equality and inequality constraints, respectively.
(a) If λ_k ≥ 0, then z_{k+1} is the optimal solution. Terminate.
(b) Otherwise, remove the constraint associated with the most negative Lagrange multiplier in the vector λ_k from the working active set to define W_{k+1}. Increment k and repeat.
(3) Otherwise, apply a line search procedure to determine the maximum step α ≥ 0 such that z_{k+1} = z_k + α p_k is feasible. Add the bounding constraints to the working active set to define W_{k+1}. Increment k and repeat.
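Step (1) of the algorithm amounts to solving an equality-constrained QP, which can be done directly via its KKT system. A minimal sketch, assuming the stacked constraint matrix A_W (equality constraints plus working active inequality constraints) has full row rank:

```python
import numpy as np

def qp_direction(H, c, z_k, A_W):
    """Solve for the active-set search direction p_k (sketch).

    Minimizes (1/2)(z_k+p)^T H (z_k+p) + c^T (z_k+p) subject to
    A_W p = 0.  The corresponding KKT system is
        [H    A_W^T] [p ]   [-(H z_k + c)]
        [A_W  0    ] [nu] = [ 0          ]
    """
    n = H.shape[0]
    mW = A_W.shape[0]
    KKT = np.block([[H, A_W.T],
                    [A_W, np.zeros((mW, mW))]])
    rhs = np.concatenate([-(H @ z_k + c), np.zeros(mW)])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n], sol[n:]   # direction p_k and multipliers

# demo: minimize ||z||^2 on the line z_1 + z_2 = 1, starting at (1, 0)
H = 2.0 * np.eye(2)
c = np.zeros(2)
z_k = np.array([1.0, 0.0])
A_W = np.array([[1.0, 1.0]])
p_k, nu = qp_direction(H, c, z_k, A_W)
```

Here the step p_k = (-0.5, 0.5) moves z_k to the constrained minimizer (0.5, 0.5); the sign of the returned multipliers is what step (2) inspects to decide whether a working constraint should be dropped.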
Interior-point methods, on the other hand, replace the inequality constraints with a barrier function that prevents the solution from entering the infeasible region. Newton's method is then applied to search towards the optimum of the unconstrained optimization problem resulting from this reformulation. For each iteration, the barrier function is reduced in order to allow the solver to approach the boundary of the feasible region in case the optimal solution is located there. Active set methods tend to perform well in control allocation problems, BIB001 , while interior-point methods have their advantage for larger-scale problems BIB003 . Active set methods can be initialized with the solution from the previous sample (known as warm start), which is often a good guess for the optimal solution at the current sample. This may reduce the number of iterations needed to find the optimal solution in many cases. Interior-point methods are generally initialized with points near the center of the feasible region and will always need a minimum number of iterations in order to converge, due to the need to reduce the barrier function penalty in several steps. Warm start procedures are therefore difficult to implement for interior-point methods. As in LP, it is hard to give guarantees on the maximum number of iterations and computation time needed to find the optimal solution. Hence, some degree of sub-optimality may need to be accepted in order to respect limitations on computational resources and meet real-time constraints. Numerical challenges with degeneracies and cycling must be addressed in QPs as well. Several implementations of active set QP solvers were studied for control allocation problems in BIB001 , with fairly modest differences in computational complexity being observed.
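For comparison with the exact solvers discussed above, the redistributed pseudoinverse heuristic mentioned in BIB001 can be sketched in a few lines of numpy. The B-matrix, limits and commanded virtual control below are purely illustrative, and the method is only approximate in general:

```python
import numpy as np

def redistributed_pseudoinverse(B, v, lb, ub):
    """Approximate solution of min ||B u - v|| s.t. lb <= u <= ub.

    Actuators that violate their limits are clipped and frozen, and the
    residual demand is redistributed among the remaining free actuators.
    This heuristic is approximate in general; exact active set solvers
    refine the same idea.
    """
    m = B.shape[1]
    free = np.ones(m, dtype=bool)
    u = np.zeros(m)
    for _ in range(m):
        # demand left after subtracting the contribution of frozen actuators
        v_res = v - B[:, ~free] @ u[~free]
        u[free] = np.linalg.pinv(B[:, free]) @ v_res
        viol = free & ((u < lb - 1e-12) | (u > ub + 1e-12))
        if not viol.any():
            break
        u[viol] = np.clip(u[viol], lb[viol], ub[viol])
        free &= ~viol
        if not free.any():
            break
    return np.clip(u, lb, ub)

# illustrative example: 2 virtual controls, 3 actuators
B = np.array([[1.0, 1.0, 0.5],
              [0.5, -1.0, 1.0]])
lb, ub = np.array([-1.0, -1.0, -1.0]), np.array([1.0, 1.0, 1.0])
v = np.array([2.2, 0.3])
u = redistributed_pseudoinverse(B, v, lb, ub)
```

With this demand the unconstrained pseudoinverse solution violates the first actuator's limit; the heuristic freezes it at its upper bound and re-solves for the remaining two actuators.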
For non-real-time implementations, the Matlab toolbox QCAT is dedicated to QP-based control allocation and implements the methods of BIB001 BIB003 . For real-time implementation on embedded systems, there exist a few portable C code solvers such as the FORTRAN-to-C-converted active set solver QLD, the interior-point automatic code generation tool CVXGEN, the active-set-like solver qpOASES, and the conjugate gradient method, BIB002 . Recently, real-time certification and software for automated C-code generation of first-order (fast gradient) methods have become available, BIB005 . In , this strategy was studied in an example and shown to produce adequate results, with only a minor increase in computational complexity (due to linearization and quadratic approximation) compared to a quadratic programming approach to linear control allocation. However, this conclusion cannot be expected to generalize to arbitrary nonlinear control allocation problems. In particular, applications that require very large changes in allocated forces from one sample to the next may require several linear/quadratic approximations to be computed sequentially in order to achieve the necessary accuracy, see BIB004 . In a full SQP implementation, e.g. , the linearization and QP steps in the above algorithm are repeated iteratively during one sampling instant until the optimality conditions are satisfied. With N iterations, the computation time would be roughly N times that of the sequential linearization and quadratic programming algorithm above. Applications with strong nonlinearities may lead to non-convex cost or constraint functions, such that the optimization may get stuck in local minima that severely degrade performance, or require additional computational mechanisms in order to find close-to-global optimal solutions.
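The sequential linearization and quadratic programming strategy can be illustrated with a small sketch. The quadratic effector model h(u), its Jacobian and the damping value are invented for the example, and each bounded least-squares (QP) subproblem is solved with SciPy's lsq_linear:

```python
import numpy as np
from scipy.optimize import lsq_linear

def h(u):
    # hypothetical nonlinear effector model with quadratic interaction terms
    return np.array([u[0] + 0.3 * u[1] ** 2,
                     u[1] + 0.3 * u[0] ** 2])

def jac(u):
    # Jacobian of h, used as the locally linearized B-matrix
    return np.array([[1.0, 0.6 * u[1]],
                     [0.6 * u[0], 1.0]])

def sl_qp_allocate(v, lb, ub, iters=20, damping=1e-6):
    """Sequentially linearize h about the current iterate, solve a bounded
    least-squares (QP) subproblem for the step d, then update u <- u + d."""
    u = np.zeros_like(lb)
    for _ in range(iters):
        Bk = jac(u)
        # damped Gauss-Newton step: min ||Bk d - (v - h(u))||^2 + damping*||d||^2
        A = np.vstack([Bk, np.sqrt(damping) * np.eye(len(u))])
        b = np.concatenate([v - h(u), np.zeros(len(u))])
        res = lsq_linear(A, b, bounds=(lb - u, ub - u))
        u = u + res.x
        if np.linalg.norm(res.x) < 1e-10:
            break
    return u

lb, ub = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
v = np.array([0.5, 0.8])
u = sl_qp_allocate(v, lb, ub)
```

Each pass re-linearizes about the current iterate, so large commanded changes are absorbed through several linear/quadratic approximations, as discussed above.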
Unlike the linear control allocation case, there is little hope of finding a general-purpose nonlinear programming algorithm and numerical software implementation for general nonlinear allocation problems. While a general nonlinear optimization framework can accommodate any cost function, any model and any type of constraints, it is of great interest to study control allocation for specific classes of nonlinearities and constraints. By exploiting structural properties, one may pursue the analysis of theoretical properties such as guaranteed convergence to optimal solutions without an excessive amount of computation.
Control Allocation- A Survey <s> Dynamics and fault tolerance <s> To enable autonomous operation of future reusable launch vehicles, reconfiguration technologies will be needed to facilitate mission recovery following a major anomalous event. The Air Force’s Integrated Adaptive Guidance and Control program developed such a system for Boeing’s X-40A, and the total in-flight simulator research aircraft was employed to flight test the algorithms during approach and landing. The inner loop employs a modelfollowing/dynamic-inversion approach with optimal control allocation to account for control-surface failures. Further, the reference-model bandwidth is reduced if the control authority in any one axis is depleted as a result of control effector saturation. A backstepping approach is utilized for the guidance law, with proportional feedback gains that adapt to changes in the reference model bandwidth. The trajectory-reshaping algorithm is known as the optimum-path-to-go methodology. Here, a trajectory database is precomputed off line to cover all variations under consideration. An efficient representation of this database is then interrogated in flight to rapidly find the “best” reshaped trajectory, based on the current state of the vehicle’s control capabilities. The main goal of the flight-test program was to demonstrate the benefits of integrating trajectory reshaping with the essential elements of control reconfiguration and guidance adaptation. The results indicate that for more severe, multiple control failures, control reconfiguration, guidance adaptation, and trajectory reshaping are all needed to recover the mission. <s> BIB001 </s> Control Allocation- A Survey <s> Dynamics and fault tolerance <s> A model predictive, dynamic control allocation algorithm is developed in this paper for the inner loop of a re-entry vehicle guidance and control system. 
The purpose of the control allocation portion of the guidance and control architecture is to distribute control power among redundant control effectors to meet the desired control objectives under a set of constraints. Most existing algorithms neglect the actuator dynamics or deal with the actuator dynamics separately, thereby assuming a static relationship between actuator outputs (in our case, control surface deflections) and plant inputs (i.e., moments about the three body axes). We propose a dynamic control allocation scheme based on model-based predictive control (MPC) that directly takes into account actuators with non-negligible dynamics and hard constraints. Model-based predictive control schemes compute the control inputs by optimizing an open-loop control objective over a future time interval at each control step. In our setup, the model-predictive control allocation problem is posed as a sequential quadratic programming problem with dynamic constraints, which can be cast into a linear complementarity problem (LCP) and therefore solved by linear programming approaches in a finite number of iterations. The time-varying affine internal model used in the MPC design enhances the ability of the control loop to deal with unmodeled system nonlinearities. The approach can be easily extended to encompass a variety of linear actuator dynamics without the need to redesign the overall scheme. Results are based on the model of an experimental reusable launch vehicle, and compared with that of existing static control allocation schemes. <s> BIB002 </s> Control Allocation- A Survey <s> Dynamics and fault tolerance <s> Given a time history of desired moments, the control allocation problem is to solve for the effector inputs so that some norm of the error between the achieved and desired moments is minimized. Existing methods solve for the actuator deflections, while accounting for magnitude and rate limitations of the effectors.
In this paper, we propose the dynamic control allocation (DCA) Method, that also accounts for effector dynamics, in addition to magnitude and rate limits. We show through numerical experiments that the DCA method allocates the desired moments according to effector bandwidths - that is the slow effectors are allocated the lower frequencies in the desired moments. The numerical simulations also show that the DCA outperforms the existing simplex algorithm based LP method, that does not account for actuator dynamics. <s> BIB003 </s> Control Allocation- A Survey <s> Dynamics and fault tolerance <s> In this paper, the problem of control allocation - distribution of control power among redundant control effectors, under a set of constraints - for the inner loop of a re-entry vehicle guidance and control system is studied. Our control allocation scheme extends a previously developed model-predictive algorithm by providing asymptotic tracking of time-varying control input commands. The approach accounts for non-negligible dynamics of the actuators with hard constraints, setting it apart from most existing control allocation schemes, where a static relationship between control surface deflections (actuator outputs) and moments about a three-body axis (plant inputs) is assumed. The approach is readily extended to encompass a variety of linear actuator dynamics without the need for redesign of the overall control allocation scheme, allowing for increased effectiveness of the inner loop in terms of speed of maneuverability. Simulation results, with consideration given toward implementation, are provided for an experimental reusable launch vehicle, and are compared to those of static control allocation schemes. 
<s> BIB004 </s> Control Allocation- A Survey <s> Dynamics and fault tolerance <s> Overactuated systems often arise in automotive, aerospace, and robotics applications, where for reasons of redundancy or performance constraints, it is beneficial to equip a system with more control inputs than outputs. This necessitates control allocation methods that distribute control effort amongst many actuators to achieve a desired effect. Until recently, most methods have treated the control allocation as static in the sense that different dynamic authorities of the actuators were not taken into account. Recent advances have used model predictive control allocation (MPCA) to consider the dynamic authorities of the actuators over a receding horizon. In this paper, we consider the dynamic control allocation problem for overactuated systems where each actuator has different dynamic control authority and hard saturation limits. A modular control design approach is proposed, where the controller consists of an outer loop controller that synthesizes a desired virtual control input signal and an inner loop controller that uses MPCA to achieve the desired virtual control signal. We derive sufficient stability conditions for the composite feedback system and show how these conditions may be realized by imposing an additional constraint on the MPCA design. An automotive example is provided to illustrate the effectiveness of the proposed algorithm. <s> BIB005 </s> Control Allocation- A Survey <s> Dynamics and fault tolerance <s> This paper proposes an on-line sliding mode control allocation scheme for fault tolerant control. The effectiveness level of the actuators is used by the control allocation scheme to redistribute the control signals to the remaining actuators when a fault or failure occurs. The paper provides an analysis of the sliding mode control allocation scheme and determines the nonlinear gain required to maintain sliding. 
The on-line sliding mode control allocation scheme shows that faults and even certain total actuator failures can be handled directly without reconfiguring the controller. The simulation results show good performance when tested on different fault and failure scenarios. <s> BIB006 </s> Control Allocation- A Survey <s> Dynamics and fault tolerance <s> In this paper we address control systems with redundant actuators and characterize the concepts of weak and strong input redundancy. Based on this characterization, we propose a dynamic augmentation to a control scheme which performs the plant input allocation with the goal of employing each actuator in a suitable way, based on its magnitude and rate limits. The proposed theory is first developed for redundant plants without saturation and then extended to the case of magnitude saturation first and of magnitude and rate saturation next. Several simulation examples illustrate the proposed technique and show its advantages for practical application. <s> BIB007 </s> Control Allocation- A Survey <s> Dynamics and fault tolerance <s> This paper presents a fault-tolerant adaptive control allocation scheme for overactuated systems subject to loss of effectiveness actuator faults. The main idea is to use an ‘ad hoc’ online parameters estimator, coupled with a control allocation algorithm, in order to perform online control reconfiguration whenever necessary. Time-windowed and recursive versions of the algorithm are proposed for nonlinear discrete-time systems and their properties analyzed. Two final examples have been considered to show the effectiveness of the proposed scheme. The first considers a simple linear system with redundant actuators and it is mainly used to exemplify the main properties and potentialities of the scheme. In the second, a realistic marine vessel scenario under propeller and thruster faults is treated in full details. Copyright © 2010 John Wiley & Sons, Ltd. 
<s> BIB008 </s> Control Allocation- A Survey <s> Dynamics and fault tolerance <s> Control allocation deals with the allocation of control among a redundant set of effectors, while taking into account the individual constraints. The use of model predictive control (MPC) for control allocation allows the response times of the actuators to be accounted for, and to take advantage of predictions of the virtual control input as well as differences in dynamic control authority and cost of use among the actuators. Quadratic programming (QP) is essential for implementation of the optimal constrained control allocation strategies. The main contributions of the present paper are the investigation of using the software system CVXGEN and the MPC-based control allocation method. CVXGEN synthesizes a customized portable and library-free C-source code QP interior-point solver for the specific QP problem resulting from the MPC formulation, exploiting structural properties of the specific QP and optimizing the source code for execution speed. Two case studies, one being a missile auto-pilot, illustrates the benefits of using the MPC formulation, and the efficiency of CVXGEN. <s> BIB009
An optimization-based control allocation method that is integrated with a parameter estimation scheme is described in BIB008 . It leads to an adaptive solution that can accommodate an unknown time-varying B-matrix due to losses and faults. Control allocation is an effective approach to implement fault-tolerant control. When effector or actuator faults are identified, they can be modeled as changes in the B-matrix of the constraints, or in other parameters of the optimization problem. For example, an actuator that is locked in a faulty position can be systematically treated by setting the lower and upper constraint limits to the locked value. Alternatively, the preferred control vector could be set to the locked actuator values, as proposed in BIB001 . A systematic method that also changes the weights in the pseudo-inverse, in order to ensure that the control effort is redistributed among the fault-free effectors without reconfiguring the high-level controller, is proposed in BIB006 . A dynamic control allocation approach is presented in BIB007 . It is designed to allocate the required control effort, while allocating the excess degrees of freedom through a dynamic system that can be tuned for optimizing secondary objectives and constraints. It is relatively straightforward to design a basic control allocation algorithm to comply with actuator rate constraints by incorporating these as a constraint or penalty on the change in control inputs from the previous sample to the current sample, see (7) and . More sophisticated dynamic actuator models may be incorporated by using the MPC framework to solve the constrained control allocation problem BIB002 BIB004 BIB005 . MPC is an optimization-based control approach which can handle actuator dynamics as well as actuator saturation, and it is a systematic design method that utilizes a model of the plant for predicting outputs and states.
In control allocation this model describes the actuator dynamics. Using MPC, the control allocation problem is solved over a future horizon, and the optimal solution is a future trajectory. Because of the predictive nature of the controller, the calculated control can pre-act to the actuator system dynamics to improve dynamic performance. On the negative side, MPC allocation requires significantly more computation than the static control allocation formulation, since the number of optimization variables and constraints is a multiple of the prediction horizon, which may be a factor of 10-20 larger than in the static problem. Still, it has been demonstrated in BIB009 that, with efficient numerical QP solver software, the real-time computations of MPC allocation with a linear dynamic actuator model can be implemented with current off-the-shelf computer technology. A similar strategy, which considers the current and past history of commanded virtual forces and moments, is given in BIB003 . There, the control allocation problem is to solve for the control inputs so that some norm of the error between the achieved and desired moments is minimized. A computationally simpler strategy to account for actuator dynamics in the control allocation is via post-processing of the static control allocation, as proposed in . The post-processor will over-drive the actuator in order to compensate for the dynamics of a first- or second-order linear actuator model.
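A bare-bones version of the MPC allocation idea discussed above can be written as one stacked least-squares problem over the horizon. The B-matrix, lag constants, horizon and weight rho below are illustrative, and actuator limits are omitted (in practice a constrained QP solver would handle them):

```python
import numpy as np

def mpc_allocation(B, v, u0, a, N=10, rho=1e-3):
    """Sketch of MPC-based allocation: choose commands c_0..c_{N-1} so that
    the predicted outputs of first-order actuator lags (u+ = a*u + (1-a)*c)
    satisfy B u_k ~ v over the horizon, with a small penalty rho on the
    commands. Unconstrained for brevity."""
    n, m = B.shape
    Ad, Bd = np.diag(a), np.diag(1.0 - a)
    powers = [np.eye(m)]
    for _ in range(N):
        powers.append(Ad @ powers[-1])          # powers[i] = Ad^i
    rows_A, rows_b = [], []
    for k in range(1, N + 1):
        G = np.zeros((m, m * N))                # u_k = Ad^k u0 + G c
        for j in range(k):
            G[:, j * m:(j + 1) * m] = powers[k - 1 - j] @ Bd
        rows_A.append(B @ G)
        rows_b.append(v - B @ powers[k] @ u0)
    A = np.vstack(rows_A + [np.sqrt(rho) * np.eye(m * N)])
    b = np.concatenate(rows_b + [np.zeros(m * N)])
    c = np.linalg.lstsq(A, b, rcond=None)[0]
    return c.reshape(N, m)

B = np.array([[1.0, 1.0, 0.5],
              [0.5, -1.0, 1.0]])
a = np.array([0.9, 0.5, 0.7])                   # slow, fast, medium actuators
v = np.array([1.0, 0.2])
c_seq = mpc_allocation(B, v, u0=np.zeros(3), a=a)
c0 = c_seq[0]                                   # receding horizon: apply row 0
```

Only the first command of the sequence would be applied before re-solving at the next sample; the solution trades tracking over the whole horizon against command effort, taking the different actuator bandwidths into account.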
Control Allocation- A Survey <s> Mixed-integer programming methods <s> This paper proposes a framework for modeling and controlling systems described by interdependent physical laws, logic rules, and operating constraints, denoted as mixed logical dynamical (MLD) systems. These are described by linear dynamic equations subject to linear inequalities involving real and integer variables. MLD systems include linear hybrid systems, finite state machines, some classes of discrete event systems, constrained linear systems, and nonlinear systems which can be approximated by piecewise linear functions. A predictive control scheme is proposed which is able to stabilize MLD systems on desired reference trajectories while fulfilling operating constraints, and possibly take into account previous qualitative knowledge in the form of heuristic rules. Due to the presence of integer variables, the resulting on-line optimization procedures are solved through mixed integer quadratic programming (MIQP), for which efficient solvers have been recently developed. Some examples and a simulation case study on a complex gas supply system are reported. <s> BIB001 </s> Control Allocation- A Survey <s> Mixed-integer programming methods <s> This paper establishes equivalences among five classes of hybrid systems: mixed logical dynamical (MLD) systems, linear complementarity (LC) systems, extended linear complementarity (ELC) systems, piecewise affine (PWA) systems, and max-min-plus-scaling (MMPS) systems. Some of the equivalences are established under (rather mild) additional assumptions. These results are of paramount importance for transferring theoretical properties and tools from one class to another, with the consequence that for the study of a particular hybrid system that belongs to any of these classes, one can choose the most convenient hybrid modeling framework. <s> BIB002
One particular model class leading to control allocation problem formulations that can be solved using mixed-integer linear programming (MILP) is piecewise linear functions. Generally, a control allocation problem based on a piecewise linear effector model, a piecewise linear cost function, and a constraint set that can be described as the non-convex union of polyhedral sets can be formulated as an MILP; see BIB001 BIB002 for equivalence classes and how to formulate MILPs. While numerical MILP solvers are highly complex numerical software systems that may be difficult to verify and validate for use in a safety-critical real-time application, it has been demonstrated that simple enumeration methods in combination with numerical quadratic programming can be effective for solving practical non-convex control allocation problems where the non-convex constraint set is represented as the union of a small number of polyhedral sets.
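The enumeration-plus-QP idea for unions of polyhedral sets can be sketched as follows, with an invented two-actuator example in which one actuator has a forbidden dead band; each convex piece is a box handled by SciPy's bounded least-squares solver:

```python
import numpy as np
from itertools import product
from scipy.optimize import lsq_linear

def allocate_union_of_boxes(B, v, boxes_per_actuator):
    """Enumerate every combination of convex pieces (boxes), solve the convex
    bounded least-squares subproblem on each, and keep the best solution.
    Illustrates enumeration + QP for non-convex constraint sets."""
    best_cost, best_u = np.inf, None
    for combo in product(*boxes_per_actuator):
        lb = np.array([piece[0] for piece in combo])
        ub = np.array([piece[1] for piece in combo])
        res = lsq_linear(B, v, bounds=(lb, ub))
        if res.cost < best_cost:
            best_cost, best_u = res.cost, res.x
    return best_u

B = np.array([[1.0, 1.0],
              [0.5, -1.0]])
# actuator 0 has a forbidden dead band (-0.3, 0.3): its admissible set is the
# union of two intervals, making the overall constraint set non-convex
boxes = [[(-1.0, -0.3), (0.3, 1.0)],   # actuator 0: two convex pieces
         [(-1.0, 1.0)]]                # actuator 1: one piece
v = np.array([0.1, 0.2])
u = allocate_union_of_boxes(B, v, boxes)
```

For this commanded virtual control the unconstrained optimum falls inside the dead band, so the enumeration picks the cheaper of the two boxes and the solution lands on its boundary.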
Control Allocation- A Survey <s> Dynamic optimum-seeking methods <s> Control allocation is commonly utilized in overactuated mechanical systems in order to optimally generate a requested generalized force using a redundant set of actuators. Using a control-Lyapunov approach, we develop an optimizing control allocation algorithm in the form of a dynamic update law, for a general class of nonlinear systems. The asymptotically optimal control allocation in interaction with an exponentially stable trajectory-tracking controller guarantees uniform boundedness and uniform global exponential convergence. <s> BIB001 </s> Control Allocation- A Survey <s> Dynamic optimum-seeking methods <s> This paper proposed an optimal control allocation method for a general class of overactuated nonlinear systems with internal dynamics. Dynamic inversion technique is used for the commanded subsystem to track a stable model reference control law. The corresponding control allocation law has to guarantee the stability of the internal dynamics and satisfy control constraints. The proposed method is based on a Lyapunov design approach with finite-time convergence to optimality. The derived control allocation is in the form of a dynamic update law, which, together with the stable model reference control law, guarantees the stability of the closed-loop nonlinear system. <s> BIB002 </s> Control Allocation- A Survey <s> Dynamic optimum-seeking methods <s> In this work we address the optimizing control allocation problem for a nonlinear over-actuated time-varying system where parameters affine in the dynamic actuator and effector model may be assumed unknown. Instead of finding the optimal control allocation at each time instant, a dynamic approach is considered by constructing update-laws that represent asymptotically optimal allocation search and adaptation.
Using Lyapunov analysis for cascaded set-stable systems, uniform global/local asymptotic stability is guaranteed for the optimal set described by the system, the optimal allocation update-law and the adaptive update-law. Simulations of a scaled-model ship, manoeuvred at low-speed, demonstrate the performance of the proposed allocation scheme. <s> BIB003 </s> Control Allocation- A Survey <s> Dynamic optimum-seeking methods <s> In this work we address the control allocation problem for a nonlinear over-actuated time-varying system where parameters affine in the effector model may be assumed unknown. Instead of optimizing the control allocation at each time instant, a dynamic approach is considered by constructing update-laws that represent asymptotically optimal allocation search and adaptation. Using Lyapunov analysis for cascaded set-stable systems, uniform global/local asymptotic stability is guaranteed for the sets described by the system, the optimal allocation update-law and the adaptive update-law. <s> BIB004 </s> Control Allocation- A Survey <s> Dynamic optimum-seeking methods <s> In this brief, we propose a control allocation method for a particular class of uncertain over-actuated affine nonlinear systems, with unstable internal dynamics. Dynamic inversion technique is used for the commanded output to track a smooth output reference trajectory. The corresponding control allocation law has to guarantee the boundedness of the states, including the internal dynamics, and satisfy control constraints. The proposed method is based on a Lyapunov design approach with finite-time convergence to a given invariant set. The derived control allocation is in the form of a dynamic update law which, together with a sliding mode control law, guarantees boundedness of the output tracking error as well as of the internal dynamics. The effectiveness of the control law is tested on a numerical model of the non-minimum phase planar vertical take-off and landing (PVTOL) system. 
<s> BIB005 </s> Control Allocation- A Survey <s> Dynamic optimum-seeking methods <s> This paper presents an adaptive nonlinear control allocation method for a general class of non-minimum phase uncertain systems. Indirect adaptive approach and Lyapunov design approach are applied to the design of adaptive control allocation. The derived adaptive control allocation law, together with a stable model reference control, guarantees that the closed-loop nonlinear system is input-to-state stable. <s> BIB006 </s> Control Allocation- A Survey <s> Dynamic optimum-seeking methods <s> An adaptive control allocation method is presented for a general class of non-linear systems with internal dynamics and unknown parameters. A certainty equivalence indirect adaptive approach is used to estimate the unknown parameters. Based on the estimated parameters, model reference control and control allocation techniques are used to control the non-linear system subject to control constraints and internal dynamics stabilisation. A Lyapunov design approach with the property of convergence to a positively invariant set is proposed. The derived adaptive control allocation is in the form of a dynamic update law, which, together with a stable model reference control, guarantees that the closed-loop non-linear system be input-to-state stable. <s> BIB007 </s> Control Allocation- A Survey <s> Dynamic optimum-seeking methods <s> Comparison of static and dynamic control allocation techniques for nonlinear constrained optimal distribution of tire forces in a vehicle control system is presented. The total body forces and moments, obtained from a high level controller, are distributed among tire forces, which are constrained to nonlinear constraint of saturation, through two approaches. For the static control allocation technique the interior-point method is employed to perform the nonlinear constrained optimization problem.
Also, by the dynamic control allocation technique a dynamic update law is derived for desired forces of each tire. Through simulation results efficiency of two approaches in enhancing vehicle handling and stability is evaluated and the results are compared. <s> BIB008
In BIB001 it was proposed to re-formulate the static nonlinear optimization formulations of control allocation, i.e. (5), (7), or (8), as a control Lyapunov function and use constructive Lyapunov design methods. In particular, it was assumed that the cost function was augmented with a barrier or penalty function ρ(·), giving J̄(x, u, t) = J(x, u, t) + ρ(u), in order to enforce that the input constraints u ∈ U are satisfied. The corresponding Lagrange function is then formulated, with λ ∈ R^m being the vector of Lagrange multipliers. Assuming that a Lyapunov function V_0(x, t) for the high-level motion control algorithm exists, a control Lyapunov function V is defined for the control allocation design for some σ > 0. Requiring a negative time-derivative of V along trajectories of the system, one can derive a control allocation update algorithm of the form (39), where Γ and K are symmetric positive definite gain matrices, and α, ξ, β and φ are signals defined in BIB001 . The control allocation update law (39) will asymptotically track the optimal control allocation (assuming feasibility), while guaranteeing not to destabilize the closed-loop system. Notice that the latter is not an obvious feature, since this dynamic control allocation is only asymptotically optimal and may deviate at every time instant from the instantaneous optimum of the corresponding static control allocation problem. This leads to some loss of performance, as shown in a case study in BIB008 . The main advantage of the method is that no direct numerical optimization is needed (optimality tracking is built into the dynamic update law (39)), leading to modest computational complexity. Disadvantages of the method include possible convergence problems in case of non-convex cost functions and constraints, similar to the nonlinear programming approach.
Actuator rate constraints can in some cases be enforced and implemented by choosing the gain Γ sufficiently small, although there is no guarantee that they can be met if ξ is not small, i.e. when the high-level motion control algorithm requires fast changes in the virtual control that cannot be implemented with the given actuator system. An extension that leads to convergence to the optimal control allocation in finite time was proposed in BIB002 , and the effects of internal dynamics and minimum-phase properties when using dynamic inversion high-level motion controllers were studied in BIB002 BIB005 . The concept relies on a control Lyapunov function, which allows certain extensions to be made within the same framework. An adaptive approach, where uncertain parameters θ in the effector model h(u, x, t, θ) are stably adapted using an adaptation law designed by augmenting the control Lyapunov function in a standard way, was proposed in BIB004 . This framework was further extended to dynamically account for actuator dynamics within the control allocation in BIB003 , and internal dynamics in the context of model reference adaptive control in BIB006 BIB007 .
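A much-simplified stand-in for such a dynamic update law is the primal-dual gradient flow below, written for a quadratic cost and linear effector model with the input constraints (and hence the penalty ρ(u)) omitted. The gains and matrices are illustrative, and this is not the exact update law (39) of BIB001 :

```python
import numpy as np

B = np.array([[1.0, 1.0, 0.5],
              [0.5, -1.0, 1.0]])
Q = np.diag([1.0, 2.0, 1.0])          # actuator-use weights (illustrative)
v = np.array([0.8, 0.2])
Gamma = 5.0 * np.eye(3)               # primal gain
K = 5.0 * np.eye(2)                   # dual gain

# primal-dual gradient flow for min 1/2 u'Qu s.t. Bu = v, Euler-integrated;
# the allocation tracks the optimum dynamically instead of re-solving a
# static optimization at every sample
u, lam, dt = np.zeros(3), np.zeros(2), 1e-3
for _ in range(20000):
    u = u + dt * (-Gamma @ (Q @ u + B.T @ lam))
    lam = lam + dt * (K @ (B @ u - v))

# closed-form constrained optimum for comparison
Qi = np.linalg.inv(Q)
u_star = Qi @ B.T @ np.linalg.solve(B @ Qi @ B.T, v)
```

With a strictly convex cost and full-row-rank B, this flow converges to the KKT point of the equality-constrained problem, mirroring the asymptotic-optimality property discussed above.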
Control Allocation- A Survey <s> Direct nonlinear allocation <s> Control allocation techniques for inversion-based control laws have evolved that ensure that commands to the control effectors do not violate rate or position limits of the individual effectors. These allocation techniques typically assume that all effector dynamics are fast relative to the system being controlled and are therefore neglected. Unfortunately there are practical cases where this assumption breaks down and it becomes desirable to compensate for lags between the commands to the effectors and the effector response. In this paper, a technique for compensating for individual effector dynamics while respecting actuator constraints is proposed. <s> BIB001 </s> Control Allocation- A Survey <s> Direct nonlinear allocation <s> Concepts for new constrained control allocation strategies are developed that deal with systems where moments are nonlinearly related to effector deflections such as those encountered in the case of yawing moment contributions from left-right effector pairs on aircraft.. These concepts are illustrated by considering single and multiple left-right pair effector mixing problems for moments that lie in the roll-yaw moment plane. Methods for generating the boundary of an attainable moment set for a class of multiple non-linear effectors and for clipping unattainable moment commands with axis prioritization are presented. <s> BIB002 </s> Control Allocation- A Survey <s> Direct nonlinear allocation <s> A method for generating attainable moment sets for left-right pairs of nonlinear control effectors on aircraft is presented. It is shown that the determination of the attainable moment set boundary can be posed as a constrained, nonlinear optimization problem. The first-order necessary conditions for a point to be on the boundary are simply the Kuhn-Tucker first-order necessary conditions. 
An algorithm is given where the Kuhn-Tucker points for a given point in the pitch-roll plane are constructed for each control effector configuration that forms a candidate boundary. The Kuhn-Tucker points are then checked for feasibility, and the points on the boundary are those that produce extremal values of the objective function. The nonlinear attainable moment set boundary is used to determine feasibility of moment commands generated by the flight control system. Infeasible commands are clipped to the boundary of the attainable moment set. If a commanded moment is feasible, a constrained optimization problem is solved for the control surface deflections. <s> BIB003
An extension of the method of attainable moment set computations and direct allocation to nonlinear effector models can be found in BIB003 . The methods rely on nonlinear programming, and the ideas in this paper can be traced back to BIB002 BIB001 .
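For the special case of a linear effector model with symmetric position limits, the direction-preserving (direct) allocation and boundary clipping described above can be posed as a small linear program. A sketch in Python using SciPy, with illustrative matrices and limits (the nonlinear case treated in the cited works requires nonlinear programming instead):

```python
import numpy as np
from scipy.optimize import linprog

def direct_allocation(B, tau, u_max):
    """Direction-preserving allocation for a linear model with symmetric
    limits |u_i| <= u_max[i]: maximize a subject to B u = a*tau and the
    limits.  If a >= 1 the demand is attainable and u is rescaled so that
    B u = tau exactly; otherwise tau is clipped to the boundary of the
    attainable moment set along its own direction."""
    m, n = B.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                   # maximize the scale factor a
    A_eq = np.hstack([B, -tau.reshape(-1, 1)])     # enforces B u - a*tau = 0
    b_eq = np.zeros(m)
    bounds = [(-um, um) for um in u_max] + [(0.0, None)]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    u, a = res.x[:n], res.x[-1]
    return (u / a, a) if a >= 1.0 else (u, a)

B = np.array([[1.0, 0.5, -1.0],
              [0.2, 1.0,  0.4]])
u, a = direct_allocation(B, np.array([0.4, 0.3]), u_max=np.ones(3))
```

If the returned scale factor a is at least 1, the demanded moment lies inside the attainable moment set and is produced exactly; otherwise the command is clipped to the boundary along its own direction.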
Control Allocation- A Survey <s> Aircraft <s> This paper addresses the problem of the allocation of several airplane flight controls to the generation of specified body-axis moments. The number of controls is greater than the number of moments being controlled, and the ranges of the controls are constrained to certain limits. They are assumed to be individually linear in their effect throughout their ranges of motion and independent of one another in their effects. The geometries of the subset of the constrained controls and of its image in moment space are examined. A direct method of allocating these several controls is presented that guarantees the maximum possible moment can be generated within the constraints of the controls. It is shown that no single generalized inverse can yield these maximum moments everywhere without violating some control constraint. A method is presented for the determination of a generalized inverse that satisfies given specifications which are arbitrary but restricted in number. We then pose and solve a minimization problem that yields the generalized inverse that best approximates the exact solutions. The results are illustrated at each step by an example problem involving three controls and two moments. <s> BIB001 </s> Control Allocation- A Survey <s> Aircraft <s> A demonstration that feedback control of systems with redundant controls can be reduced to feedback control of systems without redundant controls and control allocation is presented. It is shown that control allocation can introduce unstable zero dynamics into the system, which is important if input/output inversion control techniques are utilized. The daisy chain control allocation technique for systems with redundant groups of controls is also presented. Sufficient conditions are given to ensure that the daisy chain control allocation does not introduce unstable zero dynamics into the system. Aircraft flight control examples are given to demonstrate the derived results. 
<s> BIB002 </s> Control Allocation- A Survey <s> Aircraft <s> Closed-loop stability for dynamic inversion controllers depends on the stability of the zero dynamics. The zero dynamics, however, depend on a generally nonlinear control allocation function that optimally distributes redundant controls. Therefore, closed-loop stability depends on the control allocation function. A sufficient condition is provided for globally asymptotically stable zero dynamics with a class of admissible nonlinear control allocation functions. It is shown that many common control allocation functions belong to the class of functions that are covered by the aforementioned zero dynamics stability condition. Aircraft flight control examples are given to demonstrate the utility of the results. <s> BIB003 </s> Control Allocation- A Survey <s> Aircraft <s> In the age of modern aircraft and fly-by-wire control systems, the inclusion of mechanical backup systems for handling instances of control failures is becoming more uncommon. As a result, pilots are forced to fully rely on the failure immunities designed into these systems and be assured that, should such failures occur, the aircraft maintain adequate flying qualities long enough for a safe ejection or an emergency landing. This paper demonstrates the control reconfiguration aspects of a control allocation with rate limiting algorithm to adapt to various control failures while still achieving desired moments, thus providing a safe environment for the pilot. <s> BIB004 </s> Control Allocation- A Survey <s> Aircraft <s> Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation.
A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases. <s> BIB005 </s> Control Allocation- A Survey <s> Aircraft <s> This paper presents the development and application of one approach to the control of aircraft with large numbers of control effectors. This approach, referred to as real-time adaptive control allocation, combines a nonlinear method for control allocation with actuator failure detection and isolation. The control allocator maps moment (or angular acceleration) commands into physical control effector commands as functions of individual control effectiveness and availability. The actuator failure detection and isolation algorithm is a model-based approach that uses models of the actuators to predict actuator behavior and an adaptive decision threshold to achieve acceptable false alarm/missed detection rates. This integrated approach provides control reconfiguration when an aircraft is subjected to actuator failure, thereby improving maneuverability and survivability of the degraded aircraft. This method is demonstrated on a next generation military aircraft (Lockheed-Martin Innovative Control Effector) simulation that has been modified to include a novel nonlinear fluid flow control effector based on passive porosity. Desktop and real-time piloted simulation results demonstrate the performance of this integrated adaptive control allocation approach.
Introduction Future aircraft are being proposed with many more control effectors than the traditional elevator, aileron, and rudder. Innovative control effectors under study range from thrust vectoring to all-moving wing tips and actuated forebody surfaces to more exotic effectors such as shape-change materials and fluidflow devices (figure 1). For example, hundreds or even thousands of micro fluid-flow devices could be distributed in arrays across the upper and lower surface of a wing to allow direct control of the wing’s boundary layer. Optimal use of this large number of effectors will be challenging, but the potential control power and redundancy offers the flight controls designer freedom to maximize mission performance and enhance survivability. In particular, reconfigurable control approaches seek to take advantage of this control redundancy to mitigate the deleterious effects of control effector failure or battle damage [Buffington 1998, Brinker 1999, Tallant 1999]. A key element in these approaches is reconfigurable control allocation. The reconfigurable control allocator [Lallman 1985, Durham 1992, Buffington 1997, Enns 1998] maps moment (or angular acceleration) commands into physical control effector commands as a function of individual control effectiveness and availability. This paper presents the development of an integrated reconfigurable control allocation method which combines a nonlinear method for control allocation with actuator failure detection and isolation (FDI) (figure 2). This integrated approach provides control reconfiguration when an aircraft is subjected to actuator failure, thereby improving maneuverability and survivability of the degraded aircraft.
Desktop and real-time piloted simulation are used to demonstrate the performance of this integrated adaptive control allocation approach. Real-Time Adaptive Control Allocation The adaptive control allocation is an integrated approach that combines two main elements: a nonlinear control allocation approach and actuator failure detection and isolation. These elements are discussed in more detail in the following sections. Control Allocation In general, the forces and moments generated by the control effectors are a function of vehicle flight condition and effector commands. In the case where, at a given flight condition, the control effectiveness is relatively linear with effector command and there is minimal control coupling, the system can be considered as a linear control allocation problem. The linear control allocation problem can be stated as follows. Given the system <s> BIB006 </s> Control Allocation- A Survey <s> Aircraft <s> The performance and computational requirements of optimization methods for control allocation are evaluated. Two control allocation problems are formulated: a direct allocation method that preserves the directionality of the moment and a mixed optimization method that minimizes the error between the desired and the achieved moments as well as the control effort. The constrained optimization problems are transformed into linear programs so that they can be solved using well-tried linear programming techniques such as the simplex algorithm. A variety of techniques that can be applied for the solution of the control allocation problem in order to accelerate computations are discussed. Performance and computational requirements are evaluated using aircraft models with different numbers of actuators and with different properties.
In addition to the two optimization methods, three algorithms with low computational requirements are also implemented for comparison: a redistributed pseudoinverse technique, a quadratic programming algorithm, and a fixed-point method. The major conclusion is that constrained optimization can be performed with computational requirements that fall within an order of magnitude of those of simpler methods. The performance gains of optimization methods, measured in terms of the error between the desired and achieved moments, are found to be small on the average but sometimes significant. A variety of issues that affect the implementation of the various algorithms in a flight-control system are discussed. <s> BIB007 </s> Control Allocation- A Survey <s> Aircraft <s> The forces and moments produced by a vehicle’s aerodynamic control surfaces are often nonlinear functions of control surface deflection. This phenomenon limits the accuracy of state-of-the-art control allocation algorithms since all of the approaches are based on the assumption that the control variable rates are linear functions of the surface deflections and that control variable rate increments are not produced for zero deflections. The errors introduced by this assumption are currently mitigated by the robustness resulting from feedback control laws. A method for improving the performance of the feedback control/control allocation system is presented that directly attacks the inaccuracies introduced by these linear assumptions. The approach is demonstrated using a dynamic inversion-based control law developed for a lifting body with four control surfaces. The commanded body rates constitute the input to a dynamic inversion control law that forms the inner-loop control structure. The outputs of the dynamic inversion algorithm serve as the inputs to a linear programming based control allocator.
The control allocation algorithm is a mixed optimization scheme that minimizes the norm of the error and the deviation of the control input from a preferred value. Whether the underlying dynamic system is linear or nonlinear in the way that the controls affect the system state, the control allocator requires a linear approximation to the system inputs. In this work, instead of using a linear approximation to describe the control variable rate due to a control surface deflection (a control subspace, that contains the zero element), an affine relationship is utilized. This allows one to modify the input to the control allocator by providing an additional intercept term that corrects for the errors introduced by the control allocation algorithm’s assumption of linearity. Since on-line control allocators perform computations at flight control system update rates, the decision about how far to move the control surface in the next time interval is critical. In order to accurately compute the next set of surface deflection commands, the local slope at the current operating point is used and the intercept is accounted for at the input to the control allocator. With relatively little computational and design overhead, the accuracy of current control allocation algorithms can be improved utilizing an affine relationship for a vehicle’s control dependent accelerations. Simulation experiments show a significant improvement in tracking commanded body rates when the control allocation correction term is used. <s> BIB008 </s> Control Allocation- A Survey <s> Aircraft <s> Existing approaches to control allocation in overactuated aircraft under control input saturation are conservative. This is due to the fact that the solution to the corresponding constrained optimization problem may not be attainable with a chosen control input weighting matrix. On the other hand, there may exist a weighting matrix for which a solution exists under control input constraints. 
In this paper we propose two control allocation algorithms (CAAs) that can be used either online or off-line, and that, under an assumption that at least one solution to the tracking control design exists, finds that solution. The properties of the proposed CAAs are illustrated through simulations of the F/A-18C/D aircraft. <s> BIB009 </s> Control Allocation- A Survey <s> Aircraft <s> The paper considers the objective of optimally specifying redundant actuators under constraints, a problem commonly referred to as control allocation. The problem is posed as a mixed ℓ2-norm optimization objective and converted to a quadratic programming formulation. The implementation of an interior-point algorithm is presented. Alternative methods including fixed-point and active set methods are used to evaluate the reliability, accuracy and efficiency of the primal-dual interior-point method. While the computational load of the interior-point method is found to be greater for problems of small size, convergence to the optimal solution is also more uniform and predictable. In addition, the properties of the algorithm scale favorably with problem size. <s> BIB010 </s> Control Allocation- A Survey <s> Aircraft <s> To enable autonomous operation of future reusable launch vehicles, reconfiguration technologies will be needed to facilitate mission recovery following a major anomalous event. The Air Force’s Integrated Adaptive Guidance and Control program developed such a system for Boeing’s X-40A, and the total in-flight simulator research aircraft was employed to flight test the algorithms during approach and landing. The inner loop employs a model-following/dynamic-inversion approach with optimal control allocation to account for control-surface failures. Further, the reference-model bandwidth is reduced if the control authority in any one axis is depleted as a result of control effector saturation.
A backstepping approach is utilized for the guidance law, with proportional feedback gains that adapt to changes in the reference model bandwidth. The trajectory-reshaping algorithm is known as the optimum-path-to-go methodology. Here, a trajectory database is precomputed off line to cover all variations under consideration. An efficient representation of this database is then interrogated in flight to rapidly find the “best” reshaped trajectory, based on the current state of the vehicle’s control capabilities. The main goal of the flight-test program was to demonstrate the benefits of integrating trajectory reshaping with the essential elements of control reconfiguration and guidance adaptation. The results indicate that for more severe, multiple control failures, control reconfiguration, guidance adaptation, and trajectory reshaping are all needed to recover the mission. <s> BIB011 </s> Control Allocation- A Survey <s> Aircraft <s> A model predictive, dynamic control allocation algorithm is developed in this paper for the inner loop of a re-entry vehicle guidance and control system. The purpose of the control allocation portion of the guidance and control architecture is to distribute control power among redundant control effectors to meet the desired control objectives under a set of constraints. Most existing algorithms neglect the actuator dynamics or deal with the actuator dynamics separately, thereby assuming a static relationship between actuator outputs (in our case, control surface deflections) and plant inputs (i.e., moments about the three body axis). We propose a dynamic control allocation scheme based on model-based predictive control (MPC) that directly takes into account actuators with non-negligible dynamics and hard constraints. Model-based predictive control schemes compute the control inputs by optimizing an open-loop control objective over a future time interval at each control step.
In our setup, the model-predictive control allocation problem is posed as a sequential quadratic programming problem with dynamic constraints, which can be cast into a linear complementary problem (LCP) and therefore solved by linear programming approaches in a finite number of iterations. The time-varying affine internal model used in the MPC design enhances the ability of the control loop to deal with unmodeled system nonlinearities. The approach can be easily extended to encompass a variety of linear actuator dynamics without the need to redesign the overall scheme. Results are based on the model of an experimental reusable launch vehicle, and compared with that of existing static control allocation schemes. <s> BIB012 </s> Control Allocation- A Survey <s> Aircraft <s> In this paper, the problem of control allocation - distribution of control power among redundant control effectors, under a set of constraints - for the inner loop of a re-entry vehicle guidance and control system is studied. Our control allocation scheme extends a previously developed model-predictive algorithm by providing asymptotic tracking of time-varying control input commands. The approach accounts for non-negligible dynamics of the actuators with hard constraints, setting it apart from most existing control allocation schemes, where a static relationship between control surface deflections (actuator outputs) and moments about a three-body axis (plant inputs) is assumed. The approach is readily extended to encompass a variety of linear actuator dynamics without the need for redesign of the overall control allocation scheme, allowing for increased effectiveness of the inner loop in terms of speed of maneuverability. Simulation results, with consideration given toward implementation, are provided for an experimental reusable launch vehicle, and are compared to those of static control allocation schemes. 
<s> BIB013 </s> Control Allocation- A Survey <s> Aircraft <s> Allocation of control authority among redundant control effectors, under hard constraints, is an important component of the inner loop of a reentry vehicle guidance and control system. Whereas existing control allocation schemes generally neglect actuator dynamics, thereby assuming a static relationship between control surface deflections and moments about a three-body axis, in this work a dynamic control allocation scheme is developed that implements a form of model-predictive control. In the approach proposed here, control allocation is posed as a sequential quadratic programming problem with constraints, which can also be cast into a linear complementarity problem and therefore solved in a finite number of iterations. Accounting directly for nonnegligible dynamics of the actuators with hard constraints, the scheme extends existing algorithms by providing asymptotic tracking of time-varying input commands for this class of applications. To illustrate the effectiveness of the proposed scheme, a high-fidelity simulation for an experimental reusable launch vehicle is used, in which results are compared with those of static control allocation schemes in situations of actuator failures. <s> BIB014 </s> Control Allocation- A Survey <s> Aircraft <s> Due to increased requirements on the reliability, maneuverability and survivability of modern and future manned and unmanned aerial vehicles, more control effectors/surfaces are being introduced. This introduces redundant or overactuated control effectors and requires the control allocation function, together with baseline flight control law, to be implemented in the overall flight control systems. 
In particular, in the case of control effector (actuator) failures or control surface damages, an effective re-distribution (or reallocation) of the control surface deflections with the remaining healthy control effectors is needed in order to achieve acceptable performance even in the presence of control effector failures. In this paper, as a preliminary study of reconfigurable control allocation (control re-allocation) applied to a realistic and nonlinear aircraft model, a pseudo-inverse method and a fixed-point algorithm are implemented and tested in an ADMIRE (aero-data model in research environment) benchmark aircraft model. Partial loss of control effectiveness faults have been implemented in the ADMIRE benchmark and used for evaluating the control re-allocation schemes. Simulation results show satisfactory results for accommodating the control effector failures. <s> BIB015 </s> Control Allocation- A Survey <s> Aircraft <s> In flight control, control allocation is used to distribute the total control command to each actuator when the number of actuators exceeds the number of controlled variables. A new optimal control allocation method based on nonlinear compensation was proposed in this paper. The quadratic term and mutual-interference effects were introduced into control effectiveness matrix. The linear combination of control objective and error objective was selected as optimal target. Then the optimal control allocation problem of redundancy system was solved by transformed to nonlinear programming formulation. Comparison with other allocation method and control system simulation results show the proposed method has timing properties similar to the redistributed pseudo-inverse method and achieves the objective accurately and optimally. 
<s> BIB016 </s> Control Allocation- A Survey <s> Aircraft <s> This paper features the combination of model-based predictive control and dynamic inversion into a constrained and globally valid control method for fault-tolerant flight-control purposes. The fact that the approach is both constrained and model-based creates the possibility to incorporate additional constraints, or even a new model, in case of a failure. Both of these properties lead to the fault-tolerant qualities of the method. Efficient distribution of the desired control moves over the control effectors creates the possibility to separate the input allocation problem from the inversion loop when redundant actuators are available. An important aspect that is considered here is the computational complexity of the presented methods. This complexity is reduced through application of an intelligent constraint mapping algorithm that allows for a strict separation of the applied model-predictive controller and the nonlinear dynamic inversion method. Furthermore, the complexity is reduced through application of a method that only uses the full set of available inputs, or actuators, when absolutely necessary, e.g. in a failure situation. Part of this paper consists of the application of the proposed theory to an aerospace benchmark of moderately high complexity. It is shown through an example that the theory is well-suited to the task, provided that fault-detection and isolation information is available continuously. <s> BIB017 </s> Control Allocation- A Survey <s> Aircraft <s> In this paper, we present the design and development of the guidance, navigation, and control system of a small vertical-takeoff-and-landing unmanned air vehicle based on a 6 degrees-of-freedom nonlinear dynamic model.
The vertical-takeoff-and-landing unmanned air vehicle is equipped with three propellers for vertical thrust, and thrust differential together with a set of yaw trim flaps are used for 3 degrees-of-freedom attitude and thrust control actuation. The focus is on the 6 degrees-of-freedom flight control algorithm design using the trajectory linearization control method, along with simulation verification and robustness tests. Hardware and software implementation of the flight controller and onboard navigation sensors are also briefly discussed. <s> BIB018 </s> Control Allocation- A Survey <s> Aircraft <s> This paper focuses on the control redundancy problem due to the increase of control effectors of advanced aircrafts. Based on the current control allocation research, this paper further puts forward the concept of control allocation and management system. The bases sequenced control allocation method is proposed firstly, and then some management strategies are introduced to the control allocation process of bases sequenced method. The candidate effectors and optimal indexes are managed according to the information of flight conditions, mission requirements and effectors working conditions, and engineering experience is also considered in this system. The key aspects of the proposed system are illustrated by a 6DOF nonlinear aircraft model. The simulation results show that the functions of control allocation system are extended and the system adaptability to the flight status, mission requirements and effectors failure conditions are improved. <s> BIB019 </s> Control Allocation- A Survey <s> Aircraft <s> The next generation (NextGen) transport aircraft configurations being investigated as part of the NASA Aeronautics Subsonic Fixed Wing Project have more control surfaces, or control effectors, than existing transport aircraft configurations. Conventional flight control is achieved through two symmetric elevators, two antisymmetric ailerons, and a rudder. 
The five control surfaces, reduced to three command variables, produce moments along the three main axes of the aircraft and enable the pilot to control the attitude of the aircraft. Next generation aircraft will have additional redundant control effectors to control the three moments, creating a situation where the aircraft is over-actuated and where a simple relationship no longer exists between the required control surface deflections and the desired moments. NextGen flight controllers will incorporate control allocation algorithms to determine the optimal effector commands to attain the desired moments, taking into account the effector limits. Approaches to solving the problem using linear programming and quadratic programming algorithms have been proposed and tested. It is of great interest to understand their relative advantages and disadvantages and how design parameters may affect their properties. In this paper, we investigate the sensitivity of the effector commands with respect to the desired moments and show on some examples the sensitivity of the solutions provided by the linear programming and quadratic programming methods. <s> BIB020 </s> Control Allocation- A Survey <s> Aircraft <s> Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the actuator deflections. The paper discusses the alternative choice of the l∞ norm, or sup norm. Minimization of the control effort translates into the minimization of the maximum actuator deflection (min-max optimization). The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are also investigated through examples.
In particular, the min-max criterion results in a type of load balancing, where the load is the desired command and the algorithm balances this load among various actuators. The solution using the l∞ norm also results in better robustness to failures and to lower sensitivity to nonlinearities in illustrative examples. <s> BIB021 </s> Control Allocation- A Survey <s> Aircraft <s> Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the control effort. The paper discusses the alternative choice of using the l1 norm for minimization of the tracking error and a normalized l∞ norm, or sup norm, for minimization of the control effort. The algorithm computes the norm of the actuator deflections scaled by the actuator limits. Minimization of the control effort then translates into the minimization of the maximum actuator deflection as a percentage of its range of motion. The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are investigated through examples. In particular, the min-max criterion results in a type of resource balancing, where the resources are the control surfaces and the algorithm balances these resources to achieve the desired command. A study of the sensitivity of the algorithms to the data is presented, which shows that the normalized l∞ algorithm has the lowest sensitivity, although high sensitivities are observed whenever the limits of performance are reached.
In particular, the proposed research plays a significant role in enhancing the safety, reliability and fault tolerance capability of Unmanned Aerial Vehicle (UAV), which is one of the most active research and development areas. The main objective of this paper is to introduce and evaluate UAV reconfigurable control system design against control surfaces faults without modifying the baseline controller. The faults introduced are in the form of partial loss and stuck at unknown position on the UAV control surfaces. Weighted Least Squares (WLS) control reallocation algorithm with application to UAV was investigated. The paper is undertaken in a nonlinear UAV model ALTAV (Almost-Lighter-Than-Air-Vehicles), developed by Quanser incorporation. Different faults have been introduced in control surfaces with different trajectory commanded inputs. Gaussian noise was introduced in the model. Comparisons were made under normal situation, the case without control reallocation, and the case with control reallocation method. Simulation results show the satisfactory reconfigurable flight control system performance using the WLS control reallocation method for ALTAV nonlinear UAV benchmark. <s> BIB023 </s> Control Allocation- A Survey <s> Aircraft <s> This paper presents a stability analysis and an application of a recently developed algorithm to recover from Pilot Induced Oscillations (CAPIO). When actuators are rate-saturated due to either an aggressive pilot command, high gain of the flight control system or some anomaly in the system, the effective delay in the control loop may increase. This effective delay increase manifests itself as a phase shift between the commanded and actual system signals and can instigate Pilot Induced Oscillations (PIO). CAPIO reduces the effective time delay by minimizing the phase shift between the commanded and the actual attitude accelerations. To establish theoretical results pertinent to CAPIO stability analysis, a scalar case is presented.
In addition, we present simulation results for aircraft with cross-coupling which demonstrate the potential of CAPIO serving as an effective PIO handler in adverse conditions. <s> BIB024 </s> Control Allocation- A Survey <s> Aircraft <s> This paper proposes a control allocation framework where a feedback adaptive signal is designed for a group of redundant actuators and then it is adaptively allocated among all group members. In the adaptive control allocation structure, cooperative actuators are grouped and treated as an equivalent control effector. A state feedback adaptive control signal is designed for the equivalent effector and adaptively allocated to the member actuators. Two adaptive control allocation algorithms, guaranteeing closed-loop stability and asymptotic state tracking when partial and total loss of control effectiveness occur, are developed. Proper grouping of the actuators reduces the controller complexity without reducing their efficacy. The implementation and effectiveness of the strategies proposed are demonstrated in detail using several examples. <s> BIB025
In flight control applications the virtual control input τ usually consists of the moments about the roll, pitch and yaw axes. Conventional fixed-wing aircraft designs and flight control systems are based on a relatively small number of control surfaces (effectors) that are dedicated to controlling each axis of the aircraft:
• ailerons for roll control,
• an elevator for pitch control, and
• a rudder for yaw control.
The grouping of two or more control surfaces into a single effector by constraining them to move together is common in flight control, and is often called ganging, e.g. . Typically, left and right ailerons are constrained to deflect differentially, while right and left elevators are constrained to deflect equally. Assuming the actuators and effectors are fault-free and the above ganging scheme is used, even with five control surfaces there are only three effective effectors available to control the three axes, and control allocation is not needed. However, many current aircraft designs have a larger number of control surfaces that can be used during normal or special conditions, such as vertical take-off-and-landing, or after failure of an actuator or effector. Depending on the type of aircraft, one may have many more effectors, including
• V-tails that give coupled lateral and longitudinal forces,
• control surfaces like flaps, spoilers, and slats,
• tiltable propellers, and
• thrust vector jets.
Control allocation is widely used with such designs in order to ensure optimal use of the effectors, including fault tolerant and robust control over a wide flight envelope, BIB005 BIB006 BIB004 BIB001 BIB018 . It is concluded in BIB005 that although simulations demonstrate success of the conventional flight control approach in many cases, the control allocation approach appears to provide uniformly better performance in all cases. Effector models for aerospace applications are usually assumed in the linear form (11).
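As an illustration of ganging, the grouping can be expressed with a ganging matrix G that maps the three ganged commands to the five individual surfaces, so the allocator sees the effective effectiveness BG. The sketch below is a minimal illustration with hypothetical effectiveness numbers, not data for any particular aircraft.

```python
import numpy as np

# Hypothetical effectiveness of five surfaces (columns: left/right
# aileron, left/right elevator, rudder) on roll, pitch and yaw moments.
B = np.array([
    [-2.0,  2.0,  0.0,  0.0,  0.1],   # roll
    [ 0.1,  0.1, -1.5, -1.5,  0.0],   # pitch
    [ 0.0,  0.0,  0.0,  0.0, -1.0],   # yaw
])

# Ganging matrix G: ailerons deflect differentially, elevators equally,
# reducing five surfaces to three effective effectors (u = G @ delta).
G = np.array([
    [ 1.0, 0.0, 0.0],   # left aileron   = +delta_a
    [-1.0, 0.0, 0.0],   # right aileron  = -delta_a
    [ 0.0, 1.0, 0.0],   # left elevator  = +delta_e
    [ 0.0, 1.0, 0.0],   # right elevator = +delta_e
    [ 0.0, 0.0, 1.0],   # rudder         = +delta_r
])

B_eff = B @ G                        # 3x3: square, so allocation is inversion
tau = np.array([0.5, -0.2, 0.1])     # commanded roll/pitch/yaw moments
delta = np.linalg.solve(B_eff, tau)  # ganged commands (delta_a, delta_e, delta_r)
u = G @ delta                        # individual surface deflections
```

With a square, nonsingular B_eff the commanded moments are reproduced exactly, which is why no allocation algorithm is needed in this fault-free ganged case.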
As discussed in Section 2, the control effectiveness matrix B may depend on slowly varying variables such as altitude and velocity, and is therefore scheduled as a function of these variables. It is also worthwhile to remark that nonlinear effector models tend to be better approximated using an affine model τ = Bu + b_0 instead of the linear model (11), see BIB008 . The extensions are straightforward, so most control allocation designs proceed without loss of generality with the model (11). All the (constrained) linear control allocation methods described in Section 2 are commonly found in the flight control literature, e.g. . The models and constraints are generally given by the physical characteristics of the effectors and actuators, while the choice of u_p and weighting matrices may reflect different objectives such as
• minimum wing loading,
• minimum control surface deflection,
• minimum radar signature,
• minimum drag,
• maximum lift, and
• rapid reconfigurability for fault tolerance,
and others, e.g. BIB019 . Using the pseudo-inverse solution as a preference vector u_p allows one to analytically represent the control allocator in a robustness analysis of the system that is valid as long as no single axis is saturated and the commanded accelerations are feasible, BIB011 . This facilitates the verification and validation process that must be completed prior to flight testing when using optimization-based control allocation methods. A comprehensive comparison of the performance of several state-of-the-art linear control allocation methods is provided by BIB007 . One main conclusion is that the optimization-based methods tend to outperform the alternative methods proposed in the literature, both in terms of avoiding unnecessary infeasibility and minimizing the use of control effort.
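A minimal sketch of the weighted generalized-inverse allocation with a preference vector u_p, i.e. the closed-form solution of min (u − u_p)ᵀW(u − u_p) subject to Bu = τ. The matrices and weights below are hypothetical illustrations; a larger weight on an effector discourages its use (e.g. to reduce drag or radar signature).

```python
import numpy as np

def weighted_pinv_alloc(B, W, tau, u_p):
    """Minimize (u - u_p)' W (u - u_p) subject to B u = tau.
    Closed form via the weighted generalized inverse (B full row rank)."""
    Wi = np.linalg.inv(W)
    Bw = Wi @ B.T @ np.linalg.inv(B @ Wi @ B.T)
    return u_p + Bw @ (tau - B @ u_p)

# Hypothetical 3 moments, 5 effectors; the weight penalizes effector 5.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 5))
W = np.diag([1.0, 1.0, 1.0, 1.0, 10.0])
tau = np.array([1.0, -0.5, 0.2])
u = weighted_pinv_alloc(B, W, tau, np.zeros(5))
```

Note that the command is met exactly whenever it is feasible; constraints are not handled here, which is exactly why the optimization-based methods of Section 2 are preferred when saturation matters.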
The 2-norm (quadratic) formulation seems to be preferable to the 1-norm (linear) formulation, since the solution tends to combine the use of all control surfaces (rather than just a few), BIB010 . The use of the ∞-norm will minimize the maximum effector use and therefore lead to a balanced use of effectors, which also has advantages for robustness to failures and nonlinearities, BIB020 BIB022 BIB021 . The underlying motivation for the direct control allocation method, BIB001 , is that in many applications (in particular aircraft) it is considered important to keep the direction of the allocated forces and moments equal to the command, in order to get graceful degradation of performance and handling qualities. Hence, it is practically motivated by a different objective than merely minimizing the error in allocated generalized forces, which may be important in some flight control applications. Although the control allocation problem is in most cases decoupled from the high-level motion control design, there may be cases when the interactions between control and control allocation should be studied or accounted for. This includes cases when actuator dynamics is significant, e.g. , or the control allocation influences the zero-dynamics due to inversion-based control, e.g. BIB002 BIB003 . An integrated approach to flight control and control allocation design is investigated in , while BIB009 describes two control allocation methods that adapt the weight matrices of a pseudo-inverse-like control allocation law in order to avoid saturation and rate constraints. Control allocation methods that explicitly take into account linear actuator dynamics have been proposed, using linear constrained MPC, BIB012 BIB013 BIB014 , or optimization in the framework of linear matrix inequalities (LMIs), .
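The min-max (∞-norm) formulation can be posed as a small linear program in the augmented variable (u, t), where t bounds the largest deflection normalized by the actuator limits. A sketch, assuming SciPy is available; the effector data are hypothetical and chosen to show the load-balancing behavior.

```python
import numpy as np
from scipy.optimize import linprog

def minmax_alloc(B, tau, u_max):
    """Minimize t = max_i |u_i| / u_max_i subject to B u = tau,
    posed as a linear program in the stacked variable (u, t)."""
    n, m = B.shape
    c = np.r_[np.zeros(m), 1.0]                   # cost: minimize t
    A_eq = np.c_[B, np.zeros(n)]                  # B u = tau
    # |u_i| <= t u_max_i  <=>  u_i - t u_max_i <= 0 and -u_i - t u_max_i <= 0
    A_ub = np.r_[np.c_[ np.eye(m), -u_max],
                 np.c_[-np.eye(m), -u_max]]
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * m), A_eq=A_eq, b_eq=tau,
                  bounds=[(-um, um) for um in u_max] + [(0.0, 1.0)])
    return res.x[:m] if res.success else None

# One axis driven by three identical effectors: the min-max criterion
# spreads the load equally instead of saturating one effector first.
B = np.array([[1.0, 1.0, 1.0]])
tau = np.array([1.5])
u = minmax_alloc(B, tau, u_max=np.array([1.0, 1.0, 1.0]))
```

Here every optimal solution must use each effector at exactly half its range, which is the "resource balancing" behavior described above.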
In order to reduce the effective time delay due to actuator rate limitations in case of quickly changing commanded moments, and thereby reduce the risk of pilot-induced oscillations, BIB024 proposed and studied a control allocation method that also penalizes the difference between the time-derivatives of the commanded and allocated moments. A linear programming approach to control allocation that accounts for interaction between control effectors due to aerodynamic couplings is studied in . Constrained control allocation using nonlinear effector models has been studied using numerical nonlinear programming methods, BIB016 BIB006 . Reconfiguration of the control allocation, via weight modifications BIB023 or adaptation of the effector model BIB015 , was considered in order to manage faults. Methods for stable adaptation of parameter uncertainty in the effector models due to failures, and associated control allocation strategies that group effectors into a smaller set of equivalent effectors, similar to ganging or daisy chaining, are studied in BIB025 . Model predictive control is shown to be a powerful tool to model failures and a suitable basis for fault tolerant control allocation in BIB017 .
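Actuator rate limits are commonly folded into a static allocation by tightening the position bounds at each sample, so an actuator can move at most rate·T from its previous position. A minimal sketch with hypothetical numbers; a clipped pseudo-inverse stands in for a full constrained solver, which would instead use the computed bounds as optimization constraints.

```python
import numpy as np

def effective_bounds(u_prev, u_min, u_max, rate, T):
    """Combine position and rate limits into per-sample bounds: over one
    sample of length T the actuator can move at most rate*T."""
    lo = np.maximum(u_min, u_prev - rate * T)
    hi = np.minimum(u_max, u_prev + rate * T)
    return lo, hi

# One allocation step (hypothetical 2 moments, 3 effectors).
B = np.array([[1.0, 0.8, -0.5],
              [0.2, -1.0, 1.0]])
u_prev = np.zeros(3)
lo, hi = effective_bounds(u_prev, -np.ones(3), np.ones(3),
                          rate=np.full(3, 2.0), T=0.02)   # 2 rad/s at 50 Hz
tau = np.array([0.05, -0.03])
u = np.clip(np.linalg.pinv(B) @ tau, lo, hi)              # rate-feasible command
```

Clipping can distort the allocated moment direction when a bound is hit, which is one motivation for the optimization-based and derivative-penalizing methods discussed above.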
Control Allocation- A Survey <s> Spacecraft <s> To enable autonomous operation of future reusable launch vehicles, reconfiguration technologies will be needed to facilitate mission recovery following a major anomalous event. The Air Force’s Integrated Adaptive Guidance and Control program developed such a system for Boeing’s X-40A, and the total in-flight simulator research aircraft was employed to flight test the algorithms during approach and landing. The inner loop employs a model-following/dynamic-inversion approach with optimal control allocation to account for control-surface failures. Further, the reference-model bandwidth is reduced if the control authority in any one axis is depleted as a result of control effector saturation. A backstepping approach is utilized for the guidance law, with proportional feedback gains that adapt to changes in the reference model bandwidth. The trajectory-reshaping algorithm is known as the optimum-path-to-go methodology. Here, a trajectory database is precomputed offline to cover all variations under consideration. An efficient representation of this database is then interrogated in flight to rapidly find the “best” reshaped trajectory, based on the current state of the vehicle’s control capabilities. The main goal of the flight-test program was to demonstrate the benefits of integrating trajectory reshaping with the essential elements of control reconfiguration and guidance adaptation. The results indicate that for more severe, multiple control failures, control reconfiguration, guidance adaptation, and trajectory reshaping are all needed to recover the mission. <s> BIB001 </s> Control Allocation- A Survey <s> Spacecraft <s> Redundant thrusters are generally used for a reliable attitude control system. Also, redundant thrusters yield a better performance if they are used appropriately. In this paper, the authors propose an efficient redundancy management algorithm to reduce the fuel consumption.
The algorithm is based on a linear programming problem, which is a constrained optimization problem. For the algorithm, a cost function is defined as a quantity related to the fuel consumption for a maneuver. The independent variables are the thrusters' on-times, which are control input variables of a satellite dynamic model. The advantage of the proposed method is verified by numerical examples. The examples show that the proposed method consumes less fuel than an existing method for a given maneuvering command. A sub-optimal algorithm is also discussed for an onboard computation. The proposed algorithm is applied to two maneuvers: move-to-rest and rest-to-rest. This is verified by a numerical simulation. <s> BIB002 </s> Control Allocation- A Survey <s> Spacecraft <s> A mixed-integer linear programming approach to mixing continuous and pulsed control effectors is proposed. The method is aimed at applications involving reentry vehicles that are transitioning from exoatmospheric flight to endoatmospheric flight. In this flight phase, aerodynamic surfaces are weak and easily saturated, and vehicles typically rely on pulsed reaction control jets for attitude control. Control laws for these jets have historically been designed using single-axis phase-plane analysis, which has proven to be sufficient for many applications where multiaxis coupling is insignificant and when failures have not been encountered. Here, we propose using a mixed-integer linear programming technique to blend continuous control effectors and pulsed jets to generate moments commanded by linear or nonlinear control laws. When coupled with fault detection and isolation logic, the control effectors can be reconfigured to minimize the impact of control effector failures or damage.
When the continuous effectors can provide the desired moments, standard linear programming methods can be used to mix the effectors; however, when the pulsed effectors must be used to augment the aerodynamic surfaces, mixed-integer linear programming techniques are used to determine the optimal combination of jets to fire. The reaction jet control allocator acts as a nonuniform quantizer that applies a moment vector to the vehicle, which approximates the desired moment generated by a continuous control law. Lyapunov theory is applied to develop a method for determining the region of attraction associated with a quantized vehicle attitude control system. <s> BIB003 </s> Control Allocation- A Survey <s> Spacecraft <s> Future space missions demand high precision control in both angular and linear axes. Therefore propulsion systems providing effort in both axes with a high precision are needed. Microthrusters can satisfy these requirements. These actuators typically present an allocation problem. Moreover, because of this demand on high precision, the maximal propulsion capacity appears to be critically low, leading to a possible saturation of the actuators. A multi-saturation based model for a highly non-linear allocation function is presented. An anti-windup strategy dealing with the actuator saturation is proposed. Simulations consider a two-satellite flight formation scenario. They show the improvement in performance and stability tolerance to off-nominal initial conditions. Simulations are based on a nominal model provided by Thales Alenia Space (TAS). <s> BIB004 </s> Control Allocation- A Survey <s> Spacecraft <s> Accurate and reliable control of planetary entry is a major challenge for planetary exploration vehicles. For Mars entry, uncertainties in atmospheric properties such as winds aloft and density pose a major problem for meeting precision landing requirements.
Anticipated manned missions to Mars will also require levels of safety and fault tolerance not required during earlier robotic missions. This paper develops a nonlinear fault-tolerant controller specifically tailored for addressing the unique environmental and mission demands of future Mars entry vehicles. The controller tracks a desired trajectory from entry interface to parachute deployment, and has an adaptation mechanism that reduces tracking errors in the presence of uncertain parameters such as atmospheric density, and vehicle properties such as aerodynamic coefficients and inertias. This nonlinear control law generates the commanded moments for a discrete control allocation algorithm, which then generates the optimal controls required to follow the desired trajectory. The reaction control system acts as a non-uniform quantizer, which generates applied moments that approximate the desired moments generated by a continuous adaptive control law. If a fault is detected in the control jets, it reconfigures the controls and minimizes the impact of control failures or damage on trajectory tracking. It is assumed that a fault identification and isolation scheme already exists to identify failures. A stability analysis is presented, and fault tolerance performance is evaluated with non real-time simulation for a complete Mars entry trajectory tracking scenario using various scenarios of control effector failures. The results presented in the paper demonstrate that the control algorithm has a satisfactory performance for tracking a pre-defined trajectory in the presence of control failures, in addition to plant and environment uncertainties. Copyright © 2010 John Wiley & Sons, Ltd. <s> BIB005
Spacecraft have other actuators and effectors, either instead of, or in addition to, control surfaces. These include reaction control jets and reaction wheels. In addition, the energy consumption for control is generally a high-priority objective of the control system design of spacecraft and will often need to be strongly considered in the control allocation strategy as well. Flight tests with the Boeing X-40A reusable launch vehicle using reconfigurable control allocation based on linear programming are reported in BIB001 . In case of faulty locked actuators, it is proposed in BIB001 to set the associated elements of the preferred control vector u_p to the locked actuator positions. Fault tolerant control allocation for a planetary entry vehicle was investigated in BIB005 , using mixed-integer linear programming to handle the quantized/discrete nature of pulsed reaction control jets. BIB003 propose a control allocation approach to optimally combine the use of (discrete) pulsed reaction control jets with (continuous) control surfaces in spacecraft transitions from exo-atmospheric to endo-atmospheric flight, using mixed-integer linear programming. Satellite systems often have redundant thrusters, where it is desirable to minimize energy consumption during a maneuver or attitude control. A linear programming control allocation approach is investigated in BIB002 , where it is shown that it can reduce energy consumption compared to a simpler grouping strategy. A multi-saturation based model for a highly non-linear allocation function of micro-thrusters in a satellite is presented in BIB004 . Quadratic programming is used for constrained thrust allocation of redundant satellites in , with particular emphasis on fault tolerant reconfigurable control.
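The fuel-minimizing linear program of BIB002 amounts to minimizing total thruster on-time subject to producing the commanded torques, with nonnegative on-times since each jet thrusts in one direction only. A sketch with a hypothetical four-jet layout, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical: four unidirectional reaction jets producing pure torques
# about two axes; fuel use is proportional to total on-time.
B = np.array([[ 1.0, -1.0,  0.0,  0.0],    # roll torque per unit on-time
              [ 0.0,  0.0,  1.0, -1.0]])   # pitch torque per unit on-time
tau = np.array([0.3, -0.2])                # commanded torque impulses

res = linprog(c=np.ones(4),                # minimize total on-time (fuel)
              A_eq=B, b_eq=tau,
              bounds=[(0.0, 1.0)] * 4)     # on-times limited to [0, 1] s
u = res.x
```

The LP naturally avoids wasteful "fighting" between opposing jets: at the optimum only one jet of each opposing pair fires, which is exactly the fuel saving over simpler grouping strategies.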
Control Allocation- A Survey <s> Station-keeping and low-speed maneuvering <s> This paper presents a new thrust-allocation scheme which significantly reduces the fuel consumption for dynamic positioning of ships when rotatable azimuth thrusters are used. Thrust allocation is the problem of determining the thrust and the direction of each of the n thruster devices of a ship, given the desired forces and moment from the control law. The problem of singular configurations is pointed out and solved by using a modification of the singular-value decomposition. A filtering scheme is proposed to control the azimuth directions to reduce the wear and tear, and to minimize the required thrust and power in a general manner, allowing the enabling and disabling of thrusters. Both new theory and full-scale sea trials are presented. <s> BIB001 </s> Control Allocation- A Survey <s> Station-keeping and low-speed maneuvering <s> In one concept for a Mobile Offshore Base (MOB), the several modules are kept in alignment and properly positioned with respect to one another by the use of many steerable thruster units. The research reported here develops an algorithm using linear programming to rationally allocate commanded thrust and thruster azimuth to multiple thrusters (more than two) in order to best achieve the desired force system on the MOB module. The linear programming approach allows explicit consideration of many of the real-world characteristics of the thrusters, including maximum thrust capability, maximum slew rate of the thrusters, etc. The algorithm was programmed and the results of the optimum allocation for two different scenarios are presented. <s> BIB002 </s> Control Allocation- A Survey <s> Station-keeping and low-speed maneuvering <s> The thruster allocation is a complex mapping from a demanded force and turning moment to a set of thruster pitch/rpm and azimuth setpoints.
Since the number of degrees of freedom of thruster controls is normally very high, there exist many such mappings. <s> BIB003 </s> Control Allocation- A Survey <s> Station-keeping and low-speed maneuvering <s> In this paper, a method of thrust allocation based on a linearly constrained quadratic cost function capable of handling rotating azimuths is presented. The problem formulation accounts for magnitude and rate constraints on both thruster forces and azimuth angles. The advantage of this formulation is that the solution can be found with a finite number of iterations for each time step. Experiments with a model ship are used to validate the thrust allocation system. <s> BIB004 </s> Control Allocation- A Survey <s> Station-keeping and low-speed maneuvering <s> In extreme weather conditions, thrusters on ships and rigs may be subject to severe thrust losses caused by ventilation and in-and-out-of-water events. When a thruster ventilates, air is sucked down from the surface and into the propeller. In more severe cases, parts of or even the whole propeller can be out of the water. These losses vary rapidly with time and cause increased wear and tear in addition to reduced thruster performance. In this paper, a thrust allocation strategy is proposed to reduce the effects of thrust losses and to reduce the possibility of multiple ventilation events. This thrust allocation strategy is named antispin thrust allocation, based on the analogous behavior of antispin wheel control of cars. The proposed thrust allocation strategy is important for improving the life span of the propulsion system and the accuracy of positioning for vessels conducting station keeping in terms of dynamic positioning or thruster-assisted position mooring. Application of this strategy can result in an increase of operational time and, thus, increased profitability. The performance of the proposed allocation strategy is demonstrated with experiments on a model ship. 
<s> BIB005 </s> Control Allocation- A Survey <s> Station-keeping and low-speed maneuvering <s> A thrust allocation method with capabilities to assist the power management system on dynamically positioned ships is proposed in this paper. Its main benefits are reduction in frequency and/or load variations on the electric network, and a formulation of thruster bias which can be released when required by the power management system. To reduce load variations without increasing overall power consumption it is necessary to deviate from the thrust command given by the dynamic positioning system or joystick. The resulting deviation in position and velocity of the vessel is tightly controlled, and results show that small deviations are sufficient to fulfill the objective. For simplicity, the study has been limited to thrusters with fixed direction, having in mind that generalizations are fairly straightforward. <s> BIB006
Several types of ships and specialized vessels, such as semisubmersible platforms used in the petroleum industry, depend on thrust allocation control systems during certain modes of operation. This is in particular the case during dynamic positioning operations, which include station keeping and low-speed maneuvering using joystick control or automatic tracking functionality, . Such control systems often control the vessel in three degrees of freedom (surge, sway and yaw) and command the required surge and sway forces as well as the yaw moment to the thrust allocation system. Dynamic positioning operations include drilling, offloading of cargo or petroleum at oilfield installations, pipe laying, cable laying, seismic data acquisition, dredging, fire fighting and rescue, construction, diving support, and others. Thrust allocation for low-speed maneuvering is used for vessels ranging from cruise vessels, ferries, and tankers to smaller yachts, research and fishing vessels. The thruster system can be implemented using various thrust producing devices that are effective in the low-speed regime:
• Main propellers give positive or negative force in the longitudinal direction only, and possibly a small yaw moment if mounted off the longitudinal axis. The propeller thrust is usually controlled through its angular speed, a variable pitch angle, or both.
• Main propellers with rudders give positive or negative force in the longitudinal direction. In addition, the rudder angle can be controlled to produce lateral forces and yaw moment when the propeller thrusts forward, since the propeller slipstream is directed to flow at high speed past the rudder surface and can therefore produce a significant lateral force. When the propeller thrusts backwards, the rudder is not effective.
• Tunnel thrusters are propellers mounted in the lateral direction in tunnels through the ship hull. They produce lateral forces and yaw moments.
• Azimuth thrusters are propellers that can be turned to produce thrust in any direction in the horizontal plane. The propeller thrust is usually controlled through its angular speed, a variable pitch angle, or both. Since it is a vector thrust device, an azimuth thruster has two degrees of freedom for the control allocation.
• Water jets and other propulsion devices and control surfaces are less commonly used.
Thrusters are commonly powered by electricity distributed from a power plant that may comprise one or more diesel-engine or gas-turbine electric generators. Main propellers are sometimes directly driven by the engine. Safety and operational requirements demand a high degree of redundancy to achieve the necessary fault tolerance. A typical requirement is that operations can continue uninterrupted for some time, to allow them to be aborted safely after major failures such as loss of a single thruster, a single generator set, a single electric switchboard, or a single engine room due to fire or flooding in a single compartment. Often, the worst-case single-point failure is loss of half of the thrust capacity due to a switchboard short-circuit failure, or fire or flooding in a machine room. Advanced vessel designs with high redundancy tend to have four to eight thrust producing devices, where some are azimuth thrusters with two independent degrees of freedom for control. The thrust allocation algorithm therefore has many degrees of freedom in order to be capable of handling critical failures. The thruster system capacity is usually designed based on vessel capability requirements to withstand environmental forces such as wind, waves and currents, . Usually, wind loads are dominating.
Thrust allocation objectives and constraints that are commonly accounted for, BIB002 BIB001 BIB004 , include the following:
• Surge, sway and yaw control, usually with priority on the yaw axis, since loss of heading will usually imply loss of position under heavy wind conditions, as ships are designed for minimum wind loads when heading up against the wind.
• Thrusters have individual capacity constraints due to their power rating, but may also have coupled constraints if limited by the electric power available on a shared power bus.
• Rate constraints are generally important for the turning of azimuth thrusters and the rudders' steering machine.
• Minimization of fuel consumption.
• Minimization of tear and wear on thrusters and generator sets due to time-varying control commands that must respond to the motion of the vessel caused by wind and waves.
• Avoiding too large variations in electric power consumption, which may cause blackout due to over- or under-frequency protection of the weak electric power grid on an isolated ship or vessel.
• Sector constraints are sometimes imposed on azimuth thrusters in order to protect equipment (like subsea equipment lowered through a moon pool, or hydro-acoustic transceivers used for positioning), divers in the water, or to avoid thrust losses in nearby thrusters due to interactions caused when directing the slipstream of one thruster into the propeller disc of another thruster.
• Thrusters may be disabled and enabled dynamically in order to guarantee fault tolerance and operational flexibility.
Industrial solutions are described in BIB001 BIB003 . A static QP-based strategy is described in , while the method in BIB001 utilizes pseudo-inverses in combination with the extended thrust concept . The control vector u consists of the horizontal plane thrust vector decomposed in the vessel xy-axes in order to allow linear models also with azimuth thrusters.
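The extended thrust concept can be sketched as follows: each azimuth thruster is represented by its x- and y-force components, which keeps the effector model linear, and the azimuth angle is recovered from the allocated components afterwards. The thruster layout and commands below are hypothetical illustrations.

```python
import numpy as np

def build_thrust_config(thrusters):
    """Columns of B map extended thrust components to (surge, sway, yaw).
    Each entry: (lx, ly, kind), position in metres from midships; kind is
    'azimuth' (two columns, Fx and Fy), 'tunnel' (Fy only) or 'main' (Fx only)."""
    cols = []
    for lx, ly, kind in thrusters:
        if kind in ('azimuth', 'main'):
            cols.append([1.0, 0.0, -ly])    # unit x-force: yaw arm -ly
        if kind in ('azimuth', 'tunnel'):
            cols.append([0.0, 1.0,  lx])    # unit y-force: yaw arm  lx
    return np.array(cols).T

# Hypothetical supply-vessel layout: bow tunnel thruster, two azimuths aft.
B = build_thrust_config([( 35.0,  0.0, 'tunnel'),
                         (-30.0, -5.0, 'azimuth'),
                         (-30.0,  5.0, 'azimuth')])
tau = np.array([50.0, 20.0, 300.0])        # desired surge [kN], sway [kN], yaw [kNm]
f = np.linalg.pinv(B) @ tau                # extended thrust components
# Azimuth angle and thrust magnitude of the first azimuth thruster:
Fx, Fy = f[1], f[2]
alpha = np.arctan2(Fy, Fx)
thrust = np.hypot(Fx, Fy)
```

The pseudo-inverse here plays the role of the unconstrained allocator in BIB001 ; the azimuth filtering and saturation handling discussed below operate on the recovered (alpha, thrust) setpoints.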
In BIB001 , constraints are handled by saturation strategies in combination with filtering of azimuth angle commands, which also serves the secondary objective of reducing thruster tear and wear, and the singular value decomposition is used to handle cases when controllability is temporarily weak because all thrusters are aligned and can only produce thrust in the same direction. The interactions between the thrust allocation and low-level thruster control strategies are studied in , which is particularly important in extreme seas where thrust losses can be large when the propeller ventilates, and in-and-out-of-water effects may lead to propeller spin if not properly addressed, BIB005 . Even at fairly low speeds, tunnel thrusters will lose much of their effectiveness, which should be accounted for in the scheduling of the control effectiveness matrix B, . A practical strategy that explicitly optimizes the thrust allocation in order to account for power generation constraints, variations in loads, and operational desires such as balancing the load on different electric bus segments and switchboards is described in BIB003 . An integrated approach to dynamic control of the power plant as part of the thrust allocation strategy is studied in BIB006 , inspired by , who studied the stabilizing effect on the electrical power plant. A similar industrial thrust allocation implementation with dynamic load control and prediction is described in . The allocation of control to rudders is particularly challenging due to their highly asymmetric characteristic (no effect when the propellers thrust backwards). Optimization-based approaches that consider the finite (usually small) number of combinations of propeller thrust directions have been proposed and successfully tested, . Similar strategies can be used for general nonconvex thruster constraints, , e.g. due to forbidden sectors being less than 180 degrees. In special situations, e.g.
when there are primarily azimuth thrusters in use, an additional objective of thruster configuration singularity avoidance might be useful in order to avoid temporary loss of controllability when all thrusters point in more or less the same direction, BIB001 . Station keeping of ships and semi-submersible platforms for long periods of time is sometimes implemented using mooring lines with thruster-assisted position and heading control. This is commonly used for drilling units and floating production, storage and offloading units (FPSOs) operating in water depths of less than 500 meters. The thrust allocation must take into account the mooring line forces and provide assistance when needed to make corrections, for example in strong winds or after a mooring line break, e.g. . Control allocation for small-waterplane marine constructions such as semi-submersibles can obtain additional roll and pitch damping using a conventional thruster system. This is possible for constructions with large draft and beam relative to the length, since controllability depends on moment arms in roll and pitch . In this case the thrust allocation scheme should not only allocate forces and moment in the horizontal plane (surge, sway and yaw), but also allocate moments in roll and pitch.
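Singularity avoidance is often implemented by adding a cost term of the form ρ/(ε + det(B(α)Bᵀ(α))) that grows large as the azimuth angles align and the configuration loses rank (a weighting matrix W is omitted here for simplicity). A sketch with hypothetical thruster positions:

```python
import numpy as np

def config_matrix(alphas, positions):
    """B(alpha) for azimuth thrusters (3 x n): unit-thrust contributions
    to surge, sway and yaw at the given azimuth angles."""
    cols = [[np.cos(a), np.sin(a), lx * np.sin(a) - ly * np.cos(a)]
            for a, (lx, ly) in zip(alphas, positions)]
    return np.array(cols).T

def singularity_cost(alphas, positions, rho=1.0, eps=1e-3):
    """Penalty that blows up as det(B B') -> 0, i.e. as the thrusters
    align and controllability is temporarily lost."""
    B = config_matrix(alphas, positions)
    return rho / (eps + np.linalg.det(B @ B.T))

pos = [(-30.0, -5.0), (-30.0, 5.0), (35.0, 0.0)]        # metres from midships
aligned = singularity_cost([np.pi/2, np.pi/2, np.pi/2], pos)   # all to port
spread  = singularity_cost([0.0,     np.pi/2, np.pi/2], pos)   # one ahead
```

An allocator that includes this penalty will trade a little extra thrust for keeping the azimuth angles spread out, so that a sudden sway or surge demand remains producible.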
Control Allocation - A Survey

High-speed maneuvering and ship autopilots
Ship autopilots conventionally use rudders to meet heading control objectives, but they may also use additional control surfaces such as fins and azimuth propellers (azipods), which calls for control allocation solutions. It is also possible to use rudders for roll damping, alone or in combination with controllable fins (see the references therein). The penalties for the use of rudders and fins must be included in the control objective together with penalties and criteria for accurate steering and roll damping. This is an over-actuated control allocation problem, except for ships equipped with a single rudder for simultaneous heading control and roll damping; such an under-actuated rudder-roll damping system depends on frequency separation of the two functions, with rudder-roll damping acting at high frequencies (see the references therein). Severe instances of parametric rolling of ships can be avoided by specifying the control objectives of the speed and heading autopilots such that the frequency of excitation is changed via the Doppler shift of the encounter frequency BIB001. The optimal frequency is found using MPC or extremum-seeking control, and nonlinear control allocation is used to compute the desired speed and heading angle based on a penalty function designed such that the encounter frequency is never equal to twice the natural frequency in roll, which is the condition for parametric resonance BIB002.
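The frequency-detuning idea can be illustrated with the standard Doppler-shift relation for the encounter frequency. The toy sketch below (all numbers invented) searches a grid of speed/heading candidates for the pair that moves the encounter frequency furthest from twice the roll natural frequency — a crude stand-in for the constrained nonlinear optimization used in BIB002.

```python
import math

G = 9.81  # gravity, m/s^2

def encounter_frequency(omega0, speed, chi):
    # Doppler-shifted wave frequency seen on board:
    # omega_e = |omega0 - omega0^2 * U * cos(chi) / g|,
    # where chi is the wave direction relative to the bow (chi = pi: head seas).
    return abs(omega0 - omega0 ** 2 * speed / G * math.cos(chi))

def detune(omega0, omega_roll, speeds, headings):
    # Toy frequency-detuning "allocation": pick the (speed, heading) pair whose
    # encounter frequency is furthest from 2*omega_roll, the parametric-roll
    # resonance condition.
    return max(((u, chi) for u in speeds for chi in headings),
               key=lambda p: abs(encounter_frequency(omega0, *p) - 2 * omega_roll))

# Hypothetical numbers: wave frequency 0.5 rad/s, roll natural frequency 0.3 rad/s.
u_opt, chi_opt = detune(0.5, 0.3, speeds=[2.0, 4.0, 6.0, 8.0],
                        headings=[math.pi, 3 * math.pi / 4, math.pi / 2])
```

A real implementation would also penalise deviations from the nominal speed and heading, as in the penalty-function formulation described above.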
Multi vessel operations
Control allocation strategies have also been proposed for multi-vessel operations, where several tug-boats cooperatively generate forces and moments in order to tow a floating structure. This is formulated in a straightforward manner in the control allocation framework by incorporating the constraints on the tug-boats' thrust capacity and direction BIB001 BIB002. Its implementation requires a supervisory strategy that coordinates the tug-boats, which operate as effectors/actuators in this framework.
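A minimal version of such a tow allocation, assuming fixed tug positions and push directions (all numbers hypothetical), can be solved as a bounded non-negative least-squares problem by projected gradient descent, which handles the unidirectional-thrust constraint directly:

```python
def allocate_tugs(B, tau, u_max, iters=20000, step=0.003):
    # Projected-gradient solution of min ||B u - tau||^2 with 0 <= u_i <= u_max:
    # each tug can only push (unidirectional thrust), the key constraint in
    # cooperative towing. The step size must satisfy step < 2 / lambda_max(B^T B).
    m, n = len(B), len(B[0])
    u = [0.0] * n
    for _ in range(iters):
        r = [sum(B[i][k] * u[k] for k in range(n)) - tau[i] for i in range(m)]
        g = [sum(B[i][k] * r[i] for i in range(m)) for k in range(n)]
        u = [min(u_max, max(0.0, u[k] - step * g[k])) for k in range(n)]
    return u

# Hypothetical layout: two stern tugs pushing forward (offset from the
# centreline) and two bow tugs pushing to port/starboard.
B = [[1.0, 1.0, 0.0, 0.0],      # surge
     [0.0, 0.0, 1.0, -1.0],     # sway
     [-5.0, 5.0, 15.0, -15.0]]  # yaw moment (arms in metres)
tau = [100.0, 20.0, 50.0]
u = allocate_tugs(B, tau, u_max=200.0)
ach = [sum(B[i][k] * u[k] for k in range(4)) for i in range(3)]
```

The demand above is feasible with pushing-only tugs, so the iteration drives the allocation error essentially to zero; an infeasible demand would instead yield the least-squares compromise within the bounds.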
Maneuvering of underwater vehicles
Highly maneuverable underwater vehicles, either ROVs (Remotely Operated Vehicles) or AUVs (Autonomous Underwater Vehicles), are often controlled using compact electrically driven thrusters and fins. The thrust allocation problem is similar to that of a dynamically positioned surface vessel, including cases where vertical forces are also controlled using thrusters in addition to buoyancy control. Commonly used methods include pseudo-inverses, redistributed pseudo-inverses and simple optimization formulations BIB001. Aspects of fault-tolerant control through saturation mechanisms and appropriate weighting of the pseudo-inverse are studied in BIB002.
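The weighted pseudo-inverse mentioned above can be sketched as follows; the thruster configuration and weights are invented for illustration, with a large weight emulating a degraded thruster that should be used sparingly:

```python
def solve3(A, b):
    # Cramer's rule for a 3x3 system (enough for a 3-DOF allocation).
    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    d = det3(A)
    x = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]
        x.append(det3(M) / d)
    return x

def weighted_pinv(B, W, tau):
    # Weighted minimum-energy allocation u = W^-1 B^T (B W^-1 B^T)^-1 tau;
    # a large diagonal weight W_k penalises (e.g. a degraded) thruster k.
    m, n = len(B), len(B[0])
    BWBt = [[sum(B[i][k] * B[j][k] / W[k] for k in range(n))
             for j in range(m)] for i in range(m)]
    lam = solve3(BWBt, tau)
    return [sum(B[i][k] * lam[i] for i in range(m)) / W[k] for k in range(n)]

# Hypothetical 3-DOF (surge, sway, yaw) ROV with four horizontal thrusters.
B = [[1.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 1.0],
     [1.0, -1.0, 2.0, -2.0]]
tau = [10.0, 4.0, 2.0]
u_nom = weighted_pinv(B, [1.0, 1.0, 1.0, 1.0], tau)
u_flt = weighted_pinv(B, [1.0, 1.0, 100.0, 1.0], tau)  # thruster 3 degraded
```

Both allocations meet the demanded loads exactly; the reweighted one simply shifts effort away from the penalised thruster, which is the basic mechanism behind the fault-accommodating schemes cited above.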
Yaw stability control
Active safety systems like electronic stability control (ESC) are now common in production cars, and have been shown to have a tremendous life-saving effect when skids may occur due to evasive maneuvers, slippery surfaces, or excessive speed in curves, e.g. BIB008. The ESC detects deviation between the actual lateral motion of the vehicle and the driver's intention, usually by comparing the vehicle's lateral acceleration and yaw rate with information computed from the steering wheel angle commanded by the driver. In case of a significant difference, the ESC automatically takes action to counteract skidding by actuating a yaw moment to correct the skidding motion of the car (van Zanten 2000, BIB003). In most vehicles, the four brakes are actuated independently in order to set up such a moment, possibly in combination with engine torque reduction. Increasing the longitudinal wheel slip by setting up a longitudinal braking force will effectively reduce the lateral friction forces, and both these phenomena contribute to generating a change in the yaw moment. This leads to a control allocation problem, where the constrained forces and moments generated by the four brakes must be coordinated to generate the desired yaw moment while at the same time minimizing other forces generated by the tires, in order to avoid unintended side effects or discomforting the driver. For small control actions, the main challenge is that brake forces are uni-directional. For large control actions, however, the problem is much more challenging, since the tire nonlinearities due to the saturating characteristics of tire friction forces must be taken into account. This saturation level depends strongly on the ground surface and the tires, both being uncertain, time-varying properties from the control system's point of view.
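The detection step can be illustrated with the steady-state yaw-rate gain of a linear single-track (bicycle) model; the vehicle parameters and intervention threshold below are illustrative only, not from any particular vehicle:

```python
def desired_yaw_rate(speed, steer, wheelbase=2.7, k_us=0.0024):
    # Steady-state yaw rate of a linear single-track model,
    # r = V * delta / (L + K_us * V^2), with V in m/s, the road-wheel
    # steering angle delta in rad, and K_us the understeer gradient.
    return speed * steer / (wheelbase + k_us * speed ** 2)

def esc_decision(speed, steer, yaw_measured, threshold=0.1):
    # Compare the driver-intended yaw rate (from the steering angle) with the
    # measured one; return the sign of the corrective yaw moment, or 0 if no
    # intervention is needed. Positive error: the vehicle turns less than
    # commanded (understeer); negative error: oversteer.
    err = desired_yaw_rate(speed, steer) - yaw_measured
    if abs(err) < threshold:
        return 0
    return 1 if err > 0 else -1
```

The corrective moment requested here is what the brake-based control allocation discussed below must then realise with uni-directional, saturating tire forces.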
Moreover, anti-lock braking systems (ABS) may be activated and act as a limitation on the achievable performance, since the ABS may limit the longitudinal slip in order to maintain high lateral friction, which may be a conflicting objective in some cases when lateral stability is lost. Furthermore, the load distribution on the tires may be far from even due to large accelerations, and this must also be considered when allocating forces to each wheel. In order to maximize the region of stability, these nonlinear effects are important to consider. Some electronic stability control systems also use active steering to manipulate the yaw moment, where an electric motor on the steering column may add actuation to the driver's command BIB002. Systems with additional redundancy that combine active steering and active braking have also been proposed BIB001 BIB004 BIB010, which provides additional control authority and an opportunity to enhance the region of stability. Due to the strong nonlinearities and dynamic constraints, nonlinear constrained control allocation techniques will generally be desired for lateral vehicle control. However, in order to avoid the online computational burden of nonlinear programming, several simplified approaches have been proposed. The effector mapping from longitudinal tire slips and slip angles is linearized in BIB012, and an accelerated fixed-point iteration algorithm is studied as a computationally efficient alternative to quadratic programming BIB005. A commonly used control allocation objective is to minimize friction forces, for example the adhesion potential characterized using friction ellipse models for each individual tyre. Using linearization of the model τ = Gϕ(u, x, θ), where θ are time-varying parameters, the constrained least-squares problem of allocation error minimization can be solved using numerical online quadratic programming BIB013.
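For the small-control-action case with uni-directional brakes, a redistributed pseudo-inverse is a common lightweight alternative to online QP. The sketch below (hypothetical half-track arm and force limits) allocates a demanded yaw moment over four brake forces, clamping saturated wheels and redistributing to the remaining ones:

```python
def redistributed_pinv(b, tau, lo, hi):
    # Redistributed pseudo-inverse for a single controlled axis: start from the
    # minimum-norm solution u_i = b_i * lambda, clamp effectors that violate
    # their limits [lo_i, hi_i], and re-solve with the remaining free ones.
    n = len(b)
    free = set(range(n))
    u = [0.0] * n
    while free:
        resid = tau - sum(b[i] * u[i] for i in range(n) if i not in free)
        lam = resid / sum(b[i] ** 2 for i in free)
        sat = {i for i in free if not (lo[i] <= b[i] * lam <= hi[i])}
        for i in free:
            u[i] = min(max(b[i] * lam, lo[i]), hi[i])
        if not sat:
            break
        free -= sat
    return u

# Yaw moment from four individual brake forces (half-track arm 0.8 m); right-
# side braking yields a negative moment. Brakes are unidirectional, so the
# lower bound is zero. All numbers are illustrative.
arms = [0.8, -0.8, 0.8, -0.8]   # FL, FR, RL, RR
u = redistributed_pinv(arms, tau=2000.0, lo=[0.0] * 4, hi=[1500.0] * 4)
moment = sum(a * f for a, f in zip(arms, u))
```

In this example the initial minimum-norm solution would require pulling forces from the right-side brakes; those are clamped to zero and the left-side brakes absorb the full demand.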
The effect of the weighting matrix coefficients on vehicle handling and control allocation performance is studied in BIB006 BIB007, using a control allocation method based on unconstrained optimization. With these approaches, nonlinearities and uncertainty are handled within low-level actuator/effector controllers, which can also provide time-varying constraint limits (such as the estimated maximum tire/road friction coefficient) to the control allocation. A nonlinear programming approach to nonlinear constrained control allocation for yaw stabilization is taken in , where computational efficiency is achieved through an approximate multi-parametric nonlinear programming algorithm that pre-computes a piecewise linear function that can be evaluated online using binary search tree data structures in order to approximate the optimal solution. The nonlinear optimizing control allocation method of BIB011 minimizes the work load of each tire, assuming all wheels can be actuated independently with respect to steering and braking/traction, and theoretical convexity properties of the optimization problem are studied. Fault-tolerant control with respect to brake failures is studied in , where the main objective of the linear programming based control algorithm during the failure mode is to redistribute the control tasks to the functioning actuators, so that the vehicle performance remains as close as possible to the desired performance in spite of a failure. The dynamic adaptive nonlinear control allocation method is studied for yaw stabilization in BIB015 (see section 3.3), where a combination of braking and front-wheel steering is used for actuation. Estimation of the maximum tire-road friction coefficient is an integral part of the adaptive control allocation strategy. In BIB016, the performance of the method is further compared to a (static) nonlinear programming approach.
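The tire work load used as an allocation objective in BIB006 and BIB011 is typically the adhesion utilisation against the friction circle; a minimal helper (with illustrative loads and friction value) shows why lateral load transfer matters when distributing forces:

```python
import math

def tire_workload(fx, fy, fz, mu):
    # Adhesion utilisation of one tire: planar force magnitude relative to the
    # friction-circle limit mu * Fz; values close to 1 mean the tire saturates.
    return math.hypot(fx, fy) / (mu * fz)

# Illustrative check of a candidate allocation under lateral load transfer:
# outer wheels carry more vertical load, so the same planar force uses less
# of their adhesion potential than on the unloaded inner wheels.
mu = 0.9
loads = [4800.0, 3200.0, 4800.0, 3200.0]     # vertical loads in N (outer/inner)
forces = [(2000.0, 1000.0)] * 4              # identical (Fx, Fy) per wheel
workloads = [tire_workload(fx, fy, fz, mu)
             for (fx, fy), fz in zip(forces, loads)]
```

A work-load-minimising allocator would shift planar force towards the heavily loaded wheels until the utilisations equalise, rather than commanding identical forces as above.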
A simpler gradient-based dynamic control allocation approach is shown to be effective in BIB014. It should also be mentioned that model predictive control designs, incorporating dynamic vehicle and actuator models, are effective methods for solving the combined motion control and control allocation problem in vehicle dynamics control, e.g. BIB009, although the computational complexity is much more of a challenge than with a control allocation design.
Electrical propulsion
Electrically powered ground vehicles may have in-wheel electric motors that combine drive and regenerative braking functions, possibly in combination with friction brakes, and the trend is towards highly over-actuated vehicles, BIB001 , with extended stability, BIB002 . Control allocation can be used to coordinate the electric motors across individual combinations of drive/brake modes while optimizing energy efficiency. The method proposed in BIB003 utilizes linear/quadratic approximations to formulate an approximate control allocation problem that can be solved numerically online with computational efficiency.
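To make the energy-weighted allocation idea concrete, the following sketch distributes a desired total longitudinal force and yaw moment among four in-wheel motors by minimizing a weighted quadratic cost. The geometry, the efficiency weights and the unconstrained weighted least-squares formulation are all illustrative assumptions, not the actual scheme of BIB003 .

```python
import numpy as np

# Illustrative weighted least-squares control allocation for four
# in-wheel motors (hypothetical geometry and efficiency weights).
# Virtual commands: total longitudinal force Fx and yaw moment Mz.
half_track = 0.8  # m, assumed half track width
# Effectiveness matrix B maps wheel forces [fl, fr, rl, rr] to [Fx, Mz].
B = np.array([
    [1.0, 1.0, 1.0, 1.0],                                # Fx: forces sum
    [-half_track, half_track, -half_track, half_track],  # Mz: differential
])
# A higher weight means a less efficient motor, penalized more in the
# cost u^T W u (these weights are placeholders, not measured data).
W = np.diag([1.0, 1.0, 1.2, 1.2])

def allocate(v):
    """Minimize u^T W u subject to B u = v (unconstrained weighted
    pseudo-inverse solution; actuator limits are ignored in this sketch)."""
    Winv = np.linalg.inv(W)
    return Winv @ B.T @ np.linalg.solve(B @ Winv @ B.T, v)

v = np.array([2000.0, 300.0])   # desired [Fx (N), Mz (N*m)]
u = allocate(v)
print(u, B @ u)                 # B @ u reproduces the virtual command v
```

With a higher weight on the rear motors, the allocator shifts effort toward the (assumed) more efficient front axle while still realizing the commanded force and moment exactly.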
Control Allocation- A Survey <s> Rollover prevention <s> This paper uses Model Predictive Control theory to develop a framework for automobile stability control. The framework is then demonstrated with a roll mode controller which seeks to actively limit the peak roll angle of the vehicle while simultaneously tracking the driver’s yaw rate command. Initially, the control law presented assumes knowledge of the complete input trajectory and acts as a benchmark for the best performance any controller could have on this system. This assumption is then relaxed by only assuming that the current driver steering command is available. Numerical simulations on a nonlinear vehicle model show that both control structures effectively track the driver-intended yaw rate during extreme maneuvers while also limiting the peak roll angle. During ordinary driving, the controlled vehicle behaves identically to an ordinary vehicle. These preliminary results show that for double lane change maneuvers, it is possible to limit roll angle while still closely tracking the driver’s intent. Copyright © 2003 by ASME <s> BIB001 </s> Control Allocation- A Survey <s> Rollover prevention <s> In this paper, a full-vehicle active suspension system is designed to simultaneously improve vehicle ride comfort and steady-state handling performance. First, a linear suspension model of a vehicle and a nonlinear handling model are described. Next, the link between the suspension model and vehicle steady-state handling characteristics is analysed. Then, an H-infinity controller for the suspension is designed to achieve integrated ride-comfort and handling control. Finally, the controller is verified by computer simulations. <s> BIB002 </s> Control Allocation- A Survey <s> Rollover prevention <s> This paper proposes a method to enhance existing electronic stability control systems such that a certain level of rollover mitigation performance is achieved.
Such an enhancement is conducted through a control algorithm using only the standard ESC sensors. The analysis presented here reveals that a rollover mitigation system such as this will face a trade-off between the vehicle’s responsiveness and the control robustness due to error in the roll dynamics model and state estimation. <s> BIB003 </s> Control Allocation- A Survey <s> Rollover prevention <s> In ambition to minimize potential interferences between yaw stabilization and rollover prevention of an automotive vehicle, this work presents a new approach to integrate both objectives. It introduces rollover prevention in form of a nonlinear constraint on the control allocation of a yaw stabilizing controller, yielding a hierarchical allocation problem. A suitable algorithm in form of a dynamic update law addressing this problem is derived. Its implementation is computationally efficient and suitable for low cost automotive electronic control units. The proposed rollover constraint design does not require any sensory equipment in addition to the yaw stabilizing algorithm. Actuation is conducted by differential braking, while an extension to further actuators is possible. The method is validated using an industrial multi-body vehicle simulator. <s> BIB004 </s> Control Allocation- A Survey <s> Rollover prevention <s> In this work a dynamic control allocation approach is presented for an automotive vehicle yaw stabilization scheme. The stabilization strategy consists of a high level module that deals with the vehicle motion control objective (yaw rate reference generation and tracking), a low level module that handles the braking control for each wheel (longitudinal slip control and maximal tire-road friction parameter estimation), and an intermediate level dynamic control allocation module that generates the longitudinal slip reference for the low level brake control module and commands front wheel steering angle corrections. 
The control allocation design is such that the actual torque about the yaw axis tends to the desired torque calculated from the high-level module, with a desirable distribution of control forces satisfying actuator constraints and minimal control effort objectives. Conditions for uniform asymptotic stability are given for the case when the control allocation includes adaptation of the tire-road maximal friction coefficients, and the scheme has been implemented in a realistic nonlinear multi-body vehicle simulation environment. The simulation cases show that the yaw control allocation strategy stabilizes the vehicle in extreme maneuvers where the nonlinear vehicle yaw dynamics otherwise (without active braking or active steering) become unstable in the sense of over- or understeering. The control allocation implementation is efficient and suitable for low-cost automotive electronic control units. <s> BIB005
Although yaw-stabilizing ESC does not explicitly consider the risk of rollover, it is widely acknowledged that an ESC function will reduce that risk, since it tends to reduce the lateral accelerations that are a main cause of rollover accidents. Further enhancements can be achieved if rollover prevention functionality is considered as an integral part of the vehicle dynamics control system. Using brake and steering actuators, the control allocation approach is extended by incorporating roll moment allocation together with yaw moment allocation in the control allocation, BIB003 BIB001 . Using a polyhedral approximation of the friction ellipsoids, a quadratic programming approach is taken to allocation error minimization in . In BIB004 , the dynamic nonlinear control allocation approach of BIB005 is extended to include rollover prevention objectives. Active suspension actuators have also been proposed for the control of ground vehicle yaw dynamics BIB002 , although they are not currently common in production vehicles.
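As a toy illustration of constrained allocation-error minimization of the kind used in the quadratic programming approaches above (not any of the cited algorithms themselves; the effectiveness row, friction bounds and projected-gradient solver are all simplifying assumptions):

```python
import numpy as np

# Toy constrained allocation: choose per-wheel brake forces u (each
# limited by a simple friction bound) to realize a desired yaw moment,
# minimizing the allocation error ||B u - v||^2.  Values are illustrative.
B = np.array([[-0.8, 0.8, -0.8, 0.8]])              # yaw effectiveness (m)
u_max = np.array([1500.0, 1500.0, 1500.0, 1500.0])  # friction bounds (N)

def allocate_yaw(v, iters=200, step=0.3):
    """Projected gradient descent on min ||B u - v||^2, 0 <= u <= u_max."""
    u = np.zeros(B.shape[1])
    for _ in range(iters):
        grad = 2.0 * B.T @ (B @ u - v)
        u = np.clip(u - step * grad, 0.0, u_max)
    return u

v = np.array([1800.0])          # desired yaw moment (N*m)
u = allocate_yaw(v)
print(u, B @ u)
```

The left-side brakes, whose yaw contribution has the wrong sign for this command, are driven to zero by the projection, while the right-side forces settle well inside their assumed friction bounds.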
Control Allocation- A Survey <s> Mobile robots <s> Active degrees of freedom provide a robotic vehicle the ability to enhance its performance in all terrain conditions. While active suspension systems are now commonplace in on-road vehicles, their application to off-road terrains has been little investigated. A fundamental component of such an application is the need to translate desired body motion commands into actuator values through the use of proprioceptive algorithms. The diverse nature of the terrains that might be encountered places variable demands upon the operation of the vehicle. This entails the potential use of a diverse set of algorithms designed to optimize mobility and performance. This paper presents a cohesive control scheme designed for the operation of an autonomous vehicle under all conditions. The ideas presented have been tested in simulation, and some have been used extensively in the field. <s> BIB001 </s> Control Allocation- A Survey <s> Mobile robots <s> This paper proposes a new robust control allocation method for redundantly actuated variable structure systems. The original system is divided into two subsystems by factorizing the input matrix. Then all system uncertainties can be treated with the virtual control input. The control strategy has two steps. In the first step, the virtual control input is found with sliding mode control with perturbation estimation (SMCPE), considering system uncertainties; in the second step, the real control input is calculated using a quadratic optimization method. The simulation is conducted with redundantly actuated leg-wheel hybrid structures which have 4 actuators and 3 DOFs. The state-space model is derived from the dynamic equation with kinematic and dynamic constraints. The effect of kinematic error is also studied. <s> BIB002 </s> Control Allocation- A Survey <s> Mobile robots <s> An eccentric centroid is one of the most serious problems in omni-directional mobile robot design.
The cause is limited manufacturing precision. Most current control approaches are based on ideal models with a regular centroid, so extra energy is needed to compensate for the difference between the ideal and actual centroid. In this paper, we propose an approach that accounts for an eccentric centroid, analyze the distribution of traction on each wheel under this condition, and obtain an optimized solution set for this traction with a linear programming algorithm. We conducted simulation experiments to demonstrate that the algorithm is effective in reducing the extra energy cost. <s> BIB003
Traction control for mobile robots that operate off-road is considered in BIB001 . Based on the geometry of the problem, several simple (unconstrained) closed-form, computationally efficient control allocation strategies for wheeled and legged mobile robots with active suspension are derived and compared in BIB001 , while pseudo-inverse type control allocation strategies are studied in BIB002 . A linear programming solution to control allocation for wheeled mobile robots is presented in BIB003 . Nonlinearities, uncertainty and additional complexity in these approaches are to a large extent handled in the low-level controllers at each actuator/effector.
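A minimal sketch of the pseudo-inverse type allocation mentioned above, for a hypothetical over-actuated four-omni-wheel robot (the geometry and all numbers are assumptions for illustration):

```python
import numpy as np

# Minimum-norm pseudo-inverse allocation for a four-omni-wheel robot
# (hypothetical geometry: wheel drive directions at 45, 135, 225 and
# 315 degrees, with a common yaw moment arm R about the center).
R = 0.2
angles = np.deg2rad([45.0, 135.0, 225.0, 315.0])
B = np.vstack([
    np.cos(angles),      # each wheel's contribution to body-frame Fx
    np.sin(angles),      # contribution to body-frame Fy
    np.full(4, R),       # contribution to yaw moment Mz
])

v = np.array([5.0, 3.0, 0.4])   # desired [Fx, Fy, Mz]
u = np.linalg.pinv(B) @ v       # minimum-norm wheel forces (4 values)
print(u, B @ u)                 # B @ u reproduces v
```

Because the 3x4 effectiveness matrix has a one-dimensional null space, the pseudo-inverse picks the exact solution of smallest total wheel effort; constraints and nonlinearities would be handled by the low-level wheel controllers, as the survey text notes.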
Control Allocation- A Survey <s> Other application areas <s> Mechanisms interacting with their environments that possess complete contact force controllability such as multifingered hands and walking vehicles 2 are considered in this article. In these systems, the redundancy in actuation can be used to optimize the force distribution characteristics. The resulting optimization problems can be highly nonlinear. Here, the redundancy in actuation is characterized using geometric reasoning which leads to simplifications in the formulation of the optimization problems. Next, advanced polynomial continuation techniques are adapted to solve for the global optimum of an important nonlinear optimization problem for the case of four frictional contacts. The algorithms developed here are not suited for real-time implementation. However, these algorithms can be used in off-line force planning, and they can be used to develop look-up tables for certain applications. The outputs of these algorithms can also be used as a baseline to evaluate the effectiveness of sub-optimal schemes. <s> BIB001 </s> Control Allocation- A Survey <s> Other application areas <s> For a walking robot with high constant body speed, the dynamic effects of the legs on the transfer phase are dominant compared with other factors. This paper presents a new force distribution algorithm to maximize walkable terrain without slipping considering the dynamic effects of the legs on the transfer phase. Maximizing the walkable terrain means having the capability of walking on more slippery ground under the same constraint, namely constant body speed. A simple force distribution algorithm applied to the proposed walking model with a pantograph leg shows an improvement in the capability of preventing foot-slippage compared with one using a pseudo-inverse method. 
<s> BIB002 </s> Control Allocation- A Survey <s> Other application areas <s> The control of real time, nonlinear, large-scale systems - systems with large aggregations of sensors and actuators - is seldom explored in actual operating physical systems. In such many-element systems, control issues such as actuation allocation, fusion of sensor data, and system identification emerge as challenging problems for large-scale system control. In this work, constrained optimization is used to solve these problems as applied to the control of an object moving system with 1,152 actuators and 32,000 sensors with a 2 ms control loop time. Solutions for allocating actuation among large numbers of actuators using hierarchical constrained optimization and fusing the output of many sensors into a small number of final measurements under tight real time constraints have been developed. This paper demonstrate that hyper-redundant systems are capable of system self-identification, and that constrained optimization can effectively solve problems associated with control of many-element systems. <s> BIB003 </s> Control Allocation- A Survey <s> Other application areas <s> This paper investigates the problem of closed-loop control of a small number of parameters by allocating actuation in a system with many binary degrees of freedom, using an actual large-scale air-jet table as an example. In this system, the desired force and torque are produced by a large number of spatially distributed binary air jets directing individual forces on an object. Various algorithms for solving the force allocation problem-determining the appropriate valve states in this hyper-redundant system-are investigated. The algorithms range from discrete optimal search to continuous constrained optimization to a hybrid hierarchical approach that can be distributed. 
The latter consists of using the continuous optimal solutions to recursively break the large optimization problems into smaller problems that can be solved using optimal search methods or precomputed lookup tables. A tradeoff between computation time and allocation error was found. The optimal algorithms yield low errors but the time is exponential in the number of actuators, while the continuous solutions execute quickly but yield larger errors. The hybrid hierarchical optimal algorithms give the best compromise between these conflicting goals, and their applicability spans the full range of degrees of freedom from a few to many thousands. These hierarchical algorithms are useful in many such highly redundant systems. <s> BIB004 </s> Control Allocation- A Survey <s> Other application areas <s> A general hierarchical methodology for control distribution in highly redundant system is presented. The new method makes use of distribution functions to approximate the feasible solution set and to keep in check the "curse of dimensionality". To improve the performance of the distribution function approach a hierarchical approach is proposed which decomposes a large scale control distribution problem in to many small scale control distribution problems to compromise the need for real-time computation against optimality. The main advantage of the proposed hierarchical approach is the de-coupling of many small scale problems from each other. The convergence and accuracy of the proposed method are demonstrated by numerical studies. <s> BIB005 </s> Control Allocation- A Survey <s> Other application areas <s> In this paper, an energetic swarm controller is developed that controls the swarm temperature, swarm centre position, and swarm potential energy. A sliding control approach is combined with a control allocation process to solve the overactuated control problem. 
The control allocation problem is solved using nonlinear programming software which allows the optimization problem to be solved with input saturation constraints. Furthermore, a low level trajectory controller based on dynamic feedback linearization is developed in order to improve the trajectory tracking performance of the individual swarm members. Application to a group of wheeled mobile robots is used to demonstrate the approach. Together, these results allow energetic swarm controllers to be implemented on wheeled mobile robot (WMR) systems with uncertainty and input saturation constraints. <s> BIB006 </s> Control Allocation- A Survey <s> Other application areas <s> This communication presents and justifies ideas related to motion control of snake robots that are currently the subject of ongoing investigations by the authors. In particular, we highlight requirements for intelligent and efficient snake robot locomotion in unstructured environments, and subsequently we present two new design concepts for snake robots that comply with these requirements. The first design concept is an approach for sensing environment contact forces, which is based on measuring the joint constraint forces at the connection between the links of the snake robot. The second design concept involves allowing the cylindrical surface of each link of a snake robot to rotate by a motor inside the link in order to induce propulsive forces on the robot from its environments. The paper details the advantages of the proposed design concepts over previous snake robot designs. <s> BIB007 </s> Control Allocation- A Survey <s> Other application areas <s> Research on biomimetic robotic fish has been undertaken for more than a decade. Various robotic fish prototypes have been developed around the world. 
Although considerable research efforts have been devoted to understanding the underlying mechanism of fish swimming and construction of fish-like swimming machines, robotic fish have largely remained laboratory curiosities. This paper presents a robotic fish that is designed for application in real-world scenarios. The robotic fish adopts a rigid torpedo-shaped body for the housing of power, electronics, and payload. A compact parallel four-bar mechanism is designed for propulsion and maneuvering. Based on the kinematic analysis of the tail mechanism, the motion control algorithm of joints is presented. The swimming performance of the robotic fish is investigated experimentally. The swimming speed of the robotic fish can reach 1.36 m/s. The turning radius is 1.75 m. Powered by the onboard battery, the robotic fish can operate for up to 20 h. Moreover, the advantages of the biomimetic propulsion approach are shown by comparing the power efficiency and turning performance of the robotic fish with that of a screw-propelled underwater vehicle. The application of the robotic fish in a real-world probe experiment is also presented. © 2010 Wiley Periodicals, Inc. © 2011 Wiley Periodicals, Inc. <s> BIB008
Legged walking robots require coordination of the dynamic or periodic motion of each leg. The force-distributing control allocation algorithm should take into account energy efficiency and the contact friction between leg and ground, which is non-zero only for a fraction of each cycle, e.g. BIB001 . A nonlinear programming approach, simplified by a pseudo-inverse calculation of the initial solution guess, is presented in BIB002 . The control effectiveness matrix B is time-varying due to the cyclic contact pattern of the walk. Control allocation has also been used in the development of flapping-wing micro air vehicle control methods. In , they describe a method to control five degrees of freedom using two physical actuators that drive flapping wings. Six variables parameterize the periodic motion of the two independently flapping wings, which in turn control five degrees of freedom. Multi-agent swarms (like formations of mobile robots) are considered in a fairly general context in BIB006 , where control allocation strategies based on pseudo-inverses and nonlinear programming are investigated. The control allocation problem for a large-scale distributed array of air-jet actuators is studied in BIB003 BIB004 . In order to achieve the computational efficiency needed for real-time implementation at high update frequencies with thousands of independent actuators, they empirically compare optimal solutions with approximate solutions that depend on a hierarchical decomposition into actuator groups. A similar approach was taken in BIB005 , where hierarchical decomposition and re-parameterization using basis functions reduce the computational complexity of the control allocation computations. These methods also allow parallelization, so the algorithm can be distributed over multiple processors. Over-actuated mechanical designs are increasing in popularity in the automotive, aerospace and maritime industries, and not only humanoid walking robots but also emerging concepts like robotic snakes (e.g.
BIB007 ) and robotic fish (e.g. BIB008 ) with highly redundant and over-actuated bio-inspired locomotion mechanisms will certainly benefit from further research on control allocation.
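The hierarchical decomposition idea for very large actuator arrays can be sketched as a two-level scheme; the line geometry, group sizes and uniform bottom-level spreading below are assumptions for illustration, not the algorithms of BIB003 BIB004 BIB005 :

```python
import numpy as np

# Two-level hierarchical allocation sketch for a large array of small
# force actuators (e.g. an air-jet table).  The array is split into
# groups; the top level allocates force and torque among group
# centroids, and each group spreads its share uniformly over its
# members.  Geometry and group sizes are illustrative.
rng = np.random.default_rng(0)
n_groups, per_group = 8, 100
centers = np.linspace(-1.0, 1.0, n_groups)   # group centroid x-positions
# actuator x-positions, jittered around each centroid; each jet pushes +y
positions = np.concatenate(
    [c + 0.01 * rng.standard_normal(per_group) for c in centers])

def allocate_hierarchical(fy, tz):
    # Top level: solve for group forces g with sum(g)=fy, sum(c*g)=tz
    B_top = np.vstack([np.ones(n_groups), centers])
    g = np.linalg.pinv(B_top) @ np.array([fy, tz])
    # Bottom level: spread each group's force uniformly over its jets
    return np.repeat(g / per_group, per_group)

u = allocate_hierarchical(fy=10.0, tz=2.0)
print(u.sum(), (positions * u).sum())   # total force exact, torque approx
```

The top level only ever solves a problem of size `n_groups`, regardless of how many thousands of jets sit below it, which is the computational point of the hierarchical decomposition; the price is a small torque error from approximating each group by its centroid.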
Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Bayesian Reasoning as a Serious, Real World Problem <s> As both the number and cost of clinical laboratory tests continue to increase at an accelerating rate, physicians are faced with the task of comprehending and acting on a rising flood tide of information. We conducted a small survey to obtain some idea of how physicians do, in fact, interpret a laboratory result. Methods: We asked 20 house officers, 20 fourth-year medical students and 20 attending physicians, selected in 67 consecutive hallway encounters at four Harvard Medical School teaching hospitals, the following question: "If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of . . . <s> BIB001 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Bayesian Reasoning as a Serious, Real World Problem <s> Previous research has demonstrated that Bayesian reasoning performance is improved if uncertainty information is presented as natural frequencies rather than single-event probabilities. A questionnaire study of 342 college students replicated this effect but also found that the performance-boosting benefits of the natural frequency presentation occurred primarily for participants who scored high in numeracy. This finding suggests that even comprehension and manipulation of natural frequencies requires a certain threshold of numeracy abilities, and that the beneficial effects of natural frequency presentation may not be as general as previously believed. <s> BIB002
Traditional research on people's abilities to engage in Bayesian reasoning uses the following protocol: a person is presented with a description of a situation in which Bayesian reasoning is relevant, the necessary numerical information for Bayesian calculations, and then a request that the participant calculate the posterior probability (expressed in terms of the relevant situation). For example, one such task (adapted from BIB002 ) is as follows: The serum test screens pregnant women for babies with Down's syndrome. The test is a very good one, but not perfect. Roughly 5% of babies have Down's syndrome. If a baby has Down's syndrome, there is an 80% chance that the result will be positive. If the baby is unaffected, there is still a 20% chance that the result will be positive. A pregnant woman has been tested and the result is positive. What is the chance that her baby actually has Down's syndrome? Undergraduates, medical students, and even physicians do quite poorly on this type of Bayesian reasoning task (e.g., BIB001 ), including when it is in a medical testing context such as the above example. Such failures of Bayesian reasoning suggest potentially tragic consequences for medical decision making, as well as any other real world topics that involve similar calculations. Interestingly, evaluations of how and why people do poorly in Bayesian reasoning have changed over the years. In the early days of research on Bayesian reasoning, the dominant view by researchers was that humans were approximating Bayes' theorem, but erred in being far too conservative in their estimates (e.g., . That is, people did not utilize the new information as much as they should, relying too much on the base rate information. Later work, however, shifted to the idea that the dominant error was in the opposite direction: that people generally erred in relying too much on the new information and neglecting the base rate, either partially or entirely (e.g., Kahneman, 1974, 1982).
This later approach is one of the better-known positions within what is known as the heuristics and biases paradigm, within which base rate neglect was considered so strong and pervasive that at one point it was asserted: "In his evaluation of evidence, man is apparently not a conservative Bayesian: he is not Bayesian at all" (Kahneman and Tversky, 1972, p. 450).
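For reference, applying Bayes' theorem to the numbers quoted in the screening task above gives a posterior far below the test's 80% hit rate:

```python
# Worked posterior for the screening task quoted above:
# P(D) = 0.05, P(+|D) = 0.80, P(+|not D) = 0.20 (numbers as given in the text).
p_d, p_pos_d, p_pos_nd = 0.05, 0.80, 0.20
posterior = (p_d * p_pos_d) / (p_d * p_pos_d + (1 - p_d) * p_pos_nd)
print(round(posterior, 3))  # 0.174
```

The gap between this roughly 17% posterior and intuitive answers near 80% is exactly the base rate neglect that the heuristics and biases literature describes.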
Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Natural Sampling and Frequencies in Bayesian Reasoning <s> Contributions to Mathematical Psychology, Psychometrics and Methodology presents the most esteemed research findings of the 22nd European Mathematical Psychology Group meeting in Vienna, Austria, September 1991. The selection of work appearing in this volume contains not only contributions to mathematical psychology in the narrow sense, but also work in psychometrics and methodology, with the common element of all contributions being their attempt to deal with scientific problems in psychology with rigorous mathematical reasoning. The book contains 28 chapters divided into five parts: Perception, Learning, and Cognition; Choice and Reaction Time; Social Systems; Measurement and Psychometrics; and Methodology. It is of interest to all mathematical psychologists, educational psychologists, and graduate students in these areas. <s> BIB001 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Natural Sampling and Frequencies in Bayesian Reasoning <s> Is the mind, by design, predisposed against performing Bayesian inference? Previous research on base rate neglect suggests that the mind lacks the appropriate cognitive algorithms. However, any claim against the existence of an algorithm, Bayesian or otherwise, is impossible to evaluate unless one specifies the information format in which it is designed to operate. The authors show that Bayesian algorithms are computationally simpler in frequency formats than in the probability formats used in previous research. Frequency formats correspond to the sequential way information is acquired in natural sampling, from animal foraging to neural networks.
By analyzing several thousand solutions to Bayesian problems, the authors found that when information was presented in frequency formats, statistically naive participants derived up to 50% of all inferences by Bayesian algorithms. Non-Bayesian algorithms included simple versions of Fisherian and Neyman-Pearsonian inference. Is the mind, by design, predisposed against performing Bayesian inference? The classical probabilists of the Enlightenment, including Condorcet, Poisson, and Laplace, equated probability theory with the common sense of educated people, who were known then as "hommes eclaires." Laplace (1814/ 1951) declared that "the theory of probability is at bottom nothing more than good sense reduced to a calculus which evaluates that which good minds know by a sort of instinct, without being able to explain how with precision" (p. 196). The available mathematical tools, in particular the theorems of Bayes and Bernoulli, were seen as descriptions of actual human judgment (Daston, 1981,1988). However, the years of political upheaval during the French Revolution prompted Laplace, unlike earlier writers such as Condorcet, to issue repeated disclaimers that probability theory, because of the interference of passion and desire, could not account for all relevant factors in human judgment. The Enlightenment view—that the laws of probability are the laws of the mind—moderated as it was through the French Revolution, had a profound influence on 19th- and 20th-century science. This view became the starting point for seminal contributions to mathematics, as when George Boole <s> BIB002 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Natural Sampling and Frequencies in Bayesian Reasoning <s> An investigation was made of the role played by verbal structure in the problems used to study the base-rate fallacy, which has traditionally been attributed to the role of heuristics (e.g. causality, specificity). 
It was hypothesized that elements of the verbal form of text problems led to a misunderstanding of the question or the specific information, rendering obscure the independence of the sets of data (specific information is obtained independently from the base rate). Nine texts were presented to various groups of subjects: four were taken from Tversky and Kahneman (1980) and used as controls; five were obtained by modifying the verbal form of the original in order to reveal or conceal the links between the sets of data. The percentage of base-rate fallacies was greatly reduced with texts in which the independence of the data was clear, regardless of the causality and specificity of the information they contained (which was not changed). This result suggests that there is a need to consider the rul... <s> BIB003 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Natural Sampling and Frequencies in Bayesian Reasoning <s> G. Gigerenzer and U. Hoffrage (1995) claimed that Bayesian inference problems, which have been notoriously difficult for laypeople to solve using base rates, hit rates, and false-alarm rates, become computationally simpler when information is presented with frequencies based on natural sampling. They made an evolutionary argument for the improved performance. The authors of the present article show that performance can improve with either probabilities or frequencies, depending on the rareness of the events and the type of information presented. When events are rare, probabilities are more difficult to understand than frequencies (i.e., 5 out of 1,000 vs. .005.). Furthermore, when the information is presented as joint and marginal events, nested sets become more apparent. Frequencies based on natural sampling have these desirable properties. 
The authors agree with Gigerenzer and Hoffrage that frequencies can improve Bayesian reasoning, but they attribute that improvement to the use of mental models that involve elements of nested sets. <s> BIB004 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Natural Sampling and Frequencies in Bayesian Reasoning <s> This article outlines a theory of naive probability. According to the theory, individuals who are unfamiliar with the probability calculus can infer the probabilities of events in an extensional way: They construct mental models of what is true in the various possibilities. Each model represents an equiprobable alternative unless individuals have beliefs to the contrary, in which case some models will have higher probabilities than others. The probability of an event depends on the proportion of models in which it occurs. The theory predicts several phenomena of reasoning about absolute probabilities, including typical biases. It correctly predicts certain cognitive illusions in inferences about relative probabilities. It accommodates reasoning based on numerical premises, and it explains how naive reasoners can infer posterior probabilities without relying on Bayes's theorem. Finally, it dispels some common misconceptions of probabilistic reasoning. The defence were permitted to lead evidence of the Bayes Theorem in connection with the statistical evaluation of the DNA profile. Although their Lordships expressed no concluded view on the matter, they had very grave doubts as to whether that evidence was properly admissible . .. 
their Lordships had never heard it suggested that a jury <s> BIB005 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Natural Sampling and Frequencies in Bayesian Reasoning <s> Gigerenzer has argued that it may be inappropriate to characterize some of the biases identified by Kahneman and Tversky as “errors” or “fallacies,” for three reasons: (a) according to frequentists, no norms are appropriate for single-case judgments because single-case probabilities are meaningless; (b) even if single-case probabilities make sense, they need not be governed by statistical norms because such norms are “content-blind” and can conflict with conversational norms; (c) conflicting statistical norms exist. I try to clear up certain misunderstandings that may have hindered progress in this debate. Gigerenzer’s main point turns out to be far less extreme than the position of “normative agnosticism” attributed to him by Kahneman and Tversky: Gigerenzer is not denying that norms appropriate for single-case judgments exist, but is rather complaining that the existence and the nature of such norms have been dogmatically assumed by the heuristics and biases literature. In response to this complaint I argue that single-case probabilities (a) make sense and (b) are governed by probabilistic norms, and that (c) the existence of conflicting statistical norms may be less widespread and less damaging than Gigerenzer thinks. q 2000 Elsevier Science B.V. All rights reserved. <s> BIB006 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Natural Sampling and Frequencies in Bayesian Reasoning <s> Three experiments examined people’s ability to incorporate base rate information when judging posterior probabilities. Specifically, we tested the (Cosmides, L., & Tooby, J. (1996). Are humans good intuitive statisticians after all? 
Rethinking some conclusions from the literature on judgement under uncertainty. Cognition, 58, 1-73) conclusion that people’s reasoning appears to follow Bayesian principles when they are presented with information in a frequency format, but not when information is presented as one-case probabilities. First, we found that frequency formats were not generally associated with better performance than probability formats unless they were presented in a manner which facilitated construction of a set inclusion mental model. Second, we demonstrated that the use of frequency information may promote biases in the weighting of information. When participants are asked to express their judgements in frequency rather than probability format, they were more likely to produce the base rate as their answer, ignoring diagnostic evidence. © 2000 Elsevier Science B.V. All rights reserved. <s> BIB007 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Natural Sampling and Frequencies in Bayesian Reasoning <s> In the psychology of thinking, little thought is given to what constitutes good thinking. Instead, normative solutions to problems have been accepted at face value, thereby determining what counts as a reasoning fallacy. I applaud Vranas (Cognition 76 (2000) 179) for thinking seriously about norms. I do, however, disagree with his attempt to provide post hoc justifications for supposed reasoning fallacies in terms of ‘content-neutral’ norms. Norms need to be constructed for a specific situation, not imposed upon it in a content-blind way. The reason is that content-blind norms disregard relevant structural properties of the given situation, including polysemy, reference classes, and sampling. I also show that content-blind norms can, unwittingly, lead to double standards: the norm in one problem is the fallacy in the next. The alternative to content-blind norms is not no norms, but rather carefully designed norms.
© 2001 Elsevier Science B.V. All rights reserved. <s> BIB008 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Natural Sampling and Frequencies in Bayesian Reasoning <s> Is the human mind inherently unable to reason probabilistically, or is it able to do so only when problems tap into a module for reasoning about natural frequencies? We suggest an alternative possibility: naive individuals are able to reason probabilistically when they can rely on a representation of subsets of chances or frequencies. We predicted that naive individuals solve conditional probability problems if they can infer conditional probabilities from the subset relations in their representation of the problems, and if the question put to them makes it easy to consider the appropriate subsets. The results of seven studies corroborated these predictions: when the form of the question and the structure of the problem were framed so as to activate intuitive principles based on subset relations, naive individuals solved problems, whether they were stated in terms of probabilities or frequencies. Otherwise, they failed with both sorts of information. The results contravene the frequentist hypothesis and the evolutionary account of probabilistic reasoning. <s> BIB009 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Natural Sampling and Frequencies in Bayesian Reasoning <s> The mental-model account of naive probabilistic reasoning by P. N. Johnson-Laird, P. Legrenzi, V. Girotto, M. S. Legrenzi, and J.-P. Caverni (1999) provides an opportunity to clarify several similarities and differences between it and ecological rationality (frequentist) accounts. First, ambiguities in the meaning of Bayesian reasoning can lead to disagreements and inappropriate arguments.
Second, 2 conflated effects of using natural frequencies are noticed but not actually tested separately because of an artificial dissociation of frequency representations and natural sampling. Third, similarities are noted between the subset principle and the principle of natural sampling. Finally, some potentially misleading portrayals of the role of evolutionary factors in psychology are corrected. Mental-model theory, rather than better explaining probabilistic reasoning, may be able to use frequency representations as a key element in clarifying its own ambiguous constructs. <s> BIB010 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Natural Sampling and Frequencies in Bayesian Reasoning <s> A good representation can be crucial for finding the solution to a problem. Gigerenzer and Hoffrage (Psychol. Rev. 102 (1995) 684; Psychol. Rev. 106 (1999) 425) have shown that representations in terms of natural frequencies, rather than conditional probabilities, facilitate the computation of a cause's probability (or frequency) given an effect – a problem that is usually referred to as Bayesian reasoning. They also have shown that normalized frequencies – which are not natural frequencies – do not lead to computational facilitation, and consequently, do not enhance people's performance. Here, we correct two misconceptions propagated in recent work (Cognition 77 (2000) 197; Cognition 78 (2001) 247; Psychol. Rev. 106 (1999) 62; Organ. Behav. Hum. Decision Process. 82 (2000) 217): normalized frequencies have been mistaken for natural frequencies and, as a consequence, “nested sets” and the “subset principle” have been proposed as new explanations. These new terms, however, are nothing more than vague labels for the basic properties of natural frequencies. 
<s> BIB011 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Natural Sampling and Frequencies in Bayesian Reasoning <s> Abstract Do individuals unfamiliar with probability and statistics need a specific type of data in order to draw correct inferences about uncertain events? Girotto and Gonzalez (Cognition 78 (2001) 247) showed that naive individuals solve frequency as well as probability problems, when they reason extensionally, in particular when probabilities are represented by numbers of chances. Hoffrage, Gigerenzer, Krauss, and Martignon (Cognition 84 (2002) 343) argued that numbers of chances are natural frequencies disguised as probabilities, though lacking the properties of true probabilities. They concluded that we failed to demonstrate that naive individuals can deal with true probabilities as opposed to natural frequencies. In this paper, we demonstrate that numbers of chances do represent probabilities, and that naive individuals do not confuse numbers of chances with frequencies. We conclude that there is no evidence for the claim that natural frequencies have a special cognitive status, and the evolutionary argument that the human mind is unable to deal with probabilities. <s> BIB012 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Natural Sampling and Frequencies in Bayesian Reasoning <s> Cosmides and Tooby (1996) increased performance using a frequency rather than probability frame on a problem known to elicit base-rate neglect. Analogously, Gigerenzer (1994) claimed that the conjunction fallacy disappears when formulated in terms of frequency rather than the more usual single-event probability. These authors conclude that a module or algorithm of mind exists that is able to compute with frequencies but not probabilities. 
The studies reported here found that base-rate neglect could also be reduced using a clearly stated single-event probability frame and by using a diagram that clarified the critical nested-set relations of the problem; that the frequency advantage could be eliminated in the conjunction fallacy by separating the critical statements so that their nested relation was opaque; and that the large effect of frequency framing on the two problems studied is not stable. Facilitation via frequency is a result of clarifying the probabilistic interpretation of the problem and inducing a representation in terms of instances, a form that makes the nested-set relations amongst the problem components transparent. <s> BIB013 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Natural Sampling and Frequencies in Bayesian Reasoning <s> The idea that naturally sampled frequencies facilitate performance in statistical reasoning tasks because they are a cognitively privileged representational format has been challenged by findings that similarly structured numbers presented as chances similarly facilitate performance, on the basis of the claim that these are technically single-event probabilities. A crucial opinion, however, is that of the research participants, who possibly interpret chances as de facto frequencies. A series of experiments here indicate that not only is performance improved by clearly presented natural frequencies, rather than chances phrasing, but also that participants who interpreted chances as frequencies, rather than as probabilities, were consistently better at statistical reasoning. This result was found across different variations of information presentation and across different populations. 
<s> BIB014 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Natural Sampling and Frequencies in Bayesian Reasoning <s> SUMMARY In an ongoing debate between two visions of statistical reasoning competency, ecological rationality proponents claim that pictorial representations help tap into the frequency coding mechanisms of the mind, whereas nested sets proponents argue that pictorial representations simply help one to appreciate general subset relationships. Advancing this knowledge into applied areas is hampered by this present disagreement. A series of experiments used Bayesian reasoning problems with different pictorial representations (Venn circles, iconic symbols and Venn circles with dots) to better understand influences on performance across these representation types. Results with various static and interactive presentations of pictures all indicate a consistent advantage for iconic representations. These results are more consistent with an ecological rationality view of how these pictorial representations achieve facilitation in statistical task performance and provide more specific guidance for applied uses. Copyright © 2008 John Wiley & Sons, Ltd. <s> BIB015
A seminal paper in terms of improving Bayesian reasoning and the current issues revolving around those improvements is BIB002 . This paper described a structure for presenting information in such a way that it greatly helped people reach correct Bayesian conclusions. This structure is one of whole-number frequencies in a natural sampling framework. (This original paper used the unfortunately ambiguous label of "frequency format" for this structure, which has led to some confusion; see BIB008 Hoffrage, 1999, 2007; BIB004 BIB006 BIB008 .) There are thus two aspects of this structure: (a) the use of frequencies as a numerical format, and (b) the use of a particular structure, called natural sampling, for the relationships between the numbers. The rationale for both of these aspects is similar: they map onto the type of information which the human mind generally encounters in the natural environment, both currently and over evolutionary history. For this reason, the Gigerenzer and Hoffrage position is often described as the ecological rationality approach. It can be challenging to dissociate natural sampling from frequencies. When considering the occurrence of objects or events in the real world, that experience tends to strongly imply frequency counts as the format in which that information would be encoded. The format of natural sampling, however, is actually the online categorization of that information into groups, including groups which can be subsets of one another. Figure 1 shows the previously given Bayesian reasoning task information (about a Down's syndrome serum test) as naturally sampled frequencies. In this case we imagine (or recall) 100 experiences with this test, and five of those experiences were with a baby who had Down's syndrome (i.e., a 5% base rate).
Those five experiences can be further categorized by when the test came out positive (4 times; 4 out of 5 is 80%), and the 95 cases of babies without Down's syndrome can be similarly categorized by the test results (19 false positive results; 19 out of 95 is 20%). This nested categorization structure creates numbers in the lower-most row for which the base-rates (from the initial categorization groups) are automatically taken into account already. This, in turn, makes the calculations for Bayesian reasoning less computationally difficult. (Specifically, the probabilistic version of Bayes' theorem is p(H|D) = p(H)p(D|H)/[p(H)p(D|H) + p(∼H)p(D|∼H)], with D = new data and H = the hypothesis, whereas with natural sampling this equation can be simplified to p(H|D) = d&h/[d&h + d&∼h], with d&h = frequency of data and the hypothesis and d&∼h = frequency of data and the null hypothesis. Also note that changing the natural frequency numbers to standardized formats, such as percentages, destroys the nested categorizations, and thus the computational simplification, of natural sampling.) Thus, whereas it is pretty easy to create numerical frequencies which are not in a natural sampling framework, it is difficult to construct a natural sampling framework without reference to frequencies.
FIGURE 1 | An illustration of a natural sampling framework: the total population (100) is categorized into groups (5/95) and those groups are categorized into parallel sub-groups below that.
The consequences of confusion about how natural sampling and numerical frequencies are related to each other have led to a number of claimed novel discoveries, which are observed from the other side as "re-inventions." One example of this is that the principles of natural sampling have been co-opted as something new and different. These situations require some clarification, which hopefully can be done in a relatively concise manner.
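As a concrete check on the two versions of the calculation, here is a minimal sketch in Python (the function names are illustrative, not from any cited paper) applying both forms to the Down's syndrome test figures in the text:

```python
# Contrast the probabilistic form of Bayes' theorem with the
# simplified natural-frequency computation, using the Down's
# syndrome test example: 5% base rate, 80% hit rate, 20% false
# positives.

def bayes_probabilistic(p_h, p_d_given_h, p_d_given_not_h):
    """p(H|D) = p(H)p(D|H) / [p(H)p(D|H) + p(~H)p(D|~H)]"""
    numerator = p_h * p_d_given_h
    return numerator / (numerator + (1 - p_h) * p_d_given_not_h)

def bayes_natural_frequencies(d_and_h, d_and_not_h):
    """p(H|D) = d&h / [d&h + d&~h]; base rates are already baked in."""
    return d_and_h / (d_and_h + d_and_not_h)

# Probabilistic version: three probabilities must be combined.
posterior_prob = bayes_probabilistic(0.05, 0.80, 0.20)

# Natural-frequency version: of 100 imagined cases, 4 true
# positives (from the 5 with Down's) and 19 false positives
# (from the 95 without).
posterior_freq = bayes_natural_frequencies(4, 19)

print(round(posterior_prob, 4))  # 0.1739
print(round(posterior_freq, 4))  # 0.1739
```

Both forms yield the same posterior (4/23 ≈ 0.174); the natural-frequency version simply requires fewer operations, because the base rate is already folded into the two frequencies in the bottom row of the nested structure.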
Subsequent to the description and application of a natural sampling structure in the original BIB002 paper (which explicitly drew on the work by BIB001 in developing the natural sampling idea), the basic structure of natural sampling has been re-invented at least four times in the literature. Each time, the new incarnation is described at a level of abstraction which allows one to consider the structure independent of frequencies (or any other numerical format), but the natural sampling structure is unmistakable: (a) BIB005 reintroduced the basic relevant principle of natural sampling as their "subset principle," implying that ecological rationality researchers somehow missed this property: "The real burden of the findings of Gigerenzer and Hoffrage (1995, p. 81) is that the mere use of frequencies does not constitute what they call a 'natural sample.' Whatever its provenance, as they hint, a natural sample is one in which the subset relations can be used to infer the posterior probability, and so reasoners do not have to use Bayes' theorem." Note also the confusion in this passage between the narrow definition of Bayesian reasoning as using Bayes' theorem and the more general, psychologically relevant definition of Bayesian reasoning we clarified earlier in this paper. BIB009 continue from this point in their use of the "subset principle," which is simply an abstraction of the natural sampling structure; (b) BIB007 proposed a process that involves "cueing of a set inclusion mental model," rather than a natural sampling structure; (c) BIB003 and created the label of "partitive formulation" to describe the natural sampling structure; and (d) BIB013 use the term "nested-set relations" rather than natural sampling, following .
As this last re-invention noted, earlier researchers did discover that using frequencies sometimes improved performance (e.g., in their work on the conjunction fallacy), but they did not actually elaborate this observation into a theory; they only speculated that frequencies somehow helped people represent class inclusion. Dissociating the natural sampling framework, claiming that it is something else, and then looking at the effects of numerical frequencies by themselves (without natural sampling or with malformed natural sampling) has allowed for all sorts of methodological and conceptual shenanigans. It is not interesting, either methodologically or theoretically, that making Bayesian reasoning tasks harder (by adding steps, using wordings which confuse people, switching numerical formats within the same problem) can decrease performance (see BIB010 BIB014 BIB015 Brase, ,b, 2014 for further elaboration). Indeed, it is generally difficult to make strong theoretical claims based on people failing to accomplish a task, as there are usually many different possible reasons for failure. In addition to multiple attempts to co-opt the concept of natural sampling, there has been a notable attempt to co-opt the numerical format of frequencies, claiming that the facilitative effect of using frequencies is not actually about the frequencies themselves. BIB009 asserted that people actually can be good at Bayesian reasoning when given only probabilistic information. The probabilities used in this research, however, are of a peculiar type stated in whole-number terms. For example: Mary is tested now [for a disease]. Out of the entire 10 chances, Mary has ___ chances of showing the symptom [of the disease]; among these chances, ___ chances will be associated with the disease. (p. 274) How many times was Mary tested? Once or ten times?
If tested once, there is one "chance" for a result; if tested 10 times (or even if 10 hypothetical times are envisioned), then this is an example of frequency information. It seems odd to say that subjects are truly reasoning about unique events and that they are not using frequencies, when the probabilities are stated as de facto frequencies (i.e., 3 out of 10). Although BIB009 claim that "chances" refer to the probability of a single event, it can just as easily be argued that this format yields better reasoning because it manages, in the view of the research participants, to tap into a form of natural frequency representation. This alternative interpretation was immediately pointed out (BIB010 BIB011 ), but advocates of the heuristics and biases approach were not swayed BIB012 . In order to adjudicate this issue, BIB014 gave participants Bayesian reasoning tasks based on those used by BIB009 . Some of these problems used the natural sampling-like chances wording. Other versions of this problem used either percentages (not a natural sampling format) or a (non-chances) frequency wording that was in a natural sampling format. After solving these problems, the participants were asked how they had thought about the information and reached their answer to the problem. First of all, contrary to the results of BIB009 , it was found that frequencies in a natural sampling structure actually led to superior performance over "chances" in a natural sampling structure. (The effect size of this result is actually similar to the Girotto and Gonzalez (2001) results, which were statistically underpowered due to small sample sizes.) More notably, though, the participants who interpreted the ambiguous "chances" as referring to frequencies performed better than those who interpreted the same information as probabilities.
This result cuts through any issues about the computational differences between natural sampling frameworks versus normalized information, because the presented information is exactly the same in these conditions and requires identical computations; only the participants' understanding of that information is different.
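The computational identity at issue can be made explicit with a small sketch. The numbers below are hypothetical fill-ins for the blanks in the Mary problem (the original task leaves them to be completed); the point is that the "chances" reading and the frequency reading lead to exactly the same arithmetic:

```python
# The same numbers read two ways: as "chances" for a single event
# (Mary's one test) or as frequencies across 10 imagined cases.
# Hypothetical fill-in values: of 10 total chances, 4 show the
# symptom, and 1 of those 4 is associated with the disease.
total_chances = 10
symptom = 4
symptom_and_disease = 1

# Single-event ("chances") reading: p(disease | symptom) for
# Mary's one test, computed from the stated chances.
p_single_event = symptom_and_disease / symptom

# Frequency reading: imagine 10 tested women; 4 show the symptom,
# 1 of whom has the disease.
p_frequency = symptom_and_disease / symptom

# Either way, the computation is the same subset division; only
# the participant's interpretation of the numbers can differ.
print(p_single_event == p_frequency)  # True
```

Because the presented numbers and the required division are identical under both readings, any performance difference between participants who interpret "chances" as frequencies and those who interpret them as probabilities must come from the representation, not the computation.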
Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Using Pictures to Aid Bayesian Reasoning <s> Abstract Professional probabilists have long argued over what probability means, with, for example, Bayesians arguing that probabilities refer to subjective degrees of confidence and frequentists arguing that probabilities refer to the frequencies of events in the world. Recently, Gigerenzer and his colleagues have argued that these same distinctions are made by untutored subjects, and that, for many domains, the human mind represents probabilistic information as frequencies. We analyze several reasons why, from an ecological and evolutionary perspective, certain classes of problem-solving mechanisms in the human mind should be expected to represent probabilistic information as frequencies. Then, using a problem famous in the “heuristics and biases” literature for eliciting base rate neglect, we show that correct Bayesian reasoning can be elicited in 76% of subjects - indeed, 92% in the most ecologically valid condition - simply by expressing the problem in frequentist terms. This result adds to the growing body of literature showing that frequentist representations cause various cognitive biases to disappear, including overconfidence, the conjunction fallacy, and base-rate neglect. Taken together, these new findings indicate that the conclusion most common in the literature on judgment under uncertainty - that our inductive reasoning mechanisms do not embody a calculus of probability - will have to be re-examined. From an ecological and evolutionary perspective, humans may turn out to be good intuitive statisticians after all. 
<s> BIB001 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Using Pictures to Aid Bayesian Reasoning <s> Cosmides and Tooby (1996) increased performance using a frequency rather than probability frame on a problem known to elicit base-rate neglect. Analogously, Gigerenzer (1994) claimed that the conjunction fallacy disappears when formulated in terms of frequency rather than the more usual single-event probability. These authors conclude that a module or algorithm of mind exists that is able to compute with frequencies but not probabilities. The studies reported here found that base-rate neglect could also be reduced using a clearly stated single-event probability frame and by using a diagram that clarified the critical nested-set relations of the problem; that the frequency advantage could be eliminated in the conjunction fallacy by separating the critical statements so that their nested relation was opaque; and that the large effect of frequency framing on the two problems studied is not stable. Facilitation via frequency is a result of clarifying the probabilistic interpretation of the problem and inducing a representation in terms of instances, a form that makes the nested-set relations amongst the problem components transparent. <s> BIB002 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Using Pictures to Aid Bayesian Reasoning <s> Abstract. Recent probability judgment research contrasts two opposing views. Some theorists have emphasized the role of frequency representations in facilitating probabilistic correctness; opponents have noted that visualizing the probabilistic structure of the task sufficiently facilitates normative reasoning. In the current experiment, the following conditional probability task, an isomorph of the “Problem of Three Prisoners” was tested. “A factory manufactures artificial gemstones. 
Each gemstone has a 1/3 chance of being blurred, a 1/3 chance of being cracked, and a 1/3 chance of being clear. An inspection machine removes all cracked gemstones, and retains all clear gemstones. However, the machine removes ½ of the blurred gemstones. What is the chance that a gemstone is blurred after the inspection?” A 2 × 2 design was administered. The first variable was the use of frequency instruction. The second manipulation was the use of a roulette-wheel diagram that illustrated a “nested-sets” relationship betwe... <s> BIB003 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Using Pictures to Aid Bayesian Reasoning <s> The idea that naturally sampled frequencies facilitate performance in statistical reasoning tasks because they are a cognitively privileged representational format has been challenged by findings that similarly structured numbers presented as chances similarly facilitate performance, on the basis of the claim that these are technically single-event probabilities. A crucial opinion, however, is that of the research participants, who possibly interpret chances as de facto frequencies. A series of experiments here indicate that not only is performance improved by clearly presented natural frequencies, rather than chances phrasing, but also that participants who interpreted chances as frequencies, rather than as probabilities, were consistently better at statistical reasoning. This result was found across different variations of information presentation and across different populations. 
<s> BIB004 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Using Pictures to Aid Bayesian Reasoning <s> SUMMARY In an ongoing debate between two visions of statistical reasoning competency, ecological rationality proponents claim that pictorial representations help tap into the frequency coding mechanisms of the mind, whereas nested sets proponents argue that pictorial representations simply help one to appreciate general subset relationships. Advancing this knowledge into applied areas is hampered by this present disagreement. A series of experiments used Bayesian reasoning problems with different pictorial representations (Venn circles, iconic symbols and Venn circles with dots) to better understand influences on performance across these representation types. Results with various static and interactive presentations of pictures all indicate a consistent advantage for iconic representations. These results are more consistent with an ecological rationality view of how these pictorial representations achieve facilitation in statistical task performance and provide more specific guidance for applied uses. Copyright © 2008 John Wiley & Sons, Ltd. <s> BIB005 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Using Pictures to Aid Bayesian Reasoning <s> In an ongoing debate about statistical reasoning competency, one view claims that pictorial representations help tap into the frequency coding mechanisms, whereas another view argues that pictorial representations simply help one to appreciate general subset relationships. The present experiments used Bayesian reasoning problems, expressed in an ambiguous numerical format (chances) and with different pictorial representations, to better understand influences on performance across these representation types.
Although a roulette wheel diagram had some positive effect on performance, both abstract icons and pictographs improved performance markedly more. Furthermore, a frequency interpretation of the ambiguous numerical information was also associated with superior performance. These findings support the position that the human mind is more easily able to use frequency-based information, as opposed to grasping subset relations, as an explanation for improved statistical reasoning. These results also provide ... <s> BIB006
Generally speaking, pictures help Bayesian reasoning. Like the research on frequencies and natural sampling, however, there is disagreement on how and why they help. The ecological rationality account BIB001 considers pictorial representations helpful because they tap into the frequency-tracking cognitive mechanisms of a mind designed by the ecology experienced over evolutionary history. That is, people have been tracking, storing, and using information about the frequencies of objects, locations, and events for many generations. Visual representations of objects, events, and locations should therefore be closer to that type of information with which the mind is designed to work. An alternative heuristics and biases account is that pictures help to make the structure of Bayesian reasoning problems easier to understand. This account of pictures helping because they enable people to "see the problem more clearly" is often tied to the co-opted and abstracted idea of natural sampling; the pictures help make the subset structure, the set-inclusion model, or the nested-set relations more apparent (e.g., BIB002 BIB003 ). Indeed, there are parallels here in the comparison of these two perspectives: the ecological rationality account proposes a more narrowly specified (and evolutionarily based) account, whereas the heuristics and biases account favors a less specific (non-evolutionary) account. Subsequent research BIB005 BIB006 has taken advantage of the fact that ambiguous numerical formats can be interpreted either as frequencies or as probabilities. By using the "chances" wording for the actual text, and therefore holding the numerical information constant while varying the type of pictorial representation, this research has been able to compare different types of pictorial aids against a neutral task backdrop.
BIB005 found that, compared to control conditions of no picture at all, Venn circles (which should facilitate the perception of subset relationships) did not help nearly as much as pictures of icon arrays (which should facilitate frequency interpretations of the information). Furthermore, a picture with intermediate properties (a Venn circle with dots embedded within it) led to intermediate performance between solid Venn circles and icon arrays. Subsequent research took an interesting intermediate theoretical position, claiming that the heuristics and biases account predicted no facilitation of Bayesian reasoning from using pictures (contra BIB002 BIB003 ). Their null findings of several different types of pictures failing to improve Bayesian reasoning are used to challenge the ecological rationality account, which they agree does predict an improvement with the use of pictures. A nearly concurrent publication replicated and extended the specific effects of BIB005 , however, casting doubt on the significance of the null findings. BIB006 found that roulette wheel diagrams (like those used in BIB003 ) led to performance similar to that of Venn diagrams, and that both realistic and abstract icon shapes significantly improved performance. Interpretation of the ambiguous numerical information as frequencies also improved Bayesian reasoning performance in all these conditions (replicating the findings of BIB004 ), separate from the effects of the different picture types.
Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why

Individual Differences in Bayesian Reasoning
There have been various claims that certain individual differences may moderate the often-observed frequency effect in Bayesian reasoning. BIB006 demonstrated that numerical literacy (or numeracy), an applicable understanding of probability, risk, and basic mathematics, moderated many classic judgment and decision making results, showing proof of concept that not all judgment and decision making tasks are viewed the same way by every individual. Specifically, BIB006 showed that low numerates may benefit the most from number formats designed to aid comprehension of the information. The explanation proposed for these results can be summarized as a "fluency hypothesis": more numerically fluent people (those higher in numerical literacy) are influenced less by the use of different numerical formats because they are quite capable of mentally converting formats themselves. In doing so, these highly numerate people utilize the numerical format best suited for the present task. Less numerically fluent people, on the other hand, are prone to work only with the numerical information as presented to them, which leaves them more at the mercy of whatever helpful or harmful format they are given.

Although BIB006 did not assess Bayesian reasoning specifically, Chapman and Liu (2009) later brought the issue of numerical literacy to the topic of frequency effects in Bayesian reasoning tasks. The story takes an interesting turn at this point: although BIB006 showed that low numerates benefited most from a number format change to frequencies, BIB009 showed instead that high numerates differentially benefited from natural frequency formatted Bayesian reasoning problems. Specifically, they found that the frequency effect was only observed in highly numerate individuals, resulting in a statistically significant numeracy × number format interaction. BIB009 pointed out that some other research is consistent with these results.
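The frequency effect at issue rests on the fact that a single-event probability problem and its natural frequency restatement are formally equivalent applications of Bayes' theorem. A minimal sketch, using illustrative numbers in the spirit of the classic mammography problem (the specific rates are not taken from any study reviewed here):

```python
def posterior_from_probabilities(base_rate, sensitivity, false_pos_rate):
    """Single-event probability format: apply Bayes' theorem directly."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * false_pos_rate
    return true_pos / (true_pos + false_pos)

def posterior_from_frequencies(n_sick_positive, n_healthy_positive):
    """Natural frequency format: the answer is a simple ratio of counts."""
    return n_sick_positive / (n_sick_positive + n_healthy_positive)

# Probability version: 1% base rate, 80% sensitivity, 9.6% false-positive rate.
p = posterior_from_probabilities(0.01, 0.80, 0.096)

# Same information naturally sampled from 1,000 people: 10 are sick
# (8 of whom test positive); 990 are healthy (95 of whom test positive).
f = posterior_from_frequencies(8, 95)

print(round(p, 3), round(f, 3))  # both ≈ 0.078
```

The frequency version reduces the computation to a ratio of two counts, which is one concrete way of stating why that format might be easier to process.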
In particular, BIB007 provided different groups of participants with Bayesian reasoning problems framed as a test for a birth defect. The participants were either obstetricians, pregnant women and their spouses, or midwives. The effect of presentation format was assessed with a between-subjects manipulation, with some participants receiving naturally sampled frequencies and others receiving a single-event probability format. Although the frequency effect was observed in their study, a closer examination showed that this effect was limited to obstetricians, whereas the midwives, pregnant women, and their spouses all showed equally poor Bayesian reasoning performance regardless of number format. To the extent that obstetricians have somewhat higher numerical literacy, which is a plausible assumption, the BIB007 results would be consistent with those of BIB009. Both of these results, however, are inconsistent with the findings and the fluency hypothesis of BIB006.

BIB009 proposed something akin to a "threshold" hypothesis regarding the interaction effect they found. This threshold hypothesis proposes that a certain level of numerical literacy is required for difficult problems (such as Bayesian reasoning tasks) before helpful formats (e.g., naturally sampled frequencies) are able to provide an observable benefit. To assess this threshold hypothesis and the fluency hypothesis proposed by BIB006, BIB011 systematically tested a variety of problem types with varying levels of difficulty and in different number formats, while also assessing numerical literacy with the standard measure used in this research (i.e., the General Numeracy Scale; BIB003). These findings generally showed an absence of any interaction across several different problem types.
Of most importance to the current paper, the Bayesian reasoning problems originally used by BIB009 also failed to replicate the numeracy × number format interaction, casting some specific doubt on the "threshold hypothesis" of Bayesian reasoning, and to a lesser extent on the "fluency hypothesis" of judgment and decision making tasks in general. The one constant across these studies was a consistent main effect of numeracy and a consistent main effect of number format: higher numerates performed better on Bayesian reasoning tasks, and participants given the natural frequency format performed better than those given single-event probability versions.

Support for the findings of BIB011 was provided by Garcia-Retamero and Hoffrage (2013), who studied the Bayesian reasoning ability of doctors and patients in medical decision tasks. After fully crossing conditions by number format (natural frequencies and single-event probabilities) and display (numbers only or a pictorial representation), participants' numeracy scores were also assessed. Garcia-Retamero and Hoffrage BIB012 found the traditional frequency effect, just as in BIB011, and also an improvement in Bayesian reasoning performance from including a pictorial representation. Numeracy did not interact with the frequency effect, again consistent with the BIB011 findings and with the ecological rationality explanation of the frequency effect. BIB013 also partially replicated the lack of a numeracy × number format interaction and found consistent improvement in Bayesian reasoning as a result of using natural frequencies, the only exception being very difficult problems, operationally defined by the longer word length of the problem text. BIB013 proposed that both BIB009 and BIB011 may be partially correct.
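The competing data patterns can be made concrete as a difference-of-differences contrast on cell accuracies in the 2 (numeracy) × 2 (format) design. The numbers below are invented purely for illustration: the first table mimics a two-main-effects pattern (as reported by BIB011), the second a "threshold" pattern (as reported by BIB009).

```python
def frequency_effect(acc, numeracy):
    """Percentage-point accuracy gain from the frequency format within one numeracy group."""
    return acc[(numeracy, "frequency")] - acc[(numeracy, "probability")]

def interaction_contrast(acc):
    """Difference of differences: nonzero suggests a numeracy x format interaction."""
    return frequency_effect(acc, "high") - frequency_effect(acc, "low")

# Invented % correct, illustration only: two main effects, no interaction.
main_effects_only = {
    ("low", "probability"): 10, ("low", "frequency"): 30,
    ("high", "probability"): 20, ("high", "frequency"): 40,
}

# Invented % correct: "threshold" pattern, low numerates near floor in both formats.
threshold_pattern = {
    ("low", "probability"): 3, ("low", "frequency"): 5,
    ("high", "probability"): 10, ("high", "frequency"): 35,
}

print(interaction_contrast(main_effects_only))  # 0: parallel frequency effects
print(interaction_contrast(threshold_pattern))  # 23: frequency effect mostly in high numerates
```

Both tables show a frequency main effect; only the second shows the interaction, which is the empirical disagreement at stake.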
When given long ("difficult") problems, the numeracy × number format interaction was present, with low numerates showing a floor effect and high numerates showing the benefit of natural frequencies, a finding consistent with the "threshold hypothesis" of BIB009. However, with less difficult problems the numeracy × number format interaction disappeared, a finding in line with BIB011.

The above set of results led BIB013 to suggest a potential problem with the evolutionary accounts proposed by various researchers (e.g., BIB001), in that there was no frequency facilitation effect for the very difficult problems. The present authors, however, do not see this as a problem for an evolutionary account. We reach this conclusion because differences in problem context (e.g., problem difficulty, word count) that are assessed in terms of the written problem's properties are only tenuously connected to evolved cognitive abilities. Cognitive mechanisms evolved to solve specific problems in specific environments. The perspective of ecological rationality, which is generally consistent with evolutionary psychology, is built upon a similar premise (i.e., the fit between the structure of the environment and the design of the mind; BIB010).

By analogy, this situation can be compared to someone proposing that humans have an evolved ability to develop complex language. This proposal is not endangered by the observation that people (even highly literate people) find a college physics textbook difficult to read. Reading is a cultural invention which taps into our evolved language ability, and thus our ability to handle a particularly difficult written text is only tenuously connected to the evolved cognitive ability for human language.

More recent work on individual difference moderators of the frequency effect in Bayesian reasoning has only made the aforementioned research more perplexing.
For instance, BIB017 demonstrated a "threshold" type effect despite slightly different problem format manipulations. Specifically, BIB017 assessed the differences between the standard format (single event probabilities) and a causal format (still single event probabilities, but with additional text describing a possible cause for false positive test results); previous research by BIB008 demonstrated evidence that causal structures in problems could lead to improved Bayesian reasoning performance. In separate studies, BIB017 found evidence for numerical literacy serving as a moderator of problem structure's benefits on Bayesian accuracy, with the effect of problem structure only present in highly numerate individuals. Similar to the discussion of the threshold hypothesis of BIB009 , this observation of an apparent moderating relationship between privileged representational formats and individual difference measures (e.g., numeracy, cognitive reflection) might be seen as damaging to evolutionary and ecological accounts. However, the same explanation as offered for the BIB009 results can hold for the BIB017 results: that performance near floor effect levels can resemble an interaction. In fact, performance in the BIB017 studies was somewhat low (range: 3 to 32% in the lowest to highest performing conditions). Other recent research (BIB014 ; BIB016 ) has addressed a commonly held assumption critics make about the "ecological rationality account": if naturally sampled frequencies are a privileged representational format for an evolved statistical reasoning module, then the module must be "closed" and automatic. Thus, any general cognitive traits (e.g., cognitive reflection), or any method of decreasing general cognitive capacity (e.g., cognitive load), should not significantly interfere with Bayesian performance, or the frequency effect.
In general terms, this idea is the assumption of modular encapsulation, which is still promoted by Fodor but actually not accepted in any prominent evolutionary psychology view (e.g., compare BIB002 and BIB005 ). Although both groups of authors readily acknowledge the research conducted, and the reviews published, concerning the massive modularity hypothesis, there does seem to be some misunderstanding. For example, Barrett and Kurzban (2006, see specifically pp. 636-637), which is cited by some of the work mentioned above, discuss at length the misunderstandings about automaticity of evolved modules, and the method of using cognitive load induced deficits as evidence against evolved modules. Without getting too detailed, their arguments can be summarized by the following analogy: personal computers have a variety of specialized programs (modules). Few would argue that a word processor works as efficiently at storing and computing numerical data as a spreadsheet program does. Thus, these programs are separate, and specialized. However, if I download 1,000 music files to my computer, the overall performance of those separate programs will suffer, at least with respect to processing time. Also, if I drain the battery power in my laptop, the programs will fail to operate at all. This observation does not lead directly to the conclusion that the programs are not specialized. It simply points to the conclusion that the programs require some overlapping general resources. The same conclusion should be made with respect to cognitive modules. The examples in this analogy are extreme instances of general situations which can impair the functioning of functionally specific modules, but the point holds. The question becomes not one of modular abilities being impervious to general resource constraints, but rather one of understanding how particular situational contexts influence the functioning of specific cognitive abilities.
In a different study of individual differences, BIB015 found the standard benefits of pictorial representations (Venn diagrams, in this case) in answering complex statistical tasks such as Bayesian reasoning. Furthermore, this general pattern interacted with measured spatial ability, which was independently assessed. In low-complexity problems, low spatial ability participants actually were hurt by pictorial representations, whereas high spatial ability participants demonstrated no difference between pictorial and text displays. However, in high-complexity problems, high spatial ability participants were aided in their understanding by the presence of pictorial representations, whereas low spatial ability participants saw no benefit. This last result is somewhat consistent with a threshold hypothesis, but there are many issues within these studies in need of deeper assessment. Further research is needed to clarify how different spatial ability levels are related to the use of different types of visual displays and if there is any relationship between spatial ability, numeracy, and the effects of naturally sampled frequencies. Finally, there are differences in performance that are related to the incentive structures under which people are asked to do Bayesian reasoning tasks. Research participants who do Bayesian reasoning tasks as part of a college course (either through a research "subject pool" or as in-class volunteers) tend to perform less well than participants who are paid money for their participation. This same research also documented that participants from more selective universities generally performed better than those from less selective universities, most likely due to a combination of different overall ability levels and different intrinsic motivation levels to do academic-type tasks.
Later work extended this research to show that people whose payments were tied to performance (i.e., correct responses received more money) did even better than people who were given a flat payment for their participation. This is an important factor in, for example, understanding the very high level of Bayesian reasoning performance found by BIB001 (paid participants from Stanford University) versus the lower performance on the same task in BIB004 (in-class participants from Brown University). In all cases, however, it should be noted that the relative levels of performance when varying the use of natural sampling, frequencies, and pictorial aids were consistent across studies. Absolute performance levels vary, but these methods for improving Bayesian reasoning remain effective.
Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Conclusion <s> Is the mind, by design, predisposed against performing Bayesian inference? Previous research on base rate neglect suggests that the mind lacks the appropriate cognitive algorithms. However, any claim against the existence of an algorithm, Bayesian or otherwise, is impossible to evaluate unless one specifies the information format in which it is designed to operate. The authors show that Bayesian algorithms are computationally simpler in frequency formats than in the probability formats used in previous research. Frequency formats correspond to the sequential way information is acquired in natural sampling, from animal foraging to neural networks. By analyzing several thousand solutions to Bayesian problems, the authors found that when information was presented in frequency formats, statistically naive participants derived up to 50% of all inferences by Bayesian algorithms. Non-Bayesian algorithms included simple versions of Fisherian and Neyman-Pearsonian inference. Is the mind, by design, predisposed against performing Bayesian inference? The classical probabilists of the Enlightenment, including Condorcet, Poisson, and Laplace, equated probability theory with the common sense of educated people, who were known then as "hommes eclaires." Laplace (1814/1951) declared that "the theory of probability is at bottom nothing more than good sense reduced to a calculus which evaluates that which good minds know by a sort of instinct, without being able to explain how with precision" (p. 196). The available mathematical tools, in particular the theorems of Bayes and Bernoulli, were seen as descriptions of actual human judgment (Daston, 1981, 1988).
However, the years of political upheaval during the French Revolution prompted Laplace, unlike earlier writers such as Condorcet, to issue repeated disclaimers that probability theory, because of the interference of passion and desire, could not account for all relevant factors in human judgment. The Enlightenment view—that the laws of probability are the laws of the mind—moderated as it was through the French Revolution, had a profound influence on 19th- and 20th-century science. This view became the starting point for seminal contributions to mathematics, as when George Boole <s> BIB001 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Conclusion <s> Abstract Professional probabilists have long argued over what probability means, with, for example, Bayesians arguing that probabilities refer to subjective degrees of confidence and frequentists arguing that probabilities refer to the frequencies of events in the world. Recently, Gigerenzer and his colleagues have argued that these same distinctions are made by untutored subjects, and that, for many domains, the human mind represents probabilistic information as frequencies. We analyze several reasons why, from an ecological and evolutionary perspective, certain classes of problem-solving mechanisms in the human mind should be expected to represent probabilistic information as frequencies. Then, using a problem famous in the “heuristics and biases” literature for eliciting base rate neglect, we show that correct Bayesian reasoning can be elicited in 76% of subjects - indeed, 92% in the most ecologically valid condition - simply by expressing the problem in frequentist terms. This result adds to the growing body of literature showing that frequentist representations cause various cognitive biases to disappear, including overconfidence, the conjunction fallacy, and base-rate neglect. 
Taken together, these new findings indicate that the conclusion most common in the literature on judgment under uncertainty - that our inductive reasoning mechanisms do not embody a calculus of probability - will have to be re-examined. From an ecological and evolutionary perspective, humans may turn out to be good intuitive statisticians after all. <s> BIB002 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Conclusion <s> SUMMARY In an ongoing debate between two visions of statistical reasoning competency, ecological rationality proponents claim that pictorial representations help tap into the frequency coding mechanisms of the mind, whereas nested sets proponents argue that pictorial representations simply help one to appreciate general subset relationships. Advancing this knowledge into applied areas is hampered by this present disagreement. A series of experiments used Bayesian reasoning problems with different pictorial representations (Venn circles, iconic symbols and Venn circles with dots) to better understand influences on performance across these representation types. Results with various static and interactive presentations of pictures all indicate a consistent advantage for iconic representations. These results are more consistent with an ecological rationality view of how these pictorial representations achieve facilitation in statistical task performance and provide more specific guidance for applied uses. Copyright # 2008 John Wiley & Sons, Ltd. <s> BIB003 </s> Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why <s> Conclusion <s> In an ongoing debate about statistical reasoning competency, one view claims that pictorial representations help tap into the frequency coding mechanisms, whereas another view argues that pictorial representations simply help one to appreciate general subset relationships. 
The present experiments used Bayesian reasoning problems, expressed in an ambiguous numerical format (chances) and with different pictorial representations, to better understand influences on performance across these representation types. Although a roulette wheel diagram had some positive effect on performance, both abstract icons and pictographs improved performance markedly more. Furthermore, a frequency interpretation of the ambiguous numerical information was also associated with superior performance. These findings support the position that the human mind is more easily able to use frequency-based information, as opposed to grasping subset relations, as an explanation for improved statistical reasoning. These results also provide ... <s> BIB004
Overall, the literature on Bayesian reasoning is clear and straightforward in terms of what works for improving performance: natural sampling, frequencies, icon-based pictures, and more general development of the prerequisite skills for these tasks (i.e., numerical literacy, visual ability, and motivation to reach the correct answer). The more contentious topic is that of why these factors work to improve Bayesian reasoning. The balance of evidence favors the ecological and evolutionary rationality explanations for why these factors are key to improving Bayesian reasoning. This verdict is supported by multiple considerations which flow from the preceding review. First, the ecological rationality account is consistent with a broad array of scientific knowledge from animal foraging, evolutionary biology, developmental psychology, and other areas of psychological inquiry. Second, the ecological rationality approach is the view which has consistently tended to discover and refine the existence of these factors based on a priori theoretical considerations, whereas alternative accounts have tended to emerge as post hoc explanations. (To be specific, the facilitation effect of natural frequencies documented by BIB001 , the facilitative effect of pictorial representation documented by BIB002 , the effect of using whole objects versus aspects of objects documented by , and the differential effects of specific types of pictorial aids in Bayesian reasoning documented by BIB003 and BIB004 all were established based on ecological rationality considerations which were then followed by alternative accounts.) Third, the actual nature of the evidence itself supports the ecological rationality approach more than other accounts. For instance, in head-to-head evaluations of rival hypotheses, using uncontestable methodologies, the results have supported the ecological rationality explanations (e.g., BIB003 ).
Furthermore, a quite recent meta-analysis (McDowell and Jacobs, 2014) has conclusively established the validity of the effect of naturally sampled frequencies in facilitating Bayesian reasoning, as described from an ecological rationality perspective. Distressingly, some proponents of a heuristics and biases view of Bayesian reasoning have not engaged with the bulk of the above literature which critically evaluates this view relative to the ecological rationality view. As just one illustration, Ayal and Beyth-Marom (2014) cite the seminal work by BIB001 , yet ignore nearly all of the other research done from an ecological rationality approach in the subsequent nearly 20 years. Robert Frost (1919/1999) wrote that "good fences make good neighbors," but in science, perhaps even more than in other domains of life, fences are not good. Willingness to engage openly, honestly, and consistently with the ideas one does not agree with should be a hallmark of scientific inquiry. Failing to do so is scientifically irresponsible. In conclusion, the vast majority of studies in human Bayesian reasoning align well with the evolutionary and ecological rationality accounts of how the mind may be designed. These accounts are theoretically parsimonious and established in a rich set of literature from a wide range of interrelated disciplines. Alternative explanations, however, tend to appeal to stripped down parts of this account, often losing clear predictive power in the process, and neglect the ecological and evolutionary circumstances of the human mind they purport to explain. That does not mean that the heuristics and biases account no longer has any validity. The intellectually invigorating component of this debate is that we do not fully understand all that there is to learn about how people engage in (or fail to engage in) Bayesian reasoning.
There is still much to learn about the possible environmental constraints on Bayesian reasoning (e.g., problem difficulty, number of cues), and how those constraints may be interwoven with individual differences (e.g., numerical literacy, spatial ability), and even different measures of specific individual differences (e.g., subjective vs. objective numeracy). We look forward to disassembling walls and integrating various perspectives, with the hope of more fully understanding how to improve Bayesian reasoning, and how those methods of improvement illuminate the nature of human cognition.
Overview of Digital Television Development Worldwide <s> B. Analog TV Enhancement Projects <s> Statistics and reality are often in conflict, and nowhere is the contrast more marked than in the world of robotics in Japan. In the 1970s, statistics showed that Japan was using robots at a prodigious pace, yet visitors had difficulty in finding more than a few. Moreover, when they did find robots, the applications were unimaginative. <s> BIB001 </s> Overview of Digital Television Development Worldwide <s> B. Analog TV Enhancement Projects <s> The advanced compatible television (ACTV) system is a proposal for the single-channel transmission of widescreen enhanced-definition television (EDTV) images. A widescreen high-definition source is encoded into a signal that is NTSC-compatible. Existing NTSC receivers display a selected 4:3 portion of the widescreen image with standard NTSC resolution. A new widescreen receiver is proposed, tuned to the same 6 MHz RF channel, that displays a widescreen image with a resolution in excess of 400 lines/picture height in both spatial dimensions. The encoding process is reviewed and the recovery of various signal components to produce the widescreen image in the ACTV receiver is discussed. > <s> BIB002
On the evolutionary path to fully digital TV systems, there were several projects to enhance and improve analog television using advanced analog and hybrid analog-digital technologies. Some of the projects worth mentioning are the Japan Broadcasting Corporation (NHK) HDTV BIB001 project in Japan, the Eureka EU 95 Project and PALplus in Europe, and Advanced Compatible Television in the United States BIB002 . They provided valuable experience for future DTV systems development.
Overview of Digital Television Development Worldwide <s> 1) NHK HDTV Projects: <s> The status of high-definition television (HDTV) in Japan, Europe, and the US is examined. Japan has begun experimental broadcasts, and Europe plans experimental HDTV broadcasts in 1991, while the US is mired in disputes over just how important HDTV might be to its deteriorating consumer electronics industry. Delays in selecting a terrestrial broadcasting transmission standard for the US suggest 1993 as the earliest possible date for the start of HDTV broadcasting in the United States and 1995 as a more probable date. Problems and approaches in the US, Japan, and Europe are compared, focusing on government support and standardization. > <s> BIB001 </s> Overview of Digital Television Development Worldwide <s> 1) NHK HDTV Projects: <s> The authors describe the characteristics, of the 1125/60 high-definition television (HDTV) standard and the applications of HDTV, which differ from those of conventional television. They discuss the importance of a worldwide unified studio standard. They describe the development of the 1125/60 standard and efforts to establish it as an international standard. The authors examine the status and future of HDTV as regards equipment, program production, and utilization. > <s> BIB002
In 1964, the NHK Science and Technical Research Laboratories (STRL) started a research project on future television systems. This work began right after the successful live broadcasting of the 1964 Tokyo Summer Olympic Games to audiences around the world using satellite video transmission techniques developed by NHK. After five years of study, NHK established the concept of high definition television (HDTV) as a system suitable for viewing at roughly three times the picture height (compared to the five times picture height viewing distance suitable for the 525- and 625-line systems then in use around the world), having twice the horizontal and vertical resolution of conventional TV systems (i.e., over 1000 scanning lines), and a wide aspect ratio. NHK then began research and development that was aimed at making HDTV a future broadcasting medium. By 1975, the 1125-line/60-Hz scanning format was introduced based on the experimental studies that examined hardware feasibility and human vision properties. In the early 1980s, NHK had developed most of the prototype equipment necessary for HDTV program production, including cameras, video tape recorders (VTRs), and display devices, largely based on analog techniques. In 1981, NHK conducted the first HDTV demonstration in the United States jointly with the Society of Motion Picture and Television Engineers (SMPTE), followed by a demonstration with the Columbia Broadcasting System (CBS) and the first demonstration in Europe with the European Broadcasting Union (EBU) in 1982. In 1983, NHK developed a bandwidth reduction system called MUSE (which stands for "multiple sub-Nyquist sampling encoding"). The system employed multiple sub-Nyquist sampling techniques in order to compress the original HDTV signal bandwidth from 30 MHz down to 8.1 MHz. This would allow the transmission of the HDTV signal using one satellite transponder via frequency modulation (FM) of the MUSE signal.
The first trial of MUSE broadcasting was at the Tsukuba Expo'85 in Japan (Fig. 1), which sparked worldwide efforts to develop practical HDTV transmission systems BIB001 . In 1989, MUSE satellite direct-to-home broadcasting started on an experimental basis. Currently, a single channel of daily MUSE broadcasting is still in operation. It is scheduled to move to digital broadcasting in 2007. In the 1990s, HDTV production and distribution equipment made remarkable progress, driven by the need to supply programs for satellite MUSE broadcast services. The developments included CCD cameras, digital VTRs, camcorders, production and routing switchers, digital codecs for microwave links, and other equipment. A new digital satellite broadcasting service was started in December 2000. The major broadcasters in Japan are broadcasting seven HDTV services via a direct broadcasting link using a broadcasting satellite (BS), allocated in the BSS planned band of 11.7-12.2 GHz. Moreover, terrestrial digital broadcasting started in the Tokyo, Osaka, and Nagoya metropolitan areas in December 2003. Both the satellite and the terrestrial systems are based on integrated services digital broadcasting (ISDB), which is HDTV-centered digital broadcasting augmented by various data services providing program-related information. In Japan, HDTV is considered indispensable for data services, since much of their content includes fine characters and/or graphics, which are readable only when displayed on a high definition screen. Regarding international standardization activities, in 1972, Japan proposed a study program on HDTV to the CCIR (now ITU-R). The proposal was adopted by the CCIR Plenary Assembly in 1974, and since then, Japan has contributed many study results.
One of the first parameters to be extensively debated was the aspect ratio of an HDTV system, which was originally conceived as being 5 : 3, offering somewhat wider pictures than the 4 : 3 aspect ratio of the 525- and 625-line television systems. International agreement was eventually achieved on a 16 : 9 aspect ratio, which more closely matches the 1.85 : 1 aspect ratio that is frequently used in 35-mm film production. Another key development was Japan's proposal of the 1125-line/60-Hz format as the unified worldwide standard. The number of scanning lines was chosen to produce a similar level of complexity for format conversion to/from conventional TV systems with 525 or 625 lines and was based on the following criteria. a) The number should be larger than 1050, for a viewing distance of three to four screen heights. b) It should be an odd number for interlace scanning. c) Since the greatest common divisor of 525 and 625 is 25, the total number of scanning lines should be a multiple of 25. Therefore, a "magic number" of 1125 was selected. In 1997, Japan proposed that the two regionally proposed HDTV standards (the 1125-line system and the 1250-line system proposed in Europe; see next section) be unified by introducing a common image format of 1080 active lines and 74.25-MHz sampling BIB002 . This resulted in ITU-R Recommendation BT.709-3. In 1999, the total number of scanning lines, which was the last outstanding parameter, was unanimously agreed upon to be 1125, and a unified worldwide HDTV standard was finally realized. In 2000, ITU-R approved this HDTV system as the international studio standard, making it easier for international program exchange. This can be considered a great achievement by ITU-R Study Group 6 (former Study Group 11), which has worked on the issues over many years since the adoption of the Study Program in 1974. 2) Eureka EU 95 Project and PALplus: Since the 1960s, European countries have been using two analog color TV systems, namely, PAL and SECAM.
Different variants of these two systems are in operation in different European countries. The move to HDTV was seen as an excellent opportunity to define a common system for the whole of Europe to replace PAL and SECAM, and to compete with the 1125-line/60-Hz HDTV production standard developed by Japan. This led to the creation of a research project funded by the European Union (EU) as part of its Eureka research framework. This project, code named EU 95, was supposed to develop a European solution for HDTV based upon the 1250-line/50-Hz system. Multiplexed Analogue Components (MAC) was chosen as the fundamental technology underlying the development . C-MAC, D-MAC, and D2-MAC were proposed as standard definition television (SDTV) transmission standard variants, and HD-MAC as the SDTV-backward-compatible HDTV variant. All MAC systems were targeting cable and satellite distribution only, since, at that time, the terrestrial spectrum was seen as a resource that eventually would be freed from TV broadcasting services. In order to support the strategy underlying EU 95, in 1986 the European Commission issued what was called the MAC directive, in which it stated that all direct-to-home broadcast satellite (DBS) services using high-power satellites must use MAC or HD-MAC. The HD-MAC system was successfully developed, and HDTV pictures of impressive quality were shown in live satellite broadcasts from the Olympic Games in Albertville, France (1992), Barcelona, Spain (1992), and Lillehammer, Norway (1994). However, none of the MAC systems succeeded in the European marketplace, for a multiplicity of technical, commercial, and programming reasons. In many countries of Europe, satellite direct-to-home broadcasting was very successfully introduced in the late 1980s; however, the successful services used an FM-modulated PAL signal for satellite distribution.
This was possible, despite the existence of the "MAC directive," because the satellites used were considered telecommunications satellites, not DBS satellites, and, therefore, did not fall under that directive. The MAC systems were by-passed by the successful deployment of PAL satellite services, and most satellite receiver manufacturers essentially ignored the MAC solutions altogether. The EU 95 project finished in 1995. While HD-MAC was being developed as a transmission standard for cable and satellite distribution, a decision had to be taken about the future transmission technology for the terrestrial networks, which at that time delivered PAL and SECAM signals with 625 lines and an aspect ratio of 4 : 3. The planned introduction of HD-MAC was intended to lead to the deployment of production facilities providing HDTV pictures with an aspect ratio of 16 : 9, but the terrestrial TV networks would only be able to deliver image quality much inferior to the HD-MAC cable and satellite offerings, including a reduction of the number of TV lines displayed, which was in any case restricted by the 625 lines of the PAL standard. This reduction was a result of the "letterbox" approach that was chosen by many European broadcasters to overcome the incompatibility between 4 : 3 PAL displays and the 16 : 9 aspect ratio used for HDTV production. A solution was developed by the PALplus project, which started as a German "strategy group PAL" in 1988 and was formally created in early 1991 . The group solved the aspect ratio problem by adding separately coded side panels to achieve 16 : 9 without reducing vertical resolution. PALplus was officially launched in Germany in 1995, and significant numbers of PALplus 16 : 9 receivers were sold, ironically despite the demise of HD-MAC. Key representatives of the PALplus project became the founders of digital video broadcasting (DVB), and a number of lessons learned by the members of this project were used in the creation of the DVB Project.
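Returning to the choice of 1125 scanning lines described in the previous section, the selection criteria can be checked mechanically. The short sketch below is our own illustration, not part of the original standardization work: it enumerates the candidate line counts satisfying criteria a) to c) and reports the reduced conversion ratios to the 525- and 625-line systems. Among the candidates, 1125 yields the simplest ratios (15 : 7 and 9 : 5), consistent with the stated goal of limiting format conversion complexity.

```python
from fractions import Fraction

# Criteria from the text: more than 1050 lines, an odd number
# (for interlace scanning), and a multiple of 25 (the greatest
# common divisor of 525 and 625).
candidates = [n for n in range(1051, 1200)
              if n % 2 == 1 and n % 25 == 0]
print(candidates)  # [1075, 1125, 1175]

for n in candidates:
    # Reduced line-count ratios govern the complexity of format
    # conversion to/from the 525- and 625-line systems.
    print(n, Fraction(n, 525), Fraction(n, 625))
# 1125 gives the simplest ratios: 15/7 to 525 lines and 9/5 to 625 lines.
```

Neighboring candidates such as 1075 reduce only to 43/21 and 43/25, so 1125 is the natural choice among the numbers meeting all three criteria.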
Overview of Digital Television Development Worldwide <s> 1) ACATS: <s> The advanced compatible television (ACTV) system is a proposal for the single-channel transmission of widescreen enhanced-definition television (EDTV) images. A widescreen high-definition source is encoded into a signal that is NTSC-compatible. Existing NTSC receivers display a selected 4:3 portion of the widescreen image with standard NTSC resolution. A new widescreen receiver is proposed, tuned to the same 6 MHz RF channel, that displays a widescreen image with a resolution in excess of 400 lines/picture height in both spatial dimensions. The encoding process is reviewed and the recovery of various signal components to produce the widescreen image in the ACTV receiver is discussed. > <s> BIB001 </s> Overview of Digital Television Development Worldwide <s> 1) ACATS: <s> The North American broadcasting structure and the relationships between the different forms of the media are examined in order to understand the North American approach to high-definition television (HDTV). The processes that are being used to define the appropriate technical systems for HDTV broadcasting are explained. Technical proposals that have been made are summarized. > <s> BIB002 </s> Overview of Digital Television Development Worldwide <s> 1) ACATS: <s> Efficient baseband analog processing proven in hardware for a 525-line progressively scanned HDTV (high-definition TV) image is described. The processing steps are also suitable for 1050-line interlaced HDTV images. Specific terrestrial and cable HDTV packaging formats for baseband processing are presented which exploit efficient NTSC (National Television System Committee) compatibility for program delivery in North America. Analog augmentation formats requiring 4 MHz or 3 MHz of spectrum in addition to the NTSC signal are disclosed that may be suitable for taboo channel utilization. 
All signal components are readily transcodable from a MAC (multiplexed analog component) satellite format utilizing the same generic baseband analog processing method. Augmentation implementation issues associated with an actual hardware prototype are discussed. > <s> BIB003 </s> Overview of Digital Television Development Worldwide <s> 1) ACATS: <s> DigiCipher, an all-digital HDTV (high-definition television) system, with transmission over a single 6 MHz VHF or UHF channel, is described. It provides full HDTV performance with virtually no visible transmission impairments due to noise, multipath, and interference. It offers high picture quality, while the complexity of the decoder is low. Furthermore, low transmitting power can be used, making it ideal for simulcast HDTV transmission using unused or prohibited channels. DigiCipher can also be used for cable and satellite transmission of HDTV. There is no satellite receive dish size penalty (compared to FM-NTSC) in the satellite delivery of DigiCipher HDTV. To achieve the full HDTV performance in a single 6 MHz bandwidth, a highly efficient unique compression algorithm based on DCT (discrete cosine transform) transform coding is used. Through the extensive use of computer simulation, the compression algorithm has been refined and optimized. Computer simulation results show excellent video quality for a variety of HDTV material. For error-free transmission of the digital data, power error correction coding combined with adaptive equalization is used. At a carrier-to-noise ratio of above 19 dB, essentially error-free reception can be achieved. > <s> BIB004 </s> Overview of Digital Television Development Worldwide <s> 1) ACATS: <s> The DigiCipher high-definition television (HDTV) system, an all-digital approach that achieves full HDTV performance with error-free reception in a single 6-MHz television channel is described. 
The DigiCipher HDTV system is based on discrete cosine transform coding and uses motion prediction techniques to eliminate redundancy in the digital signal, channel equalization to defeat multipath, and error correction to defeat noise and interference. The source signal, source coding, channel coding, modulation, and performance of the system are discussed. > <s> BIB005 </s> Overview of Digital Television Development Worldwide <s> 1) ACATS: <s> The three key elements of the advanced digital television (ADTV) system are described. These elements are source coding based on MPEG++ data compression, channel coding based on a prioritized data transport, and modulation techniques based on spectrally shaped QAM. The performance of the ADTV system is discussed. > <s> BIB006 </s> Overview of Digital Television Development Worldwide <s> 1) ACATS: <s> The digital spectrum-compatible high-definition television (DSC-HDTV) system, a digital HDTV simulcast system designed for United States terrestrial broadcasting on currently unassignable channels, is described. The system uses progressively scanned source signals and is characterized by an effective, high-performance video compression system. Compression includes motion compensation with hierarchical block matching and block transform coding with adaptive quantization according to perceptual criteria. Video compression is designed to simplify the receiver decoding; only a few VLSI chips and only one full frame memory are required. The source signal, source coding, channel coding, modulation, and performance of the system are discussed. > <s> BIB007 </s> Overview of Digital Television Development Worldwide <s> 1) ACATS: <s> Abstract In the United States, the Federal Communications Commission (FCC) began a process six years ago to develop a terrestrial high definition television (HDTV) broadcasting standard. 
Early in 1993 a comprehensive report was released by the FCC's Advisory Committee on Advanced Television Service comparing five proposed systems that had undergone extensive testing. Although the report did not pick a ‘winning system’, it did recommend that only digital systems receive further consideration as the United States standard. This paper presents comparisons and conclusions from that report and notes the recent formation of a ‘Grand Alliance’ by the individual proponents of digital systems to propose a single system. <s> BIB008 </s> Overview of Digital Television Development Worldwide <s> 1) ACATS: <s> A public process has been in place in the United States for six years to establish an HDTV terrestrial broadcasting standard. The process, having moved through a planning phase, a competition phase, and an examination phase, has now entered a cooperation phase. Remarkable progress has been made, a testament to the process. During 1994 the American digital HDTV terrestrial broadcasting system will be tested, fully documented, and recommended to the FCC for adoption. > <s> BIB009 </s> Overview of Digital Television Development Worldwide <s> 1) ACATS: <s> The article shows how the Advanced Television Test Center is putting the Grand Alliance's HDTV system through its paces. The testing of prototype hardware is designed to support the proposed US standard for high-definition television (HDTV) terrestrial broadcasting. The standard was developed through the efforts of a group called the Grand Alliance. When the testing on the proposed Grand Alliance system is completed, the prototype will be submitted to the Federal Communications Commission (FCC) for review. > <s> BIB010 </s> Overview of Digital Television Development Worldwide <s> 1) ACATS: <s> The US HDTV process has fostered substantial research and development activity over the last several years.
The Advisory Committee on Advanced Television Service (ACATS) was formed to advise the FCC on the technology and systems suitable for delivery of high definition service over terrestrial broadcast channels. Four digital HDTV systems were tested at the Advanced Television Testing Center. All the systems gave excellent performance, but the results were inconclusive and a plan for a second round of tests was prepared. As each of the four systems was being readied for retest, the proponents of the four individual digital HDTV proposals worked together to define a single HDTV system which incorporated the best technology from the individual systems. The consortium of companies, called the Grand Alliance (GA), announced a combined system and submitted it to ACATS for consideration. After ACATS certification, the GA began construction of a prototype system to submit for laboratory testing at the end of 1994. This paper describes the video compression subsystem and the hardware prototype. The preprocessing, motion estimation, quantization, and rate control subsystems are described. The system uses bidirectional motion compensation, discrete cosine transform, quantization and Huffman coding. The resulting bitstream is input into a transport system which uses fixed length packets. The multiplex transport stream is input into the 8-VSB transmission system. Finally, the specifics of the hardware implementation are described and some simulation results are presented. > <s> BIB011
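The coding chain described in the abstract above (block DCT, then quantization, then entropy coding) can be illustrated with a minimal pure-Python sketch. This is illustrative only: the actual Grand Alliance coder also performs motion compensation and Huffman coding, and its quantization matrices are far more elaborate than the single uniform step size assumed here. The point is the energy compaction that makes the subsequent entropy coding effective: after the transform and coarse quantization, only a few of the 64 coefficients of a smooth block remain nonzero.

```python
import math

# 8x8 block transform, as used by the DigiCipher and Grand Alliance coders.
N = 8

def dct_1d(x):
    """Orthonormal DCT-II of an 8-sample vector."""
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

def dct_2d(block):
    """2-D DCT computed separably: rows first, then columns."""
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d([rows[i][j] for i in range(N)]) for j in range(N)]
    return [[cols[j][i] for j in range(N)] for i in range(N)]

# A smooth 8x8 luminance block: a gentle horizontal ramp.
block = [[16 + 2 * j for j in range(N)] for i in range(N)]
coeffs = dct_2d(block)

# Coarse uniform quantization (step size 8, an arbitrary choice here):
# most small coefficients collapse to zero.
q = 8
quantized = [[round(c / q) for c in row] for row in coeffs]
nonzero = sum(1 for row in quantized for c in row if c != 0)
print(nonzero)  # prints 2: only the DC term and one horizontal AC term survive
```

Runs of zeros like this are exactly what run-length plus Huffman coding exploits to reach the compression ratios a 6-MHz channel demands.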
On 21 February 1987 a "Petition for Notice of Inquiry" was filed with the U.S. FCC by 58 broadcasting organizations and companies requesting that the commission initiate a proceeding to explore issues arising from the introduction of advanced television technologies and their possible impact on the television broadcasting service. At the time, it was generally believed that HDTV could not be broadcast using 6-MHz terrestrial channels. The broadcasting organizations were concerned that only alternative media would be able to deliver HDTV to the viewing public, placing terrestrial broadcasting at a severe disadvantage. The FCC agreed this was a subject of utmost importance and initiated a proceeding (MM Docket no. 87-268) to consider the technical and public policy issues of advanced television systems. The commission subsequently established ACATS, consisting of 25 leaders of the television industry. Richard E. Wiley, a former chairman of the FCC, was named to lead the committee, with hundreds of industry volunteers serving on numerous Advisory Committee subcommittees. Canada and Mexico also participated in the ACATS process. The Advisory Committee established subgroups to study the various issues concerning services, technical parameters, and testing mechanisms required to establish an advanced television system standard. It also established a system evaluation, test, and analysis process. 2) Development of the ATSC DTV Standard: Initially, 23 different systems were proposed to the Advisory Committee BIB002 . 
Mostly analog or hybrid analog/digital approaches, these systems ranged from "improved" systems, which worked within the parameters of the NTSC system to improve the quality of the video; to "enhanced" systems, which added additional information to the signal to provide an improved widescreen picture BIB001 ; and finally to HDTV systems using two 6-MHz channels per program, which were completely new services with substantially higher resolution, a wider picture aspect ratio, and improved sound BIB003 . In January 1990, the FCC effectively rejected all of the proposed approaches by a policy announcement calling for: 1) establishing a full HDTV transmission standard; 2) using only a single 6-MHz channel; and 3) locating it within the existing frequency bands allocated to analog TV broadcasting . In response to the FCC's newly raised bar, a fundamental technological advance emerged when, in May 1990, General Instrument Corporation proposed the first all-digital HDTV system. Their DigiCipher system proposal used the 1050i ("i" for interlaced scanning) video format, motion-compensated video compression, and QAM digital modulation BIB004 , BIB005 . Within seven months, three additional all-digital HDTV systems had been proposed, emerging from their secret development programs at leading research laboratories. Advanced Digital HDTV, proposed by Sarnoff, Thomson, Philips, and NBC, pioneered the use of multiple video formats, MPEG video compression, and packetized data transport BIB006 , . Digital Spectrum Compatible Television, proposed by Zenith and AT&T, pioneered the use of the 720p progressive scan format and vestigial sideband digital modulation BIB007 . The Channel Compatible DigiCipher, proposed by General Instrument and the Massachusetts Institute of Technology (MIT), Cambridge, combined the use of the 720p format with QAM modulation .
Although the proponents highlighted their differences, all of the proposed systems were similar in their use of motion-compensated discrete cosine transform-based video compression to achieve the reduction in data rate necessary for transmission in a single 6-MHz channel . By 1991, the number of competing system proposals had been reduced to six, including the four all-digital HDTV systems. The Advisory Committee developed extensive test procedures to evaluate the performance of the proposed systems and required the proponents to provide fully implemented real-time operating hardware for the testing phase of the process. From July 1991 to October 1992, the six systems were tested by three independent and neutral laboratories working together, following the detailed test procedures prescribed by the Advisory Committee BIB010 (Fig. 2) . The Advanced Television Test Center (ATTC), funded by the broadcasting and consumer electronics industries, conducted transmission performance testing and subjective tests using expert viewers . CableLabs, a research and development consortium of cable television system operators, conducted an extensive series of cable transmission tests as well. The Advanced Television Evaluation Laboratory (ATEL) within the Canadian Communications Research Centre (CRC) conducted subjective assessment tests using nonexpert viewers . In February 1993, a Special Panel of the Advisory Committee convened to review the results of the testing process, and, if possible, to choose a new transmission standard for terrestrial broadcast television to be recommended by the Advisory Committee to the FCC. After a week of deliberations, the Special Panel determined that there would be no further consideration of analog technology, and that based upon analysis of transmission system performance, an all-digital approach was both feasible and desirable.
Although all of the all-digital systems performed well, each of them had one or more aspects that required further improvement. The Special Panel recommended that the proponents of the four all-digital systems be authorized to implement certain modifications they had proposed, and that supplemental tests of these improvements be conducted. The Advisory Committee adopted this recommendation of the Special Panel, but also expressed its willingness to entertain a proposal by the remaining proponents for a single system that incorporated the best elements of the four all-digital systems . a) The Grand Alliance: In response to this invitation, in May 1993, as an alternative to a second round of intense competitive testing, the proponents of the four all-digital systems formed the Digital HDTV Grand Alliance. The members of the Grand Alliance were AT&T, General Instrument, North American Philips, MIT, Thomson Consumer Electronics, the David Sarnoff Research Center, and Zenith Electronics Corporation. In forming the Grand Alliance, the formerly competing proponents agreed to several key system principles, including: 1) accommodating both interlaced and progressive picture formats; 2) basing the video compression on the newly emerging MPEG-2 standard (see Section III-B1); and 3) utilizing a packetized data transport as part of a layered system architecture. However, many difficult choices remained, including whether or not to use bidirectionally predicted B-frames, consideration of possible extensions to the MPEG syntax, which digital audio subsystem to use, and which digital modulation technique to employ.
After a thorough review of the Grand Alliance's initial proposal, the Advisory Committee worked in collaboration with the Grand Alliance during 1993 and early 1994 to finalize the design of the system, which eventually included the use of the 1920 × 1080 interlaced format and the 1280 × 720 progressive format with square pixels, Dolby AC-3 (Dolby Digital) audio, and the use of 8-VSB modulation, which had demonstrated better performance than QAM during comparative transmission subsystem testing. By 1994, the Grand Alliance companies proceeded to build a final prototype system based on specifications approved by the Advisory Committee BIB008 . The prototype Grand Alliance system was built in a modular fashion at various locations. The video encoder was built by AT&T and General Instrument, the video decoder by Philips, the multichannel audio subsystem by Dolby Laboratories, the transport system by Thomson and Sarnoff, and the transmission system by Zenith. The complete system was integrated at Sarnoff Labs BIB011 (Fig. 3) . Testing of the complete Grand Alliance system began in April 1995 and was completed in August of that year. The Advisory Committee testing of the Grand Alliance system was similar to that conducted for the four individual all-digital systems; however, additional tests were conducted to more fully evaluate the proposed system. These new tests included format conversions between the progressive and interlace modes (both directions) and compliance with the MPEG-2 video compression syntax. Subjective audio tests and long-form viewing of video and audio programming were also conducted. Field tests were conducted in Charlotte, NC, utilizing the complete Grand Alliance system. Working closely with the Advisory Committee throughout the U.S. DTV process, the ATSC was responsible for developing and documenting the detailed specifications for the ATV standard based on the Grand Alliance system BIB009 .
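The spectrum arithmetic behind these format choices can be checked with a short script. Assuming 8-bit 4:2:0 sampling (1.5 samples per pixel, an assumption not stated in the text) and the roughly 19.39 Mb/s payload that 8-VSB delivers in a 6-MHz channel, both formats require compression on the order of 35:1 to 40:1, which is why motion-compensated DCT coding was indispensable:

```python
# Rough payload arithmetic for the two Grand Alliance HDTV formats.
VSB_PAYLOAD = 19.39e6  # approximate 8-VSB payload in bits per second

def raw_bitrate(width, height, frames_per_s, bits=8, chroma=1.5):
    """Uncompressed video bit rate; chroma=1.5 assumes 4:2:0 sampling."""
    return width * height * frames_per_s * chroma * bits

formats = {
    "1280 x 720 progressive, 60 fps": raw_bitrate(1280, 720, 60),
    "1920 x 1080 interlaced, 30 fps": raw_bitrate(1920, 1080, 30),
}

for name, rate in formats.items():
    ratio = rate / VSB_PAYLOAD
    print(f"{name}: raw {rate / 1e6:.0f} Mb/s, needs ~{ratio:.0f}:1 compression")
```

Either way the uncompressed source runs to several hundred megabits per second, dwarfing the channel payload by more than an order of magnitude.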
In addition, the ATSC developed the industry consensus around several SDTV formats that were added to the Grand Alliance HDTV system to form a complete DTV standard. Among other things, these SDTV video formats provided for interoperability with existing television standards and supported the convergence of television and computing devices. b) Documenting the DTV Standard: The ATSC assigned the work of documenting the advanced television system standards to specialist groups, dividing the work into five areas of interest: • video, including input signal format and source coding; • audio, including input signal format and source coding; • transport, including data multiplex and channel coding; • RF/transmission, including the modulation subsystem; • receiver characteristics. A steering committee consisting of the chairs of the five specialist groups, the chair and vice-chairs of the Technology Group on Distribution (T3), and liaison among the ATSC, the FCC, and ACATS was established to coordinate development of the documents. Following completion of its work to document the U.S. ATV standard, the ATSC membership approved the specification as the ATSC Digital Television Standard (document number A/53) on 16 September 1995. On 28 November 1995, the FCC Advisory Committee issued its Final Report, providing the following conclusions. • The Grand Alliance system meets the Committee's performance objectives and is better than any of the four original digital ATV systems. • The Grand Alliance system is superior to any known alternative system. • The ATSC Digital Television Standard fulfills all of the requirements for the U.S. ATV broadcasting standard. Accordingly, the Advisory Committee recommended to the FCC that the ATSC DTV Standard be adopted as the standard for digital terrestrial television broadcasting in the United States . 
c) DTV Standard Adopted by the FCC: On 24 December 1996, the commission adopted the major elements of the ATSC Digital Television Standard, mandating its use for digital terrestrial television broadcasts in the United States. In 1997 the FCC adopted companion DTV rules assigning additional 6-MHz channels to approximately 1600 full-power broadcasters in the United States to permit them to offer digital terrestrial broadcast in parallel with their existing analog services during a transition period while consumers made the conversion to digital receivers or set-top boxes. The FCC also adopted a series of rules governing the transition to DTV, including a rather aggressive schedule for the transition. Under the FCC's timetable, stations in the largest U.S. cities were required to go on the air first with digital services, while stations in smaller cities would make the transition later. Under the FCC's plan, more than half of the U.S. population would have access to terrestrial broadcast DTV signals within the first year, all commercial stations would have to be on the air within five years, and all public TV stations would have to be on the air within six years. Analog broadcasts were planned to cease after nine years (on 31 December 2006), assuming that the public had embraced digital TV in adequate numbers by that time. Part of the FCC's motivation in mandating a rapid deployment of digital TV was to hasten the day when it could recapture 108 MHz of invaluable nationwide spectrum that would be freed up by the use of more spectrum-efficient DTV technology. In accordance with the FCC plan, DTV service was launched in the United States on 1 November 1998, and more than 50 percent of the U.S. population had access to terrestrial DTV signals within one year. By 1 March 2003, there were more than 750 DTV stations on the air in the United States, and nearly 5 million DTV displays had been sold. 
By 1 March 2005, there were nearly 1400 DTV stations on the air and over 16 million DTV displays had been sold, including over 2.5 million with integrated ATSC tuners. According to CEA data, consumer adoption of HDTV in the United States is occurring at roughly twice the rate of the adoption of color TV. The ATSC DTV Standard was submitted to Task Group 11/3 of the ITU-R, and in 1997 it was included as System A in ITU Recommendations BT.1300 and BT.1306. 3) Ongoing Work of the ATSC: Since the primary ATSC DTV Standard was adopted in 1995, the ATSC has conducted a wide-ranging program for developing supplemental DTV and DTV-related standards, and for addressing implementation issues that have arisen in the countries that have adopted the ATSC DTV Standard. Highlights of this work include a standard for program and system information protocol (PSIP), a conditional access standard to permit restricted or pay services, a suite of data broadcasting standards, a standardized software environment for digital receivers, a standard for distributed transmitter synchronization, a standard for satellite contribution and distribution services, and a standard for direct-to-home satellite services. All segments of the television industry in North America and elsewhere are now represented within the ATSC, including broadcasters, cable companies, satellite service providers, consumer and professional equipment manufacturers, computer and telecommunications companies, and motion picture and other content providers. A current organizational illustration of the ATSC is given in Fig. 4 . A Board of Directors, formed of members of the parent committee, manages the overall activities and directions of the ATSC. Two main subcommittees exist: • the Technology and Standards Group (TSG); • the Planning Committee (PC). From time to time, the board can establish one or more task force groups to address specific items.
Within the TSG structure, specialist groups are organized into specific areas of interest. Ad hoc groups may be formed for specific issues or projects.
Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> I. INTRODUCTION Authors in the first issue of IEEE Transactions on Industrial <s> This paper outlines opportunities and challenges in the development of next-generation embedded devices, applications, and services, resulting from their increasing intelligence - it plots envisioned future directions for intelligent device networking based on service-oriented high-level protocols, in particular as regards the industrial automation sector - and outlines the approach adopted by the Service Infrastructure for Real-Time Embedded Networked Applications project, as well as the business advantages this approach is expected to provide. <s> BIB001 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> I. INTRODUCTION Authors in the first issue of IEEE Transactions on Industrial <s> Control systems rely heavily on the software that is used to implement them. However, current trends in software engineering are not fully exploited in the development process of complex control systems. In this paper, an approach for the model driven development of distributed control systems (DCSs) is presented. The proposed approach that greatly simplifies the development process adopts the function block construct introduced by the IEC 61499 standard and supports the automatic generation of implementation models for many different execution environments. It favours the deployment and re-deployment of distributed control applications and provides an infrastructure for the transparent exploitation of current software engineering practices. GME, a meta-modelling tool, was utilized to develop Archimedes, an IEC-compliant prototype engineering support system. 
Specific model-to-model transformers have been developed to automate the transformation of FB-based design models to CORBA-component-model based implementation models to demonstrate the applicability of the proposed approach. <s> BIB002 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> I. INTRODUCTION Authors in the first issue of IEEE Transactions on Industrial <s> Industrial automation platforms are experiencing a paradigm shift. New technologies are making their way in the area, including embedded real-time systems, standard local area networks like Ethernet, Wi-Fi and ZigBee, IP-based communication protocols, standard service oriented architectures (SOAs) and Web services. An automation system will be composed of flexible autonomous components with plug & play functionality, self configuration and diagnostics, and autonomic local control that communicate through standard networking technologies. However, the introduction of these new technologies raises important problems that need to be properly solved, one of these being the need to support real-time and quality-of-service (QoS) for real-time applications. This paper describes a SOA enhanced with real-time capabilities for industrial automation. The proposed architecture allows for negotiation of the QoS requested by clients from Web services, and provides temporal encapsulation of individual activities. This way, it is possible to perform an a priori analysis of the temporal behavior of each service, and to avoid unwanted interference among them. After describing the architecture, experimental results gathered on a real implementation of the framework (which leverages a soft real-time scheduler for the Linux kernel) are presented, showing the effectiveness of the proposed solution. The experiments were performed on simple case studies designed in the context of industrial automation applications. 
<s> BIB003 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> I. INTRODUCTION Authors in the first issue of IEEE Transactions on Industrial <s> Integration of Networked Control Systems is always an engineering challenge. Heterogeneous hardware and software environments combined with long lifetime of control systems makes this job particularly troublesome. Modern concepts of integration of automation systems are related to Service Oriented Architecture (SOA). While its suitability is proven in IT systems, SOA has not been adopted yet in commercial Programmable Logic Controllers (PLC), and thus cannot be considered as a solution for integration with already deployed control systems. However, during past years thousands of PLCs with embedded HTTP servers were deployed in the field. These devices, used together with modern PLC that acts as the HTTP client, enable unique opportunity of integration for control systems with soft real-time constraints. In the present study, the performance of PLC-to-PLC communications based on HTTP is evaluated and compared to Modbus TCP. <s> BIB004 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> I. INTRODUCTION Authors in the first issue of IEEE Transactions on Industrial <s> In recent years, requirements for interoperability, flexibility, and reconfigurability of complex automation industry applications have increased dramatically. The adoption of service-oriented architectures (SOAs) could be a feasible solution to meet these challenges. The IEC 61499 standard defines a set of management commands, which provides the capability of dynamic reconfiguration without affecting normal operation. In this paper, a formal model is proposed for the application of SOAs in the distributed automation domain in order to achieve flexible automation systems. Practical scenarios of applying SOA in industrial automation are discussed. 
In order to support the SOA IEC 61499 model, a service-based execution environment architecture is proposed. One main characteristic of flexibility, dynamic reconfiguration, is also demonstrated using a case study example. <s> BIB005
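The management commands mentioned in the abstract above can be illustrated with a toy sketch. The `Device` class and block names below are hypothetical and not tied to any real IEC 61499 runtime, which mediates such commands through a device management function block; the point, however, is the same: one function block instance can be replaced while the others keep running.

```python
# Toy illustration (hypothetical, not a real IEC 61499 runtime) of
# management commands enabling dynamic reconfiguration.
class Device:
    def __init__(self):
        self.blocks = {}  # function block name -> state ("IDLE" or "RUNNING")

    def manage(self, command, name):
        if command == "CREATE":
            self.blocks[name] = "IDLE"
        elif command == "START":
            self.blocks[name] = "RUNNING"
        elif command == "STOP":
            self.blocks[name] = "IDLE"
        elif command == "DELETE":
            if self.blocks.get(name) == "RUNNING":
                raise RuntimeError("stop the block before deleting it")
            del self.blocks[name]
        else:
            raise ValueError(f"unknown management command: {command}")

dev = Device()
for cmd, fb in [("CREATE", "conveyor_ctl"), ("START", "conveyor_ctl"),
                ("CREATE", "drill_ctl"), ("START", "drill_ctl")]:
    dev.manage(cmd, fb)

# Reconfigure: swap out drill_ctl while conveyor_ctl keeps running.
dev.manage("STOP", "drill_ctl")
dev.manage("DELETE", "drill_ctl")
dev.manage("CREATE", "drill_ctl_v2")
dev.manage("START", "drill_ctl_v2")
print(sorted(dev.blocks.items()))
```

This is the behavior the standard's management interface enables: reconfiguration commands target individual instances, so normal operation of the rest of the application is not affected.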
Informatics, ten years ago, described opportunities and challenges in using the service-oriented architecture in manufacturing BIB001 . Since then, several research articles published in the same journal have reported successful or promising results regarding the exploitation of the SOA paradigm in the industrial automation system (IAS) domain, e.g., BIB003 [3] BIB004 . Similar results have been published in other journals too, e.g. BIB002 . In the last issue of the journal, i.e., June 2015, the authors present in BIB005 a formal model for the application of SOA in the distributed automation domain in order to achieve flexibility. They adopt the IEC 61499 standard instead of the IEC 61131 standard widely used in industry, for several reasons they present in the paper. They also describe an execution environment based on the proposed formal model and demonstrate the flexibility of the proposed approach through a dynamic reconfiguration scenario. In this letter the proposed approach is discussed in the context of both the SOA paradigm and the IEC 61499 Function Block model, in an attempt to identify advantages and disadvantages, and its potential for exploitation. The remainder of this letter is organized as follows. Section II discusses published work regarding the exploitation of SOA in the industrial automation domain, in order to set up a framework for the discussion. Section III discusses the SOA-based IEC 61499 model presented in BIB005 . Section IV comments on the SOA-based execution environment architecture and the run-time reconfiguration. Finally, Section V concludes this letter.
Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> II. SOA IN INDUSTRIAL AUTOMATION SYSTEMS <s> This paper outlines opportunities and challenges in the development of next-generation embedded devices, applications, and services, resulting from their increasing intelligence - it plots envisioned future directions for intelligent device networking based on service-oriented high-level protocols, in particular as regards the industrial automation sector - and outlines the approach adopted by the Service Infrastructure for Real-Time Embedded Networked Applications project, as well as the business advantages this approach is expected to provide. <s> BIB001 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> II. SOA IN INDUSTRIAL AUTOMATION SYSTEMS <s> Control systems rely heavily on the software that is used to implement them. However, current trends in software engineering are not fully exploited in the development process of complex control systems. In this paper, an approach for the model driven development of distributed control systems (DCSs) is presented. The proposed approach that greatly simplifies the development process adopts the function block construct introduced by the IEC 61499 standard and supports the automatic generation of implementation models for many different execution environments. It favours the deployment and re-deployment of distributed control applications and provides an infrastructure for the transparent exploitation of current software engineering practices. GME, a meta-modelling tool, was utilized to develop Archimedes, an IEC-compliant prototype engineering support system. Specific model-to-model transformers have been developed to automate the transformation of FB-based design models to CORBA-component-model based implementation models to demonstrate the applicability of the proposed approach. 
<s> BIB002 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> II. SOA IN INDUSTRIAL AUTOMATION SYSTEMS <s> Industrial automation platforms are experiencing a paradigm shift. New technologies are making their way in the area, including embedded real-time systems, standard local area networks like Ethernet, Wi-Fi and ZigBee, IP-based communication protocols, standard service oriented architectures (SOAs) and Web services. An automation system will be composed of flexible autonomous components with plug & play functionality, self configuration and diagnostics, and autonomic local control that communicate through standard networking technologies. However, the introduction of these new technologies raises important problems that need to be properly solved, one of these being the need to support real-time and quality-of-service (QoS) for real-time applications. This paper describes a SOA enhanced with real-time capabilities for industrial automation. The proposed architecture allows for negotiation of the QoS requested by clients from Web services, and provides temporal encapsulation of individual activities. This way, it is possible to perform an a priori analysis of the temporal behavior of each service, and to avoid unwanted interference among them. After describing the architecture, experimental results gathered on a real implementation of the framework (which leverages a soft real-time scheduler for the Linux kernel) are presented, showing the effectiveness of the proposed solution. The experiments were performed on simple case studies designed in the context of industrial automation applications. <s> BIB003 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> II. SOA IN INDUSTRIAL AUTOMATION SYSTEMS <s> Nowadays, Service-oriented Architecture (SOA) paradigm is becoming a broadly deployed standard for business and enterprise integration. 
It continuously spreads across the diverse layers of the enterprise organization and disparate domains of application envisioning a unified communication solution. In the industrial domain, the Evolvable Production System (EPS) paradigm focuses on the identification of guidelines and solutions to support the design, operation, maintenance, and evolution of complete industrial infrastructures. Similarly to several other domains, the growing ubiquity of smart devices is raising important lifecycle concerns such as device setup, control, management, supervision and diagnosis. From initial setup and deployment to system lifecycle monitoring and evolution, each device needs to be taken into account and easily reachable. The present work exploits the association of EPS and SOA paradigms in the pursuit of a common architectural solution to support the different phases of the device lifecycle. The result is a modular, adaptive and open infrastructure forming a complete SOA ecosystem that will make use of the embedded capabilities supported by the proposed device model. The infrastructure components are specified and it is shown how they can interact and be combined to adapt to current system specificity and requirements. Finally, a proof-of-concept prototype deployed in a real industrial production scenario is also detailed and results are presented. <s> BIB004 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> II. SOA IN INDUSTRIAL AUTOMATION SYSTEMS <s> Integration of Networked Control Systems is always an engineering challenge. Heterogeneous hardware and software environments combined with long lifetime of control systems makes this job particularly troublesome. Modern concepts of integration of automation systems are related to Service Oriented Architecture (SOA).
While its suitability is proven in IT systems, SOA has not been adopted yet in commercial Programmable Logic Controllers (PLC), and thus cannot be considered as a solution for integration with already deployed control systems. However, during past years thousands of PLCs with embedded HTTP servers were deployed in the field. These devices, used together with modern PLC that acts as the HTTP client, enable unique opportunity of integration for control systems with soft real-time constraints. In the present study, the performance of PLC-to-PLC communications based on HTTP is evaluated and compared to Modbus TCP. <s> BIB005 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> II. SOA IN INDUSTRIAL AUTOMATION SYSTEMS <s> This paper presents an approach to using semantic web services in managing production processes. In particular, the devices in the production systems considered expose web service interfaces through which they can then be controlled, while semantic web service descriptions formulated in web ontology language for services (OWL-S) make it possible to determine the conditions and effects of invoking the web services. The approach involves three web services that cooperate to achieve production goals using the domain web services. In particular, one of the three services maintains a semantic model of the current state of the system, while another uses the model to compose the domain web services so that they jointly achieve the desired goals. The semantic model of the system is automatically updated based on event notifications sent by the domain services. <s> BIB006 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> II. 
SOA IN INDUSTRIAL AUTOMATION SYSTEMS <s> A novel architecture for the field of industrial automation is described, the goals of which are: 1) computation of optimal production plans; 2) automated usage of the optimized plans; 3) flexibility and reusability at development and maintenance; and 4) seamless transition from current practice to the approach introduced herein. The architecture consists of three main components: 1) a set of OPC unified architecture (UA) servers, which are used to model the information from the device level; 2) a set of services organized into two layers (basic and complex services), which act as a link between the first and the third layer; and 3) a constraint satisfaction problem (CSP) layer for the computation of production plans. Extensive performance tests motivate the choice of the service development framework, and prove the effectiveness of the special adapter software solution for the integration of current devices and the ability of the UA server to manage a high number of UA connections. As a proof-of-concept, the architecture has been tested for a real manufacturing problem composed of four flexible manufacturing systems. The results show that the architecture is able to efficiently control and monitor a real manufacturing process according to an optimized schedule with over 99% of the time spent on the manufacturing. <s> BIB007 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> II. SOA IN INDUSTRIAL AUTOMATION SYSTEMS <s> System-based approach for the development of industrial automation systems. UML modeling of the software part of Mechatronic component. Semi-automatic transformation to object-oriented IEC 61131. Semi-automatic transformation to Java code as alternative to use embedded boards. The case study is developed as a lab exercise.
Industrial automation systems (IASs) are commonly developed using the languages defined by the IEC 61131 standard and are executed on programmable logic controllers (PLCs). Their software part is commonly considered only after the development and integration of mechanics and electronics. However, this approach narrows the solution space for software; thus, it is considered inadequate to address the complexity of today's systems. In this paper, we adopt a system-based approach for the development of IASs. Based on this, the UML model of the software part of the system is extracted from the SysML system model and it is then refined to get the implementation code. Two implementation alternatives are considered to exploit both PLCs and the recent deluge of embedded boards in the market. For PLC targets, the new version of IEC 61131 that supports object-orientation is adopted, while Java is used for embedded boards. The case study used to illustrate our approach was developed as a lab exercise, which aims to introduce to students a number of technologies used to address challenges in the domain of cyber-physical systems and highlights the role of the Internet of Things (IoT) as a glue for their cyber interfaces. <s> BIB008 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> II. SOA IN INDUSTRIAL AUTOMATION SYSTEMS <s> Internet of Things (IoT) has provided a promising opportunity to build powerful industrial systems and applications by leveraging the growing ubiquity of radio-frequency identification (RFID), and wireless, mobile, and sensor devices. A wide range of industrial IoT applications have been developed and deployed in recent years. In an effort to understand the development of IoT in industries, this paper reviews the current research of IoT, key enabling technologies, major IoT applications in industries, and identifies research trends and challenges. 
A main contribution of this review paper is that it summarizes the current state-of-the-art IoT in industries systematically. <s> BIB009
SOA was for several years considered one of the hottest subjects in the IT community. This has changed over the last few years, as IoT has taken its place as the buzzword of choice. As expected, SOA has attracted the attention of researchers in the industrial automation domain, and several research groups have presented work towards the exploitation of the SOA paradigm in IASs. Authors in BIB001 outline opportunities and challenges in using the service-oriented architecture in the manufacturing community. They claim that web services technology constitutes the preferred implementation vehicle for service-oriented architectures, and they discuss the extension of the SOA paradigm into the device space, which will allow device-level networks to be seamlessly integrated with enterprise-level networks. The authors capture the disadvantages of the UPnP (Universal Plug and Play) initiative, already used in industry, in comparison with web services. The key concepts of the SIRENA project [13] , which was part of the ITEA initiative, are described. SIRENA played a pioneering role by applying the SOA paradigm to communications and interworking between components at the device level, and its results were used as a foundation for both the SODA [14] and SOCRADES [15] projects. SODA exploited the framework of SIRENA and defined it in a platform-, language- and network-neutral way, applicable to a wide variety of networked devices in several domains, among them IASs. It also promoted the Devices Profile for Web Services (DPWS) as an OASIS standard and delivered different implementations. An implementation of the DPWS specification based on the J2ME CDC platform was developed by the SOA4D (Service-Oriented Architecture for Devices) [16] open-source initiative for exploiting and adapting SOAP and Web services to the specific constraints of embedded devices.
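The device-space vision sketched above, with devices exposing selected functionality as discoverable services to the layers above, can be illustrated with a minimal sketch. The class and operation names (ServiceRegistry, Device, set_speed) are invented for illustration and are not part of DPWS or the cited projects:

```python
# Minimal sketch of the device-level SOA idea: each device registers
# selected operations as named services that peers can discover and invoke.
# Names (Device, ServiceRegistry, "set_speed") are illustrative, not DPWS API.

class ServiceRegistry:
    def __init__(self):
        self._services = {}          # service name -> callable

    def register(self, name, operation):
        self._services[name] = operation

    def discover(self, name):
        return self._services.get(name)

class Device:
    def __init__(self, registry, device_id):
        self.registry = registry
        self.device_id = device_id

    def expose(self, name, operation):
        # Expose only selected functionality, as in the SIRENA/SOCRADES vision.
        self.registry.register(f"{self.device_id}/{name}", operation)

registry = ServiceRegistry()
drive = Device(registry, "drive-01")
drive.expose("set_speed", lambda rpm: f"speed set to {rpm} rpm")

# An enterprise-level client discovers and invokes the device service.
service = registry.discover("drive-01/set_speed")
print(service(1500))   # -> speed set to 1500 rpm
```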
SOCRADES proposes the use of SOA in the form of Web services in such a way that it results in a unifying application-level communication means across the various levels of the enterprise pyramid, down to the device level, with devices exposing selected functionality to be used by the layers above.
Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review, Kleanthis Thramboulidis, Member, IEEE
In BIB003 , authors present a SOA-based framework for industrial automation enhanced with real-time capabilities. A key characteristic of the proposed framework is that it allows for negotiation of the QoS requested by clients from web services, and provides temporal encapsulation of individual activities. This makes it possible to perform an a priori analysis of the temporal behavior of each service, and to avoid unwanted interference among services. The authors evaluate current implementations of CORBA, such as TAO, that satisfy the requirements of embedded real-time systems against the requirements they have defined, and justify the selection of SOAP instead of CORBA as the basis for their framework. CORBA is one of the first implementations of the SOA concept for distributed systems. Authors in BIB002 present a CORBA Component Model (CCM) implementation of the IEC 61499 run-time environment that exports its services to the environment through the CORBA bus. TAO, a real-time ORB that implements real-time CORBA 1.x, is utilized. Authors in BIB004 present an approach to exploit SOAP in the domain of Evolvable Production Systems. Their approach was inspired by the Devices Profile for Web Services (DPWS) specification, which was extended to address the specific needs of this domain. Programmable Logic Controllers (PLCs) were used as devices. This work is closely related to the SODA and SOCRADES projects. In BIB005 , authors evaluate the performance of PLC-to-PLC communications based on HTTP and compare it to Modbus TCP.
The motivation for this work is the appearance in the market, during the past years, of various PLCs with embedded HTTP servers. These PLCs may be used in collaboration with PLCs that act as HTTP clients, to allow the integration of control systems with soft real-time constraints. The authors claim that while SOA's suitability is proven in IT systems, it has not yet been adopted in commercial PLCs, and thus cannot be considered as a solution for integration with already deployed control systems. Their results indicate that the Modbus TCP protocol is significantly better than HTTP, a result they attribute mainly to the relatively low performance of PLC application code executing the complex string processing required by the HTTP protocol. They also mention that HTTP performs well enough to meet the specified soft real-time constraints of the sample Networked Control System (NCS): 99.9% of the measured HTTP data exchanges complete in less than 700 ms, which, as claimed by the authors, makes HTTP communications an alternative worth evaluating for soft real-time NCSs. Authors in BIB007 describe an open-source SOA architecture for IASs that is composed of three layers. The first layer, which is used to model the information from the device level, is constructed as a set of OPC servers. The second layer, which acts as a link between the first and the third layers, is composed of basic and complex services. The third layer, named the constraint satisfaction problem (CSP) layer, is used for the computation of production plans. They demonstrated and evaluated the proposed framework on Apache CXF with SOAP, on Jersey, which is an implementation of JAX-RS (the Java API for RESTful web services), and on the Java-based framework Apache River. The SOA paradigm is adopted only outside the device boundary; thus, this approach does not consider the determinism and real-time deadlines imposed by device-level requirements.
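The framing difference between the two protocols compared in BIB005 can be illustrated with a small sketch. Note that the authors attribute the measured gap mainly to string processing on the PLC rather than to frame size; the register URL and HTTP header set below are hypothetical:

```python
# Illustrative comparison of on-the-wire request sizes: a Modbus TCP
# "read holding registers" request vs. a roughly equivalent HTTP GET.
# (The reviewed paper attributes HTTP's slowness mainly to string
# processing on the PLC; this sketch shows the framing gap only.)
import struct

# Modbus TCP ADU: MBAP header (7 bytes) + PDU (function code + addr + count)
transaction_id, protocol_id, unit_id = 1, 0, 1
pdu = struct.pack(">BHH", 3, 0x0000, 10)        # FC3, start addr 0, 10 regs
mbap = struct.pack(">HHHB", transaction_id, protocol_id, len(pdu) + 1, unit_id)
modbus_request = mbap + pdu

# A minimal HTTP request asking a PLC web server for the same 10 registers
# (URL and header set are hypothetical).
http_request = (
    "GET /registers?start=0&count=10 HTTP/1.1\r\n"
    "Host: plc.local\r\n"
    "Connection: keep-alive\r\n\r\n"
).encode("ascii")

print(len(modbus_request))   # 12 bytes
print(len(http_request))     # several times larger, before any payload
```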
Authors in present the application of SOA in building automation systems. The presented approach utilizes the DPWS profile, ontologies for representing semantic data, and a composition plan description language to describe context-based composite services in the form of composition plans. They claim that SOAP and WSDL constitute the most popular implementation of SOA, one that is gaining increasing market penetration. The authors evaluate four different implementations of the DPWS, two based on C and two based on Java, and present evaluation results regarding the feasibility and scalability of the proposed system, specifically the performance overhead of the service selection and service execution processes (composition time). Composition time has been measured at 1000 ms for 500 devices on a 2.6-GHz Intel processor with 6 GB of RAM. Semantic web services are utilized by authors in BIB006 to present an approach for managing production processes. Based on this approach, devices expose web service interfaces, formulated in OWL-S, through which they can be controlled. Even though the authors claim that the exposed web service interface of the device is used for controlling the device, thus inserting the framework's overhead into the control loop of the plant, no performance evaluation is given; the authors argue that it is difficult to find similar semantic web service monitoring and composition approaches against which to compare. SOAP has been defined as a lightweight protocol intended for exchanging structured information in a decentralized, distributed environment. Authors in investigate CORBA and SOAP as communication mechanisms to interconnect different systems and argue that "it turns out that a direct and naive use of SOAP would result in a response time degradation of a factor 400 compared to CORBA."
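The idea of composition plans over device services can be sketched as follows. The toy planner below chains services whose declared output concept matches the next service's input concept; it only illustrates the idea, since the actual OWL-S/DPWS machinery of the cited works is far richer, and all service and concept names are invented:

```python
# Toy service composition: each service declares an input and an output
# "concept"; a plan is found by chaining services from a start concept
# to a goal concept. Illustrative only — real OWL-S/DPWS composition
# involves preconditions, effects, and semantic matching.

services = {
    "read_temp":   {"in": "room_id",     "out": "temperature", "fn": lambda r: 21.5},
    "to_setpoint": {"in": "temperature", "out": "setpoint",    "fn": lambda t: t + 1.0},
    "drive_valve": {"in": "setpoint",    "out": "valve_cmd",   "fn": lambda s: f"open to {s}"},
}

def compose(start, goal):
    """Greedy chain: follow the (here unique) service consuming each concept."""
    plan, concept = [], start
    while concept != goal:
        nxt = next(n for n, s in services.items() if s["in"] == concept)
        plan.append(nxt)
        concept = services[nxt]["out"]
    return plan

def execute(plan, value):
    for name in plan:
        value = services[name]["fn"](value)
    return value

plan = compose("room_id", "valve_cmd")
print(plan)                       # -> ['read_temp', 'to_setpoint', 'drive_valve']
print(execute(plan, "room-7"))    # -> open to 22.5
```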
Since then, web services technology has further improved with respect to XML parsers, but not to the level of being considered as glue to interconnect the constituent components of a controller running on the same device. Even the use of HTTP at the device level introduces performance overhead that allows the approach to be considered only for soft real-time NCSs BIB004 . SOAP is also not the preferred technology for the IoT, where the REST architectural model is considered the dominant one BIB008 . SOA is an enabling technology for the IoT, which is becoming increasingly popular, as claimed in BIB009 . However, it should be noted that the four-layer SOA presented in BIB009 for the IoT places the service layer on top of the network layer, which in turn sits on top of the sensing layer [12, Fig. 4 ], for which the universally unique identifier (UUID) is considered a key characteristic of the IoT, enabling the identification and use of the services provided by devices. The authors claim that a device with a UUID can be easily identified and retrieved. The application API and interface are captured at the interface layer, along with contracts. It is also worth noting that the service bus sits on top of the business logic. SOA-based products have already appeared in the industrial systems market in the context of Industry 4.0. For example, TwinCAT from Beckhoff combines IEC 61131-3-based SOA services with OPC UA interoperability [20] .
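The REST-versus-SOAP argument for constrained IoT devices can be made concrete by encoding the same sensor reading both ways; the element names below are illustrative:

```python
import json

# The same sensor reading as (a) a SOAP 1.2-style envelope and (b) a
# compact REST/JSON representation. Element names are illustrative.
reading = {"device": "a1b2", "temperature": 21.5}

soap = (
    '<?xml version="1.0"?>'
    '<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">'
    "<soap:Body><GetReadingResponse>"
    f"<Device>{reading['device']}</Device>"
    f"<Temperature>{reading['temperature']}</Temperature>"
    "</GetReadingResponse></soap:Body></soap:Envelope>"
)
rest = json.dumps(reading)

print(len(soap), len(rest))   # the XML envelope is several times larger
```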
Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> III. THE FORMAL SOA IEC 61499 FUNCTION BLOCK MODEL <s> Service-oriented computing promotes the idea of assembling application components into a network of services that can be loosely coupled to create flexible, dynamic business processes and agile applications that span organizations and computing platforms. An SOC research road map provides a context for exploring ongoing research activities. <s> BIB001 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> III. THE FORMAL SOA IEC 61499 FUNCTION BLOCK MODEL <s> Industrial automation is largely based on PLC-based control systems. PLCs are today mostly programmed in the languages of the IEC 61131 standard which are not ready to meet the new challenges of widely distributed automation systems. Currently, an extension of IEC 61131 which includes object oriented programming as well as the new standard IEC 61499 are available. Moreover, service-oriented paradigms where autonomous and interoperable resources provide their functionalities in the form of services that can be accessed externally by clients without knowing the underlining implementation have been presented in the literature. In the supervisory control theory, methodologies based on formal models have been developed to improve the coordination of concurrent and distributed systems. In this paper, an event-driven approach is proposed to improve the design of industrial control systems using commercial PLCs. At a lower level, basic sequences are coded in elementary software objects, called function blocks, providing their functionalities as services. At an upper level, a Petri Net (PN) controller forces the execution of such services according to desired sequences, while by a PN supervisor constraints on the sequences are satisfied. 
<s> BIB002 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> III. THE FORMAL SOA IEC 61499 FUNCTION BLOCK MODEL <s> System-based approach for the development of industrial automation systems. UML modeling of the software part of Mechatronic component. Semi-automatic transformation to object-oriented IEC 61131. Semi-automatic transformation to Java code as alternative to use embedded boards. The case study is developed as a lab exercise. Industrial automation systems (IASs) are commonly developed using the languages defined by the IEC 61131 standard and are executed on programmable logic controllers (PLCs). Their software part is commonly considered only after the development and integration of mechanics and electronics. However, this approach narrows the solution space for software; thus, it is considered inadequate to address the complexity of today's systems. In this paper, we adopt a system-based approach for the development of IASs. Based on this, the UML model of the software part of the system is extracted from the SysML system model and it is then refined to get the implementation code. Two implementation alternatives are considered to exploit both PLCs and the recent deluge of embedded boards in the market. For PLC targets, the new version of IEC 61131 that supports object-orientation is adopted, while Java is used for embedded boards. The case study used to illustrate our approach was developed as a lab exercise, which aims to introduce to students a number of technologies used to address challenges in the domain of cyber-physical systems and highlights the role of the Internet of Things (IoT) as a glue for their cyber interfaces. <s> BIB003 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> III.
THE FORMAL SOA IEC 61499 FUNCTION BLOCK MODEL <s> In recent years, requirements for interoperability, flexibility, and reconfigurability of complex automation industry applications have increased dramatically. The adoption of service-oriented architectures (SOAs) could be a feasible solution to meet these challenges. The IEC 61499 standard defines a set of management commands, which provides the capability of dynamic reconfiguration without affecting normal operation. In this paper, a formal model is proposed for the application of SOAs in the distributed automation domain in order to achieve flexible automation systems. Practical scenarios of applying SOA in industrial automation are discussed. In order to support the SOA IEC 61499 model, a service-based execution environment architecture is proposed. One main characteristic of flexibility-dynamic reconfiguration-is also demonstrated using a case study example. <s> BIB004
SOA was introduced as an approach to designing a software system that provides services, either to end-user applications or to other services distributed in a network, via published and discoverable interfaces BIB001 . Authors in [7, Sec. 1] admit that SOA has been introduced to facilitate the creation of distributed networked computer systems. However, the formal model they propose utilizes SOA for the integration of the software modules that constitute a controller running on a single computation node (device). Based on Definition 4, Function Block Instances (FBIs) are service providers, since each input event of an FBI is considered a provided service. A basic Function Block (FB) is considered to provide atomic services (Definition 2). Moreover, based on Definition 5, there is a service repository in every IEC 61499 resource in which FBIs register their provided services, as shown in Fig. 1 [7, Fig. 1 ]. This is performed by having each FBI register its service definitions, or service contracts, as the authors put it. WSDL is used by the authors in [7, Sec. V] to define service contracts, and the SOAP protocol is used to implement the interactions among FBIs in the same processing node.
Fig. 1 . The basic structure adopted in the formal SOA-based IEC 61499 model BIB004 .
To the best of our knowledge, this is the first attempt to utilize SOAP and WSDL to integrate the objects or components that constitute controller software executed on a single device. In BIB002 , authors describe an approach for the integration of coordinating OO IEC 61131 FBs with FBs that encapsulate plant resources, such as silos and pipes, adopting the event-based model of IEC 61499. They consider their approach a service-oriented architecture. This approach is further discussed in BIB003 .
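The reading of the formal model described above — each input event of an FBI registered as a service in the resource repository, located and invoked by a requester — can be sketched in a few lines, with plain dictionaries and callables standing in for WSDL contracts and SOAP messages (instance names such as CTU1 are illustrative):

```python
# Sketch of the reviewed formal model's reading of IEC 61499: each input
# event of a function block instance (FBI) is a provided "service",
# registered per resource. Plain dictionaries stand in for WSDL/SOAP.

class Resource:
    def __init__(self):
        self.repository = {}     # "fbi.event" -> handler

class FBI:
    def __init__(self, resource, name):
        self.resource, self.name = resource, name

    def provide(self, event, handler):
        # Register this input event as a service in the resource repository.
        self.resource.repository[f"{self.name}.{event}"] = handler

    def request(self, service, *args):
        # Locate the provider via the repository, then "send a message".
        return self.resource.repository[service](*args)

res = Resource()
counter = FBI(res, "CTU1")
state = {"cv": 0}
counter.provide("CU", lambda: state.__setitem__("cv", state["cv"] + 1))

client = FBI(res, "CTRL1")
client.request("CTU1.CU")
client.request("CTU1.CU")
print(state["cv"])   # -> 2
```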
It should be noted that the basic FBs defined by the IEC 61499 standard include, among others, FBs for performing logic operations such as AND, OR, and XOR, as well as FBs for merging (E_MERGE) and delaying (E_DELAY) events. Under the proposed approach, all these FBs are integrated using SOAP, WSDL, and the WS-Discovery protocol. FBIs register their services in the resource repository for other FBIs of the same device to discover and use. Part 3 of the IEC 61499 standard recommends that practitioners avoid even CORBA, arguing that an implementation of the features specified by its model would be too expensive, and its performance "would almost always be too slow, for use in a distributed real-time industrial-process measurement and control system (IPMCS)." With Definition 3, the authors consider the services provided by composite FBs (CFBs) to be composite, based on the fact that a CFB is defined as a network of FBIs. Thus, a CFB is defined as an aggregation of services, possibly using BPEL or a similar language for orchestrating the smaller, fine-grained services provided by the CFB's constituent FBIs. The relation of this language to the definition process of composite FB types is not discussed. The authors consider [BIB004 , Definition 5] the event and data connections among FBIs as one-way communication, and consider response messages sent by the provider FBI to be implemented by a service that the service-requestor FBI provides for this purpose. It is assumed that the motivation for this is Definition 4 and the graphical notation of the FBN, which captures the response to a request as a separate event connection along with the corresponding data connections. This raises, among others, the question of how these two independent services would be combined in a service definition using WSDL.
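To make the granularity argument concrete, the following sketch wires two of the basic FBs named above (a logic AND and an E_MERGE) with direct event connections implemented as plain callables; the work per event is a handful of instructions, which is what a SOAP/WSDL round-trip per event would be wrapping:

```python
# The basic FBs mentioned above (logic AND, E_MERGE) wired with direct
# event connections. Each "connection" is just a Python callable — the
# point being how little work a per-event SOAP round-trip would wrap.

class AndFB:
    def __init__(self, on_out):
        self.in1 = self.in2 = False
        self.on_out = on_out            # event connection for EO

    def req(self):                      # input event REQ
        self.on_out(self.in1 and self.in2)

class EMergeFB:
    def __init__(self, on_out):
        self.on_out = on_out

    def ei1(self): self.on_out()        # either input event
    def ei2(self): self.on_out()        # fires the single output event

results = []
merged = EMergeFB(lambda: results.append("EO"))
gate = AndFB(lambda q: results.append(q))

gate.in1, gate.in2 = True, True
gate.req()
merged.ei1()
merged.ei2()
print(results)   # -> [True, 'EO', 'EO']
```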
Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> A. The Execution Environment <s> The function block (FB) construct has been adopted by recent IEC standards for the design of reusable, interoperable, distributed control applications. In this paper, an approach to exploit the benefits of this paradigm in batch process control is presented. A hybrid approach that integrates the FB model with the Unified Modeling Language is exploited and customized to the batch domain taking as starting point the industrially accepted SP88 standard. A toolset was customized to support the presented approach and demonstrate the applicability of the IEC61499 function block model in batch processing. Research experience with industrial engineers in the context of IEC 61499 and SP88 is used to motivate a development methodology that is sufficiently straightforward and efficient. The Java-based IEC61499-compliant run-time environment used for the execution of the control application is briefly described. <s> BIB001 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> A. The Execution Environment <s> Model-driven engineering (MDE) is proposed as the next revolution in embedded-system development. It is a promising paradigm that provides the developer the abstraction level required to focus on the specific application and not on the underlying computing environments. Real-time (RT) Linux variants constitute a mature and stable platform that can be considered a strong candidate for RT applications in the control and automation domain. In this paper, a framework for the MDE of industrial automation systems is presented. 
This framework exploits the following: 1) the function block, a well-known paradigm in the industrial automation domain, to provide the control engineer with the ability to construct its systems as aggregations of existing components and 2) the real-time Linux to execute the automatically synthesized executable. A prototype runtime environment is described, and a laboratory example application using a robotic arm is used to demonstrate the applicability of the proposed framework. Performance measurements are very promising, even for hard RT control applications. <s> BIB002 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> A. The Execution Environment <s> In recent years, requirements for interoperability, flexibility, and reconfigurability of complex automation industry applications have increased dramatically. The adoption of service-oriented architectures (SOAs) could be a feasible solution to meet these challenges. The IEC 61499 standard defines a set of management commands, which provides the capability of dynamic reconfiguration without affecting normal operation. In this paper, a formal model is proposed for the application of SOAs in the distributed automation domain in order to achieve flexible automation systems. Practical scenarios of applying SOA in industrial automation are discussed. In order to support the SOA IEC 61499 model, a service-based execution environment architecture is proposed. One main characteristic of flexibility-dynamic reconfiguration-is also demonstrated using a case study example. <s> BIB003
Authors in [7, Sec. V] describe an execution environment for IEC 61499 based on the formal model defined in the same paper. They present the key constructs of the execution environment using a class diagram [7, Fig. 2] and [7, Fig. 3 ], based on the text of [BIB003 , Sec. IV]. From this diagram, and from Definition 1, which the authors utilize to implement every class of the diagram as a service, it follows that the execution environment services, and the whole execution environment, are implemented using FB types. The resource is implemented as a service repository, but it keeps a list of FB types and FB instances. When a request for creating a new FB instance is received by the resource manager, one instance of the FB Service class is created and just one endpoint (that of the FB Service instance) is registered in the repository of the resource, even when the corresponding FB type provides more than one service, which is the common case. The resource instance contains information not only on the services provided by each instance, but also on the output events emitted by the FBI, as well as on its data inputs and data outputs. This characterizes the resource repository as an FB instance repository, and not a service repository as claimed by the authors. From the definition of dynamic services, it follows that not only input events but also EC state algorithms are mapped to services. Data services are also defined to access internal variables of the FB instance. Service endpoints are likewise used for EC state actions, EC algorithms, and EC actions, and all of these are stored in the service repository, which means that SOAP and XML overhead is introduced even into ECC execution time. Moreover, services are registered in the repository for every constituent FBI of a composite FB, which means that the overhead of service utilization is also introduced at the composite FB level. The WS-Discovery protocol is utilized for service discovery from the resource repository.
Even though the approach focuses on distributed systems, the relation of the resource repository to the device-external one, which would presumably be used to register the device's exposed services, is not discussed. For the presented execution environment, the authors assume that EC algorithms are normally written in IEC 61131 languages, mainly ST and LD. However, this raises the question of portability, which was considered one of the main factors for selecting IEC 61499 over IEC 61131, given the claim in BIB003 that the latter does not provide code portability among PLC vendors. On the other hand, it is claimed that code portability is achieved for FB library elements due to the use of their XML-based representation. It should be noted that PLCopen has defined an XML-based representation for IEC 61131. The authors claim [BIB003 , Sec. 1] that interoperability can be achieved through the use of the publish/subscribe communication model. However, the use of the publish/subscribe communication pattern for interaction assumes that publishers and subscribers have already addressed interoperability issues. The publish/subscribe communication pattern has been successfully utilized in IEC 61499 execution environments to obtain flexibility at the device level, e.g., BIB002 BIB001 .
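The publish/subscribe pattern referred to above can be sketched as a tiny in-process broker of the kind used to decouple FB instances in such runtimes (topic names are illustrative):

```python
# Tiny in-process publish/subscribe broker of the kind used to decouple
# FB instances in IEC 61499 runtimes. Topic names are illustrative.

class Broker:
    def __init__(self):
        self.topics = {}                       # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.topics.setdefault(topic, []).append(callback)

    def publish(self, topic, payload=None):
        for cb in self.topics.get(topic, []):
            cb(payload)

broker = Broker()
log = []
broker.subscribe("conveyor/started", lambda p: log.append(("hmi", p)))
broker.subscribe("conveyor/started", lambda p: log.append(("logger", p)))

# Publisher and subscribers never reference each other directly, which is
# what gives the runtime its flexibility when FBIs are added or removed.
broker.publish("conveyor/started", {"speed": 0.8})
print(log)   # both subscribers received the event
```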
Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> B. Run-time reconfiguration <s> Model-driven engineering (MDE) is proposed as the next revolution in embedded-system development. It is a promising paradigm that provides the developer the abstraction level required to focus on the specific application and not on the underlying computing environments. Real-time (RT) Linux variants constitute a mature and stable platform that can be considered a strong candidate for RT applications in the control and automation domain. In this paper, a framework for the MDE of industrial automation systems is presented. This framework exploits the following: 1) the function block, a well-known paradigm in the industrial automation domain, to provide the control engineer with the ability to construct its systems as aggregations of existing components and 2) the real-time Linux to execute the automatically synthesized executable. A prototype runtime environment is described, and a laboratory example application using a robotic arm is used to demonstrate the applicability of the proposed framework. Performance measurements are very promising, even for hard RT control applications. <s> BIB001 </s> Service-Oriented Architecture in Industrial Automation Systems - The case of IEC 61499: A Review <s> B. Run-time reconfiguration <s> In recent years, requirements for interoperability, flexibility, and reconfigurability of complex automation industry applications have increased dramatically. The adoption of service-oriented architectures (SOAs) could be a feasible solution to meet these challenges. The IEC 61499 standard defines a set of management commands, which provides the capability of dynamic reconfiguration without affecting normal operation. In this paper, a formal model is proposed for the application of SOAs in the distributed automation domain in order to achieve flexible automation systems. 
Practical scenarios of applying SOA in industrial automation are discussed. In order to support the SOA IEC 61499 model, a service-based execution environment architecture is proposed. One main characteristic of flexibility-dynamic reconfiguration-is also demonstrated using a case study example. <s> BIB002
Run-time reconfiguration at the device level, which is considered one benefit of the proposed architecture, imposes strict real-time constraints and requires complex algorithms not shown in BIB002 . The described case study, even though it considers the deletion and creation of FB types, includes actions for deleting and creating event and data connections [7, Table I ]. The creation of event connections among FBIs has to be related to the publish/discover-based interaction on which the proposed architecture is built. The resource management model described by IEC 61499 to support the IDE in the deployment process is not consistent with the publish/discover model that the authors have adopted for the construction of the formal model [BIB002 , Sec. IV]. For example, the IEC 61499 management command "CREATE event connection" expresses a different model from the publish/discover pattern: a coordinator, the IDE, enforces the construction of an event connection between specific FBIs. By contrast, based on the publish/discover pattern and as the authors claim, when an FBI "intends to invoke a particular logic from a service provider, the requested service will be located by the service repository for the service requester." Based on this, "the service requester can access the service provider via sending messages." It is also interesting to note the feature of the framework that allows the control engineer to add new functionality to an FB instance along with new services. This allows the control engineer, according to the authors, to define FB types on the fly during normal operation and embed instances of them in the control logic. Regarding the performance evaluation, the proposed execution environment is compared against FORTE, which is based on method calls for FBI invocation. FORTE adopts completely different execution semantics from those adopted in the proposed execution environment.
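The contrast drawn above can be made explicit by sketching the IEC 61499-style management commands as operations on an explicit network model, where a coordinator names both ends of every event connection — unlike publish/discover, where the requester locates the provider through a repository. The command spellings and instance names below are illustrative, not the standard's exact syntax:

```python
# Sketch of IEC 61499-style management commands operating on an explicit
# FB network model. A coordinator (the IDE) names both ends of each event
# connection — unlike publish/discover, where the requester finds the
# provider through a repository.

class NetworkModel:
    def __init__(self):
        self.fb_instances = set()
        self.event_connections = set()   # (source_fbi.event, dest_fbi.event)

    def manage(self, command, *args):
        if command == "CREATE_FB":
            self.fb_instances.add(args[0])
        elif command == "DELETE_FB":
            self.fb_instances.discard(args[0])
            # Deleting an FBI also removes connections touching it.
            self.event_connections = {
                c for c in self.event_connections
                if not any(end.startswith(args[0] + ".") for end in c)
            }
        elif command == "CREATE_CONNECTION":
            self.event_connections.add(args)
        elif command == "DELETE_CONNECTION":
            self.event_connections.discard(args)

net = NetworkModel()
net.manage("CREATE_FB", "CTU1")
net.manage("CREATE_FB", "CTRL1")
net.manage("CREATE_CONNECTION", "CTRL1.CNF", "CTU1.CU")
net.manage("DELETE_FB", "CTU1")      # also removes its connections
print(net.event_connections)         # -> set()
```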
An overhead of 0.4 ms has been measured per persistent connection, which increases to 2.4 ms for temporary connections. Clearly, this last overhead has to be paid for every connection of each new FB instance that is added to the network during run-time reconfiguration. This probably results in more than 50 ms (this time is not reported in BIB002 ) from the deletion of the old FB type until the end of the specific reconfiguration action described in the case study. An execution environment for IEC 61499 that supports run-time reconfiguration with detailed performance measurements is presented in BIB001 . Based on this: a) the average FB instance creation time is 20 µs, and b) the creation of an event connection takes an average of 1.87 µs, while its deletion takes an average of 1.8 µs, both with a standard deviation of about 0.5 µs. It should be noted that RTNet is used as the communication mechanism instead of web services and SOAP. SOAP was developed to interconnect functionalities expressed in terms of software developed on heterogeneous hardware and/or software platforms distributed over the Internet. These two requirements, i.e., distribution and heterogeneity, do not exist in the single-device IEC 61499 execution environment; thus the performance overhead and the complexity that its adoption introduces come without benefit.
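The inference that the reconfiguration action likely exceeds 50 ms can be checked with a rough back-of-envelope calculation using the per-connection figures quoted above. The connection count used below is a hypothetical assumption for illustration only, since BIB002 does not report it:

```python
# Back-of-envelope estimate of connection-setup overhead during run-time
# reconfiguration, using the per-connection figures quoted above.
PERSISTENT_MS = 0.4   # measured overhead per persistent connection
TEMPORARY_MS = 2.4    # measured overhead per temporary connection

def reconfiguration_overhead_ms(n_connections, per_connection_ms=TEMPORARY_MS):
    """Total connection-setup overhead when adding one FB instance.

    n_connections is NOT reported in the paper; it is a free parameter here.
    """
    return n_connections * per_connection_ms

# With roughly 21 temporary connections the setup cost alone already
# exceeds 50 ms, before counting FB type deletion/creation itself.
assert reconfiguration_overhead_ms(21) > 50
```

This ignores all other costs (FB type deletion, instance creation, scheduling), so the 50 ms figure is a lower-bound style estimate under the stated assumption.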
Literature survey on low rank approximation of matrices <s> Introduction <s> Let A be a given m×n real matrix with m ≥ n and of rank n and b a given vector. We wish to determine a vector x̂ such that $$\parallel b - A\hat x\parallel = \min,$$ where $\parallel \cdot \parallel$ indicates the Euclidean norm. Since the Euclidean norm is unitarily invariant, $$\parallel b - Ax\parallel = \parallel c - QAx\parallel,$$ where c = Qb and $Q^T Q = I$. We choose Q so that $$QA = R = \begin{pmatrix} \tilde R \\ 0 \end{pmatrix} \quad (1)$$ where $\tilde R$ is an n × n upper triangular matrix and the zero block is (m − n) × n. Clearly, $$\hat x = \tilde R^{-1}\tilde c,$$ where $\tilde c$ denotes the first n components of c. <s> BIB001 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> This paper is concerned with least squares problems when the least squares matrix A is near a matrix that is not of full rank. A definition of numerical rank is given. It is shown that under certain conditions when A has numerical rank r there is a distinguished r-dimensional subspace of the column space of A that is insensitive to how it is approximated by r independent columns of A. The consequences of this fact for the least squares problem are examined. Algorithms are described for approximating the stable part of the column space of A. <s> BIB002 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> Abstract : An algorithm is presented for computing a column permutation Π and a QR-factorization AΠ = QR of an m by n (m ≥ n) matrix A such that a possible rank deficiency of A will be revealed in the triangular factor R having a small lower right block. For low rank deficient matrices, the algorithm is guaranteed to reveal the rank of A and the cost is only slightly more than the cost of one regular QR-factorization.
A posteriori upper and lower bounds on the singular values of A are derived and can be used to infer the numerical rank of A. Keywords: QR-Factorization; Rank deficient matrices; Least squares computation; Subset selection; Rank; Singular values. <s> BIB003 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> 1. A Review of Some Required Concepts from Core Linear Algebra. 2. Floating Point Numbers and Errors in Computations. 3. Stability of Algorithms and Conditioning of Problems. 4. Numerically Effective Algorithms and Mathematical Software. 5. Some Useful Transformations in Numerical Linear Algebra and Their Applications. 6. Numerical Solutions of Linear Systems. 7. Least Squares Solutions to Linear Systems. 8. Numerical Matrix Eigenvalue Problems. 9. The Generalized Eigenvalue Problem. 10. The Singular Value Decomposition (SVD). 11. A Taste of Round-Off Error Analysis. Appendix A: A Brief Introduction to MATLAB. Appendix B: MATLAB and Selected MATLAB Programs. <s> BIB004 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> Given an m × n matrix M with m > n, it is shown that there exists a permutation Π and an integer k such that the QR factorization $$M\Pi = Q\begin{pmatrix} A_k & B_k \\ 0 & C_k \end{pmatrix}$$ reveals the numerical rank of M: the k × k upper-triangular matrix $A_k$ is well conditioned, $\|C_k\|_2$ is small, and $B_k$ is linearly dependent on $A_k$ with coefficients bounded by a low-degree polynomial in n. Existing rank-revealing QR (RRQR) algorithms are related to such factorizations and two algorithms are presented for computing them. The new algorithms are nearly as efficient as QR with column pivoting for most problems and take O(mn²) floating-point operations in the worst case. <s> BIB005 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> We discuss a multilinear generalization of the singular value decomposition.
There is a strong analogy between several properties of the matrix and the higher-order tensor decomposition: uniqueness, the link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed. We investigate how tensor symmetries affect the decomposition and propose a multilinear generalization of the symmetric eigenvalue decomposition for pair-wise symmetric tensors. <s> BIB006 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> The mosaic-skeleton method was bred in a simple observation that rather large blocks in very large matrices coming from integral formulations can be approximated accurately by a sum of just a few rank-one matrices (skeletons). These blocks might correspond to a region where the kernel is smooth enough, and anyway it can be a region where the kernel is approximated by a short sum of separable functions (functional skeletons). Since the effect of approximations is like that of having small-rank matrices, we find it pertinent to speak of mosaic ranks of a matrix, which turn out to be pretty small for many nonsingular matrices. <s> BIB007 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> Numerical techniques for data analysis and feature extraction are discussed using the framework of matrix rank reduction. The singular value decomposition (SVD) and its properties are reviewed, and the relation to latent semantic indexing (LSI) and principal component analysis (PCA) is described. Methods that approximate the SVD are reviewed. A few basic methods for linear regression, in particular the partial least squares (PLS) method, are presented, and analyzed as rank reduction methods. Methods for feature extraction, based on centroids and the classical linear discriminant analysis (LDA), as well as an improved LDA based on the generalized singular value decomposition (LDA/GSVD) are described.
The effectiveness of these methods is illustrated using examples from information retrieval and two-dimensional representation of clustered data. <s> BIB008 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> A procedure is reported for the compression of rank-deficient matrices. A matrix A of rank k is represented in the form $A = U \circ B \circ V$, where B is a $k\times k$ submatrix of A, and U, V are well-conditioned matrices that each contain a $k\times k$ identity submatrix. This property enables such compression schemes to be used in certain situations where the singular value decomposition (SVD) cannot be used efficiently. Numerical examples are presented. <s> BIB009 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> Recently several results appeared that show significant reduction in time for matrix multiplication, singular value decomposition as well as linear ($\ell_2$) regression, all based on data dependent random sampling. Our key idea is that low dimensional embeddings can be used to eliminate data dependence and provide more versatile, linear time pass efficient matrix computation. Our main contribution is summarized as follows. --Independent of the recent results of Har-Peled and of Deshpande and Vempala, one of the first -- and to the best of our knowledge the most efficient -- relative error $(1+\epsilon)\|A - A_k\|_F$ approximation algorithms for the singular value decomposition of an m × n matrix A with M non-zero entries that requires 2 passes over the data and runs in time $O\left(\left(M\left(\frac{k}{\epsilon} + k\log k\right) + (n + m)\left(\frac{k}{\epsilon} + k\log k\right)^2\right)\log \frac{1}{\delta}\right)$ --The first $o(nd^2)$ time $(1+\epsilon)$ relative error approximation algorithm for n × d linear ($\ell_2$) regression.
--A matrix multiplication and norm approximation algorithm that easily applies to implicitly given matrices and can be used as a black box probability boosting tool. <s> BIB010 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> We describe two recently proposed randomized algorithms for the construction of low-rank approximations to matrices, and demonstrate their application (inter alia) to the evaluation of the singular value decompositions of numerically low-rank matrices. Being probabilistic, the schemes described here have a finite probability of failure; in most cases, this probability is rather negligible ($10^{-17}$ is a typical value). In many situations, the new procedures are considerably more efficient and reliable than the classical (deterministic) ones; they also parallelize naturally. We present several numerical examples to illustrate the performance of the schemes. <s> BIB011 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed---either explicitly or implicitly---to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization.
In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. <s> BIB012 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> Abstract Given an m × n matrix A and a positive integer k, we describe a randomized procedure for the approximation of A with a matrix Z of rank k. The procedure relies on applying $A^T$ to a collection of l random vectors, where l is an integer equal to or slightly greater than k; the scheme is efficient whenever A and $A^T$ can be applied rapidly to arbitrary vectors. The discrepancy between A and Z is of the same order as $\sqrt{lm}$ times the (k + 1)st greatest singular value $\sigma_{k+1}$ of A, with negligible probability of even moderately large deviations. The actual estimates derived in the paper are fairly complicated, but are simpler when l − k is a fixed small nonnegative integer. For example, according to one of our estimates for l − k = 20, the probability that the spectral norm $\|A - Z\|$ is greater than $10\sqrt{(k + 20)m}\,\sigma_{k+1}$ is less than $10^{-17}$. The paper contains a number of estimates for $\|A - Z\|$, including several that are stronger (but more detailed) than the preceding example; some of the estimates are effectively independent of m. Thus, given a matrix A of limited numerical rank, such that both A and $A^T$ can be applied rapidly to arbitrary vectors, the scheme provides a simple, efficient means for constructing an accurate approximation to a singular value decomposition of A. Furthermore, the algorithm presented here operates reliably independently of the structure of the matrix A. The results are illustrated via several numerical examples.
<s> BIB013 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> In this article we present and analyze a new scheme for the approximation of multivariate functions (d=3,4) by sums of products of univariate functions. The method is based on the Adaptive Cross Approximation (ACA) initially designed for the approximation of bivariate functions. To demonstrate the linear complexity of the schemes, we apply it to large-scale multidimensional arrays generated by the evaluation of functions. <s> BIB014 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. 
The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students. <s> BIB015 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> Abstract In the present paper, we give a survey of the recent results and outline future prospects of the tensor-structured numerical methods in applications to multidimensional problems in scientific computing. The guiding principle of the tensor methods is an approximation of multivariate functions and operators relying on a certain separation of variables. Along with the traditional canonical and Tucker models, we focus on the recent quantics-TT tensor approximation method that allows to represent N-d tensors with log-volume complexity, O ( d log N ). We outline how these methods can be applied in the framework of tensor truncated iteration for the solution of the high-dimensional elliptic/parabolic equations and parametric PDEs. Numerical examples demonstrate that the tensor-structured methods have proved their value in application to various computational problems arising in quantum chemistry and in the multi-dimensional/parametric FEM/BEM modeling—the tool apparently works and gives the promise for future use in challenging high-dimensional applications. <s> BIB016 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> Matrix approximation is a common tool in machine learning for building accurate prediction models for recommendation systems, text mining, and computer vision. A prevalent assumption in constructing matrix approximations is that the partially observed matrix is of low-rank. We propose a new matrix approximation model where we assume instead that the matrix is only locally of low-rank, leading to a representation of the observed matrix as a weighted sum of low-rank matrices. We analyze the accuracy of the proposed local low-rank modeling. 
Our experiments show improvements in prediction accuracy in recommendation tasks. <s> BIB017 </s> Literature survey on low rank approximation of matrices <s> Introduction <s> During the last years, low-rank tensor approximation has been established as a new tool in scientific computing to address large-scale linear and multilinear algebra problems, which would be intractable by classical techniques. This survey attempts to give a literature overview of current developments in this area, with an emphasis on function-related tensors. <s> BIB018
The low rank matrix approximation problem is that of approximating a matrix by one whose rank is less than that of the original matrix. The goal is to obtain a more compact representation of the data with limited loss of information. Let A be an m × n matrix; a rank k approximation of A has the form A m×n ≈ B m×k C k×n . The low rank approximation of a matrix can be stored and manipulated more economically than the matrix itself: one can see from the above approximation that only k(m + n) entries have to be stored instead of the mn entries of the original matrix A. Low rank approximations of matrices appear in many applications. The list of applications includes image processing [166] , data mining , noise reduction, seismic inversion, latent semantic indexing [167] , principal component analysis (PCA) BIB008 , machine learning BIB017 BIB015 , regularization of ill-posed problems, statistical data analysis, DNA microarray data, web search models and so on. The low rank approximation of matrices also plays a very important role in tensor decompositions BIB018 BIB016 BIB006 . Because of the interplay of rank and error, there are basically two types of problems related to the low rank approximation of a matrix: the fixed-precision approximation problem and the fixed-rank approximation problem (we do not use this nomenclature in this article). In the fixed-precision approximation problem, for a given matrix A and a given tolerance ε, one wants to find a matrix B with rank k = k(ε) such that ‖A − B‖ ≤ ε in an appropriate matrix norm. On the contrary, in the fixed-rank approximation problem, one looks for a matrix B with fixed rank k and an error ‖A − B‖ as small as possible. The low rank approximation problem is well studied in the numerical linear algebra community. There are very classical matrix decompositions which give low rank approximations. The singular value decomposition (SVD) is the best known. It has wide applications in many areas.
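The storage count k(m + n) versus mn can be checked with a small NumPy sketch (the sizes below are arbitrary illustrative choices):

```python
import numpy as np

# Storage for a rank-k factorization A ~ B C versus the full matrix.
m, n, k = 1000, 800, 20
rng = np.random.default_rng(0)
B = rng.standard_normal((m, k))
C = rng.standard_normal((k, n))
A = B @ C                               # an m x n matrix of rank at most k

assert B.size + C.size == k * (m + n)   # 36,000 stored entries in the factors
assert A.size == m * n                  # versus 800,000 entries for A itself
assert np.linalg.matrix_rank(A) <= k
```

For a general matrix the factors B and C come from one of the decompositions discussed next, the truncated SVD being the classical choice.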
It provides the true rank and gives the best low rank approximation of a matrix BIB004 . QR decomposition with column pivoting [144] , rank revealing QR factorization BIB003 BIB001 BIB002 BIB005 and the interpolative decomposition BIB009 BIB011 are other useful techniques. These techniques require O(mnk) arithmetic operations to obtain a rank k approximation and at least k passes (the number of times that the entire data is read) through the input matrix. In many applications with very large data it is not easy to access the data repeatedly, so these methods become unsuitable for large scale data matrices. Alternatives to these classical algorithms are randomized algorithms for low rank approximation BIB012 BIB013 BIB010 . The complexity of these algorithms is at most sublinear in the size m × n and they only require one or two passes over the input matrix. The main idea of these randomized algorithms is to compute an approximate basis for the range space of the matrix A using a random selection of columns/rows of A, and then project A onto the subspace spanned by this basis. We sketch it here. Let k be the target rank (the aim is to obtain a rank k approximation) and choose a number of samples s larger than k, i.e., s = k + p for a small oversampling p. The randomized low rank approximation is then constructed in the following way.
Step 1: Form a lower dimensional matrix X from the s selected rows and/or columns.
Step 2: Compute an approximate orthonormal basis Q = [q 1 , q 2 , ..., q k ] for the range of X.
Step 3: Construct the low rank approximation Ã by projecting A onto the space spanned by the basis Q: Ã = QQ^T A.
In step 1, the columns/rows can be chosen in different ways: by subsampling the input matrix or by using random projections. The matrix X formed by these columns is expected to be very close to A in the sense that the basis of the range of X covers the range of A well.
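The three steps above can be sketched in a few lines of NumPy, here with a Gaussian random projection in step 1 (a minimal illustration; the function and parameter names are ours, not from the cited papers):

```python
import numpy as np

def randomized_low_rank(A, k, p=10, seed=None):
    """Rank-k approximation of A via a randomized range finder.

    Follows the three steps above: sketch the range of A with s = k + p
    random projections, orthonormalize the sketch, and project A onto
    the resulting subspace.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    X = A @ Omega                             # step 1: sketch of the range of A
    Q, _ = np.linalg.qr(X)                    # step 2: orthonormal basis of X
    return Q @ (Q.T @ A)                      # step 3: A_tilde = Q Q^T A

# A matrix of exact rank 5 is recovered up to rounding error with k = 5.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
A_tilde = randomized_low_rank(A, k=5, seed=1)
assert np.linalg.norm(A - A_tilde) / np.linalg.norm(A) < 1e-10
```

Since the range of the sketch X is contained in the range of A, the projection QQ^T A is exact (in exact arithmetic) whenever the random samples capture the whole range, which happens with overwhelming probability for Gaussian test matrices.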
The orthonormal basis consisting of k linearly independent vectors can be obtained using exact methods, since the size of X is very small. These techniques are relatively insensitive to the quality of randomness and produce highly accurate results; the probability of failure is negligible. Using the orthonormal basis Q one can approximate the standard factorizations like the SVD, QR, etc. BIB012 . There are other approximation techniques available in the literature, like cross/skeleton decompositions BIB014 BIB007 . Their complexity is of order O(k²(m + n)) and they use only k(m + n) entries of the original matrix to construct a rank k approximation. These methods are also very useful in the data sparse representation of higher order tensors: the algorithms that construct the different data tensor formats use low rank approximations of matrices at different levels of their construction. These are obtained inexpensively by cross/skeleton approximations (linear in m and n), which also gives a data sparse representation with linear complexity . The main motivation of this paper is to give a brief description of the techniques available in the literature. The paper gives an overview of the existing classical deterministic algorithms, randomized algorithms and finally cross/skeleton approximation techniques, which have the great advantage of being able to handle really large data appearing in applications. In section 2 the classical algorithms like the singular value decomposition, pivoted QR factorization and rank revealing QR factorization (RRQR) are described briefly with relevant references. More emphasis is given to the subset selection problem and the interpolative decomposition (these play a big role in the skeleton/cross approximation or CUR decomposition which will be discussed in section 3). Various randomized algorithms are also described. In section 3 various versions of the so-called cross/skeleton approximation techniques are described.
The algorithms are given in detail and their computational complexity is derived (linear in n). For simplicity of the presentation we consider only matrices of real numbers. The Frobenius norm of an m × n matrix A = (a ij ) is defined as the square root of the sum of the absolute squares of its elements, i.e., $\|A\|_F = \left(\sum_{i=1}^{m}\sum_{j=1}^{n} |a_{ij}|^2\right)^{1/2}$. The spectral norm of the matrix A is defined as the largest singular value of A, i.e., $\|A\|_2 = \sigma_{\max}(A)$. Here $\sigma_{\max}$ is the largest singular value of the matrix A.
2 Classical techniques and randomized algorithms
Literature survey on low rank approximation of matrices <s> Algorithms and computational complexity <s> A partial reorthogonalization procedure (BPRO) for maintaining semi-orthogonality among the left and right Lanczos vectors in the Lanczos bidiagonalization (LBD) is presented. The resulting algorithm is mathematically equivalent to the symmetric Lanczos algorithm with partial reorthogonalization (PRO) developed by Simon but works directly on the Lanczos bidiagonalization of A. For computing the singular values and vectors of a large sparse matrix with high accuracy, the BPRO algorithm uses only half the amount of storage and a factor of 3-4 less work compared to methods based on PRO applied to an equivalent symmetric system. Like PRO the algorithm presented here is based on simple recurrences which enable it to monitor the loss of orthogonality among the Lanczos vectors directly without forming inner products. These recurrences are used to develop a Lanczos bidiagonalization algorithm with partial reorthogonalization which has been implemented in a MATLAB package for sparse SVD and eigenvalue problems called PROPACK. Numerical experiments with the routines from PROPACK are conducted using a test problem from inverse helioseismology to illustrate the properties of the method. In addition a number of test matrices from the Harwell-Boeing collection are used to compare the accuracy and efficiency of the MATLAB implementations of BPRO and PRO with the svds routine in MATLAB 5.1, which uses an implicitly restarted Lanczos algorithm. <s> BIB001 </s> Literature survey on low rank approximation of matrices <s> Algorithms and computational complexity <s> Low-rank approximation of large and/or sparse matrices is important in many applications, and the singular value decomposition (SVD) gives the best low-rank approximations with respect to unitarily-invariant norms. 
In this paper we show that good low-rank approximations can be directly obtained from the Lanczos bidiagonalization process applied to the given matrix without computing any SVD. We also demonstrate that a so-called one-sided reorthogonalization process can be used to maintain an adequate level of orthogonality among the Lanczos vectors and produce accurate low-rank approximations. This technique reduces the computational cost of the Lanczos bidiagonalization process. We illustrate the efficiency and applicability of our algorithm using numerical examples from several applications areas. <s> BIB002 </s> Literature survey on low rank approximation of matrices <s> Algorithms and computational complexity <s> Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed---either explicitly or implicitly---to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. <s> BIB003
The SVD of a matrix A is typically computed numerically by a two-step procedure. In the first step, the matrix is reduced to a bidiagonal matrix; this takes O(mn²) floating-point operations (flops). The second step is to compute the SVD of the bidiagonal matrix, which takes O(n) iterations, each costing O(n) flops. Therefore the overall cost is O(mn²). If A is a square matrix, the SVD algorithm requires O(n³) flops . Alternatively, a rank k approximation can be obtained directly from a partial SVD. The partial SVD can be obtained by computing a partial QR factorization and post-processing the factors BIB003 ; this technique requires only O(kmn) flops. Krylov subspace methods, such as Lanczos methods for certain large sparse symmetric matrices and Arnoldi (unsymmetric Lanczos) methods for unsymmetric matrices, can be used to compute the SVD . The straightforward Lanczos bidiagonalization algorithm suffers from loss of orthogonality between the computed Lanczos vectors; Lanczos with complete reorthogonalization (or with only local orthogonalization at every Lanczos step) and block Lanczos algorithms are practical Lanczos procedures . Details of efficient algorithms for large sparse matrices can be found in , chapter 4 of and BIB002 . The algorithms are available in the packages SVDPACK and PROPACK BIB001 . As a low rank approximation method the singular value decomposition has a few drawbacks. It is expensive to compute if the dimension of the matrix is very large. In many applications it is sufficient to have orthonormal bases for the fundamental subspaces, something which the singular value decomposition provides. In other applications, however, it is desirable to have natural bases that consist of rows or columns of the matrix. Below we describe such matrix decompositions.
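The truncated SVD route can be sketched directly with dense LAPACK-backed routines (illustrative only; for large sparse matrices one would use the Lanczos-based packages mentioned above). By the Eckart-Young theorem, the spectral-norm error of the rank-k truncation equals the first discarded singular value:

```python
import numpy as np

def truncated_svd(A, k):
    """Best rank-k approximation of A in the Frobenius and spectral norms
    (Eckart-Young), via a dense SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Scale the first k left singular vectors by the singular values,
    # then multiply by the first k right singular vectors.
    return U[:, :k] * s[:k] @ Vt[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 40))
A_k = truncated_svd(A, k=10)

# The spectral-norm error equals sigma_{k+1}, the (k+1)st singular value.
s = np.linalg.svd(A, compute_uv=False)
assert np.isclose(np.linalg.norm(A - A_k, 2), s[10])
```

The dense call costs O(mn²) flops as discussed above, which is exactly why the partial and randomized variants are preferred at scale.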
Literature survey on low rank approximation of matrices <s> Pivoted QR decomposition <s> Let A be a given m×n real matrix with m ≥ n and of rank n and b a given vector. We wish to determine a vector x̂ such that $$\parallel b - A\hat x\parallel = \min,$$ where $\parallel \cdot \parallel$ indicates the Euclidean norm. Since the Euclidean norm is unitarily invariant, $$\parallel b - Ax\parallel = \parallel c - QAx\parallel,$$ where c = Qb and $Q^T Q = I$. We choose Q so that $$QA = R = \begin{pmatrix} \tilde R \\ 0 \end{pmatrix} \quad (1)$$ where $\tilde R$ is an n × n upper triangular matrix and the zero block is (m − n) × n. Clearly, $$\hat x = \tilde R^{-1}\tilde c,$$ where $\tilde c$ denotes the first n components of c. <s> BIB001 </s> Literature survey on low rank approximation of matrices <s> Pivoted QR decomposition <s> Abstract : An algorithm is presented for computing a column permutation Π and a QR-factorization AΠ = QR of an m by n (m ≥ n) matrix A such that a possible rank deficiency of A will be revealed in the triangular factor R having a small lower right block. For low rank deficient matrices, the algorithm is guaranteed to reveal the rank of A and the cost is only slightly more than the cost of one regular QR-factorization. A posteriori upper and lower bounds on the singular values of A are derived and can be used to infer the numerical rank of A. Keywords: QR-Factorization; Rank deficient matrices; Least squares computation; Subset selection; Rank; Singular values. <s> BIB002 </s> Literature survey on low rank approximation of matrices <s> Pivoted QR decomposition <s> Abstract : Efficient modeling of electromagnetic scattering has always been an active topic in the field of computational electromagnetics. To reduce the memory and CPU time in the method of moments (MoM) solution, an efficient method based on pseudo skeleton approximation is presented in this report.
The algorithm is purely algebraic, and therefore its performance is not associated with the kernel functions in the integral equations. The algorithm starts with a multilevel partitioning of the computational domain, which is very similar to the technique employed in the multilevel fast multipole algorithm (MLFMA). Any of the impedance sub-matrices (with size m × n) associated with the well-separated partitioning clusters (far interaction terms) is represented by the product of two much smaller matrices (with sizes m × r and r × n), where r is the effective rank. Therefore, the memory requirement will be relieved and the total CPU time will be reduced significantly as well, since the rank is much smaller than the original matrix dimensions. It should be noted that we don't have to calculate all the impedance entries to implement the aforementioned decomposition. Instead, we only need to calculate a few randomly chosen rows and columns of those impedance entries. Further compressions based on singular value decomposition (SVD) are performed so that the rank reaches its optimal limit, which leads to the optimized final matrix compression. Numerical examples are provided to show the validity of the new algorithm. Future work directions are also discussed in this report. <s> BIB003
Let $A_{m\times n}$ be a rank deficient matrix (m > n) with rank γ. A pivoted QR decomposition with column pivoting has the form AP = QR, where P is a permutation matrix, Q is orthonormal and R is an upper triangular matrix. In exact arithmetic $$AP = Q \begin{pmatrix} R_{11} & R_{12} \\ 0 & 0 \end{pmatrix},$$ where $R_{11}$ is a γ × γ upper triangular matrix with rank γ, $Q \in \mathbb{R}^{m\times n}$ and $P \in \mathbb{R}^{n\times n}$. In floating point arithmetic one may obtain $$AP = Q \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix},$$ where $\|R_{22}\|$ is small. A rank k approximation to any matrix A can be obtained by partitioning the decomposition AP = QR. Let B = AP and write $$B = (Q_1 \ \ Q_2) \begin{pmatrix} R_{11}^{(k)} & R_{12}^{(k)} \\ 0 & R_{22}^{(k)} \end{pmatrix} = (B_1 \ \ B_2),$$ where $B_1$ has k columns. Then our rank k approximation is $$\hat B^{(k)} = Q_1 \left( R_{11}^{(k)} \ \ R_{12}^{(k)} \right).$$ The approximation $\hat B^{(k)}$ reproduces the first k columns of B exactly. Since $Q_2$ is orthogonal, the error in $\hat B^{(k)}$ as an approximation to B is $$\|B - \hat B^{(k)}\| = \left\| Q_2 \left( 0 \ \ R_{22}^{(k)} \right) \right\| = \|R_{22}^{(k)}\|.$$ This is called the truncated pivoted QR decomposition of A. The permutation matrix is determined by column pivoting such that $R_{11}^{(k)}$ is well conditioned and $\|R_{22}^{(k)}\|$ is negligible (the larger entries of R are moved to the upper left corner and the smallest entries are isolated in the bottom submatrix). This decomposition is computed by a variation of orthogonal triangularization by Householder transformations BIB001 144] . The algorithm described in 144] requires O(kmn) flops. This algorithm is effective in producing a triangular factor R with small $\|R_{22}^{(k)}\|$, but very little is known in theory about its behavior, and it can fail on some matrices (look at example 1 of BIB002 and also in ). Similar decompositions like pivoted Cholesky decompositions, the pivoted QLP decomposition and UTV decompositions can be found in [144] .
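The truncation above is easy to reproduce numerically. The following sketch (ours, not part of the survey) builds a rank-k approximation from a column-pivoted QR via `scipy.linalg.qr` and checks that the approximation error equals $\|R_{22}^{(k)}\|$:

```python
import numpy as np
from scipy.linalg import qr

# Illustrative sketch: truncated pivoted QR as a rank-k approximation.
rng = np.random.default_rng(0)
m, n, k = 60, 40, 5
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # rank-k matrix
A += 1e-8 * rng.standard_normal((m, n))                        # small perturbation

Q, R, piv = qr(A, pivoting=True)   # A[:, piv] = Q @ R
B = A[:, piv]                      # B = A P

# Keep the first k rows of R (the block [R11 R12]) and drop R22.
B_k = Q[:, :k] @ R[:k, :]

err = np.linalg.norm(B - B_k, 2)
r22 = np.linalg.norm(R[k:, k:], 2)
print(err, r22)  # the two numbers agree up to roundoff
```

Since the pivot order concentrates the large entries of R in the upper left corner, `B_k` reproduces the k selected columns exactly and the discarded block `R[k:, k:]` carries the whole error.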
Literature survey on low rank approximation of matrices <s> Definition (RRQR): <s> Abstract The most widely used stable methods for numerical determination of the rank of a matrix A are the singular value decomposition and the QR algorithm with column interchanges. Here two algorithms are presented which determine rank and nullity in a numerically stable manner without using column interchanges. One algorithm makes use of the condition estimator of Cline, Moler, Stewart, and Wilkinson and relative to alternative stable algorithms is particularly efficient for sparse matrices. The second algorithm is important in the case that one wishes to test for rank and nullity while sequentially adding columns to a matrix. <s> BIB001 </s> Literature survey on low rank approximation of matrices <s> Definition (RRQR): <s> Abstract : An algorithm is presented for computing a column permutation Pi and a QR-factorization (A)(Pi) = QR of an m by n (m or = n) matrix A such that a possible rank deficiency of A will be revealed in the triangular factor R having a small lower right block. For low rank deficient matrices, the algorithm is guaranteed to reveal the rank of A and the cost is only slightly more than the cost of one regular QR-factorization. A posteriori upper and lower bounds on the singular values of A are derived and can be used to infer the numerical rank of A. Keywords: QR-Factorization; Rank deficient matrices; Least squares computation; Subset selection; Rank; Singular values. <s> BIB002 </s> Literature survey on low rank approximation of matrices <s> Definition (RRQR): <s> T. Chan has noted that, even when the singular value decomposition of a matrix A is known, it is still not obvious how to find a rank-revealing QR factorization (RRQR) of A if A has numerical rank deficiency. This paper offers a constructive proof of the existence of the RRQR factorization of any matrix A of size m x n with numerical rank r . 
The bounds derived in this paper that guarantee the existence of RRQR are all of order $\sqrt{n}$, in comparison with Chan's $O(2^{n-r})$. It has been known for some time that if A is only numerically rank-one deficient, then the column permutation Π of A that guarantees a small $r_{nn}$ in the QR factorization of AΠ can be obtained by inspecting the size of the elements of the right singular vector of A corresponding to the smallest singular value of A. To some extent, our paper generalizes this well-known result. We consider the interplay between two important matrix decompositions: the singular value decomposition and the QR factorization of a matrix A. In particular, we are interested in the case when A is singular or nearly singular. It is well known that for any $A \in \mathbb{R}^{m \times n}$ (a real matrix with m rows and n columns, where without loss of generality we assume m > n) there are orthogonal matrices U and V such that $A = U \Sigma V^T$, where Σ is a diagonal matrix with nonnegative diagonal elements $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0$. This decomposition is the singular value decomposition (SVD) of A, and the $\sigma_i$ are the singular values of A. The columns of V are the right singular vectors of A, and the columns of U are the left singular vectors of A. <s> BIB003 </s> Literature survey on low rank approximation of matrices <s> Definition (RRQR): <s> The rank revealing QR factorization of a rectangular matrix can sometimes be used as a reliable and efficient computational alternative to the singular value decomposition for problems that involve rank determination.
This is illustrated by showing how the rank revealing QR factorization can be used to compute solutions to rank deficient least squares problems, to perform subset selection, to compute matrix approximations of given rank, and to solve total least squares problems. <s> BIB004 </s> Literature survey on low rank approximation of matrices <s> Definition (RRQR): <s> Given an m × n matrix M with m > n, it is shown that there exists a permutation Π and an integer k such that the QR factorization $M\Pi = Q \begin{pmatrix} A_k & B_k \\ 0 & C_k \end{pmatrix}$ reveals the numerical rank of M: the k × k upper-triangular matrix $A_k$ is well conditioned, $\|C_k\|_2$ is small, and $B_k$ is linearly dependent on $A_k$ with coefficients bounded by a low-degree polynomial in n. Existing rank-revealing QR (RRQR) algorithms are related to such factorizations and two algorithms are presented for computing them. The new algorithms are nearly as efficient as QR with column pivoting for most problems and take $O(mn^2)$ floating-point operations in the worst case. <s> BIB005 </s> Literature survey on low rank approximation of matrices <s> Definition (RRQR): <s> In this paper we propose four algorithms to compute truncated pivoted QR approximations to a sparse matrix. Three are based on the Gram–Schmidt algorithm and the other on Householder triangularization. All four algorithms leave the original matrix unchanged, and the only additional storage requirements are arrays to contain the factorization itself. Thus, the algorithms are particularly suited to determining low-rank approximations to a sparse matrix. <s> BIB006
Given a matrix $A_{m\times n}$ (m ≥ n) and an integer k (k ≤ n), assume partial QR factorizations of the form $$AP = QR = Q \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix},$$ where $Q \in \mathbb{R}^{m\times n}$ is an orthonormal matrix, $R \in \mathbb{R}^{n\times n}$ is block upper triangular, $R_{11} \in \mathbb{R}^{k\times k}$, $R_{12} \in \mathbb{R}^{k\times (n-k)}$, $R_{22} \in \mathbb{R}^{(n-k)\times (n-k)}$ and $P \in \mathbb{R}^{n\times n}$ is a permutation matrix. The above factorization is called a RRQR factorization if it satisfies $$\sigma_{\min}(R_{11}) \ge \frac{\sigma_k(A)}{p(k,n)} \quad \text{and} \quad \sigma_{\max}(R_{22}) \le \sigma_{k+1}(A)\, p(k,n),$$ where p(k, n) is a low degree polynomial in k and n. In the above definition $\sigma_{\min}$ is the minimum singular value and $\sigma_{\max}$ is the maximum singular value. RRQR was defined by Chan in BIB002 (similar ideas were proposed independently in BIB001 ). A constructive proof of the existence of a RRQR factorization of an arbitrary matrix $A_{m\times n}$ with numerical rank r is given in BIB003 . Much research on RRQR factorizations has yielded improved results for p(k, n). There are several algorithms to compute the RRQR factorization BIB002 BIB005 . The computational complexity of these algorithms is only slightly larger than that of the standard QR decomposition algorithm. The values of p(k, n) and the complexities of different algorithms are tabulated in . Different applications of RRQR, such as subset selection problems, total least-squares problems and low rank approximation, have been discussed in BIB004 . The low rank approximation of the matrix A can be obtained by neglecting the submatrix $R_{22}$ in the RRQR factorization of A. It has been shown that matrix approximations derived from RRQR factorizations are almost as good as those derived from truncated SVD approximations. The singular value and pivoted QR decompositions are not well suited to large sparse matrices: the conventional algorithms for computing these decompositions proceed by transformations that quickly destroy the sparsity of the matrix A. Different algorithms for the efficient computation of truncated pivoted QR approximations to a sparse matrix without losing the sparsity of the matrix A are proposed in BIB006 .
Some more references on structure preserving RRQR factorization algorithms are given in BIB004 .
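The two defining RRQR inequalities can be checked numerically. In the sketch below (ours, not from the survey) we apply ordinary column-pivoted QR, which behaves as a rank-revealing factorization on matrices with a clear spectral gap, and verify that $\sigma_{\min}(R_{11})$ and $\|R_{22}\|$ track $\sigma_k(A)$ and $\sigma_{k+1}(A)$ up to a modest factor p(k, n):

```python
import numpy as np
from scipy.linalg import qr

# Illustrative check of the RRQR inequalities on a matrix with a sharp
# spectral gap (singular values 10..5, then 1e-6).
rng = np.random.default_rng(1)
m, n, k = 80, 50, 8
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.concatenate([np.linspace(10.0, 5.0, k), 1e-6 * np.ones(n - k)])
A = U @ np.diag(s) @ V.T

Q, R, piv = qr(A, pivoting=True)
sigma = np.linalg.svd(A, compute_uv=False)

smin_R11 = np.linalg.svd(R[:k, :k], compute_uv=False).min()
smax_R22 = np.linalg.norm(R[k:, k:], 2)

# sigma_min(R11) >= sigma_k(A)/p(k,n)  and  ||R22|| <= sigma_{k+1}(A) * p(k,n)
print(sigma[k - 1] / smin_R11, smax_R22 / sigma[k])
```

Plain column pivoting is not guaranteed to reveal the rank on adversarial examples (such as Kahan's matrix), which is exactly why the stronger RRQR algorithms cited above were developed.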
Literature survey on low rank approximation of matrices <s> Interpolative decomposition <s> A procedure is reported for the compression of rank-deficient matrices. A matrix A of rank k is represented in the form $A = U \circ B \circ V$, where B is a $k\times k$ submatrix of A, and U, V are well-conditioned matrices that each contain a $k\times k$ identity submatrix. This property enables such compression schemes to be used in certain situations where the singular value decomposition (SVD) cannot be used efficiently. Numerical examples are presented. <s> BIB001 </s> Literature survey on low rank approximation of matrices <s> Interpolative decomposition <s> We describe two recently proposed randomized algorithms for the construction of low-rank approximations to matrices, and demonstrate their application (inter alia) to the evaluation of the singular value decompositions of numerically low-rank matrices. Being probabilistic, the schemes described here have a finite probability of failure; in most cases, this probability is rather negligible ($10^{-17}$ is a typical value). In many situations, the new procedures are considerably more efficient and reliable than the classical (deterministic) ones; they also parallelize naturally. We present several numerical examples to illustrate the performance of the schemes. <s> BIB002 </s> Literature survey on low rank approximation of matrices <s> Interpolative decomposition <s> Abstract Given an m × n matrix A and a positive integer k, we describe a randomized procedure for the approximation of A with a matrix Z of rank k. The procedure relies on applying $A^T$ to a collection of l random vectors, where l is an integer equal to or slightly greater than k; the scheme is efficient whenever A and $A^T$ can be applied rapidly to arbitrary vectors. The discrepancy between A and Z is of the same order as $\sqrt{lm}$ times the (k + 1)st greatest singular value $\sigma_{k+1}$ of A, with negligible probability of even moderately large deviations.
The actual estimates derived in the paper are fairly complicated, but are simpler when l − k is a fixed small nonnegative integer. For example, according to one of our estimates for l − k = 20, the probability that the spectral norm $\|A - Z\|$ is greater than $10\sqrt{(k+20)m}\,\sigma_{k+1}$ is less than $10^{-17}$. The paper contains a number of estimates for $\|A - Z\|$, including several that are stronger (but more detailed) than the preceding example; some of the estimates are effectively independent of m. Thus, given a matrix A of limited numerical rank, such that both A and $A^T$ can be applied rapidly to arbitrary vectors, the scheme provides a simple, efficient means for constructing an accurate approximation to a singular value decomposition of A. Furthermore, the algorithm presented here operates reliably independently of the structure of the matrix A. The results are illustrated via several numerical examples. <s> BIB003
Interpolative decompositions (IDs, also called CX decompositions) are closely related to pivoted QR factorizations and are useful for representing low rank matrices in terms of linear combinations of their columns BIB001 BIB002 BIB003 . The interpolative decomposition of a matrix relies entirely on column subset selection. Before defining the interpolative decomposition, a brief description of the subset selection problem is given below.
Literature survey on low rank approximation of matrices <s> Subset selection problem <s> This paper is concerned with least squares problems when the least squares matrix A is near a matrix that is not of full rank. A definition of numerical rank is given. It is shown that under certain conditions when A has numerical rank r there is a distinguished r dimensional subspace of the column space of A that is insensitive to how it is approximated by r independent columns of A. The consequences of this fact for the least squares problem are examined. Algorithms are described for approximating the stable part of the column space of A. <s> BIB001 </s> Literature survey on low rank approximation of matrices <s> Subset selection problem <s> Given anm n matrixM withm > n, it is shown that there exists a permutation FI and an integer k such that the QR factorization MYI= Q(Ak ckBk) reveals the numerical rank of M: the k k upper-triangular matrix Ak is well conditioned, IlCkll2 is small, and Bk is linearly dependent on Ak with coefficients bounded by a low-degree polynomial in n. Existing rank-revealing QR (RRQR) algorithms are related to such factorizations and two algorithms are presented for computing them. The new algorithms are nearly as efficient as QR with column pivoting for most problems and take O (ran2) floating-point operations in the worst case. <s> BIB002 </s> Literature survey on low rank approximation of matrices <s> Subset selection problem <s> We consider the problem of selecting the best subset of exactly $k$ columns from an $m \times n$ matrix $A$. We present and analyze a novel two-stage algorithm that runs in $O(\min\{mn^2,m^2n\})$ time and returns as output an $m \times k$ matrix $C$ consisting of exactly $k$ columns of $A$. In the first (randomized) stage, the algorithm randomly selects $\Theta(k \log k)$ columns according to a judiciously-chosen probability distribution that depends on information in the top-$k$ right singular subspace of $A$. 
In the second (deterministic) stage, the algorithm applies a deterministic column-selection procedure to select and return exactly $k$ columns from the set of columns selected in the first stage. Let $C$ be the $m \times k$ matrix containing those $k$ columns, let $P_C$ denote the projection matrix onto the span of those columns, and let $A_k$ denote the best rank-$k$ approximation to the matrix $A$. Then, we prove that, with probability at least 0.8, $$ \|A - P_C A\|_F \leq \Theta(k \log^{1/2} k)\, \|A-A_k\|_F. $$ This Frobenius norm bound is only a factor of $\sqrt{k \log k}$ worse than the best previously existing existential result and is roughly $O(\sqrt{k!})$ better than the best previous algorithmic result for the Frobenius norm version of this Column Subset Selection Problem (CSSP). We also prove that, with probability at least 0.8, $$ \|A - P_C A\|_2 \leq \Theta(k \log^{1/2} k)\,\|A-A_k\|_2 + \Theta(k^{3/4}\log^{1/4}k)\,\|A-A_k\|_F. $$ This spectral norm bound is not directly comparable to the best previously existing bounds for the spectral norm version of this CSSP. Our bound depends on $\|A-A_k\|_F$, whereas previous results depend on $\sqrt{n-k}\,\|A-A_k\|_2$; if these two quantities are comparable, then our bound is asymptotically worse by a $(k \log k)^{1/4}$ factor. <s> BIB003 </s> Literature survey on low rank approximation of matrices <s> Subset selection problem <s> Given a real matrix $A \in \mathbb{R}^{m \times n}$ of rank r, and an integer k < r, the sum of the outer products of the top k singular vectors scaled by the corresponding singular values provides the best rank-k approximation $A_k$ to A. When the columns of A have specific meaning, it might be desirable to find good approximations to $A_k$ which use a small number of columns of A. This paper provides a simple greedy algorithm for this problem in Frobenius norm, with guarantees on the performance and the number of columns chosen.
The algorithm selects c columns from A with $c = O\!\left(\frac{k \log k}{\epsilon^2 \eta^2(A)}\right)$ such that $$\|A - \Pi_C A\|_F \le (1+\epsilon)\,\|A - A_k\|_F,$$ where C is the matrix composed of the c columns, $\Pi_C$ is the matrix projecting the columns of A onto the space spanned by C and η(A) is a measure related to the coherence in the normalized columns of A. The algorithm is quite intuitive and is obtained by combining a greedy solution to the generalization of the well known sparse approximation problem and an existence result on the possibility of sparse approximation. We provide empirical results on various specially constructed matrices comparing our algorithm with the previous deterministic approaches based on QR factorizations and a recently proposed randomized algorithm. The results indicate that in practice, the performance of the algorithm can be significantly better than the bounds suggest. <s> BIB004
Subset selection is a method for selecting a subset of columns from a real matrix, so that the subset represents the entire matrix well and is far from being rank deficient. Given an m × n matrix A and an integer k, subset selection attempts to find the k most linearly independent columns that best represent the information in the matrix. The mathematical formulation of the subset selection problem is: determine a permutation matrix P such that 1. $A_1$ is an m × k matrix containing k linearly independent columns whose smallest singular value is as large as possible, that is, $\sigma_{\min}(A_1) \ge \sigma_k(A)/\gamma$ for some γ; 2. the n − k columns of $A_2$ (the redundant columns) are well represented by the k columns of $A_1$, that is, $\min_Z \|A_1 Z - A_2\|$ is small. Remark: Z is a matrix responsible for representing the columns of $A_2$ in terms of the columns of $A_1$. More detailed information and an equivalent definition of the subset selection problem are given in section 2.4. Subset selection using the singular value decomposition has been addressed in BIB001 . Many subset selection algorithms use a QR decomposition (which was discussed in the last subsection) to find the most representative columns . There are several randomized algorithms for this problem BIB003 BIB004 . The strong RRQR algorithm by Gu and Eisenstat BIB002 gives the best deterministic approximation to the two conditions (10) and (11) of the subset selection problem. The details are given below. As described in the last subsection, the RRQR factorization of $A_{m\times n}$ is represented by $$AP = Q \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix}.$$ This gives the permutation matrix P such that AP = (A_1, A_2), where $A_1$ (the matrix with the k most important columns of A) and $A_2$ (with the redundant columns) are given by $$A_1 = Q \begin{pmatrix} R_{11} \\ 0 \end{pmatrix}, \qquad A_2 = Q \begin{pmatrix} R_{12} \\ R_{22} \end{pmatrix},$$ with $$\sigma_{\min}(A_1) \ge \frac{\sigma_k(A)}{\sqrt{1 + f^2 k(n-k)}}, \qquad \sigma_{\max}(R_{22}) \le \sigma_{k+1}(A)\sqrt{1 + f^2 k(n-k)}.$$ In the above inequalities f ≥ 1 is a tolerance supplied by the user. The Gu and Eisenstat algorithm also guarantees that the entries of $R_{11}^{-1}R_{12}$ are bounded in absolute value by f. One can extend this algorithm to wide and fat matrices where m < n and k = m. The computational complexity of this algorithm is $O(mn^2)$. Remark: From the strong RRQR algorithm we can see the following.
As described in subsection 2.2, the truncated RRQR of A is $AP \simeq Q[R_{11} \ \ R_{12}]$. Now we can write it as $AP \simeq QR_{11}[I_{k\times k} \ \ R_{11}^{-1}R_{12}]$, where $QR_{11}$ is a matrix which contains k linearly independent columns. From theorem 3.2 of BIB002 , one can see that $QR_{11}[I_{k\times k} \ \ R_{11}^{-1}R_{12}]P^T$ is an approximation to the matrix A. As we have seen in subsection 2.2, the error in this approximation is $$\left\| A - QR_{11}\left[ I_{k\times k} \ \ R_{11}^{-1}R_{12} \right] P^T \right\| = \|R_{22}\|.$$ This subset selection problem has also been widely studied in the randomized setting. We postpone the discussion of these techniques to subsection 2.4. Remark: In most of the deterministic algorithms for the subset selection problem, the error estimates are given in the spectral norm. Error estimates in both the spectral and Frobenius norms are presented for several randomized algorithms in the literature, which is the subject of the next subsection.
Literature survey on low rank approximation of matrices <s> Definition (ID): <s> Given anm n matrixM withm > n, it is shown that there exists a permutation FI and an integer k such that the QR factorization MYI= Q(Ak ckBk) reveals the numerical rank of M: the k k upper-triangular matrix Ak is well conditioned, IlCkll2 is small, and Bk is linearly dependent on Ak with coefficients bounded by a low-degree polynomial in n. Existing rank-revealing QR (RRQR) algorithms are related to such factorizations and two algorithms are presented for computing them. The new algorithms are nearly as efficient as QR with column pivoting for most problems and take O (ran2) floating-point operations in the worst case. <s> BIB001 </s> Literature survey on low rank approximation of matrices <s> Definition (ID): <s> We describe two recently proposed randomized algorithms for the construction of low-rank approximations to matrices, and demonstrate their application (inter alia) to the evaluation of the singular value decompositions of numerically low-rank matrices. Being probabilistic, the schemes described here have a finite probability of failure; in most cases, this probability is rather negligible (10(-17) is a typical value). In many situations, the new procedures are considerably more efficient and reliable than the classical (deterministic) ones; they also parallelize naturally. We present several numerical examples to illustrate the performance of the schemes. <s> BIB002 </s> Literature survey on low rank approximation of matrices <s> Definition (ID): <s> Abstract Given an m × n matrix A and a positive integer k, we describe a randomized procedure for the approximation of A with a matrix Z of rank k. The procedure relies on applying A T to a collection of l random vectors, where l is an integer equal to or slightly greater than k; the scheme is efficient whenever A and A T can be applied rapidly to arbitrary vectors. 
The discrepancy between A and Z is of the same order as l m times the ( k + 1 ) st greatest singular value σ k + 1 of A, with negligible probability of even moderately large deviations. The actual estimates derived in the paper are fairly complicated, but are simpler when l − k is a fixed small nonnegative integer. For example, according to one of our estimates for l − k = 20 , the probability that the spectral norm ‖ A − Z ‖ is greater than 10 ( k + 20 ) m σ k + 1 is less than 10 − 17 . The paper contains a number of estimates for ‖ A − Z ‖ , including several that are stronger (but more detailed) than the preceding example; some of the estimates are effectively independent of m. Thus, given a matrix A of limited numerical rank, such that both A and A T can be applied rapidly to arbitrary vectors, the scheme provides a simple, efficient means for constructing an accurate approximation to a singular value decomposition of A. Furthermore, the algorithm presented here operates reliably independently of the structure of the matrix A. The results are illustrated via several numerical examples. <s> BIB003 </s> Literature survey on low rank approximation of matrices <s> Definition (ID): <s> [1] The interpolative decomposition (ID) is combined with the multilevel fast multipole algorithm (MLFMA), denoted by ID-MLFMA, to handle multiscale problems. The ID-MLFMA first generates ID levels by recursively dividing the boxes at the finest MLFMA level into smaller boxes. It is specifically shown that near-field interactions with respect to the MLFMA, in the form of the matrix vector multiplication (MVM), are efficiently approximated at the ID levels. Meanwhile, computations on far-field interactions at the MLFMA levels remain unchanged. Only a small portion of matrix entries are required to approximate coupling among well-separated boxes at the ID levels, and these submatrices can be filled without computing the complete original coupling matrix. 
It follows that the matrix filling in the ID-MLFMA becomes much less expensive. The memory consumed is thus greatly reduced and the MVM is accelerated as well. Several factors that may influence the accuracy, efficiency and reliability of the proposed ID-MLFMA are investigated by numerical experiments. Complex targets are calculated to demonstrate the capability of the ID-MLFMA algorithm. <s> BIB004
Let A m×n be a matrix of rank k. There exists an m × k matrix B whose columns constitute a subset of the columns of A, and k × n matrix P, such that 1. some subset of the columns of P makes up k × k identity matrix, 2. P is not too large (no entry of P has an absolute value greater than 1), and 3. A m×n = B m×k P k×n . Moreover, the decomposition provides an approximation when the exact rank of A is greater than k, but the (k + 1)st greatest singular value of A is small. The approximation quality of the Interpolative decomposition is described in the following Lemma BIB002 BIB003 . One can also look at for similar results. Lemma: Suppose that m and n are positive integers, and A is m × n matrix. Then for any positive integer k with k ≤ m and k ≤ n, there exist a k × n matrix P, and a m × k matrix B whose columns constitute a subset of the columns of A, such that 1. some subset of the columns of P makes up k × k identity matrix, 2. no entry of P has an absolute value greater than 1, . the least (that is, k the greatest) singular value of P is at least 1, 5. A m×n = B m×k P k×n when k = m and k = n, and 6. when k < m and k < n, where σ k+1 is the (k + 1)st greatest singular value of A. The algorithms to compute the ID are computationally expensive. The algorithms described in BIB001 can be used to compute the Interpolative decomposition. In BIB003 a randomized algorithm has been proposed. The authors have constructed the interpolative decomposition under weaker conditions than those in above Lemma. The computational complexity of this algorithm is O(kmnlog(n)). This decomposition have also been studied in . The details of a software package of ID algorithms can be found in and the applications of ID in different applications can be found in BIB004 .
Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> In many applications, the data consist of (or may be naturally formulated as) an $m \times n$ matrix $A$. It is often of interest to find a low-rank approximation to $A$, i.e., an approximation $D$ to the matrix $A$ of rank not greater than a specified rank $k$, where $k$ is much smaller than $m$ and $n$. Methods such as the singular value decomposition (SVD) may be used to find an approximation to $A$ which is the best in a well-defined sense. These methods require memory and time which are superlinear in $m$ and $n$; for many applications in which the data sets are very large this is prohibitive. Two simple and intuitive algorithms are presented which, when given an $m \times n$ matrix $A$, compute a description of a low-rank approximation $D^{*}$ to $A$, and which are qualitatively faster than the SVD. Both algorithms have provable bounds for the error matrix $A-D^{*}$. For any matrix $X$, let $\|{X}\|_F$ and $\|{X}\|_2$ denote its Frobenius norm and its spectral norm, respectively. In the first algorithm, $c$ columns of $A$ are randomly chosen. If the $m \times c$ matrix $C$ consists of those $c$ columns of $A$ (after appropriate rescaling), then it is shown that from $C^TC$ approximations to the top singular values and corresponding singular vectors may be computed. From the computed singular vectors a description $D^{*}$ of the matrix $A$ may be computed such that $\mathrm{rank}(D^{*}) \le k$ and such that $$ \left\|A-D^{*}\right\|_{\xi}^{2} \le \min_{D:\mathrm{rank}(D)\le k} \left\|A-D\right\|_{\xi}^{2} + poly(k,1/c) \left\|{A}\right\|^2_F $$ holds with high probability for both $\xi = 2,F$. This algorithm may be implemented without storing the matrix $A$ in random access memory (RAM), provided it can make two passes over the matrix stored in external memory and use $O(cm+c^2)$ additional RAM. 
The second algorithm is similar except that it further approximates the matrix $C$ by randomly sampling $r$ rows of $C$ to form a $r \times c$ matrix $W$. Thus, it has additional error, but it can be implemented in three passes over the matrix using only constant additional RAM. To achieve an additional error (beyond the best rank $k$ approximation) that is at most $\epsilon\|{A}\|^2_F$, both algorithms take time which is polynomial in $k$, $1/\epsilon$, and $\log(1/\delta)$, where $\delta>0$ is a failure probability; the first takes time linear in $\mbox{max}(m,n)$ and the second takes time independent of $m$ and $n$. Our bounds improve previously published results with respect to the rank parameter $k$ for both the Frobenius and spectral norms. In addition, the proofs for the error bounds use a novel method that makes important use of matrix perturbation theory. The probability distribution over columns of $A$ and the rescaling are crucial features of the algorithms which must be chosen judiciously. <s> BIB001 </s> Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> Frieze et al. [17] proved that a small sample of rows of a given matrix A contains a low-rank approximation D that minimizes ||A - D||F to within small additive error, and the sampling can be done efficiently using just two passes over the matrix [12]. In this paper, we generalize this result in two ways. First, we prove that the additive error drops exponentially by iterating the sampling in an adaptive manner. Using this result, we give a pass-efficient algorithm for computing low-rank approximation with reduced additive error. Our second result is that using a natural distribution on subsets of rows (called volume sampling), there exists a subset of k rows whose span contains a factor (k + 1) relative approximation and a subset of k + k(k + 1)/e rows whose span contains a 1+e relative approximation. 
The existence of such a small certificate for multiplicative low-rank approximation leads to a PTAS for the following projective clustering problem: Given a set of points P in Rd, and integers k, j, find a set of j subspaces F 1 , . . ., F j , each of dimension at most k, that minimize Σ p∈P min i d(p, F i )2. <s> BIB002 </s> Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> We study random submatrices of a large matrix A. We show how to approximately compute A from its random submatrix of the smallest possible size O(r log r) with a small error in the spectral norm, where r = ||A||_F^2 / ||A||_2^2 is the numerical rank of A. The numerical rank is always bounded by, and is a stable relaxation of, the rank of A. This yields an asymptotically optimal guarantee in an algorithm for computing low-rank approximations of A. We also prove asymptotically optimal estimates on the spectral norm and the cut-norm of random submatrices of A. The result for the cut-norm yields a slight improvement on the best known sample complexity for an approximation algorithm for MAX-2CSP problems. We use methods of Probability in Banach spaces, in particular the law of large numbers for operator-valued random variables. <s> BIB003 </s> Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> We consider the problem of selecting the best subset of exactly $k$ columns from an $m \times n$ matrix $A$. We present and analyze a novel two-stage algorithm that runs in $O(\min\{mn^2,m^2n\})$ time and returns as output an $m \times k$ matrix $C$ consisting of exactly $k$ columns of $A$. In the first (randomized) stage, the algorithm randomly selects $\Theta(k \log k)$ columns according to a judiciously-chosen probability distribution that depends on information in the top-$k$ right singular subspace of $A$. 
In the second (deterministic) stage, the algorithm applies a deterministic column-selection procedure to select and return exactly $k$ columns from the set of columns selected in the first stage. Let $C$ be the $m \times k$ matrix containing those $k$ columns, let $P_C$ denote the projection matrix onto the span of those columns, and let $A_k$ denote the best rank-$k$ approximation to the matrix $A$. Then, we prove that, with probability at least 0.8, $$ \FNorm{A - P_CA} \leq \Theta(k \log^{1/2} k) \FNorm{A-A_k}. $$ This Frobenius norm bound is only a factor of $\sqrt{k \log k}$ worse than the best previously existing existential result and is roughly $O(\sqrt{k!})$ better than the best previous algorithmic result for the Frobenius norm version of this Column Subset Selection Problem (CSSP). We also prove that, with probability at least 0.8, $$ \TNorm{A - P_CA} \leq \Theta(k \log^{1/2} k)\TNorm{A-A_k} + \Theta(k^{3/4}\log^{1/4}k)\FNorm{A-A_k}. $$ This spectral norm bound is not directly comparable to the best previously existing bounds for the spectral norm version of this CSSP. Our bound depends on $\FNorm{A-A_k}$, whereas previous results depend on $\sqrt{n-k}\TNorm{A-A_k}$; if these two quantities are comparable, then our bound is asymptotically worse by a $(k \log k)^{1/4}$ factor. <s> BIB004 </s> Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> We consider low-rank reconstruction of a matrix using its columns and we present asymptotically optimal algorithms for both spectral norm and Frobenius norm reconstruction. The main tools we introduce to obtain our r esults are: (i) the use of fast approximate SVD-like decompositions for column reconstruction, and (ii) two deter ministic algorithms for selecting rows from matrices with orthonormal columns, building upon the sparse represen tation theorem for decompositions of the identity that appeared in \cite{BSS09}. 
<s> BIB005 </s> Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> We prove that for any real-valued matrix $X \in \R^{m \times n}$, and positive integers $r \ge k$, there is a subset of $r$ columns of $X$ such that projecting $X$ onto their span gives a $\sqrt{\frac{r+1}{r-k+1}}$-approximation to best rank-$k$ approximation of $X$ in Frobenius norm. We show that the trade-off we achieve between the number of columns and the approximation ratio is optimal up to lower order terms. Furthermore, there is a deterministic algorithm to find such a subset of columns that runs in $O(r n m^{\omega} \log m)$ arithmetic operations where $\omega$ is the exponent of matrix multiplication. We also give a faster randomized algorithm that runs in $O(r n m^2)$ arithmetic operations. <s> BIB006 </s> Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> Randomized algorithms for very large matrix problems have received a great deal of attention in recent years. Much of this work was motivated by problems in large-scale data analysis, and this work was performed by individuals from many different research communities. This monograph will provide a detailed overview of recent work on the theory of randomized matrix algorithms as well as the application of those ideas to the solution of practical problems in large-scale data analysis. An emphasis will be placed on a few simple core ideas that underlie not only recent theoretical advances but also the usefulness of these tools in large-scale data applications. Crucial in this context is the connection with the concept of statistical leverage. This concept has long been used in statistical regression diagnostics to identify outliers; and it has recently proved crucial in the development of improved worst-case matrix algorithms that are also amenable to high-quality numerical implementation and that are useful to domain scientists. 
Randomized methods solve problems such as the linear least-squares problem and the low-rank matrix approximation problem by constructing and operating on a randomized sketch of the input matrix. Depending on the specifics of the situation, when compared with the best previously-existing deterministic algorithms, the resulting randomized algorithms have worst-case running time that is asymptotically faster; their numerical implementations are faster in terms of clock-time; or they can be implemented in parallel computing environments where existing numerical algorithms fail to run at all. Numerous examples illustrating these observations will be described in detail. <s> BIB007 </s> Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> Given a real matrix A∈Rm×n of rank r, and an integer k<r, the sum of the outer products of top k singular vectors scaled by the corresponding singular values provide the best rank-k approximation Ak to A. When the columns of A have specific meaning, it might be desirable to find good approximations to Ak which use a small number of columns of A. This paper provides a simple greedy algorithm for this problem in Frobenius norm, with guarantees on the performance and the number of columns chosen. The algorithm selects c columns from A with c=O(klogkϵ2η2(A)) such that ::: ‖A−ΠCA‖F≤(1+ϵ)‖A−Ak‖F, ::: where C is the matrix composed of the c columns, ΠC is the matrix projecting the columns of A onto the space spanned by C and η(A) is a measure related to the coherence in the normalized columns of A. The algorithm is quite intuitive and is obtained by combining a greedy solution to the generalization of the well known sparse approximation problem and an existence result on the possibility of sparse approximation. 
We provide empirical results on various specially constructed matrices comparing our algorithm with the previous deterministic approaches based on QR factorizations and a recently proposed randomized algorithm. The results indicate that in practice, the performance of the algorithm can be significantly better than the bounds suggest. <s> BIB008 </s> Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> The goal of unsupervised feature selection is to identify a small number of important features that can represent the data. We propose a new algorithm, a modification of the classical pivoted QR algorithm of Businger and Golub, that requires a small number of passes over the data. The improvements are based on two ideas: keeping track of multiple features in each pass, and skipping calculations that can be shown not to affect the final selection. Our algorithm selects the exact same features as the classical pivoted QR algorithm, and has the same favorable numerical stability. We describe experiments on real-world datasets which sometimes show improvements of several orders of magnitude over the classical algorithm. These results appear to be competitive with recently proposed randomized algorithms in terms of pass efficiency and run time. On the other hand, the randomized algorithms may produce more accurate features, at the cost of small probability of failure. <s> BIB009 </s> Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> We consider processing an n x d matrix A in a stream with row-wise updates according to a recent algorithm called Frequent Directions (Liberty, KDD 2013). This algorithm maintains an l x d matrix Q deterministically, processing each row in O(d l^2) time; the processing time can be decreased to O(d l) with a slight modification in the algorithm and a constant increase in space. 
We show that if one sets l = k + k/eps and returns Q_k, a k x d matrix that is the best rank k approximation to Q, then we achieve the following properties: ||A - A_k||_F^2 <= ||A||_F^2 - ||Q_k||_F^2 <= (1+eps) ||A - A_k||_F^2 and, where pi_{Q_k}(A) is the projection of A onto the rowspace of Q_k, ||A - pi_{Q_k}(A)||_F^2 <= (1+eps) ||A - A_k||_F^2. We also show that Frequent Directions cannot be adapted to a sparse version in an obvious way that retains the l original rows of the matrix, as opposed to a linear combination or sketch of the rows. <s> BIB010 </s> Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> A sketch of a matrix A is another matrix B which is significantly smaller than A but still approximates it well. Finding such sketches efficiently is an important building block in modern algorithms for approximating, for example, the PCA of massive matrices. This task is made more challenging in the streaming model, where each row of the input matrix can only be processed once and storage is severely limited. In this paper we adapt a well known streaming algorithm for approximating item frequencies to the matrix sketching setting. The algorithm receives n rows of a large matrix A ∈ ℜ^{n x m} one after the other in a streaming fashion. It maintains a sketch B ∈ ℜ^{l x m} containing only l << n rows. This gives a streaming algorithm whose error decays proportional to 1/l using O(ml) space. For comparison, random-projection, hashing or sampling based algorithms produce convergence bounds proportional to 1/√l. Sketch updates per row in A require amortized O(ml) operations and the algorithm is perfectly parallelizable. Our experiments corroborate the algorithm's scalability and improved convergence rate. The presented algorithm also stands out in that it is deterministic, simple to implement and elementary to prove.
<s> BIB011 </s> Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> In today's information systems, the availability of massive amounts of data necessitates the development of fast and accurate algorithms to summarize these data and represent them in a succinct format. One crucial problem in big data analytics is the selection of representative instances from large and massively distributed data, which is formally known as the Column Subset Selection problem. The solution to this problem enables data analysts to understand the insights of the data and explore its hidden structure. The selected instances can also be used for data preprocessing tasks such as learning a low-dimensional embedding of the data points or computing a low-rank approximation of the corresponding matrix. This paper presents a fast and accurate greedy algorithm for large-scale column subset selection. The algorithm minimizes an objective function, which measures the reconstruction error of the data matrix based on the subset of selected columns. The paper first presents a centralized greedy algorithm for column subset selection, which depends on a novel recursive formula for calculating the reconstruction error of the data matrix. The paper then presents a MapReduce algorithm, which selects a few representative columns from a matrix whose columns are massively distributed across several commodity machines. The algorithm first learns a concise representation of all columns using random projection, and it then solves a generalized column subset selection problem at each machine in which a subset of columns are selected from the sub-matrix on that machine such that the reconstruction error of the concise representation is minimized. The paper demonstrates the effectiveness and efficiency of the proposed algorithm through an empirical evaluation on benchmark data sets. 
<s> BIB012 </s> Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> The development of randomized algorithms for numerical linear algebra, e.g. for computing approximate QR and SVD factorizations, has recently become an intense area of research. This paper studies one of the most frequently discussed algorithms in the literature for dimensionality reduction---specifically for approximating an input matrix with a low-rank element. We introduce a novel and rather intuitive analysis of the algorithm in Martinsson et al. (2008), which allows us to derive sharp estimates and give new insights about its performance. This analysis yields theoretical guarantees about the approximation error and at the same time, ultimate limits of performance (lower bounds) showing that our upper bounds are tight. Numerical experiments complement our study and show the tightness of our predictions compared with empirical observations. <s> BIB013 </s> Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> We study low-rank approximation in the streaming model in which the rows of an n x d matrix A are presented one at a time in an arbitrary order. At the end of the stream, the streaming algorithm should output a k x d matrix R so that ‖A - AR† R‖2F ≤ (1 + ∊)‖A - Ak‖2F where Ak is the best rank-k approximation to A. A deterministic streaming algorithm of Liberty (KDD, 2013), with an improved analysis of Ghashami and Phillips (SODA, 2014), provides such a streaming algorithm using O(dk/∊) words of space. A natural question is if smaller space is possible. We give an almost matching lower bound of Ω(dk/∊) bits of space, even for randomized algorithms which succeed only with constant probability. Our lower bound matches the upper bound of Ghashami and Phillips up to the word size, improving on a simple Ω(dk) space lower bound. 
<s> BIB014 </s> Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> Approximating a matrix by a small subset of its columns is a known problem in numerical linear algebra. Algorithms that address this problem have been used in areas which include, among others, sparse approximation, unsupervised feature selection, data mining, and knowledge representation. Such algorithms were investigated since the 1960's, with recent results that use randomization. The problem is believed to be NP-Hard, and to the best of our knowledge there are no previously published algorithms aimed at computing optimal solutions. We show how to model the problem as a graph search, and propose a heuristic based on eigenvalues of related matrices. Applying the A* search strategy with this heuristic is guaranteed to find the optimal solution. Experimental results on common datasets show that the proposed algorithm can effectively select columns from moderate size matrices, typically improving by orders of magnitude the run time of exhaustive search. <s> BIB015 </s> Literature survey on low rank approximation of matrices <s> Column subset selection problem (CSSP) <s> We describe a new algorithm called Frequent Directions for deterministic matrix sketching in the row-updates model. The algorithm is presented an arbitrary input matrix $A \in R^{n \times d}$ one row at a time. It performed $O(d \times \ell)$ operations per row and maintains a sketch matrix $B \in R^{\ell \times d}$ such that for any $k<\ell$ $\|A^TA - B^TB \|_2 \leq \|A - A_k\|_F^2 / (\ell-k)$ and $\|A - \pi_{B_k}(A)\|_F^2 \leq \big(1 + \frac{k}{\ell-k}\big) \|A-A_k\|_F^2 $ . Here, $A_k$ stands for the minimizer of $\|A - A_k\|_F$ over all rank $k$ matrices (similarly $B_k$) and $\pi_{B_k}(A)$ is the rank $k$ matrix resulting from projecting $A$ on the row span of $B_k$. We show both of these bounds are the best possible for the space allowed. 
The summary is mergeable, and hence trivially parallelizable. Moreover, Frequent Directions outperforms exemplar implementations of existing streaming algorithms in the space-error tradeoff. <s> BIB016
We have seen the mathematical formulation of the subset selection problem in section 2.3. An equivalent formulation, the column subset selection problem, is common in the literature. Definition (CSSP): Given a matrix A ∈ R^{m×n} and a positive integer k, pick k columns of A forming a matrix C ∈ R^{m×k} such that the residual ‖A − P_C A‖_ξ is minimized over all (n choose k) possible choices for the matrix C. Here P_C = CC† (C† is the Moore-Penrose pseudoinverse of C) denotes the projection onto the k-dimensional space spanned by the columns of C, and ξ = 2 or F denotes the spectral norm or Frobenius norm. This is a very hard optimization problem: finding k columns out of n such that ‖A − P_C A‖_ξ is minimum requires examining O(n^k) subsets, so computing the optimal solution takes O(n^k mnk) operations and is prohibitively slow if the data size is large. The NP-hardness of the CSSP (assuming k is a function of n) is an open problem BIB004 . Research has therefore focused on computing approximate solutions to the CSSP. Let A_k be the best rank-k approximation. Then ‖A − A_k‖_ξ provides a lower bound for ‖A − P_C A‖_ξ for ξ = 2, F and for any choice of C. Accordingly, most algorithms proposed in the literature select k columns of A such that the matrix C satisfies

‖A − A_k‖_ξ ≤ ‖A − P_C A‖_ξ ≤ p(k, n) ‖A − A_k‖_ξ

for some function p(k, n). As we have seen in the previous section, the strong RRQR algorithm (a deterministic algorithm) gives spectral norm bounds. From the definition of RRQR there exists a permutation matrix Π ∈ R^{n×n} (see equation (6); note that the symbol for the permutation matrix has changed here). Let Π_k denote the first k columns of this permutation matrix Π.
If C = AΠ_k is the m × k matrix consisting of k columns of A (C corresponds to Q [R_11 ; 0] in the definition of RRQR), then from equations (5) and (8) one can see (the proof is very simple and similar to the proof of Lemma 7.1 in ) that

‖A − P_C A‖_2 ≤ q(k, n) ‖A − A_k‖_2,

where q(k, n) is the bounding factor from the RRQR inequalities. That is, any algorithm that constructs an RRQR factorization of the matrix A with provable guarantees also provides provable guarantees for the CSSP.

Several randomized algorithms have also been proposed for this problem. In these methods, a few columns (more than the target rank k) of A are selected randomly according to a probability distribution obtained while preprocessing the matrix, and the low rank approximation is then obtained using classical techniques from linear algebra. One such method, a fast Monte-Carlo algorithm for finding a low rank approximation, was proposed in BIB001 . This algorithm gives an approximation very close to the SVD by sampling the columns and rows of the matrix A with only two passes through the data. It is based on selecting a small subset of important columns of A, forming a matrix C such that the projection of A on the subspace spanned by the columns of C is as close to A as possible. A brief description of the algorithm is given below. A set of s columns (s > k, where k is the target rank) is chosen randomly, each according to a probability distribution proportional to the squared ℓ2 norms of the columns. Let S be the matrix having these s columns as its columns. An orthonormal set of k vectors in the span of these s columns is then computed: the top k left singular vectors of the matrix S, obtained from the SVD of an s × s matrix formed by sampling the rows according to a probability distribution. The rank-k approximation to A is obtained by projecting A on the span of these orthonormal vectors.
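The column-sampling scheme just described can be sketched in a few lines of NumPy. This is an illustrative sketch only: the function name is mine, and for clarity it takes a plain SVD of the (rescaled) sampled matrix rather than performing the second, row-sampling pass of BIB001 .

```python
import numpy as np

def norm_sampling_low_rank(A, k, s, seed=0):
    """Rank-k approximation via squared-norm column sampling (sketch).

    Samples s > k columns with probability proportional to their squared
    l2 norms, rescales them, takes the top-k left singular vectors of the
    sampled matrix S, and projects A onto their span.
    """
    rng = np.random.default_rng(seed)
    col_norms = np.sum(A ** 2, axis=0)
    p = col_norms / col_norms.sum()          # sampling probabilities
    idx = rng.choice(A.shape[1], size=s, replace=True, p=p)
    # rescaling makes S S^T an unbiased estimator of A A^T
    S = A[:, idx] / np.sqrt(s * p[idx])
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    Uk = U[:, :k]            # orthonormal basis in the span of sampled columns
    return Uk @ (Uk.T @ A)   # rank-k projection of A
```

For a matrix of exact rank k, a sample size s moderately larger than k already recovers the column space with high probability; in general the quality is governed by additive error bounds of the type discussed below.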
The rank-k approximation D* of the matrix A (within a small additive error) may be computed such that

‖A − D*‖_F² ≤ ‖A − A_k‖_F² + ǫ ‖A‖_F²    (12)

holds with probability at least 1 − δ. Here δ is the failure probability, ǫ is an error parameter, and the number of randomly chosen columns is s = poly(k, 1/ǫ) (a polynomial in k and 1/ǫ). This kind of sampling method may not perform well in some cases BIB002 . A modified version of the algorithm proposed in BIB001 has also been discussed. In BIB002 Deshpande et al. generalized the work in BIB001 . They proved that the additive error in (12) drops exponentially under adaptive sampling and presented a multipass algorithm for low rank approximation. They showed that it is possible to get a (1 + ǫ) relative or multiplicative approximation rather than the additive approximation of (12), by generalizing the sampling approach using volume sampling (i.e. picking k-subsets of the columns of any given matrix with probabilities proportional to the squared volumes of the simplices defined by them). They proved the following existence result: using volume sampling, there exist exactly k columns in any m × n matrix A such that

‖A − D*‖_F² ≤ (k + 1) ‖A − A_k‖_F²,

where D* (which may not coincide with the D* in (12)) is the projection of A onto the span of these k columns. They also proved the existence result that there exist k + k(k + 1)/ǫ rows whose span contains the rows of a rank-k matrix D* such that

‖A − D*‖_F² ≤ (1 + ǫ) ‖A − A_k‖_F².

In a subsequent work, Deshpande et al. improved the existence result above and developed an efficient algorithm. They used an adaptive sampling method to approximate volume sampling, obtaining an algorithm which finds k columns of A such that, in expectation,

‖A − P_C A‖_F² ≤ (k + 1)! ‖A − A_k‖_F².

The computational complexity of this algorithm is O(mnk + kn). The algorithm requires multiple passes through the data and also maintains the sparsity of A. In BIB004 , Boutsidis et al. proposed a two-stage algorithm to select exactly k columns from a matrix.
In the first stage (the randomized stage), the algorithm randomly selects O(k ln k) columns of V_k^T, i.e. of the transpose of the n × k matrix consisting of the top k right singular vectors of A, according to a probability distribution that depends on information in the top-k right singular subspace of A. In the second stage (the deterministic stage), k columns are selected from the set of columns chosen in the first stage using a deterministic column selection procedure. The computational complexity of this algorithm is O(min(mn², m²n)). It has been proved that the algorithm returns an m × k matrix C consisting of exactly k columns of A (ρ is the rank of A) such that, with probability at least 0.7,

‖A − P_C A‖_F ≤ Θ(k log^{1/2} k) ‖A − A_k‖_F,
‖A − P_C A‖_2 ≤ Θ(k log^{1/2} k) ‖A − A_k‖_2 + Θ(k^{3/4} log^{1/4} k) ‖A − A_k‖_F.

They have compared these approximation results with the best existing results for the CSSP and shown that the spectral norm estimate improves on earlier bounds. Subsequent work gave a deterministic algorithm with O(kmn^ω log n) arithmetic operations (ω is the exponent of the arithmetic complexity of matrix multiplication) that improves the O(k√(log k))-approximation of Boutsidis et al. for the Frobenius norm case. In the very recent articles by Boutsidis et al. BIB005 and Guruswami et al. BIB006 , these estimates have been further improved. This problem has also been studied in BIB015 BIB008 BIB012 BIB007 BIB009 BIB003 BIB013 and in a PhD thesis by Civril . A similar line of work, on streaming algorithms, is studied in BIB010 BIB016 BIB011 BIB014 .
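The leverage-score sampling that underlies the randomized stage can be illustrated as follows. This is an illustrative sketch, not the two-stage algorithm itself: it computes an exact SVD, samples columns by their normalized rank-k leverage scores, and measures the projection residual directly; the function name is mine.

```python
import numpy as np

def sample_columns_by_leverage(A, k, c, seed=0):
    """Pick c columns of A with probabilities p_j = ||(V_k)_{:,j}||^2 / k,
    the normalized rank-k leverage scores, and report the residual
    ||A - P_C A||_F of projecting A onto the span of the chosen columns."""
    rng = np.random.default_rng(seed)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    Vk = Vt[:k, :]                       # top-k right singular vectors, k x n
    p = np.sum(Vk ** 2, axis=0) / k      # leverage scores; they sum to 1
    idx = rng.choice(A.shape[1], size=c, replace=False, p=p)
    C = A[:, idx]
    residual = np.linalg.norm(A - C @ (np.linalg.pinv(C) @ A))
    return idx, residual
```

Columns with large leverage scores carry more of the top-k right singular subspace, which is why sampling by these scores tends to produce a matrix C with small residual ‖A − P_C A‖_F.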
Literature survey on low rank approximation of matrices <s> Randomised CUR <s> In many applications, the data consist of (or may be naturally formulated as) an $m \times n$ matrix $A$ which may be stored on disk but which is too large to be read into random access memory (RAM) or to practically perform superlinear polynomial time computations on it. Two algorithms are presented which, when given an $m \times n$ matrix $A$, compute approximations to $A$ which are the product of three smaller matrices, $C$, $U$, and $R$, each of which may be computed rapidly. Let $A' = CUR$ be the computed approximate decomposition; both algorithms have provable bounds for the error matrix $A-A'$. In the first algorithm, $c$ columns of $A$ and $r$ rows of $A$ are randomly chosen. If the $m \times c$ matrix $C$ consists of those $c$ columns of $A$ (after appropriate rescaling) and the $r \times n$ matrix $R$ consists of those $r$ rows of $A$ (also after appropriate rescaling), then the $c \times r$ matrix $U$ may be calculated from $C$ and $R$. For any matrix $X$, let $\|X\|_F$ and $\|X\|_2$ denote its Frobenius norm and its spectral norm, respectively. It is proven that $$ \left\|A-A'\right\|_\xi \le \min_{D:\mathrm{rank}(D)\le k} \left\|A-D\right\|_\xi + poly(k,1/c) \left\|A\right\|_F $$ holds in expectation and with high probability for both $\xi = 2,F$ and for all $k=1,\ldots,\mbox{rank}(A)$; thus by appropriate choice of $k$ $$ \left\|A-A'\right\|_2 \le \epsilon \left\|A\right\|_F $$ also holds in expectation and with high probability. This algorithm may be implemented without storing the matrix $A$ in RAM, provided it can make two passes over the matrix stored in external memory and use $O(m+n)$ additional RAM (assuming that $c$ and $r$ are constants, independent of the size of the input). The second algorithm is similar except that it approximates the matrix $C$ by randomly sampling a constant number of rows of $C$. 
Thus, it has additional error but it can be implemented in three passes over the matrix using only constant additional RAM. To achieve an additional error (beyond the best rank-$k$ approximation) that is at most $\epsilon \|A\|_F$, both algorithms take time which is a low-degree polynomial in $k$, $1/\epsilon$, and $1/\delta$, where $\delta>0$ is a failure probability; the first takes time linear in $\mbox{max}(m,n)$ and the second takes time independent of $m$ and $n$. The proofs for the error bounds make important use of matrix perturbation theory and previous work on approximating matrix multiplication and computing low-rank approximations to a matrix. The probability distribution over columns and rows and the rescaling are crucial features of the algorithms and must be chosen judiciously. <s> BIB001 </s> Literature survey on low rank approximation of matrices <s> Randomised CUR <s> Many data analysis applications deal with large matrices and involve approximating the matrix using a small number of ``components.'' Typically, these components are linear combinations of the rows and columns of the matrix, and are thus difficult to interpret in terms of the original features of the input data. In this paper, we propose and study matrix approximations that are explicitly expressed in terms of a small number of columns and/or rows of the data matrix, and thereby more amenable to interpretation in terms of the original data. Our main algorithmic results are two randomized algorithms which take as input an $m \times n$ matrix $A$ and a rank parameter $k$. In our first algorithm, $C$ is chosen, and we let $A'=CC^+A$, where $C^+$ is the Moore-Penrose generalized inverse of $C$. In our second algorithm $C$, $U$, $R$ are chosen, and we let $A'=CUR$. ($C$ and $R$ are matrices that consist of actual columns and rows, respectively, of $A$, and $U$ is a generalized inverse of their intersection.) 
For each algorithm, we show that with probability at least $1-\delta$: $$ ||A-A'||_F \leq (1+\epsilon) ||A-A_k||_F, $$ where $A_k$ is the ``best'' rank-$k$ approximation provided by truncating the singular value decomposition (SVD) of $A$. The number of columns of $C$ and rows of $R$ is a low-degree polynomial in $k$, $1/\epsilon$, and $\log(1/\delta)$. Our two algorithms are the first polynomial time algorithms for such low-rank matrix approximations that come with relative-error guarantees; previously, in some cases, it was not even known whether such matrix decompositions exist. Both of our algorithms are simple, they take time of the order needed to approximately compute the top $k$ singular vectors of $A$, and they use a novel, intuitive sampling method called ``subspace sampling.'' <s> BIB002 </s> Literature survey on low rank approximation of matrices <s> Randomised CUR <s> Principal components analysis and, more generally, the Singular Value Decomposition are fundamental data analysis tools that express a data matrix in terms of a sequence of orthogonal or uncorrelated vectors of decreasing importance. Unfortunately, being linear combinations of up to all the data points, these vectors are notoriously difficult to interpret in terms of the data and processes generating the data. In this article, we develop CUR matrix decompositions for improved data analysis. CUR decompositions are low-rank matrix decompositions that are explicitly expressed in terms of a small number of actual columns and/or actual rows of the data matrix. Because they are constructed from actual data elements, CUR decompositions are interpretable by practitioners of the field from which the data are drawn (to the extent that the original data are). 
We present an algorithm that preferentially chooses columns and rows that exhibit high “statistical leverage” and, thus, in a very precise statistical sense, exert a disproportionately large “influence” on the best low-rank fit of the data matrix. By selecting columns and rows in this manner, we obtain improved relative-error and constant-factor approximation guarantees in worst-case analysis, as opposed to the much coarser additive-error guarantees of prior work. In addition, since the construction involves computing quantities with a natural and widely studied statistical interpretation, we can leverage ideas from diagnostic regression analysis to employ these matrix decompositions for exploratory data analysis. <s> BIB003 </s> Literature survey on low rank approximation of matrices <s> Randomised CUR <s> Abstract In this paper, we provide two generalizations of the CUR matrix decomposition Y = CUR (also known as pseudo-skeleton approximation method [1] ) to the case of N -way arrays (tensors). These generalizations, which we called Fiber Sampling Tensor Decomposition types 1 and 2 (FSTD1 and FSTD2), provide explicit formulas for the parameters of a rank- ( R , R , … , R ) Tucker representation (the core tensor of size R × R × ⋯ × R and the matrix factors of sizes I n × R , n = 1 , 2 , … N ) based only on some selected entries of the original tensor. FSTD1 uses P N - 1 ( P ⩾ R ) n -mode fibers of the original tensor while FSTD2 uses exactly R fibers in each mode as matrix factors, as suggested by the existence theorem provided in Oseledets et al. (2008) [2] , with a core tensor defined in terms of the entries of a subtensor of size R × R × ⋯ × R . For N = 2 our results are reduced to the already known CUR matrix decomposition where the core matrix is defined as the inverse of the intersection submatrix, i.e. U = W - 1 . 
Additionally, we provide an adaptive type algorithm for the selection of proper fibers in the FSTD1 model which is useful for large scale applications. Several numerical results are presented showing the performance of our FSTD1 Adaptive Algorithm compared to two recently proposed approximation methods for 3-way tensors. <s> BIB004 </s> Literature survey on low rank approximation of matrices <s> Randomised CUR <s> The CUR matrix decomposition is an important extension of Nystrom approximation to a general matrix. It approximates any data matrix in terms of a small number of its columns and rows. In this paper we propose a novel randomized CUR algorithm with an expected relative-error bound. The proposed algorithm has the advantages over the existing relative-error CUR algorithms that it possesses tighter theoretical bound and lower time complexity, and that it can avoid maintaining the whole data matrix in main memory. Finally, experiments on several real-world datasets demonstrate significant improvement over the existing relative-error algorithms. <s> BIB005 </s> Literature survey on low rank approximation of matrices <s> Randomised CUR <s> Polyphonic music transcription is a fundamental problem in computer music and over the last decade many sophisticated and application-specific methods have been proposed for its solution. However, most techniques cannot make fully use of all the available training data efficiently and do not scale well beyond a certain size. In this study, we develop an approach based on matrix factorization that can easily handle very large training corpora encountered in real applications. We evaluate and compare four different techniques that are based on randomized approaches to SVD and CUR decompositions. We demonstrate that by only retaining the relevant parts of the training data via matrix skeletonization based on CUR decomposition, we maintain comparable transcription performance with only 2% of the training data. 
The method seems to compete with the state-of-the-art techniques in the literature. Furthermore, it is very efficient in terms of time and space complexities, can work even in real time without compromising the success rate. <s> BIB006 </s> Literature survey on low rank approximation of matrices <s> Randomised CUR <s> Intelligent Transportation Systems (ITS) often operate on large road networks, and typically collect traffic data with high temporal resolution. Consequently, ITS need to handle massive volumes of data, and methods to represent that data in more compact representations are sorely needed. Subspace methods such as Principal Component Analysis (PCA) can create accurate low-dimensional models. However, such models are not readily interpretable, as the principal components usually involve a large number of links in the traffic network. In contrast, the CUR matrix decomposition leads to low-dimensional models where the components correspond to individual links in the network; the resulting models can be easily interpreted, and can also be used for compressed sensing of the traffic network. In this paper, the CUR matrix decomposition is applied for two purposes: (1) compression of traffic data; (2) compressed sensing of traffic data. In the former, only data from a “random” subset of links and time instances is stored. In the latter, data for the entire traffic network is inferred from measurements at a “random” subset of links. Numerical results for a large traffic network in Singapore demonstrate the feasibility of the proposed approach. <s> BIB007
As described in section 2.3, the CUR decomposition gives a low rank approximation explicitly expressed in terms of a small number of columns and rows of the matrix A. The CUR decomposition problem has been widely studied in the literature and has a close connection with the column subset selection problem. One can obtain a CUR decomposition by applying column subset selection to A and to A^T to obtain the matrices C and R respectively, but this will double the error of the approximation. Most of the existing CUR algorithms use a column subset selection procedure to choose the matrix C. In BIB001 , Drineas et al. proposed a linear time algorithm to approximate the CUR decomposition. c columns of A and r rows of A are randomly chosen according to a probability distribution to obtain the matrix C ∈ R^{m×c} consisting of the chosen c columns and the matrix R ∈ R^{r×n} consisting of the chosen r rows; a c × r matrix U is then obtained from C and R. They showed that for given k, by choosing O(log(1/δ) ǫ^{-4}) columns of A to construct C and O(k δ^{-2} ǫ^{-2}) rows of A to construct R, the resulting CUR decomposition satisfies, with probability at least 1 − δ, the additive error bound

‖A − CUR‖_2 ≤ ‖A − A_k‖_2 + ǫ ‖A‖_F.

By choosing O(k log(1/δ) ǫ^{-4}) columns of A to construct C and O(k δ^{-2} ǫ^{-2}) rows of A to construct R, the resulting CUR decomposition satisfies, with probability at least 1 − δ, the additive error bound

‖A − CUR‖_F ≤ ‖A − A_k‖_F + ǫ ‖A‖_F.

Here ǫ is the error parameter and δ is the failure probability. The complexity of the algorithm is O(mc² + nr + c²r + c³), which is linear in m and n. This algorithm needs a very large number of rows and columns to reach good accuracy. In BIB002 , Drineas et al. developed an improved algorithm: c columns and r rows are chosen randomly by subsampling to construct the matrices C and R respectively, and U is the weighted Moore-Penrose inverse of the intersection between the matrices C and R.
For a given k, they showed that there exist randomized algorithms such that exactly c = O(k^2 log(1/δ) ǫ^{-2}) columns of A are chosen to construct C and then exactly r = O(c^2 log(1/δ) ǫ^{-2}) rows of A are chosen to construct R, so that a relative error bound holds with probability at least 1 − δ. This algorithm requires O(kmn) complexity (since the construction of the sampling probabilities depends on the right singular vectors of A). In BIB003 , the columns and rows are chosen randomly according to a probability distribution formed by normalized statistical leverage scores (based on the right singular vectors of A). This algorithm takes A, k and ǫ as input and uses a column subset selection procedure with c = O(k log k ǫ^{-2}) columns of A to construct C and r = O(k log k ǫ^{-2}) rows of A to construct R. The matrix U is given by U = C† A R†. This algorithm requires O(kmn) complexity. In BIB005 , an improved algorithm has been proposed that obtains the CUR decomposition in a shorter time than the existing relative error CUR algorithms BIB003 . The applicability of the CUR decomposition in various fields can be found in BIB006 BIB007 , and its generalization to tensors is described in BIB004 .
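The leverage-score CUR construction described above, with U = C† A R†, can be sketched in a few lines of numpy. This is only an illustrative simplification (it samples c columns and r rows without replacement from the normalized leverage-score distributions, and all function names are ours), not the exact sampling scheme of the cited papers:

```python
import numpy as np

def leverage_scores(V):
    # V: top-k singular vectors as columns (orthonormal); scores sum to 1
    return np.sum(V**2, axis=1) / V.shape[1]

def cur(A, k, c, r, rng=np.random.default_rng(0)):
    """Randomized CUR: sample columns/rows by leverage scores, set U = C^+ A R^+."""
    U_, s, Vt = np.linalg.svd(A, full_matrices=False)
    p_col = leverage_scores(Vt[:k].T)      # column scores from right singular vectors
    p_row = leverage_scores(U_[:, :k])     # row scores from left singular vectors
    cols = rng.choice(A.shape[1], size=c, replace=False, p=p_col)
    rows = rng.choice(A.shape[0], size=r, replace=False, p=p_row)
    C, R = A[:, cols], A[rows, :]
    Umid = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, Umid, R

# usage: a nearly rank-1 matrix is recovered well from 2 columns and 2 rows
A = np.outer(np.arange(1, 7.), np.arange(1, 5.)) \
    + 0.01 * np.random.default_rng(1).standard_normal((6, 4))
C, Umid, R = cur(A, k=1, c=2, r=2)
err = np.linalg.norm(A - C @ Umid @ R)
```

Note that this sketch computes a full SVD to get the leverage scores, so it is only useful for illustrating the decomposition itself; the cited algorithms approximate the scores to avoid that cost.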
Literature survey on low rank approximation of matrices <s> (II). Random projection based methods <s> Latent semantic indexing (LSI) is an information retrieval technique based on the spectral analysis of the term-document matrix, whose empirical success had heretofore been without rigorous prediction and explanation. We prove that, under certain conditions, LSI does succeed in capturing the underlying semantics of the corpus and achieves improved retrieval performance. We propose the technique of random projection as a way of speeding up LSI. We complement our theorems with encouraging experimental results. We also argue that our results may be viewed in a more general framework, as a theoretical basis for the use of spectral methods in a wider class of applications such as collaborative filtering. <s> BIB001 </s> Literature survey on low rank approximation of matrices <s> (II). Random projection based methods <s> A classic result of Johnson and Lindenstrauss asserts that any set of n points in d-dimensional Euclidean space can be embedded into k-dimensional Euclidean space where k is logarithmic in n and independent of d so that all pairwise distances are maintained within an arbitrarily small factor. All known constructions of such embeddings involve projecting the n points onto a random k-dimensional hyperplane. We give a novel construction of the embedding, suitable for database applications, which amounts to computing a simple aggregate over k random attribute partitions. <s> BIB002 </s> Literature survey on low rank approximation of matrices <s> (II). Random projection based methods <s> Random projection Combinatorial optimization: Rounding via random projection Embedding metrics in Euclidean space Euclidean embeddings: Beyond distance preservation Learning theory: Robust concepts Intersections of half-spaces Information retrieval: Nearest neighbors Indexing and clustering Bibliography Appendix. 
<s> BIB003 </s> Literature survey on low rank approximation of matrices <s> (II). Random projection based methods <s> Recently several results appeared that show significant reduction in time for matrix multiplication, singular value decomposition as well as linear (ℓ_2) regression, all based on data dependent random sampling. Our key idea is that low dimensional embeddings can be used to eliminate data dependence and provide more versatile, linear time pass efficient matrix computation. Our main contribution is summarized as follows. --Independent of the recent results of Har-Peled and of Deshpande and Vempala, one of the first -- and to the best of our knowledge the most efficient -- relative error (1 + ǫ)‖A − A_k‖_F approximation algorithms for the singular value decomposition of an m × n matrix A with M non-zero entries that requires 2 passes over the data and runs in time O((M(k/ǫ + k log k) + (n + m)(k/ǫ + k log k)^2) log(1/δ)). --The first o(nd^2) time (1 + ǫ) relative error approximation algorithm for n × d linear (ℓ_2) regression. --A matrix multiplication and norm approximation algorithm that easily applies to implicitly given matrices and can be used as a black box probability boosting tool. <s> BIB004 </s> Literature survey on low rank approximation of matrices <s> (II). Random projection based methods <s> Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets.
This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed---either explicitly or implicitly---to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. <s> BIB005 </s> Literature survey on low rank approximation of matrices <s> (II). Random projection based methods <s> The low-rank matrix approximation problem involves finding of a rank k version of a m x n matrix A, labeled Ak, such that Ak is as "close" as possible to the best SVD approximation version of A at the same rank level. Previous approaches approximate matrix A by non-uniformly adaptive sampling some columns (or rows) of A, hoping that this subset of columns contain enough information about A. The sub-matrix is then used for the approximation process. However, these approaches are often computationally intensive due to the complexity in the adaptive sampling. In this paper, we propose a fast and efficient algorithm which at first pre-processes matrix A in order to spread out information (energy) of every columns (or rows) of A, then randomly selects some of its columns (or rows). Finally, a rank-k approximation is generated from the row space of these selected sets. The preprocessing step is performed by uniformly randomizing signs of entries of A and transforming all columns of A by an orthonormal matrix F with existing fast implementation (e.g. Hadamard, FFT, DCT...). Our main contribution is summarized as follows. 
1) We show that by uniformly selecting at random d rows of the preprocessed matrix with d = O((1/η) k max{log k, log 1/β}), we guarantee the relative Frobenius norm error approximation: (1 + η)‖A − A_k‖_F with probability at least 1 − 5β. 2) With d above, we establish a spectral norm error approximation: (2 + √(2m/d))‖A − A_k‖_2 with probability at least 1 − 2β. 3) The algorithm requires 2 passes over the data and runs in time O(mn log d + (m + n)d^2), which, to the best of our knowledge, is the fastest algorithm when the matrix A is dense. 4) As a bonus, applying this framework to the well-known least squares approximation problem min ‖Ax − b‖ where A ∈ R^{m×r}, we show that by randomly choosing d = O((1/η) γ r log m), the approximate solution is proportional to the optimal one with a factor of η and with extremely high probability, (1 − 6m^{−γ}), say. <s> BIB006 </s> Literature survey on low rank approximation of matrices <s> (II). Random projection based methods <s> Randomized algorithms for very large matrix problems have received a great deal of attention in recent years. Much of this work was motivated by problems in large-scale data analysis, and this work was performed by individuals from many different research communities. This monograph will provide a detailed overview of recent work on the theory of randomized matrix algorithms as well as the application of those ideas to the solution of practical problems in large-scale data analysis. An emphasis will be placed on a few simple core ideas that underlie not only recent theoretical advances but also the usefulness of these tools in large-scale data applications. Crucial in this context is the connection with the concept of statistical leverage.
This concept has long been used in statistical regression diagnostics to identify outliers; and it has recently proved crucial in the development of improved worst-case matrix algorithms that are also amenable to high-quality numerical implementation and that are useful to domain scientists. Randomized methods solve problems such as the linear least-squares problem and the low-rank matrix approximation problem by constructing and operating on a randomized sketch of the input matrix. Depending on the specifics of the situation, when compared with the best previously-existing deterministic algorithms, the resulting randomized algorithms have worst-case running time that is asymptotically faster; their numerical implementations are faster in terms of clock-time; or they can be implemented in parallel computing environments where existing numerical algorithms fail to run at all. Numerous examples illustrating these observations will be described in detail. <s> BIB007 </s> Literature survey on low rank approximation of matrices <s> (II). Random projection based methods <s> A classical problem in matrix computations is the efficient and reliable approximation of a given matrix by a matrix of lower rank. The truncated singular value decomposition (SVD) is known to provide the best such approximation for any given fixed rank. However, the SVD is also known to be very costly to compute. Among the different approaches in the literature for computing low-rank approximations, randomized algorithms have attracted researchers' attention recently due to their surprising reliability and computational efficiency in different application areas. Typically, such algorithms are shown to compute with very high probability low-rank approximations that are within a constant factor from optimal, and are known to perform even better in many practical situations. 
In this paper, we present a novel error analysis that considers randomized algorithms within the subspace iteration framework and show with very high probability that highly accurate low-rank approximations as well as singular values can indeed be computed quickly for matrices with rapidly decaying singular values. Such matrices appear frequently in diverse application areas such as data analysis, fast structured matrix computations, and fast direct methods for large sparse linear systems of equations and are the driving motivation for randomized methods. Furthermore, we show that the low-rank approximations computed by these randomized algorithms are actually rank-revealing approximations, and the special case of a rank-1 approximation can also be used to correctly estimate matrix 2-norms with very high probability. Our numerical experiments are in full support of our conclusions. <s> BIB008
The random projection method for low rank approximation of a matrix is based on the idea of random projection. In random projection, the original d dimensional data is projected onto a k dimensional (k << d) subspace by post-multiplying by a d × k random matrix Ω (a matrix whose entries are independent random variables of some specified distribution). The idea of random mapping is based on the Johnson-Lindenstrauss lemma, which says that any set of n points in d dimensional Euclidean space can be embedded into k dimensional Euclidean space such that the distances between the points are approximately preserved . The choice of the random matrix Ω plays an important role in random projection, and there are several possible choices: the Bernoulli random matrix (with entries 1 or -1 with equal probability) and the Gaussian random matrix (with entries drawn from a zero mean, unit variance normal distribution) are among them. The details of several other choices of random matrices are discussed in BIB002 BIB007 . The idea of the random projection based algorithms for a low rank approximation Ã of a matrix A m×n is given below BIB005 BIB007 . Let k be the target rank and s be the number of samples. Step 1. Consider a random matrix Ω n×s . Step 2. Obtain the product Y m×s = AΩ. Step 3. Compute an approximate orthonormal basis Q m×k for the range of Y via SVD. Step 4. Finally obtain Ã = QQ^T A. In BIB004 , a structured random matrix with s = O(k/ǫ) columns has been considered, and a low rank approximation has been obtained such that ‖A − Ã‖_F ≤ (1 + ǫ)‖A − A_k‖_F holds with high probability. The complexity of this algorithm is O(M k/ǫ + (m + n)k^2/ǫ^2), where M is the number of non zero elements in A, and it requires 2 passes over the data. In BIB005 , a standard Gaussian matrix has been considered as Ω with s = k + p columns, where p ≥ 2 is an oversampling parameter. The algorithm gives a low rank approximation whose expected error is within a small polynomial factor of the optimal rank-k error, with high probability.
The complexity of this algorithm is O(mns + ms^2). This algorithm was further improved by coupling a form of the power iteration method with the random projection method BIB005 . In this modified algorithm Y = (AA^T)^q AΩ, where q is an iteration parameter. This provides improved error estimates, since the extra factors in the error bound decay exponentially in q, at the cost of some extra computational effort. One can look at BIB005 for the error estimates in the spectral norm. The matrix multiplication AΩ requires O(mns) operations in the above algorithms; special structured matrices like Ω = DHS (details can be found in BIB007 ) and subsampled random Fourier transform (SRFT) matrices reduce this to O(mn log s) BIB005 . A complete analysis of the random projection methods can be found in BIB005 . Random projection methods have also been studied in BIB008 BIB006 BIB001 BIB003 .
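Steps 1-4 above, together with the power-iteration refinement Y = (AA^T)^q AΩ, can be sketched with numpy as follows. This is a minimal illustration (a QR factorization is used in Step 3 to orthonormalize the range, a common alternative to the SVD; names and parameters are ours):

```python
import numpy as np

def randomized_range_finder(A, s, q=0, rng=np.random.default_rng(0)):
    """Steps 1-3: form Y = (A A^T)^q A Omega and orthonormalize its range."""
    m, n = A.shape
    Omega = rng.standard_normal((n, s))   # Gaussian test matrix
    Y = A @ Omega
    for _ in range(q):                    # optional power iterations
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                # orthonormal basis for range(Y)
    return Q

# Step 4: rank-s approximation A_tilde = Q Q^T A
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 40))  # exact rank 8
Q = randomized_range_finder(A, s=10, q=1)
A_tilde = Q @ (Q.T @ A)
err = np.linalg.norm(A - A_tilde)
```

Because the test matrix here has exact rank 8 and s = 10 > 8, the sampled subspace captures the full range almost surely and the residual is at the level of roundoff; for general matrices the error is governed by the neglected singular values.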
Literature survey on low rank approximation of matrices <s> Set U = QŨ . <s> The main contribution of this paper is to demonstrate that a new randomized SVD algorithm, proposed by Drineas et. al. in [4], is not only of theoretical interest but also a viable and fast alternative to traditional SVD algorithms in applications (e.g. image processing). This algorithm samples a constant number of rows (or columns) of the matrix, scales them appropriately to form a small matrix, say S, and then computes the SVD of S (which is a good approximation to the SVD of the original matrix). We experimentally evaluate the accuracy and speed of this algorithm for image matrices, using various probability distributions to perform the sampling. <s> BIB001 </s> Literature survey on low rank approximation of matrices <s> Set U = QŨ . <s> Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed---either explicitly or implicitly---to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. 
<s> BIB002 </s> Literature survey on low rank approximation of matrices <s> Set U = QŨ . <s> Geostatistical modeling involves many variables and many locations. LU simulation is a popular method for generating realizations, but the covariance matrices that describe the relationships between all of the variables and locations are large and not necessarily amenable to direct decomposition, inversion or manipulation. This paper shows a method similar to LU simulation based on singular value decomposition of large covariance matrices for generating unconditional or conditional realizations using randomized methods. The application of randomized methods in generating realizations, by finding eigenvalues and eigenvectors of large covariance matrices is developed with examples. These methods use random sampling to identify a subspace that captures most of the information in a matrix by considering the dominant eigenvalues. Usually, not all eigenvalues have to be calculated; the fluctuations can be described almost completely by a few eigenvalues. The first k eigenvalues corresponds to a large amount of energy of the random field with the size of n×n. For a dense input matrix, randomized algorithms require O(nnlog(k)) floating-point operations (flops) in contrast with O(nnk) for classical algorithms. Usually the rank of the matrix is not known in advance. Error estimators and the adaptive randomized range finder make it possible to find a very good approximation of the exact SVD decomposition. Using this method, the approximate rank of the matrix can be estimated. The accuracy of the approximation can be estimated with no additional computational cost. When singular values decay slowly, power method can be used for increasing efficiency of the randomized method. Comparing to the original algorithm, the power method can significantly increase the accuracy of approximation. <s> BIB003 </s> Literature survey on low rank approximation of matrices <s> Set U = QŨ . 
<s> We present a randomized singular value decomposition (rSVD) method for the purposes of lossless compression, reconstruction, classification, and target detection with hyperspectral (HSI) data. Recent work in low-rank matrix approximations obtained from random projections suggests that these approximations are well suited for randomized dimensionality reduction. Approximation errors for the rSVD are evaluated on HSI, and comparisons are made to deterministic techniques as well as to other randomized low-rank matrix approximation methods involving compressive principal component analysis. Numerical tests on real HSI data suggest that the method is promising and is particularly effective for HSI data interrogation. <s> BIB004 </s> Literature survey on low rank approximation of matrices <s> Set U = QŨ . <s> In this paper, we propose an algorithm for solving the large-scale discrete ill-conditioned linear problems arising from the discretization of linear or nonlinear inverse problems. The algorithm combines some existing regularization techniques and regularization parameter choice rules with a randomized singular value decomposition (SVD), so that only much smaller scale systems are needed to solve, instead of the original large-scale regularized system. The algorithm can directly apply to some existing regularization methods, such as the Tikhonov and truncated SVD methods, with some popular regularization parameter choice rules such as the L-curve, GCV function, quasi-optimality and discrepancy principle. The error of the approximate regularized solution is analyzed and the efficiency of the method is well demonstrated by the numerical examples. <s> BIB005 </s> Literature survey on low rank approximation of matrices <s> Set U = QŨ . <s> In this work, we propose a new randomized algorithm for computing a low-rank approximation to a given matrix.
Taking an approach different from existing literature, our method first involves a specific biased sampling, with an element being chosen based on the leverage scores of its row and column, and then involves weighted alternating minimization over the factored form of the intended low-rank matrix, to minimize error only on these samples. Our method can leverage input sparsity, yet produce approximations in spectral (as opposed to the weaker Frobenius) norm; this combines the best aspects of otherwise disparate current results, but with a dependence on the condition number κ = σ_1/σ_r. In particular we require O(nnz(M) + nκ^2 r^5/ǫ^2) computations to generate a rank-r approximation to M in spectral norm. In contrast, the best existing method requires O(nnz(M) + nr^2/ǫ^4) time to compute an approximation in Frobenius norm. Besides the tightness in spectral norm, we have a better dependence on the error ǫ. Our method is naturally and highly parallelizable. Our new approach enables two extensions that are interesting on their own. The first is a new method to directly compute a low-rank approximation (in efficient factored form) to the product of two given matrices; it computes a small random set of entries of the product, and then executes weighted alternating minimization (as before) on these. The sampling strategy is different because now we cannot access leverage scores of the product matrix (but instead have to work with input matrices). The second extension is an improved algorithm with smaller communication complexity for the distributed PCA setting (where each server has a small set of rows of the matrix, and wants to compute a low rank approximation with a small amount of communication with other servers). <s> BIB006 </s> Literature survey on low rank approximation of matrices <s> Set U = QŨ . <s> Low-rank matrix approximation is an integral component of tools such as principal component analysis (PCA), as well as an important instrument used in applications like web search, text mining and computer vision, e.g., face recognition. Recently, randomized algorithms were proposed to effectively construct low rank approximations of large matrices. In this paper, we show how matrices from error correcting codes can be used to find such low rank approximations. The benefits of using these code matrices are the following: (i) They are easy to generate and they reduce randomness significantly. (ii) Code matrices have low coherence and have a better chance of preserving the geometry of an entire subspace of vectors; (iii) Unlike Fourier transforms or Hadamard matrices, which require sampling O(k log k) columns for a rank-k approximation, the log factor is not necessary in the case of code matrices. (iv) Under certain conditions, the approximation errors can be better and the singular values obtained can be more accurate, than those obtained using Gaussian random matrices and other structured random matrices. <s> BIB007
<s> Low-rank matrix approximation is an integral component of tools such as principal component analysis (PCA), as well as is an important instrument used in applications like web search, text mining and computer vision, e.g., face recognition. Recently, randomized algorithms were proposed to effectively construct low rank approximations of large matrices. In this paper, we show how matrices from error correcting codes can be used to find such low rank approximations. ::: ::: The benefits of using these code matrices are the following: (i) They are easy to generate and they reduce randomness significantly. (ii) Code matrices have low coherence and have a better chance of preserving the geometry of an entire subspace of vectors; (iii) Unlike Fourier transforms or Hadamard matrices, which require sampling O(k log k) columns for a rank-k approximation, the log factor is not necessary in the case of code matrices. (iv) Under certain conditions, the approximation errors can be better and the singular values obtained can be more accurate, than those obtained using Gaussian random matrices and other structured random matrices. <s> BIB007
This approximates the SVD with the same rank as the basis matrix Q. The efficient implementation and the approximation error of this procedure can be found in section 5 of BIB002 . This scheme is well suited for sparse and structured matrices. If the singular values of A decay slowly, then power iterations (with q = 1 or 2) are used (to form Y in the random projection methods) to improve the accuracy BIB002 . This gives a truncated (rank k) SVD whose error is close to the optimal rank k error with high probability. Here k satisfies 2 ≤ k ≤ 0.5 min{m, n}. The total cost of this algorithm to obtain the rank k SVD, including the operation count to obtain Q, is O(mn log(k) + k^2(m + n)). Randomized SVD has also been studied and used in many applications BIB003 BIB001 BIB005 BIB004 . Some other randomized algorithms for low rank approximation of a matrix have been proposed in (sparsification), BIB006 BIB007 . The performance of different randomized algorithms has been compared in BIB006 .
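The whole prototype — sketch the range, compress to the small matrix B = Q^T A, take an exact SVD of B, and lift back via U = QŨ — fits in a short numpy sketch (illustrative parameter names; the defaults below are not prescriptive):

```python
import numpy as np

def randomized_svd(A, k, p=5, q=1, rng=np.random.default_rng(0)):
    """Approximate rank-k SVD: random sketch, small exact SVD, then U = Q @ U_tilde."""
    n = A.shape[1]
    Y = A @ rng.standard_normal((n, k + p))   # sketch with oversampling p
    for _ in range(q):                        # power iterations for slow spectral decay
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    B = Q.T @ A                               # small (k+p) x n matrix
    U_tilde, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_tilde                           # lift the left factor back: U = Q U~
    return U[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 6)) @ rng.standard_normal((6, 80))  # exact rank 6
U, s, Vt = randomized_svd(A, k=6)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt)
```

Only the small (k+p) × n matrix B is ever decomposed exactly, which is where the cost saving over a full SVD of A comes from.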
Literature survey on low rank approximation of matrices <s> Non negative matrix factorization (NMF) <s> A new variant ‘PMF’ of factor analysis is described. It is assumed that X is a matrix of observed data and σ is the known matrix of standard deviations of elements of X. Both X and σ are of dimensions n × m. The method solves the bilinear matrix problem X = GF + E where G is the unknown left hand factor matrix (scores) of dimensions n × p, F is the unknown right hand factor matrix (loadings) of dimensions p × m, and E is the matrix of residuals. The problem is solved in the weighted least squares sense: G and F are determined so that the Frobenius norm of E divided (element-by-element) by σ is minimized. Furthermore, the solution is constrained so that all the elements of G and F are required to be non-negative. It is shown that the solutions by PMF are usually different from any solutions produced by the customary factor analysis (FA, i.e. principal component analysis (PCA) followed by rotations). Usually PMF produces a better fit to the data than FA. Also, the result of PF is guaranteed to be non-negative, while the result of FA often cannot be rotated so that all negative entries would be eliminated. Different possible application areas of the new method are briefly discussed. In environmental data, the error estimates of data can be widely varying and non-negativity is often an essential feature of the underlying models. Thus it is concluded that PMF is better suited than FA or PCA in many environmental applications. Examples of successful applications of PMF are shown in companion papers. <s> BIB001 </s> Literature survey on low rank approximation of matrices <s> Non negative matrix factorization (NMF) <s> Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. 
They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence. <s> BIB002 </s> Literature survey on low rank approximation of matrices <s> Non negative matrix factorization (NMF) <s> Nonnegative Matrix and Tensor Factorization (NMF/NTF) and Sparse Component Analysis (SCA) have already found many potential applications, especially in multi-way Blind Source Separation (BSS), multi-dimensional data analysis, model reduction and sparse signal/image representations. In this paper we propose a family of the modified Regularized Alternating Least Squares (RALS) algorithms for NMF/NTF. By incorporating regularization and penalty terms into the weighted Frobenius norm we are able to achieve sparse and/or smooth representations of the desired solution, and to alleviate the problem of getting stuck in local minima. We implemented the RALS algorithms in our NMFLAB/NTFLAB Matlab Toolboxes, and compared them with standard NMF algorithms. The proposed algorithms are characterized by improved efficiency and convergence properties, especially for large-scale problems. <s> BIB003 </s> Literature survey on low rank approximation of matrices <s> Non negative matrix factorization (NMF) <s> It is well known that good initializations can improve the speed and accuracy of the solutions of many nonnegative matrix factorization (NMF) algorithms. Many NMF algorithms are sensitive with respect to the initialization of W or H or both. 
This is especially true of algorithms of the alternating least squares (ALS) type, including the two new ALS algorithms that we present in this paper. We compare the results of six initialization procedures (two standard and four new) on our ALS algorithms. Lastly, we discuss the practical issue of choosing an appropriate convergence criterion. <s> BIB004
Non negative matrix factorization of a given non negative matrix A m×n (i.e. all the matrix entries a_ij ≥ 0) is the problem of finding two non negative matrices W m×k and H k×n such that W H approximates A. The chosen k is much smaller than m and n. In general it is not possible to obtain W and H such that A = W H, so NMF is only an approximation. This problem can be stated formally as follows. Definition (NMF problem): Given a non negative matrix A m×n and a positive integer k < min{m, n}, find non negative matrices W m×k and H k×n to minimize the functional f(W, H) = ‖A − WH‖_F^2. This is a nonlinear optimization problem. This factorization has several applications in image processing, text mining, financial data, chemometrics, blind source separation, etc. Generally, the factors W and H are naturally sparse, so they require much less storage. This factorization has some disadvantages too. The optimization problem defined above is convex in either W or H, but not in both W and H, which means that the algorithms can, at best, guarantee convergence to a local minimum BIB004 . The factorization is also not unique (different algorithms give different factorizations). Such a factorization was first introduced in BIB001 and became popular after the article of Lee and Seung. There are several algorithms available in the literature: the multiplicative update algorithm BIB002 , projected gradient methods , alternating least squares methods BIB003 and several other algorithms BIB004 are among the algorithms for NMF. Non negative tensor factorizations are described in the literature, and several algorithms for both non negative matrix and tensor factorizations with applications can be found in the book .
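As an illustration, the multiplicative update rules of BIB002 can be implemented in a few lines of numpy. This is a deliberately minimal sketch (random initialization, fixed iteration count, a small constant to guard divisions — all choices ours, not those of the cited papers):

```python
import numpy as np

def nmf(A, k, iters=500, rng=np.random.default_rng(0)):
    """Lee-Seung multiplicative updates for min ||A - WH||_F with W, H >= 0."""
    m, n = A.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    eps = 1e-12                               # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)  # update H with W fixed
        W *= (A @ H.T) / (W @ H @ H.T + eps)  # update W with H fixed
    return W, H

rng = np.random.default_rng(3)
A = rng.random((30, 20))                      # non negative data
W, H = nmf(A, k=5)
err = np.linalg.norm(A - W @ H) / np.linalg.norm(A)
```

Since the updates only multiply by non negative factors, W and H stay entrywise non negative throughout, which is exactly the constraint the definition above imposes.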
Literature survey on low rank approximation of matrices <s> Semidiscrete matrix decomposition (SDD) <s> We approximate a digital image as a sum of outer products d x y^T where d is a real number but the vectors x and y have elements +1, -1, or 0 only. The expansion gives a least squares approximation. Work is proportional to the number of pixels; reconstruction involves only additions. <s> BIB001 </s> Literature survey on low rank approximation of matrices <s> Semidiscrete matrix decomposition (SDD) <s> Information retrieval is an important direction in the area of natural language processing. This paper introduced semidiscrete matrix decomposition in latent semantic indexing. We aimed at its disadvantage in storage space and presented SSDD, then we compare the differences of SVD, SDD and SSDD in performance <s> BIB002
A semidiscrete decomposition (SDD) expresses a matrix as a weighted sum of outer products formed by vectors with entries constrained to be in the set S = {−1, 0, 1}. The SDD approximation (k term SDD approximation) of an m × n matrix A is a decomposition of the form A ≈ A_k = d_1 x_1 y_1^T + d_2 x_2 y_2^T + · · · + d_k x_k y_k^T. Here each x_i is an m-vector with entries from S = {−1, 0, 1}, each y_i is an n-vector with entries from the set S, and each d_i is a positive scalar. The columns of X_k, Y_k do not need to be linearly independent; a column can be repeated multiple times. The k term SDD approximation requires much less storage compared to the truncated SVD, but it may require a large k for an accurate approximation. This approximation has applications in image compression and data mining. It was first introduced in BIB001 in the context of image compression, and different algorithms have been proposed in . A detailed description of the SDD approximation with applications in data mining can be found in the book , and some other applications can be found in BIB002 .
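A greedy construction in the style of the SDD literature builds the k terms one at a time, alternating over x and y; in each subproblem the best vector with entries in {−1, 0, 1} can be found by sorting the entries of R y (or R^T x) by magnitude and testing each prefix length. The numpy sketch below is illustrative only (our own simplified initialization and stopping rule; it assumes the residual never becomes exactly zero mid-loop):

```python
import numpy as np

def best_sign_vector(s):
    """Maximize (z^T s)^2 / ||z||^2 over z in {-1,0,1}^m by a sorted-prefix search."""
    order = np.argsort(-np.abs(s))
    csum = np.cumsum(np.abs(s)[order])
    J = np.argmax(csum**2 / np.arange(1, len(s) + 1)) + 1   # best prefix length
    z = np.zeros_like(s)
    z[order[:J]] = np.sign(s[order[:J]])
    return z

def sdd(A, k, inner=10):
    """Greedy k-term SDD: A ~ sum_i d_i x_i y_i^T with x_i, y_i in {-1,0,1}."""
    R = A.astype(float).copy()
    d, X, Y = [], [], []
    for _ in range(k):
        y = np.zeros(R.shape[1])
        y[np.argmax(np.linalg.norm(R, axis=0))] = 1.0       # start from heaviest column
        for _ in range(inner):                              # alternate between x and y
            x = best_sign_vector(R @ y)
            y = best_sign_vector(R.T @ x)
        di = (x @ R @ y) / ((x @ x) * (y @ y))
        d.append(di); X.append(x); Y.append(y)
        R -= di * np.outer(x, y)                            # peel off the new term
    return np.array(d), np.array(X).T, np.array(Y).T

A = np.outer([1., 1, -1, 1], [2., 2, 2]) + 0.1
d, X, Y = sdd(A, k=3)
err = np.linalg.norm(A - (X * d) @ Y.T)
```

Only the scalars d_i need floating point storage; each x_i and y_i costs a couple of bits per entry, which is the storage advantage mentioned above.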
Nyström Method
The Nyström approximation is closely related to the CUR approximation. Unlike CUR, the Nyström method is used to approximate symmetric positive semidefinite matrices (such as the large kernel matrices arising in integral equations), and it has been widely used in the machine learning community. The Nyström method approximates the matrix using only a subset of its columns, selected by different sampling techniques; the quality of the approximation depends on the selection of good columns. A brief description of the Nyström approximation is given below. Let A ∈ R^{n×n} be a symmetric positive semidefinite (SPSD) matrix and let C ∈ R^{n×m} consist of m (≪ n) randomly selected columns of A. The rows and columns of A can be rearranged such that C and A are written as

C = [W; S],   A = [W, S^T; S, B],

where W ∈ R^{m×m}, S ∈ R^{(n−m)×m} and B ∈ R^{(n−m)×(n−m)}. Since A is SPSD, W is also SPSD. For k ≤ m, the rank-k Nyström approximation is defined by

Ã_k = C W_k^† C^T,   W_k^† = ∑_{i=1}^{k} σ_i^{−1} U^{(i)} (U^{(i)})^T,

where σ_i is the i-th singular value of W and U^{(i)} is the i-th column of the matrix U in the SVD of W. The computational complexity is O(nmk + m^3), which is much smaller than the O(n^3) cost of a direct SVD. Using W^† instead of W_k^† gives a more accurate approximation, at a rank possibly higher than k. The Nyström method was first introduced in , where the columns were selected by uniform sampling without replacement. A new algorithm was proposed and theoretically analyzed in : the columns are selected randomly according to a non-uniform probability distribution, and error estimates for the resulting Nyström approximation were derived. By choosing O(k/ǫ^4) columns of A, the authors have shown that

‖A − C W_k^† C^T‖_ξ ≤ ‖A − A_k‖_ξ + ǫ ∑_{i=1}^{n} A_{ii}^2,   ξ = 2, F,

where A_k is the best rank-k approximation of A. These estimates have been further improved in BIB004 BIB001 . A detailed comparison of the existing algorithms and error estimates can be found in BIB004 . The ensemble Nyström method has been proposed in .
Adaptive sampling techniques are used to select the random columns. In BIB003 , a new algorithm was proposed that combines the randomized low rank approximation techniques of BIB002 with the Nyström method. In BIB005 , it was shown how the Nyström method can be applied to find the SVD of general matrices.
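The basic Nyström scheme described above can be sketched in a few lines of numpy. The sketch uses uniform sampling without replacement (the scheme of the original method); non-uniform or adaptive sampling schemes would plug in at the index-selection line. Since W is SPSD, its SVD coincides with its eigendecomposition, which is what the code computes.

```python
import numpy as np

def nystrom(A, m, k, seed=0):
    """Rank-k Nystrom approximation of an SPSD matrix A from m sampled columns.

    A : (n, n) symmetric positive semidefinite array
    m : number of sampled columns (k <= m << n)
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=m, replace=False)   # uniform, no replacement
    C = A[:, idx]                                # n x m sampled columns
    W = C[idx, :]                                # m x m intersection block
    # eigendecomposition of W (equals its SVD, since W is SPSD)
    vals, vecs = np.linalg.eigh(W)
    top = np.argsort(vals)[::-1][:k]             # k largest eigenvalues
    # rank-k pseudoinverse W_k^+ = U_k diag(1/sigma_i) U_k^T
    Wk_pinv = (vecs[:, top] / vals[top]) @ vecs[:, top].T
    return C @ Wk_pinv @ C.T                     # n x n, rank <= k
```

With m = n and k equal to the exact rank of A, the approximation reproduces A exactly, which is a convenient sanity check.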
Cross/Skeleton approximation techniques
In this section we discuss in detail the cross algorithms, which give a low rank approximation of a matrix A_{m×n} using the crosses formed by selected columns and rows of the matrix. The computational complexity of these algorithms is linear in m and n, and they use only a small portion of the original matrix. As described in the last section, a cross/skeleton approximation of a matrix A is given by A ≃ CGR, where C (m×k) and R (k×n) consist of k selected columns and rows of A, and G = M^{−1}, where M (k×k) is the submatrix on the intersection of the crosses formed by the selected rows and columns of A. In , it has been shown that if A can be approximated with rank k within an accuracy ǫ, then one can choose the crosses such that

‖A − C M^{−1} R‖_2 ≤ ǫ (1 + 2√(km) + 2√(kn)),

provided that M is nonsingular. If M is singular or ill conditioned, then CM^{−1}R will not approximate A well, so the accuracy of the approximation depends on the choice of M. A good choice for M is a maximum volume submatrix, i.e., a submatrix whose determinant has maximum modulus among all k×k submatrices of A. Since the search for this submatrix is an NP-hard problem BIB003 , it is not feasible even for moderate values of m, n and k. In practice, such a submatrix M is replaced by matrices that can be computed by techniques like adaptive cross approximation BIB004 BIB001 , skeleton decomposition with a suboptimal maximum volume submatrix , and pseudoskeleton decomposition . The adaptive cross approximation (ACA) technique constructs the approximation adaptively: columns and rows are added iteratively until an error criterion is reached. In the pseudoskeleton decomposition, the matrix G is not necessarily equal to M^{−1} and need not even be nonsingular. Applications of these techniques can be found in BIB001 BIB002 . Here we describe all these techniques in detail.
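For concreteness, here is a minimal numpy sketch of ACA with partial pivoting. The residual-cross update is the standard one (subtract the cross through the pivot entry); the pivot strategy is a common simplified choice, the next row pivot being the largest entry of the last residual column among unused rows. Although the code takes a dense array for simplicity, only the rows and columns it touches are actually read, which is the point of the method.

```python
import numpy as np

def aca(A, tol=1e-8, max_rank=None):
    """Adaptive cross approximation with partial pivoting (sketch).

    Returns U (m x r) and V (r x n) with A ~ U @ V, adding one
    cross per iteration until the pivot falls below tol.
    """
    m, n = A.shape
    max_rank = max_rank or min(m, n)
    us, vs, used = [], [], set()
    i = 0                                        # current row pivot
    for _ in range(max_rank):
        # residual row i, evaluated without forming the full residual
        row = A[i, :] - sum(u[i] * v for u, v in zip(us, vs))
        j = int(np.argmax(np.abs(row)))          # column pivot
        if abs(row[j]) < tol:                    # stopping criterion
            break
        col = A[:, j] - sum(v[j] * u for u, v in zip(us, vs))
        us.append(col / row[j])                  # cross through pivot (i, j)
        vs.append(row)
        used.add(i)
        cand = [r for r in range(m) if r not in used]
        if not cand:
            break
        i = max(cand, key=lambda r: abs(col[r]))  # next row pivot
    U = np.array(us).T if us else np.zeros((m, 0))
    V = np.array(vs) if vs else np.zeros((0, n))
    return U, V
```

On a matrix of exact rank r the loop terminates after r crosses, illustrating the natural stopping criterion mentioned above.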
Skeleton decomposition
Consider a matrix A of order m×n. As described above, the matrix A is approximated by A ≈ CGR, where C and R contain k selected columns and rows respectively and G = M^{−1}, with M = A(I, J) of order k×k, the submatrix on the intersection of the selected columns and rows (I and J are the row and column index sets). Since obtaining the true maximum volume submatrix M is very difficult, we replace it by a quasioptimal maximum volume submatrix. How to find such a submatrix has been discussed in , where an algorithm (named "maxvol") has been developed whose complexity has been shown to be O(mk). The algorithm takes an m × k matrix as input and returns k row indices such that the intersection matrix M has almost maximal volume. To construct a rank k skeleton approximation of a given matrix A using the maxvol algorithm we follow the steps given below.

Step 1: Compute k columns of A given by the indices J = (j^(1), j^(2), . . . , j^(k)) and store them in a matrix C = A(:, J) of order m × k.

Step 2: Apply the maxvol procedure to C to find k row indices I = (i^(1), i^(2), . . . , i^(k)) such that the corresponding intersection matrix, say M = A(I, J), has almost maximal volume.

Step 3: Store the k rows in a matrix R = A(I, :) of order k × n.

Step 4: The skeleton decomposition is then A ≈ CGR, where G = M^{−1}.

Remark: If the column indices J = (j^(1), j^(2), . . . , j^(k)) were badly chosen at the beginning (perhaps because of some random strategy), this approximation might not be very good and the inverse of M might be unstable (nearly singular matrix). The inverse of M will also be unstable if the rank is overestimated .
To overcome this, after obtaining good row indices I, one can apply the maxvol procedure to the row matrix R to optimize the choice of the columns, and even alternate further until the determinant of M stays almost constant and the approximation is fine BIB001 . In the case of overestimated ranks, some remedy is still possible to obtain good accuracy .

Computational complexity: The complexity of the algorithm is O((m + n)k^2), and only k(m + n) of the original entries of A have to be computed. The storage required for the approximation is k(m + n). One can see from the algorithm that only a few entries of the original matrix are used in the final approximation. A quasioptimal error estimate for the skeleton approximation of a matrix has been derived in BIB003 : if the matrix M has maximal modulus determinant among all k × k submatrices of A, then

‖A − C M^{−1} R‖_∞ ≤ (k + 1) σ_{k+1}(A),

where ‖A‖_∞ is defined as the largest entry in absolute value of the matrix A (sup-norm) and σ_{k+1}(A) is the (k+1)-th singular value of A.
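The four steps above can be sketched in numpy. One caveat: in place of the maxvol procedure of the cited work, the sketch uses a greedy stand-in (row-pivoted Gram–Schmidt, which repeatedly picks the row with the largest component outside the span of the rows chosen so far); it also seeks a well-conditioned k×k submatrix, but it is not the maxvol algorithm itself.

```python
import numpy as np

def quasi_maxvol_rows(C):
    """Greedy stand-in for maxvol: pick k = C.shape[1] well-spread rows of C."""
    B = C.astype(float).copy()
    idx = []
    for _ in range(C.shape[1]):
        norms = np.linalg.norm(B, axis=1)
        i = int(np.argmax(norms))
        idx.append(i)
        q = B[i] / norms[i]
        B -= np.outer(B @ q, q)        # deflate the chosen direction
    return np.array(idx)

def skeleton(A, J):
    """Rank-k skeleton approximation A ~ C M^{-1} R from column indices J."""
    J = np.asarray(J)
    C = A[:, J]                        # step 1: selected columns
    I = quasi_maxvol_rows(C)           # step 2: quasi-maxvol row indices
    R = A[I, :]                        # step 3: selected rows
    M = A[np.ix_(I, J)]                # intersection submatrix
    return C @ np.linalg.solve(M, R)   # step 4: C M^{-1} R
```

If A has exact rank k and the k chosen columns span its column space, the skeleton reproduces A exactly, matching the discussion above that only k(m + n) entries of A are ever touched.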
Algorithm
Step 1: Choose k columns of A at random, given by the indices J = (j^(1), j^(2), . . . , j^(k)), and store them in a matrix C = A(:, J) of order m × k.

Step 2: Apply the maxvol procedure to C to get k row indices I = (i^(1), i^(2), . . . , i^(k)) and store the corresponding rows in a matrix R = A(I, :), such that the intersection matrix, say M = A(I, J), has almost maximal volume.

Step 3: Compute the SVD of M and let U_M, S_M and V_M be the resulting factors, M = U_M S_M V_M^T.

Step 4: Fix ǫ and find the r singular values satisfying σ_i > ǫ, i = 1, 2, . . . , r (r ≤ k).

Step 5: Truncate the matrices U_M, S_M and V_M according to the r retained singular values and store them in the matrices U_r, S_r and V_r respectively.

Step 6: Compute the pseudoinverse of M, i.e., M_r^† = V_r S_r^{−1} U_r^T.

Step 7: Finally, A is decomposed as A ≈ C̃ R̃, where C̃ = C V_r S_r^{−1} and R̃ = U_r^T R.

Computational complexity: The overall computational cost of the above algorithm is O(k^3 + mrk + nrk). The O(k^3) term is due to the SVD of M in step 3, and the mrk and nrk terms are due to the computations in step 7. Since k ≪ m, n and r ≤ k, the overall complexity of the algorithm is linear in m and n. Pseudoskeleton approximation is used to construct different tensor decomposition formats . Different applications of pseudoskeleton approximation can be found in BIB002 .

Error analysis: The error in the pseudoskeleton approximation has been studied in . It has been shown that if the matrix A can be approximated with rank r within an accuracy ǫ, then there exists a choice of r columns and r rows, i.e. C and R, and an intersection matrix M such that A ≃ CGR satisfies

‖A − CGR‖_2 ≤ ǫ (1 + 2√(rm) + 2√(rn)).

Here the columns of C and the rows of R are chosen such that their intersection M has maximal volume . In BIB003 a sublinear randomized algorithm for the skeleton decomposition has been proposed.
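The seven steps above can be sketched directly in numpy. As in the earlier skeleton sketch, the greedy row picker is a stand-in assumption for the maxvol procedure; everything else follows the listed steps, including the SVD truncation of M that makes the method robust when the target rank is overestimated.

```python
import numpy as np

def pseudoskeleton(A, k, eps=1e-10, seed=0):
    """Pseudoskeleton sketch: random columns, quasi-maxvol rows,
    SVD-truncated pseudoinverse of the intersection matrix M."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    J = rng.choice(n, size=k, replace=False)       # step 1: random columns
    C = A[:, J]
    # step 2: greedy quasi-maxvol row selection on C (stand-in for maxvol)
    B, I = C.astype(float).copy(), []
    for _ in range(k):
        i = int(np.argmax(np.linalg.norm(B, axis=1)))
        I.append(i)
        q = B[i] / np.linalg.norm(B[i])
        B -= np.outer(B @ q, q)
    R = A[I, :]
    M = A[np.ix_(I, J)]
    U, s, Vt = np.linalg.svd(M)                    # step 3
    r = int(np.sum(s > eps))                       # step 4: effective rank
    Ur, sr, Vr = U[:, :r], s[:r], Vt[:r, :].T      # step 5: truncation
    # step 6 is implicit: M_r^+ = Vr @ diag(1/sr) @ Ur.T
    C_tilde = C @ (Vr / sr)                        # step 7: C V_r S_r^{-1}
    R_tilde = Ur.T @ R                             #         U_r^T R
    return C_tilde @ R_tilde
```

The factors C̃ (m × r) and R̃ (r × n) are what one stores, so the memory cost is r(m + n), in line with the linear complexity noted above.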
Uniform sampling of l ≃ r log(max(m, n)) rows and columns is used to construct a rank r skeleton approximation. The computational complexity of the algorithm is shown to be O(l^3), and the following error estimate has been proved. Suppose A ≃ X_1 A_{11} Y_1^T, where X_1 and Y_1 have r orthonormal columns (not necessarily singular vectors of A) and A_{11} is not necessarily diagonal. If X_1 and Y_1 are incoherent, then an approximation error of the form

O((n/l) σ_r(A))

holds with high probability, where σ_r(A) is the r-th singular value of A.