in the five decades since antineutrinos were first detected using a nuclear reactor as the source , these facilities have played host to a large number of neutrino physics experiments . during this time our understanding of neutrino physics and the technologyused to detect antineutrinos have matured to the extent that it seems feasible to use these particles for nuclear reactor safeguards , as first proposed at this conference three decades ago .safeguards agencies , such as the iaea , use an ensemble of procedures and technologies to detect diversion of fissile materials from civil nuclear fuel cycle facilities into weapons programs .nuclear reactors are the step in the fuel cycle at which plutonium is produced , so effective reactor safeguards are especially important .current reactor safeguards practice is focused upon tracking fuel assemblies through item accountancy and surveillance , and does not include direct measurements of fissile inventory . while containment and surveillance practices are effective , they are also costly and time consuming for both the agency and the reactor operator. therefore the prospect of using antineutrino detectors to non - intrusively _ measure _ the operation of reactors and the evolution of their fuel is especially attractive .the most likely scenario for antineutrino based cooperative monitoring ( e.g. iaea safeguards ) will be the deployment of relatively small ( cubic meter scale ) detectors within a few tens of meters of a reactor core .neutrino oscillation searches conducted at these distances at rovno and bugey in the 1990 s were in many ways prototypes that demonstrated much of the physics required . once the neutrino oscillation picture became clear at the start of this decade , all the pieces were in place to begin development of detectors specifically tailored to the needs of the safeguards community .longer range monitoring , e.g. that described in , would also be attactive , but will likely require significant advances before becoming feasible .a more detailed treatment of this topic can be found in a recent review of reactor antineutrino experiments .antineutrino emission by nuclear reactors arises from the beta decay of neutron - rich fragments produced in heavy element fissions .these reactor antineutrinos are typically detected via the inverse beta decay process on quasi - free protons in a hydrogenous medium ( usually scintillator ) : .time correlated detection of both final state particles provides powerful background rejection . for the inverse beta process ,the measured antineutrino energy spectrum , and thus the average number of detectable antineutrinos produced per fission , differ significantly between the two major fissile elements , and ( 1.92 and 1.45 average detectable antineutrinos per fission , respectively ) . hence , as the reactor core evolves and the relative mass fractions and fission rates of and change ( fig .[ fig : fisrates]a ) , the number of detected antineutrinos will also change .this relation between the mass fractions of fissile isotopes and the detectable antineutrino flux is known as the burnup effect . following the formulation of , it is instructive to write : where is the antineutrino detection rate , is the reactor thermal power, is a constant encompassing all non varying terms ( e.g. 
detector size , detector / core geometry ) , and describes the change in the antineutrino flux due to changes in the reactor fuel composition .typically , commercial pressurized water reactors ( pwrs ) are operated at constant thermal power , independent of the ratio of fission rates from each fissile isotope .pwrs operate for 1 - 2 years between refuelings , at which time about one third of the core is replaced . between refuelings fissile is produced by neutron capture on . operating in this mode , the factor and therefore the antineutrino detection rate decreases by about over the course of a reactor fuel cycle ( fig .[ fig : fisrates]b ) , depending on the initial fuel loading and operating history .therefore , one can see from eq .[ eq : nu_d_rate2 ] that measurements of the antineutrino detection rate can provide information about both the thermal power of a reactor and the evolution of its fuel composition .these two parameters can not , however , be determined independently by this method , e.g. to track fuel evolution one would need independent knowledge of the reactor power history .measurement of the antineutrino energy spectrum may allow for the independent determination of the fuel evolution , and relative measurements over day to month time scales where varies little allow for tracking of short term changes in the thermal power .this last point may be of safeguards interest , since it is only when the reactor is off that the fissile fuel can be accessed at a pwr , and the integral of the power history constrains the amount of fissile pu that can be produced .there are many efforts underway around the world to explore the potential of antineutrino based reactor safegaurds .the evolution of these efforts is summarized in the agenda of the now regular applied antineutrino physics ( aap ) workshops . at present ,these efforts are funded by a variety of national agencies acting independently , but there is frequent communication between the physicists involved at the aap meetings .this nascent aap community is hopeful that recent iaea interest ( sec .[ sec : iaea ] ) will result in a formal request from the iaea to the national support programs ( the programs within member states that conduct research and development requested by the iaea ) .such a request that clearly laid out the the needs of the agency with respect to detector size , sensitivity , etc , would allow for better coordination between these respective national programs , and would be an important step towards the development of device for use under iaea auspices .as mentioned above , the concept of using antineutrinos to monitor reactor was first proposed by mikaelian , and the rovno experiment was amongst the first to demonstrate the correlation between the reactor antineutrino flux , thermal power , and fuel burnup .several members of the original rovno group continue to develop antineutrino detection technology , e.g. 
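Following the formulation sketched above, in which the detection rate is proportional to the thermal power times a slowly varying fuel-composition factor, the size of the burnup effect can be illustrated numerically. Only the per-fission detectable yields quoted in the text (1.92 for U-235 and 1.45 for Pu-239) are taken from the article; the two-isotope simplification and the start- and end-of-cycle fission fractions below are hypothetical placeholders chosen purely for illustration.

```python
# Illustrative sketch of the burnup effect at constant thermal power.
# The detectable antineutrinos per fission (1.92, 1.45) are the values quoted
# in the text; the fission-fraction trajectory is a hypothetical placeholder
# and only the two dominant fissile isotopes are included.

YIELD = {"U235": 1.92, "Pu239": 1.45}   # detectable antineutrinos per fission

def rate_per_fission(f_u235: float) -> float:
    """Average detectable antineutrinos per fission, attributing the
    remainder of the fissions to Pu-239 in this two-isotope picture."""
    return f_u235 * YIELD["U235"] + (1.0 - f_u235) * YIELD["Pu239"]

# Hypothetical start-of-cycle and end-of-cycle U-235 fission fractions
start_frac, end_frac = 0.70, 0.45
drop = 1.0 - rate_per_fission(end_frac) / rate_per_fission(start_frac)
print(f"Relative decline in detection rate over the cycle: {drop:.1%}")
```

With these placeholder fractions the detection rate at fixed power falls by several percent over the cycle, which is the qualitative behaviour of the burnup effect shown in fig. [fig:fisrates]b.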
developing new gd liquid scintillator using a linear alkylbenzene ( lab ) solvent .they propose to build a cubic meter scale detector specifically for reactor safeguards , and to deploy it at a reactor in russia .a collaboration between the sandia national laboratories ( snl ) and the lawrence livermore national laboratory ( llnl ) has been developing antineutrino detectors for reactor safeguards since about 2000 .our particular focus is on demonstrating to both the physics and safeguards communities that antineutrino based monitoring is feasible .this involves developing detectors that are simple to construct , operate , and maintain , and that are sufficiently robust and utilize materials suitable for a commercial reactor environment , all while maintaining a useful sensitivity to reactor operating parameters .the songs1 detector was operated at the san onofre nuclear generating station ( songs ) between 2003 and 2006 .the active volume comprised tons of gd doped liquid scintillator contained in stainless steel cells ( stainless was used to avoid any chance of acrylic degradation and liquid leakage ) .this was surrounded by a water / polyethylene neutron - gamma shield and plastic scintillator muon veto ( fig .[ fig : songs1 ] ) .the detector was located in the tendon gallery of one of the two pwrs at songs , about m from the reactor core and under about m.w.e .galleries of this type , which are part of a system for post - tensioning the containment concrete , are a feature of many , but not all , reactor designs. it may therefore be important to consider detector designs that can operate with little or no overburden .the songs1 detector was operated in a completely automatic fashion .automatic calibration and analysis procedures were implemented and antineutrino detection rate data was transmitted to snl / llnl in near real time .an example of the ability to track changes in reactor thermal power is given in fig .[ fig : power ] .a reactor scram ( emergency shutdown ) could be observed within 5 hours of its occurrence at confidence . integrating the antineutrino detection rate data over a hour period yielded a relative power monitoring precision of about , while increasing the averaging period to days yielded a precision of about .increasing the averaging time to days allowed observation of the fuel burnup ( fig .[ fig : burnup ] ) .the relatively simple calibration procedure was able to maintain constant detector efficiency to better than 1% over the month observation period .the decrease in rate due to fuel evolution ( burnup ) and the step increase in rate expected after refueling ( exchange of pu laden fuel for fresh fuel containing only u ) were both clearly observed .songs1 was very successful , demonstrating that a relatively simple and compact detector could be operated non - intrusively at a commercial reactor for a long period . however , much of the feedback received from the safeguards community focussed upon the use of a flammable liquid scintillator .as used in the songs1 deployment this scintillator presented no safety hazard to the operation of the reactor - all relevant safety codes and procedures were checked and strictly adhered to .however , deployment of that flammable material did require some compliance effort from the reactor operator . 
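The power-monitoring precisions quoted above are ultimately limited by counting statistics. The sketch below assumes the detection rate is simply proportional to thermal power and neglects backgrounds and efficiency drifts; it shows how the fractional statistical uncertainty on relative power scales with averaging time. The detection rate used is a hypothetical placeholder rather than the SONGS1 value.

```python
# Counting-statistics sketch: fractional precision on relative thermal power
# versus averaging time, assuming rate is proportional to power and purely
# Poisson statistics (backgrounds and efficiency drifts neglected).
# The rate below is a hypothetical placeholder, not the SONGS1 detection rate.
from math import sqrt

detections_per_day = 400.0          # hypothetical net antineutrino rate

for days in (1/24, 5/24, 1, 7, 30):
    counts = detections_per_day * days
    precision = 1.0 / sqrt(counts)  # sigma(N)/N for Poisson-distributed counts
    print(f"{days*24:7.1f} h average: ~{precision:.1%} statistical precision on relative power")
```

The same statistics govern scram detection: a full-power-to-zero rate change becomes significant once the expected number of "missing" events is several times its Poisson uncertainty.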
in a safeguards contextsuch situations should be avoided if at all possible .therefore , we decided to develop and deploy two detectors based upon non - flammable materials .the first of these was based upon a plastic scintillator material .our goal was retain as much similarity between this device and songs1 as possible .therefore we wished to use gd as the neutron capture agent ; this was achieved by using m x m x cm slabs of bc-408 plastic scintillator and interleaving them with a gd loaded layer .the total active volume of this detector was m .this device was deployed at songs in 2007 , and was clearly able to observe reactor outages like that shown in fig .[ fig : power ] .we continue to analyze the data from this detector and expect to be able to observe fuel burnup also . inspired by the gadzooks concept presented at this conference in 2004 , we are also investigating the use of gd - doped water as an antineutrino detector .this detection medium should have the advantage of being insensitive to the correlated background produced by cosmogenic fast neutrons that recoil from a proton and then capture .a liter tank of purified water containing 0.1% gd by weight was been deployed in the songs tendon gallery .initially , this detector was deployed with little passive shielding - the large uncorrelated background rate that resulted has made it difficult to identify a reactor antineutrino signal much beyond a level . in this configuration gd - doped water also appears promising for use in special nuclear material search applications .the coherent scattering of an ( anti)neutrino from a nucleus is a long predicted but as yet unobserved standard model process .it is difficult to observe since the signal is a recoiling nucleus with just a few kev of energy .nonetheless , this process holds great promise for reactor monitoring since it has cross - section several orders of magnitude higher than that for inverse beta decay , which might eventually yield significantly smaller monitoring detectors . to explore the prospects for this process, we are currently collaborating with the collar group of the university of chicago in deploying an ultra - low threshold ge crystal at songs .we are also investigating the potential of dual phase argon detectors .the double chooz collaboration plans to use the double chooz near detector ( m from the two chooz reactors ) for a precision non - proliferation measurement .the double chooz detectors will represent the state - of - the - art in antineutrino detection , and will be able to make a benchmark measurement of the antineutrino energy spectrum emitted by a commercial pwr .there is also a significant effort underway within double chooz to improve the reactor simulations used to predict reactor fission rates and the measurements of the antineutrino energy spectrum emitted by the important fissioning isotopes .this work is necessary for the physics goals of double chooz , but it will also greatly improve the precision with which the fuel evolution of a reactor can be measured .the double chooz near detector design is too complex and costly for widespread safeguards use .therefore , the double chooz groups in cea / saclay , in2p3-subatech , and apc plan to apply the technology developed for double chooz , in particular detector simulation capabilities and high flash - point liquid scintillator , to the development of a compact antineutrino detector for safeguards : nucifer . 
the emphasis of this design will be on maintaining high detection efficiency ( ) and good energy resolution and background rejection .nucifer will be commissioned against research reactors in france during 2009 - 2010 .following the commissioning phase nucifer will be deployed against a commercial pwr , where it is planned to measure reactor fuel evolution using the antineutrino energy spectrum .an effort to develop a compact antineutrino detector for reactor safeguards is also underway in brazil , at the angra dos rios nuclear power plant .several deployment sites near the larger of the two reactors at angra have been negotiated with the plant operator and detector design is well underway .this effort is particularly interesting , as a third reactor is soon to be built at the angra site , within which space may be specially reserved for a detector , and because in addition to the iaea , there is a regional safeguards organization ( abbac ) monitoring operations .such regional agencies often pioneer the use of new safeguards technologies , and the detector deployment at angra may occur with abbac involvement .a prototype detector for the kaska theta-13 experiment has been deployed at the joyo fast research reactor in japan .this effort is notable , since it is an attempt to observe antineutrinos with a compact detector at a small research reactor in a deployment location with little overburden .this may be typical of the challenging environment in which the iaea has recently expressed interest in applying this technique ( sec . [sec : iaea ] ) . not surprisingly , this effort has encountered large background rates , and to date has identified no clear reactor antineutrino signal .the iaea is aware of the developments occurring in this field .a representative from the iaea novel technologies group attended the most recent aap meeting , and expressed an interest in using this monitoring technique at research reactors .the iaea currently uses a device at these facilities that requires access to the primary coolant loop .needless to say , this is quite invasive and an antineutrino monitoring technique would clearly be superior in this respect .an experts meeting at iaea headquarters will occur in october of 2008 to discuss the capabilities of current and projected antineutrino detection techniques and the needs of the iaea ._ applications _ of neutrino physics must have seemed somewhat fanciful when first discussed at this conference several decades ago .but even with currently available technologies , useful reactor monitoring appears feasible , as demonstrated by the songs1 results .the iaea has expressed interest in this technique and the applied antineutrino physics community eagerly awaits their guidance as to the steps required to add antineutrino based reactor monitoring to the safeguards toolbox .llnl - proc-406953 .this work was performed under the auspices of the u.s .department of energy by lawrence livermore national laboratory in part under contract w-7405-eng-48 and in part under contract de - ac52 - 07na27344 .00 a. porta , et .al . , `` reactor antineutrino detection for thermal power measurement and non - proliferation purposes '' in proc .physics of reactors : `` nuclear power : a sustainable resource '' ( 2008 )
Nuclear reactors have served as the antineutrino source for many fundamental physics experiments. The techniques developed by these experiments make it possible to use these very weakly interacting particles for a practical purpose. The large flux of antineutrinos that leaves a reactor carries information about two quantities of interest for safeguards: the reactor power and the fissile inventory. Measurements made with antineutrino detectors could therefore offer an alternative means for verifying the power history and fissile inventory of a reactor, as part of International Atomic Energy Agency (IAEA) and other reactor safeguards regimes. Several efforts to develop this monitoring technique are underway across the globe.
in recent years , fifth generation ( 5 g ) wireless networks have attracted extensive research interest . according to the 3rd generation partnership project ( 3gpp ) , 5 g networks should support three major families of applications , including enhanced mobile broadband ( embb ) ; massive machine type communications ( mmtc ) ; and ultra - reliable and low - latency communications ( urllc ) . on top of this , enhanced vehicle - to - everything ( ev2x ) communications are also considered as an important service that should be supported by 5 g networks .these scenarios require massive connectivity with high system throughput and improved spectral efficiency ( se ) and impose significant challenges to the design of general 5 g networks . in order to meet these new requirements , new modulation and multiple access ( ma )schemes are being explored .orthogonal frequency division multiplexing ( ofdm ) has been adopted in fourth generation ( 4 g ) networks . with an appropriate cyclic prefix ( cp ) , ofdm is able to combat the delay spread of wireless channels with simple detection methods , which makes it a popular solution for current broadband transmission .however , traditional ofdm is unable to meet many new demands required for 5 g networks .for example , in the mmtc scenario , sensor nodes usually transmit different types of data asynchronously in narrow bands while ofdm requires different users to be highly synchronized , otherwise there will be large interference among adjacent subbands .to address the new challenges that 5 g networks are expected to solve , various types of modulation have been proposed , such as filtering , pulse shaping , and precoding to reduce the out - of - band ( oob ) leakage of ofdm signals . filtering is the most straightforward approach to reduce the oob leakage and with a properly designed filter , the leakage over the stop - band can be greatly suppressed .pulse shaping can be regarded as a type of subcarrier - based filtering that reduces overlaps between subcarriers even inside the band of a single user , however , it usually has a long tail in time domain according to the heisenberg - gabor uncertainty principle . introducing precoding to transmit data before ofdm modulation is also an effective approach to reduce leakage .in addition to the aforementioned approaches to reduce the leakage of ofdm signals , some new types of modulations have also been proposed specifically for 5 g networks .for example , to deal with high doppler spread in ev2x scenarios , transmit data can be modulated in the delay - doppler domain .the above modulations can be used with orthogonal multiple access ( oma ) in 5 g networks .oma is core to all previous and current wireless networks ; time - division multiple access ( tdma ) and frequency - division multiple access ( fdma ) are used in the second generation ( 2 g ) systems , code - division multiple access ( cdma ) in the third generation ( 3 g ) systems , and orthogonal frequency division multiple access ( ofdma ) in the 4 g systems . for these systems ,resource blocks are orthogonally divided in time , frequency , or code domains , and therefore there is minimal interference among adjacent blocks and makes signal detection relatively simple . however , oma can only support limited numbers of users due to limitations in the numbers of orthogonal resources blocks , which limits the se and the capacity of current networks . 
to support a massive number of and dramatically different classes of users and applications in 5 g networks ,various noma schemes have been proposed . as an alternative to oma, noma introduces a new dimension by perform multiplexing within one of the classic time / frequency / code domains . in other words, noma can be regarded as an `` add - on '' , which has the potential to be harmoniously integrated with existing ma techniques .the core of noma is to utilize power and code domains in multiplexing to support more users in the same resource block .there are three major types of noma : power - domain noma , code - domain noma , and noma multiplexing in multiple domains .with noma , the limited spectrum resources can be fully utilized to support more users , therefore the capacity of 5 g networks can be improved significantly even though extra interference and additional complexity will be introduced at the receiver . to address the various challenges of 5 g networks, we can either develop novel modulation techniques to reduce multiple user interference for oma or directly use noma .the rest of this article is organized as follows . in section [ sec : waveform ] , novel modulation candidates for oma in 5 g networks are compared . in section [ sec : ma ] , various noma schemes are discussed .section [ sec : conclusion ] concludes the article .in this section , we will discuss new modulation techniques for 5 g networks .since ofdm is widely used in current wireless systems and standards , many potential modulation schemes for 5 g networks are delivered from ofdm for backward compatibility reasons .therefore , we will first introduce traditional ofdm . denote , for , to be the transmit complex symbols .then the baseband ofdm signal can be expressed as for , where , is the subcarrier bandwidth and is the symbol duration . to ensure that transmit symbols can be recovered without distortion , , which is also called the orthogonal condition .it can be easily shown that if the orthogonal condition holds .denote to be the sampled version of , where .it can be easily seen that is the inverse discrete fourier transform ( idft ) of , which can be implemented by fast fourier transform ( fft ) and significantly simplifies ofdm modulation and demodulation . to address the delay spread of wireless channels , a cp is usually used in ofdm .if the length of the cp is larger than the delay span ( the duration between the first and the last taps / paths of a channel ) , then the demodulated ofdm signal can be expressed as where is the frequency response of the wireless channel at and is the impact of additive channel noise .therefore , the channel distortion becomes a multiplication of channel frequency response in ofdm systems while it is convolution in single - carrier systems , which makes the detection of ofdm signal much easier . from the above discussion ,ofdm can effectively deal with the delay spread of broadband wireless channels and fft can be used to significantly simplify its complexity , therefore it has been widely used in the current wireless communication systems and standards . however , as we can see from ( [ eq : ofdmsig ] ) , the ofdm signal is time - limited .therefore , its oob leakage is pretty high , especially when users are asynchronized as typical of 5 g networks . 
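As a concrete illustration of the CP-OFDM chain just described, the minimal sketch below (with arbitrary parameters: 64 subcarriers, a 16-sample cyclic prefix, and a 3-tap channel) shows that once the CP exceeds the channel delay span, each subcarrier sees only a single complex gain and one-tap equalization recovers the transmitted symbols.

```python
import numpy as np

rng = np.random.default_rng(0)
N, CP = 64, 16                                   # subcarriers and CP length (arbitrary)
X = (2*rng.integers(0, 2, N) - 1) + 1j*(2*rng.integers(0, 2, N) - 1)   # QPSK symbols

# Transmitter: IDFT (via IFFT) followed by cyclic-prefix insertion
x = np.fft.ifft(X)
tx = np.concatenate([x[-CP:], x])

# Multipath channel whose delay span (3 taps) is shorter than the CP; noiseless here
h = np.array([0.8, 0.4 + 0.2j, 0.1])
rx = np.convolve(tx, h)[:len(tx)]

# Receiver: drop the CP, take the DFT, and equalize with the channel
# frequency response H_k (one complex tap per subcarrier)
Y = np.fft.fft(rx[CP:CP + N])
H = np.fft.fft(h, N)
X_hat = Y / H

print("max symbol error:", np.max(np.abs(X_hat - X)))   # close to machine precision
```

The per-subcarrier division by H_k is exactly the "multiplication by the channel frequency response" property described above; in a single-carrier system the same channel would have to be equalized as a convolution.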
to address this issue, a guard band is usually inserted between the signals of two adjacent users in the frequency domain in addition to a cp or a guard interval in the time domain , which reduces the se of ofdm .this is even more severe for the users using a narrow frequency band .5 g networks have to support not only a massive number of users but also dramatically different types of users that have different demands .traditional ofdm can no longer satisfy these requirements , and therefore novel modulation techniques with much lower oob leakage are required .the new modulation techniques for 5 g networks currently need to consider backward compatibility with traditional ofdm systems but should also have the following key features to address the new challenges . 1 .high se : new modulation techniques should be able to mitigate oob leakage among adjacent users so that the system se can be improved significantly by reducing the guard band / time resources .2 . loose synchronization requirements : massive number of users are expected to be supported , especially for the internet of things ( iot ) , which makes synchronization difficult . therefore, new modulation techniques are expected to accept asynchronous scenarios .3 . flexibility : the modulation parameters ( e.g. , subcarrier width and symbol period ) for each user should be configured independently and flexibly to support users with different data rate requirements. the modulation techniques for oma mainly include pulse shaping , subband filtering , precoding design , guard interval ( gi ) shortening , and modulation in the delay - doppler domain . in this section ,we introduce those promising modulation techniques subsequently .pulse shaping , which is also regarded as subcarrier - based filtering , can effectively reduce oob leakage . according to the heisenberg - gabor uncertainty principle , the time and frequency widths of the pulsescan not be reduced at the same time .therefore , the waveforms based on pulse shaping is usually non - orthogonal in both time and frequency domains to maintain high se .compared with traditional ofdm , the transceiver structure supporting pulse shaped modulation is more complex . here , we introduce two typical modulations based on pulse shaping , i.e. , filter bank multicarrier ( fbmc ) and generalized frequency division multiplexing ( gfdm ) . as shown in fig . [fig : fbmc ] , fbmc consists of idft and dft , synthesis and analysis polyphase filter banks .the prototype filter in fbmc performs the pulse shaping .there are two types of typical pulses : the pulse based on the isotropic orthogonal transform algorithm ( iota ) and the pulse adopted in the phydyas project .the length of the pulse in the time domain is determined by the required performance and is usually several times the length of the symbol period .the bandwidth of the pulse , which is different from the pulse in the traditional ofdm that has a long tail , is limited within a few subbands . 
to achieve the best se , offset quadrature amplitude modulation ( oqam )is usually applied to make fbmc real - domain orthogonal in time and frequency domains .therefore , the transmit signal over consecutive block periods can be expressed as where and are the numbers of subcarriers and symbols , respectively , is the transmit symbol at subcarrier and symbol , and is the prototype filter coefficient at the -th time - domain sample .it is worth noting that the transmit symbols here refer to the pulse amplitude modulation ( pam ) symbols that are derived from the staggering of quadrature amplitude modulation ( qam ) symbols .thus the interval between two adjacent blocks is only half of the block period due to the offset in oqam .the parameter , in ( [ eq : fbmcsig ] ) , is defined as which is used to form the oqam structure . with a properly designed prototype filter such as iota and the oqam structure ,the interference from the nearby overlapped symbols caused by a matched filter ( mf ) receiver becomes pure imaginary , which can be easily cancelled .[ fig : gfdm ] demonstrates the block diagram of gfdm .ofdm and single - carrier frequency division multiplexing ( sc - fdm ) can be regarded as two special cases of gfdm .the unique feature of gfdm is to use circular shifted filters , rather than linear filters that are used in fbmc , to perform pulse shaping . by carefully choosing the circular filter ,the out - of - block leakage can be reduced even if the orthogonality is completely given up .we can flexibly adjust frequency samples and time samples for a gfdm block according to the application environment .the transmit signal for each gfdm block can be expressed as for , where is the transmit symbol on subcarrier at subsymbol and is the circular time and frequency shifted version of the prototype pulse shaping filter . in ( [ eq : gfdmsig ] ) , where denotes the modulo operation and is the prototype pulse shaping filter .similar to the traditional ofdm , the modulation process and demodulation process can be expressed by matrix operations .the idft and dft matrices in the traditional ofdm are substituted by some specific matrices corresponding to the modulation and demodulation for gfdm .but , the transceiver structure of gfdm is significantly different from the traditional ofdm . besides fbmc and gfdm ,other modulations based on pulse shaping , such as pulse - shaped ofdm and qam - fbmc , have also been proposed for 5 g networks .generally , modulations based on pulse shaping try to restrict transmit signals within a narrow bandwidth and thus mitigate the oob leakage so that they can work in asynchronous scenarios with a narrow guard band .fbmc also uses oqam to achieve real - domain orthogonality , which saves the cost of the gi and interference cancellation . in addition , the circular shifted filters in gfdm avoid the long tail of the linear filters in the time domain , which makes gfdm fit for sporadic transmission .furthermore , gfdm is easily compatible to mimo technologies .subband filtering is another technique to reduce the oob leakage .universal filtered multicarrier ( ufmc ) and filtered ofdm ( f - ofdm ) are two typical modulations based on subband filtering , which will be introduced next . fig .[ fig : ufmc ] shows the transmitter and the receiver structures of ufmc . 
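Before turning to UFMC, the circularly shifted pulse shaping that defines GFDM can be made concrete as a modulation matrix. The block size (4 subcarriers, 3 subsymbols) and the Hann-shaped prototype pulse below are illustrative choices only; practical designs typically use root-raised-cosine prototypes and much larger blocks. Because the circularly shifted pulses are not orthogonal, a zero-forcing (pseudo-inverse) demodulator is used here instead of a matched filter.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 4, 3                      # subcarriers and subsymbols per block (illustrative)
N = K * M                        # samples per GFDM block

# Illustrative smooth prototype pulse spanning the whole block
n = np.arange(N)
g = 0.5 * (1 - np.cos(2 * np.pi * (n + 0.5) / N))
g /= np.linalg.norm(g)

# Modulation matrix: column (k, m) is the prototype circularly shifted by m*K
# samples and modulated to subcarrier k, i.e. g[(n - m*K) mod N] * exp(j*2*pi*k*n/K)
A = np.zeros((N, K * M), dtype=complex)
for k in range(K):
    for m in range(M):
        A[:, m * K + k] = np.roll(g, m * K) * np.exp(2j * np.pi * k * n / K)

d = (2*rng.integers(0, 2, K*M) - 1) + 1j*(2*rng.integers(0, 2, K*M) - 1)
x = A @ d                        # one GFDM block

# Zero-forcing demodulation (a matched filter would leave self-interference,
# since the circularly shifted pulses are not orthogonal)
d_hat = np.linalg.pinv(A) @ x
print("max recovery error:", np.max(np.abs(d_hat - d)))
```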
in ufmc ,the subbands are with equal size , and each filter is a shifted version of the same prototype filter .ofdm is applied within a subband for this modulation as shown in the figure .since the bandwidth of the filter in ufmc is much wider than that of the modulations based on the pulse shaping , the length in time domain is much shorter .therefore , interference caused by the tail of the filter can be easily eliminated by adopting a zero - padding ( zp ) prefix with a reasonable length . assuming that subcarriers are divided into subbands , each with consecutive subcarriers , the transmit signal in ufmc can be expressed as where is the filter coefficient of subband , and is the ofdm modulated signal over subband that can be expressed as with denoting the length of the zp , denoting the number of symbol blocks and denoting the signal at subcarrier and symbol .in ( [ eq : ufmcsubsig ] ) , can be expressed as where denotes the -th transmit symbol at the -th symbol block . at the receiver , the signal at each symbol interval is with the length of and is zero - padded to have a length of so that a -point fft can be performed . please note that only the even subcarriers are considered for signal detection after the -point fft .f - ofdm has a similar transmitter structure as ufmc .the main difference is that f - ofdm employs a cp and usually allows residual inter - symbol interference ( isi ) .therefore , at the receiver , the mf is applied instead of the zp and decimation . besides, downsampling can be applied before the dft operation , which can reduce complexity significantly since the cp can mitigate most of interference caused by the tail of the filter ; the residual interference is with much lower power and can be treated as noise .thus , the filter in f - ofdm can be longer than that in ufmc and has better attenuation outside the band . with the aid of effective channel coding ,the performance degradation caused by residual interference in f - ofdm can be negligible .another difference from ufmc is that the subcarrier spacing and the cp length do not have to be the same for different users in f - ofdm .the most widely used filter in f - ofdm is the soft - truncated sinc filter , which can be easily used in various applications with different parameters .therefore , f - ofdm is very flexible in the frequency multiplexing . besides ufmc and f - ofdm , other modulations based on subband filtering have also been proposed .for example , resource block f - ofdm ( rb - f - ofdm ) utilizes filters based on resource block instead of the whole band of users in f - ofdm . in general , modulations based on subband filtering can effectively reduce oob leakage and achieve better performance in comparison with the traditional ofdm .apart from pulse shaping and subband filtering , there are also some other techniques to suppress the oob leakage and meet the requirements of 5 g networks . in the following ,we mainly introduce three other modulations , including guard interval discrete fourier transform spread ofdm ( gi dft - s - ofdm ) , spectrally - precoded ofdm ( sp - ofdm ) , and orthogonal time frequency and space ( otfs ) . in gi dft - s - ofdm , the known sequence is used as the gi instead of a cp .several types of the known sequences , such as the zero sequence and a well - designed unique word , can be used . 
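A minimal sketch of the UFMC structure just described follows: per-subband OFDM and a frequency-shifted Dolph-Chebyshev subband filter at the transmitter, then zero-padding to 2N samples, a 2N-point FFT, and selection of the even bins at the receiver. All parameters (64 subcarriers, 16 subbands of 4 subcarriers, a 16-tap filter with 40 dB sidelobe attenuation) are arbitrary illustration choices, the channel is ideal, and the known filter gain is simply divided out per subcarrier.

```python
import numpy as np
from scipy.signal import chebwin

rng = np.random.default_rng(2)
N = 64                        # total subcarriers (illustrative)
B, Q = 16, 4                  # B subbands of Q consecutive subcarriers each
L = 16                        # subband filter length (illustrative)

proto = chebwin(L, at=40)     # Dolph-Chebyshev prototype, 40 dB sidelobes
proto = proto / np.sum(proto)
n = np.arange(L)

X = (2*rng.integers(0, 2, N) - 1).astype(complex)     # BPSK on all subcarriers

def subband_filter(b):
    """Prototype filter shifted to the centre of subband b (cycles/sample)."""
    center = (b*Q + (Q - 1)/2) / N
    return proto * np.exp(2j*np.pi*center*n)

# Transmitter: per-subband IFFT followed by the shifted subband filter
tx = np.zeros(N + L - 1, dtype=complex)
for b in range(B):
    Xb = np.zeros(N, dtype=complex)
    Xb[b*Q:(b+1)*Q] = X[b*Q:(b+1)*Q]
    tx += np.convolve(np.fft.ifft(Xb), subband_filter(b))

# Receiver (ideal channel): zero-pad to 2N, take a 2N-point FFT, keep the even
# bins, then undo the known filter gain on each subcarrier
Y = np.fft.fft(np.concatenate([tx, np.zeros(2*N - len(tx))]))[::2]
F = np.zeros(N, dtype=complex)
for b in range(B):
    F[b*Q:(b+1)*Q] = np.fft.fft(subband_filter(b), N)[b*Q:(b+1)*Q]
X_hat = Y / F

print("max error over an ideal channel:", np.max(np.abs(X_hat - X)))
```

Keeping only the even bins of the 2N-point FFT restores the original N-subcarrier grid, since the zero-padded symbol is 2N samples long while the subcarrier spacing is defined on N samples.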
by a fixedknown sequence with constant amplitude in gi dft - s - ofdm , the peak - to - average power ratio ( papr ) of the modulated signal can be reduced .moreover , the known sequence can also be utilized to estimate the parameters , such as the carrier frequency offset ( cfo ) in the synchronization process . by utilizing a proper sequence as the gi , the discontinuity between the adjacent time blocks in the traditional ofdm / dft - s - ofdm can be avoided . as a result, the oob leakage is reduced . for gi dft - s - ofdm ,the overall length of the gi and useful signal for different users is same .thus , the dft windows for different users at the receiver can still be aligned even if the lengths of the gis are different .therefore , the mutual interference due to asynchronization of users can be mitigated .[ fig : spofdm ] shows the diagram of sp - ofdm . from the figure , it consists idft and dft , spectral precoder , and iterative detector .generally , the data symbols mapped on subcarriers are precoded by a rank - deficient matrix in order to project the signal into a properly selected lower dimensional subspace so that the precoded signal can be high - order continuous , and results in much lower leakage compared with the traditional ofdm .even if precoded by a rank - deficient matrix can reduce the capacity of the channel , the oob leakage of the ofdm signals can be significantly suppressed at the cost of only few reduced dimensions . compared to the modulations based on filtering , sp - ofdm has the following three advantages : * the isi caused by the tail of the filters can be removed without filtering .therefore , the cp applied to combat the multipath of the wireless channels can be shorter , and se is improved .* when fragmented bands are used , sp - ofdm can easily notch specific well chosen frequencies without requiring multiple narrow subband filters .* furthermore , precoding and filtering can be combined to further improve the performance .the structure of otfs is similar to sp - ofdm , as can be seen in fig .[ fig : spofdm ] .the main difference is that the spectral precoder and the iterative detector are substituted by the two - dimensional ( 2d ) symplectic fourier transform and the corresponding inverse transform modules .otfs maps the symbols in the delay - doppler domain . through a 2d symplectic fourier transform, the corresponding data in the time - frequency domain can be calculated .then , the calculated data can be transmitted via a time - frequency - domain modulation method as in ofdm .since the 2d symplectic fourier transform is relatively independent of the time - frequency - domain modulation method , pulse shaping and subband filtering can also be applied together to further reduce the leakage in otfs .when a mobile is with a high speed , the channel experiences fast fading .channel parameters need to be estimated and tracked very often therefore , which significantly increases resource costs .moreover , most of the modulations are designed assuming that channels are constant within a symbol block . 
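The 2D symplectic Fourier transform at the heart of OTFS can be sketched in a few lines. The sketch below assumes rectangular transmit and receive pulses and an ideal channel, so it only demonstrates the transform chain (delay-Doppler grid to time-frequency grid to time-domain samples and back); the grid size is arbitrary and no channel, pilot, or equalizer is modelled.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 8, 6                   # delay bins x Doppler bins per frame (illustrative)
x_dd = (2*rng.integers(0, 2, (M, N)) - 1).astype(complex)   # BPSK on the delay-Doppler grid

# Inverse symplectic finite Fourier transform: delay-Doppler -> time-frequency.
# It acts as an inverse DFT along the Doppler axis and a forward DFT along the delay axis.
X_tf = np.fft.fft(np.fft.ifft(x_dd, axis=1, norm="ortho"), axis=0, norm="ortho")

# Time-frequency -> time domain: one OFDM-like multicarrier symbol per column
# (rectangular pulse assumed)
s = np.fft.ifft(X_tf, axis=0, norm="ortho")

# Receiver over an ideal channel: invert each step
Y_tf = np.fft.fft(s, axis=0, norm="ortho")
x_hat = np.fft.fft(np.fft.ifft(Y_tf, axis=0, norm="ortho"), axis=1, norm="ortho")

print("max recovery error:", np.max(np.abs(x_hat - x_dd)))
```

Note that with rectangular pulses the per-column IFFT cancels the delay-axis DFT of the ISFFT, so the modulator collapses to a single IDFT along the Doppler axis; the two-step form is kept here because it mirrors the description above.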
with a high mobility speeds ,extra interference is introduced , which degrades the performance .however , in the delay - doppler domain , the high doppler channel can be expressed in a stable model , which saves the cost of tracking the time - varying fading and improves performance therefore .otfs can be also applied to estimate channel state information ( csi ) of different antennas in mimo systems .generally , the delay and doppler dispersions are still relatively small compared to the system scale . in this case, the channel can be expressed in a compact and stable form in the delay - doppler domain . as a result ,the spread of the pilots caused by the channel are local , which enables to estimate the csi of different antennas in mimo systems by different pilots within a small area of the delay - doppler plane .in addition , a number of modulations based on other techniques have been also proposed , such as windowed ofdm ( w - ofdm ) , which utilizes windowing to deal with the discontinuity between adjacent ofdm symbols .we compare the power spectral density ( psd ) and bit - error rate ( ber ) of different modulations .suppressing the oob leakage is a key purpose for most of the modulation candidates for 5 g networks .the psds of the some modulations are shown in fig .[ fig : psd ] . from the figure ,all modulations achieve much lower leakage compared to the traditional ofdm . among them, ufmc applies subband filtering and also has low leakage , and fbmc and f - ofdm have the lowest leakage .gfdm , gi dft - s - ofdm , and sp - ofdm , although do not reduce the leakage as much as fbmc and f - ofdm , can still achieve much better performance than the traditional ofdm . in order to reduce the oob leakage ,many modulations utilize techniques , such as pulse shaping and subband filtering , which may introduce isi and ici .hence , the ber performance of different modulations is compared here .[ fig : ber ] shows the ber performance versus signal - to - noise ratio ( snr ) when the doppler spread and hz . from fig .[ fig : ber ] ( a ) , the traditional ofdm has the best performance when the doppler spread is zero ( ) since the isi caused by the multipath has been completely canceled by the cp . since the bandwidth of each subcarrier is small enough to make the corresponding channel approximately flat , the isi introduced by pulse shaping in fbmc is nearly pure imaginary .therefore , fbmc is approximately orthogonal in the real domain and achieves good ber performance .the performance of ufmc , gfdm , and sp - ofdm is similar to that of fbmc , which is degraded slightly due to noise enhancement and low - projection precoding .however , f - ofdm introduces extra isi that can not be completely canceled , and as a result , it has slightly worse performance , especially in the high snr region .gi dft - s - ofdm and otfs , which are different from the modulation schemes that directly map the symbols on subcarriers , apply spreading before mapping so that their performance does not approach that of ofdm .since the fast - fading channel is difficult to be estimated and tracked accurately , the performance of the most modulation schemes degrades significantly as we can see from fig .[ fig : ber ] ( b ) . while otfs can still achieve good performance due to its specific channel estimation method .moreover , its performance in the high - mobility scenario is even better than that in the zero doppler shift scenario because of doppler diversity . 
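The flavour of the PSD comparison in fig. [fig:psd] can be reproduced with a generic numerical experiment. The sketch below compares a plain CP-OFDM burst against the same burst passed through a simple lowpass FIR filter; the filter is only a stand-in for the actual F-OFDM/UFMC designs, and the occupied band, filter length, and out-of-band region are arbitrary choices.

```python
import numpy as np
from scipy.signal import welch, firwin

rng = np.random.default_rng(4)
N, CP, nsym = 64, 16, 400
occupied = np.r_[1:9, 56:64]          # 16 subcarriers around DC (illustrative)

# Plain CP-OFDM burst with QPSK on the occupied subcarriers
sym = np.zeros((nsym, N), dtype=complex)
sym[:, occupied] = (2*rng.integers(0, 2, (nsym, len(occupied))) - 1) \
                   + 1j*(2*rng.integers(0, 2, (nsym, len(occupied))) - 1)
t = np.fft.ifft(sym, axis=1)
ofdm = np.concatenate([t[:, -CP:], t], axis=1).ravel()

# Crude "filtered OFDM": the same burst through a lowpass FIR
# (an illustration of filtering only, not the actual f-OFDM filter design)
taps = firwin(129, cutoff=0.15, fs=1.0)
filtered = np.convolve(ofdm, taps, mode="same")

def oob_level_db(x):
    """Mean PSD beyond |f| = 0.3 cycles/sample, relative to the in-band peak."""
    f, p = welch(x, fs=1.0, nperseg=512, return_onesided=False)
    return 10*np.log10(np.mean(p[np.abs(f) > 0.3]) / np.max(p))

print(f"out-of-band level, plain OFDM   : {oob_level_db(ofdm):7.1f} dB")
print(f"out-of-band level, filtered OFDM: {oob_level_db(filtered):7.1f} dB")
```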
in this section ,modulation techniques for 5 g networks will be be discussed .these techniques can be used with oma to effectively deal with the oob leakage in 5 g networks . however , there are still many open issues in the area .a potential application of f - ofdm is iot . in this scenario ,the subbands are narrow and therefore , interference caused by a short cp can significantly degrade the performance and should be considered in the detection . to improve the detection performance , additional processing , such as filtering or successive interference cancellation ( sic ) , is needed .residual isi cancellation ( risic ) could be helpful .the existing designs for subband filtering , such as dolph - chebyshev filter in ufmc and soft - truncated filter in f - ofdm , are with fixed length .however , different users and different application scenarios will have different requirements on the leakage levels , filter lengths , etc . according to the heisenberg - gabor uncertainty principle , the time and frequency dispersions are dual variables that can not be reduced at the same time .therefore , how to balance the time and frequency dispersions and design an efficient prototype filter according to application scenarios is interesting .similar to the traditional ofdm , multi - carrier based new modulation candidates , such as fbmc and ufmc also have a large papr . in order to improve the efficiency of the power amplifier, the papr should be reduced .the traditional papr reduction methods applied in the traditional ofdm usually introduce distortions that degrade the performance .therefore , how to properly extend the papr reduction methods in the traditional ofdm to the new modulations is an interesting and meaningful issue .in order to support higher throughput and massive and heterogeneous connectivity for 5 g networks , we can adopt novel modulations discussed in section ii for oma , or directly use noma with effective interference mitigation and signal detection methods .the key features of noma can be summarized as follows : 1 .improved se : noma exhibits a high se , which is attributed to the fact that it allows each resource block ( e.g. , time / frequency / code ) to be exploited by multiple users .2 . ultra high connectivity : with the capability to support multiple users within one resource block , noma can potentially support massive connectivity for billions of smart devices .this feature is quite essential for iot scenarios with users that only require very low data rates but with massive number of users .3 . relaxed channel feedback : in noma, perfect uplink csi is not required at the base station ( bs ) .instead , only the received signal strength needs to be included in the channel feedback .low transmission latency : in the uplink of noma , there is no need to schedule requests from users to the bs , which is normally required in oma schemes . 
as a result , a grant - free uplink transmission can be established in noma , which reduces the transmission latency drastically .existing noma schemes can be classified into three categories : power - domain noma , code - domain noma , and noma multiplexing in multiple domains .we will introduce them subsequently with emphasis on power - domain noma .power - domain noma is considered as a promising ma scheme for 5 g networks .specifically , a downlink version of noma , named multiuser superposition transmission ( must ) , has been proposed for the 3gpp long - term evolution advanced ( 3gpp - lte - a ) networks .it has been shown that system capacity and user experiences can be improved by noma .more recently , a new work item ( wi ) outlining downlink multiuser superposition transmission for lte has been approved by 3gpp lte release 14 , which aims to identify the necessary techniques to enable lte to support the downlink intra - cell multiuser superposition transmission . here, we will expand upon the basic principles of various power - domain noma related techniques , including multiple antenna based noma , power allocation in noma , and cooperative noma .power - domain noma , as illustrated in fig .[ fig : noma ] for the two user case , deviates from conventional oma that uses tdma / fdma / cdma / ofdma allocating orthogonal resource blocks for different users to avoid the multiple access interference ( mai ) .instead power - domain noma can support multiple users within the same resource block by distinguishing them with different power levels . as a result , noma is able to support more connectivity and provide higher throughput with limited resources .the downlink transmission of noma for the two user case is shown in fig .[ fig : noma ] where the users are served at the same time / frequency / code resource block with a total power constraint .specifically , the bs sends a superimposed signal containing the two signals for the two users .this differs from conventional power allocation strategies , such as water filling , as noma allocates less power for the users with better downlink csi , to guarantee overall fairness and to utilize diversity in the time / frequency / code domains .sic is used for signal detection at the receiver .the user with more transmit power , that is , the one with smaller downlink channel gain , is first to be decoded while treating the other user s signal as noise .once the signal corresponding to the user with the larger transmit power is detected and decoded , its signal component will be subtracted from the received signal to facilitate the detection of subsequent users .it should be noted that the first detected user is with the largest inter - user interference and also the detection error in the first user will pass to the other user , which is why we have to allocate sufficient power to the first user to be detected .the extension of noma from two to multiple user cases is straightforward . for the uplink transmission of noma , the transmit power is limited by each individual user .different from the downlink , the transmit powers of the users using the same resource block are carefully controlled so that the received signal components at the bs corresponding to the users with the better csi , have more powers . at the receiver ( the bs ), the user with the best csi is decoded first . 
after that, the corresponding component is removed from the received signal .the sic receiver works in a descending order of the csi , which is the opposite to the downlink case .[ fig : noma_oma ] compares noma and oma where two users are served by the same bs if noma is adopted . from the figure ,the noma scheme achieves a lower outage probability .however , by adopting noma , a more complex transmitter and receiver are required to mitigate the interference .furthermore , power - domain noma usually works well when only two or a few users share the same resource block . as the number of users multiplexing in power domain increases , the mai becomes severe and the performance of noma degrades .multiple antenna techniques can provide an additional degree of freedom on the spatial domain , and bring further performance improvements to noma .recently , multiple antenna based noma has attracted lots of attention .different from single - input - single - output ( siso ) based noma , where the channels are normally represented by scalars , one of the research challenges in multiple antenna based noma comes from user ordering ; as the channels are generally in form of vectors or matrices .currently , the possible designs of multiple antenna based noma fall into two categories where one or multiple users are served by a single beamforming vector . by allocating different users with different beams in the same resource block , the quality of service ( qos ) of each user can be guaranteed in multiple antenna based noma systems forcing the beams to satisfy a predefined order .this type of multiple antenna based noma scheme has been first proposed by sun _ et al . _ in to investigate power optimization to maximize the ergodic capacity .this proposed multiple antenna based noma scheme has proved to be able to achieve significant performance improvement compared with conventional oma schemes .a cluster of users can share the same beam .the spatial channels of different users within the same cluster are considered to be highly correlated .therefore , beams for different clusters should be carefully designed to guarantee that the channels for different clusters are orthogonal to each other in order to suppress the inter - cluster interference . for multiple - input - single - output ( miso )based noma , a two - stage multicast beamforming scheme has been proposed by choi in , where zf beamforming has been employed to mitigate interference from adjacent clusters first and then the optimal beamforming vectors have been designed to minimize the total transmit power within each cluster . for mimobased noma , a scheme to simultaneously apply open - loop random beamforming and intra - beam sic , has been proposed by higuchi and kishiyama in .however , here the system performance is considerably degraded as the random beamforming can bring uncertainties at the user side .more recently , a precoding and detection framework with fixed power allocation has been proposed by ding _ et al ._ to solve these problems caused by random beamforming , and demonstrated that mimo based noma can achieve better outage performance than mimo based oma even for users who experience strong co - channel interference .a comprehensive summary for the state - of - the - art work on multiple antenna based noma is given in table [ table : mimo noma ] , where `` bf '' , `` op '' , `` su '' and `` mu '' are used to represent beamforming , outage probability , two - user and multi - users cases , respectively . 
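To make the power-domain superposition and SIC procedure described above concrete, the following is a minimal two-user downlink sketch. The channel gains, power split, noise level, and BPSK signalling are hypothetical placeholders; a practical system would use higher-order constellations, fading channels, channel coding, and optimized rather than fixed power allocation.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
h1, h2 = 0.3, 1.0                 # flat channel gains: user 1 weak, user 2 strong (placeholders)
a1, a2 = 0.8, 0.2                 # power fractions: more power to the weak-channel user
noise_std = 0.05                  # placeholder noise level

b1 = 2*rng.integers(0, 2, n) - 1  # BPSK symbols for user 1
b2 = 2*rng.integers(0, 2, n) - 1  # BPSK symbols for user 2
x = np.sqrt(a1)*b1 + np.sqrt(a2)*b2           # superposed downlink signal (unit total power)

y1 = h1*x + noise_std*rng.standard_normal(n)  # received at user 1
y2 = h2*x + noise_std*rng.standard_normal(n)  # received at user 2

# User 1 (weak channel, high power): decode directly, treating user 2 as noise
b1_hat = np.sign(y1)

# User 2 (strong channel, low power): SIC - detect user 1's symbols first,
# subtract their reconstruction, then detect its own symbols
b1_at_2 = np.sign(y2)
residual = y2 - h2*np.sqrt(a1)*b1_at_2
b2_hat = np.sign(residual)

print("user 1 BER:", np.mean(b1_hat != b1))
print("user 2 BER after SIC:", np.mean(b2_hat != b2))
```

The ordering matters: the high-power signal is detected first at the strong user, and any error there propagates into the second stage, which is why sufficient power must be allocated to the first-decoded user, as noted above.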
[Table [table:comparison]: comparison of the NOMA schemes discussed in this section; only the row fragment "no need for user clustering / specific channel coding" is recoverable.]

Several NOMA schemes have been discussed in this section. Although they rely on different techniques, these schemes share the same spirit: exploiting non-orthogonality to increase system capacity and to support more users with a limited set of resource blocks. Beyond the existing work, more research is needed to improve the performance of these NOMA schemes in the following respects. The MPA-SIC detection method is usually applied in SCMA and PDMA, and the user clustering mechanism strongly affects its performance. When users are asynchronous, those with similar time delays should be placed in the same cluster for better performance. If the delays vary widely among users within the same cluster, inter-user interference becomes large and may break the sparse structure. A multi-branch technique can be applied to improve performance by regarding each candidate clustering as a branch; by computing each branch in parallel and selecting the best result as the final one, performance can be improved relative to a single-clustering approach. The joint design of new modulation and NOMA schemes is an important direction to be explored in 5G networks. Some NOMA schemes, especially LDS-based code-domain NOMA, are built on OFDM, where the output of the sparse spreading matrix is mapped onto orthogonal subcarriers. In general, how to properly combine a modulation scheme with a NOMA scheme is still under investigation. For example, when SCMA is combined with F-OFDM, the short CP of F-OFDM can introduce ISI and ICI when the subband is narrow and degrade SCMA detection performance. If the RISIC algorithm is adopted to cancel the interference introduced by F-OFDM, the multiuser detection of SCMA must be included in the CP-reconstruction iterations, which calls for jointly designed receivers. The design of modulation and MA schemes for high frequency bands (above 40 GHz) is beginning to receive increased interest. The millimeter-wave (mmWave) and terahertz (THz) bands appear to be good candidates for easing spectrum scarcity, given what current circuit design makes available. However, the propagation properties of the mmWave and THz bands are quite poor, which brings new challenges to system design. For example, noise is the major limitation in the mmWave and THz bands, which makes transmit power levels extremely important and ultimately constrains the classes of applications that can use them (e.g., IoT). Moreover, high-level impairments, including carrier frequency offset (CFO) and phase noise, also need to be considered in the mmWave and THz bands since they are noise-limited. Nevertheless, there is already a study on NOMA-based mmWave communications, and we may see further analyses of such systems under practical scenarios in the future. In this article, we provide a comprehensive survey covering the major promising candidates for modulation and multiple access (MA) in fifth generation (5G) networks. From our discussion, we can see that new modulations for orthogonal MA can be adopted to reduce out-of-band leakage while meeting the diverse demands of 5G networks. Non-orthogonal MA is another promising approach that marks a deviation from the previous generations of wireless networks.
by utilizing non - orthogonality, we have convincingly shown that 5 g networks will be able to provide enhanced throughput and massive connectivity with improved spectral efficiency .h. sampath , s. talwar , j. tellado , v. erceg , and a. paulraj , `` a fourth - generation mimo - ofdm broadband wireless system : design , performance , and field trial results , '' _ ieee commun . mag ._ , vol .40 , no . 9 , pp . 143149 , sep .2002 .v. vakilian , t. wild , f. schaich , s.brink , and j. f. frigon , `` universal - filtered multi - carrier technique for wireless systems beyond lte , '' in _ proc .ieee globecom workshops ( gc wkshps ) _ , atlanta , ga , usa , dec . 2013 , pp . 223228 . f. schaich , t. wild , and y. chen , `` waveform contenders for 5 g - suitability for short packet and low latency transmissions , '' in _ proc .( vtc spring ) _ , seoul , korea , may 2014 , pp .j. abdoli , m. jia , and j. ma , `` filtered ofdm : a new waveform for future wireless systems , '' in _ proc .ieee 16th int . workshop signal process . adv .wireless commun .( spawc ) _ , stockholm , sweden , jun .2015 , pp . 6670 .x. zhang , m. jia , l. chen , j. ma , and j. qiu , `` filtered - ofdm - enabler for flexible waveform in the 5th generation cellular networks , '' in _ proc .ieee global commun .( globecom ) _ , san diego , ca , usa , dec . 2015 , pp . 16 .m. bellanger , m. renfors , t. ihalainen , and c.a.f .da rocha , `` ofdm and fbmc transmission techniques : a compatible high performance proposal for broadband power line communications , '' in _ int .. power line commun . andits applications ( isplc ) _ , rio de janeiro , brazil , mar .2010 , pp . 154159 .n. michailow , m. matth , i.s .gaspar , a. n. caldevilla , l. l. mendes , a. festag , and g. fettweis , `` generalized frequency division multiplexing for 5th generation cellular networks , '' _ ieee trans ._ , vol .62 , no . 9 , pp . 30453061 , sep .2014 .a. sahin , i. guvenc , and h. arslan , `` a survey on multicarrier communications : prototype filters , lattice structures , and implementation aspects , '' _ ieee commun .surveys tu ._ , vol . 16 , no . 3 , pp . 13121338 , third quarter , 2014 .z. zhao , m. schellmann , q. wang , x. gong , r. boehnke , and w. xu , `` pulse shaped ofdm for asynchronous uplink access , '' in _ asilomar conf .signals , systems and computers _ , monterey , usa , nov .2015 , pp . 37 . c. kim , k. kim , y. h. yun , z. ho , b. lee , and j. y. seol , `` qam - fbmc : a new multi - carrier system for post - ofdm wireless communications , '' in _ proc .ieee global commun . conf .( globecom ) _ , san diego , ca , usa , dec . 2015 , pp . 16 .g. berardinelli , f. m. l. tavares , t. b. sorensen , p. mogensen , and k. pajukoski , `` zero - tail dft - spread - ofdm signals , '' in _ proc .ieee globecom workshops ( gc wkshps ) _ , atlanta , ga , usa , dec . 2013 , pp . 229234 . a. sahin , r. yang , m. ghosh , and r. l. olesen , `` an improved unique word dft - spread ofdm scheme for 5 g systems , '' in _ proc .ieee globecom workshops ( gc wkshps ) _ , san diego , ca , usa , dec . 2015 , pp .p. achaichia , m. l. bot , and p. siohan , `` windowed ofdm versus ofdm / oqam : a transmission capacity comparison in the homeplug av context , '' in _ int .. power line commun . and its applications ( isplc ) _ , udine , italy , apr .2011 , pp .405410 x. li and l. j. cimini , `` effects of clipping and filtering on the performance of ofdm , '' _ ieee commun . lett ._ , vol . 2 , no . 5 , pp .131133 , may 1998 .k. higuchi and y. 
fifth generation ( 5 g ) wireless networks face various challenges in order to support large - scale heterogeneous traffic and users , therefore new modulation and multiple access ( ma ) schemes are being developed to meet the changing demands . as this research space is ever increasing , it becomes more important to analyze the various approaches , therefore in this article we present a comprehensive overview of the most promising modulation and ma schemes for 5 g networks . we first introduce the different types of modulation that indicate their potential for orthogonal multiple access ( oma ) schemes and compare their performance in terms of spectral efficiency , out - of - band leakage , and bit - error rate . we then pay close attention to various types of non - orthogonal multiple access ( noma ) candidates , including power - domain noma , code - domain noma , and noma multiplexing in multiple domains . from this exploration we can identify the opportunities and challenges that will have significant impact on the design of modulation and ma for 5 g networks . 5 g , modulation , non - orthogonal multiple access .
current research on human dynamics is limited to data collected under normal and stationary circumstances, capturing the regular daily activity of individuals. yet, there is exceptional need to understand how people change their behavior when exposed to rapidly changing or unfamiliar conditions, such as life-threatening epidemic outbreaks, emergencies and traffic anomalies, as models based on stationary events are expected to break down under these circumstances. such rapid changes in conditions are often caused by natural, technological or societal disasters, from hurricanes to violent conflicts. the possibility to study such real-time changes has emerged recently thanks to the widespread use of mobile phones, which track both user mobility and real-time communications along the links of the underlying social network. here we take advantage of the fact that mobile phones act as _in situ_ sensors at the site of an emergency, to study the real-time behavioral patterns of the local population under external perturbations caused by emergencies. advances in this direction not only help redefine our understanding of information propagation and cooperative human actions under externally induced perturbations, which is the main motivation of our work, but also offer a new perspective on panic and emergency protocols in a data-rich environment. our starting point is a country-wide mobile communications dataset, culled from the anonymized billing records of approximately ten million mobile phone subscribers of a mobile company that covers about one-fourth of subscribers in a country with close to full mobile penetration. it provides the time and duration of each mobile phone call, together with information on the tower that handled the call, thus capturing the real-time locations of the users (methods, supporting information s1, fig. a). to identify potential societal perturbations, we scanned media reports pertaining to the coverage area between january 2007 and january 2009 and developed a corpus of times and locations for eight societal, technological, and natural emergencies, ranging from bombings to a plane crash, earthquakes, floods and storms (table 1). approximately 30% of the events mentioned in the media occurred in locations with sparse cellular coverage or during times when few users are active (like very early in the morning). the remaining events do offer, however, a sufficiently diverse corpus to explore the generic vs. unique changes in the activity patterns in response to an emergency. here we discuss four events, chosen for their diversity: (1) a bombing, resulting in several injuries (no fatalities); (2) a plane crash resulting in a significant number of fatalities; (3) an earthquake whose epicenter was outside our observation area but affected the observed population, causing mild damage but no casualties; and (4) a power outage (blackout) affecting a major metropolitan area (supporting information s1, fig. b).
to distinguish emergencies from other events that cause collective changes in human activity, we also explored eight planned events, such as sports games, a popular local sports race and several rock concerts. we discuss here in detail a cultural festival and a large pop music concert as non-emergency references (table 1; see also supporting information s1). the characteristics of the events not discussed here due to length limitations are provided in supporting information s1, sec. i for completeness and comparison. as shown in fig. [fig:combinedtimeseries:rawtimeseries], emergencies trigger a sharp spike in call activity (number of outgoing calls and text messages) in the physical proximity of the event, confirming that mobile phones act as sensitive local "sociometers" to external societal perturbations. the call volume starts decaying immediately after the emergency, suggesting that the urge to communicate is strongest right at the onset of the event. we see virtually no delay between the onset of the event and the jump in call volume for events that were directly witnessed by the local population, such as the bombing, the earthquake and the blackout. a brief delay is observed only for the plane crash, which took place in an unpopulated area and thus lacked eyewitnesses. in contrast, non-emergency events, like the festival and the concert in fig. [fig:combinedtimeseries:rawtimeseries], display a gradual increase in call activity, a noticeably different pattern from the "jump-decay" pattern observed for emergencies. see also supporting information s1, figs. i and j. to compare the magnitude and duration of the observed call anomalies, in fig. [fig:combinedtimeseries:normedtimes] we show the temporal evolution of the relative call volume as a function of time, defined through ΔV(t) = V(t) − ⟨V(t)⟩, where V(t) is the call activity during the event and ⟨V(t)⟩ is the average call activity during the same time period of the week. as fig. [fig:combinedtimeseries:normedtimes] indicates, the magnitude of ΔV correlates with our relative (and somewhat subjective) sense of the event's potential severity and unexpectedness: the bombing induces the largest change in call activity, followed by the plane crash; whereas the collective reactions to the earthquake and the blackout are somewhat weaker and comparable to each other. while the relative change was also significant for non-emergencies, the emergence of the call anomaly is rather gradual and spans seven or more hours, in contrast with the jump-decay pattern lasting only three to five hours for emergencies (fig. [fig:combinedtimeseries:normedtimes], supporting information s1, figs. i and j). as we show in fig. [fig:combinedtimeseries:changenrho] (see also supporting information s1, sec. c), the primary source of the observed call anomaly is a sudden increase of calls by individuals who would normally not use their phone during the emergency period, rather than increased call volume by those that are normally active in the area. the temporally localized spike in call activity (fig. [fig:combinedtimeseries:rawtimeseries]) raises an important question: is information about the events limited to the immediate vicinity of the emergency, or do emergencies, often immediately covered by national media, lead to spatially extended changes in call activity?
we therefore inspected the change in call activity in the vicinity of the epicenter, finding that for the bombing, for example, the magnitude of the call anomaly is strongest near the event, and drops rapidly with the distance from the epicenter (fig. [fig:spatialprops:bombmaps]). to quantify this effect across all emergencies, we integrated the call volume over time in concentric shells of radius r centered on the epicenter (fig. [fig:spatialprops:dvforsomers]). the decay is approximately exponential, falling off roughly as exp(−r/r_d), allowing us to characterize the spatial extent of the reaction with a decay rate r_d (fig. [fig:spatialprops:dvintegratedvsr]); a minimal sketch of this fit is given after this passage. the observed decay rates range from their smallest value, for the bombing, up to 10 km for the plane crash, indicating that the anomalous call activity is limited to the event's vicinity. an extended spatial range is seen only for the earthquake, which lacks a narrowly defined epicenter. meanwhile, a distinguishing pattern of non-emergencies is their highly localized nature: they are characterized by a decay rate well below those of the emergencies, implying that the call anomaly was narrowly confined to the venue of the event. this systematic split in r_d between the spatially extended emergencies and well-localized non-emergencies persists for all explored events (see table 1, supporting information s1, fig. k). despite the clear temporal and spatial localization of anomalous call activity during emergencies, one expects some degree of information propagation beyond the eyewitness population. we therefore identified the group G_0 of individuals located within the event region, as well as a group G_1 consisting of individuals outside the event region who receive calls from the G_0 group during the event, a group G_2 that receives calls from G_1, and so on. we see that the G_0 individuals engage their social network within minutes, and that the G_1, G_2, and occasionally even the G_3 group show an anomalous call pattern immediately after the anomaly (fig. [fig:incsocialdist:dvforsomegi]). this effect is quantified in fig. [fig:incsocialdist:dvintegratedvsgi], where we show the increase in call volume for each group as a function of their social-network-based distance from the epicenter (for example, the social distance of the G_2 group is 2, being two links away from the G_0 group), indicating that the bombing and plane crash show strong, immediate social propagation up to the third and second neighbors of the eyewitness population, respectively. the earthquake and blackout, less threatening emergencies, show little propagation beyond the immediate social links of G_0, and social propagation is virtually absent in non-emergencies. the nature of the information cascade behind the results shown in fig. [fig:incsocialdist:dvforsomegi] is illustrated in fig. [fig:incsocialdist:bombcascnet], where we show the individual calls between users active during the bombing. in contrast with the information cascade triggered by the emergency witnessed by the G_0 users, there are practically no calls between the same individuals during the previous week.
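the spatial decay rate quoted above can be extracted with a simple log-linear fit. the sketch below assumes the anomalous call volume has already been integrated over the event and binned into concentric shells around the epicenter; the radii and volumes are made-up numbers used only to show the procedure, not data from any of the events.

```python
import numpy as np

# hypothetical data: anomalous call volume integrated over the event, binned by distance (km) from the epicenter
r = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])       # shell radii in km (assumed)
dV = np.array([950., 640., 270., 55., 9., 1.5])     # integrated anomalous volume per shell (assumed)

# fit dV(r) ~ A * exp(-r / r_d) with a least-squares line in log space
slope, intercept = np.polyfit(r, np.log(dV), 1)
r_d = -1.0 / slope                                  # decay rate in km

print(f"estimated spatial decay rate r_d = {r_d:.2f} km")
```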
to quantify the magnitude of the information cascade we measured the length of the paths emanating from the G_0 users, finding them to be considerably longer during the emergency (fig. [fig:incsocialdist:bombpaths]) compared to five non-emergency periods, demonstrating that the information cascade penetrates deep into the social network, a pattern that is absent during normal activity. see also supporting information s1, figs. e, f, g, h, l, m, n, and o, and table a. the existence of such prominent information cascades raises tantalizing questions about who contributes to information propagation about the emergency. using self-reported gender information available for most users (see supporting information s1), we find that during emergencies female users are more likely to make a call than expected based on their normal call patterns. this gender discrepancy holds for the G_0 (eyewitness) and G_1 groups, but is absent for non-emergency events (see supporting information s1, sec. e). we also separated the total call activity of G_0 and G_1 individuals into voice and text messages (including sms and mms). for most events (the earthquake and blackout being the only exceptions), the voice/text ratios follow the normal patterns (supporting information s1, fig. d), indicating that users continue to rely on their preferred means of communication during an emergency. the patterns identified above allow us to dissect complex events, such as an explosion in an urban area preceded by an evacuation starting approximately one hour before the blast. while a call volume anomaly emerges right at the start of the evacuation, it levels off, and the jump-decay pattern characteristic of an emergency does not appear until the real explosion (fig. [fig:bomb2:temp]). the spatial extent of the evacuation response is significantly smaller than the one observed during the event itself (compare the decay rates for the evacuation and for the explosion, see fig. [fig:bomb2:sptl]). during the evacuation, social propagation is limited to the G_0 and G_1 groups only (fig. [fig:bomb2:soct]), while after the explosion we observe a communication cascade that activates the G_2 users as well. the lack of strong propagation during evacuation indicates that individuals tend to be reactive rather than proactive, and that a real emergency is necessary to initiate a communication cascade that effectively spreads emergency information. the results of figs. [fig:combinedtimeseries]-[fig:bomb2] not only indicate that the collective response of the population to an emergency follows reproducible patterns common across diverse events, but they also document subtle differences between emergencies and non-emergencies. we therefore identified four variables that take different characteristic values for emergencies and non-emergencies: (i) the midpoint fraction, defined as (t_1/2 − t_start)/(t_end − t_start), where t_start and t_end are the times when the anomalous activity begins and ends, respectively, and t_1/2 is the time when half of the total anomalous call volume has occurred; (ii) the spatial decay rate r_d capturing the extent of the event; (iii) the relative size of each information cascade, representing the ratio between the number of users in the event cascade and the cascade tracked during normal periods; and (iv) the probability for users to contact existing friends (instead of placing calls to strangers).
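the variables (i), (iii) and (iv) are straightforward to compute once the event interval and the cascades have been extracted; the spatial decay rate (ii) can be fitted as in the earlier sketch. the helper below is a hypothetical implementation with assumed input arrays and counts, intended only to make the definitions explicit.

```python
import numpy as np

def event_features(t, v_event, v_normal, cascade_size_event, cascade_size_normal,
                   calls_to_friends, total_calls):
    """compute variables (i), (iii) and (iv) described above.
    all inputs are hypothetical arrays/counts prepared elsewhere; t, v_event and v_normal
    must cover only the anomalous interval [t_start, t_end]."""
    dv = np.clip(v_event - v_normal, 0, None)        # anomalous call volume per time bin
    cum = np.cumsum(dv)
    i_half = np.searchsorted(cum, 0.5 * cum[-1])     # bin where half the anomalous volume is reached
    midpoint_fraction = (t[i_half] - t[0]) / (t[-1] - t[0])       # (i) temporal midpoint fraction

    cascade_ratio = cascade_size_event / cascade_size_normal      # (iii) relative cascade size
    p_friends = calls_to_friends / total_calls                    # (iv) probability of calling existing friends
    return midpoint_fraction, cascade_ratio, p_friends
```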
in fig .[ fig : summaryprops ] we show these variables for all 16 events , finding systematic differences between emergencies and non - emergencies . as the figure indicates , a multidimensional variable , relying on the documented changes in human activity , can be used to automatically distinguish emergency situations from non - emergency induced anomalies .such a variable could also help real - time monitoring of emergencies , from information about the size of the affected population , to the timeline of the events , and could help identify mobile phone users capable of offering immediate , actionable information , potentially aiding search and rescue .rapidly - evolving events such as those studied throughout this work require dynamical data with ultra - high temporal and spatial resolution and high coverage .although the populations affected by emergencies are quite large , occasionally reaching thousands of users , due to the demonstrated localized nature of the anomaly , this size is still small in comparison to other proxy studies of human dynamics , which can exploit the activity patterns of millions of internet users or webpages .meanwhile , emergencies occur over very short timespans , a few hours at most , whereas much current work on human dynamics relies on longitudinal datasets covering months or even years of activity for the same users ( e.g. ) , integrating out transient events and noise .but in the case of emergencies , such transient events are precisely what we wish to quantify . given the short duration and spatially localized nature of these events , it is vital to have extremely high coverage of the entire system , to maximize the availability of critical information during an event . to push human dynamics research into such fast - moving events requires new tools and datasets capable of extracting signals from limited data . we believe that our research offers a first step in this direction . in summary , similar to how biologists use drugs to perturb the state of a cell to better understand the collective behavior of living systems , we used emergencies as external societal perturbations , helping us uncover generic changes in the spatial , temporal and social activity patterns of the human population . 
starting from a large-scale, country-wide mobile phone dataset, we used news reports to gather a corpus of sixteen major events, eight unplanned emergencies and eight scheduled activities. studying the call activity patterns of users in the vicinity of these events, we found that unusual activity rapidly spikes for emergencies, in contrast with non-emergency-induced anomalies that build up gradually before the event; that the call patterns during emergencies are exponentially localized regardless of event details; and that affected users will only invoke the social network to propagate information under the most extreme circumstances. when this social propagation does occur, however, it takes place in a very rapid and efficient manner, so that users three or even four degrees from eyewitnesses can learn of the emergency within minutes. these results not only deepen our fundamental understanding of human dynamics, but could also improve emergency response. indeed, while aid organizations increasingly use the distributed, real-time communication tools of the 21st century, much disaster research continues to rely on low-throughput, post-event data, such as questionnaires, eyewitness reports, and communication records between first responders or relief organizations. the emergency situations explored here indicate that, thanks to the pervasive use of mobile phones, collective changes in human activity patterns can be captured in an objective manner, even at surprisingly short time-scales, opening a new window on this neglected chapter of human dynamics. we use a set of anonymized billing records from a western european mobile phone service provider. the records cover approximately 10 m subscribers within a single country over 3 years of activity. each billing record, for voice and text services, contains the unique identifiers of the caller placing the call and the callee receiving the call; an identifier for the cellular antenna (tower) that handled the call; and the date and time when the call was placed. coupled with a dataset describing the locations (latitude and longitude) of cellular towers, we have the approximate location of the caller when placing the call. for full details, see supporting information s1, sec. a. to find an event in the mobile phone data, we need to determine its time and location. we have used online news aggregators, particularly the local `news.google.com` service, to search for news stories covering the country and time frame of the dataset. keywords such as 'storm', 'emergency', 'concert', etc. were used to find potential news stories. important events such as bombings and earthquakes are prominently covered in the media and are easy to find. study of these reports, which often included photographs of the affected area, typically yields precise times and locations for the events. reports would occasionally conflict about specific details, but this was rare. we take the _reported_ start time of the event as our reference time. to identify the beginning and ending of an event, t_start and t_end, we adopt the following procedure. first, identify the event region (a rough estimate is sufficient) and scan all its calls during a large time period covering the event (e.g., a full day), giving the event time series V_event(t). then, scan calls for a number of "normal" periods, those modulo one week from the event period, exploiting the weekly periodicity of call activity.
these normal-period time series are averaged to give the typical activity ⟨V_normal(t)⟩. (to smooth the time series, we typically bin them into 5-10 minute intervals.) the standard deviation σ(t) of the normal periods as a function of time is then used to set the detection threshold. finally, we define the interval [t_start, t_end] as the longest contiguous run of time intervals where V_event(t) > ⟨V_normal(t)⟩ + c σ(t), for some fixed cutoff c; we chose the same cutoff for all events (a minimal sketch of this procedure is given below). for full details, see supporting information s1.
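a minimal sketch of the interval-detection procedure just described is given below; the binning, the input arrays and the cutoff value are assumed placeholders rather than the actual choices made in the study.

```python
import numpy as np

def event_interval(v_event, v_normal_runs, cutoff=1.5):
    """find [t_start, t_end] as the longest contiguous run of bins where the event-period
    call volume exceeds the normal-period mean by `cutoff` standard deviations.
    v_event: 1-d array of binned call counts for the event period.
    v_normal_runs: 2-d array (n_weeks x n_bins) of the same bins taken modulo one week.
    the cutoff value here is an assumed placeholder, not the one used in the study."""
    mean = v_normal_runs.mean(axis=0)
    sigma = v_normal_runs.std(axis=0)
    above = v_event > mean + cutoff * sigma

    best = (0, 0)                     # (length, start index) of the longest run found so far
    start = None
    for i, flag in enumerate(np.append(above, False)):   # trailing False closes an open run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            best = max(best, (i - start, start))
            start = None
    length, start = best
    return (start, start + length) if length else None
```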
the authors thank a. pawling, f. simini, m. c. gonzalez, s. lehmann, r. menezes, n. blumm, c. song, j. p. huang, y.-y. ahn, p. wang, r. crane, d. sornette, and d. lazer for many useful discussions.

[table 1: summary of the studied emergencies and non-emergencies. the columns provide the duration of the anomalous call activity (fig. 1), the spatial decay rate (fig. 2), the number of users in the event population, and the total size of the information cascade (fig. 3). events discussed in the main text are italicized; the rest are discussed in the supplementary material. 'jet scare' refers to a sonic boom interpreted by the local population and initial media reports as an explosion.]

sixteen events were identified for this work (see main text table 1), but six events were focused upon in the main text. here we report the results for all events. in figs. [fig:timeseries_events] and [fig:timeseries_controls] we provide the call activities for all sixteen events used in this study (compare to fig. 1). in fig. [fig:spatialpropssi] we show the spatial decay profiles for the ten events not shown in main text fig. [mfig:spatialprops:dvintegratedvsr]. finally, in figs. [fig:socprop_4mainemergs_si], [fig:socprop_4otheremergs_si], [fig:socprop_4concerts_si], and [fig:socprop_4festivals_si] we present activity levels for the successive social-distance groups for all 16 events.

[figure caption, fig:timeseries_controls: call activities for the eight non-emergencies. concert 3 takes place at an otherwise unpopulated location and the normal activity is not visible on a scale showing the event activity.]
[figure caption, fig:socprop_4mainemergs_si: activity levels for the successive social-distance groups during the event (black curve) and normally (shaded regions indicate s.d.). normal activity levels were rescaled to account for population and selection bias (see sec. [sec:calcsocprop]). the bombing and plane crash show increased activities for multiple groups, while the earthquake and blackout do not.]

[figure caption, fig:socprop_4concerts_si: the same quantities for the concerts. all concerts show extra activity only for the eyewitness group, except concert 4, which shows a small increase in activity for several hours after the concert started.]

[figure caption, fig:socprop_4festivals_si: the same quantities for the festivals. interestingly, festival 2 shows no extra activity, even for the eyewitness group, indicating that the call anomaly for those events was caused only by a greater-than-expected number of users all making an expected number of calls.]
despite recent advances in uncovering the quantitative features of stationary human activity patterns , many applications , from pandemic prediction to emergency response , require an understanding of how these patterns change when the population encounters unfamiliar conditions . to explore societal response to external perturbations we identified real - time changes in communication and mobility patterns in the vicinity of eight emergencies , such as bomb attacks and earthquakes , comparing these with eight non - emergencies , like concerts and sporting events . we find that communication spikes accompanying emergencies are both spatially and temporally localized , but information about emergencies spreads globally , resulting in communication avalanches that engage in a significant manner the social network of eyewitnesses . these results offer a quantitative view of behavioral changes in human activity under extreme conditions , with potential long - term impact on emergency detection and response .
semiconductor devices have been continuously downscaled ever since the invention of the first transistor, such that the size of the single building component of modern electronic devices has already reached a few nanometers (nm). in such a regime, two conceptual changes are required in the device modeling methodology. one is widely accepted: carriers must be treated as quantum mechanical rather than classical objects. the second change is the need to embrace multi-band models which can describe atomic features of materials, reproducing experimentally verified bulk bandstructures. while the single-band effective mass approximation (ema) predicts bandstructures reasonably well near the conduction band minimum (cbm), the subband quantization loses accuracy if devices are in a sub-nm regime. the ema also fails to predict indirect gaps, inter-band coupling and non-parabolicity in bulk bandstructures. the nearest-neighbor empirical tight-binding (tb) and next nearest-neighbor kp approaches are the most widely used band models with multiple bases. the most sophisticated tb model uses a set of 10 localized orbital bases (s, s*, 3 p, and 5 d) on real atomic grids (20 with spin interactions), where the parameter set is fit to reproduce experimentally verified bandgaps, masses, non-parabolic dispersions, and hydrostatic and biaxial strain behaviors of bulk materials using a global minimization procedure based on a genetic algorithm and analytical insights. this tb approach can easily incorporate atomic effects such as surface roughness and random alloy compositions, as the model is based on a set of atomic grids. these physical effects have been shown to be critical to the quantitative modeling of resonance tunneling diodes (rtds), quantum dots, disordered sige/si quantum wells, and a single impurity device in si bulk.
the kp approach typically uses four bases on a set of cubic grids with no spin interactions. while it still fails to predict the indirect gap of bulk dispersions, since it assumes that all the subband minima are placed at the Γ point, its credibility is better than the ema, since the kp model can still explain the inter-band physics of direct-gap iii-v devices and the valence band physics of indirect-gap materials such as silicon (si). one of the important issues in the modeling of nanoscale devices is to solve the quantum transport problem with a consideration of real 3-d device geometries. although the non-equilibrium green's function (negf) and wavefunction (wf) formalisms have been widely used to simulate carrier transport, the computational burden has always been a critical problem in solving 3-d open systems, as the negf formalism needs to invert a system matrix with a degree of freedom (dof) equal to that of the hamiltonian matrix. the recursive green's function (rgf) method saves computing load by selectively targeting the elements needed for the matrix inversion. however, the cost can still be huge, depending on the area of the transport-orthogonal plane (cross-section) and the length along the transport direction of the target devices. the wf algorithm also saves computing load if the transport is ballistic, as it does not have to invert the system matrix and finding a few solutions of the linear system is enough to predict the transport behaviors. but the load still depends on the size of the system matrix and the number of solution vectors (modes) needed to describe the carrier injection from external leads. in fact, rgf and wf calculations for atomically resolved nanowire field effect transistors (fets) have demonstrated the need to consume over 200,000 parallel cores on large supercomputing clusters. developed by mamaluy _et al._, the contact block reduction (cbr) method has received much attention due to its utility in saving the computing expense required to evaluate the retarded green's function of 3-d open systems. the cbr method is thus expected to be a good candidate for transport simulations, since it does not have to solve the linear system while also reducing the computing load needed for matrix inversion. the method has indeed been extensively used: it successfully modeled electron quantum transport in experimentally realized si finfets, and predicted optimal design points and process variations in the design of 10-nm si finfets. however, all the successful applications for 3-d systems so far have been demonstrated only for systems represented by the ema. while the use of multi-band approaches can increase the accuracy of simulation results, it requires more computing load, as the dof of the hamiltonian matrix is directly proportional to the number of bases required to represent a single atomic (or grid) spot in the device geometry. to suggest a solution to this _trade-off_ issue, we examine the numerical utilities of the cbr method in multi-band ballistic quantum transport simulations, focusing on multi-band 3-d systems represented by either the tb or the kp band model. the objective of this work is to provide detailed answers to the following questions through simulations of small two-contact ballistic systems, focusing on a proof of principles: (1) can the original cbr method be extended to simulate ballistic quantum transport of multi-band systems?
(2) if the answer to question (1) is yes, what is the condition under which the multi-band cbr method becomes particularly useful? and (3) how does the numerical practicality of the multi-band cbr method compare to the rgf and wf algorithms, in terms of accuracy, speed and scalability on high performance computing (hpc) clusters?

in real transport problems, a device needs to be coupled with external contacts that allow the carrier in- and out-flow. with the negf formalism, this can be done by creating an open system that is described with a non-hermitian system matrix. representing this system matrix as a function of energy, we compute the transmission coefficient and density of states to predict the current flow and charge profile in non-equilibrium. this energy-dependent system matrix is called the retarded green's function for an open system (eq. (1)),

G^R(E) = \left[(E + i\eta)I - H - \Sigma\right]^{-1}, \quad \eta \rightarrow 0^{+}, \qquad (1)

where H is the hamiltonian representing the device and \Sigma is the self-energy term that couples the device to external leads. as already mentioned in the previous section, the evaluation of G^R is quite computationally expensive since it involves intensive matrix inversions. the cbr method, however, reduces matrix inversions with a mathematical process based on the dyson equation. we start the discussion by revisiting the cbr method that has so far been utilized for ema systems. the cbr method starts by decomposing the device domain into two regions: (1) the boundary region that couples with the contacts, and (2) the inner region that does not couple to the contacts. as the self-energy term is non-zero only in the boundary region, H and \Sigma are decomposed into the corresponding blocks as shown in eq. (2), where the subscripts (c, d) denote the above-mentioned boundary and inner regions, respectively. the retarded green's function can then be evaluated with the dyson equation defined in eqs. (3) and (4), where the closed-system green's function blocks are conditioned with a hermitian matrix X_C to minimize matrix inversions, by solving the eigenvalue problem of eqs. (5):

G^X = \left[(E + i\eta)I - H - X_C\right]^{-1} = \begin{bmatrix} g_c^x & g_{cd}^x \\ g_{dc}^x & g_d^x \end{bmatrix} = \sum_\alpha \frac{|\psi_\alpha\rangle\langle\psi_\alpha|}{E - \epsilon_\alpha + i\eta}, \qquad (5)

where \epsilon_\alpha and \psi_\alpha are the eigenvalues and eigenvectors of the modified hamiltonian (H + X_C). here, we note that the matrix inversion is performed only once, to evaluate the boundary (contact) block, while the rgf needs to perform block inversions many times, depending on the device channel length. the computing load for matrix inversion is thus significantly reduced, and the method is also free from solving a linear-system problem. instead, the major numerical issue becomes a normal eigenvalue problem for the hermitian matrix (H + X_C). for numerical practicality, it is thus critical to reduce the number of required eigenvalues, and for ema hamiltonian matrices a huge reduction in the number of required eigenvalues can be achieved via a smart choice of the _prescription matrix_ X_C.
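as a quick numerical check of the spectral representation in eqs. (5), the sketch below builds a small random hermitian matrix standing in for (H + X_C) and verifies that the sum over its eigenpairs reproduces the directly inverted green's function. the matrix and parameters are purely illustrative and have no connection to the device hamiltonians used later.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eta, E = 8, 1e-6, 0.3                          # toy size, broadening and energy (assumed values)

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H_eff = 0.5 * (A + A.conj().T)                    # hermitian stand-in for H + X_C

# direct inversion: G^X = [(E + i*eta) I - H_eff]^(-1)
G_direct = np.linalg.inv((E + 1j * eta) * np.eye(n) - H_eff)

# spectral representation: sum_a |psi_a><psi_a| / (E - eps_a + i*eta)
eps, psi = np.linalg.eigh(H_eff)
G_spectral = (psi / (E - eps + 1j * eta)) @ psi.conj().T

print("max deviation:", np.max(np.abs(G_direct - G_spectral)))   # tiny when the full spectrum is used
```

the cbr idea is that, with a good prescription matrix, only a small fraction of the eigenpairs entering this sum is needed for transport energies of interest.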
to find the matrix X_C and see whether the method can be extended to multi-band systems, we first need to understand how to couple external contacts to the device. fig. 1 illustrates the common approach, which treats the contact as a semi-infinite nanowire of finite cross-section. here, H_b is a block matrix that represents the unit slab along the transport direction, and W is another block matrix that represents the inter-slab coupling. the plane-wave eigenfunction \phi_m of the m-th mode in the slab should then obey the schrödinger equation and the bloch condition (eqs. (6)),

(EI - H_b)\phi_m = W^{+}\phi_m e^{ik_m L} + W\phi_m e^{-ik_m L}, \qquad (6)

where k_m is the plane-wave vector of the m-th mode, L is the length of a slab along the transport direction, and M is the maximum number of plane-wave modes that can exist in a single slab, equal to the dof of H_b. then, the surface green's function g_{surf} and the self-energy term \Sigma can be evaluated by converting eqs. (6) into a generalized eigenvalue problem for a complex, non-hermitian matrix. the general solutions for g_{surf} and \Sigma are provided in eqs. (7), with

\Sigma = W^{+} g_{surf} W, \qquad (7)

where the matrices K and \Lambda appearing in eqs. (7) are defined in eqs. (8) as

K = [\phi_1\ \phi_2\ \ldots\ \phi_M], \quad \Lambda = \mathrm{diag}[e^{ik_1 L},\ e^{ik_2 L},\ \ldots,\ e^{ik_M L}]. \qquad (8)

in systems described by the nearest-neighbor ema, each slab becomes a layer of common cubic grids, such that each grid point on one layer is coupled only to the same grid point on the nearest layer. the inter-slab coupling matrix W thus becomes a scaled identity matrix, with which the general solutions for g_{surf} and \Sigma in eqs. (7) can be simplified using the process described in eqs. (9) and (10); we note that previous literature has shown only these simplified solutions:

g_{surf} = K\left[K^{-1}(H_b - EI)K + W^{+}\Lambda\right]^{-1}K^{-1} = K\left[-K^{-1}(W^{+}K\Lambda + WK\Lambda^{-1}) + W^{+}\Lambda\right]^{-1}K^{-1} = -K\left[W\Lambda^{-1}\right]^{-1}K^{-1} = -K\Lambda W^{-1}K^{-1}, \qquad (9)

using (EI - H_b)K = W^{+}K\Lambda + WK\Lambda^{-1}, and

\Sigma = W^{+}g_{surf}W = W^{+}(-K\Lambda W^{-1}K^{-1})W = -W^{+}K\Lambda K^{-1} = -WK\Lambda K^{-1} \quad (\because W^{+} = W). \qquad (10)

the original cbr method coupled to the ema prescribes the hermitian matrix X_C as the contact self-energy or its hermitian component (if the self-energy is complex). the new self-energy term then becomes (eq. (11)) \Sigma - X_C, where the matrix (\Sigma - X_C) becomes zero at the Γ point, on which ema subband minima are always placed. the resulting new hamiltonian (H + X_C) becomes the hamiltonian with a generalized boundary condition at the contact boundaries. the spectra of the matrix (H + X_C) therefore become approximate solutions of the open boundary problem, and the retarded green's function in eq. (4) can thus be well approximated with an incomplete set of energy spectra of the hermitian matrix near the subband minima. regardless of the band model, the green's function in eq. (4) can be accurately calculated with a complete set of spectra, since it then becomes the dyson equation (eq. (3)) itself. the important question here is then whether we can make the cbr method still numerically practical for multi-band systems, such that the transport can be simulated with a narrow energy spectrum. to study this issue, we focus on the inter-slab coupling matrix of multi-band systems. a toy si device that consists of two slabs along the [100] direction is used as an example for our discussion. fig. 2 shows the device geometry and the corresponding hamiltonian matrices built with the ema, kp and tb models, respectively.
here, we note that the simplifying process in eqs. (9) and (10) is not strictly correct if the inter-slab coupling matrix is not an identity matrix, since, for a general square matrix W and an invertible K, the product K^{-1}W^{+}K cannot be simplified to W^{+} unless W is an identity matrix or a scaled identity matrix. when a system is represented with the kp model, a single slab is still a layer of common cubic grids, as the kp approach also uses a set of cubic grids. but the non-zero coupling extends up to next-nearest neighbors, such that the inter-slab coupling matrix is no longer an identity matrix. the simplified solutions for g_{surf} and \Sigma, however, can still be used to approximate the general solutions in eqs. (7), since the coupling matrix, while not an identity, is still invertible. but the situation becomes tricky for tb systems that are represented on a set of real zincblende (zb) grids. in the zb crystal structure, a si unit slab has a total of four unique atomic layers along the [100] direction. because the tb approach assumes nearest-neighbor coupling, only the last layer in one slab is coupled to the first layer in the nearest slab, while all the other coupling blocks among layers in different slabs become zero matrices. as described in fig. 2, this makes the inter-slab coupling matrix singular, such that matrix inversions become impossible. the simplified solutions for g_{surf} and \Sigma in eqs. (9) and (10) are therefore mathematically invalid, and they cannot even be used to approximate the full solution (eqs. (7)). a new prescription for X_C is thus needed to make the cbr method practical for zb-tb systems, and we propose an alternative in eq. (12),

X_C = \mathrm{Herm}\left[\Sigma(E_C)\right], \qquad (12)

where E_C is the energetic position of the cbm (or of the valence band maximum (vbm)) of the bandstructure of the semi-infinite contact. if only a few subbands near the cbm (or vbm) of the contact bandstructure are enough to describe the external contact, the prescription suggested in eq. (12) works quite well: as X_C is the hermitian part of the self-energy term at the band edge, the matrix (H + X_C) approximates the open system near the edge of the contact bandstructure. the approximation, however, becomes less accurate if more subbands at higher energy (at lower energy for the valence band) are involved in the open boundaries. away from the band edge, the subband placement becomes denser and the inter-subband coupling becomes stronger. the prescription in eq. (12) would then not be a good choice, as it only approximates the open boundary solution near the band edges, and the cbr method thus needs more eigenspectra to solve open boundary transport problems. so, for example, the multi-band cbr method would not be numerically practical for simulating fets at a high source-drain bias, since a broad energy spectrum is then needed to get an accurate solution.
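to give a feel for how a contact self-energy and the band-edge prescription of eq. (12) can be produced numerically, the sketch below computes the surface green's function of a semi-infinite lead with the widely used sancho-rubio decimation scheme (used here as a stand-in for the generalized eigenvalue construction of eqs. (6) and (7)), and then takes the hermitian part of the resulting self-energy at an assumed band-edge energy. the lead blocks are toy two-orbital matrices, not the tb or kp blocks discussed in the text.

```python
import numpy as np

def surface_green(E, H00, H01, eta=1e-6, tol=1e-12, max_iter=200):
    """surface green's function of a semi-infinite lead via sancho-rubio decimation."""
    Ez = (E + 1j * eta) * np.eye(H00.shape[0])
    eps_s, eps = H00.astype(complex), H00.astype(complex)
    alpha, beta = H01.astype(complex), H01.conj().T.astype(complex)
    for _ in range(max_iter):
        g = np.linalg.inv(Ez - eps)
        eps_s = eps_s + alpha @ g @ beta
        eps = eps + alpha @ g @ beta + beta @ g @ alpha
        alpha, beta = alpha @ g @ alpha, beta @ g @ beta
        if np.abs(alpha).max() < tol:
            break
    return np.linalg.inv(Ez - eps_s)

# toy 2-orbital lead blocks (assumed numbers, not taken from this work)
H00 = np.array([[0.0, 0.5], [0.5, 1.0]])
H01 = np.array([[-1.0, 0.0], [0.0, -0.8]])        # inter-slab coupling W

E_edge = -1.9                                     # stand-in for the contact band-edge energy E_C
g_s = surface_green(E_edge, H00, H01)
Sigma = H01.conj().T @ g_s @ H01                  # self-energy, following Sigma = W^+ g_surf W
X_C = 0.5 * (Sigma + Sigma.conj().T)              # hermitian part used as the prescription matrix
print(np.round(X_C, 4))
```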
before closing this section, we note that, if the inter-slab coupling matrix is either an identity matrix or a scaled identity matrix, the prescription matrix in eq. (12) reduces to the one utilized to simulate 3-d systems in the previous literature, where (H + X_C) approximates the open system well near the Γ point if the system is represented by the ema. once the eigenvalues and eigenvectors of (H + X_C) are determined from the prescription matrix, the evaluation of the transmission coefficient (tr) and the density of states (dos) can be easily done. further detailed mathematics regarding the derivation of tr and dos will thus not be discussed here.

the results are discussed in two subsections. first, we validate the cbr method for multi-band systems with the new prescription for X_C in eq. (12). focusing on a proof of principles, we compute the tr and dos profiles for a toy tb and kp system, compare the results to the references obtained with the rgf algorithm, and suggest the device category for which the multi-band cbr method could be particularly practical. second, we examine the numerical practicality of the multi-band cbr method by computing tr and dos profiles of a resonant tunneling device and a nanowire fet. the accuracy, the speed of calculations in a serial mode, and the scalability on hpc clusters are compared to those obtained with the rgf and wf algorithms. we assume two-contact ballistic transport for all the numerical problems. to validate the multi-band cbr method discussed in the previous section, we consider two multi-band toy si systems represented by the 10-band tb and the 3-band kp approach. here, we intentionally choose extremely small systems so that a complete set of energy spectra of the hamiltonian can be calculated, with which the cbr method should produce results identical to the ones obtained by the rgf algorithm. for the tb system the electron transport is simulated, while we calculate the hole transport for the kp system due to a limitation of the kp approach in representing the si material. _tb system_: fig. 3 illustrates the tr and dos profiles calculated for the tb si toy device, which consists of (2 2 2) (100) unit cells (about 1.1 nm). the device involves a complex hermitian hamiltonian matrix of 640 dof, and electrons are assumed to transport along the [100] direction. the tr and dos profiles are calculated using the cbr method for a total of three cases: with 6, 60 and the full (640) energy spectra, corresponding to 1%, 10%, and 100% of the hamiltonian dof, respectively. the transport happens at energies above 2.32 ev, which is the cbm of the contact bandstructure. we note that this energetic position is higher than the si bulk cbm (1.13 ev), due to the structural confinement stemming from the finite cross-section of the nanowire device. with the new prescription matrix suggested in eq. (12), the tr and dos profiles obtained by the cbr method become closer to the reference result as more spectra are used, and eventually reproduce the reference result with a full set of spectra, as shown in the left column of fig. 3. here, the cbr result turns out to be quite accurate near the cbm even with 1% of the total spectra, indicating that the tb-cbr method could be a practical approach if most of the carriers are injected from the first one or two subbands of the contact bandstructure.
this condition can be satisfied when (1) only the first one or two subbands in the contact bandstructure are occupied with electrons, and (2) the energy difference between the source and drain contact fermi levels (the source-drain fermi window) becomes extremely narrow. so, the simulation of fets at a high source-drain bias would not be an appropriate target for tb-cbr simulations, since the source-drain fermi window may include many subbands and many spectra may thus be needed for accurate solutions. instead, we propose that rtds could be one of the device categories for which the tb-cbr method is particularly practical, since the fermi window for transport becomes extremely small in rtds in some cases. the same calculation is performed again using the old prescription suggested for the ema, and the corresponding tr and dos profiles are shown in the right column of fig. 3. the cbr method still reproduces the reference result with a full set of energy spectra, since the dyson equation (eq. (4)) should always work for any prescription matrix. the accuracy of the results near the cbm, however, turns out to be worse than with the new prescription. the results furthermore reveal that the accuracy with 10% of the total spectra does not necessarily become better than that with 1%, indicating that the old prescription for X_C cannot even approximate the solution near the cbm of open tb systems. _kp system_: the tr and dos profiles of the kp si 2.0 nm (100) cube are depicted in fig. 4. the structure is discretized with a 0.2 nm grid and involves a complex hermitian hamiltonian of 3,000 dof. here, the dof of the real-space kp hamiltonian can be effectively reduced with the mode-space approach. the effective dof of the hamiltonian therefore becomes 500, where we consider 50 modes per slab along the transport direction. again, we note that the vbm of the contact bandstructure is placed at -0.4 ev, lower in energy than the vbm of si bulk (0 ev), due to the confinement created by the finite cross-section. we claim that the cbr method works quite well for the kp system, since the tr and dos profiles not only become closer to the reference results as more of the energy spectra are used, but also exhibit excellent accuracy near the vbm of the contact bandstructure, as shown in fig. 4. we, however, observe a remarkable feature that is not found in the cbr method coupled to tb systems: the kp-cbr method shows good accuracy with both the old and the new prescription matrix, which supports that the simplified solutions for g_{surf} and \Sigma (eqs. (9) and (10)) are still useful to approximate the full solution (eqs. (7)), as discussed in the previous section. we also claim that the utility of the kp-cbr method could be extended to nanowire fets, because the mode-space approach reduces the dof of the hamiltonian such that we save more of the computing cost needed to calculate energy spectra. in the next subsection, we will come back to this issue again. in this subsection, we provide a detailed analysis of the numerical utility of the multi-band cbr method in terms of accuracy and speed.
based on discussions in the previous subsection with a focus on a proof of principles on small systems , a rtd is considered as a simulation example of tb systems , while a nanowire fet is again used as an example of kp systems to discuss the numerical practicality of the method .the tr and dos profiles obtained by the rgf and wf algorithm are used as reference results .we note that the wf case is added in this subsection to provide a complete and competitive analysis on the speed and scalability on hpc clusters ._ tb system _ : a single phosphorous donor in host si material ( si : p ) creates a 3-d structural confinement around itself .such si : p have gained scientific interest due to their potential utility for qubit - based logic applications . especially , the stark effect in si : p quantum dots is one of the important physical problems , and was quantitatively explained by previous tb studies . the electron - transport in such si : p systems should be therefore another important problem that needs to be studied .the geometry of the example si : p device is illustrated in fig ., we consider a [ 100 ] si nanowire that is 14.0(nm ) long and has a 1.7(nm ) rectangular cross - section .the first and last 3.0(nm ) along the transport direction , are considered as densely n - type doped source - drain region assuming a 0.25(ev ) band - offset in equilibrium .then , a single phosphorous atom is placed at the channel center with a superposition of the impurity coulombic potential that has been calibrated for a single donor in si bulk by by rahman _the electronic structure has a total of 1872 atoms and involves a complex hamiltonian matrix of 18,720 dof .6 shows the tr and dos profiles in four cases , where the first three cases are the cbr results with 10 , 20 and 40 spectra that correspond to 0.05 , 0.1 , and 0.2 of the hamiltonian dof , and the last one is used as a reference . due tothe donor coulombic potential , the channel forms a double - barrier system such that the electron transport should experience a resonance tunneling . as shown in fig . 5, the cbr method produces a nice approximation of the reference result such that the first resonance is observed with just 10 energy spectra .it also turns out that 40 spectra are enough to capture all the resonances that show up in the range of energy of interest .the accuracy of the solutions approximated by the cbr method , is examined in a more quantitative manner by the tr and dos profile over energy .fig . 7 illustrates this tr ( ctr ) and dos ( cdos ) profile , which are equivalent to the current and charge profile , respectively . in spite of a slight deviation in absolute values, the ctr profiles still confirm that the cbr method captures resonances quite precisely such that the energetic positions where the tr sharply increases , are almost on top of the reference result .the cdos profile exhibits much better accuracy such that the result with 40 spectra almost reproduces the reference result even in terms of absolute values .we claim that the accuracy in the cdos profile is particularly critical , since it is directly connected to charge profiles that are essential for charge - potential self - consistent simulations . ,turn out to be enough to almost reproduce the reference solutions in the entire range of energy of interest ( 0.8(ev ) beyond the vbm of the wire bandsturcutrue ) . 
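to make the resonance picture concrete, the following stand-alone sketch computes the transmission of a 1-d single-band double-barrier chain with analytic lead self-energies; it is not the 10-band tb si:p hamiltonian of the text (all parameters are illustrative), but it reproduces the qualitative feature discussed above: sharp transmission peaks at the quasi-bound levels of the well.

import numpy as np

# hedged 1-d toy model of resonant tunneling through a double barrier,
# solved with the retarded green's function and semi-infinite lead
# self-energies; parameter values are arbitrary illustrations.
t = 1.0                                    # hopping (arbitrary units)
n = 60
v = np.zeros(n)
v[15:20] = 0.4                             # left barrier
v[40:45] = 0.4                             # right barrier
h = np.diag(2*t + v) - t*(np.eye(n, k=1) + np.eye(n, k=-1))

def surface_sigma(e, eta=1e-9):
    """retarded self-energy of a semi-infinite 1-d lead (on-site 2t, hopping -t)."""
    z = e + 1j*eta - 2*t
    g = (z - np.sqrt(z*z - 4*t*t))/(2*t*t)
    if g.imag > 0:                         # enforce the retarded branch
        g = (z + np.sqrt(z*z - 4*t*t))/(2*t*t)
    return t*t*g

def transmission(e):
    sl = sr = surface_sigma(e)
    ham = h.astype(complex)
    ham[0, 0] += sl
    ham[-1, -1] += sr
    g = np.linalg.inv((e + 1e-9j)*np.eye(n) - ham)
    gl, gr = -2*sl.imag, -2*sr.imag        # contact broadening functions
    return gl*gr*abs(g[0, -1])**2

energies = np.linspace(0.01, 0.8, 400)
tr = np.array([transmission(e) for e in energies])   # sharp peaks at resonances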
]_ kp system _ : si nanowire fets obtained through top - down etching or bottom - up growth have attracted attention due to their enhanced electrostatic control over the channel , and thus become an important target of various modeling works . for kp systems , the cbr method could become a practical approach to solve transport behaviors of fet devices since the computing load for solving eigenvalue problems can be reduced with the mode - space approach . a [ 100 ] si nanowire fet of a 15.0(nm ) long channel and a 3.0(nm ) rectangular cross - section , is therefore considered as a simulation example to test the performance of the kp - cbr method .the hole - transport is simulated with the 3-band kp approach , where the simulation domain is discretized with a set of 0.2(nm ) mesh cubic grids and involves a real - space hamiltonian matrix of 50,625 dof .as the device has a total of 75 slabs along the transport direction , the mode - space hamiltonian has 9,000 dof with a consideration of 120 modes per slab .it has been reported that the wire bandstructure obtained with 120 modes per slab , becomes quite close to the full solution for a cross - section smaller than 5.0.0(nm ) .the wire is assumed to be purely such that neither the doping nor band - offset are considered . to seeif the cbr method can be reasonably practical in simulating the hole - transport at a relatively large source - drain bias , we plan to cover the energy range at least larger than 0.4(ev ) beyond the vbm of the wire bandstucture . for this purpose, we compute 50 , 100 , and 200 energy spectra that correspond to 0.5 , 1.1 , and 2.2 of the dof of the mode - space hamiltonian , respectively . fig .8(a ) shows the corresponding tr and dos profiles . here , the cbr solution not only become closer to the reference result with more spectra considered , but also demonstrate fairly excellent accuracy near the vbm of the wire bandstructure .the ctr and cdos profiles provided in fig .8(b ) further support the preciseness of the cbr solutions near the vbm .the cumulative profiles also support that the cbr solution covers a relatively wide range of energy , such that 50 energy spectra are already enough to cover .4(ev ) below the vbm quite well .we note that the solution obtained with 200 spectra almost replicates the reference result in the entire range of energy that is considered for the simulation ( .8(ev ) below the vbm ) . .the time required to evaluate the tr and dos per single energy point in a serial mode , for the rtd and nanowire fet considered as simulation examples .[ cols="<,<,<,<",options="header " , ] _ speed and scalability on hpcs _ : so far , we have discussed the practicality of the multi - band cbr method focusing on the accuracy of the solutions for two - contact , ballistic - transport problems . another important criterion to determinethe numerical utility should be the speed of calculations .we therefore measure the time needed to evaluate the tr and dos per single energy point for the tb si : p rtd and the kp si nanowire fet represented that are utilized as simulation examples . 
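the wall-time comparison described here boils down to timing one energy point of whichever solver is used; the snippet below is a hedged stand-in for that measurement (a dense inversion replaces the actual cbr/rgf/wf kernels, and the matrix size is arbitrary).

import time
import numpy as np

# hedged timing sketch: measure the average wall-time per energy point for
# some solver callable; here a dense green's-function inversion stands in
# for the real cbr/rgf/wf kernels.
def solve_one_energy(e, h):
    g = np.linalg.inv((e + 1e-9j)*np.eye(h.shape[0]) - h)
    return -g.trace().imag/np.pi           # dos-like quantity, as a stand-in

h = np.diag(np.linspace(0.0, 1.0, 500))
energies = np.linspace(0.0, 1.0, 20)

t0 = time.perf_counter()
for e in energies:
    solve_one_energy(e, h)
wall = (time.perf_counter() - t0)/len(energies)
print(f"{wall*1e3:.1f} ms per energy point")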
to examine the practicality of the multi-band cbr method on hpc clusters, we also benchmark the scalability of the simulation time on a cluster supported by the rosen center for advanced computing (rcac) at purdue university. the cbr, rgf, and wf methods are parallelized with mpi/c++, the multifrontal massively parallel sparse direct linear solver (mumps), and a self-developed eigensolver based on the shift-and-invert arnoldi algorithm. all the measurements are performed on a 64-bit, 8-core hp proliant dl585 g5 system with 16 gb sdram and 10-gigabit ethernet local to each node. table i summarizes the wall-times measured for the various methods in a serial mode. generally, the simulation of the kp si nanowire fet needs a smaller computing load, such that the wall-times are reduced by a factor of two with respect to the computing time taken for the tb si:p rtd. this is because the kp approach can represent the electronic structure with the mode-space approach, such that the hamiltonian matrix has a smaller dof (9,000) compared to the one used to describe the tb si:p rtd (18,720). compared to the rgf algorithm in a serial mode, the cbr method demonstrates a comparable (kp) or better (tb) performance. since a single slab of the kp si nanowire is represented with a block matrix (fig. 1) of 120 dof, the matrix inversion is no longer a critical problem in the rgf algorithm, such that the cbr method does not necessarily show better performance than the rgf algorithm. the tb example device, however, needs a block matrix of 720 dof to represent a single slab (a total of 26 slabs), so the burden of matrix inversions becomes bigger compared to the kp example. as a result, the cbr method generally shows better performance. the cbr method, however, does not beat the wf method in either the tb or kp case since, in a serial mode, the cbr method consumes time to allocate the huge memory space that is needed to store "full" complex matrices built via vector products (eqs. (5)). the strength of the cbr method emerges in a parallel mode (on multiple cpus), where the vector products are performed via mpi communication among distributed systems and each node thus stores only a fraction of the full matrix. the scalability of the various methods is compared up to a total of 16 cpus in fig. the common rgf calculation can be effectively parallelized only up to a factor of two, due to its recursive nature, and the scalability of the wf method becomes worse on many cpus because it uses a direct-solver-based lu factorization to solve the linear system. as a result, the cbr method starts to show the best speed when more than 8 cpus are used. in this work, we discuss the numerical utility of the cbr method in simulating ballistic transport of multi-band systems described by the atomic 10-band tb and 3-band kp approaches. although the original cbr method developed for single-band ema systems achieves an excellent numerical efficiency by approximating solutions of open systems, we show that the same approach cannot be used to approximate tb systems, as the inter-slab coupling matrix becomes singular. we therefore develop an alternative method to approximate open system solutions.
focusing on a proof of principles on small systems, we validate the idea by comparing the tr and dos profile to the reference result obtained by the rgf algorithm , where the alternative also works well with the kp approach .since the major numerical issue in the cbr method is to solve a normal eigenvalue problem , the numerical practicality of the method becomes better as the transport can be solved with a less number of energy spectra . generally , the practicality would be thus limited in multi - band systems , since multi - band approaches need a larger number of spectra to cover a certain range of energy than the single band ema does .we , however , claim that the rtds could be one category of tb devices , for which the multi - band cbr method becomes particularly practical in simulating transport , and the numerical utility can be even extended to fets when the cbr method is coupled to the kp band model . to support this argument, we simulate the electron resonance tunneling in a 3-d tb rtd , which is basically a si nanowire but has a single phosphorous donor in the channel center , and the hole - transport of a 3-d kp si nanowire fet .we examine numerical practicalities of the multi - band cbr method in terms of the accuracy and speed , with respect to the reference results obtained by the rgf and wf algorithm , and observe that the cbr method gives fairly accurate tr and dos profile near band edges of contact bandstructures . in terms of the speed in a serial mode ,the strength of the cbr method over the rgf algorithm depends on the size of the hamiltonian such that the cbr shows a better performance than the rgf as a larger block - matrix is required to represent the unit - slab of devices .but , the speed of the wf method is still better than the cbr method as the cbr method consumes time to store a full complex matrix during the process of calculations . in a parallel mode, however , the cbr method starts to beat both the rgf and wf algorithm since the full matrix can be stored into multiple clusters in a distributive manner , while the scalability of both the rgf and wf algorithm are limited due to the nature of recursive and direct - solver - based calculation , respectively .h. ryu , h .- h . park and g. klimeck acknowledge the financial support from the national science foundation ( nsf ) under the contract no . 0701612 and the semiconductor research corporation .m. shin acknowledges the financial support from basic science research program through the national research foundation of republic of korea , funded by the ministry of education , science and technology under the contract no .2010 - 0012452 .authors acknowledge the extensive use of computing resources in the rosen center for advanced computing at purdue university , and nsf - supported computing resources on nanohub.org .100 g. e. moore , _ electronics _ * 38 * , 114 , ( 1965 ) .m. luisier , a. schenk and w. fichtner , _ phys .b _ * 74 * 205323 , ( 2006 ) .g. klimeck , s. s. ahmed , h. bae , n. kharche , r. rahman , s. clark , b. haley , s. lee , m. naumov , h. ryu , f. saied , m. prada , m. korkusinski and t. b. boykin , _ ieee trans ._ * 54 * , 2079 , ( 2007 ) .y. x. liu , d. z. -y .ting and t. c. mcgill , _ phys .b _ * 54 * , 5675 , ( 1996 ). g. klimeck , r. lake , r. c. bowen , c. fernando and w. frensley , _ vlsi design _ * 8 * , 79 , ( 1998 ). t. b. boykin , g. klimeck and f. oyafuso , _ phys .b _ , * 69 * 115201 , ( 2004 ) .r. c. bowen , g. klimeck , w. r. frensley , r. k. lake , _ j. 
appl_ * 81 * , 3207 , ( 1997 ) n. kharche , m. prada , t. b. boykin and g. klimeck , _ appl* 90 * , 9 , ( 2007 ) .r. rahman , c. j. wellard , f. r. bradbury , m. prada , j. h. cole , g. klimeck and l. c. l. hollenberg , _ phys .lett . _ * 99 * , 036403 , ( 2007 ) .g. p. lansbergen , r. rahman , c. j. wellard , i. woo , j. caro , n. collaert , s. biesemans , g. klimeck , l. c. l. hollenberg and s. rogge , _ nature physics _ * 4 * , 656 , ( 2008 ) .m. shin ,_ j. appl .phys . _ * 106 * , 054505 , ( 2009 ) . c. pryor , _ phys .b _ * 57 * , 7190 , ( 1998 ) s. datta , _ superlatt . microstruct . _ * 28 * , 253 , ( 2000 ) .m. stdele , b. r. tuttle and k. hess , _ j. appl. phys . _ * 89 * , 348 , ( 2001 ) .n. kharche , g. klimeck , d. -h .kim , j. a. del alamo and m. luisier , _ proceedings of ieee international electron devices meeting _ , ( 2009 ) .r. lake , g. klimeck , r. c. bowen and d. jovanovic , _ j. appl .* 81 * , 7845 , ( 1996 ) . c. rivas and r. lake , _ phys .( b ) _ * 239 * , 94 , ( 2003 ). s. cauley , j. jain , c. -k . koh and v. balakrishnan , _ j. appl .phys . _ * 101 * , 123715 , ( 2007 ) .d. mamaluy , m. sabathil , t. zibold , p. vogl and d. vasileska , _ phys .b _ * 71 * , 245321 , ( 2005 ) .g. klimeck and m. luisier , _ computing in science and engineering _ * 12 * , 28 , ( 2010 ) .d. mamaluy , d. vasileska , m. sabathil and p. vogl , _ semicond .sci . tech _ * 19 * , 118 , ( 2004 ) .h. r. khan , d. mamaluy and d. vasileska , _ ieee trans .dev . _ * 54 * , 784 , ( 2007 ) .h. r. khan , d. mamaluy and d. vasileska , _ ieee trans .dev . _ * 55 * , 743 , ( 2008 ) .h. r. khan , d. mamaluy and d. vasileska , _ ieee trans . elec .dev . _ * 55 * , 2134 , ( 2008 ) .d. mamaluy , m. sabathil and p. vogl , _ j. appl. phys . _ * 93 * , 4628 , ( 2003 ) . in principle, the 4-band kp model uses a total of four bases to model direct bandgap materials such as gaas and inas , where one basis is used to model the conduction band and the remaining three bases are used to model the valence band . the valence band of indirect bandgap materials such si, however , can be still modeled with three bases if vbm is at the point ( ref .both the tb and kp model considered in this work place the vbm of si bulk at 0 ( ev ) .assuming that the source contact is grounded , the fermi - window at = v , becomes [ - , + ] = [ -- , + ] , where is the single electron charge and is the fermi - level of the system in equilibrium .the maximum and minimum of the window are determined at the source and drain side , respectively .e. lind , b. gustafson , i. pietzonka and l. -e .wernersson , _ phys .b _ * 68 * , 033312 , ( 2003 ) . l. c. l. hollenberg , a. s. dzurak , c. j. wellard , a. r. hamilton , d. j. reilly , g. j. milburn and r. g. clark , _ phys .b _ * 69 * , 113301 , ( 2004 ) .the band - offset between the intrinsic channel and densely doped source - drain leads , is taken from the work of martinez _ et al ._ , where the equilibrium potential profile has been self - consistently obtained for a 14.0(nm ) long [ 100 ] si nanowire that has a 2.0(nm ) rectangular cross - section and 4.0(nm ) long source - drain regions ( ref . [ 32 ] ) .a. martinez , n. seoane , a. r. brown , j. r. barker and a. asenov , _ ieee trans .* 8 * , 603 , ( 2009 ) .n. neophytou , a. paul , m. s. lundstrom and g. klimeck , _ ieee trans .dev . _ * 55 * , 1286 , ( 2008 ) . v. mehrmann and d. watkins , _siam j. sci .* 22 * , 1905 , ( 2001 ) .
the numerical utility of the contact block reduction (cbr) method in evaluating the retarded green's function is discussed for 3-d multi-band open systems that are represented by the atomic tight-binding (tb) and continuum (kp) band models. it is shown that the methodology to approximate solutions of open systems, which has already been reported for the single-band effective mass model, cannot be directly used for atomic tb systems, since the use of a set of zincblende crystal grids makes the inter-slab coupling matrix non-invertible. we derive and test an alternative prescription with which the cbr method can still be practical in solving tb systems. this method is validated by a proof of principles on small systems, and is also shown to work well with the kp approach. a further detailed analysis of the accuracy, speed, and scalability on high performance computing clusters is performed with respect to the reference results obtained by the state-of-the-art recursive green's function and wavefunction algorithms. this work shows that the cbr method could be particularly useful in calculating resonant tunneling features, but shows a limited practicality in simulating field effect transistors (fets) when the system is described with the atomic tb model. coupled to the kp model, however, the utility of the cbr method can be extended to simulations of nanowire fets.
among biometrics , fingerprints are probably the best - known and widespread because of the fingerprint properties : universality , durability and individuality .unfortunately it has been shown that fingerprint scanners are vulnerable to presentation attacks with an artificial replica of a fingerprint .therefore , it is important to develop countermeasures to those attacks .numerous methods have been proposed to solve the susceptibility of fingerprint devices to attacks by spoof fingers .one primary countermeasure to spoofing attacks is called `` liveness detection '' or presentation attack detection .liveness detection is based on the principle that additional information can be garnered above and beyond the data procured and/or processed by a standard verification system , and this additional data can be used to verify if an image is authentic .liveness detection uses either a hardware - based or software - based system coupled with the authentication program to provide additional security .hardware - based systems use additional sensors to gain measurements outside of the fingerprint image itself to detect liveness .software - based systems use image processing algorithms to gather information directly from the collected fingerprint to detect liveness .these systems classify images as either live or fake .since 2009 , in order to assess the main achievements of the state of the art in fingerprint liveness detection , university of cagliari and clarkson university organized the first fingerprint liveness detection competition .the first international fingerprint liveness detection competition ( livdet ) 2009 , provided an initial assessment of software systems based on the fingerprint image only . the second, third and fourth liveness detection competitions ( livdet 2011 , 2013 and 2015 ) were created in order to ascertain the progressing state of the art in liveness detection , and also included integrated system testing .this paper reviews the previous livdet competitions and how they have evolved over the years .section 2 of this paper describes the background of spoofing and liveness detection .section 3 details the methods used in testing for the livdet competitions as well as descriptions of the datasets that have generated from the competition so far .section 4 discusses the trends across the competitions reflecting advances in the state of the art .section 5 concludes the paper and discusses the future of the livdet competitions .the concept of spoofing has existed for some time now .research into spoofing can be seen beginning in 1998 from research conducted by d. willis and m. lee where six different biometric fingerprint devices were tested against fake fingers and it was found that four of the six were susceptible to spoofing attacks .this research was approached again in 2000 - 2002 by multiple institutions including ; putte and kuening as well as matsumoto et al .putte et al .examined different types of scanning devices as well as different ways of counterfeiting fingerprints .the research presented by these researchers looked at the vulnerability of spoofing . in 2001 , kallo ( et al . )looked at a hardware solution to liveness detection ; while in 2002 , schuckers delved into using software approaches for liveness detection .liveness detection , with either hardware - based or software - based systems , is used to check if a presented fingerprint originates from a live person or an artificial finger . 
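the analysis just described typically ends in a per-image score; as a small illustration, the sketch below thresholds liveness scores into live/fake decisions and computes the two error rates used later in the paper (ferrlive and ferrfake). the 0-100 score range and the threshold of 50 are assumptions of this sketch, and the scores themselves are synthetic.

import numpy as np

# hedged sketch: turn liveness scores into live/fake decisions and compute
# ferrlive (misclassified live fingers, %) and ferrfake (misclassified
# spoofs, %); threshold and score range are assumptions of this example.
def ferrlive_ferrfake(live_scores, fake_scores, threshold=50.0):
    live_scores = np.asarray(live_scores, dtype=float)
    fake_scores = np.asarray(fake_scores, dtype=float)
    ferrlive = 100.0*np.mean(live_scores < threshold)    # live called fake
    ferrfake = 100.0*np.mean(fake_scores >= threshold)   # fake called live
    return ferrlive, ferrfake

rng = np.random.default_rng(7)
live = rng.uniform(40, 100, 1000)          # synthetic live scores
fake = rng.uniform(0, 60, 1000)            # synthetic spoof scores
print(ferrlive_ferrfake(live, fake))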
usually the result of this analysis is a score used to classify images as either live or fake .many solutions have been proposed to solve the vulnerability of spoofing .bozhao tan et al .has proposed a solution based on ridge signal and valley noise analysis .this solution examines the perspiration patterns along the ridge and the patterns of noise in the valleys of images .it was proposed that since live fingers sweat , but spoof fingers do not , the live fingerprint will look `` patchy '' compared to a spoof .also it was proposed that due to the properties of a spoof material , spoof fingers will have granules in the valleys that live fingers will not have .pietro coli et al .examined static and dynamic features of collected images on a large data set of images .there are two general forms of creating artificial fingers , the cooperative method and non - cooperative method . in the cooperative methodthe subject pushes their finger into a malleable material such as dental impression material , plastic , or wax creating a negative impression of the fingerprint as a mold , see figure [ fig : mold ] .the mold is then filled with a material , such as gelatin , playdoh or silicone .this cast can be used to represent a finger from a live subject , see figure [ fig : fakefinger ] .the non - cooperative method involves enhancing a latent fingerprint left on a surface , digitizing it through the use of a photograph , and finally printing the negative image on a transparency sheet .this printed image can then be made into a mold , for example , by etching the image onto a printed circuit board ( pcb ) which can be used to create the spoof cast as seen on figure [ fig : latentpcb ] .most competitions focus on matching , such as the fingerprint verification competition held in 2000 , 2002 , 2004 and 2006 and the icb competition on iris recognition ( icir2013 ) .however , these competitions did not consider spoofing .the liveness detection competition series was started in 2009 and created a benchmark for measuring liveness detection algorithms , similar to matching performance . at that time , there had been no other public competitions held that has examined the concept of liveness detection as part of a biometric modality in deterring spoof attacks . in order to understand the motivation of organizing such a competition, we observed that the first trials to face with this topic were often carried out with home - made data sets that were not publicly available , experimental protocols were not unique , and the same reported results were obtained on very small data sets .we pointed out these issues in .therefore , the basic goal of livdet has been since its birth to allow researchers testing their own algorithms and systems on publicly available data sets , obtained and collected with the most updated techiques to replicate fingerprints enabled by the experience of clarkson and cagliari laboratories , both active on this problem since 2000 and 2003 , respectively .at the same time , using a competition " instead of simply releasing data sets , could be assurance of a free - of - charge , third - party testing using a sequestered test set .( clarkson and cagliari has never took part in livdet as competitors , due to conflict of interest . 
) livdet 2009 provided results which demonstrated the state of the art at that time for fingerprint systems .livdet continued in 2011 , 2013 and 2015 and contained two parts : evaluation of software - based systems in part 1 : algorithms , and evaluation of integrated systems in part 2 : systems .fingerprint will be the focus of this paper . however , livdet 2013 also included a part 1 : algorithms for the iris biometric and is continuing in 2015 .since 2009 , evaluation of spoof detection for facial systems was performed in the competition on counter measures to 2-d facial spoofing attacks , first held in 2011 and then held a second time in 2013 .the purpose of this competition is to address different methods of detection for 2-d facial spoofing .the competition dataset consisted of 400 video sequences , 200 of them real attempts and 200 attack attempts .a subset was released for training and then another subset of the dataset was used for testing purposes . [ cols="<,>,>,>,>,>,>,>",options="header " , ] table [ tab : easynessdegree ] shows the subjective evaluation on the easiness of obtaining a good spoof from the combination of the same materials of table [ tab : qualitydegree ] .this evaluation depends on the solidification time of the adopted material , the level of difficulty in separating mold and cast without destroying one of them or both , the natural dryness or wetness level of the related spoof .tables [ tab : qualitydegree ] , [ tab : easynessdegree ] show that , from a practical viewpoint , many materials are difficult to manage when fabricating a fake finger . in many cases , the materials with this propertyalso exhibit a low subjective quality level .therefore , thanks to this lesson , the livdet competition challenge participants with images coming from the spoofs obtained with the best and most `` potentially dangerous '' materials .the materials choice is made on the basis of the best trade off between the criteria pointed out in tables [ tab : qualitydegree ] , [ tab : easynessdegree ] and the objective quality values output by quality assessment algorithms such as nfiq .what reported has been confirmed along the four livdet editions .in particular the ferrfake and ferrlive rates for each differing quality levels support the idea that the images quality level is correlated with the error rate decrease .the error rates for each range of quality levels for dermalog in livdet 2011 fingerprint part 1 : algorithms is shown in figure [ fig : ferrderm ] , as an example .the graphs showcase from images of only quality level 1 up to all quality levels being shown .as lower quality spoof images were added , ferrfake generally decreased .for all images which included the worst quality images , the error rates were less consistent likely due to the variability in low quality spoofs .the percentage of images at each quality level for two representative datasets for livdet 2011 , 2013 , and 2015 , respectively , are given in figures [ fig : imageperc ] , [ fig : spoofperc ] , and [ fig : quality2015 ] .the crossmatch dataset had high percentages of the data being in the top two quality levels in both livdet 2011 and 2013 .the swipe dataset had many images that were read as being of lower quality which could be seen in the data itself because of the difficulty in collecting spoof data on the swipe device .since its first edition in 2009 , the fingerprint liveness detection competition was aimed to allow research centres and companies a fair and independent assessment of their anti - 
spoofing algorithms and systems . we have seen over time an increasing interest for this event , and the general recognition for the enormous amount of data made publicly available .the number of citations that livdet competitions have collected is one of the tangible signs of such interest ( about 100 citations according to google scholar ) and further demonstrates the benefits that the scientific community has received from livdet events .the competition results show that liveness detection algorithms and systems strongly improved their performance : from about 70% classification accuracy achieved in livdet 2011 , to 90% classification accuracy in livdet 2015 .this result , obtained under very difficult conditions like the ones of the consensual methodology of fingerprints replication , is comparable with that obtained in livdet 2013 ( first two data sets ) , where the algorithms performance was tested under the easier task of fingerprints replication from latent marks .moreover , the two challenges characterizing the last edition , namely , the presence of 1000 dpi capture device and the evaluation against unknown " spoofing materials , further contributed to show the great improvement that researchers achieved on these issues : submitted algorithms performed very well on both 500 and 1000 dpi capture devices , and some of them also exhibited a good robustness degree against never - seen - before attacks .results reported on fusion also shows that the liveness detection could further benefit from the combination of multiple features and approaches . a specific section on algorithms and systems fusionmight be explicitly added to a future livdet edition .there is a dark side of the moon , of course .it is evident that , despite the remarkable results reported in this paper , there is a clear need of further improvements .current performance for most submissions are not yet good enough for embedding a liveness detection algorithm into fingerprint verification system where the error rate is still too high for many real applications . in the authors opinion , discovering and explaining benefits and limitations of the currently used features is still an issue whose solution should be encouraged , because only the full understanding of the physical process which leads to the finger s replica and what features extraction process exactly does will shed light on the characteristics most useful for classification .we are aware that this is a challenging task , and many years could pass before seeing concrete results .however , we believe this could be the next challenge for a future edition of livdet , the fingerprint liveness detection competition .the first and second author had equal contributions to the research . this work has been supported by the center for identification technology research and the national science foundation under grant no .1068055 , and by the project computational quantum structures at the service of pattern recognition : modeling uncertainty " [ crp-59872 ] funded by regione autonoma della sardegna , l.r . 7/2007 ,bando 2012 .marcialis , et al . , first international fingerprint liveness detection competition livdet 2009 .d. yambay , et al ., livdet 2011 - fingerprint liveness detection competition 2011 , 5th iapr / ieee int .conf . on biometrics ( icb 2012 ) , new delhi ( india ) , march , 29th , april , 1st , 2012 .marcialis , et al . ,livdet 2013 fingerprint liveness detection competition 2013 , 6th iapr / ieee int . conf . 
on biometrics ( icb2013 ) , madrid ( spain ) , june , 4th , june , 7 , 2013 .marcialis , et al . ,livdet 2015 fingerprint liveness detection competition 2015 , 7th ieee int . conf .on biometrics : theory , applications and systems ( btas 2015 ) , in press .d. yambay , et al ., livdet - iris 2013-iris liveness detection competition 2013 .ieee international joint conference on biometrics ( ijcb 2014 ) , 2014 . c. sousedik , and c. busch ,presentation attack detection methods for fingerprint recognition systems : a survey , iet biometrics , 2014 .p. coli , g.l .marcialis , and f. roli , vitality detection from fingerprint images : a critical survey , ieee / iapr 2nd international conference on biometrics icb 2007 , august , 27 - 29 , 2007 , seoul ( korea ) , s .- w .lee and s. li eds ., springer lncs 4642 , pp.722 - 731 .d. willis , m. lee , six biometric devices point the finger at security .biometrics under our thumb , network computing , june 1998 .van der putte , t. and keuning , j. : biometrical fingerprint recognition : do nt get your fingers burned , smart card reserch and advanced applications , ifip tc8/wg8.8 fourth working conference on smart card research and advanced applications , pp .289 - 303 ( 2001 ) .t. matsumoto , h. matsumoto , k. yamada , and s. hoshino , impact of artificial gummy fingers on fingerprint systems , in proceedings of spie , 4677 , optical security and counterfeit deterence techniques iv , yokohama , japan .peter kallo , imre kiss , andras podmaniczky , all of budapest , janos talosi , negykanizsa , all of ( hu ) .`` detector for recognizing the living character of a finger in a fingerprint recognizing apparatus '' patent us 6,175641 , jan .16 , 2001 . schuckers sac . spoofing and anti - spoofing measures .information security technical report , vol 7 .4 , pages 56 - 62 , 2002 .bozhao tan , stephanie schuckers , spoofing protection for fingerprint scanner by fusing ridge signal and valley noise , pattern recognition , volume 43 , issue 8 , august 2010 , pages 2845 - 2857 , issn 0031 - 3203 , doi : 10.1016/j.patcog.2010.01.023 .p. coli , g.l .marcialis , and f. roli , fingerprint silicon replicas : static and dynamic features for vitality detection using an optical capture device , international journal of image and graphics , world scientific , 8 ( 4 ) 495 - 512 , 2008 .cappelli , raffaele , et al . fingerprint verification competition 2006 . "biometric technology today 15.7 ( 2007 ) : 7 - 9 .alonso - fernandez , fernando , and josef bigun . halmstad university submission to the first icb competition on iris recognition ( icir2013 ) . " ( 2013 ) .chakka , murali mohan , et al . competition on counter measures to 2-d facial spoofing attacks . " biometrics ( ijcb ) , 2011 international joint conference on .ieee , 2011 .c. watson , m. garris , e. tabassi , c. wilson , r. mccabe , s. janet , k. ko .user s guide to nist biometric image software . national institute of standards and technology .j. galbally , et al ., a high performance fingerprint liveness detection method based on quality related features , future gener .28 , 1 ( january 2012 ) , 311 - 321 .doi=10.1016/j.future.2010.11.024 b. biggio , et al . ,security evaluation of biometric authentication systems under real spoofing attacks , in biometrics , iet , vol.1 , no.1 , pp.11 - 24 , march 2012 e. marasco , and c. 
sansone , combining perspiration- and morphology - based static features for fingerprint liveness detection , pattern recognition letters , volume 33 , issue 9 , 1 july 2012 , pages 1148 - 1156 , issn 0167 - 8655 j. galbally , et al . , image quality assessment for fake biometric detection : application to iris , fingerprint , and face recognition , in image processing , ieee transactions on , vol.23 , no.2 , pp.710 - 724 , feb .2014 , doi : 10.1109/tip.2013.2292332 e. marasco , and c. sansone , an anti - spoofing technique using multiple textural features in fingerprint scanners , in biometric measurements and systems for security and medical applications ( bioms ) , 2010 ieee workshop on , vol ., no . , pp.8 - 14 , 9 - 9 sept .2010 , doi : 10.1109/bioms.2010.5610440 l. ghiani , et al ., experimental results on fingerprint liveness detection , in proceedings of the 7th international conference on articulated motion and deformable objects ( amdo 2012 ) , springer - verlag , berlin , heidelberg , 210 - 218 d. gragnaniello , et al . , wavelet - markov local descriptor for detecting fake fingerprints , electronics letters , 2014 , 50 , ( 6 ) , p. 439 - 441 , doi : 10.1049/el.2013.4044 , iet digital library p. b. patil , and h. shabahat , an anti - spoofing technique using multiple textural features in fingerprint scanner , international journal of electrical , electronics and computer engineering r. nogueira , et al . , evaluating software - based fingerprint liveness detection using convolutional networks and local binary patterns , in biometric measurements and systems for security and medical applications ( bioms ) proceedings , 2014 ieee workshop on , vol ., pp.22 - 29 , 17 - 17 oct .2014 doi : 10.1109/bioms.2014.6951531 y. jiang , and l. xin , spoof fingerprint detection based on co - occurrence matrix , international journal of signal processing , image processing and pattern recognition ( 2015 ) .x. jia , et al ., multi - scale local binary pattern with filters for spoof fingerprint detection , information sciences , volume 268 , 1 june 2014 , pages 91 - 102 , issn 0020 - 0255 , http://dx.doi.org/10.1016/j.ins.2013.06.041 .d. gragnaniello , et al . ,local contrast phase descriptor for fingerprint liveness detection , pattern recognition , volume 48 , issue 4 , april 2015 , pages 1050 - 1058 , issn 0031 - 3203 , http://dx.doi.org/10.1016/j.patcog.2014.05.021 .n. poh , et al ., anti - forensic resistant likelihood ratio computation : a case study using fingerprint biometrics , in signal processing conference ( eusipco ) , 2014 proceedings of the 22nd european , vol ., no . , pp.1377 - 1381 , 1 - 5 sept .2014 a. f. sequeira , and j. s. cardoso , fingerprint liveness detection in the presence of capable intruders , sensors .2015 , 15(6):14615 - 14638 .g. fumera , et al ., multimodal antispoofing in biometric recognition systems , in handbook of biometric antispoofing , s. marcel , m. nixon , and s. li ( eds . ) , springer , pp .145 - 164 , doi : 10.1007/978 - 1 - 4471 - 6524 - 89 , 2014 n. poh , et al ., toward an attack - sensitive tamper - resistant biometric recognition with a symmetric matcher : a fingerprint case study , in computational intelligence in biometrics and identity management ( cibim ) , 2014 ieee symposium on , vol ., no . , pp.175 - 180 , 9 - 12 dec .2014 , doi : 10.1109/cibim.2014.7015460 l. 
ghiani , et al ., fingerprint liveness detection using binarized statistical image features , in biometrics : theory , applications and systems ( btas ) , 2013 ieee sixth international conference on , vol ., no . , pp.1 - 6 , sept .29 2013-oct . 2 2013 ,doi : 10.1109/btas.2013.6712708 x. jia , et al ., multi - scale block local ternary patterns for fingerprints vitality detection , in biometrics ( icb ) , 2013 international conference on , vol ., no . , pp.1 - 6 , 4 - 7 june 2013 , doi : 10.1109/icb.2013.6612964 g.l .marcialis , et al ., large scale experiments on fingerprint liveness detection , joint iapr int .work . on structural and statistical pattern recognition ( spr & sspr 2012 ) , hiroshima ( japan ) , november , 7 - 9 , 2012 , springer lncs 7625 , pp .501 - 509 , 2012 y. zhang , et al . , fake fingerprint detection based on wavelet analysis and local binary pattern , biometric recognition ( ccbr 2014 ) , 8833 : 191 - 1982014 a. rattani , et al ., open set fingerprint spoof detection across novel fabrication materials , in information forensics and security , ieee transactions on , vol.10 , no.11 , pp.2447 - 2460 , nov . 2015 , doi : 10.1109/tifs.2015.2464772 p. johnson , and s. schuckers , fingerprint pore characteristics for liveness detection , proceedings of the international conference of the biometrics special interest group ( biosig ) , darmstadt , germany , 1012 september 2014 ; pp .x. jia , et al ., one - class svm with negative examples for fingerprint liveness detection , biometric recognition .springer international publishing , 2014 .216 - 224 .c. gottschlich , et al . , fingerprint liveness detection based on histograms of invariant gradients , biometrics ( ijcb ) , 2014 ieee international joint conference on .ieee , 2014 .
a spoof attack , a subset of presentation attacks , is the use of an artificial replica of a biometric in an attempt to circumvent a biometric sensor . liveness detection , or presentation attack detection , distinguishes between live and fake biometric traits and is based on the principle that additional information can be garnered above and beyond the data procured by a standard authentication system to determine if a biometric measure is authentic . the goals for the liveness detection ( livdet ) competitions are to compare software - based fingerprint liveness detection and artifact detection algorithms ( part 1 ) , as well as fingerprint systems which incorporate liveness detection or artifact detection capabilities ( part 2 ) , using a standardized testing protocol and large quantities of spoof and live tests . the competitions are open to all academic and industrial institutions which have a solution for either software - based or system - based fingerprint liveness detection . the livdet competitions have been hosted in 2009 , 2011 , 2013 and 2015 and have shown themselves to provide a crucial look at the current state of the art in liveness detection schemes . there has been a noticeable increase in the number of participants in livdet competitions as well as a noticeable decrease in error rates across competitions . participants have grown from four to the most recent thirteen submissions for fingerprint part 1 . fingerprints part 2 has held steady at two submissions each competition in 2011 and 2013 and only one for the 2015 edition . the continuous increase of competitors demonstrates a growing interest in the topic . fingerprint , liveness detection , biometric
[ secintro ] dyadic data are common in psychosocial and behavioral studies [ ] .many social phenomena , such as dating and marital relationships , are interpersonal by definition , and , as a result , related observations do not refer to a single person but rather to both persons involved in the dyadic relationship . members of dyads often influence each other s cognitions , emotions and behaviors , which leads to interdependence in a relationship .for example , a husband s ( or wife s ) drinking behavior may lead to lowered marital satisfaction for the wife ( or husband ) .a consequence of interdependence is that observations of the two individuals are correlated .for example , the marital satisfaction scores of husbands and wives tend to be positively correlated .one of the primary objectives of relationship research is to understand the interdependence of individuals within dyads and how the attributes and behaviors of one dyad member impact the outcome of the other dyad member . in many studies ,dyadic outcomes are measured over time , resulting in longitudinal dyadic data .repeatedly measuring dyads brings in two complications .first , in addition to the within - dyad correlation , repeated measures on each subject are also correlated , that is , within - subject correlation . when analyzing longitudinal dyadic data , it is important to account for these two types of correlations simultaneously ; otherwise , the analysis results may be invalid .the second complication is that longitudinal dyadic data are prone to the missing data problem caused by dropout , whereby subjects are lost to follow - up and their responses are not observed thereafter . in psychosocial dyadic studies ,the dropouts are often nonignorable or informative in the sense that the dropout depends on missing values . in the presence of the nonignorable dropouts, conventional statistical methods may be invalid and lead to severely biased estimates [ ] .there is extensive literature on statistical modeling of nonignorable dropouts in longitudinal studies . based on different factorizations of the likelihood of the outcome process andthe dropout process , identified two broad classes of likelihood - based nonignorable models : selection models [ ; ; follman and wu ( ) ; ] and pattern mixture models [ ; little ( , ) ; hogan and laird ( ) ; ; ] .other likelihood - based approaches that do not directly belong to this classification have also been proposed in the literature , for example , the mixed - effects hybrid model by and a class of nonignorable models by .another general approach for dealing with nonignorable dropouts is based on estimation equations and includes , , and .recent reviews of methods handling nonignorable dropouts in longitudinal data can be found in , , little ( ) , and . 
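for reference, the two factorizations behind these classes can be written compactly; the notation here is generic (y denotes the complete outcome process, r the dropout indicator, and theta, psi the outcome and dropout parameters):

f(y, r \mid \theta, \psi) = f(y \mid \theta)\, f(r \mid y, \psi) \quad \text{(selection model)},
f(y, r \mid \theta, \psi) = f(r \mid \psi)\, f(y \mid r, \theta) \quad \text{(pattern-mixture model)}.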
in spite of the rich body of literature noted above , to the best of our knowledge, the nonignorable dropout problem has not been addressed in the context of longitudinal dyadic data .the interdependence structure within dyads brings new challenges to this missing data problem .for example , within dyads , one member s outcome often depends on his / her covariates , as well as the other member s outcome and covariates .thus , the dropout of the other member in the dyad causes not only a missing ( outcome ) data problem for that member , but also a missing ( covariate ) data problem for the member who remains in the study.=-1 we propose a fully bayesian approach to deal with longitudinal dyadic data with nonignorable dropouts based on a selection model .specifically , we model each subject s longitudinal measurement process using a transition model , which includes both the patient s and spouse s characteristics as covariates in order to capture the interdependence between patients and their spouses .we account for the within - dyad correlation by introducing dyad - specific random effects into the transition model . to accommodate the nonignorable dropouts , we take the selection model approach by directly modeling the relationship between the dropout process and missing outcomes using a discrete time survival model . the remainder of the article is organized as follows . in section [ sec2 ]we describe our motivating data collected from a longitudinal dyadic breast cancer study . in section [ sec3 ]we propose a bayesian selection - model - based approach for longitudinal dyad data with informative nonresponse , and provide estimation procedures using a gibbs sampler in section [ sec4 ] . in section [ sec5 ] we present simulation studies to evaluate the performance of the proposed method . in section [ sec6 ] we illustrate our method by analyzing a breast cancer data set and we provide conclusions in section [ sec7 ] .our research is motivated by a single - arm dyadic study focusing on physiological and psychosocial aspects of pain among patients with breast cancer and their spouses [ ] . for individuals with breast cancer , spouses are most commonly reported as being the primary sources of support [ ] , and spousal support is associated with lower emotional distress and depressive symptoms in these patients [ ] .one specific aim of the study is to characterize the depression experience due to metastatic breast cancer from both patients and spouses perspectives , and examine the dyadic interaction and interdependence of patients and spouses over time regarding their depression .the results will be used to guide the design of an efficient prevention program to decrease depression among patients .for example , conventional prevention programs typically apply interventions to patients directly .however , if we find that the patient s depression depends on both her own and spouse s previous depression history and chronic pain , when designing a prevention program to improve the depression management and pain relief , we may achieve better outcomes by targeting both patients and spouses simultaneously rather than targeting patients only . in this study, female patients who had initiated metastatic breast cancer treatment were approached by the project staff .patients meeting the eligibility criteria ( e.g. 
, speak english , experience pain due to the breast cancer , having a male spouse or significant other , be able to carry on pre - disease performance , be able to provide informed consent ) were asked to participate the study on a voluntary basis .the participation of the study would not affect their treatment in any way .depression in patients and spouse was measured at three time points ( baseline , 3 months and 6 months ) using the center for epidemiologic studies depression scale ( cesd ) questionnaires .however , a substantial number of dropouts occurred .baseline cesd measurements were collected from 191 couples ; however , at 3 months , 101 couples ( 105 patients and 107 spouses ) completed questionnaires , and at 6 months , 73 couples ( 76 patients and 79 spouses ) completed questionnaires .the missingness of the cesd measurements is likely related to the current depression levels of the patients or spouses , thus an nonignorable missing data mechanism is assumed for this study .consequently , it is important to account for the nonignorable dropouts in this data analysis ; otherwise , the results may be biased , as we will show in section [ sec6 ] .consider a longitudinal dyadic study designed to collect repeated measurements of a response and a vector of covariates for each of dyads .let , and denote the outcome , covariate vector and outcome history , respectively , for the member of dyad at the measurement time with .we assume that is fully observed ( e.g. , is external or fixed by study design ) , but is subject to missingness due to dropout .the random variable , taking values from to , indicates the time the member of the dyad drops out , where if the subject completes the study , and if the subject drops out between the and measurement time , that is , are observed and are missing .we assume at least 1 observation for each subject , as subjects without any observations have no information and are often excluded from the analysis . when modeling longitudinal dyadic data , we need to consider two types of correlations : the within - subject correlation due to repeated measures on a subject , and the within - dyad correlation due to the dyadic structure. we account for the first type of correlation by a transition model , and the second type of correlation by dyad - specific random effects , as follows : parameters in this random - effects transition model have intuitive interpretations similar to those of the actor partner interdependence model , a conceptual framework proposed by to study dyadic relationships in the social sciences and behavior research fields .specifically , and represent the `` actor '' effects of the patient , which indicate how the covariates and the outcome history of the patient ( i.e. , and ) affect her own current outcome , whereas and represent the `` partner '' effects for the patient , which indicate how the covariates and the outcome history of the spouse ( i.e. , and ) affect the outcome of the patient . similarly , and characterize the actor effects and and characterize the partner effects for the spouse of the patient .estimates of the actor and partner effects provide important information about the interdependence within dyads .we assume that residuals and are independent and follow normal distributions and , respectively ; and and are independent of random effects s . the parameters and are intercepts for the patients and spouses , respectively . 
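the display that originally accompanied this description did not survive extraction; a hedged first-order sketch of such a random-effects transition model, written in illustrative notation (subscript 1 = patient, 2 = spouse; the original symbols and exact terms may differ), is

y_{i1t} = \alpha_1 + \mathbf{x}_{i1t}^{\top}\boldsymbol{\beta}_1 + \gamma_1\, y_{i1,t-1} + \mathbf{x}_{i2t}^{\top}\boldsymbol{\delta}_1 + \psi_1\, y_{i2,t-1} + b_i + \epsilon_{i1t},
y_{i2t} = \alpha_2 + \mathbf{x}_{i2t}^{\top}\boldsymbol{\beta}_2 + \gamma_2\, y_{i2,t-1} + \mathbf{x}_{i1t}^{\top}\boldsymbol{\delta}_2 + \psi_2\, y_{i1,t-1} + b_i + \epsilon_{i2t},

with \epsilon_{i1t} \sim N(0, \sigma_1^2), \epsilon_{i2t} \sim N(0, \sigma_2^2) and b_i \sim N(0, \sigma_b^2); here (\boldsymbol{\beta}_1, \gamma_1) play the role of the patient's actor effects and (\boldsymbol{\delta}_1, \psi_1) her partner effects, and similarly for the spouse.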
in many situations ,the conditional distribution of given and depends only on the prior outcomes and .if this is the case , we obtain the so - called - order transition model , a type of transition model that is most useful in practice [ ] .the choice of the model order depends on subject matters . in many applications ,it is often reasonable to set when the current outcome depends on only the last observed previous outcome , leading to commonly used markov models . the likelihood ratio test can be used to assess whether a specific value of is appropriate [ ] .auto - correlation analysis of the outcome history also can provide useful information to determine the value of [ ; ] .define and for . given \{ and the random effect , the joint log likelihood of for the dyad under the - order ( random - effects ) transition modelis given by where is the likelihood corresponding to model ( [ transition ] ) , and is assumed free of , for .an important feature of model ( [ transition ] ) that distinguishes it from the standard transition model is that the current value of the outcome depends on not only the subject s outcome history , but also the spouse s outcome history .such a `` partner '' effect is of particular interest in dyadic studies because it reflects the interdependence between the patients and spouses .this interdependence within dyads also makes the missing data problem more challenging .consider a dyad consisting of subjects and and that drops out prematurely . because the outcome history of is used as a covariate in the transition model of , when drops out , we face not only the missing outcome ( for ) but also missing covariates ( for ) .we address this dual missing data problem using the data augmentation approach , as described in section [ sec4 ] . to account for nonignorable dropouts ,we employ the discrete time survival model [ ] to jointly model the missing data mechanism .specifically , we assume that the distribution of depends on both the past history of the longitudinal process and the current outcome , but not on future observations .define the discrete hazard rate .it follows that the probability of dropout for the member in the dyad is given by we specify the discrete hazard rate using the logistic regression model : where is the random effect accounting for the within - dyadic correlation , and and are unknown parameters . in this dropout model, we assume that , conditioning on the random effects , a subject s covariates , past history and current ( unobserved ) outcome , the dropout probability of this subject is independent of the characteristics and outcomes of the other member in the dyad .the spouse may indirectly affect the dropout rate of the patient through influencing the patient s depression status ; however , when conditional on the patient s depression score , the dropout of the patient does not depend on her spouse s depression score . in practice, we often expect that , given and , the conditional dependence of on will be negligible because , temporally , the patient s ( current ) decision of dropout is mostly driven by his ( or her ) current and the most recent outcome statuses . 
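the corresponding displays for the dropout side were also lost; a hedged sketch of the first-order discrete hazard described above, again in illustrative notation, is

\lambda_{ijt} = P(d_{ij} = t \mid d_{ij} \ge t, \cdot), \qquad \operatorname{logit}(\lambda_{ijt}) = \eta_0 + \mathbf{x}_{ijt}^{\top}\boldsymbol{\eta}_1 + \eta_2\, y_{ij,t-1} + \eta_3\, y_{ijt} + v_i,

where v_i is the dyad-specific random effect and the dependence on the possibly unobserved current outcome y_{ijt} (through \eta_3) is what makes the dropout nonignorable; the dropout probability then follows the usual discrete-time survival form P(d_{ij} = t) = \lambda_{ijt} \prod_{s<t} (1 - \lambda_{ijs}).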
using the breast cancer study as an example , we do not expect that the early history of depression plays an important role for the patient s current decision of dropout ; instead , the patient drops out typically because she is currently experiencing or most recently experienced high depression .the early history may influence the dropout but mainly through its effects on the current depression status .once conditioning on the current and the most recent depression statuses , the influence from the early history is essentially negligible .thus , we use a simpler form of the discrete hazard model [ secestimation ] under the bayesian paradigm , we assign the following vague priors to the unknown parameters and fit the proposed model using a gibbs sampler : where denote an inverse gamma distribution with a shape parameter and a scale parameter .we set and at smaller values , such as 0.1 , so that the data dominate the prior information .let and denote the observed and missing part of the data , respectively .considering the iteration of the gibbs sampler , the first step of the iteration is `` data augmentation '' [ ] , in which the missing data are generated from their full conditional distributions . without loss of generality ,suppose for the dyad , member 2 drops out of the study no later than member 1 , that is , , and let .assuming a first - order ( ) transition model ( or markov model ) and letting denote a generic symbol that represents the values of all other model parameters , the data augmentation consists of the following 3 steps : for , draw from the conditional distribution where draw from the conditional distribution draw from the conditional distribution now , with the augmented complete data , the parameters are drawn alternatively as follows : for , draw random effects from the conditional distribution where draw from the conditional distribution where draw from the conditional distribution draw from the normal distribution where and similarly , draw from the conditional distribution where and are defined in a similar way to and draw and from the conditional distributions random effects from the conditional distribution draw from the conditional distribution [ secsimu ] we conducted two simulation studies ( a and b ) .simulation a consists of 500 data sets , each with 200 dyads and three repeated measures . for the dyad, we generated the first measurements , and , from normal distributions and , respectively , and generated the second and third measurements based on the following random - effects transition model : where , , , and covariates and were generated independently from .we assumed that the baseline ( first ) measurements and were observed for all subjects , and the hazard of dropout at the second and third measurement times depended on the current and last observed values of , that is , under this dropout model , on average , 24% ( 12% of member 1 and 13% of member 2 ) of the dyads dropped out at the second time point and 45% ( 26% of member 1 and 30% of member 2 ) dropped out at the third measurement time .we applied the proposed method to the simulated data sets .we used 1,000 iterations to burn in and made inference based on 10,000 posterior draws . 
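as a hedged illustration of the kind of data-generating mechanism just described (the numeric coefficients in the original displays were lost, so the values below are placeholders; only the structure - baseline outcomes, a first-order actor-partner transition, dyad random effects, and a dropout hazard depending on the current, possibly missing, outcome - mirrors the description), one could simulate a data set as follows.

import numpy as np

# hedged sketch of simulating dyadic longitudinal data with nonignorable
# dropout; all coefficient values are illustrative placeholders.
rng = np.random.default_rng(2024)
n = 200                                       # dyads, 3 visits each

def expit(u):
    return 1.0/(1.0 + np.exp(-u))

b = rng.normal(0.0, 0.5, n)                   # dyad-specific random effects
x1, x2 = rng.normal(size=(2, n, 3))           # member 1 / member 2 covariates
y1, y2 = np.empty((n, 3)), np.empty((n, 3))
y1[:, 0], y2[:, 0] = rng.normal(size=n), rng.normal(size=n)

for t in (1, 2):
    # illustrative actor (0.5) and partner (0.2) effects
    y1[:, t] = 0.3 + 0.4*x1[:, t] + 0.5*y1[:, t-1] + 0.2*y2[:, t-1] + b + rng.normal(0, 0.5, n)
    y2[:, t] = 0.3 + 0.4*x2[:, t] + 0.5*y2[:, t-1] + 0.2*y1[:, t-1] + b + rng.normal(0, 0.5, n)

# nonignorable dropout for member 1 (member 2 is handled analogously):
# the hazard depends on the current, possibly unobserved, outcome.
observed1 = np.ones((n, 3), dtype=bool)
for t in (1, 2):
    at_risk = observed1[:, t-1]
    hazard = expit(-2.0 + 0.8*y1[:, t] + 0.4*y1[:, t-1])
    drop = at_risk & (rng.uniform(size=n) < hazard)
    observed1[drop, t:] = False               # monotone missingness after dropout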
for comparison purposes, we also conducted complete - case and available - case analyses .the complete - case analysis was based on the data from dyads who completed the follow - up , and the available - case analysis was based on all observed data ( without considering the missing data mechanism ) .[ tab1 ] .2ccd2.2ccd2.2cc@ & & & + & & & + * parameter * & & * se * & * coverage * & & * se * & * coverage * & & * se * & * coverage * + & -0.03 & 0.06 & 0.93 & -0.01 & 0.05 & 0.94 & -0.01 & 0.05 & 0.95 + & -0.06 & 0.05 & 0.81 & -0.03 & 0.04 & 0.88 & 0.07 & 0.04 & 0.96 + & -0.16 & 0.12 & 0.72 & -0.10 & 0.10 & 0.81 & 0.05 & 0.08 & 0.94 + & -0.17 & 0.12 & 0.75 & -0.10 & 0.10 & 0.78 & 0.02 & 0.09 & 0.97 + & -0.06 & 0.06 & 0.89 & -0.06 & 0.05 & 0.84 & 0.08 & 0.05 & 0.97 + & -0.04 & 0.05 & 0.87 & -0.00 & 0.04 & 0.95 & -0.04 & 0.06 & 0.96 + & -0.17 & 0.12 & 0.73 & -0.10 & 0.10 & 0.84&-0.01 & 0.12 & 0.95 + & -0.17 & 0.12 & 0.72 & -0.10 & 0.10 & 0.81 & 0.01 & 0.09 & 0.97 + table [ tbtransmodel1 ] shows the bias , standard error ( se ) and coverage rate of the 95% credible interval ( ci ) under different approaches .we can see that the proposed method substantially outperformed the complete - case and available - case analyses .our method yielded estimates with smaller bias and coverage rates close to the 95% nominal level .in contrast , the complete - case and available - case analyses often led to larger bias and poor coverage rates .for example , the bias of the estimate of under the complete - case and available - case analyses were and , respectively , substantially larger than that under the proposed method ( i.e. , 0.05 ) ; the coverage rate using the proposed method was about 94% , whereas those using the complete - case and available - case analyses were under 82% .the second simulation study ( simulation b ) was designed to evaluate the performance of the proposed method when the nonignorable missing data mechanism is misspecified , for example , data actually are missing at random ( mar ) .we generated the first measurements , and , from normal distribution independently , and generated the second and third measurements based on the same transition model as in simulation a. we assumed the hazard of dropout at the second and third measurement times depended on the previous ( observed ) value of quadratically , but not on the current ( missing ) value of , that is , under this mar dropout model , on average , 37% ( 21% of member 1 and 21% of member 2 ) of the dyads dropped out at the second time point and 27% ( 24% of member 1 and 33% of member 2 ) dropped out at the third measurement time . 
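The bias, empirical standard error and coverage reported in the table can be tabulated from replicate-level posterior summaries; a small helper along these lines (assuming one point estimate and one 95% interval per simulation replicate) is sketched below.

```python
import numpy as np

def summarize(estimates, ci_lower, ci_upper, truth):
    """Bias, empirical SE and coverage of the 95% interval across replicates.

    estimates, ci_lower, ci_upper : arrays of length n_replicates
    truth                         : true parameter value used in the simulation
    """
    estimates = np.asarray(estimates, dtype=float)
    bias = estimates.mean() - truth
    se = estimates.std(ddof=1)
    covered = (np.asarray(ci_lower) <= truth) & (truth <= np.asarray(ci_upper))
    return bias, se, covered.mean()
```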
to fit the simulated data , we considered two nonignorable models with different specifications of the dropout ( or selection ) model .the first nonignorable model assumed a flexible dropout model which included the true dropout process ( [ simu2 ] ) as a specific case with ; and the second nonignorable model took a misspecified dropout model of the form table [ simulationc ] shows the bias , standard error and coverage rate of the 95% ci under different approaches .when the missing data were mar , the complete - case analysis was invalid and led to biased estimates and poor coverage rates because the complete cases are not random samples from the original population .in contrast , the available - case analysis yielded unbiased estimates and coverage rates close to the 95% nominal level .for the nonignorable models , the one with the flexible dropout model yielded unbiased estimates and reasonable coverage rates , whereas the model with the misspecified dropout model led to biased estimates ( e.g. , and ) and poor coverage rates .this result is not surprising because it is well known that selection models are sensitive to the misspecification of the dropout model [ ; ] . for nonignorable missing data ,the difficulty is that we can not judge whether a specific dropout model is misspecified or not based solely on observed data because the observed data contain no information about the ( nonignorable ) missing data mechanism . to address this difficulty ,one possible approach is to specify a flexible dropout model to decrease the chance of model misspecification .alternatively , maybe a better approach is to conduct sensitivity analysis to evaluate how the results vary when the dropout model varies .we will illustrate the latter approach in the next section .= [ tab2 ] .2cccccd2.2ccd2.2cc@ & & & & & & & & + & & & & + & & & & + * parameter * & & * se * & * coverage * & * bias * & * se * & * coverage * & & * se * & * coverage * & & * se * & * coverage * + & -0.06 & 0.08 & 0.86 & 0.00 & 0.06 & 0.95 & -0.01 & 0.06 & 0.95 & 0.14 & 0.06 & 0.78 + & -0.09 & 0.08 & 0.82 & 0.00 & 0.05 & 0.96 & 0.07 & 0.05 & 0.97 & -0.01 & 0.05 & 0.95 + & -0.11 & 0.14 & 0.84 & 0.00 & 0.10 & 0.95 & 0.04 & 0.08 & 0.96 & 0.03 & 0.08 & 0.94 + & -0.13 & 0.14 & 0.84 & 0.00 & 0.10 & 0.96 & 0.02 & 0.09 & 0.97 & 0.02 & 0.09 & 0.98 + & -0.07 & 0.08 & 0.87 & 0.00 & 0.06 & 0.96 & 0.02 & 0.06 & 0.97 & 0.12 & 0.06 & 0.79 + & -0.10 & 0.08 & 0.78 & 0.00 & 0.07 & 0.96 & 0.00 & 0.06 & 0.96 & -0.08 & 0.06 & 0.93 + & -0.14 & 0.14 & 0.82 & 0.00 & 0.10 & 0.96&0.01 & 0.12 & 0.94 & 0.01 & 0.12 & 0.95 + & -0.14 & 0.13 & 0.83 & 0.01 & 0.10 & 0.96 & 0.01 & 0.09 & 0.97 & 0.01 & 0.09 & 0.98 +[ secapplication ] we applied our method to the longitudinal metastatic breast cancer data .we used the first - order random - effects transition model for the longitudinal measurement process . in the model, we included 5 covariates : chronic pain measured by the multidimensional pain inventory ( mpi ) and previous cesd scores from both the patients and spouses , and the patient s stage of cancer . 
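For the application just described, the measurement (transition) model for a patient's current CESD score has a simple linear-predictor form; the sketch below spells it out. The dictionary keys are chosen only for readability and are not the paper's notation; the spouse model is analogous with the roles of patient and spouse exchanged.

```python
def cesd_mean_patient(prev_patient_cesd, prev_spouse_cesd, patient_mpi,
                      spouse_mpi, cancer_stage, coef, dyad_effect):
    """Linear predictor of the patient's current CESD under the first-order
    random-effects transition model (illustrative sketch)."""
    return (coef["intercept"]
            + coef["patient_cesd"] * prev_patient_cesd   # "actor" effect
            + coef["spouse_cesd"] * prev_spouse_cesd     # "partner" effect
            + coef["patient_mpi"] * patient_mpi
            + coef["spouse_mpi"] * spouse_mpi
            + coef["cancer_stage"] * cancer_stage
            + dyad_effect)
```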
in the discrete - time dropout model, we included the subject s current and previous cesd scores , mpi measurements and the patient s stage of cancer as covariates .age was excluded from the models because its estimate was very close to 0 and not significant .we used 5,000 iterations to burn in and made inference based on 5,000 posterior draws .we also conducted the complete - case and available - case analyses for the purpose of comparison .[ tab3 ] .14d2.14d2.14@ & & & & + & intercept & 2.53 ( -1.71 , 6.77 ) & 0.99 ( -2.55 , 4.52 ) & 5.10 ( 3.31 , 6.59 ) + & patient cesd & 0.43 ( 0.29 , 0.58 ) & 0.56 ( 0.44 , 0.68)&0.87 ( 0.80 , 0.93 ) + & spouse cesd & 0.07 ( -0.06 , 0.20 ) & 0.06 ( -0.06 , 0.17 ) & 0.14 ( 0.09 , 0.19 ) + & patient mpi & 0.94 ( 0.22 , 1.67 ) & 0.82 ( 0.21 , 1.43 ) & 1.24 ( 0.83 , 1.64 ) + & spouse mpi & 1.06 ( 0.29 , 1.82 ) & 0.90 ( 0.31 , 1.48 ) & 0.62 ( 0.40 , 0.84 ) + & cancer stage & 0.39 ( -0.81 , 1.60 ) & 0.59 ( -0.43 , 1.60 ) & 0.10 ( -0.47 , 0.66 ) + [ 4pt ] spouses & intercept & 3.68 ( -0.55 , 7.92 ) & 2.00 ( -1.63 , 5.64 ) & 8.16 ( 4.26 , 11.9 ) + & patient cesd & -0.05 ( -0.19 , 0.09 ) & 0.01 ( -0.11 , 0.13 ) & 0.68 ( 0.63 , 0.74 ) + & spouse cesd & 0.77 ( 0.64 , 0.90 ) & 0.78 ( 0.66 , 0.89 ) & 0.76 ( 0.71 , 0.81 ) + & patient mpi & 0.43 ( -0.29 , 1.15 ) & 0.27 ( -0.27 , 0.81 ) & 0.53 ( 0.33 , 0.73 ) + & spouse mpi & 0.55 ( -0.22 , 1.31 ) & 0.58 ( -0.04 , 1.20)&0.36 ( -0.64 , 1.15 ) + & cancer stage & -0.42 ( -1.63 , 0.79)&-0.21 ( -1.23 , 0.80)&-0.50 ( -0.92 , 0.09 ) + as shown in table [ tab3 ] , the proposed method suggests significant `` partner '' effects for the patients . specifically , the patient s depression increases with her spouse s mpi [ and 95% and previous cesd [ and 95% .in addition , there are also significant `` actor '' effects for the patients , that is , the patient s depression is positively correlated with her own mpi and previous cesd scores . for the spouses , we observed similar significant `` partner '' effects : the spouse s depression increases with the patient s mpi and previous cesd scores .the `` actor '' effects for the spouses are different from those for the patients .the spouse s depression correlates with his previous cesd scores but not the mpi level , whereas the patient s depression is related to both variables .based on these results , we can see that the patients and spouses are highly interdependent and influence each other s depression status .therefore , when designing a prevention program to reduce depression in patients , we may achieve better outcomes by targeting both patients and spouses simultaneously .as for the dropout process , the results in table [ tab4 ] suggest that the missing data for the patients are nonignorable because the probability of dropout is significantly associated with the patient s current ( missing ) cesd score .in contrast , the missing data for the spouse appears to be ignorable , as the probability of dropout does not depend on the spouse s current ( missing ) cesd score . for the variance components , the estimates of residuals variances for patients and spouses are [ 95% and [ 95% , respectively. the estimates of the variances for the random effects and are [ 95% and [ 95% , respectively , suggesting substantial variations across dyads .compared to the proposed approach , both the complete - case and available - case analyses fail to detect some `` partner '' effects . 
for example , for spouses , the complete - case and available - case analyses assert that the spouse s cesd is correlated with his own previous cesd scores only , whereas the proposed method suggested that the spouse s cesd is related not only to his own cesd but also to the patient s cesd and mpi level .in addition , for patients , the `` partner '' effect of the spouse s cesd is not significant under the complete - case and available - case analyses , but is significant under the proposed approach .these results suggest that ignoring the nonignorable dropouts could lead to a failure to detect important covariate effects .nonidentifiability is a common problem when modeling nonignorable missing data . in our approach ,the observed data contain very limited information on the parameters that link the missing outcome with the dropout process , that is , and in the dropout model .the identification of these parameters is heavily driven by the untestable model assumptions [ ; ] . in this case, a sensible strategy is to perform a sensitivity analysis to examine how the inference changes with respect to the values of and [ daniels and hogan ( , ) ; ] .we conducted a bayesian sensitivity analysis by assuming informative normal prior distributions for and with a small variance of 0.01 and the mean fixed , successively , at various values .figures [ figsensitivitygibbs1 ] and [ figsensitivitygibbs2 ] show the parameter estimates of the measurement models when the prior means of and vary from to 3 .in general , the estimates were quite stable , except that the estimate of cancer stage in the measurement model of patient ( figure [ figsensitivitygibbs1 ] ) and the estimate of spouse s mpi in the measurement model of spouse ( figure [ figsensitivitygibbs2 ] ) demonstrated some variations . and with a mean varying from to 3 and a fixed variance of 0.01 . ][ fig1 ] and with a mean varying from to 3 and a fixed variance of 0.01 . ] [ fig2 ] we conducted another sensitivity analysis on the prior distributions of , , and .we considered various inverse gamma priors , , by setting and 5 .as shown in table [ tablepriorvar ] , the estimates of the measurement model parameters were stable under different prior distributions , suggesting the proposed method is not sensitive to the priors of these parameters .[ seccon ] we have developed a selection - model - based approach to analyze longitudinal dyadic data with nonignorable dropouts .we model the longitudinal outcome process using a transition model and account for the correlation within dyads using random effects . in the model , we allow a subject s outcome to depend on not only his / her own characteristics but also the characteristics of the other member in the dyad . as a result ,the parameters of the proposed model have appealing interpretations as `` actor '' and `` partner '' effects , which greatly facilitates the understanding of interdependence within a relationship and the design of more efficient prevention programs . 
to account for the nonignorable dropout, we adopt a discrete time survival model to link the dropout process with the longitudinal measurement process .we used the data augment method to address the complex missing data problem caused by dropout and interdependence within dyads .the simulation study shows that the proposed method yields consistent estimates with correct coverage rates .we apply our methodology to the longitudinal dyadic data collected from a breast cancer study .our method identifies more `` partner '' effects than the methods that ignore the missing data , thereby providing extra insights into the interdependence of the dyads .for example , the methods that ignore the missing data suggest that the spouse s cesd related only to his own previous cesd scores , whereas the proposed method suggested that the spouse s cesd related not only to his own cesd but also to the patient s cesd and mpi level. this extra information can be useful for the design of more efficient depression prevention programs for breast cancer patients .[ tab5 ] .14d2.14d2.14@ & & & & + & intercept & 4.72 ( 3.32 , 6.11 ) & 5.00 ( 3.48 , 6.47)&5.02 ( 3.57 , 6.48 ) + & patient cesd & 0.87 ( 0.81 , 0.93 ) & 0.86 ( 0.80 , 0.92 ) & 0.88 ( 0.83 , 0.94 ) + & spouse cesd & 0.14 ( 0.09 , 0.19 ) & 0.14 ( 0.08 , 0.19 ) & 0.13 ( 0.08 , 0.18 ) + & patient mpi & 1.27 ( 0.84 , 1.71 ) & 1.12 ( 0.67 , 1.60 ) & 1.20 ( 0.85 , 1.57 ) + & spouse mpi & 0.71 ( 0.49 , 0.91 ) & 0.68 ( 0.46 , 0.87 ) & 0.61 ( 0.39 , 0.82 ) + & cancer stage & -0.03 ( -0.50 , 0.50)&0.18 ( -0.31 , 0.65)&-0.08 ( -0.57 , 0.40 ) + spouses & intercept & 6.40 ( 4.39 , 8.41 ) & 7.56 ( 5.35 , 9.93 ) & 7.52 ( 5.43 , 9.55 ) + & patient cesd & 0.67 ( 0.62 , 0.73 ) & 0.67 ( 0.62 , 0.72 ) & 0.69 ( 0.64 , 0.73 ) + & spouse cesd & 0.76 ( 0.71 , 0.80 ) & 0.75 ( 0.71 , 0.81 ) & 0.75 ( 0.71 , 0.80 ) + & patient mpi & 0.51 ( 0.32 , 0.71 ) & 0.54 ( 0.35 , 0.73 ) & 0.53 ( 0.34 , 0.72 ) + & spouse mpi & 0.79 ( -0.05 , 1.46 ) & 0.54 ( -0.03 , 1.06 ) & 0.45 ( -0.23 , 1.09 ) + & cancer stage & -0.41 ( -0.86 , 0.02 ) & -0.38 ( -0.81 , 0.03 ) & -0.48 ( -0.87 , 0.08 ) + in the proposed dropout model ( [ dropmodel ] ) , we assume that time - dependent covariates and , , have captured all important time - dependent factors that influence dropout . however , this assumption may not be always true .a more flexible approach is to include in the model a time - dependent random effect to represent all unmeasured time - variant factors that influence dropout .we can further put a hierarchical structure on to shrink it toward a dyad - level time - invariant random effect to account for the effects of unmeasured time - invariance factors on dropout .in addition , in ( [ dropmodel ] ) , in order to allow members in a dyad to drop out at different times , we specify separate dropout models for each dyadic member , linked by a common random effect . although the common random effect makes the members in a dyad more likely to drop out at the same time , it may not be the most effective modeling approach when dropout mostly occurs at the dyad level . 
in this case, a more effective approach is, in addition to the dyad-level random effect, to place a hierarchical structure on the coefficients of the common covariates (in the two dropout models) so that they shrink toward a common value, reflecting that dropout occurs almost always at the dyad level. we would like to thank the referees, the associate editor and the editor (professor susan paddock) for very helpful comments that substantially improved this paper.
dyadic data are common in the social and behavioral sciences, in which the members of a dyad are correlated because of the interdependence structure within the dyad. the analysis of longitudinal dyadic data becomes complex when nonignorable dropouts occur. we propose a fully bayesian selection-model-based approach to analyze longitudinal dyadic data with nonignorable dropouts. we model the repeated measures on each subject by a transition model and account for within-dyad correlations by random effects. in the model, we allow a subject's outcome to depend on his or her own characteristics and measurement history, as well as those of the other member of the dyad. we further account for the nonignorable missing-data mechanism using a selection model in which the probability of dropout depends on the missing outcome. we propose a gibbs sampler algorithm to fit the model. simulation studies show that the proposed method effectively addresses the problem of nonignorable dropouts. we illustrate our methodology using a longitudinal breast cancer study.
the study of complex networks has notably increased in the last years with applications to a variety of fields ranging from computer science and biology to social science and finance .a central problem in network science is the study of the random walks ( rw ) on a graph , and in particular of the relation between the topological properties of the network and the properties of diffusion on it .this subject is not only interesting from a purely theoretical perspective , but it has important implications to various scientific issues ranging from epidemics to the classification of web pages through pagerank algorithm .finally , rw theory is also used in algorithms for community detection . in this paperwe set up a new framework for the study of topologically biased random walks on graphs .this allows to address problems of community detection and synchronization in the field of complex networks .in particular by using topological properties of the network to bias the rws we explore the network structure more efficiently . a similar approach but with different focus can be found in . in this researchwe are motivated by the idea that biased random walks can be efficiently used for community finding .to this aim we introduce a set of mathematical tools which allow us an efficient investigation of the `` bias parameters '' space .we apply this tools to uncover some details in the spectra of graph transition matrix , and use the relation between spectra and communities in order to introduce a novel methodology for an efficient community finding .the paper is organized as follows : in the first section we define the topologically biased random walks ( tbrw ) .we then develop the mathematical formalism used in this paper , specifically the perturbation methods and the parametric equations of motion , to track the behaviour of different biases . in the second section we focus on the behavior of spectral gap in biased random walks .we define the conditions for which such a spectral gap is maximal and we present numerical evidence that this maximum is global . in the third section we present an invariant quantity for the biased random walk ;such constant quantity depends only upon topology for a broad class of biased random walks .finally , in the fourth section we present a general methodology for the application of different tbrw in the community finding problems .we then conclude by providing a short discussion of the material presented and by providing an outlook on different possible applications of tbrw .rws on graphs are a sub - class of markov chains . the traditional approach deals with the connection of the _ unbiased _ rw properties to the spectral features of _ transition operators _ associated to the network .a generic graph can be represented by means of the adjacency matrix whose entries are if an edge connects vertices and and otherwise .here we consider undirected graphs so that is symmetric . the _ normal matrix _ is related to through , where is a diagonal matrix with , i.e. the degree , or number of edges , of vertex . in the following we use uppercase letters for non - diagonal matrices and lowercase letters for the diagonal ones .note that by definition .consequently with _ iif _ , i.e. if and are nearest neighbors vertices .the matrix defines the transition probabilities for an _ unbiased _ random walker to pass from to .in such a case has the same positive value for any of the neighbors of and vanishes for all the other vertices . 
in analogy to the operator defining the single step transition probabilities in general markov chains , is also called the transition _ matrix _ of the unbiased rw .a _ biased _ rw on a graph can be defined by a more general transition matrix where the element gives again the probability that a walker on the vertex of the graph will move to the vertex in a single step , but depending on appropriate weights for each pair of vertex . a genuine way to write these probabilitiesis to assign weights which represent the rates of jumps from vertex to vertex and normalize them : [ probabpasage ] t_ij=. in this paper we consider biases which are self - consistently related to graph topological properties .for instance can be a function of the vertex properties ( the network degree , clustering , etc . ) or some functions of the edge ones ( multiplicity or shortest path betweenness ) or any combination of the two .there are other choices of biases found in the literature such as for instance maximal entropy related biases .some of the results mentioned in this paper hold also for biases which are not connected to graph properties as will be mentioned in any such case .our focus on graph properties for biases is directly connected with application of biased random walks in examination of community structure in complex networks .let us start by considering a vertex property of the vertex ( it can be either local as for example the degree , or related to the first neighbors of as the clustering coefficient , or global as the vertex betweenness ) .we choose the following form for the weights : [ probbias ] w_ij = a_ije^x_i , where the parameter tunes the strength of the bias .for the unbiased case is recovered . by varying probability of a walker to move from vertex to vertex will be enhanced or reduced with respect to the unbiased case according to the property of the vertex .for instance when , i.e. the degree of the vertex , for positive values of the parameter the walker will spend more time on vertices with high degree , i.e. it will be attracted by hubs .for it will instead try to `` avoid '' traffic congestion by spending its time on the vertices with small degree .the entries of the transition matrix can now be written as : [ transitionentries ] t_ij(,)=. for this choice of bias we find the following results : ( i ) we have a unique representation of any given network via operator , i.e. knowing the operator , we can reconstruct the graph ; ( ii ) for small we can use perturbation methods around the unbasied case ; ( iii ) this choice of bias permits in general also to visit vertices with vanishing feature , which instead is forbidden for instance for a power law ; ( iv ) this choice of biases is very common in the studies of energy landscapes , when biases represent energies ( see for example and references therein ) . in a similar way one can consider a _edge property ( for instance edge multiplicity or shortest path betweenness ) as bias . in this casewe can write the transition probability as : [ transitionentriesedge ] t_ij(,)=. the general case of some complicated multiparameter bias strategy can be finally written as [ transitionentriesmulti ] t_ij(,,)= . 
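A minimal numerical sketch of the vertex-biased transition matrix defined above: weights w_ij = a_ij e^(beta x_i) are normalized column by column, so that the moves of a walker sitting on vertex j sum to one (beta = 0 recovers the unbiased walk). The column-stochastic convention is an assumption made here so that the leading eigenvalue is 1 with the stationary distribution as its eigenvector.

```python
import numpy as np

def biased_transition_matrix(A, x, beta):
    """T_ij = a_ij * exp(beta * x_i) / sum_k a_kj * exp(beta * x_k).

    A    : symmetric adjacency matrix (n x n)
    x    : vertex property used as bias (degree, clustering, ...), length n
    beta : bias strength; beta = 0 gives the unbiased random walk
    """
    W = A * np.exp(beta * np.asarray(x))[:, None]   # w_ij = a_ij e^{beta x_i}
    return W / W.sum(axis=0)[None, :]               # normalize each column

# toy example: degree-biased walk on a small graph
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
degrees = A.sum(axis=1)
T = biased_transition_matrix(A, degrees, beta=0.5)
```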
while we mostly consider biased rw based on vertex properties , as shown below , most of the results can be extended to the other cases .the transition matrix in the former case can also be written as : where the diagonal matrices and are such that and .the frobenius - perron theorem implies that the largest eigenvalue of is always .furthermore , the eigenvector associated to is strictly positive in a connected aperiodic graph .its normalized version , denoted as , gives the asymptotic stationary distribution of the biased rw on the graph .assuming for it the form , where and is a normalization constant , and plugging this in the equation we get : hence the equation holds _ iif _ .therefore the stable asymptotic distribution of vertex centerd biased rws is [ explidistro ] p_i()=()^-1 e^x_iz_i ( ) .for we have the usual form of the stationary distribution in an unbiased rw where and . for general it can be easily demonstrated that the asymptotic solution of edge biased rw is , while for multiparametric rw the solution is . using eqs .( [ explidistro ] ) and ( [ transitionentries ] ) we can prove that the detailed balance condition holds .at this point it is convenient to introduce a different approach to the problem .we start by symmetrizing the matrix in the following way : ^{-1/2}\bsy{\hat{t}}(\bsy{x } , \beta ) [ \bsy{\hat{p}}(\beta)]^{1/2}\,,\ ] ] where is the diagonal matrix with the stationary distribution on the diagonal .the entries of the symmetric matrix for vertex centered case are given by the symmetric matrix shares the same eigenvalues with the matrix ; anyhow the set of eigenvectors is different and forms a complete orthogonal basis , allowing to define a meaningful distance between vertices. such distance can provide important additional information in the problem of community partition of complex networks .if is the eigenvector of the asymmetric matrix associated to the eigenvalue ( therefore ) , the corresponding eigenvector of the symmetric matrix , can always be written as .in particular for we have .the same transformation ( [ symmat ] ) can be applied to the most general multiparametric rw . in that casethe symmetric operator is this form also enables usage of perturbation theory for hermitian linear operators .for instance , knowing the eigenvalue associated to eigenvector , we can write the following expansions at sufficiently small : and .it follows that for a vertex centered bias [ firstorderlambda ] _^(1)()=^s(1 ) ( , ) , where , \ ] ] with being the anticommutator operator .operator and are diagonal matrices with and which is the expected value of that an random walker , will find moving from vertex to its neighbors . in the case of edge biasthe change of symmetric matrix with parameter can be written as , and represents the schur - hadamard product i.e. element wise multiplication of matrix elements .the eigenvector components in at the first order of expansion in the basis of the eigenvectors at are given by ( for ) : [ firstordvecs ] = .for the product vanishes and eqs .( [ derivation ] ) and ( [ firstordvecs ] ) hold only for non - degenerate cases . 
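Continuing the toy example above (and reusing `biased_transition_matrix`, `A` and `degrees` from it), the following sketch builds the stationary distribution p_i proportional to e^(beta x_i) z_i(beta) and the symmetrized operator, and numerically checks the two properties stated in the text: p is the fixed point of T, and the symmetrized matrix is indeed symmetric.

```python
import numpy as np

def stationary_distribution(A, x, beta):
    """p_i proportional to exp(beta*x_i) * z_i(beta), with z_i = sum_j a_ij exp(beta*x_j)."""
    e = np.exp(beta * np.asarray(x))
    z = A @ e
    p = e * z
    return p / p.sum()

def symmetrized_transition(T, p):
    """T^s = P^{-1/2} T P^{1/2}, sharing the spectrum of T (P = diag(p))."""
    return T / np.sqrt(p)[:, None] * np.sqrt(p)[None, :]

p = stationary_distribution(A, degrees, 0.5)
assert np.allclose(T @ p, p)          # p is the stationary distribution of T
Ts = symmetrized_transition(T, p)
assert np.allclose(Ts, Ts.T)          # the symmetrized operator is symmetric
```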
in general ,usual quantum mechanical perturbation theory can be used to go to higher order perturbations or to take into account degeneracy of eigenvalues .we can also exploit further the formal analogy with quantum mechanics using parametric equations of motion ( pem ) to study the dependence of the spectrum of .if we know such spectrum for one value of , we can calculate it for any other value of by solving a set of differential equations corresponding to pem in quantum mechanics .they are nothing else the expressions of eqs .( [ firstorderlambda ] ) and ( [ firstordvecs ] ) in an arbitrary complete orthonormal base .first the eigenvector is expanded in such a base : .we can then write where ( ) is a column ( row ) vector with entries and is the matrix with entries .let us now define the matrix whose rows are the copies of vector .the differential equation for the eigenvectors in the basis is then a practical way to integrate eqs .( [ pemlambda ] ) and ( [ pemvector ] ) can be found in . in order to calculate parameter dependence of eigenvectors and eigenvalues ,the best way to proceed is to perform an lu decomposition of the matrix as the product of a lower triangular matrix and an upper triangular matrix , and integrate differential equations of higher order which can be constructed in the same way as equations ( [ pemlambda ] ) and ( [ pemvector ] ) .a suitable choice for the basis is just the ordinary unit vectors spanned by vertices , i.e. .we found that for practical purposes , depending on the studied network , it is appropriate to use pem until the error increases to much and then diagonalize matrix again to get better precision .pem efficiently enables study of the large set of parameters for large networks due to its compatitive advantage over ordinary diagonalization .vs for networks of 10 communities with 10 vertices each ( the probability for an edge to be in a community is while outside of the community it is ) .solid points represent the solutions computed via diagonalization , while lines report the value obtained through integration of pem .different bias choice have been tested .circles ( blue ) are related to degree - based strategy , square ( red ) are related to clustering - based strategies , diamonds ( green ) multiplicity - based strategies . the physical quantities to get the variable in eq .( [ probbias ] ) in these strategies have been normalized with respect to their maximum value.,width=302,height=245 ]a key variable in the spectral theory of graphs is the _ spectral gap _ , i.e. the difference between first unitary and the second eigenvalues . the spectral gap measureshow fast the information on the rw initial distribution is destroyed and the stationary distribution is approached .the characteristic time for that is \simeq 1/\mu$ ] .we show in fig . 1 the dependence of spectral gap of simulated graphs with communities for different strategies ( degree , clustering and multiplicity based ) at a given value of parameter . in all investigated casesthe spectral gap has its well defined maximum , i.e. the value of parameter for which the random walker converges to stationary distribution with the largest rate .the condition of maximal spectral gap implies that it is a stationary point for the function , i.e. 
that its first order perturbation coefficient vanishes at this point : \ket{v_2(\beta_m)},\end{aligned}\ ] ] where and are defined above .the squares of entries of the vector in the chosen basis , define a particular measure on the graph .equation ( [ maximalspectralgap ] ) can be written as .thus we conclude that the local spectral maximum is achieved if the average difference between property and its expectation , with respect to this measure , in the neighborhood of vertex vanishes .we have studied behavior of spectral gap for different sets of real and simulated networks ( barabsi - albert model with different range of parameters , erds - rnyi model and random netwroks with given community structure ) and three different strategies ( degree - based , clustering - based and multiplicity - based ) .although in general it is not clear that the local maximum of spectral gap is unique , we have found only one maximum in all the studied networks .this observation is interesting because for all cases the shapes of spectral gap _ vs. _ looks typically gaussian - like . in both limits the spectral gap of heterogeneous network is indeed typically zero , as the rw stays in the vicinity of the vertices with maximal or minimal value of studied property .a fundamental question in the theory of complex networks is how topology affects dynamics on networks .our choice of -parametrized biases provides a useful tool to investigate this relationship .a central issue is , for instance , given by the search of properties of the transition matrix which are independent of and the chosen bias , but depend only on the topology of the network .an important example comes from the analysis the determinant of as a function of the bias parameters : for vertex centered bias using eq .( [ derivation ] ) we have and using the diagonality of the and in other words the quantity is a topological constant which does not depend on the choice of parameters .for we get and it follows that this quantity does not depend on the choice of vertex biases either .it can be shown that such quantity coincides with the determinant of adjacency matrix which must be conserved for all processes ..,width=302,height=245 ] there are many competing algorithms and methods for community detection . despite a significant scientific effort to find such reliable algorithms ,there is not yet agreement on a single general solving algorithm for the various cases . in this section instead of adding another precise recipe , we want to suggest a general methodology based on tbrw which could be used for community detection algorithms . to add trouble ,the very definition of communities is not a solid one . in most of the cases we define communities as connected subgraphswhose density of edges is larger within the proposed community than outside it ( a concept quantified by modularity ) .scientific community is therefore thriving to find a benchmark in order to assess the success of various methods .one approach is to create synthetic graphs with assigned community structure ( benchmark algorithms ) and test through them the community detection recipes .the girvan - newman ( gn ) and lancichinetti - fortunato - radicchi ( lfr ) are the most common benchmark algorithms . 
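Two quick numerical checks of the statements above, again reusing the helpers from the earlier sketches: a scan of the spectral gap over beta (which in our toy runs shows a single, roughly bell-shaped maximum, as described), and a verification that det T(beta), multiplied by the appropriate beta-dependent normalization, reproduces a beta-independent quantity equal to det A. The particular normalization below follows from writing T = diag(e^(beta x)) A diag(z)^(-1) and may be stated differently in the original derivation.

```python
import numpy as np

def spectral_gap(A, x, beta):
    """Gap between the leading eigenvalue (= 1) and the second eigenvalue of T."""
    T = biased_transition_matrix(A, x, beta)
    eigvals = np.sort(np.real(np.linalg.eigvals(T)))[::-1]
    return eigvals[0] - eigvals[1]

betas = np.linspace(-5.0, 5.0, 41)
gaps = [spectral_gap(A, degrees, b) for b in betas]
beta_at_max = betas[int(np.argmax(gaps))]        # location of the gap maximum

def det_invariant(A, x, beta):
    """det(T) * prod_j z_j / prod_i e^{beta x_i}; equals det(A) for any beta."""
    e = np.exp(beta * np.asarray(x))
    z = A @ e                                    # column sums of w_ij = a_ij e^{beta x_i}
    T = A * e[:, None] / z[None, :]
    return np.linalg.det(T) * np.prod(z) / np.prod(e)

assert np.isclose(det_invariant(A, degrees, 0.7), np.linalg.det(A))
assert np.isclose(det_invariant(A, degrees, -2.1), np.linalg.det(A))
```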
in both these models several topological properties ( not only edge density )are unevenly distributed within the same community and between different ones .we use this property to propose a novel methodology creating suitable tbrw for community detection .the difference between internal and external part of a community is related to the `` physical '' meaning of the graph . in many real processesthe establishment of a community is facilitated by the subgraph structure .for instance in social networks agents have a higher probability of communication when they share a lot of friends .we test our approach on gn benchmark since in this case we can easily compute the expected differences between the frequency of biased variables within and outside the community . in this sectionwe will describe how to use tbrw for community detection . for method is rather similar to the one introduced by donetti and muoz .the most notable difference is that we consider the spectral properties of transition matrix instead of the laplacian one .we decide if a vertex belongs to a community according to the following ideas : _( i ) _ we expect that the vertices belonging to the same community to have similar values of eigenvectors components ; _ ( ii ) _ we expect relevant eigenvectors to have the largest eigenvalues .indeed , spectral gap is associated with temporal convergence of random walker fluctuations to the ergodic stationary state .if the network has well defined communities , we expect the random walker to spend some time in the community rather than escaping immediately out of it .therefore the speed of convergence to the ergodic state should be related to the community structure .therefore eigenvectors associated with largest eigenvalues ( except for the maximal eigenvalue 1 ) should be correlated with community structure .coming back to the above mentioned donetti and muoz approach here we use the fact that some vertex properties will be more common inside a community and less frequent between different communities .we then vary the bias parameters trying both to shrink the spectral gap in transition matrix and to maximize the separation between relevant eigenvalues and the rest of the spectra . , , , .there is a clear gap between `` community '' band and the rest of the eigenvalues . , width=302,height=245 ] as a function of parameter which biases rw according to degrees of the vertices and parameter which bias rw according to multiplicities of the edges .both degrees and multiplicity values are normalized with respect to the maximal degree and multiplicity ( therefore the largest value is one).,width=302,height=245 ] for example in the case of gn benchmark the network consist of communities each with vertices i.e. vertices all together .the probability that the two vertices which belong to the same community are connected is .the probability that the two vertices which belong to different communities are connected is .the fundamental parameter which characterizes the difficulty of detecting the structure is [ mu ] = , where is the mean degree related to inter - community connections and is the mean degree related to edges inside - community . 
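A small generator for GN-style benchmark graphs makes the role of the mixing parameter explicit. The default sizes and degrees below (four communities of 32 vertices, mean internal degree 14 and external degree 2, hence a mixing parameter of about 0.12) follow the values conventionally used for this benchmark and are only illustrative here.

```python
import numpy as np

def gn_benchmark(n_communities=4, size=32, k_in=14.0, k_out=2.0, rng=None):
    """Girvan-Newman-style benchmark: equal-size communities, edges placed
    independently with probability p_in inside and p_out between communities."""
    rng = rng or np.random.default_rng(0)
    n = n_communities * size
    p_in = k_in / (size - 1)            # mean internal degree k_in
    p_out = k_out / (n - size)          # mean external degree k_out
    labels = np.repeat(np.arange(n_communities), size)
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p_in, p_out)
    A = (rng.random((n, n)) < probs).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                         # undirected, no self-loops
    return A, labels

A_gn, labels_true = gn_benchmark()
mu = 2.0 / (14.0 + 2.0)                 # mixing parameter, about 0.12
```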
as a rule of thumb we can expect to find well defined communities when , and observe some signature of communities even when .the probabilities and are related via the control parameter as .we now examine the edge multiplicity .the latter is defined as the number of common neighbors shared by neighbouring vertices .the expected multiplicity of an edge connecting vertices inter - community and inside - communities are respectively on fig .[ params ] we plot the ratio of the quantitites above defined , , _ vs. _ the parameter .we see that even for the ratio remains smaller than implying that the multiplicity is more common in the edges in the same community .based on this analysis for this particular example we expect that if we want to find well - defined communities via tbrw we have to increase bias with respect to the multiplicity . through numerical simulationswe find that the number of communities is related to number of eigenvalues in the `` community band '' .namely one in general observes a gap between eigenvalues and the next eigenvalue evident in a network with a strong community structure ( ) .the explanation that we give for that phenomenon can be expressed by considering a network of separated graphs .for such a network there are degenerate eignevalues .if we now start to connect these graphs with very few edges such a degeneracy is broken with the largest eigenvalue remaining while the next eigenvalues staying close to it .the distance between any two of this set of eigenvalues will be smaller than the gap between this community band and the rest of the eigenvalues in the spectrum .therefore , the number of eigenvalues different of which are forming this `` community band '' is always equal to the number of communities minus one , at least for different gn - type networks with different number of communities and different sizes , as long as .for example in the case of 1000 gn networks described with parameters , , , , i.e. , the histograms of eigenvalues are depicted on figure [ band ] . for our purposes we used two parameters biased rw ,in which topological properties are i.e. the normalized degree ( with respect to maximal degree in the network ) and i.e. the normalized multiplicity ( with respect to maximal multiplicity in the network ) .we choosed gn network whose parameters are , , and , for which . being the number of communities , as a criterion for good choice of parameters we decided to use the difference between and , i.e. , we decided to maximize the gap between `` community band '' and the rest of eigenvalues ; checking at the same time that the spectral gap shrinks . in fig .[ lambda3lambda4 ] , we plot such a quantity with respect to different biases .it is important to mention that for every single network instance there are different optimal parameters .this can be seen on figure [ lambdahistograms ] , where we show the difference between unbiased and biased eigenvalues for 1000 gn nets created with same parameters . as shown in the figurethe difference between fourth and fifth eigenvalue is now not necessarily the optimal for this choice of parameters .every realization of the network should be independently analyzed and its own parameters should be carefully chosen . 
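Putting the pieces together, here is a hedged sketch of the community-detection step: embed each vertex using the eigenvectors of the symmetrized biased transition matrix that belong to the "community band" (the q - 1 largest nontrivial eigenvalues) and cluster the embedded points. It reuses the helper functions and the benchmark graph from the earlier sketches; the k-means step from SciPy is used purely for illustration, since the methodology leaves the choice of bias parameters and clustering algorithm open.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_communities(A, x, beta, n_communities):
    """Embed vertices with the 'community band' eigenvectors of the symmetrized
    biased transition matrix and cluster them (minimal sketch)."""
    T = biased_transition_matrix(A, x, beta)
    p = stationary_distribution(A, x, beta)
    Ts = T / np.sqrt(p)[:, None] * np.sqrt(p)[None, :]
    eigvals, eigvecs = np.linalg.eigh(Ts)             # symmetric -> real spectrum
    order = np.argsort(eigvals)[::-1]
    embedding = eigvecs[:, order[1:n_communities]]    # skip the trivial eigenvector
    _, labels = kmeans2(embedding, n_communities, minit='++')
    return labels

# degree-biased example on the benchmark generated above (beta chosen arbitrarily)
labels = spectral_communities(A_gn, A_gn.sum(axis=1), beta=0.5, n_communities=4)
```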
, , and for 1000 gn networks described with parameters , , and .with black colour we indicate the eigenvalues of nonbiased rw , while with red we indicate the eigenvalues of rw biased with parameters and .note how this choice of parameters does not maximize `` community gap '' for all the different realizations of monitored gn network.,width=302,height=245 ] in the figs .[ unbiased ] and [ biased ] we present instead the difference between unbiased and biased projection on three eigenvectors with largest nontrivial eigenvalues .using 3d view it is easy to check that communities are better separated in the biased case then in the non - biased case . and .for this choice of parameters .there is a strong dispersion between different vertices which belong to the same community.,width=302,height=245 ] and .different markers represent four different predefined communities .this is an example of the same gn graph realization with and as the one on the previous figure . for this choice of parameters .one can notice tetrahedral distribution of vertices in which vertices from the same community belong to the same branch of tetrahedron.,width=302,height=245 ]in this paper we presented a detailed theoretical framework to analyze the evolution of tbrw on a graph . by using as biassome topological property of the graph itself allows to use the rw as a tool to explore the environment .this method maps vertices of the graph to different points in the -dimensional euclidean space naturally associated with the given graph . in this way we can measure distances between vertices depending on the chosen bias strategy and bias parameters .in particular we developed a perturbative approach to the spectrum of eigenvalues and eigenvectors associated with the transition matrix of the system .more generally we generalized the quantum pem approach to the present case .this led naturally to study the behavior of the gap between the largest and the second eigenvalue of the spectrum characterizing the relaxation to the stationary markov state . in numerical applications of such a theoretical frameworkwe have observed a unimodal shape of the spectral gap _ vs. _ the bias parameter which is not an obvious feature of the studied processes .we have finally outlined a very promising application of topologically biased random walks to the fundamental problem of community finding .we described the basic ideas and proposed some criteria for the choice of parameters , by considering the particular case of gn graphs .we are working further in direction of this application , but the number of possible strategies ( different topological properties we can use for biasing ) and types of networks is just too large to be presented in one paper .furthermore , since in many dynamical systems as the www or biological networks , a feedback between function and form ( topology ) is evident , our framework may be a useful way to describe mathematically such an observed mechanism . in the case of biology , for instance, the shape of the metabolic networks can be triggered not only by the chemical properties of the compounds , but also by the possibility of the metabolites to interact .biased rw can be therefore the mechanism through which a network attains a particular form for a given function . 
by introducing such an approach we can now address the problem of community detection in the graph. this is the reason why we have not introduced here another precise method for community detection, but rather a possible framework for creating different community-finding methods with different _ ad hoc _ strategies. indeed, in real situations we expect different types of networks to be efficiently explored by using different topological properties. this explains why we believe that tbrw could play a role in community detection problems, and we hope to stimulate further developments of this promising methodology in the network scientific community.

* acknowledgments * vinko zlati wants to thank mses of the republic of croatia for partial support through project no. 098-0352828-2836. the authors acknowledge support from the ec fet open project `` foc '' nr. 255987.

l. lovasz, _ combinatorics _ * 2 *, 1-48 (1993); d. aldous, j. fill, _ reversible markov chains and random walks on graphs _, in press; r. pastor-satorras, a. vespignani, _ phys. rev. lett. _ * 86 *, 3200 (2001).
we present a new approach to topologically biased random walks on undirected networks. we focus on a one-parameter family of biases and, using a formal analogy with perturbation theory in quantum mechanics, we investigate the features of biased random walks. this analogy is extended through the use of parametric equations of motion (pem) to study the features of random walks as functions of the parameter values. furthermore, we analyze the maximum of the spectral gap, associated with the second eigenvalue of the transition matrix, which governs the relaxation rate to the stationary state. applications of these studies allow _ ad hoc _ algorithms for the exploration of complex networks and their communities.
the ability to gain control of a huge amount of internet hosts could be easily achieved by the exploitation of worms which self - propagate through popular internet applications and services .internet worms have already proven their capability of inflicting massive - scale disruption and damage to the internet infrastructure .these worms employ normal _ scanning _ as a strategy to find potential vulnerable targets , i.e. , they randomly select victims from the ip address space .so far , there have been many existing schemes that are effective in detecting such scanning worms , e.g. , by capturing the scanning events or by passively detecting abnormal network traffic activities . in recent years, peer - to - peer ( p2p ) overlay applications have experienced an explosive growth , and now dominate large fractions of both the internet users and traffic volume ; thus , a new type of worms that leverage the popular p2p overlay applications , called _ p2p worms _ , pose a very serious threat to the internet .generally , the p2p worms can be grouped into two categories : _ passive _ p2p worms and _ active _ p2p worms .the passive p2p worm attack is generally launched either by copying such worms into a few p2p hosts shared folders with attractive names , or by participating into the overlay and responding to queries with the index information of worms .unable to identify the worm content , normal p2p hosts download these worms unsuspectedly into their own shared folders , from which others may download later without being aware of the threat , thus passively contributing to the worm propagation .the passive p2p worm attack could be mitigated by current patching systems and reputation models . in this paper , we focus on another serious p2p worm : active p2p worm .the active p2p worms could utilize the p2p overlay applications to retrieve the information of a few vulnerable p2p hosts and then infect these hosts , or as an alternative , these worms are directly released in a hit list of p2p hosts to bootstrap the worm infection . since the active p2p worms have the capacity of gaining control of the infected p2p hosts , they could perform rapid _ topological self - propagation _ by spreading themselves to neighboring hosts , and in turn , spreading throughout the whole network to affect the quality of overlay service and finally cause the overlay service to be unusable .the p2p overlay provides an accurate way for worms to find more vulnerable hosts easily without probing randomly selected ip addresses ( i.e. , low connection failure rate ) .moreover , the worm attack traffic could easily blend into normal p2p traffic , so that the active p2p worms will be more deadly than scanning worms .that is , they do not exhibit easily detectable anomalies in the network traffic as scanning worms do , so many existing defenses against scanning worms are no longer effective . besides the above internal infection in the p2p overlay, the infected p2p hosts could again mount attacks to external hosts . in similar sense , since the p2p overlay applications are pervasive on today s internet , it is also attractive for malicious external hosts to mount attacks against the p2p overlay applications and then employ them as an ideal platform to perform future massive - scale attacks , e.g. , botnet attacks . in this paper, we aim to develop a _holistic _ immunity system to provide the mechanisms of both _ internal defense _ and _ external protection _ against active p2p worms . 
in our system, we elect a small subset of p2p overlay nodes , _ phagocytes _ , which are immune with high probability and specialized in finding and `` eating '' active p2p worms .each phagocyte in the p2p overlay is assigned to manage a group of p2p hosts .these phagocytes monitor their managed p2p hosts connection patterns and traffic volume in an attempt to detect active p2p worm attacks .once detected , the local isolation procedure will cut off the links of all the infected p2p hosts .afterwards , the responsible phagocyte performs the contagion - based alert propagation to spread worm alerts to the neighboring phagocytes , and in turn , to other phagocytes . here, we adopt a threshold strategy to limit the impact area and enhance the robustness against the malicious alert propagations generated by infected phagocytes . finally , the phagocytes help acquire the software patches and distribute them to the managed p2p hosts . with the above four modules , i.e. , detection , local isolation , alert propagation and software patching , our system is capable of preventing internal active p2p worm attacks from being effectively mounted within the p2p overlay network .the phagocytes also provide the access control and filtering mechanisms for the connection establishment between the internal p2p overlay and the external hosts .firstly , the p2p traffic should be contained within the p2p overlay , and we forbid any p2p traffic to leak from the p2p overlay to external hosts .this is because such p2p traffic is generally considered to be malicious and it is possible that the p2p worms ride on such p2p traffic to spread to the external hosts .secondly , in order to prevent external worms from attacking the p2p overlay , we hide the p2p hosts ip addresses with the help of scalable distributed dns service , e.g. , codons .an external host who wants to gain access to the p2p overlay has no alternative but to perform an interaction towards the associated phagocyte to solve an adaptive computational puzzle ; then , according to the authenticity of the puzzle solution , the phagocyte can determine whether to process the request .we implement a prototype system , and evaluate its performance on a massive - scale testbed with realistic p2p network traces .the evaluation results validate the effectiveness and efficiency of our proposed holistic immunity system against active p2p worms . * outline*. we specify the system architecture in section [ sec : systemarchitecture ] .sections [ sec : internaldefenses ] and [ sec : externaldefenses ] elaborate the internal defense and external protection mechanisms , respectively .we then present the experimental design in section [ sec : exdesign ] , and discuss the evaluation results in section [ sec : exresults ] .finally , we give an overview of related work in section [ sec : relatedwork ] , and conclude this paper in section [ conclusions ] .current p2p overlay networks can generally be grouped into two categories : _ structured _ overlay networks , e.g. , chord , whose network topology is tightly controlled based on distributed hash table , and _ unstructured _ overlay networks , e.g. 
, gnutella , which merely impose loose structure on the topology .in particular , most modern unstructured p2p overlay networks utilize a two - tier structure to improve their scalability : a subset of peers , called _ ultra - peers _ , construct an unstructured mesh while the other peers , called _ leaf - peers _ , connect to the ultra - peer tier for participating into the overlay network . as shown in figure [ fig : lovers ] ,the network architecture of our system is similar to that of the two - tier unstructured p2p overlay networks . in our system ,a set of p2p hosts act as the phagocytes to perform the functions of defense and protection against active p2p worms .these phagocytes are elected among the participating p2p hosts in terms of the following metrics : high bandwidth , powerful processing resource , sufficient uptime , and applying the latest patches ( interestingly , the experimental result shown in section [ sec : exresults ] indicates that we actually do not need to have a large percentage of phagocytes applying the latest patches ) . as existing two - tierunstructured overlay networks do , the phagocyte election is performed periodically ; moreover , even if an elected phagocyte has been infected , our internal defense mechanism ( described in section [ sec : internaldefenses ] ) can still isolate and patch the infected phagocyte immediately .in particular , the population of phagocytes should be small as compared to the total overlay population , otherwise the scalability and applicability are questionable . as a result ,each elected phagocyte covers a number of managed p2p hosts , and each managed p2p host will belong to one closest phagocyte .that is , the phagocyte acts as the proxy for its managed hosts to participate into the p2p overlay network , and has the control over the managed p2p hosts .moreover , a phagocyte further connects to several nearby phagocytes based on close proximity .our main interest is the unstructured p2p overlay networks , since most of the existing p2p worms target the unstructured overlay applications .naturally , due to the similar network architecture , our system can easily be deployed into the unstructured p2p overlay networks . moreover , for structured p2p overlay networks , a subset of p2p hosts could be elected to perform the functions of phagocytes .we aim not to change the network architecture of the structured p2p overlay networks ; however , we elect phagocytes to form an overlay to perform the defense and protection functions this overlay acts as a security wall in a separate layer from the existing p2p overlay , thus not affecting the original p2p operations . in the next two sections, we will elaborate in detail our mechanisms of internal defense and external protection against active p2p worms .in this section , we first describe the active p2p worm attacks , and then , we design our internal defense mechanism . generally , active p2p worms utilize the p2p overlay to accurately retrieve the information of a few vulnerable p2p hosts , and then infect these hosts to bootstrap the worm infection . 
on one hand, a managed p2p host clearly knows its associated phagocyte and its neighboring p2p hosts that are managed by the same phagocyte ; so now , an infected managed p2p host could perform the worm infection in several ways simultaneously .firstly , the infected p2p host infects its neighboring managed p2p hosts very quickly .secondly , the infected p2p host attempts to infect its associated phagocyte .lastly , the infected managed p2p host could issue p2p key queries with worms to infect many vulnerable p2p hosts managed by other phagocytes . on the other hand, a phagocyte could be infected as well ; if so , the infected phagocyte infects its managed p2p hosts and then its neighboring phagocytes . as a result , in such a topological self - propagation way , the active p2p worms spread through the whole system at extraordinary speed .since the active p2p worms propagate based on the topological information , and do not need to probe any random ip addresses , thus their connection failure rate should be low ; moreover , the p2p worm attack traffic could easily blend into normal p2p traffic .therefore , the active p2p worms do not exhibit easily detectable anomalies in the network traffic as normal scanning worms do . in our system ,the phagocytes are those elected p2p hosts with the latest patches , and they can help their managed p2p hosts detect the existence of active p2p worms by monitoring these managed hosts connection transactions and traffic volume . if a managed p2p host always sends similar queries or sets up a large number of connections , the responsible phagocyte deduces that this managed p2p host is infected .another pattern the phagocytes will monitor is to determine if a portion of the managed p2p hosts have some similar behaviors such as issuing the similar queries , repeating to connect with their neighboring hosts , uploading / downloading the same files , etc ., then they are considered to be infected .concretely , a managed p2p host s _ latest _ behaviors are processed into a _ behavior sequence _ consisting of continuous hereafter . ]then , we can compute the behavior similarity between any two p2p hosts by using the _ levenshtein edit distance _ . without loss of generality, we suppose that there are two behavior sequences and , in which , where , and is the length of the behavior sequence .further , we can treat each behavior sequence as the combination of the _ operation sequence _ and the _ payload sequence _ .now , we simultaneously _ sort _ the operation sequence and the payload sequence of the behavior sequence to make the following similarity score be maximum . to obtain the optimal solution , we could adopt the _ maximum weighted bipartite matching _algorithm ; however , for efficiency , we use the _ greedy _ algorithm to obtain the approximate solution as an alternative . 
here , denotes the sorted ; and denote the item of the sorted and , respectively ; is the levenshtein edit distance function .finally , we treat the maximum as the similarity score of the two behavior sequences .if the score exceeds a threshold , we consider the two p2p hosts perform similarly .these detection operations are also performed between phagocytes at the phagocyte - tier because they could be infected as well though with latest patches .the infected phagocytes could perform the worm propagation rapidly ; however , we have the local isolation , alert propagation and software patching procedures in place to handle these infected phagocytes after detected by their neighboring phagocytes with the detection module as described above .note that , our detection mechanism is _ not _ a substitution for the existing worm detection mechanisms , e.g. , the worm signature matching , but rather an effective p2p - tailored complement to them . specifically , some _ tricky _ p2p worms may present the features of mild propagation rate , polymorphism , etc ., so they may maliciously propagate in lower speed than the aggressive p2p worms ; here , our software patching module ( in section [ subsec : patching ] ) and several existing schemes can help mitigate such tricky worm attacks .moreover , a few elaborate p2p worms , e.g. , p2p-worm.win32.hofox , have recently been reported to be able to kill the anti - virus / anti - worm programs on p2p hosts ; at the system level , some local countermeasures have been devised to protect defense tools from being eliminated , and the arms race will continue . in this paper , we assume that p2p worms can not disable our detection module , and therefore , each phagocyte can perform the normal detection operations as expected ; so can the following modules .if a phagocyte discovers that some of its managed p2p hosts are infected , the phagocyte will cut off its connections with the infected p2p hosts , and ask these infected hosts to further cut off the links towards any other p2p hosts .also , if a phagocyte is detected ( by its neighboring phagocyte ) as infected , the detecting phagocyte immediately issues a message to ask the infected phagocyte to cut off the connections towards the neighboring phagocytes , and then to trigger the software patching module ( in section [ subsec : patching ] ) at the infected phagocyte ; after the software patching , these cut connections should be reestablished . with the local isolation module ,our system has the capacity of self - organizing and self - healing .we utilize the local isolation to limit the impact of active p2p worms as quickly as possible .if a worm event has been detected , i.e. , any of the managed p2p hosts or neighboring phagocytes are detected as infected , the phagocyte propagates a worm alert to all its neighboring phagocytes .further , once a phagocyte has received the worm alerts from more than a threshold of its neighboring phagocytes , it also propagates the alert to all its neighboring phagocytes that did not send out the alert . in general, we should appropriately tune to limit the impact area and improve the robustness against the malicious alert propagation generated by infected phagocytes .the analytical study in implied that the effective software patching is feasible for an overlay network if combined with schemes to bound the worm infection rate . 
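To make the behavior-similarity test described earlier in this section concrete, the following sketch scores two behavior sequences with a normalized Levenshtein distance and a greedy pairing step (the greedy alternative to maximum weighted bipartite matching mentioned in the text). The per-pair normalization, the 0.5 threshold, and all function names are illustrative assumptions; the paper's exact scoring equation is not reproduced here.

```python
# Hedged sketch of the behavior-similarity check; the exact scoring formula
# is not shown in the text, so a normalized Levenshtein similarity with a
# greedy pairing of behaviors is assumed.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def item_similarity(x, y):
    """Similarity in [0, 1]; 1 means two identical behaviors."""
    if not x and not y:
        return 1.0
    return 1.0 - levenshtein(x, y) / max(len(x), len(y))

def behavior_similarity(seq_a, seq_b):
    """Greedy approximation of the maximum pairwise similarity between
    two behavior sequences (lists of behavior strings)."""
    remaining = list(seq_b)
    total, pairs = 0.0, 0
    for a in seq_a:
        if not remaining:
            break
        best = max(remaining, key=lambda b: item_similarity(a, b))
        total += item_similarity(a, best)
        remaining.remove(best)
        pairs += 1
    return total / pairs if pairs else 0.0

# two hosts are flagged as behaving similarly if the score exceeds a threshold
THRESHOLD = 0.5  # placeholder; the paper treats the threshold as a tunable parameter
print(behavior_similarity(["query:fileA", "connect:hostX"],
                          ["query:fileA", "connect:hostY"]) > THRESHOLD)
```

In a deployment, each phagocyte would evaluate such scores over the recent behavior sequences of its managed hosts, and between neighboring phagocytes, exactly as the detection discussion above suggests.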
in our system , the security patches are published to the participating p2p hosts using the following two procedures : * periodical patching : * a patch distribution service provided by system maintainers periodically pushes the latest security patches to all phagocytes through the underlying p2p overlay , and then these phagocytes install and distribute them to all their managed p2p hosts .note that , we can utilize the periodical patching to help mitigate the tricky p2p worms ( in section [ subsec : detection ] ) which are harder to be detected .* urgent patching : * when a phagocyte is alerted of a p2p worm attack , it will immediately pull the latest patches from a system maintainer via the direct http connection ( for efficiency , not via the p2p overlay ) , and then install and disseminate them to all its managed p2p hosts .specifically , each patch must be signed by the system maintainer , so that each p2p host can verify the patch according to the signature .note that , the zero - day vulnerabilities are not fictional , thus installing the latest patches can not always guarantee the worm immunity .the attackers may utilize these vulnerabilities to perform deadly worm attacks .we can integrate our system with some other systems , e.g. , shield and vigilante , to defend against such attacks , which can be found in .as much as possible , the phagocytes provide the containment of p2p worms in the p2p overlay networks .further , we utilize the phagocytes to implement the p2p traffic filtering mechanism which forbids any p2p connections from the p2p overlay to external hosts because such p2p connections are generally considered to be malicious it is possible that the p2p worms ride on the p2p traffic to spread to the external hosts .we can safely make the assumption that p2p overlay traffic should be contained inside the p2p overlay boundary , and any leaked p2p traffic is abnormal .therefore , once this leakiness is detected , the phagocytes will perform the former procedures for local isolation , alert propagation and software patching .our external protection mechanism aims to protect the p2p overlay network against the external worm attacks .we hide the p2p hosts ip addresses to prevent external hosts from directly accessing the internal p2p resources .this service can be provided by a scalable distributed dns system , e.g. , codons .such dns system returns the associated phagocyte which manages the requested p2p host .then , the phagocyte is able to adopt our following proposed computational puzzle scheme to perform the function of access control over the requests issued by the requesting external host .we propose a _ novel _ adaptive and interaction - based computational puzzle scheme at the phagocytes to provide the access control over the possible external worms attacking the internal p2p overlay . since we are interested in how messages are processed as well as what messages are sent , for clarity and simplicity , we utilize the annotated alice - and - bob specification to describe our puzzle scheme .as shown in figure [ fig : puzzle ] , to gain access to the p2p overlay , an external host has to perform a lightweight interaction towards the associated phagocyte to solve an adaptive computational puzzle ; then , according to the authenticity of the puzzle solution , the phagocyte can determine whether to process the request .* step 1 . 
*the external host first generates a -bit nonce as its session identifier .then , the external host stores and sends it to the phagocyte .* step 2 . * on receiving the message consisting of sent by the host , the phagocyte adaptively adjusts the puzzle difficulty based on the following two real - time statuses of the network environment . _ status of phagocyte _ : this status indicates the usage of the phagocyte s resources , i.e. , the ratio of consumed resources to total resources possessed by the phagocyte .the more resources a phagocyte has consumed , the harder puzzles the phagocyte issues in the future . _ status of external host _ : in order to mount attacks against p2p hosts effectively , malicious external hosts have no alternative but to perform the interactions and solve many computational puzzles .that is , the more connections an external host tries to establish , the higher the probability that this activity is malicious and worm - like .hence , the more puzzles an external host has solved in the recent period of time , the harder puzzles the phagocyte issues to the very external host .note that , since a malicious external host could simply spoof its ip address , in order to effectively utilize the status of external host , our computational puzzle scheme should have the capability of defending against ip spoofing attacks , which we will describe later .subsequently , the phagocyte simply generates a _ unique _ -bit session identifier for the external host according to the host s ip address ( extracted from the ip header of the received message ) , the host s session identifier and the puzzle difficulty , as follows : here , the is a keyed hash function for message authentication , and the is a -bit key which is _ periodically _ changed and only known to the phagocyte itself .such limits the time external hosts have for computing puzzle solutions , and it also guarantees that an external attacker usually does not have enough resources to pre - compute all possible solutions in step 3 . after the above generation process, the phagocyte replies to the external host at with the host s session identifier , the phagocyte s session identifier and the puzzle difficulty .once the external host has received this reply message , it first checks whether the received is really generated by itself .if the received is bogus , the external host simply drops the message ; otherwise , the host stores the phagocyte s session identifier immediately . such reply and checking operations can effectively defend against ip spoofing attacks .* step 3 . *the external host retrieves the pair as the global session identifier , and then tries to solve the puzzle according to the equation below : here , the is a cryptographic hash function , the is a hash value with the first bits being equal to , and the is the puzzle _ solution_. due to the features of hash function , the external host has no way to figure out the solution other than brute - force searching the solution space until a solution is found , even with the help of many other solved puzzles . the cost of solving the puzzle depends exponentially on the difficulty , which can be effortlessly adjusted by the phagocyte .after the brute - force computation , the external host sends the phagocyte a message including the global session identifier ( i.e. 
, the pair ) , the puzzle difficulty , the puzzle solution and the actual _ request_.once the phagocyte has received this message , it performs the following operations in turn : * _ a _ ) * check whether the session identifier is really fresh based on the database of the past global session identifiers .this operation can effectively defend against replay attacks .* _ b _ ) * check whether the phagocyte s session identifier can be correctly generated according to equation [ eqn : sip ] . specifically, this operation can additionally check whether the difficulty level reported by the external host is the original determined by the phagocyte . *_ c _ ) * check whether the puzzle solution is correct according to equation [ eqn : compute ] , which will also not incur significant overhead on the phagocyte . *_ d _ ) * store the global session identifier , and act as the overlay proxy to transmit the request submitted by the external host .note that , in our scheme , the phagocyte stores the session - specific data and processes the actual request only after it has verified the external host s puzzle solution .that is , the phagocyte does not commit its resources until the external host has demonstrated the sincerity . specifically in the above sequence of operations ,if one operation succeeds , the phagocyte continues to perform the next ; otherwise , the phagocyte cancels all the following operations , and the entire interaction ends .more details about the puzzle design rationale can be found in .so far , several computational puzzle schemes have been proposed .however , most of them consider only the status of resource providers , so they can not reflect the network environment completely .recently , an ingenious puzzle scheme , portcullis , was proposed . in portcullis , since a resource provider gives priority to requests containing puzzles with higher difficulty levels , to gain access to the requested resources , each resource requester , no matter legitimate or malicious , has to compete with each other and solve hard puzzles under attacks .this may influence legitimate requesters experiences significantly .compared with existing puzzle schemes , our adaptive and interaction - based computational puzzle scheme satisfies the fundamental properties of a good puzzle scheme .it treats each external host _ distinctively _ by performing a lightweight interaction to flexibly adjust the puzzle difficulty according to the real - time statuses of the network environment .this guarantees that our computational puzzle scheme does not influence legitimate external hosts experiences significantly , and it also prevents a malicious external host from attacking p2p overlay without investing unbearable resources .in real - world networks , hosts computation capabilities vary a lot , e.g. , the time to solve a puzzle would be much different between a host with multiple fast cpus and a host with just one slow cpu . to decrease the computational disparity , some other kinds of puzzles , e.g. , memory - bound puzzle , could be complementary to our scheme .note that , with low probability , a phagocyte may also be compromised by external worm attackers , then they could perform the topological worm propagation ; here , our proposed internal defense mechanism could be employed to defend against such attacks .in our experiments , we first implement a prototype system , and then construct a massive - scale testbed to verify the properties of our prototype system . 
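The puzzle interaction described above (steps 1-3 plus the phagocyte-side checks (a)-(d)) can be illustrated with a short sketch that uses a keyed hash for the phagocyte's session identifier and a brute-force hash puzzle for the external host. The "leading zero bits" success criterion, the byte layout of the hashed message, and the function names are assumptions made for this example; only the overall structure follows the description in the text, and the adaptive choice of the difficulty level is left to the caller.

```python
# Illustrative sketch of the adaptive, interaction-based puzzle exchange.
import hashlib, hmac, os, struct

SERVER_KEY = os.urandom(16)   # periodically rotated secret known only to the phagocyte

def make_sip(host_ip: str, sic: bytes, difficulty: int) -> bytes:
    """Step 2: the phagocyte derives its session identifier with a keyed hash."""
    msg = host_ip.encode() + sic + bytes([difficulty])
    return hmac.new(SERVER_KEY, msg, hashlib.sha1).digest()

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve_puzzle(sic: bytes, sip: bytes, difficulty: int) -> int:
    """Step 3: the external host brute-forces a solution by trial and error."""
    solution = 0
    while True:
        digest = hashlib.sha1(sic + sip + bytes([difficulty]) +
                              struct.pack(">Q", solution)).digest()
        if leading_zero_bits(digest) >= difficulty:
            return solution
        solution += 1

def verify(host_ip, sic, sip, difficulty, solution) -> bool:
    """Phagocyte-side checks (b) and (c): SIP authenticity and solution validity."""
    if not hmac.compare_digest(sip, make_sip(host_ip, sic, difficulty)):
        return False
    digest = hashlib.sha1(sic + sip + bytes([difficulty]) +
                          struct.pack(">Q", solution)).digest()
    return leading_zero_bits(digest) >= difficulty

# example round trip with a modest difficulty level
sic = os.urandom(8)                        # step 1: the host's nonce
sip = make_sip("203.0.113.7", sic, 12)     # step 2
sol = solve_puzzle(sic, sip, 12)           # step 3
assert verify("203.0.113.7", sic, sip, 12, sol)
```

The freshness check (a) and the storage of past session identifiers are omitted for brevity; they would amount to keeping a set of recently seen (sic, sip) pairs and rejecting duplicates.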
* internal defense .* we implement an internal defense prototype system including all basic modules described in section [ sec : internaldefenses ] . here, a phagocyte monitors each of its connected p2p hosts latest requests .firstly , if more than half of the managed p2p hosts perform similar behaviors , the responsible phagocyte considers that the managed zone is being exploited by worm attackers . secondly, if more than half of a phagocyte s neighboring phagocytes perform the similar operations , the phagocyte considers its neighboring phagocytes are being exploited by worm attackers . in particular , the similarity is measured based on the equation [ eqn : sim ] with a threshold of . then , in the local isolation module , if a phagocyte has detected worm attacks , the phagocyte will cut off the associated links between the infection zone and the connected p2p hosts .afterwards , in the alert propagation module , if a phagocyte has detected any worm attacks , it will broadcast a worm alert to all its neighboring phagocytes ; further , if a phagocyte receives more than half of its neighboring phagocytes worm alerts , i.e. , , the phagocyte will also broadcast the alert to all its neighboring phagocytes that did not send out the alert .finally , in the software patching module , the phagocytes acquire the patches from the closest one of the system maintainers ( i.e. , online trusted phagocytes in our testbed ) , and then distribute them to all their managed p2p hosts .we have not yet integrated the signature scheme into the software patching module of our prototype system .note that , in the above , we simply set the parameters used in our prototype system , and in real - world systems , the system designers should appropriately tune these parameters according to their specific requirements. * external protection . *we utilize our adaptive and interaction - based computational puzzle module to develop the external protection prototype system . in this prototype system , we use sha1 as the cryptographic hash function . generally , solving a puzzle with difficulty level will force an external host to perform sha1 computations on average .in particular , the difficulty level varies between and in our system this will cost an external host second ( ) to seconds ( ) on our power5 cpus .in addition , the change cycle of the puzzle - related parameters is set to minutes .yet , we have not integrated our prototype system with the scalable distributed dns system , and this work will be part of our future work .we use the realistic network traces crawled from a million - node gnutella network by the cruiser crawler .the dedicated massive scale gnutella network is composed of two tiers including the ultra - peer tier and leaf - peer tier . for historical reasons ,the ultra - peer tier consists of not only modern ultra - peers but also some _ legacy - peers _ that reside in the ultra - peer tier but can not accept any leaf - peers .specifically , in our experiments , the ultra - peers excluding legacy - peers perform the functions of phagocytes , and the leaf - peers act as the managed p2p hosts . 
then , we adopt the widely accepted gt - itm to generate the transit - stub model consisting of routers for the underlying hierarchical internet topology .there are transit domains at the top level with an average of routers in each , and a link between each pair of these transit routers has a probability of .each transit router has an average of stub domains attached , and each stub has an average of routers , with the link between each pair of stub routers having a probability of .there are two million end - hosts uniformly assigned to routers in the core by local area network ( lan ) links .the delay of each lan link is set to and the average delay of core links is .now , the crawled gnutella networks can model the realistic p2p overlay , and the generated gt - itm network can model the underlying internet topology ; thus , we deploy the crawled gnutella networks upon the underlying internet topology to simulate the realistic p2p network environment .we do not model queuing delay , packet losses and any cross network traffic because modeling such parameters would prevent the massive - scale network simulation . as shown in table [tab : trace ] , we list various gnutella traces that we use in our experiments with different node populations and/or different percentages of phagocytes . _ trace _ 1 : crawled by cruiser on sep .27th , 2004 . _ trace _ 2 : crawled by cruiser on feb .2nd , 2005 . _ trace _ 3 : based on trace 1 , we remove a part of phagocytes randomly ; then , we remove the _isolated _ phagocytes , i.e. , these phagocytes do not connect to any other phagocytes ; finally , we further remove the isolated managed p2p hosts , i.e. , these managed p2p hosts do not connect to any phagocytes . _ trace _ 4 : based on trace 3 , we remove a part of managed p2p hosts randomly . _ trace _ 5 : based on trace 4 , we further remove a part of managed p2p hosts randomly . _ trace _ 6 : based on trace 1 , we use the same method as described in the generation process of trace 3 .in addition , we remove an extra part of managed p2p hosts .in our experiments , we characterize the performance under various different circumstances by using three metrics : _ peak infection percentage of all p2p hosts _ : the ratio of the maximum number of infected p2p hosts to the total number of p2p hosts .this metric indicates whether phagocytes can effectively defend against internal attacks . _ blowup factor of latency _ : this factor is the latency penalty between the external hosts and the p2p overlay via the phagocytes and direct routing .this indicates the efficiency of our phagocytes to filter the requests from external hosts to the p2p overlay . _ percentage of successful external attacks _ : the ratio of the number of successful external attacks to the total number of external attacks .this metric indicates the effectiveness of our phagocytes to prevent external hosts from attacking the p2p overlay . in our prototype system , we model a percentage of phagocytes and managed p2p hosts being initially _ immune _ , respectively ; except these immune p2p hosts , the other hosts are _vulnerable_. moreover , there are a percentage of p2p hosts being initially _ infected _ , which are distributed among these vulnerable phagocytes and vulnerable managed p2p hosts uniformly at random .all the infected p2p hosts perform the active p2p worm attacks ( described in section [ subsec : threat_model ] ) , and meanwhile , our internal defense modules deployed at each participant try to defeat such attacks . 
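Assuming the three metrics are the straightforward ratios described above, they can be computed with trivial helpers such as the following (names are illustrative):

```python
# Helpers for the three evaluation metrics; the ratio definitions are taken
# directly from the text, everything else is illustrative.

def peak_infection_percentage(infected_over_time, total_hosts):
    """Maximum number of infected hosts observed, as a percentage of all hosts."""
    return 100.0 * max(infected_over_time) / total_hosts

def latency_blowup_factor(latency_via_phagocyte, latency_direct):
    """Latency penalty of routing external traffic through a phagocyte."""
    return latency_via_phagocyte / latency_direct

def successful_attack_percentage(successful_attacks, total_attacks):
    """Fraction of external attacks that get through, as a percentage."""
    return 100.0 * successful_attacks / total_attacks
```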
with different experimental parameters described in table [ tab : ex_in ] , we conduct four different experiments to evaluate the internal defense mechanism .* experiment 1 impact of immune phagocytes : * with seven different initial percentages of immune phagocytes , we fix the initial percentage of immune managed p2p hosts to , and vary the number of initial infected p2p hosts so that these infected hosts make up between and of all the vulnerable p2p hosts .now , we can investigate the impact of immune phagocytes by calculating the peak infection percentage of all p2p hosts .the experimental result shown in figure [ fig : ex1 ] demonstrates that when the initial infection percentage of all vulnerable p2p hosts is low ( e.g. , ) , the phagocytes can provide a good containment of active p2p worms ; otherwise , the worm propagation is very fast , but the phagocytes could still provide the sufficient containment this property is also held in the following experiments .interestingly , the initial percentage of immune phagocytes does not influence the performance of our system significantly , i.e. , the percentage of phagocytes being initially immune has no obvious effect .this is a good property because we do not actually need to have high initial percentage of immune phagocytes .also , this phenomenon implies that increasing the number of immune phagocytes does not further provide much significant defense .thus , we can clearly conclude that the phagocytes are effective and scalable in performing detection , local isolation , alert propagation and software patching .* experiment 2 impact of immune managed p2p hosts : * in this experiment , for of phagocytes being initially immune , we investigate the performance of our system with various initial percentages of immune managed p2p hosts in steps of . the result shown in figure[ fig : ex2 ] is within our expectation .the peak infection percentage of all p2p hosts decreases with the growth of the initial percentage of immune managed p2p hosts .actually , in real - world overlay networks , even a powerful attacker could initially control tens of thousands of overlay hosts ( in the x - axis ) ; hence , we conclude that our phagocytes have the capacity of defending against active p2p worms effectively even in a highly malicious environment .* experiment 3 impact of network scale : * figure [ fig : ex3 ] plots the performance of our system in terms of different network scales . in traces 1 , 2 , 5 and 6, there are different node populations , but the ratios of the number of phagocytes to the number of all p2p hosts are all around .the experimental result indicates that our system can indeed help defend against active p2p worms in various overlay networks with different network scales .furthermore , although the phagocytes perform more effectively in smaller overlay networks ( e.g. , traces 5 and 6 ) , they can still work quite well in massive - scale overlay networks with million - node participants ( e.g. , traces 1 and 2 ) .* experiment 4 impact of the percentage of phagocytes : * in our system , the phagocytes perform the functions of defending against p2p worms . 
in this experiment, we evaluate the system performance with different percentages of phagocytes but the same number of phagocytes .the result in figure [ fig : ex4 ] indicates that the higher percentage of phagocytes , the better security defense against active p2p worms .that is , as the percentage of phagocytes increases , we can persistently improve the security capability of defending against active p2p worms in the overlay network . further, the experimental result also implies that we do not need to have a large number of phagocytes to perform the defense functions around of the node population functioning as phagocytes is sufficient for our system to provide the effective worm containment . in this section, we conduct two more experiments in our prototype system to evaluate the performance of the external protection mechanism .* experiment 5 efficiency : * in this experiment , we show the efficiency in terms of the latency penalty between the external hosts and the p2p overlay via the phagocytes and direct routing .based on trace 1 , we have external hosts connect to every p2p host via the phagocytes and direct routing in turn .then , we measure the latencies for both cases . figure [ fig : ex5 ] plots the measurement result of latency penalty .we can see that , if routing via the phagocytes , about and of the connections between the external hosts and p2p hosts have the blowup factor of latency be less than and , respectively .figure [ fig : ex5a ] shows the corresponding absolute latency difference , from which we can further deduce that the average latency growth of more than half of these connections ( via the phagocytes ) is less than .actually , due to the interaction required by our proposed computational puzzle scheme , we would expect some latency penalty incurred by routing via the phagocytes . with the puzzle scheme , our system can protect against external attacks effectively which we will illustrate in the next experiment .hence , there would be a tradeoff between the efficiency and effectiveness . * experiment 6 effectiveness : * in this experiment , based on trace 1 , we have external worm attackers flood all phagocytes in the p2p overlay .then , we evaluate the percentage of successful external attacks to show the effectiveness of our protection mechanism against external hosts attacking the p2p overlay . for other numbers of external worm attackers ,we obtain the similar experimental results . in figure [fig : ex6 ] , the x - axis is the attack frequency in terms of the speed of external hosts mounting worm attacks to the p2p overlay , and the y - axis is the percentage of successful external attacks .the result clearly illustrates the effectiveness of phagocytes in protecting the p2p overlay from external worm attacks .our adaptive and interaction - based computational puzzle module at the phagocytes plays an important role in contributing to this observation . even in an extremely malicious environment ,our system is still effective .that is , to launch worm attacks , the external attackers have no alternative but to solve hard computational puzzles which will incur heavy burden on these attackers . 
from the figure [ fig : ex6 ], we can also find that when the attack frequency decreases , the percentage of successful external attacks increases gradually .however , with a low attack frequency , the attackers can not perform practical attacks .even if a part of external attacks are mounted successfully , our internal defense mechanism can mitigate them effectively .p2p worms could exploit the perversive p2p overlays to achieve fast worm propagation , and recently , many p2p worms have already been reported to employ real - world p2p systems as their spreading platforms .the very first work in highlighted the dangers posed by p2p worms and studied the feasibility of self - defense and containment inside the p2p overlay .afterwards , several studies developed mathematical models to understand the spreading behaviors of p2p worms , and showed that p2p worms , especially the active p2p worms , indeed pose more deadly threats than normal scanning worms . recognizing such threats, many researchers started to study the corresponding defense mechanisms .specifically , yu _ et al ._ in presented a region - based active immunization defense strategy to defend against active p2p worm attacks ; freitas _ et al ._ in utilized the diversity of participating hosts to design a worm - resistant p2p overlay , verme , for containing possible p2p worms ; moreover , in , xie and zhu proposed a partition - based scheme to proactively block the possible worm spreading as well as a connected dominating set based scheme to achieve fast patch distribution in a race with the worm , and in , xie _ et al ._ further designed a p2p patching system through file - sharing mechanisms to internally disseminate security patches .however , existing defense mechanisms generally focused on the internal p2p worm defense without the consideration of external worm attacks , so that they can not provide a total worm protection for the p2p overlay systems .in this paper , we have addressed the deadly threats posed by active p2p worms which exploit the pervasive and popular p2p applications for rapid topological worm infection .we build an immunity system that responds to the active p2p worm infection by using _phagocytes_. the phagocytes are a small subset of specially elected p2p hosts that have high immunity and can `` eat '' active p2p worms in the p2p overlay networks .each phagocyte manages a group of p2p hosts by monitoring their connection patterns and traffic volume .if any worm events are detected , the phagocyte will invoke the internal defense strategies for local isolation , alert propagation and software patching .besides , the phagocytes provide the access control and filtering mechanisms for the communication establishment between the p2p overlay and external hosts .the phagocytes forbid the p2p traffic to leak from the p2p overlay to external hosts , and further adopt a novel adaptive and interaction - based computational puzzle scheme to prevent external hosts from attacking the p2p overlay . to sum up, our holistic immunity system utilizes the phagocytes to achieve both internal defense and external protection against active p2p worms .we implement a prototype system and validate its effectiveness and efficiency in massive - scale p2p overlay networks with realistic p2p network traces .
active peer - to - peer ( p2p ) worms present serious threats to the global internet by exploiting popular p2p applications to perform rapid topological self - propagation . active p2p worms pose more deadly threats than normal scanning worms because they do not exhibit easily detectable anomalies , thus many existing defenses are no longer effective . we propose an immunity system with _ phagocytes _ a small subset of elected p2p hosts that are immune with high probability and specialized in finding and `` eating '' worms in the p2p overlay . the phagocytes will monitor their managed p2p hosts connection patterns and traffic volume in an attempt to detect active p2p worm attacks . once detected , local isolation , alert propagation and software patching will take place for containment . the phagocytes further provide the access control and filtering mechanisms for communication establishment between the internal p2p overlay and the external hosts . we design a novel adaptive and interaction - based computational puzzle scheme at the phagocytes to restrain external worms attacking the p2p overlay , without influencing legitimate hosts experiences significantly . we implement a prototype system , and evaluate its performance based on realistic massive - scale p2p network traces . the evaluation results illustrate that our phagocytes are capable of achieving a total defense against active p2p worms .
there has been an increasing interest over the last decade in performing large - scale simulations of colloidal systems , proteins , micelles and other biological assemblies .simulating such systems , and the phenomena that take place in them , typically requires a description of dynamical events that occur over a wide range of time scales .nearly all simulations of such systems to date are based on following the microscopic time evolution of the system by integration of the classical equations of motion .usually , due to the complexity of intermolecular interactions , this integration is carried out in a step - by - step numerical fashion producing a time ordered set of phase - space points ( a _ trajectory _ ) .this information can then be used to calculate thermodynamic properties , structural functions or transport coefficients .an alternative approach , which has been employed in many contexts , is to use step potentials to approximate intermolecular interactions while affording the analytical solution of the dynamics .the simplification in the interaction potential can lead to an increase in simulation efficiency since the demanding task of calculating forces is reduced to computing momentum exchanges between bodies at the instant of interaction .this approach is called event - driven or _ discontinuous molecular dynamics _ ( dmd ) . in the dmd approach ,various components of the system interact via discontinuous forces , leading to impulsive forces that act at specific moments of time . as a result ,the motion of particles is free of inter - molecular forces between impulsive _ events _ that alter the trajectory of bodies via discontinuous jumps in the momenta of the system at discrete interaction times . to determine the dynamics , the basic interaction rules of how the ( linear and angular ) momenta of the body are modified by collisionsmust be specified . 
for molecular systems with internal degrees of freedomit is straightforward to design fully - flexible models with discontinuous potentials , but dmd simulations of such systems are often inefficient due to the relatively high frequency of internal motions .this inefficiency is reflected by the fact that most collision events executed in a dmd simulation correspond to intra rather than inter - molecular interactions .on the other hand , much of the physics relevant in large - scale simulations is insensitive to details of intra - molecular motion at long times .for this reason , methods of incorporating constraints into the dynamics of systems with continuous potentials have been developed that eliminate high frequency internal motion , and thus extend the time scales accessible to simulation .surprisingly , relatively little work has appeared in the literature on incorporating such constraints into dmd simulations .the goal of this paper is to extend the applicability of dmd methods to include constrained systems and to outline efficient methods that are generally applicable in the simulations of semi - flexible and rigid bodies interacting via discontinuous potentials .in contrast to systems containing only simple spherical particles , the application of dmd methods to rigid - body systems is complicated by two main challenges .the first challenge is to analytically solve the dynamics of the system so that the position , velocity , or angular velocity of any part of the system can be obtained exactly .this is in principle possible for a rigid body moving in the absence of forces and torques , even if it does not possess an axis of symmetry which facilitates its motion .however , an explicit solution suitable for numerical implementation seems to be missing in the literature ( although partial answers are abundant ) .for this reason , we will present the explicit solution here .armed with a solution of the dynamics of all bodies in the system , one can calculate the collision times in an efficient manner , and in some instances , analytically .the second challenge is to determine how the impulsive forces lead to discontinuous jumps in the momenta of the interacting bodies . 
for complicated rigid or semi - flexible bodies , the rules for computing the momentum jumps are not immediately obvious .it is clear however that these jumps in momenta must be consistent with basic conservation laws connected to symmetries of the underlying lagrangian characterizing the dynamics .often the basic lagrangian is invariant to time and space translations , and rotations , and , hence , the rules governing collisions must explicitly obey energy , momentum , and angular momentum constraints .such conservation laws can be utilized as a guide to derive the proper collision rules .a first attempt to introduce constraints into an event - driven system was carried out by ciccotti and kalibaeva , who studied a system of rigid , diatomic molecules ( mimicking liquid nitrogen ) .furthermore , non - spherical bodies of a special kind were treated by donev _et al._ by assuming that all rotational motion in between interaction events was that of a spherically symmetric body .more recently , a spherically symmetric hard - sphere model with four tetrahedral short ranged ( sticky ) interactions ( mimicking water ) has been studied by de michele _ et al._ with an event - driven molecular dynamics simulation method similar to the most basic scheme presented in this paper .this work primarily focuses on the phase diagram of this `` sticky '' water model as a prototype of network forming molecular systems .our purpose , in contrast , is to discuss a general framework that allows one to carry out event - driven dmd simulations in the presence of constraints and , in particular , for fully general rigid bodies .the methodology is applicable to modeling the correct dynamics of water molecules in aqueous solutions as well as other many body systems .the paper is organized as follows .section [ calculation ] discusses the equations of motions in the presence of constraints and sec .[ includingcollisions ] discusses the calculation and scheduling of collision times .the collision rules are derived in sec .[ rules ] . in sec . [ ensembles ] it is shown how to sample the canonical and microcanonical ensembles and how to handle subtle issues concerning missing events that are particular to event - driven simulations .finally , conclusions are presented in sec .[ conclusions ] .the motion of rigid bodies can be considered to be a special case of the dynamics of systems under a minimal set of time - independent holonomic constraints ( i.e.dependent only on positions ) that fix all intra - body distances : where the index runs over all constraints present in the system and is a generalized vector whose components are the set of all cartesian coordinates of the total particles in the system . for fully rigid bodies , the number of constraints can easily be calculated by noting that the number of spatial degrees of freedom of an -particle body is in 3 dimensions , while only 6 degrees of freedom are necessary to completely specify the position of all components of a rigid body : 3 degrees of freedom for the center of mass of the object and 3 degrees of freedom to specify its orientation with respect to some arbitrary fixed reference frame .there are therefore constraint equations for a single rigid body with point masses .below , these point masses will be referred to as _ atoms _ while the body as a whole will be called a _molecule_. a typical constraint equation fixes the distance between atoms and in the molecule to be some value , , i.e. 
, the equations of motion for the system follow from hamilton s principle of stationary action , which states that the motion of the system over a fixed time interval is such that the variation of the line integral defining the action is zero : where the lagrangian in the presence of the constraints is written in cartesian coordinates as where is the interaction potential .for clarity throughout this paper , the einstein summation convention will be used for sums over repeated _ greek _ indices i.e. , whereas the sum over atom indices will be written explicitly . in eq . , the parameters are lagrange multipliers to enforce the distance constraints .the resulting equations of motion are : for an elementary discussion of constrained dynamics in the lagrangian formulation of mechanics , we refer to ref . . when there are no interactions , such as for a single molecule , the potential and eqbecomes these equations of motion must be supplemented by equations for the lagrange multipliers , which are functions of time .although the are not functions of and in a mathematical sense , it will be shown below that once the equations are solved they can be expressed in terms of and .note that the equations of motion show that even in the absence of an external potential , the motion of the point masses ( atoms ) making up a rigid body ( molecule ) are non - trivial due to the emergence of a _ constraint force _ . in fortuitous cases ,the time dependence of the lagrange multipliers is relatively simple and can be solved for by taylor expansion of the lagrange multipliers in time . to evaluate the time derivatives of the multipliers, one can use time derivatives of the initial constraint conditions , which must vanish to all orders .the result is a hierarchy of equations , which , at order , is linear in the unknown time derivatives but depends on the lower order time derivatives , , . in exceptional circumstances, this hierarchy naturally truncates .for example , for a rigid diatomic molecule with a single bond length constraint , one finds that the hierarchy truncates at order , and the lagrange multiplier is a constant .however this is not the typical case . alternatively , since the constraints are to be satisfied at all times , and not just at time zero , their time derivatives are zero at all times . from the first timederivative one sees that the initial velocities must obey for each constraint condition .the lagrange multipliers can be determined by the condition that the second derivatives of all the constraints vanish so that yielding a linear equation for the lagrange multipliers that can be solved in matrix form as where it may be shown that with given by eq . , all higher time derivatives of are automatically zero . as eq .shows , in general the lagrange multipliers are dependent on both the positions and the velocities of the particles . to see that this makes the dynamics non - hamiltonian , the equations of motion can be cast into hamiltonian - like form using , i.e. , where it is apparent that the forces in the system depend on the momentum through in eq . .there exists no hamiltonian that generates these equations of motion. since the underlying dynamics of the system is non - hamiltonian , the statistical mechanics of the constrained system is potentially more complex . 
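The displayed equations referenced in this passage were lost in extraction; in the usual notation for holonomically constrained dynamics they take the following form, offered here as a hedged reconstruction consistent with the surrounding discussion (the sigma_alpha are the distance constraints, the lambda_alpha their multipliers, and Z the matrix whose determinant reappears below):

```latex
% Hedged reconstruction of the standard constrained equations of motion.
\begin{align}
  \sigma_\alpha(\mathbf{r}) &= 0 , \qquad \alpha = 1,\dots,c , \\
  m_i \ddot{\mathbf{r}}_i &= -\frac{\partial V}{\partial \mathbf{r}_i}
      + \sum_\alpha \lambda_\alpha\,
        \frac{\partial \sigma_\alpha}{\partial \mathbf{r}_i} , \\
  \dot\sigma_\alpha &= \sum_i \nabla_i \sigma_\alpha \cdot \dot{\mathbf{r}}_i = 0 , \\
  \ddot\sigma_\alpha = 0 \;\Longrightarrow\;
  \sum_\beta Z_{\alpha\beta}\,\lambda_\beta &=
      \sum_i \frac{1}{m_i}\,\nabla_i \sigma_\alpha \cdot
        \frac{\partial V}{\partial \mathbf{r}_i}
      - \sum_{i,j} \dot{\mathbf{r}}_i \cdot
        \bigl(\nabla_i \nabla_j \sigma_\alpha\bigr)\cdot \dot{\mathbf{r}}_j , \\
  Z_{\alpha\beta} &= \sum_i \frac{1}{m_i}\,
      \nabla_i \sigma_\alpha \cdot \nabla_i \sigma_\beta .
\end{align}
```

With V = 0 this reduces to the torque-free case discussed next, and the multipliers follow from inverting the matrix Z, which for a fully rigid body depends only on the masses and the fixed constraint distances.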
in general ,phase - space averages have to be defined with respect to a metric that is invariant to the standard measure of hamiltonian systems , but is not conserved under the dynamics and the standard form of the liouville equation does not hold . in general , there is a phase - space compressibility factor associated with the lack of conservation of the measure that is given by minus the divergence of the flow in phase space .it may be shown that where is the determinant of the matrix defined in eq .( [ zdef ] ) .the compressibility factor is related to the invariant phase - space metric with statistical averages are therefore defined for the non - hamiltonian system as where is the probability density for the unconstrained system and is the partition function for the constrained system , given by although the invariant metric is non - uniform for many constrained systems , for entirely rigid systems the matrix is a function only of the point masses and fixed distances .hence the term acts as a multiplicative factor which cancels in the averaging process . although the solution of the dynamics of constrained systems via time - independent holonomic constraints is intellectually appealing and useful in developing a formal statistical mechanics for these systems , it is often difficult to analytically solve for the values of the lagrange multipliers at arbitrary times .one therefore often resorts to numerical solutions of the multipliers in iterative form , using algorithms such as shake .such an approach is not really consistent with the principles of dmd , in which a computationally efficient means of calculating event times is one of the great advantages of the method . for fully - constrained , rigid bodies , it is more sensible to apply other , equivalent , approaches , such as the principal axis or quaternion methods , to calculate analytically the evolution of the system in the absence of external forces . the basic simplification in the dynamics of rigid bodies results from the fact that the general motion of a rigid body can be decomposed into a translation of the center of mass of the body plus a rotation about the center of mass .the orientation of the body relative to its center of mass is described by the relation between the so - called _ body frame _ , in which a set of axes are held fixed with the body as it moves , and the fixed external _laboratory frame_. the two frames of reference can be connected by an orthogonal transformation , such that the position of an atom in a rigid body can be written at an arbitrary time as : where is the position of atom in the body frame ( which is independent of time ) , is the center of mass , and the matrix is the orthogonal matrix that converts coordinates in the body frame to the laboratory frame . 
note that matrix - vector and matrix - matrix multiplication will be implied throughout the paper .the matrix is the transpose of , which converts coordinates from the laboratory frame to the body frame at time .the elements composing the columns of the matrix are simply the coordinates of the axes in the body frame written in the laboratory frame .note that eq .( [ cartesianpositions ] ) implies that the relative vector satisfies here as well as below , we have dropped the explicit time dependence for most time dependent quantities with the exception of quantities at time zero or at a time that is integrated over .one sees that in order to determine the location of different parts of the body in the laboratory frame , the rotation matrix must be specified .this matrix satisfies a differential equation that will now be derived and subsequently solved . before doing so, it will be useful to restate some properties of rotation matrices and establish some notation to be used below .formally , a rotation matrix is an orthogonal matrix with determinant one and whose its inverse is equal to its transpose .any rotation can be specified by a rotation axis and an angle over which to rotate . here is a unit vector , so that one may also say that any non - unit vector can be used to specify a rotation , where its norm is equal to the angle and its direction is equal to the axis .according to rodrigues formula , the matrix corresponding to this rotation is the derivation of the differential equation for starts by taking the time derivative of eq . , yielding from elementary classical mechanics , it is known that this relative velocity can also be written as where is the angular velocity vector in the lab frame . since both eq . andeq . are true for any vector , it follows that is the matrix representation of a cross product with the angular velocity , i.e. , multiplying eq . on the right with and taking the transpose on both sides ( note that is antisymmetric ) yields this equation involves the angular velocity in the laboratory frame , but the rotational equations of motion are more easily solved in the body frame .the angular velocity vector transforms to the body frame according to for any rotation and vector one has , hence one can write substituting eq . into eq. yields the differential equation for : although the choice of body frame is arbitrary , perhaps the most convenient choice of axes for the body are the so - called principal axes in which the moment of inertia tensor is diagonal , i.e. , .choosing this reference frame as the body frame , the representation of the components of the angular momentum is where and are the principal moments of inertia and principal components of the angular velocity .the time dependence of the principal components of the angular velocity may be obtained from the standard expression for the torque in the laboratory frame : .\label{transformangularmomentum}\end{aligned}\ ] ] where eq . was used in the last equality .transforming eq . to the principal axis frame gives euler s equations of motion for a rigid body where are the components of the torque in the body frame .note that even in the absence of any torque , the principal components of the angular velocity are in general time dependent .once the angular velocity is known , it can be substituted into eq . for the matrix .the general solution of eq .is of the form where is a rotation matrix itself which ` propagates ' the orientation to the orientation at time . 
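For reference, the standard relations alluded to in this passage (Rodrigues' formula, the attitude kinematics, and Euler's equations) are reproduced below in one common convention; the original's stripped equations may differ in notation. Here A maps body-frame to lab-frame coordinates, tildes denote body-frame quantities, and [v]_x is the antisymmetric matrix with [v]_x u = v x u.

```latex
% Hedged reconstruction of the standard rigid-body relations.
\begin{align}
  R(\hat{\mathbf{n}},\phi) &= \mathbb{1}
      + \sin\phi\,[\hat{\mathbf{n}}]_\times
      + (1-\cos\phi)\,[\hat{\mathbf{n}}]_\times^{2} , \\
  \dot{A} &= [\boldsymbol{\omega}]_\times\,A
           = A\,[\tilde{\boldsymbol{\omega}}]_\times , \\
  I_1\dot{\tilde\omega}_1 &= (I_2 - I_3)\,\tilde\omega_2\tilde\omega_3 + \tilde\tau_1 , \\
  I_2\dot{\tilde\omega}_2 &= (I_3 - I_1)\,\tilde\omega_3\tilde\omega_1 + \tilde\tau_2 , \\
  I_3\dot{\tilde\omega}_3 &= (I_1 - I_2)\,\tilde\omega_1\tilde\omega_2 + \tilde\tau_3 .
\end{align}
```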
satisfies the same equation as , but with initial condition . by integrating this equation, one can obtain an expression for . at first glance, it may seem that can only be written as a formal expression containing a time - ordered exponential .however , for the torque - free case , the conservation of angular momentum and energy and the orthogonality of the matrix can be used to derive the following explicit expression ( implicitly also found in ref . ): here and are two rotation matrices .the matrix rotates to and can be written as where and while .the matrix can be expressed , using the notation in eq ., as where the angle is given by with the angle can be interpreted as an angle over which the body rotates .if the body rotates one way , the laboratory frame as seen from the body frame rotates in the opposite way , which explains the minus sign in eq . .for the derivation of eqs .- we refer to ref . .similar equations , but in a special reference frame , can be found in ref . .in the following , the solution of eq . with for bodies of differing degrees of symmetry will be analyzed and then used to obtain explicit expressions for the matrix as a function of time and of the initial angular velocity in the body frame .for the case of a spherical rotor in which all three moments of inertia are equal , , the form of the euler equations is particularly simple : .it is therefore clear that all components of the angular velocity in the body frame are conserved , as are those of the angular momentum . as a result , in eq .is equal to the identity matrix .a second consequence is that in eq .is constant , so that where may be rewritten , using , as . therefore eqs . and give corresponding to a rotation by an angle of around the axis . for the case of a symmetric top for which , one can solve the euler equations in terms of simple sines and cosines , since eq .becomes where is the precession frequency .the full solution of the euler equations is given by using eq .and the fact that and are conserved in this case , one can easily show that is given by and one can determine from eq . : } { i_1 ^ 2{\tilde{\omega}_{1}}^2(0 ) + i_1 ^ 2{\tilde{\omega}_{2}}^2(0 ) } = \frac{l}{i_1}.\ ] ] this is a constant so that .thus and one gets from eq . : if all the principal moments of inertia are distinct , the time dependence of the angular velocity involves elliptic functions .while this may seem complicated , efficient standard numerical routines exist to evaluate these functions .more challenging is the evaluation of the matrix .while its exact solution has been known for more than 170 years , it is formulated even in more recent texts in terms of undetermined constants and using complex algebra , which hinders its straightforward implementation in a numerical simulation .it is surprisingly difficult to find an explicit formula in the literature for the matrix as a function of the initial conditions , which is the form needed in dmd simulations .for this reason , the explicit general solution for will briefly be presented here in terms of general initial conditions .the details of the derivation can be found elsewhere .following jacobi , it is useful to adopt the convention that is the moment of inertia intermediate in magnitude ( i.e. , either or ) and one chooses the overall ordering of magnitudes , such that : where is the rotational kinetic energy and is the norm of the angular momentum . 
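The two torque-free special cases just discussed take the following standard form (a hedged restatement; sign conventions for the precession frequency vary between texts). For the spherical rotor the attitude simply rotates at a constant rate about the fixed direction of the angular velocity, while for the symmetric top the body-frame angular velocity precesses about the symmetry axis:

```latex
% Spherical rotor (I_1 = I_2 = I_3): constant-axis rotation.
\begin{align}
  A(t) &= R\!\left(\hat{\boldsymbol{\omega}},\,|\boldsymbol{\omega}|\,t\right) A(0) .
\end{align}
% Symmetric top (I_1 = I_2 \neq I_3): precession at \omega_p about the symmetry axis.
\begin{align}
  \omega_p &= \frac{I_1 - I_3}{I_1}\,\tilde\omega_3(0) , \\
  \tilde\omega_1(t) &= \tilde\omega_1(0)\cos\omega_p t + \tilde\omega_2(0)\sin\omega_p t , \\
  \tilde\omega_2(t) &= \tilde\omega_2(0)\cos\omega_p t - \tilde\omega_1(0)\sin\omega_p t , \\
  \tilde\omega_3(t) &= \tilde\omega_3(0) .
\end{align}
```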
without this convention some quantities defined below would be complex valued , which is numerically inconvenient and inefficient . note that in a simulation molecules will often be assigned a specific set of physical inertial moments with fixed order , i.e. not depending on the particular values of and . a simple way to nevertheless adopt the convention in eq . is to introduce internal variables , and which differ when necessary from the physical ones by a rotation given by the rotation matrix this matrix interchanges the and directions and reversed the direction , and is equal to its inverse .the euler equations can be solved because there are two conserved quantities and which allow and to be expressed in terms of , at least up to a sign which the quadratic conserved quantities can not prescribe . in this waythe three coupled equations are reduced to a single ordinary differential equation for , from which can be solved as an integral over : this is an incomplete elliptic integral of the first kind . to get as a function of , one needs its inverse , which is the elliptic function . without giving further details ,the solution of the euler equations is given by here and are also elliptic functions , while the are the extreme ( maximum or minimum ) values of the and are given by where is the sign of .furthermore , in eq . the _ precession frequency _ is given by the elliptic functions are periodic functions of their first argument , and look very similar to the sine , cosine and constant function .they furthermore depend on the _ elliptic parameter _ ( or elliptic modulus ) , which determines how closely the elliptic functions resemble their trigonometric counterparts , and which is given by ( solid line ) , ( bold dashed line ) , ( dotted line ) for ( , , ) .also plotted are the cosine ( short dashed line ) and sine ( thin short dashed line ) with the same period , for comparison . ] by matching the values of at time zero , one can determine the integration constant : where is the incomplete elliptic integral of the first kind in fact , is simply the inverse of this function . as a result of the ordering convention in eq ., the parameter in eq . is guaranteed to be less than one , which is required in order that in eq .not be complex - valued .three more numbers can be derived from the elliptic parameter which play an important role in the elliptic functions .these are the _ quarter - period _ , the _ complementary quarter - period _ and the _ nome _ , which is the parameter in various series expansions . the period of the elliptic functions and is equal to , while that of is .these elliptic functions have the following fourier series : note that the right - hand side of eqs .- depends on through and $ ] . for ,one gets and , and reduce to , and , respectively .the constancy of is reminiscent of the conservation of in the case of the symmetric top , and , indeed , for , according to eq . .typical values for are quite small , hence often the elliptic function , and resemble the , and a constant function with value one ( as e.g. in fig .[ cnsndn ] ) . for small values of , the series expressions for the elliptic functions converge quickly ( although this is not the best way to compute the elliptic functions ) .having given the solutions of the euler equations , we now turn to the solution of eq . as given by eqs . - .the expression on the right - hand side of eq .isnot a constant in this case but involves elliptic functions . 
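As a concrete check of the elliptic-function solution, the sketch below evaluates the body-frame angular velocity of a free asymmetric top with scipy's Jacobi elliptic functions. It assumes the ordering I1 < I2 < I3 with L^2 > 2 E I2 and starts the motion at the extreme point omega(0) = (w1m, 0, w3m); the extra phase constant needed to match arbitrary initial conditions (the incomplete elliptic integral mentioned above) is omitted, and the function names are illustrative.

```python
# Torque-free asymmetric-top propagator using Jacobi elliptic functions.
import numpy as np
from scipy.special import ellipj

def free_rotor_omega(t, I, E, L2):
    """Body-frame angular velocity of a free asymmetric top at time t."""
    I1, I2, I3 = I
    w1m = np.sqrt((2*E*I3 - L2) / (I1*(I3 - I1)))   # amplitude of omega_1
    w2m = np.sqrt((2*E*I3 - L2) / (I2*(I3 - I2)))   # amplitude of omega_2
    w3m = np.sqrt((L2 - 2*E*I1) / (I3*(I3 - I1)))   # amplitude of omega_3
    omega_p = np.sqrt((I3 - I2)*(L2 - 2*E*I1) / (I1*I2*I3))   # precession rate
    m = (I2 - I1)*(2*E*I3 - L2) / ((I3 - I2)*(L2 - 2*E*I1))   # elliptic parameter
    sn, cn, dn, _ = ellipj(omega_p*t, m)
    return np.array([w1m*cn, w2m*sn, w3m*dn])

# quick consistency check: energy and |L|^2 are conserved along the orbit
I = (1.0, 2.0, 3.0)
omega0 = np.array([1.0, 0.0, 1.2])        # an extreme point of the motion
E  = 0.5*sum(Ii*w*w for Ii, w in zip(I, omega0))
L2 = sum((Ii*w)**2 for Ii, w in zip(I, omega0))
for t in (0.0, 0.7, 2.3):
    w = free_rotor_omega(t, I, E, L2)
    assert abs(0.5*np.dot(np.array(I)*w, w) - E) < 1e-9
    assert abs(np.dot(np.array(I)*w, np.array(I)*w) - L2) < 1e-9
```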
despite this difficulty , the integralcan still be performed using some properties of elliptic functions , with the result the constants , and the periodic function can be expressed using the theta function as where we have used the definition the equations - involve complex values which are not convenient for numerical evaluation . using the known series expansions of the theta function and its logarithmic derivative in terms of the nome , these equations may be rewritten in a purely real form .in fact , one readily obtains the sine and cosine of , which are all that is needed in eqs . and , with while the constant is + n\pi,\ ] ] where if , if and , and if and .finally , the constant is given by , \label{c2}\ ] ] where .the series expansion in in eq .convergences for . because ( cf .eq . ) , one has , and the series always converges . since is typically small , the convergence is rarely very slow ( e.g. for convergence up to relative order one needs terms ) . note that since the constants and depend only on the initial angular velocities , they only need to be calculated once at the beginning of the motion of a free rigid body . on the other hand , the series expansions in eqs . and , which have to be evaluated any time the positions are desired , have extremely fast convergence due to the appearing in these expressions ( for example , unless , the series converges up to occurs taking only three terms ) .there are efficient routines to calculate the functions , and , see e.g. refs . , and the series in eqs . , and converge , the former two quite rapidly in fact .therefore , despite an apparent preference in the literature for conventional numerical integration of the equations of motion via many successive small time steps even for torque - free cases , the analytical solution can be used to calculate the same quantities in a computationally more efficient manner requiring only the evaluation of special functions . the gain in efficiencyshould be especially pronounced in applications in which many evaluations at various times could be needed , such as in the root searches in discontinuous molecular dynamics ( see below ) .if the interaction potential between atoms and is assumed to be discontinuous , say of the form then rigid molecules interacting via this potential evolve freely until there is a change in the potential energy of the system and an _ interaction event _ or _ collision event _ occurs .the time at which an event occurs is governed by a collision indicator function defined such that at time , . here, the time dependence of and can be obtained using the results of sec . [ calculation ] .the simplest example of this kind of system consists of two hard spheres of diameter located at positions and .if two spheres are approaching , when they get to a distance from one another , the potential energy would change from to if they kept approaching one another .as this eventually would lead to a violation of energy conservation , the spheres bounce off one another in a _ hard - core collision _ at time , where is determined by the criterion , i.e. by the zeros of the collision indicator function .another kind of interaction event , with and finite , will be called a _ square - well collision _ because the potential then has a square well shape . 
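For the simplest case mentioned above, two free spheres, the collision indicator function is a quadratic in time and the hard-core event time can be found in closed form; a minimal sketch:

```python
# Earliest hard-core collision time of two freely moving spheres of diameter
# sigma, from the quadratic |r12 + v12*t|^2 = sigma^2 (None if they miss).
import numpy as np

def hard_sphere_collision_time(r1, v1, r2, v2, sigma):
    r12 = np.asarray(r1, dtype=float) - np.asarray(r2, dtype=float)
    v12 = np.asarray(v1, dtype=float) - np.asarray(v2, dtype=float)
    b = np.dot(r12, v12)
    if b >= 0.0:                       # spheres are not approaching
        return None
    vv = np.dot(v12, v12)
    disc = b * b - vv * (np.dot(r12, r12) - sigma ** 2)
    if disc < 0.0:                     # closest approach is farther than sigma
        return None
    return (-b - np.sqrt(disc)) / vv
```

For rigid molecules the indicator function is no longer quadratic, which is why the numerical root search discussed next is needed.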
to find the times at which collisions take place ,the zeros of the collision indicator functions must be determined , which generally has to be done numerically .the calculation of the collision times of non - penetrating rigid objects is an important aspect of manipulating robotic bodies , and is also an important element of creating realistic animation sequences . as a result, many algorithms have been proposed in these contexts to facilitate the event time search .the search for the earliest collision event time can be facilitated using screening strategies to decide when rigid bodies may overlap .usually , these involve placing the bodies in bounding boxes and using an efficient method to determine when bounding boxes intersect .the simplest way to do this in a simulation of rigid molecules is to place each molecule in the smallest sphere around its center of mass containing all components of the molecule .the position of the sphere is determined by the motion of the center of mass , while any change in orientation of the rigid molecule occurs within the sphere .collisions between rigid molecules can therefore only occur when their encompassing spheres overlap , and the time at which this occurs can be calculated analytically for any pair of molecules .this time serves as a useful point to begin a more detailed search for collision events ( see below ) .similarly , one can also calculate the time at which the spheres no longer overlap , and use these event times to bracket a possible root of the collision indicator function .it is crucial to make the time bracketing as tight as possible in any implementation of dmd with numerical root searches because the length of the time bracketing interval determines the required number of evaluations of the positions and velocities of the atoms , and therefore plays a significant role in the efficiency of the overall procedure .the simplest reliable and reasonably efficient means of detecting a root is to perform a _ grid search _ that looks for changes in sign of , i.e. , one looks at and for successive .the time points will be called the _ grid points_. when a time interval in between two grid points is found in which a sign change of occurs , the newton - raphson algorithm can be called to numerically determine the root with arbitrary accuracy .since the newton - raphson method requires the calculation of first time derivatives , one must also calculate , for any time , the derivative , where the notation and has been used .such time derivatives are readily evaluated using eqs .( [ cartesianpositions ] ) and ( [ step1vel ] ) . unfortunately ,while the newton - raphson method is a very efficient algorithm for finding roots , it can be somewhat unstable when one is searching for the roots of an oscillatory function . for translating and rotating rigid molecules ,the collision indicator function is indeed oscillatory due to the periodic motion of the relative orientation of two colliding bodies .it is particularly easy to miss so - called _ grazing collisions _ when the grid search interval is too large , in which case the indicator function is positive in two consecutive points of the grid search , yet nonetheless `` dips '' below zero in the grid interval .it is important that no roots are missed , for a missed root can lead to a different , even infinite energy ( but see sec .[ ensembles ] below ) . to reduce the frequency of missing grazing collisions to zero , a vanishingly small grid interval would be required . 
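before turning to the treatment of grazing collisions, the basic grid search just described can be sketched as follows. the function names, the uniform grid and the toy indicator function are ours; a production code would evaluate the orientation-dependent indicator function of the rigid molecules instead.

```python
import numpy as np

def find_root_on_grid(f, dfdt, t0, t1, n=20, tol=1e-10, max_iter=30):
    """scan [t0, t1] on a uniform grid for a sign change of the collision
    indicator f and polish the bracketed root with Newton-Raphson."""
    ts = np.linspace(t0, t1, n + 1)
    fs = [f(t) for t in ts]
    for a, b, fa, fb in zip(ts[:-1], ts[1:], fs[:-1], fs[1:]):
        if fa * fb < 0.0:                          # sign change inside this grid cell
            t = 0.5 * (a + b)
            for _ in range(max_iter):              # Newton-Raphson polishing
                step = f(t) / dfdt(t)
                t -= step
                if not a <= t <= b:                # left the bracket: report failure
                    return None, (a, b)
                if abs(step) < tol:
                    break
            return t, (a, b)
    return None, None

# toy indicator: squared distance of two point particles minus a unit diameter
f    = lambda t: (1.0 - 0.4*t)**2 + (0.3*t - 0.8)**2 - 1.0
dfdt = lambda t: -0.8*(1.0 - 0.4*t) + 0.6*(0.3*t - 0.8)
print(find_root_on_grid(f, dfdt, 0.0, 6.0))
```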
of course, a vanishingly small grid interval is not practical, and one must balance the likelihood of missing events against practical considerations, since several collision indicator functions need to be evaluated at each point of the grid. clearly the efficiency of the root search algorithm depends significantly on the size of the grid interval. to save computation time, a coarser grid can be used if a means of handling grazing collisions is implemented. since the collision indicator function has a local extremum (maximum or minimum, depending on whether it is initially positive or negative) near the time of a grazing collision, a reasonable strategy for finding this kind of collision event is to determine the extremum of the indicator function in cases in which the indicator function itself does not change sign on the interval but its derivative does. furthermore, since the indicator function at the grid points near a grazing collision is typically small, it is fruitful to search for extrema only when the indicator function at one of the grid points lies below some threshold value. to find the local extrema of the indicator function, any simple routine for locating the extrema of a non-linear function can be used. for example, Brent's minimization method, which is based on a parabolic interpolation of the function, is a good choice for sufficiently smooth one-dimensional functions. once the extremum is found, it is a simple matter to decide whether or not a real collision exists by checking the sign of the indicator function at the extremum. once the root has been bracketed (either through a sign change during the grid search or after searching for an extremum), one can simply use the Newton-Raphson algorithm to find the root to the desired accuracy, typically within only a few iterations. the time value returned by the Newton-Raphson routine needs to lie in the bracketed interval and to be consistent with the initial sign of the indicator function. if those criteria are not satisfied, the Newton-Raphson algorithm has clearly failed and a less efficient but more reliable method is needed to track down the root. for example, the van Wijngaarden-Dekker-Brent method, which combines bisection and quadratic interpolation, is guaranteed to converge if the function is known to have a root in the interval under analysis; a sketch of how these pieces can be combined is given below. in the previous section it was shown how to determine the time at which two atoms collide under the assumption that there is no other earlier collision. this we will call a _ possible collision event_. in a dmd simulation, once the possible collision events have been computed for all possible collision pairs, the earliest event should be selected. after the collision event between the two atoms has been executed (according to the rules derived in the next section), the next earliest collision should be performed. however, because the velocities of the atoms of the molecules involved in the collision have changed, the previously computed collision times involving these molecules are no longer valid. the next event in the sequence can be determined and performed only after these collision times have been recomputed. this process describes the basic strategy of dmd, which without further improvements would be needlessly inefficient.
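as an aside, the grazing-collision check described above can be assembled from standard library routines; the threshold value and the interface below are illustrative assumptions of ours, not those of the accompanying simulations.

```python
from scipy.optimize import minimize_scalar, brentq

def grazing_collision_time(f, a, b, threshold=1e-2):
    """handle a grid interval [a, b] on which f does not change sign.

    assumes f(a) > 0 (the pair starts outside the discontinuity); if f is small
    at an endpoint, look for an interior minimum that dips below zero and, if one
    is found, bracket and refine the first root of the indicator function."""
    if min(f(a), f(b)) > threshold:
        return None                                  # clearly no grazing collision here
    res = minimize_scalar(f, bounds=(a, b), method='bounded')   # Brent-type minimizer
    if res.fun >= 0.0:
        return None                                  # the dip never reaches zero
    return brentq(f, a, res.x)                       # guaranteed-convergence root polish
```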
to see why the naive strategy is inefficient, note that if one simply stores all possible collision events, finding the earliest time requires a check over all of them, while the number of invalidated collisions that have to be recomputed after each collision grows with the number of molecules. since the number of collisions in the system per unit of physical time also grows with system size, the cost of a simulation for a given physical time grows rapidly, both for the computation of collision times and for finding the first collision event. fortunately, there are ways to significantly reduce this computational cost. the first technique, also used in molecular dynamics simulations of systems interacting with continuous potentials, reduces the number of possible collision times that have to be computed by employing a _ cell division _ of the system. note that while the times of certain interaction events (e.g. those involving only the molecule's center of mass) can be expressed in analytical form and thus computed very efficiently, the atom-atom interactions have, in general, an orientational dependence, and the possible collision time has to be found by means of a numerical root search as explained in the previous section. as a consequence, the most time consuming task in a dmd simulation with rigid bodies is the numerical root search for the collision times. one can, however, minimize the required number of collision time computations by dividing the system into a cell structure and sorting all molecules into these cells according to the positions of their centers of mass. each cell has a diameter of at least the largest `` interaction diameter '' of a molecule as measured from its center of mass. as a result, molecules can only collide if they are in the same cell or in adjacent cells, so the number of collision events to determine and to recompute after a collision is much smaller. in this technique, the sorting of molecules into cells is done explicitly only at the start, after which it is dynamically updated by introducing a _ cell-crossing event _ for each molecule, which is also stored. since the center of mass of a molecule performs linear motion between collision events, its cell-crossing time can be expressed analytically, and therefore the numerical computation of that time is very fast. the second technique reduces the cost of finding the earliest event time. it consists of storing possible collision and cell-crossing events in a time-ordered structure called a _ binary tree _. for details we refer to refs. and (alternative event scheduling algorithms exist, but it is not clear which technique is generally the most efficient). finally, a third standard technique is to update the molecules' positions and velocities only at collisions (and possibly upon their crossing the periodic boundaries), while storing the time of their last collision as a property of the molecule called its _ local clock _. whenever needed, the positions and velocities at later times can be determined from the exact solution of force-free and torque-free motion of the previous sec. [ calculation ]. the use of cell divisions, a binary event tree to manage the events, and local clocks is standard practice in dmd simulations and largely improves the simulation's efficiency; a minimal illustration of this bookkeeping is sketched below.
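the sketch below shows one way to organize these ingredients with standard python containers; a heap stands in for the binary event tree, and the class layout, names and the cubic cell grid are our own simplifications rather than the data structures of the original code.

```python
import heapq
import itertools

class EventQueue:
    """time-ordered event store; a binary heap plays the role of the binary event tree."""
    def __init__(self):
        self._heap = []
        self._tiebreak = itertools.count()   # keeps heap comparisons purely numerical

    def schedule(self, time, kind, data):
        heapq.heappush(self._heap, (time, next(self._tiebreak), kind, data))

    def pop_earliest(self):
        time, _, kind, data = heapq.heappop(self._heap)
        return time, kind, data

def cell_of(r_com, cell_size, n_cells):
    """cell index of a molecule from its centre-of-mass position (cubic box assumed)."""
    return tuple(int(x // cell_size) % n_cells for x in r_com)

# local clocks: positions and velocities are stored at the time of the last event only,
# and propagated analytically (free flight / free rotation) whenever they are needed.
local_clock = {}          # molecule index -> time of its last update
```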
to see the gain, note that in each step of the simulation one picks the earliest event from the tree, which scales only logarithmically with the number of stored events for randomly balanced trees. if it is a collision event, it is performed, and subsequently the collisions and cell crossings of the molecules involved are recomputed and added to the event tree. if it is a crossing event, the corresponding molecule is put into its new cell, new possible collision and crossing events are computed and added to the tree, and the program progresses to the next event. since the number of real events per unit of physical time still grows with system size, one sees that using these techniques the computational cost per unit of physical time, both for the computation of possible collision and cell-crossing times and for the event scheduling, grows far more slowly than in the naive approach, a huge reduction. contrary to what their scaling may suggest, one often finds that the cost of the computation of collision times greatly dominates the scheduling cost for finite system sizes. this is due to the fact that the computation of many of the collision times requires numerical root searches, although some can, and should, be done analytically. thus, to gain further computational improvements, one has to improve upon the efficiency of the numerical search for collision event times. a non-standard time-saving technique that we have developed for this purpose is to use _ virtual collision events_. in this case, the grid search (see sec. [ includingcollisions ]) for a possible collision time of a pair of atoms is carried out only over a fixed small number of grid points, thus limiting the scope of the root search to a small search interval. if no collision is detected in this search interval, a virtual collision event is scheduled in the binary event tree, much as if it were a possible future collision at the time of the last grid point that was investigated. if the point at which the grid search is curtailed is rather far in the future, it is likely that this virtual event will never be executed, because the two atoms will probably have collided with other atoms beforehand; computational work has then been saved by stopping the grid search after a few grid points. every now and then, however, the two atoms will not have collided with other atoms by the time at which the grid search was stopped. in this case, the virtual collision event in the tree is executed, which entails continuing the root search from the point at which the search was previously truncated. the continued search again may not find a root within a finite number of grid points and schedule another virtual collision, or it may now find a collision; in either case the new event is scheduled in the tree. this virtual collision technique avoids the unnecessary computation of a collision time that lies so far in the future that it would in all likelihood never be executed, while at the same time ensuring that if, despite the odds, that collision does happen, it is indeed found and correctly executed. the trade-off of this technique is that the event tree is substantially larger, which slows down the event management. due to the high cost of numerical root searches, however, the simulations presented in the accompanying paper showed that using virtual collision events yields an increase in efficiency between 25% and 110%, depending mainly on the system size. at each moment of collision, the impulsive forces and torques lead to discontinuous jumps in the momenta and angular momenta of the colliding bodies. in the presence of constraints, there are two equivalent ways of deriving the rules governing these changes.
in the first approach ,the dynamics are treated by applying constraint conditions to cartesian positions and momenta .this approach is entirely general and is suited for both constrained rigid and non - rigid motion . in its generality , it is unnecessarily complicated for purely rigid systems and is not suitable for continuum bodies .the second approach , suitable for rigid bodies only , uses the fact that only six degrees of freedom , describing the center of mass motion and orientational dynamics are required to fully describe the dynamics of an arbitrary rigid body .the derivation therefore consists of prescribing a collision process in terms of impulsive changes to the velocity of the center of mass and impulsive changes to the angular velocity .the general collision process in systems with discontinuous potentials can be seen as a limit of the collision process for continuous systems in which the interaction potential becomes infinitely steep . a useful starting point for deriving the collision rulesis therefore to consider the effect of a force applied to the overall change in the momentum of any atom : where is the total force acting on atom and .furthermore , here and below the pre and post - collision values of a quantity are denoted by and , respectively . for discontinuous systems , the intermolecular forces are impulsive and occur only at an instantaneous collision time .when atoms and collide , the interaction potential depends only on the scalar distance between those atoms , so that the force on an arbitrary atom is given by ( without summation over and ) note that this is non - zero only for the atoms involved in the collision , as expected . given that the force is impulsive , it may be written as where the scalar is the magnitude of the impulse ( to be determined ) on atom in the collision . in general , the constraint forces on the right - hand side of eq .( [ eleq ] ) must also have an impulsive component whenever intermolecular forces are instantaneous in order to maintain the rigid body constraints at all times .we account for this by writing the lagrange multipliers as because enters into the equations of motion for all atoms involved in the constraint , there is an effect of this impulsive constraint force on all those atoms .thus , one can write for the force on a atom when atoms and collide : .\label{forces}\end{gathered}\ ] ] substituting eq .( [ forces ] ) into ( [ deltap1 ] ) , one finds that the term proportional to vanishes in the limit that the time interval approaches zero , so that the post - collision momenta are related to the pre - collision momenta by note that at the instant of collision , the positions of all atoms remain the same ( only their momenta change ) so that there is no ambiguity in the right - hand side of eq . as to whether to take the before or after the collision .it is straightforward to show that due to the symmetry of the interaction potential , the total linear momentum and angular momentum of the system are conserved by the collision rule eq.([deltap2 ] ) for arbitrary values of the unknown scalar functions and .in addition to these constants of the motion , the collision rule must also conserve total energy and preserve the constraint conditions , and , before and after the collision .the first constraint condition is trivially satisfied at the collision time , since the positions are not altered at the moment of contact .the second constraint condition allows the scalar to be related to the value of using eq . 
before and after the collision , since we must have inserting eq .( [ deltap2 ] ) into eq ., one gets solving this linear equation for gives where the matrix was defined in eq .( [ zdef ] ) .note that if atoms and are on different bodies , a given constraint involves either one or the other atom ( or neither ) , so at least one of the two terms on the right - hand side of eq .is then zero .equation can now be written as where is a function of the phase - space coordinate as determined by eq . and is independent of .finally , the scalar can be determined by employing energy conservation , where denotes the discontinuous change in the potential energy at the collision time . inserting the expression in ( [ changep ] ) into and using eq ., one gets a quadratic equation for the scalar , for finite values of , the value of is therefore where the physical solution corresponds to the positive ( negative ) root if ( ) , provided .if this latter condition is not met , there is not enough kinetic energy to overcome the discontinuous barrier , and the system experiences a hard - core scattering , with , so that eq .gives .once the value of has been computed , the discrete changes in momenta or velocities are easily computed using eq .( [ changep ] ) . the solution method outlined above can be applied to semi - flexible as well as rigid molecular systems , but is not very suitable for rigid , continuous bodies composed of an infinite number of point particles . for perfectly rigid molecules , a more convenient approach is therefore to analyze the effect of impulsive collisions on the center of mass and angular coordinates of the system , which are the minimum number of degrees of freedom required to specify the dynamics of rigid systems .the momentum of the center of mass and the angular momentum of rigid molecule are affected by the impulsive collision via where and are the moment of inertia tensor and the angular velocity of body in the laboratory frame , respectively .note that they are related to their respective quantities in the principal axis frame ( body frame ) via the matrix ( now associated with the body ) : to derive specific forms for the impulsive changes and , one may either calculate the impulsive force and torque acting on the center of mass and angular momentum , leading to and , where and are the points at which the forces are applied on body and , respectively , while , and should be obtained from energy conservation . to understand this better andmake a connection with the previous section , one may equivalently view the continuum rigid body as a limit of a non - continuum rigid body composed of constrained point particles , and use the expressions derived in the previous section for the changes in momenta of the constituents . in the latter approach ,it is convenient to switch the notation for the positions and momenta of the atoms from and to and , which indicate the position and momentum of particle on body , respectively . using this notation and considering a collision between particle on body and particle on body , eq .( [ changep ] ) can be written as , \label{changepa}\end{aligned}\ ] ] where is the unit vector along the direction of the vector connecting atom on body with its colliding partner on body .thus , noting that , where is the total mass of body , and using eq .( [ changepa ] ) , one finds that since similarly , one finds that where . 
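collecting these pieces, the impulsive update of a single rigid body can be written compactly; the following numpy sketch uses our own symbol names and takes the scalar impulse as given (its value follows from the energy condition discussed next).

```python
import numpy as np

def apply_impulse(V, W, M, I_lab, r_contact, R_com, S, sigma_hat):
    """update centre-of-mass velocity V and angular velocity W of one rigid body
    that receives an impulse S*sigma_hat at the contact point r_contact.

    M is the total mass, I_lab the (3x3) moment-of-inertia tensor in the lab frame,
    R_com the centre of mass; the colliding partner receives the opposite impulse."""
    J = S * sigma_hat                                   # impulsive force integrated over the collision
    V_new = V + J / M                                   # jump of the centre-of-mass velocity
    W_new = W + np.linalg.solve(I_lab, np.cross(r_contact - R_com, J))  # jump of the angular velocity
    return V_new, W_new
```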
comparing with eq ., it is evident that note that is a matrix inverse .once again the impulsive changes are directly proportional to , and the change of the angular velocity of body in the laboratory frame due to the collision can be calculated analogously . to determine the scalar , one again uses the conservation of total energy ( ) to see that inserting eqs .( [ changepcom ] ) and ( [ changeomega ] ) into the energy equation above yields , after some manipulation , a quadratic equation for of the form of eq .( [ quadratic ] ) , with where with for a spherically symmetric system , , and .any event - driven molecular dynamics simulation relies on the assumption that no collision is ever missed .however , collisions will be missed whenever the time difference between two nearby events is on the order of ( or smaller than ) the time error of the scheduled events , which indicates that there is still a finite chance that a collision is missed even when event times are calculated in a simulation starting from an analytic expression , due to limits on machine precision .although this subtle issue is not very important in a hard sphere system , in the present context it is of interest .indeed , the extensive use of numerical root searches for the event time calculations combined with the need for computational efficiency demands a lower precision in the time values of collision events ( typically a precision of instead of for analytical roots ) . in this section, it will be shown how to handle missed collisions in the context of the hybrid monte carlo scheme ( hmc ) . in general, the hmc method combines the monte carlo method with molecular dynamics to construct a sequence of independent configurations , distributed according to the canonical probability density , \label{canonicalprob}\ ] ] where is the configurational integral , is boltzmann s constant , and is the temperature . in the present context , this method can be implemented as follows : initially , a new set of momenta is selected by choosing a random center of mass momentum and angular velocity for each molecule from the gaussian distribution .\ ] ] the system is then evolved deterministically through phase - space for a fixed time according to the equations of motion .this evolution defines a mapping of phase - space given by .the resulting phase space point and trajectory segment are then accepted with probability \right\ } , \label{probacceptance}\ ] ] where and this algorithm generates a markov chain of configurations with an asymptotic distribution given by the stationary canonical distribution defined in eq .( [ canonicalprob ] ) provided that the phase space trajectory is _ time reversible _ and _ _ area preserving__ . since free translational motion is time reversible , and the reversibility of the rotational equations of motion is evident from eq .( [ pmat ] ) , the first requirement is satisfied . furthermore , since the invariant phase space metric is uniform for fully rigid bodies ( see eq . in sec . [ constraineddynamics ] ) ,the _ area preserving _ condition is also satisfied .ideally , a dmd simulation satisfies so that according to eq .( [ probacceptance ] ) every trajectory segment is accepted . 
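written out as a small python sketch, one sweep of this hybrid scheme could look as follows; `draw_momenta`, `propagate` and `hamiltonian` are hypothetical helpers standing in for the momentum resampling, the event-driven propagation over the segment and the total energy, and are not functions from the paper.

```python
import numpy as np
rng = np.random.default_rng(0)

def hmc_sweep(state, hamiltonian, propagate, draw_momenta, beta, t_segment):
    """one hybrid Monte Carlo step around the event-driven dynamics:
    resample momenta, run DMD for a fixed time, then Metropolis-accept the segment.
    A missed hard-core collision gives an infinite energy jump and is always rejected."""
    state = draw_momenta(state, beta)          # fresh Maxwell-Boltzmann momenta / angular velocities
    h_old = hamiltonian(state)
    trial = propagate(state, t_segment)        # deterministic, time-reversible DMD trajectory
    delta_h = hamiltonian(trial) - h_old       # zero if no event was missed
    if delta_h <= 0.0 or rng.random() < np.exp(-beta * delta_h):
        return trial, True
    return state, False                        # reject the segment, keep the old configuration
```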
in the less ideal , more realistic case in which collisions are occasionally missed, the hmc scheme provides a rigorous way of accepting or rejecting the segment .if a hard - core collision has been missed and the configuration at the end of a trajectory segment has molecules in unphysical regions of phase space where the potential energy is infinite , then and the new configuration and trajectory segment are always rejected . on the other hand , if only asquare - well interaction has been missed , at the end of the trajectory segment is finite and there is a non - zero probability of accepting the trajectory .an analogous strategy can be devised to carry out microcanonical averages . in this case , the assignment of new initial velocities in the first step is still done randomly but in such a way that the total kinetic energy of the system remains constant .such a procedure can be carried out by exchanging center of mass velocities between randomly chosen pairs of molecules .the system is evolved dynamically through phase space for a fixed time and the new phase space point is accepted according to where is given by eq .( [ deltah ] ) . clearly , the case only occurs when a collision has been missed , and in such a case the trajectory segment is never accepted .it should be emphasized that in the hmc scheme , a new starting configuration for a segment of time evolution is chosen only after every dmd time interval .an algorithm in which a new configuration is selected only after a collision is missed is likely to violate detailed balance , and is therefore not a valid monte - carlo scheme . on the other hand ,the length of the trajectory segments in the hmc method outlined above can be chosen to be slightly larger than the relevant relaxation time of the system .such a choice allows one to use the deterministic phase space trajectory to compute time - dependent correlation functions from the _ exact _ dynamics of the system without rejecting a significant fraction of the trajectory segments .in this paper we have shown how to carry out discontinuous molecular dynamics simulations for arbitrary semi - flexible and rigid molecules . for semi- flexible bodies , the dynamics and collision rules have been derived from the principles of constrained lagrangian mechanics .the implementation of an efficient dmd method for semi - flexible systems is hindered by the fact that in almost all cases the equations of motion must be propagated numerically in an event searching algorithm so that the constraints are enforced at all times .nonetheless , such a scheme can be realized using the shake or rattle algorithms in combination with the root searching methods outlined here .the dynamics of a system of completely rigid molecules interacting through discontinuous potentials is more straightforward .for such a system , the euler equations for rigid body dynamics can be used to calculate the free evolution of a general rigid object .this analytical solution enables the design of efficient numerical algorithms for the search for collision events .in addition , the collision rules for calculating the discontinuous changes in the components of the center of mass velocity and angular momenta have been obtained for arbitrary bodies interacting through a point based on conservation principles .furthermore , the sampling of the canonical and microcanonical ensembles , as well as the handling of missed collisions , has also been discussed in the context of a hybrid monte carlo scheme . 
from an operational standpoint, the difference between the method of dmd and molecular dynamics using continuous potentials in rigid systems lies in the fact that the dmd approach does not require the calculation of forces and sequential updating of phase space coordinates at discrete ( and short ) time intervals since the response of the system to an impulse can be computed analytically .instead , the computational effort focuses on finding the precise time at which such impulses exert their influence .the basic building block outlined here for the numerical computation of collision times is a grid search , for which the positions of colliding atoms on a given pair of molecules need to be computed at equally spaced points in time . as outlined in sec .[ includingcollisions ] , this can be done efficiently starting with a completely explicit analytical form of the motion of a torque - free rigid body , without which the equations of motions would have to be integrated numerically .an efficient implementation of the dmd technique to find the time collision events should make use of a ) a large grid step combined with a threshold scenario to catch pathological cases , b ) sophisticated but standard techniques such as binary event trees , cell divisions , and local clocks , and c ) a new technique of finding collision times numerically that involves truncating the grid search and scheduling virtual collision events . on a fundamental level, it is natural to wonder whether the ` stepped ' form of a discontinuous potential could possibly model any realistic interaction .such concerns are essentially academic , since it is always possible to approximate a given interaction potential with as many ( small ) steps as one would like in order to approximate a given potential to any desired level of accuracy .of course , the drawback to mimicking a smooth potential with a discontinuous one with many steps is that the number of ` collision ' events that occur in the system per unit time scales with the number of steps in the potential .hence , one would expect that the efficiency of the simulation scales roughly inversely with the number of steps in the interaction potential .nonetheless , the issue is a practical one : how small can the number of steps in the interaction potential be such that one still gets a good description of the physics under investigation ? in the accompanying paper , we will see for benzene and methane that it takes surprisingly few steps ( e.g. a hard core plus a square - well interaction ) to get results which are very close to those of continuous molecular dynamics .additionally , we compare the efficiency of such simulations to simulations based on standard molecular dynamics methods .the authors would like to acknowledge support by grants from the natural sciences and engineering research council of canada ( nserc ) . to be more precise ,no hamiltonian exists with and as conjugate variables .otherwise , should be equal to and should be equal to , whence , but eq . 
violates this relation .the threshold value is found by trial and error ; a trial simulation with a very small threshold value is run which is sure to miss a collision at some point due to a grazing collision ; the collision indicator function around the time of this grazing collision is inspected and the threshold value is adjusted such that this collision will not be missed .new trial simulation are run , and the threshold adjusted , until the frequency of missed grazing collisions is acceptable .this assumes no optimization whatsoever .it is easy to reduce the cost of finding the first possible collision times to per unit physical time , however , by storing only the first collision for every given particle .
a general framework for performing event - driven simulations of systems with semi - flexible or rigid bodies interacting under impulsive torques and forces is outlined . two different approaches are presented . in the first , the dynamics and interaction rules are derived from lagrangian mechanics in the presence of constraints . this approach is most suitable when the body is composed of relatively few point masses or is semi - flexible . in the second method , the equations of rigid bodies are used to derive explicit analytical expressions for the free evolution of arbitrary rigid molecules and to construct a simple scheme for computing interaction rules . efficient algorithms for the search for the times of interaction events are designed in this context , and the handling of missed interaction events is discussed .
classification of networked data is a quite attractive field with applications in computer vision , bioinformatics , spam detection and text categorization . in recent yearsnetworked data have become widespread due to the increasing importance of social networks and other web - related applications .this growing interest is pushing researchers to find scalable algorithms for important practical applications of these problems .+ in this paper we focus our attention on a task called _ node classification _ , often studied in the semi - supervised setting .recently , different teams studied the problem from a theoretic point of view with interesting results . for example on - line fast predictors for weighted and unweighted graphs and herbster et al . developed different versions of the perceptron algorithm to classify the nodes of a graph ( ) . introduced a game - theoretic framework for node classification .we adopt the same approach and , in particular , we obtain a scalable algorithm by finding a nash equilibrium on a special instance of their game . the main difference between our algorithm and theirsis the high scalability achieved by our approach .this is really important in practice , since it makes possible to use our algorithm on large scale problems .given a weighted graph , a labeling of is an assignment where .+ we expect our graph to respect a notion of regularity where adjacent nodes often have the same label : this notion of regularity is called _homophily_. most machine learning algorithms for node classification ( ) adopt this bias and exploit it to improve their performances .+ the learner is given the graph , but just a subset of , that we call training set .the learner s goal is to predict the remaining labels minimizing the number of mistakes . introduce also an irregularity measure of the graph , for the labeling , defined as the ratio between the sum of the weights of the edges between nodes with different labels and the sum of all the weights . intuitively , we can view the weight of an edge as a similarity measure between two nodes , we expect highly similar nodes to have the same label and edges between nodes with different labels being `` light '' .based on this intuition , we may assign labels to non - training nodes so to minimize some function of the induced weighted cut . in the binary classification case , algorithms based on min - cut have been proposed in the past ( for example ) .generalizing this approach to the multiclass case , naturally takes us to the _ multi - way cut _ ( or multi - terminal cut see ) problem .given a graph and a list of terminal nodes , find a set of edges such that , once removed , each terminal belongs to a different component .the goal is to minimize the sum of the weights of the removed edges .+ unfortunately , the multi - way cut problem is max snp - hard when the number of terminals is bigger than two ( ) . 
furthermore , efficient algorithms to find the multi - way cut on special instances of the problem are known , but , for example , it is not clear if it is possible to reduce a node classification problem on a tree to a multi - way cut on a tree .in this section we describe the game introduced by that , in a certain sense , aims at distributing over the nodes the cost of approximating the multi - way cut .this is done by expressing the labels assignment as a nash equilibrium .we have to keep in mind that , since this game is non - cooperative , each player maximizes its own payoff disregarding what it can do to maximize the sum of utilities of all the players ( the so - called social welfare ) .the value of the multi - way cut is strongly related to the value of the social welfare of the game , but in the general case a nash equilibrium does not give any guarantee about the collective result . + in the graph transduction game ( later called gtg ) , the graph topology is known in advance and we consider each node as a player .each possible label of the nodes is a pure strategy of the players . since we are working in a batch setting, we will have a train / test split that induces two different kind of players : * * determined players*( ) those are nodes with a known label ( train set ) , so in our game they will be players with a fixed strategy ( they do not change their strategy since we can not change the labels given as training set ) * * undetermined players*( ) those that do not have a fixed strategy and can choose whatever strategy they prefer ( we have to predict their labels ) the game is defined as , where is the set of players , is the joint strategy space ( the cartesian product of all strategy sets ) , and is the combined payoff function which assigns a real valued payoff to each pure strategy profile and player . a mixed strategy of player is a probability distribution over the set of the pure strategies of .each pure strategy corresponds to a mixed strategy where all the strategies but the -th one have probability equals to zero .we define the utility function of the player as where is the probability of .we assume the payoff associated to each player is additively separable ( this will be clear in the following lines ) .this makes gtg a member of a subclass of the multi - player games called polymatrix games . 
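to make the payoff structure concrete, here is a small numpy sketch of a single player's expected payoffs and pure best response under the homophilic partial payoff matrices used below (edge weight times the identity matrix); the array shapes and names are ours, not notation from the paper.

```python
import numpy as np

def payoffs_and_best_response(i, X, W, is_labelled):
    """X is an (n_players, n_labels) matrix of mixed strategies, W the weighted
    adjacency matrix. With partial payoff matrices w_ij * I, the payoff of label k
    for player i is simply the weighted sum of the neighbours' probability of k."""
    scores = W[i] @ X                     # expected payoff of each pure strategy of player i
    if is_labelled[i]:
        return scores, X[i]               # determined players keep their fixed strategy
    best = np.zeros_like(X[i])
    best[np.argmax(scores)] = 1.0         # pure best response of an undetermined player
    return scores, best
```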
for a pure strategy profile, the payoff function of every player is a sum of the pairwise payoffs over its neighbors, where two nodes are neighbors if they are connected by an edge; this can be written in matrix form using the partial payoff matrix between two players, defined as the corresponding edge weight of the adjacency matrix times the identity matrix whose size equals the number of labels. the utility function of each player can then be re-written accordingly.

[table [ t : multi ] about here.]

the results of our experiments, shown in table [ t : multi ], are not conclusive, but we can observe some interesting trends:

* it is not really clear which one between gtg-ess and labprop is the most accurate algorithm, but mucca is always competitive with them.

* mucca is always much better than wmv. as expected, wmv works better on `` not too sparse '' graphs such as ghgraph, but even in this case it is outperformed by mucca.

* gtg-ess and labprop's time complexity did not permit us to run them in a reasonable amount of time with our computational resources.

we introduced a novel scalable algorithm for multiclass node classification in arbitrary weighted graphs. our algorithm is motivated within a game-theoretic framework, where test labels are expressed as the nash equilibrium of a certain game. in practice, mucca works well even on binary problems against competitors like label propagation and shazoo, which have been specifically designed for the binary setting. several questions remain open. for example, committees of mucca predictors work well, but we do not know whether there are better ways to aggregate their predictions. also, given their common game-theoretic background, it would be interesting to explore possible connections between committees of mucca predictors and gtg-ess.
we introduce a scalable algorithm , mucca for multiclass node classification in weighted graphs . unlike previously proposed methods for the same task , mucca works in time linear in the number of nodes . our approach is based on a game - theoretic formulation of the problem in which the test labels are expressed as a nash equilibrium of a certain game . however , in order to achieve scalability , we find the equilibrium on a spanning tree of the original graph . experiments on real - world data reveal that mucca is much faster than its competitors while achieving a similar predictive performance .
epidemic models are classically phrased in ordinary differential equation ( ode ) systems for the host population divided in classes of susceptible individuals and infected ones ( sis system ) , or in addition , a class of recovered individuals due to immunity after an infection to the respective pathogen ( sir epidemics ) .the infection term includes a product of two variables , hence a non - linearity which in extended systems can cause complicated dynamics . though these simple sis and sir models only show fixed points as equilibrium solutions , they already show non - trivial equilibria arising from bifurcations , and in stochastic versions of the system critical fluctuations at the threshold .further refinements of the sir model in terms of external forcing or distinction of infections with different strains of a pathogen , hence classes of infected with one or another strain recovered from one or another strain , infected with more than one strain etc ., can induce more complicated dynamical attractors including equilibria , limit cycles , tori and chaotic attractors .classical examples of chaos in epidemiological models are childhood diseases with extremely high infection rates , so that a moderate seasonal forcing can generate feigenbaum sequences of period doubling bifurcations into chaos .the success in analysing childhood diseases in terms of modelling and data comparison lies in the fact that they are just childhood diseases with such high infectivity .otherwise host populations can not sustain the respective pathogens .in other infectious diseases much lower forces of infection have to be considered leading to further conceptual problems with noise affecting the system more than the deterministic part , leading even to critical fluctuations with power law behaviour , when considering evolutionary processes of harmless strains of pathogens versus occasional accidents of pathogenic mutants . only explicitly stochastic models , of which the classical ode models are mean field versions , can capture the fluctuations observed in time series data .more recently it has been demonstrated that the interaction of various strains on the infection of the host with eventual cross - immunities or other interactions between host immune system and multiple strains can generate complicated dynamic attractors .a prime example is dengue fever .a first infection is often mild or even asymptomatic and leads to life long immunity against this strain .however , a subsequent infection with another strain of the virus often causes clinical complications up to life threatening conditions and hospitalization , due to ade .more on the biology of dengue and its consequences for the detailed epidemiological model structure can be found in aguiar and stollenwerk including literature on previous modelling attempts , see also . on the biological evidence for adesee e.g. . 
besides the difference in the force of infection between primary and secondary infection , parametrized by a so called ade parameter , which has been demonstrated to show chaotic attractors in a certain parameter region , another effect , the temporary cross - immunity after a first infection against all dengue virus strains , parametrized by the temporary cross - immunity rate , shows bifurcations up to chaotic attractors in a much wider and biologically more realistic parameter region .the model presented in the appendix has been described in detail in and has recently been analysed for a parameter value of corresponding to on average half a year of temporary cross immunity which is biologically plausible . for increasing ade parameter first an equilibrium which bifurcates via a hopf bifurcation into a stable limit cycle andthen after further continuation the limit cycle becomes unstable in a torus bifurcation .this torus bifurcation can be located using numerical bifurcation software based on continuation methods tracking known equilibria or limit cycles up to bifurcation points .the continuation techniques and the theory behind it are described e.g. in kuznetsov .complementary methods like lyapunov exponent spectra can also characterize chaotic attractor , and led ultimately to the detection of coexisting attractors to the main limit cycles and tori originated from the analytically accessible fixed point for small .such coexisting structures are often missed in bifurcation analysis of higher dimensional dynamical systems but are demonstrated to be crucial at times in understanding qualitatively the real world data , as for example demonstrated previously in a childhood disease study . in such a study first the understanding of the deterministic system s attractor structure is needed , and then eventually the interplay between attractors mediated by population noise in the stochastic version of the system gives the full understanding of the data .here we present for the first time extended results of the bifurcation structure for various parameter values of the temporary cross immunity in the region of biological relevance and multi - parameter bifurcation analysis .this reveals besides the torus bifurcation route to chaos also the classical feigenbaum period doubling sequence and the origin of so called isola solutions .the symmetry of the different strains leads to symmerty breaking bifurcations of limit cycles , which are rarely described in the epidemiological literature but well known in the biochemical literature , e.g for coupled identical cells .the interplay between different numerical procedures and basic analytic insight in terms of symmetries help to understand the attractor structure of multi - strain interactions in the present case of dengue fever , and will contribute to the final understanding of dengue epidemiology including the observed fluctuations in real world data . in the literature the multi - strain interaction leading to deterministic chaos via ade has been described previously , e.g. 
but neglecting temporary cross immunity and hence getting stuck in rather unbiological parameter regions, whereas more recently the first considerations of temporary cross immunity have appeared in rather complicated models, including all kinds of interactions, which have not yet been analysed in detail and in which the possible dynamical structures were not investigated closely. the multistrain model under investigation can be given as an ode system for the state vector of the epidemiological host classes, with, besides other fixed parameters which are biologically undisputed, a vector of varied parameters. for a detailed description of the biological content of state variables and parameters see . the ode equations and fixed parameter values are given in the appendix. the equilibrium values are given by the equilibrium condition, and for limit cycles by the periodicity condition over one period. for chaotic attractors the trajectory of the dynamical system reaches the attractor trajectory in the limit of infinite time, and equally for tori with irrational winding ratios. in all cases the stability can be analysed by considering small perturbations around the attractor trajectories; here, any attractor is denoted in the same way, be it an equilibrium, a periodic orbit or a chaotic attractor. in this ode system the linearized dynamics is given by the jacobian matrix of the ode system eq. ( [ dynamicsf ] ) evaluated at the trajectory points. for equilibria, the jacobian matrix is analyzed in terms of its eigenvalues to determine stability and its loss at bifurcation points, negative real parts indicating stability. for the stability of limit cycles and its loss, floquet multipliers are more common (essentially the exponentials of the eigenvalues), multipliers inside the unit circle indicating stability, and the location where they eventually leave the unit circle determining the type of limit cycle bifurcation. for chaotic systems, lyapunov exponents are determined from the jacobian along the trajectory, a positive largest exponent showing deterministic chaos, a zero largest exponent showing limit cycles including tori, and a largest exponent smaller than zero indicating fixed points. to investigate the bifurcation structure of the system under investigation, we first observe the symmetries due to the multi-strain structure of the model. this becomes important, for the time being, for equilibria and limit cycles. we introduce the following notation: the symmetry transformation matrix

\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}
\label{symmetrymatrix}

corresponds to interchanging the labels of the two strains, and we have the following symmetry: applying this transformation to an equilibrium, or to a limit cycle solution for all times, again gives a solution of the system.
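the structure of this matrix is easy to check numerically; the short snippet below (ours, with an arbitrary but fixed ordering of the ten host classes) builds it as a permutation and verifies that it is an involution, as required below.

```python
import numpy as np

# permutation matrix acting on the 10 host classes: the totally symmetric classes
# (first and last) stay put, the four strain-specific pairs are interchanged
S = np.eye(10)
for a, b in [(1, 2), (3, 4), (5, 6), (7, 8)]:   # 0-based indices of swapped classes
    S[[a, b]] = S[[b, a]]

assert np.allclose(S @ S, np.eye(10))            # S equals its own inverse
assert np.allclose(S, S.T)                       # and is symmetric, hence orthogonal
```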
for the right-hand side of the ode system ( [ dynamicsf ] ), the kind of symmetry found above is called -symmetry when the following equivariance condition holds for a matrix which, together with the unit matrix, forms a representation of the two-element group. observe that besides this matrix, the unit matrix itself also satisfies ( [ eqn : equivariancecondition ] ). the symmetry transformation matrix in eq. ( [ symmetrymatrix ] ) fulfills these requirements, and it is easy to verify that the equivariance condition eq. ( [ eqn : equivariancecondition ] ) and the required properties are satisfied for our ode system. in seydel a simplified version of the famous brusselator that shows this type of symmetry is discussed; there, an equilibrium and also a limit cycle show a pitchfork bifurcation with symmetry breaking. an equilibrium is called _ fixed _ when it is invariant under the symmetry transformation (see ). two equilibria are called -conjugate if their corresponding solutions are mapped onto each other by the symmetry transformation. for limit cycles a similar terminology is introduced. a periodic solution is called _ fixed _ when it is invariant under the transformation at every instant, and the associated limit cycles are also called _ fixed _. there is another type of periodic solution that is not fixed but called _ symmetric _, namely when the transformed solution equals the original solution shifted by half a period. again the associated limit cycles are also called _ symmetric _. both types of limit cycles are -invariant as curves: in the phase plane, where time parameterizes the orbit, the cycle and the transformed cycle are equal. a -invariant cycle is either fixed or symmetric. two noninvariant limit cycles are called -conjugate if their corresponding periodic solutions are mapped onto each other by the symmetry transformation. the properties of symmetric systems and the introduced terminology are used below in the interpretation of the numerical bifurcation analysis results. we refer to for an overview of the possible bifurcations of equilibria and limit cycles of -equivariant systems. we show the results of the bifurcation analysis in bifurcation diagrams for several values of the temporary cross-immunity rate, varying the ade parameter continuously. besides the previously investigated case, we also show a case of smaller and a case of larger temporary cross-immunity rate, obtaining more information on the bifurcations possible in the model as a whole. the above mentioned symmetries help in understanding the present bifurcation structure. the first one-parameter bifurcation diagram is shown in fig. [ fig : bifdia1phi ] a ). starting at small values of the ade parameter there is a stable fixed equilibrium, fixed in the above mentioned notion for symmetric systems. this equilibrium becomes unstable at a hopf bifurcation, and a stable fixed limit cycle originates there. this limit cycle shows a supercritical pitchfork bifurcation, i.e. a bifurcation of a limit cycle with floquet multiplier 1, splitting the original limit cycle into two new ones. besides the now unstable branch, two new branches originate for the pair of conjugate limit cycles. the branches merge again at another supercritical pitchfork bifurcation, after which the limit cycle is stable again for higher parameter values. the pair of -conjugate limit cycles becomes unstable at a torus bifurcation.

[fig. [ fig : bifdia1phi ] a ), b ), c ): one-parameter bifurcation diagrams for the three values of the temporary cross-immunity rate.]

besides this main bifurcation pattern we found two isolas, that is, isolated solution branches of limit cycles. these isola cycles are not -invariant. isolas consisting of isolated limit cycles exist between two tangent bifurcations. one isola consists of a stable and an unstable branch; the other shows more complex bifurcation patterns, and has no fully stable branch.
at one end of this isola, at a tangent bifurcation, a stable and an unstable limit cycle collide. the stable branch becomes unstable via a flip bifurcation, or period doubling bifurcation, with floquet multiplier -1, which is also a pitchfork bifurcation for the period-two limit cycles. at the other end of that branch, at the second tangent bifurcation, the colliding limit cycles are unstable. close to this point, on one branch there is a torus bifurcation, also called a neimark-sacker bifurcation, and a flip bifurcation which is again a pitchfork bifurcation for the period-two limit cycles. continuation of the stable branch originating from the first flip bifurcation gives another flip bifurcation, and one more close to the other end of the branch. these results suggest that for this isola two classical routes to chaos can exist, namely via the torus or neimark-sacker bifurcation, where the dynamics on the originating torus is chaotic, and via the cascade of period doublings. for the intermediate value of the temporary cross-immunity rate, the one-parameter bifurcation diagram is shown in fig. [ fig : bifdia1phi ] b ). the stable fixed equilibrium becomes unstable at a supercritical hopf bifurcation, where a stable fixed limit cycle originates. this stable limit cycle becomes unstable at a supercritical pitchfork bifurcation point for limit cycles. this point marks the origin of a pair of -conjugate stable limit cycles besides the now unstable fixed limit cycle. here one has to consider the two infected subpopulations in order to distinguish the conjugate limit cycles; because the two variables are interchangeable, this can also be interpreted as two stable limit cycles for a single one of the variables. the stable equilibrium below the hopf bifurcation, in which the strain-related variables take equal values, is a fixed equilibrium, and the same equalities hold for the fixed limit cycle in the parameter interval between the hopf bifurcation and the pitchfork bifurcation; this means that at the hopf bifurcation the stable fixed equilibrium becomes an unstable fixed equilibrium. in the parameter interval between the two pitchfork bifurcations, the supercritical one and the subcritical one, two stable limit cycles coexist, and these limit cycles are -conjugate. at the pitchfork bifurcation points the fixed limit cycle becomes unstable and remains fixed, and two stable -conjugate limit cycles originate (see (* ? ? ? * theorem 7.7)). the invariant plane forms the separatrix between the pair of stable -conjugate limit cycles. the initial values of the two state variables, together with the point on the invariant plane, determine to which limit cycle the system converges. continuation of the stable symmetric limit cycle gives a torus or neimark-sacker bifurcation. at this point the limit cycles become unstable because a pair of complex-conjugate multipliers crosses the unit circle. observe that at this point in the time series plot (* ? ? ? * there fig. 12) the chaotic region starts. in the following, the route to chaos, namely a sequence of neimark-sacker bifurcations into chaos, is mentioned. increasing the bifurcation parameter along the now unstable pair of -conjugate limit cycles leads to a tangent bifurcation where a pair of two unstable limit cycles collide. this branch terminates at the second pitchfork bifurcation point.
because the first fold point gave rise to a stable limit cycle and this fold point to an unstable limit cycle, we call the first pitchfork bifurcation supercritical and the latter pitchfork bifurcation subcritical. these results agree very well with the simulation results shown in the bifurcation diagram for the maxima and minima of the overall infected (* ? ? ? *). notice that auto calculates only the global extrema during a cycle, not the local extrema. fig. [ fig : bifdia1phi ] b ) also shows two isolas similar to those in fig. [ fig : bifdia1phi ] a ). for the largest value of the temporary cross-immunity rate, the bifurcation diagram is shown in fig. [ fig : bifdia1phi ] c ). in the lower parameter range there is bistability of two limit cycles in an interval bounded by two tangent bifurcations. the stable manifold of the intermediate saddle limit cycle acts as a separatrix. increasing the ade parameter, the stable limit cycles become unstable at the pitchfork bifurcation. following the unstable primary branch, for larger parameter values one observes an open loop bounded by two tangent bifurcations, at one of which the extreme parameter value of the loop is attained. then, lowering the parameter again, there is a pitchfork bifurcation; later we will return to the description of this point. lowering the parameter further, the limit cycle becomes stable again at one of the tangent bifurcations, and increasing it, this limit cycle becomes unstable again at the pitchfork bifurcation. continuation of the secondary branch of the two -conjugate limit cycles from this point reveals that the stable limit cycle becomes unstable at a torus bifurcation. the simulation results depicted in (* ? ? ? * there fig. 13) show that there is chaos beyond this point. the secondary pair of -conjugate limit cycles that originates from the pitchfork bifurcation becomes unstable at a flip bifurcation, and increasing the parameter further it becomes stable again at another flip bifurcation; below we return to the interval between these two flip bifurcations. the stable part becomes unstable at a tangent bifurcation; then, continuing, after a further tangent bifurcation there is a neimark-sacker bifurcation. this bifurcation can lead to a sequence of neimark-sacker bifurcations into chaos. the unstable limit cycle terminates via a tangent bifurcation where the primary limit cycle possesses a pitchfork bifurcation. at the flip bifurcation the cycle becomes unstable and a new stable limit cycle with double period emanates. this stable branch becomes unstable at a flip bifurcation again, and we conclude that there is a cascade of period doublings as a route to chaos. similarly, this happens in reversed order, ending at the flip bifurcation where the secondary branch becomes stable again.

[fig. [ fig : bifdia1iphidd ] a ), b ), c ): detail of the bifurcation diagram (a) and the coexisting limit cycles (b, c) discussed in the text.]

fig. [ fig : bifdia1iphidd ] a ) gives the results for the interval of interest, where only the minima are shown. in this plot a `` period three '' limit cycle is also shown. in a small region it is stable and coexists with the `` period one '' limit cycle. the cycles are shown in fig. [ fig : bifdia1iphidd ] b ) and c ); the one in c ) looks like a period-3 limit cycle.
in fig . [ fig : bifdia1iphidd ] continuation of the limit cycle gives a closed graph bounded at the two ends by tangent bifurcations where a stable and an unstable limit cycle collide . the intervals where the limit cycle is stable are bounded on the other end by flip bifurcations . one unstable part intersects the higher period cycles that originate via the cascade of period doubling between the period-1 limit cycle flip bifurcations at and . this suggests that the period-3 limit cycle is associated with a `` period-3 window '' of the chaotic attractor . we conjecture that this interval is bounded by two homoclinic bifurcations for a period-3 limit cycle ( see ) . the bifurcation diagram shown in ( * ? ? ? * there fig . 13 ) shows the point where the chaotic attractor disappears abruptly , possibly at one of the two homoclinic bifurcations . in that region the two conjugate limit cycles that originate at the pitchfork bifurcation at are the attractors . these results suggest that there are chaotic attractors associated with the period-1 limit cycle , one occurring via a cascade of flip bifurcations originating from the two ends at and , and one via a neimark - sacker bifurcation at . we will now link the three studies of the different values by investigating a two - parameter diagram for and , concentrating especially on the creation of isolated limit cycles , which sometimes lead to further bifurcations inside the isola region . fig . [ fig : phialphad ] gives a two - parameter bifurcation diagram where and are the free parameters . for low -values there is the hopf bifurcation and all other curves are tangent bifurcation curves . isolas appear or disappear upon crossing an isola variety . at an elliptic isola point an isolated solution branch is born , while at a hyperbolic isola point an isolated solution branch vanishes by coalescence with another branch . from fig . [ fig : phialphad ] we see that at two values of isolas are born . furthermore , period doubling bifurcations appear for lower values , indicating the feigenbaum route to chaos . however , only the calculation of lyapunov exponents , which are discussed in the next section , can clearly indicate chaos . the lyapunov exponents are the logarithms of the eigenvalues of the jacobian matrix along the integrated trajectories , eq . ( [ dynamicsdeltaf ] ) , in the limit of large integration times . apart from very simple iterated maps , no analytic expressions for the lyapunov exponents of chaotic systems can be given . for the calculation of the iterated jacobian matrix and its eigenvalues , we use the qr decomposition algorithm . in fig . [ fig : lyapspect6 ] we show for various values the four largest lyapunov exponents in the range between zero and one . for in fig . [ fig : lyapspect6 ] a ) we see for small values fixed point behaviour indicated by a negative largest lyapunov exponent up to around . there , at the hopf bifurcation point , the largest lyapunov exponent becomes zero , indicating limit cycle behaviour for the whole range of , apart from the final bit before , where a small spike with positive lyapunov exponent might be present , but difficult to distinguish from the noisy numerical background .
for in fig .[ fig : lyapspect6 ] b ) however , we see a large window with positive largest lyapunov exponent , well separated from the second largest being zero .this is s clear sign of deterministically chaotic attractors present for this range .just a few windows with periodic attractors , indicated by the zero largest lyapunov exponent are visible in the region of . for smaller valueswe observe qualitatively the same behaviour as already seen for .for the smaller value of in fig .[ fig : lyapspect6 ] c ) the chaotic window is even larger than for .hence deterministic chaos is present for temporary cross immunity in the range around in the range of between zero and one .we have presented a detailed bifurcation analysis for a multi - strain dengue fever model in terms of the ade parameter , in the previously not well investigated region between zero and one , and a parameter for the temporary cross immunity .the symmetries implied by the strain structure , are taken into account in the analysis .many of the possible bifurcations of equilibria and limit cycles of -equivariant systems can be distinguished . using auto different dynamical structures were calculated .future time series analysis of epidemiological data has good chances to give insight into the relevant parameter values purely on topological information of the dynamics , rather than classical parameter estimation of which application is in general restricted to farely simple dynamical scenarios .this work has been supported by the european union under the marie curie grant mext - ct-2004 - 14338 .we thank gabriela gomes and luis sanchez , lisbon , for scientific support .99 , _ evolution towards criticality in an epidemiological model for meningococcal disease _ , physics letters a * 317 * ( 2003 ) 8796 ., _ diversity in pathogenicity can cause outbreaks of menigococcal disease _ , proc .usa * 101 * ( 2004 ) 1022910234 . ,_ a new chaotic attractor in a basic multi - strain epidemiological model with temporary cross - immunity _ , arxiv:0704.3174v1 [ nlin.cd ] ( 2007 ) ( accessible electronically at http://arxive.org ) . ,_ scale - free network for a dengue epidemic_ , applied mathematics and computation * 159 * ( 2008 ) 376381 ., _ neutralization and antibody - dependent enhancement of dengue viruses _, advances in virus research * 60 * ( 2003 ) 42167 . , _ epidemiology of dengue fever : a model with temporary cross - immunity and possible secondary infection shows bifurcations and chaotic behaviour in wide parameter regions _ , submitted ( 2008 ) ., _ auto 07p continuation and bifurcation software for ordinary differential equations _ ,technical report : concordia university , montreal , canada ( 2007 ) ( accessible electronically at http://indy.cs.concordia.ca/auto/ ) ., _ elements of applied bifurcation theory _ applied mathematical sciences * 112 * , springer - verlag , 3 edition , new york , 2004 . , _ chaotic evolution and strange attractors _ , cambridge university press , cambridge , 1989 . , _ chaos in dynamical systems _ , cambridge university press , cambridge , 2002 . , _ nonlinear time series analysis of empirical population dynamics _, ecological modelling * 75*/*76 * ( 1994 ) 171181 . , _ the effect of antibody - dependent enhancement on the transmission dynamics and persistence of multiple - strain pathogens _ , proc .usa * 96 * ( 1999 ) 79094 . 
, _ instabilities in multiserotype disease models with antibody - dependent enhancement _, journal of theoretical biology * 246 * ( 2007 ) 1827 ., _ ecological and immunological determinants of dengue epidemics _ , proc .usa * 103 * ( 2006 ) 1180211807 ., _ decreases in dengue transmission may act to increase the incidence of dengue hemorrhagic fever _ , proc .sci * 105 * ( 2008 ) 22382243 . ,_ practical bifurcation and stability analysis - from equilibrium to chaos _ , springer - verlag , new york , 1994 ., _ singularities and groups in bifurcation theory _ , springer , new york , 1985 . , _ routes to chaos in high - dimensional dynamical systems : a qualitative numerical study _ , physica d * 223 * ( 2006 ) 194207 . ,_ homoclinic and heteroclinic orbits in a tri - trophic food chain _ , journal of mathematical biology * 39 * ( 1999 ) 1938 . , _ multiple attractors and boundary crises in a tri - trophic food chain _ , mathematical biosciences * 169 * ( 2001 ) 109128 ., _ chaotic behaviour of a predator - prey system _ , dynamics of continuous , discrete and impulsive systems , series b : applications and algorithms * 10 * ( 2003 ) 259272 . , _ consequence of symbiosis for food web dynamics _ , journal of mathematical biology * 49 * ( 2004 ) 227271 ., _ liapunov exponents from time series _ , phys .a * 34 * ( 1986 ) 49719 .the complete system of ordinary differential equations for a two strain epidemiological system allowing for differences in primary versus secondary infection and temporary cross immunity is given by for two different strains , and , we label the sir classes for the hosts that have seen the individual strains .susceptibles to both strains ( s ) get infected with strain ( ) or strain ( ) , with infection rate .they recover from infection with strain ( becoming temporary cross - immune ) or from strain ( becoming ) , with recovery rate etc .. with rate , the and enter again in the susceptible classes ( being immune against strain 1 but susceptible to 2 , respectively ) , where the index represents the first infection strain .now , can be reinfected with strain ( becoming ) , meeting with infection rate or meeting with infection rate , secondary infected contributing differently to the force of infection than primary infected , etc .. we include demography of the host population denoting the birth and death rate by . for constant population size we have for the immune to all strains and therefore we only need to consider the first 9 equations of eq .( [ ode2strain ] ) , giving 9 lyapunov exponents . in our numerical studieswe take the population size equal to so that numbers of susceptibles , infected etc .are given in percentage .as fixed parameter values we take , , .the parameters and are varied .
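as a rough illustration of the kind of computation behind the lyapunov spectra discussed above , the sketch below implements a two - strain sir - type vector field with temporary cross - immunity ( rate alpha ) , an ade factor ( phi ) on secondary infections and host demography ( mu ) , together with the standard benettin / qr iteration for the lyapunov exponents . it is a minimal sketch only : the state ordering , parameter names and numerical values are our assumptions for illustration , not the exact published system , and the euler / finite - difference steps are deliberately simple .

```python
# minimal sketch (not the exact published system): two-strain sir-type model with
# temporary cross-immunity (alpha), an ade factor (phi) on secondary infections and
# host demography (mu); parameter names and values are illustrative assumptions.
import numpy as np

def rhs(y, beta=73.0, gamma=52.0, alpha=2.0, phi=0.5, mu=1.0/65.0, N=1.0):
    s, i1, i2, r1, r2, s1, s2, i12, i21 = y
    # force of infection per strain: primary plus phi-weighted secondary infected
    f1 = beta * (i1 + phi * i21) / N
    f2 = beta * (i2 + phi * i12) / N
    return np.array([
        mu * (N - s) - (f1 + f2) * s,          # s : susceptible to both strains
        f1 * s - (gamma + mu) * i1,            # i1: primary infection, strain 1
        f2 * s - (gamma + mu) * i2,            # i2: primary infection, strain 2
        gamma * i1 - (alpha + mu) * r1,        # r1: temporarily cross-immune
        gamma * i2 - (alpha + mu) * r2,        # r2: temporarily cross-immune
        alpha * r1 - f2 * s1 - mu * s1,        # s1: immune to 1, susceptible to 2
        alpha * r2 - f1 * s2 - mu * s2,        # s2: immune to 2, susceptible to 1
        f2 * s1 - (gamma + mu) * i12,          # i12: secondary infection with strain 2
        f1 * s2 - (gamma + mu) * i21,          # i21: secondary infection with strain 1
    ])

def jacobian(y, eps=1e-7, **par):
    # finite-difference jacobian of the vector field (adequate for a sketch)
    f0, J = rhs(y, **par), np.zeros((len(y), len(y)))
    for k in range(len(y)):
        yp = y.copy(); yp[k] += eps
        J[:, k] = (rhs(yp, **par) - f0) / eps
    return J

def lyapunov_spectrum(y0, dt=1e-3, steps=500_000, **par):
    # benettin-style qr iteration: evolve a frame of tangent vectors and
    # accumulate log|diag(R)| of the re-orthonormalisation at every step
    y, Q, sums = np.array(y0, float), np.eye(len(y0)), np.zeros(len(y0))
    for _ in range(steps):
        y = y + dt * rhs(y, **par)                 # explicit euler step (sketch only)
        Q = Q + dt * jacobian(y, **par) @ Q        # linearised tangent dynamics
        Q, R = np.linalg.qr(Q)
        sums += np.log(np.abs(np.diag(R)))
    return sums / (steps * dt)
```

a positive largest exponent returned by such an iteration is the numerical signature of chaos used in the spectra discussed above , while a zero largest exponent indicates limit cycle behaviour .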
we analyse an epidemiological model of competing strains of pathogens and hence differences in transmission for first versus secondary infection due to interaction of the strains with previously acquired immunities , as has been described for dengue fever ( in dengue known as antibody dependent enhancement , ade ) . such models show a rich variety of dynamics through bifurcations up to deterministic chaos . including temporary cross - immunity even enlarges the parameter range of such chaotic attractors , and also gives rise to various coexisting attractors , which are difficult to identify by standard numerical bifurcation programs using continuation methods . a combination of techniques , including classical bifurcation plots and lyapunov exponent spectra , has to be applied in comparison to get further insight into such dynamical structures . here we present for the first time multi - parameter studies in a range of biologically plausible values for dengue . the multi - strain interaction with the immune system is expected to also have implications for the epidemiology of other diseases . numerical bifurcation analysis , lyapunov exponents , symmetry , coexisting attractors , antibody dependent enhancement ( ade )
over the course of the last decade , network science has attracted an ever growing interest since it provides important insights on a large class of interacting complex systems .one of the features that has drawn much attention is the structure of interactions highlighted by the network representation .indeed , it has become increasingly clear that global structural patterns emerge in most real networks .one such pattern , where links and nodes are aggregated into larger groups , is called the community structure of a network . while the exact definition of communities is still not agreed upon ,the general consensus is that these groups should be denser than the rest of the network .the notion that communities form some sort of independent units ( families , friend circles , coworkers , protein complexes , etc . ) within the network is thus embedded in that broader definition .it follows that communities represent functional modules , and that understanding their layout as well as their organization on a global level is crucial to a fuller understanding of the system under scrutiny . by developing techniques to extract this organization, one assumes that communities are encoded in the way nodes are interconnected , and that their structure may be recovered from limited , incomplete topological information . various algorithms and modelshave been proposed to tackle the problem , each featuring a different definition of the community structure while sharing the same general objective .although these tools have been used with success in several different contexts , a number of shortcomings are still to be addressed . in this report, we show how to improve existing algorithms independent of the procedure or the definitions they use .more precisely , we first show that present algorithms tend to overlook small communities found in the neighborhood of larger , denser ones .then , we propose and develop a _ cascading _ approach to community detection that greatly enhance their performance .it is known that a resolution limit exists for a large class of community detection algorithms that rely on the optimization of a quality function ( e.g. , modularity ) over non - overlapping partitions of the network .indeed , it appears that the size of the smallest detectable community is related to the size of the network .this leads to counterintuitive cases where clearly separated clusters of nodes are considered as one larger community because they are too small to be resolved by the detecting algorithm .a possible solution could be to conduct a second analysis on all detected communities in order to verify that no smaller modules can be identified .however , the optimal partition of a network should include overlapping communities , as they capture the multiplicity of functions that a node might fulfill since nodes can then be shared between many communities .we argue that a different resolution limit , due to an effect that we refer to as _ shadowing _ , arises in detection algorithms that : 1 .allow such _ overlapping communities _ ; 2 .rely on some _ global resolution parameter_. shadowing typically occurs when large / dense communities act as screens hence preventing the detection of smaller / sparser adjacent communities . 
to illustrate this phenomenon , we study two families of detection algorithms based on two different paradigms of community structure , namely nodes and links communities .the clique percolation algorithm ( cpa ) defines communities as maximal _-clique _ percolation chains , where a -clique is a fully connected subgraphs of nodes , and where a percolation chain is a group of cliques that can be reached from one adjacent - cliques are said to be adjacent if they share nodes . ]-clique to another .the complete community structure is obtained by detecting every maximal percolation chains for a given value of .it is noteworthy that the definition of a community in this context is consistent with the general description of communities outlined in sec .[ sec : intro ] .indeed , -clique percolation chains are dense by definition , and a sparser neighboring region is required to stop a -clique percolation chain , ensuring that communities are denser than their surroundings .we expect shadowing as both conditions listed in sec .[ sec : intro ] are met : 1 .since percolation chains communities consist of -cliques sharing nodes , overlapping communities occur whenever two cliques share less than nodes ; 2 .the size of the cliques , , acts as a global resolution parameter .let us explain this last point . in principle ,low values of lead to a more flexible detection of communities as a smaller clique size allows a wider range of configurations .however , low values of often yield an excessively coarse - grained community structure of the network since percolation chains may grow almost unhindered and include a significant fraction of the nodes . in contrast, large values of may leave most of the network uncharted as only large and dense clusters of nodes are then detected as communities .an _ optimal value _ corresponding to a compromise between these two extreme outcomes must therefore be chosen . as this value of attempts to balance these two unwanted effects for the entire network as a whole, a shadowing effect is expected to arise causing the algorithm to overlook smaller communities , or to merge them with larger ones .see fig .[ fig : cpa_shadowing ] for an illustration of this effect . and are respectively colored in green and blue . from eq .( [ eq : similarity_ahn ] ) , we have that .note that apart from nodes and , the neighboring nodes of the keystone ( colored in yellow ) are not considered in this calculation of . ]the link clustering algorithm ( lca ) aggregates links and hence the nodes they connect into communities based on the similarity of their respective neighborhood . denoting the link between nodes and , the similarity of two adjacent links and ( attached to a same node called the _ keystone _ ) is quantified through a jaccard index where is the set of node and its neighbors , and is the cardinality of the set . figure [ fig : similarity_ahn ] illustrates the calculation of .once the similarity has been calculated for all adjacent pair of links , communities are built by iteratively aggregating adjacent links whose similarity exceeds a given threshold .we refer to links that are left after this process ( i.e. , communities consisting of one single link ) as _ unassigned_. , and contain considerably more elements that their corresponding intersections since nodes , and all have high degrees . according to eq .[ eq : similarity_ahn ] , this implies that , and share lower similarities namely and if the triangle had been isolated ( see sec . 
[sec : cascading ] ) .it is therefore likely that these three links will be left unassigned . ] again , a shadowing effect is expected , as the two aforementioned conditions are fulfilled : 1 .because communities are built by aggregating links , this algorithm naturally allows communities to overlap ( to share nodes ) since a node can belong to as many communities as its degree ( number of links it is attached to ) can allow ; 2 .the similarity threshold acts as a global resolution parameter as it dictates whether two links belong to the same community or not . to elucidate the global aspect of , we describe how its value is chosen ( as proposed in ) .let us first define the density of community as where and are the number of links and nodes in community , respectively .considering that a community of nodes must at least include links , computes the fraction of potential `` excess links '' that are present in the community .the similarity threshold is then chosen such that it maximizes the overall density of the communities where is the set of communities detected for a given , is the total number of links assigned to communities of more than one link ( i.e. , ) .note that is typically a well - behaved function of and normally displays a single maximal plateau . the value of corresponding to this plateau is then selected as it leads , on average , to the denser set of communities , hence its global nature .following an analysis similar to sec .[ sec : cfinder ] , we expect small communities to be left undetected as they are eclipsed by larger and denser ones .this is mainly due to the use of a resolution parameter ( ) that can not be adjusted locally .for instance , links in a small community could exhibit vanishing similarities because some of the associated nodes are hubs ( nodes of high degree ) .this is especially true in the vicinity of large and dense clusters whose nodes are typically of high degree ( see fig . [fig : ahn_shadowing ] for an illustration ) .figures [ fig : cpa_shadowing ] and [ fig : ahn_shadowing ] suggest that the inability to detect small or sparse communities in the vicinity of larger or denser ones the shadowing effect could be circumvented by removing these structures from the networks .we formalize this idea and propose a _ cascading _ approach to community detection that proceeds as follows : 1 . identify large or dense communities by tuning the resolution parameter accordingly using a given community detection algorithm ; 2 . 
remove the internal links of the communities identified in step 1 ; 3 .repeat until no new significant communities are found .the first iteration of this algorithm detects the communities that are normally targeted by detection algorithms , thus ensuring that the cascading approach retains the main features of the `` canonical '' community structure .after removal of links involved in the detected communities , a new iteration of the detection algorithm is then performed on a sparser network in which previously hidden communities are now apparent .this process is repeated until a final and more thorough partition of the network into overlapping communities is eventually obtained .for example , in the case of the cpa , a high value of ( which leads to the traditional community structure ) is selected for the first iteration of the algorithm .the network then becomes significantly sparser since all cliques of size are destroyed by the removal of internal links in step 2 .subsequent iterations of the detection algorithm can thus be conducted at lower , unveiling finer structures , as the pathways formed by dense cluster are no longer available .the process naturally comes to a halt at , since only detects the disjoint components of the network . in the case of the lca ,the detection is stopped _ before _the partition density reaches zero , for only detects chains of links ( the keystone ensures a non - vanishing similarity ) , which in general are not classified as significant communities .it is worth mentioning that conducting this repeated analysis does not increase the computational cost significantly , since the cascading algorithm scales exactly like the community detection algorithm used at each iteration , and since the number of iterations that can be carried is small ( typically less than 10 ) .moreover , the size of the networks ( number of links and nodes ) effectively decreases after each iteration , further reducing the cost .to investigate the efficiency and the behavior of the cascading detection , we have applied our approach to 10 network datasets : arxiv cond - mat circa 2004 ( arxiv ) , brightkite online ( brightkite ) , university rovira i virgili email exchanges ( email ) , gnutella peer - to - peer data ( gnutella ) , internet autonomous systems ( internet ) , mathscinet co - authorship ( mathsci ) , pretty - good - privacy data exchange ( pgp ) , western states power grid ( power ) , protein - protein interactions ( protein ) and word associations ( words ) .at the first iteration , leading to an immediate complete detection of the community structure .( _ right _ ) results of a canonical use of the lca are shown in white and shades of blue correspond to subsequent iterations . the final state is again shown in black .note that all results are normalized to the number of assignable links in the original network .for the cpa , this corresponds to the number of links that belong to at least one 3-clique . for the lca, a link is considered assignable if at least one of the two nodes it joins have a degree greater than one . 
] first and foremost , our results show that cascading detection _ always _ improves the thoroughness of the community structure detection .indeed , fig .[ fig : cascading_detection ] shows that while a traditional use of the algorithms yields partitions with high fractions of unassigned links , the cascading approach leads to community structures where this fraction is significantly reduced .more precisely , the percentage of remaining assignable links drops from 54.1% to 26.3% on average in the case of cpa , and from 41.0% to 5.3% in the case of lca .note how cascading detection is more efficient when applied to the lca .this is due to the fact that the effective network gets increasingly sparser with each iteration , and that link clustering works equally well on sparse and dense networks , whereas clique percolation requires a high level of clustering to yield any results .figure [ fig : size_distribution ] confirms that as the cascading detection proceeds , smaller and previously masked communities are detected , regardless of the algorithm used .for instance , fig .[ fig : size_distribution_wa_cp ] clearly shows how a significant number of 3-cliques are overlooked by `` traditional '' use of the cpa .however , large communities are also found after many iterations , suggesting that the shadowing effect is not restricted to small communities .+ visual inspection of the detected communities not only verifies the quality of the hidden communities , but also confirms our intuition of the shadowing effect .for instance , fig .[ fig : wordassahn1 ] shows a triangle detected at the third iteration ( out of five ) of the lca on the words network .this structure was most likely missed during the initial detection due to the high degree of its three nodes , as speculated in fig .[ fig : ahn_shadowing ] . similarly , although was initially chosen ( according to the criterion discussed in sec .[ sec : cfinder ] ) for the cpa on the words network , a second iteration using has permitted the detection of other significant communities such as the one shown in fig . [fig : wordasscfinder ] .more complex structures and correlations are also brought to light using this approach .figure [ fig : wordassahn2 ] presents a star of high - degree nodes detected at the third iteration of the lca on the word association network .none of these nodes are directly connected to each other , but they share many neighbors . hence , once the main communities were removed here semantic fields related to toys , theatre and music the shadow was lifted such that this correlated , but unconnected structure could be detected . whether this particular structure should be defined as a relevant community is up for debate . keeping in mind that there are no consensus on the definition of a proper _ community _ in complex networks ,the role of algorithms , and consequently of the cascading method , is to infer plausible significant structures .internal link removal is destructive in the sense that information about shadowed communities is lost in the process , as some of the internal links are shared by more than one community .leaving these links untouched would certainly enhance the quality of the detected communities while further reducing the uncharted portion of the network . 
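the loop itself is simple ; the sketch below shows one way to wire a link - clustering step ( the jaccard similarity of eq . [ eq : similarity_ahn ] , aggregated by single linkage at a fixed threshold ) into the cascading procedure of sec . [ sec : cascading ] . function names and the simplified inner clustering step are ours and do not reproduce the published cpa / lca implementations .

```python
# minimal sketch of cascading community detection around a link-clustering step.
# the similarity is the jaccard index of eq. [eq:similarity_ahn]; the inner
# detection step is deliberately simplified (single-linkage at a fixed threshold).
import networkx as nx

def link_similarity(G, e1, e2, keystone):
    # jaccard index for two links sharing the node 'keystone'
    i = e1[0] if e1[1] == keystone else e1[1]
    j = e2[0] if e2[1] == keystone else e2[1]
    ni = set(G[i]) | {i}                       # inclusive neighbourhood of i
    nj = set(G[j]) | {j}
    return len(ni & nj) / len(ni | nj)

def detect_link_communities(G, threshold):
    # simplified single-linkage aggregation of adjacent links above the threshold
    parent = {frozenset(e): frozenset(e) for e in G.edges()}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for k in G.nodes():
        incident = [frozenset((k, v)) for v in G[k]]
        for a in range(len(incident)):
            for b in range(a + 1, len(incident)):
                if link_similarity(G, tuple(incident[a]), tuple(incident[b]), k) > threshold:
                    parent[find(incident[a])] = find(incident[b])
    groups = {}
    for e in parent:
        groups.setdefault(find(e), set()).add(e)
    return [c for c in groups.values() if len(c) > 1]   # single links stay unassigned

def cascading_detection(G, threshold=0.3, max_iters=10):
    G, layers = G.copy(), []
    for _ in range(max_iters):
        communities = detect_link_communities(G, threshold)
        if not communities:
            break
        layers.append(communities)
        # step 2: strip the internal links of the communities found in this pass
        G.remove_edges_from(tuple(e) for c in communities for e in c)
    return layers
```

each pass returns one layer of link communities and removes their internal links before the next pass , mirroring steps 1 - 3 above ; replacing the simplified inner step with the full cpa or lca gives the procedure described in the text .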
nevertheless , without using refined algorithm and by only resorting to our simple idea ,we obtain surprisingly good results .this suggest that shadowing is not necessarily due to the density of the prominent communities but rather to the stiffness of the resolution parameter . indeed , by using a cascading approach , we allow this parameter to vary artificially from a region of the network to the other , as the algorithm is effectively applied to a new network partially retaining the structure of the original network at each iteration .a once rigid global parameter can now flexibly adapt to small changes in the topology of the network to better reveal subtle structures .in conclusion , we have managed to significantly reduce the uncharted portion of a network by assigning an important fraction of seemingly random links to relevant communities .this significant improvement in community detection will help shrink the gap between analytical models and their real network counterparts .the difficult problem of accurately modeling the dynamical properties of real networks might be better tackled if one includes complex community structure through comprehensive distributions or solved motifs , two applications for which a reliable and complete partition is fundamental .moreover , this work opens the way to more subtle cascading approach , as envisioned at the end of the previous section .for instance , we could build an extreme version of the algorithm where communities are detected one by one .such an approach would enable a perfect adaptation of the resolution parameter to the situation at hand . and while it would certainly come to significant computational cost , it could lead to the mapping of the community detection problem unto simpler problems .if we accept to detect communities one at a time , the detection of the most significant ones can be done through well optimized methods , such as modularity optimization , which would otherwise be incapable of detecting overlapping communities .finally , perhaps the most significant observation that emerges from our work could be simply stated : since community structure occurs at all scales , global partitioning of overlapping communities must be done sequentially , cascading through the organizational layers of the network ..*summary of the results presented in fig .[ fig : cascading_detection ] . * cpa and lca are the percentages of remaining assignable links for a _normal _ use of the cpa and lca , respectively , whereas cpa and lca are the _ final _ percentages using a cascading approach .the number of iterations to reach the final state is also given . [ cols="<,^,^,^,^,^,^",options="header " , ]the authors wish to thank the gephi development team for their visualization tool ; yong - yeol ahn _et al . _ for the link community algorithm ; gergely palla _ et al . _ for the clique percolation algorithm ; all the authors of the cited papers for providing the network data ; and calcul qubec for computing facilities .this research was funded by cihr , nserc and frq nt .e. cho , s.a .myers , & j. leskovec , friendship and mobility : user movement in location - based social networks , _ acm sigkdd international conference on knowledge discovery and data mining ( kdd ) _ , ( 2011 ) .m. ripeanu , i. foster , & a. iamnitchi , mapping the gnutella network : properties of large - scale peer - to - peer systems and implications for system design , _ ieee internet computing journal _ , * 6 * , 50 - 57 , ( 2002 ) .
community detection is the process of assigning nodes and links to significant communities ( e.g. clusters , functional modules ) and its development has led to a better understanding of complex networks . when applied to sizable networks , we argue that most detection algorithms correctly identify prominent communities , but fail to do so across multiple scales . as a result , a significant fraction of the network is left uncharted . we show that this problem stems from larger or denser communities overshadowing smaller or sparser ones , and that this effect accounts for most of the undetected communities and unassigned links . we propose a generic cascading approach to community detection that circumvents the problem . using real network datasets with two widely used community detection algorithms , we show how cascading detection allows for the detection of the missing communities and results in a significant drop in the fraction of unassigned links .
multiple sequence alignment ( msa ) is one of the most fundamental tasks in bioinformatics .while there are many attempts to handle comparative sequence analyses without relying on msa , it still represents a starting point for most evolutionary biology methods .pairwise sequence alignment has been conceptualized as early as the 1970 s , starting with global alignments that attempt to align entire sequences and then introducing a decade later local alignments that focus on the identification of subsequences sharing high similarity .the standard computational formulation of both tasks is to maximize a scoring function obtained as the sum of the score for each aligned pair of residues ( nucleotides or amino acids , the highest scores being attributed to pairs of residues with highest similarity ) , minus some gaps penalties . since these seminal works, an abundant literature has flourished exploring this topic in many different directions , from the pairwise problem to the more complex task of aligning more than 3 sequences ( one of the very first attempts appearing in * ? ? ?* ) , from exact solutions that scale exponentially with sequence lengths to faster heuristic approaches used in the most common tools , and from the scoring formulation of the alignment problem that requires to choose the scoring parameters to probabilistic formulations in which those parameters are estimated .however , manually refined alignments continue to be superior to purely automated methods and there is a continuous effort to improve the accuracy of msa tools .we refer the reader to the reviews for more details on msa . + dynamic time warping ( dtw ) is a general version of the dynamic programing algorithm that solves exactly the pairwise biological sequence alignment problem .it is a well - known and general technique to find an optimal alignment between two given ( time - dependent ) sequences . in timeseries analysis , dtw is used for constructing an optimal alignment of two sequences with possible different lengths by stretching or contracting time intervals . in functional data analysis ,the time warping approach consists in modeling a set of curves exhibiting time and amplitude variation with respect to a common continuous process .thus , time warping techniques are used in many different areas concerned by sequence or curve comparisons , one of its most famous successes being on human - speech recognition . here , we propose a simple and fast procedure for msa , inspired from recent techniques of curve synchronization developed in the context of functional data analysis . in this setup, one often observes a set of curves which are modeled as the composition of an amplitude process governing their common behavior , and a warping process inducing time distortion among the individuals .specifically , , ] composed by three elementary steps , as shown in figure [ fig : pairwise_align ] . note that for biological reasons , such path is often restricted to never contain two consecutive steps in ( a gap in one sequence may not be followed by a gap in the other sequence ) .we do not use that constraint in what follows but a post - processing step could be applied to any alignment in order to satisfy it ( the simplest way is then to replace those 2 successive gaps with a ( mis)-match move ; from a graphical point of view a horizontal+vertical or vertical+horizontal move is replaced by a diagonal one ) . 
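as a concrete illustration of the dynamic programming formulation above , the following is a minimal needleman - wunsch - style sketch for global pairwise alignment with a linear gap penalty ; the scores ( match , mismatch , gap ) are placeholder values , not the ones used in our experiments .

```python
# minimal needleman-wunsch sketch of global pairwise alignment by dynamic
# programming: maximise the sum of (mis)match scores minus linear gap penalties.
# the scores below (match=1, mismatch=-1, gap=-2) are illustrative placeholders.
import numpy as np

def align(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    S = np.zeros((n + 1, m + 1))
    S[:, 0] = gap * np.arange(n + 1)
    S[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = S[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            S[i, j] = max(diag, S[i - 1, j] + gap, S[i, j - 1] + gap)
    # traceback: recover one optimal path as a monotone sequence of (i, j) grid points
    path, i, j = [(n, m)], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and S[i, j] == S[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch):
            i, j = i - 1, j - 1            # (mis)match: diagonal step
        elif i > 0 and S[i, j] == S[i - 1, j] + gap:
            i -= 1                         # gap in b: vertical step
        else:
            j -= 1                         # gap in a: horizontal step
        path.append((i, j))
    return S[n, m], path[::-1]

score, path = align("GATTACA", "GCATGCU")
```

the returned path is exactly the monotone sequence of elementary steps ( diagonal , vertical , horizontal ) discussed above , and is the object manipulated in what follows .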
for notational convenience , we extend the path to \times [ -\epsilon , m] ] with the convention that it is cd - lg ( continuous on the right , limit on the left ) . by letting the left limit of at , we moreover impose that and .note that we can define a unique generalized inverse function \to [ -\epsilon , n] ] , letting denote the smallest integer larger or equal to , we set ( and similarly for sequence ) . from the ( bare ) pairwise alignment of , we obtain a time warping function such that this alignment is entirely described by the association . in what follows, we denote this functional association by .note that we equivalently have .we also mention that when the pairwise alignment between is extracted from a multiple sequence alignment containing at least 3 sequences , say , the above relation should be associative in the sense that whenever and , we should also have .this property will be used in the next section . in this section ,we consider a set of sequences with values in a finite alphabet and respective lengths and we assume that they all share some latent ancestor sequence with values in and length . a multiple sequence alignment of the set of sequences is given by the knowledge of _ homologous _ positions as well as _ inserted _ positions .to fix the ideas , let us consider for example the following multiple alignment of 3 sequences where the first line indicates homologous positions ( h ) while the other are inserted positions .homologous positions describe the characters that derived from the ancestor sequence , so that there are exactly homologous positions in the alignment ( whenever an ancestral position was deleted in all the sequences , this can not be reconstructed and such position would not appear in our reconstructed ancestral sequence ) . for each homologous position , there is at most one character in sequence that is associated to it .this means that homologous columns in the multiple sequence alignment may contain matches , mismatches and even gaps ( when the corresponding ancestral position has been deleted in that particular sequence ) . between two consecutive homologous positions, each sequence might have a certain number of characters that are inserted .these characters do not derive from an ancestral position .note that these insert runs are not aligned in the sense that the choice of how to put the letters in the insert regions is arbitrary and most msa implementations simply left - justify insert regions .now , given the set of sequences , this multiple alignment may be completely encoded through the knowledge of the homologous positions in the sequences ( see section [ sec : algo ] ) .our goal is to estimate the alignment of each to the ancestor and thus the global alignment of the set of sequences , by relying on the set of pairwise alignments of each to all the other sequences . 
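to make the correspondence concrete , a small sketch of how an alignment path can be turned into a discrete warping map between the positions of the two sequences ( together with a discrete generalised inverse ) is given below ; the conventions used here ( each position sent to the largest aligned position , no offset at the boundary ) are simplifications of the right - continuous construction above , and the names are ours .

```python
# sketch: turn an alignment path (grid points (i, j) from the dynamic program)
# into a discrete warping map between residue positions of the two sequences.
def warping_from_path(path):
    # send each position i of the first sequence to the largest position of the
    # second sequence that the alignment path associates with it
    phi = {}
    for i, j in path:
        if i > 0:
            phi[i] = max(j, phi.get(i, 0))
    return phi

def generalized_inverse(phi, m):
    # smallest i whose image reaches at least j (a discrete generalised inverse)
    inv = {}
    for j in range(1, m + 1):
        inv[j] = min((i for i, v in phi.items() if v >= j), default=max(phi))
    return inv

# toy path for aligning x = "GAT" against y = "GT": match, gap in y, match
path = [(0, 0), (1, 1), (2, 1), (3, 2)]
phi = warping_from_path(path)              # {1: 1, 2: 1, 3: 2}
print(generalized_inverse(phi, 2))         # {1: 1, 2: 3}
```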
to do this, we will implicitly assume that a ) the multiple sequence alignment of all the sequences is well approximated by the alignment we would obtain from all the sequences ; b ) all the pairwise alignments of are good approximations to the extracted pair alignments from the multiple alignment of all sequences .+ first , any sequence is derived from the common ancestor sequence through an evolutionary process that can be encoded in the alignment of these two sequences .this alignment induces a time warping process \to [ -\epsilon , n] ] to ] and + 1)} ] to ] where ] nbins[ $ ] table with rows and columns filled with gaps symbols insert the homologous positions from table at correct positions in table insert the inserted positions from table at correct positions in table return(t )
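the table - filling step outlined in the pseudocode above can be made concrete as follows ; here ` homologous[s] ` maps each ancestral ( homologous ) position to the character index it corresponds to in sequence s ( the key is absent if that position was deleted in s ) , and insert runs are left - justified as in most msa implementations . the data structures and names are ours , chosen only to illustrate the construction .

```python
# sketch of the msa table construction: homologous positions become aligned
# columns, and the characters between two consecutive homologous positions are
# left-justified into insert columns padded with gap symbols.
def build_msa(seqs, homologous, n_anc):
    # seqs: list of strings; homologous[s]: dict {ancestral position h -> index in seqs[s]}
    msa = [[] for _ in seqs]
    prev = [-1] * len(seqs)                      # last consumed character index per sequence
    for h in range(n_anc):
        # insert run preceding homologous column h, left-justified and gap-padded
        runs = []
        for s, seq in enumerate(seqs):
            pos = homologous[s].get(h)
            end = pos if pos is not None else prev[s] + 1
            runs.append(seq[prev[s] + 1:end])
        width = max(len(r) for r in runs)
        for s, r in enumerate(runs):
            msa[s].extend(list(r) + ["-"] * (width - len(r)))
        # the homologous column itself (gap if the ancestral position was deleted in s)
        for s, seq in enumerate(seqs):
            pos = homologous[s].get(h)
            msa[s].append(seq[pos] if pos is not None else "-")
            if pos is not None:
                prev[s] = pos
    # trailing insert run after the last homologous column
    tails = [seq[prev[s] + 1:] for s, seq in enumerate(seqs)]
    width = max(len(t) for t in tails)
    for s, t in enumerate(tails):
        msa[s].extend(list(t) + ["-"] * (width - len(t)))
    return ["".join(row) for row in msa]

seqs = ["ACGT", "AGT", "ACT"]
hom = [{0: 0, 1: 1, 2: 2, 3: 3}, {0: 0, 2: 1, 3: 2}, {0: 0, 1: 1, 3: 2}]
print(build_msa(seqs, hom, 4))   # ['ACGT', 'A-GT', 'AC-T']
```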
we propose an approach for multiple sequence alignment ( msa ) derived from the dynamic time warping viewpoint and recent techniques of curve synchronization developed in the context of functional data analysis . starting from pairwise alignments of all the sequences ( viewed as paths in a certain space ) , we construct a median path that represents the msa we are looking for . we establish a proof of concept that our method could be an interesting ingredient to include into refined msa techniques . we present a simple synthetic experiment as well as the study of a benchmark dataset , together with comparisons with 2 widely used msa softwares . \1 . departamento de estadstica , universidad carlos iii de madrid , c/ madrid , 126 - 28903 getafe , spain . e - mail : ana.arribas.es + 2 . sorbonne universits , universit pierre et marie curie , universit paris diderot , centre national de la recherche scientifique , laboratoire de probabilits et modles alatoires , 4 place jussieu , 75252 paris cedex 05 , france . e - mail : catherine.matias.cnrs.fr _ key words and phrases : alignment ; dynamic time warping ; multiple sequence alignment ; warping _ +
the motivation for this paper stems from an important , but seemingly forgotten 1982 report by prof . marshall p. tulin presented during the symposium on naval hydrodynamics , titled _ an exact theory of gravity wave generation by moving bodies , its approximation and its implications _ . some thirty years after its publication , tulin wrote of his original motivation for pursuing the former study : _ what were the relations and connections among these various nonlinear approximations ray , `` slow ship , '' second order , formal straining , and guilloton that had arisen by the end of the 1970s ? [ ...
] i had earlier in the 1970s become intrigued by the davies transformation of the nonlinear free - surface problem , which was revealed in milne - thompson s legendary banquet speech [ in 1956 ] . my hope was that my extension of the davies theory would provide an exact result in analytical form , which even in its complexity could then be subject to various approximations , the connections of which could thereby be discerned . and so it turned out . _ in the 1982 paper , tulin sought to derive a rigorous mathematical reduction of the water wave equations in such a way that certain nonlinear contributions within the free surface equations could be preserved . the resultant model was analytically simple , and took the form of a single complex - valued linear differential equation . the theory was also powerful , and provided a formulation that could relate the geometry of a moving body directly with the resultant free - surface waves . however , several important and surprising issues were raised by tulin regarding the model and its interpretation , and in particular , he had noted a paradoxical behaviour of the model at low speeds . in the years that followed , perhaps owing to the difficulty of the model s derivation , tulin s fundamental questions were never re - addressed . in this paper , we shall present an asymptotically consistent derivation that corrects tulin s model , and puts to rest many of the issues previously highlighted .
more specifically , we shall present an explicit solution written in terms of a single integral that properly describes the form of water waves produced by two - dimensional moving bodies at low speeds . then , by applying the asymptotic method of steepest descents , we are able to observe how the production of free - surface waves will change depending on the deformation of integration contours connected to the geometry of the moving body . this approach provides an intuitive and visual procedure for studying wave - body interactions . the essential derivation behind tulin s model begins from bernoulli s equation applied to a free surface with streamline , , where is the fluid speed , the streamline angle , the potential , and the non - dimensional parameter is the square of the froude number for upstream speed , gravity , and length scale . if the sinusoidal term is split according to the identity then can be written in complex - valued form where is an analytic function of the complex potential , , and the above is evaluated on where . the rather curious substitution of is attributed to , who had argued that if is considered to be small , then yields a linearized version of bernoulli s equation ( in ) that preserves the essential nonlinearity ( in ) describing the structure of steep gravity waves . inspired by this idea , tulin considered the extension to general free surface flows over a moving body . since the function in the curly braces of is an analytic function in , except at isolated points in the complex plane , then analytic continuation implies where by matching to uniform flow , and thus as . the function is purely imaginary on . in the case of flow without a body or bottom boundary , , but otherwise , will encode the effect of the obstructions through its singular behaviour in the complex plane . if the nonlinear contribution , , is neglected in , then the exact solution can be written in terms of the factor $ \mathrm{e}^{-\frac{1}{\epsilon} \int^{w} \mathcal{Q}(s) \, \mathrm{d}s} $ . this process seems to yield the exponentially small surface waves at low speeds . however , tulin noted that as , there would be locations on the free surface where , and this would lead to unbounded steepness . he wrote of _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ he thus proposed the following result : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [ tulinquote ] _ the most important comment to makeis that for given , no matter how small , this so - called low speed theory is not valid for sufficiently low speeds .it is a theory valid for low , but not too low speeds ! _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the matter was left at that , and in the three decades following tulin s ingenious paper , the peculiarities surrounding the asymptotic breakdown of were never directly re - addressed ( though we mention the paper by which develops numerical solutions for the case and ) .note that in addition to the investigation in the limit , tulin had also intended to produce an _ exact _ reduction , where the term in was handled through a theoretically posited nonlinear coordinate transformation .however , it is never clear how this transformation is used or derived in practice , except through nonlinear computation of the full solutions . 
independently from tulin , e.o . tuck later presented a series of papers where he attempted to distill the wave - making properties of wave - body problems into a linear singular equation . the equation presented [ eqn ( 22 ) of ] was where is the height of the free surface , is a function related to the moving body , and is an integral operator known as the hilbert transform ( to be introduced in [ sec : form ] ) . the difficulty in solving is that the hilbert transform is a _ global _ operator , requiring values of over the entire domain . in the work , tuck explained how the action of could be viewed as similar to that of the differential operator . this reduction was mostly pedagogic in nature , but tuck was motivated by the fact that and the differential operator will act similarly in the case of sinusoidal functions . he went on to study the various properties of the singular differential equation that depend on the specification of the ` body ' given by . apparently , tuck had been unaware of tulin s ( ) work , but a chance meeting of the two occurred during a conference leading to the publication of . we are fortunate enough to possess the archived questions of the meeting , where we discover that tulin had asked the following question : _ isn't it true that the two dimensional wavemaker problem can be presented in terms of an ordinary differential equation in the complex domain , at least to some higher order of approximation ? _ tuck replied that he was unsure of the generality of the reduction to problems including different geometries , but noted the connection to the reduction : _ i do not know the answer to this question ... my `` f(x ) '' in some way represents a very general family of `` wavemakers '' , with structure in both spatial dimensions , and i have doubts as to whether the problem can then be converted ( exactly ) to a differential equation .
on the other hand , a few years ago i in fact used the method that you describe , and it is associated with the approximation . _ although tuck s toy reduction should only be regarded as illustrative ( the governing differential equation should rather be first order ) , what is apparent in his work is the desire to systematically reduce both the nonlinearity of bernoulli s equation , and the global effect of into a single ordinary differential equation . in particular , it is tuck s search for a reduction of the operator , , that was missing from earlier works on this topic ( including ) . certainly , tulin and tuck were not the only ones to seek simpler reductions or formulations of the nonlinear equations for wave - body interactions , and indeed , tulin relates his work to the integral models proposed by and . reviews of these and other models were presented by and , and many others . however , what distinguishes tulin and tuck s work is the shifted focus towards the analytic continuation of the flow problem into the complex domain ; indeed , tulin notes his strong motivation by the work of in his review .
as we have noted in [ sec : subtulin ] , the low - froude or low speed limit of is the essential approximation in which analytical results can be derived . the subtleties of studying this singular limit can be traced back to a seminal report by , who detailed certain oddities with the previously developed analytical approximations of free surface flow past a submerged body . chief amongst such oddities was the fact that the individual terms of a series approximation in powers of would fail to predict surface waves . thus , one might decompose the surface speed into a regular series expansion and an error term , $ \bar{q} $ . the challenge , ogilvie realized , was that water waves were exponentially small in , with , and thus _ beyond - all - orders _ of any individual term of the regular series . by linearizing about the zeroth order solution , for , and strategically preserving certain terms in bernoulli s equation , ogilvie developed a general analytical approximation for the exponentially small surface waves . the approximation , however , was not asymptotically consistent , and the search for a complete numerical and analytical treatment of the low froude limit would inspire many papers in the subsequent years . one of the key issues we will explore in this paper addresses the question of how many terms must be included in the linearization of in order to obtain the exponential . originally had chosen , but later revised to in ( who quoted the work of and in particular , the study by in applying the wkb method to streamline ship waves ) . table [ tab : res ] groups the relevant papers by their historical significance : origin of the low - froude paradox ; two - dimensional and three - dimensional linearizations ; numerical solutions ; exponential asymptotics applied to water waves ; and review articles . we shall not pursue , in great detail , the history of the low froude problem that followed ogilvie s seminal report , but instead refer to the review papers by and , particularly , where certain aspects of the low froude difficulty are discussed . additional historical details are presented in 1 of , and a selection of papers on the problem is presented in table [ tab : res ] . the method we apply in this paper , which combines an approach of series truncation with the method of steepest descents , is distinct from the previously listed works . we have three principal goals in this work . _ ( i ) we wish to demonstrate how tulin s formulation can be systematically derived and studied in the low speed limit . _ the source of tulin s puzzling results can be resolved through better understanding of two effects : first , the _ ad - hoc _ linearization of the nonlinear function ; and second , the role of the forcing function , . we clarify these details and demonstrate , through numerical solutions of the full nonlinear water wave equations , the convergence of different proposed models in the limit . let be the fluid speed corresponding to the water waves , as in . one of our principal results ( presented on p.
) is to demonstrate that the exponentially small waves , , are described to leading order by the first - order equation , {\bar{q}}= r(w ; { \hat{{\mathscr{h}}}}[{\bar{\theta}}]),\ ] ] where and is then given by the sum of and its complex conjugate .different choices of the series truncation yield different versions of the right - hand side of , but only change the predicted wave amplitudes by a numerical factor .the leading order contains the prescription of the moving body , and can thus be related to tuck s function in . the formulation in terms of the speed , , rather than tulin s combined function in is more natural , but we will relate tulin s equation to our own in [ sec : tulinconnect ] and appendix [ sec : tulinconnect2 ] . _ ( ii ) we also study the associated integral form of the solution using the method of steepest descents ._ we shall demonstrate how the appearance of surface waves can be associated with sudden deformations of the integration contours if the solution to is analytically continued across critical curves ( stokes lines ) in the complex plane .this process is known as the stokes phenomenon .the novelty of this steepest descents methodology is that it will allow us to not only relate the surface waves directly to the geometry of the moving body , but it will also allow us to observe how integration contours change depending on the geometry of the obstruction .in particular , we conclude that provided there exists a solution of the full potential flow problem , there are no issues with the approximations as limit . _( iii ) our last goal is to provide a link between the tulin approximation , our corrected model , and also the current research on low - froude approximations . _let us now turn to a brief summary of the history of the low froude problem .let us consider steady irrotational two - dimensional flow past a moving body in the presence of gravity , .the body is associated with a length scale , and moves at constant velocity .for instance , this body may correspond to an obstacle at the bottom of the channel ( fig .[ formstepcyl ] , left ) , a submerged object ( fig .[ formstepcyl ] , right ) , or a surface - piercing ship ( fig .[ formship ] , left ) .we shall state more precise geometrical restrictions in [ sec : bdint ] . .flows past closed bluff objects can be complicated through wake separation ( seen right ) , for which the nature of the separation points is unclear .[ formstepcyl],scaledwidth=100.0% ] the velocity potential , , satisfies laplace s equation in the fluid region , . on all boundaries, the kinematic condition implies the normal derivative is zero , , while on the free surface , bernoulli s equation requires that subsequently , all quantities are non - dimensionalized using the velocity and lengths scales , and , and we introduce a cartesian coordinate system such that the body is fixed in the moving frame of reference . with , we define the complex potential , , and the complex velocity is given by , here is the stream function , is the fluid speed , and is the streamline angle , measured from the positive -axis . 
without loss of generality , we choose on the free surface and within the fluid .we define the logarithmic hodograph variable , and seek to solve the flow problem inversely , that is , use as an independent variable and seek in the lower - half -plane .the advantage of this hodograph formulation is that with the free surface at , its position is known in the -plane , even though its shape in the physical -plane is unknown .differentiating bernoulli s equation with respect to and using then yields the non - dimensionalized form . in theory , the methods we present in this paper will apply to most general two - dimensional free - surface flows . in practice , however , in order to make analytical progress , we will be constrained by problems in which the geometry of the body is known through the specification of the angle , , in terms of the complex potential , .as specific examples , let us focus on two representative geometries : ( a ) flow past a localized obstruction on a channel bottom ( fig . [ formstepcyl ] , left ) , and ( b ) flow past a semi - infinite surface piercing ship ( fig .[ formship ] , left ) .for flow past a varying channel bottom with dimensional upstream depth , we select the length scale , so that the flow region in the -plane consists of an infinite strip between . the stripis then mapped to the upper - half -plane , using where under , the free surface is mapped to , the channel bottom to , and the flow region to . for flows past a semi - infinite surface piercing body , we can choose the length scale to be , where is a representative scale of the potential along the body ( see ( 2.3 ) in for further details ) .we assume that the free surface attaches to the body at a stagnation point , chosen without loss of generality to be . in the potential plane ,the flow region consists of . on the boundary , corresponds to the free surface and to the solid body .since the flow is already contained within a half - plane , we do not need a further transformation , but we shall set so as to use the same notation .bernoulli s equation provides an explicit relationship between the real and imaginary parts of on the free surface .however , because is analytic in the upper or lower - half -plane and tends to zero far upstream , there is a further hilbert - transform relationship between its real and imaginary parts . 
applying cauchy s theorem to over a large semi - circle in the upper ( channel flow ) or lower ( surface - piercing flow ) half - planes , and taking the real part gives the principal value integral , where for flow in the channel configuration and for flow in the ship configuration .the range of integration can be split into an evaluation over the solid boundary and over the free surface .we write (\xi ) \quad \text{for },\ ] ] where denotes the hilbert transform operator on the semi - infinite interval , (\xi ) = \frac{1}{\pi } \dashint_{0}^{\infty } \frac{\theta(\xi')}{\xi ' - \xi } \ { \operatorname{d\!}{}}{\xi'}.\ ] ] we assume that for a given physical problem , the angle is known for the particular geometry , and thus the function that appears in is known through calculating note that it is somewhat misleading to describe as known " for since in practice , we would specify as a function of the physical coordinates .however , specifying different forms of as a function of ( or the potential , ) is typically sufficient to obtain the qualitatively desired geometry shape .for instance , we consider the step geometry , where , which corresponds to a step of angle .such topographies have been considered by , , , others . in this case , and yields similarly , a semi - infinite ship with a single corner of angle can be specified using such two - dimensional hull shapes have been considered in the works of , , , and others .choosing the dimensional length , where is the value of the potential at the corner sets its non - dimensional position to .then using , the function is given by where recall the ship problem does not require an additional mapping to the half plane , so we can set to preserve the notation . in this paper , we will only consider geometries , as specified through in , which contain strong singularities in the complex plane that is , poles or branch points .for instance , the step and ship contain singularities at solid corners and stagnation points .we will see in [ sec : steep ] that these singularities are often responsible for the creation of the surface waves .weaker singularities , such as for the case of flow past a smoothed hull with a discontinuity in the curvature will be the subject of a forthcoming paper ( see also 7 of ) .flows past closed objects also present challenging cases for study because typically , the geometry is not known as a function of the potential variables , and there are further difficulties with the prediction of wake separation points .for instance , potential flow over a circular cylinder ( fig .[ formstepcyl ] , right ) was computed without a free surface by where it was shown that the properties of the separation points will vary depending on the froude number . in theory , the methodology discussed in this paper can be applied to these challenging flows , but may require hybrid numerical - asymptotic treatments .we shall return to discuss such cases in [ sec : discuss ] .in this section , we demonstrate that the regular expansion of the solution of bernoulli s equation and the boundary integral is divergent , and moreover fails to capture the free surface waves . in the limit , we substitute a regular asymptotic expansion , and into the two governing equations , and obtain for the first two orders , [ asym01 ] , & \qquad \theta_1 & = -q_0 ^ 2 { \frac{\deq_0}{{\operatorname{d\!}{}}\phi}}. \label{asym1}\end{aligned}\ ] ] thus , at leading order , the free surface is entirely flat , , and this solutionis known as the rigid wall solution . 
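As a brief aside on implementation: the half-range Hilbert transform defined above can be evaluated numerically by subtracting out the singular part before applying a standard quadrature rule. The following Python sketch illustrates the idea; the sample angle distribution theta_sample is purely illustrative and is not the step or ship geometry (those enter the formulation through q_s rather than through an explicit theta on the boundary).

    import numpy as np
    from scipy.integrate import trapezoid

    def hilbert_half_line(theta, xi, xi_max=1.0e3, n=200_001):
        # (1/pi) PV int_0^inf theta(s)/(s - xi) ds, truncated at s = xi_max.
        # subtracting theta(xi) removes the singularity; the subtracted piece
        # integrates exactly to a logarithm.
        s, h = np.linspace(0.0, xi_max, n, retstep=True)
        with np.errstate(invalid="ignore", divide="ignore"):
            f = (theta(s) - theta(xi)) / (s - xi)
        i0 = np.argmin(np.abs(s - xi))
        f[i0] = (theta(xi + h) - theta(xi - h)) / (2.0 * h)   # removable limit, theta'(xi)
        return (trapezoid(f, s) + theta(xi) * np.log((xi_max - xi) / xi)) / np.pi

    theta_sample = lambda s: 0.25 * np.pi * np.exp(-s)        # illustrative surface angle
    print(hilbert_half_line(theta_sample, xi=2.0))

The truncation at xi_max is adequate here because the sample theta decays rapidly; slowly decaying angle distributions would need the tail contribution handled separately.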
the leading - order speed , , is given by for the step , or for the ship , but fails to capture the surface waves . because all the subsequent orders in the asymptotic scheme depend on derivatives of the leading - order solution , it stands to reason that it is impossible to encounter a sinusoidal term , despite going to any algebraic order in .using the numerical algorithms outlined , we calculate the numerical solutions of and for the case of a rectangular ship , with and at in fig .[ fig : profilestern ] .indeed , it is seen that the leading - order asymptotic approximation fails to capture the wave - like behaviour , but provides a good fit to the mean speed .more details of these numerical solutions will be discussed in [ sec : numerics ] .the leading - order approximation , , contains singularities in the analytic continuation of off the free surface , and into the complex plane , .for instance , in the case of the step geometry , we see that contains branch points at and , which correspond to the corner and stagnation points .because of the singularly perturbed nature of bernoulli s equation in the limit , we can see that at each order in the asymptotic procedure , the calculation of and will depend on differentiation of the previous order , and . since the leading - order approximation is singular , this has the effect of increasing the power of the singularities with each subsequent term in the approximation .thus in the limit that , and will diverge . as argued in the work of _ e.g. _ , the divergence at is captured through a factorial over power ansatz , where at the particular singularity .if there are multiple singularities contributing to the divergence ( see for an example ) , then we must include a summation over similar ansatzes of the form where at each individual singularity .there exist other cases where a more general form of the divergence is required , and this is documented in . in order to derive the components , , , and , we can examine the form of the system , and take the limit . in this paper , the late - orders behaviour is not a crucial part of the analysis , but we collect the functional forms of the components in appendix [ sec : divergence ] .in [ sec : reduce ] and [ sec : steep ] , we will demonstrate that the leading - order water waves can be described by an integral equation , which can then be approximated by deforming the path of integration into the complex plane. thus it will be necessary for us to study the analytic continuation of the free surface quantities , and into the complex -plane ( or analogously , the complex -plane ) .let us seek the complexified versions of bernoulli s equation and the boundary integral .we set . 
for analytic continuation into the upper half - plane, we can verify that in the limit from the upper half - plane , where the second integral on the right hand - side corresponds to the counterclockwise integral around along a small semi - circle .an analogous argument applies for analytic continuation into the lower half - plane .writing , we have that the analytic continuation of the hilbert transform , , is given by (\zeta ) = { \hat{{\mathscr{h}}}}[\theta](\zeta ) - { \mathrm{i}}{\mathrm{k}}\theta(\zeta),\ ] ] where for analytic continuation into the upper half - plane , for the continuation into the lower half - plane , and we have introduced the notation for the integral (\zeta ) = \frac{1}{\pi } \int_{0}^{\infty } \frac{\theta(\xi')}{\xi ' - \zeta } \ { \operatorname{d\!}{}}{\xi'}.\ ] ] thus , the analytic continuation of the boundary integral equation is given by \ ] ] there is a somewhat subtle aspect of analytically continuing and off the free surface , and the relationship of these quantities with the physical complex velocity given by .the two quantities on the left hand - side of are complex - valued , but reduce to the physical speed and velocities on the free surface , .although their _ combined _ values are related to the complex velocity , , their individual values are not known without further work . as an example of this particular subtlety, we may consider the analytic continuation of , evaluated along the physical boundary , .in general , this value will not be the physical angle as defined in or .instead , the analytic continuations of the individual components , and , are related to the physical angle through . in the bulk of this paper , we will focus on the analytic continuation into the upper half - plane , and thus set .since and are real on the axis , then their values in the lower half - plane follows from schwarz s reflection principal ( see [ sec : hankel ] ) .then the two analytically continued governing equations are given by .\label{bdint_anal } \end{gathered}\ ] ] it is possible to combine the two equations into a single complex - valued integro - differential equation .we write write the sine term in terms of complex exponentials using , giving } - \left(\frac{q}{q_s}\right)^j { \mathrm{e}}^{-{\hat{{\mathscr{h}}}}[\theta]}\right].\ ] ] we then simplify using , and substitute into bernoulli s equation to obtain the combined integro - differential formulation .we seek to study the analytically continued equations for the speed , , and angle , off the free surface and into the upper half--plane .the combined integro - differential formulation is } - q_s^2 { \mathrm{e}}^{{\mathrm{j}}{\hat{{\mathscr{h}}}}[\theta ] } \biggr ] = 0.\ ] ] which forms two equations ( real and imaginary parts ) for the two unknowns and . in , contains the problem geometry as defined by and , the analytically continued hilbert transform , , is defined in , and we have defined the constants for the channel geometry , we use the conformal map from the potential strip to the -plane given by , while for the surface - piercing geometry , . note that in all subsequent formulae , we will sometimes use and interchangeably in the functional notation , _e.g. _ writing or depending on the particular context . 
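The sign conventions in the continuation formula above are just the Plemelj jump conditions for the Cauchy integral, and they can be checked numerically by evaluating the integral a small distance off the free surface. A quick sketch, reusing hilbert_half_line from the earlier aside (again with the illustrative theta_sample):

    import numpy as np
    from scipy.integrate import trapezoid

    theta_sample = lambda s: 0.25 * np.pi * np.exp(-s)
    xi, delta = 2.0, 1.0e-2                      # small offset into the upper half-plane
    s = np.linspace(0.0, 1.0e3, 2_000_001)
    cauchy = trapezoid(theta_sample(s) / (s - (xi + 1j * delta)), s) / np.pi
    pv = hilbert_half_line(theta_sample, xi)     # from the earlier sketch
    print(cauchy)                                # tends to pv + i*theta(xi) as delta -> 0
    print(pv + 1j * theta_sample(xi))

The two printed values agree up to corrections of order delta, consistent with the upper-half-plane continuation quoted above; approaching from the lower half-plane flips the sign of the local term.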
in general ,primes ( ) will be used solely for differentiation in .the reader will notice that we have somewhat strayed from our original motivation of studying tulin s formulation presented in [ sec : subtulin ] , where the governing equations are posed in terms of a combined analytic quantity , , in .tulin s formulation can be adjusted to use the substitution , where as in , which modifies the analytically continued bernoulli s equation to for function , with , which had resulted from the sine reduction , and for some that has yet to be specified .the adjustment of including serves to orient the fluid with respect to the solid boundary , and facilitates comparison with our .we highlight two main differences between our and tulin s formulation .first , in ours , we have chosen to work with analytic continuations of and independently , whereas tulin has combined , so as to advantageously write the left hand - side of in an elegant form .however , note the nonlinear term , , can not be easily written as a function of , and this nonlinear term will be important in the analysis to come . it is unclear whether it is possible to provide a complete formulation without independentally treating and , as we have done in our .the second main difference is that is self - contained and thus readily solved numerically or asymptotically , whereas the -function in is unknown . like the introduction of , while the introduction of the -function is elegant in appearance , its connection to the low - froude asymptotics is difficult to elucidate .we have included a discussion of tulin s original interpretation of the -function in appendix [ sec : qfunc ] . to summarize the conclusions of the discussion : the -function can be written as a boundary integral evaluated over the solid boundary and its reflected image in the potential or -plane .however , this resultant quantity can not be known without first solving the free - surface problem and .thus , should rather be written and it becomes difficult to unravel components of that are crucial to the determination of the free - surface waves .our formulation sidesteps this by explicitly including the local and global aspects of the problem in form that can be computed numerically and asymptotically .once we have understood how is reduced in the , it will be possible to demonstrate more clearly how tulin s formulation relates .we will do this in appendix [ sec : tulinconnect2 ] .the study of the nonlinear integro - differential equation is primarily complicated because of the nonlocal nature of the hilbert transform . in [ sec : introtuck ] , we had reviewed the efforts of e.o .tuck , who attempted to convert into a local operator . in this section and the next , we shall demonstrate that , while the hilbert transform is crucial for determining the corrections to the rigid - body solution ( _ c.f . _ the dotted curve in figs . 
[fig : profilestern ] ) , the waves can be derived largely independently from .we first introduce the truncated regular expansion [ qthetasub ] where and are seen to be waveless that is to say , they do not possess any oscillatory component .the oscillatory components will be introduced via a linearization of the solution about the regular expansion .we set , where we assume .our strategy is to substitute into the integro - differential equation , and then separate the terms proportional to onto the left hand - side .simplification then leads to the main result of this paper .[ result : integro ] linearizing the water wave equations about a regular series expansion truncated at terms gives the following integro - differential equation for the perturbation , [ simpsys ] {\bar{q}}= r(w ; { \hat{{\mathscr{h}}}}[{\bar{\theta } } ] ) + { \mathcal{o}}({\bar{\theta}}^2 , { \bar{q}}^2).\ ] ] where ) = -{\mathcal{e}_\mathrm{bern}}+ { \mathrm{i}}{\hat{{\mathscr{h}}}}[{\bar{\theta}}]\frac{\cos\theta_r}{q_r^2}. \label{rfunc}\end{gathered}\ ] ] and and are given in and , and the error term , , represents the error in bernoulli s equation , and is given by note that the function that appears in is precisely the same as the singulant function that describes the factorial - over - power divergence and whose value is found in appendix [ sec : divergence ] through the study of the regular asymptotic expansion of the solutions .similarly , in the next section , will be related to , which appears in .we will use the notation of and ) ] must be written as a principal value and residue contribution , as in .in other words , though it is written in a complex - valued form , will necessarily reduce to a real - valued equation along the free surface . the crucial decision in studyingthe system is to choose how many terms to include in the truncated series , and , which determines the forcing function .that is , we must choose the truncation value of . in early studies of the low - froude problem ,researchers had used , and then later corrected to . however , at a fixed value of , the truncation error in the divergent asymptotic expansion will continue to decrease for increasing values of .it reaches a minimum at an optimal truncation point , , and diverges afterwards . for small ,the optimal truncation point is found where adjacent terms in the series are approximately equal in size , or . using the divergent form , we find that the optimal truncation point is given by thus , for a fixed point in the domain , , and in the limit , the optimal truncation point , , tends to infinity , and we must include infinitely many terms of the series . in [ sec : numerics ] , we will examine the effect of different truncations on the comparison between numerical solutions and asymptotic approximations .most emphatically , we remark that apart from the hidden terms multiplying in the brackets of , this reduced equation is an exact result to all orders in .we have only employed the fact , but the inhomogeneous term , is exact .when the regular series expansions are optimally truncated , and are exponentially small , and consequently , the will not be needed to derive the leading - order behaviour of the exponentials . 
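The balance quoted above, with adjacent terms of the divergent series comparable in size near the optimal truncation point, is easy to see numerically for a model factorial-over-power series. In the Python sketch below the values of gamma and |chi| are illustrative rather than those of a particular geometry:

    import numpy as np
    from scipy.special import gammaln

    eps, gam, chi = 0.1, 1.0, 1.5
    n = np.arange(1, 61)
    # log of the n-th term  eps^n * Gamma(n + gam) / |chi|^(n + gam)
    log_term = n * np.log(eps) + gammaln(n + gam) - (n + gam) * np.log(chi)
    n_opt = n[np.argmin(log_term)]
    print("optimal truncation at n =", n_opt, " compared with |chi|/eps =", chi / eps)
    print("smallest term:", np.exp(log_term.min()), " exp(-|chi|/eps):", np.exp(-chi / eps))

The smallest retained term is of the same exponential order as exp(-|chi|/epsilon), up to an algebraic prefactor, which is why the remainder beyond optimal truncation is precisely where the surface waves live.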
though it is now written as a first - order differential equation , the system is difficult to solve , since it involves real and complex components , in addition to the integral transforms embedded in the inhomogeneous term .the key in the following sections will be to argue that the hilbert transform can be neglected .the main goal in this section is to demonstrate how the reduced integro - differential equation can be written as an explicit integral , which is then approximated using the method of steepest descents ( for details of this method , see for example ) .it is , in fact , this steepest descents analysis that rectifies the unanswered issues from tulin s work . in this paper, we will present the main ideas of a steepest descents analysis applied to the integral solution of . in practice, the individualized steepest descent paths must be studied for each different moving body ( as specified by the function in ) . in a companion paper , we will derive the full steepest descents structure for the case of flow over a step and past the stern of a ship .this individualized study requires careful consideration of the branch structures and numerically generated contours of the integrand , so we relegate the details to the companion study . to begin , we integrate to obtain in the above integral , the initial point of integration , , is chosen to be the particular point that causes the divergence of the late - order terms .thus , the defined in matches the definition of the singulant function in the late - orders ansatz . next integrating , - 3{\mathrm{i}}{\mathrm{j}}\int_{w^*}^w \frac{q_1(\varphi)}{q_0 ^ 4(\varphi ) } \ , { \operatorname{d\!}{}}{\varphi},\ ] ] where is the constant of integration , and can be chosen wherever the integral is defined .we also note that , given by , is related to given in .thus we can write \exp \left ( 3{\mathrm{i}}j k \int_{w^*}^w \frac{q_1(\varphi)}{q_0 ^ 4(\varphi ) } \ , { \operatorname{d\!}{}}{\varphi}\right).\ ] ] solving now yields a solution for the integro - differential equation ( though note that it is not quite explicit due to the reliance on the hilbert transform ) , [ finalint ] \biggl\ { i(w ) + \text{const . } \biggr\ } \times { \mathrm{e}}^{-\chi(w)/{\epsilon}}\ ] ] where we have introduced the integral ) \left [ \frac{1}{q(\varphi ) } + { \mathcal{o}}({\epsilon})\right ] \ , { \mathrm{e}}^{\chi(\varphi)/{\epsilon}}\ , { \operatorname{d\!}{}}{\varphi}.\ ] ] which will be used to extract the pre - factor of the exponential .the start point , , can be conveniently chosen based on any additional boundary or radiation conditions . for channel flow ,the water wave problem will naturally impose a radiation condition which requires the free surface to be flat at ( upstream ) , so taking , the constant of integration in is zero .similarly , for the surface - piercing problem , can be taken to be the stagnation point attachment , , where .for the integral , the paths of steepest descent are given by constant contours of .let us envision a generic situation in which the steepest descent topology is as shown in the fig .[ fig : steepest ] .the valleys , where \geq { \operatorname{re}}[\chi(w_0)] ] from the point , , then the steepest descent paths from each individual endpoint tends to distinct valleys that must be joined by a contour through the saddle . 
in such cases, we shall write in this work , we shall not discuss the global properties of the steepest descent curves and and the associated stokes lines .these results require careful consideration of the riemann - sheet structure of the integrand , and can be found in the accompanying work . instead, we will assume that the integral is approximated by , and verify the approximation for the case of the far - field waves , , in [ sec : numerics ] .it can be verified that the dominant contributions from the endpoints will re - expand the regular perturbation series to higher orders .in particular , we see from inspection that locally near , the integrand is exponentially large and of order , and this serves to cancel the exponentially small factor in .the form of the integral in is unwieldy , so we shall demonstrate this re - expansion for the simplest case where the regular series , and , contains only a single term . setting and from , the integralis reduced to } { q_0 ^ 2 } \biggr ] \left [ \frac{1}{q(\varphi ) } + { \mathcal{o}}({\epsilon})\right ] \ , { \mathrm{e}}^{\chi(\varphi)/{\epsilon } } \ , { \operatorname{d\!}{}}{\varphi}.\ ] ] integrating by parts , we have } { q_0 ^ 2 } \biggr ] \left [ \frac{1}{q(w ) } + { \mathcal{o}}({\epsilon})\right ] \ , { \mathrm{e}}^{\chi(w)/{\epsilon } } \\ - { \epsilon}\int_{w_s}^w \frac{1}{\chi ' } { \frac{{\operatorname{d\!}{}}}{{\operatorname{d\!}{}}\varphi } } \left\{\biggl [ -{\epsilon}q_0 ' + \frac{{\mathrm{i}}{\hat{{\mathscr{h}}}}[{\bar{\theta } } ] } { q_0 ^ 2 } \biggr ] \left [ \frac{1}{q(\varphi ) } + { \mathcal{o}}({\epsilon})\right ] \right\ } \ , { \mathrm{e}}^{\chi(\varphi)/{\epsilon } } \ , { \operatorname{d\!}{}}{\varphi},\end{gathered}\ ] ] where we have assumed that the boundary term contributions due to are zero from the boundary or radiation conditions ( see the discussion following ) .repeated integration by parts to the integral term will extract further contributions .we keep only the first term above , and re - combine with in to obtain }{q_0 ^ 2 } - \frac{{\bar{\theta}}}{q_0 ^ 2}\biggr].\ ] ] where we have used and . matching real and complex partsthen yields ] in . in the previous section, we discovered that the hilbert transform plays an important role in the further development of the endpoint contributions .indeed , it appears as a contributing term in each order of the asymptotic process , beginning from in . upon applying the method of steepest descents to the integral , we had separated the contributions into those due to endpoints and those due to saddle points ( assumed to lie away from the free surface ) .let us assume that the contribution due to the saddle point is exponentially small along the free surface , so that in , with on the free surface , and similar relations for . then is at most algebraically large in and near the singularity , , and the term \frac{\cos \theta_r}{q_r^2}\ ] ] involves the integration of an exponentially small term along the free surface , where it remains exponentially small .thus , this term is subdominant to the square - bracketed contributions in .further , note that this argument does not work for the endpoint contributions , as the hilbert transform within is _ not _ negligible , as it involves the integration of the regular perturbative series via \phi \to \infty ] , we obtain = 0.\ ] ] in order to avoid confusion , we shall write the solution of as . 
expressing as a regular series expansion , we find the leading - order solution is as expected , but ignoring the ] remains bounded , but and its derivatives are singular according to .in other words , the simplified formulation of has replaced in the full model with its local behaviour near the singularity .substitution of into the simplified nonlinear model yields {\bar{q}}\sim \tilde{r}(w),\ ] ] where we have withheld the right hand - side , for clarity .we can verify that as , , and thus comparing the bracketed terms in and , we see that while we have completely neglected the $ ] terms , we are nevertheless able to preserve the inner limit of . thus , since the limiting behaviours of the coefficient functions are preserved exactly , then the hankel contour analysis of [ sec : hankel ] ( which depends only on local properties ) will also be preserved exactly , with the exception of a different numerical prefactor due to the right hand - side differences . in summary, let us assume that the leading - order exponential for the full problem is written as with limiting behaviour as , and for constants and .then the exponential that results from using the simplified nonlinear formulation of is for some constant . in the study of ,the simplified nonlinear problem was used as a toy model for the study of wave - structure interactions with coalescing singularities . by duplicating the derivation of [ sec : steep ] , the analogous formula to can be developed for .we will provide a numerical computation of this problem in [ sec : numerics ] .a detailed summary of the full and simplified models we have presented thus far is shown in table [ tab : models ] . in this section, we will verify the fit of the three reduced models with the numerical solutions of the full nonlinear problem in the limit .the models include ( i ) the full nonlinear problem , or more conventionally , the solution of bernoulli s equation and the boundary integral equation ; ( ii ) the truncated linear model in ; and ( iii ) the nonlinear model in .we study the case of a semi - infinite rectangular ship given by and with , and thus which is the leading - order speed associated with a hull with a right - angled corner ( in the interior of the fluid ) at , and a stagnation point at .the solution is computed in each of the three cases , and the amplitude of the water waves far downstream , with , is extracted .for the two models with truncations at and , recall that the real - valued solution on the axis is formed by adding the complex - valued complex solution to its complex conjugate [ see and the surrounding discussion ] .thus , for these two cases , the amplitude of is taken to be twice the amplitude of . in order to solve the full water wave equations, we use the numerical algorithm described in . in brief, a stretched grid is applied near the stagnation point , and a finite - difference approximation of the boundary integral is calculated using the trapezoid rule . at a singular point of the integral , a quadratic interpolant is applied between the point and its two neighbours , and the resultant quadratic is calculated exactly . for more details of the numerical scheme see , _e.g. _ chap . 7 of and the references therein .from the predicted wave amplitude , we have { \mathrm{e}}^{-3\pi/(2{\epsilon})},\ ] ] where , , and here , . the numerical pre - factor requires a generic calculation ( _ c.f . _10 of ) . 
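The singular-point treatment mentioned above (a quadratic interpolant through the singular node and its two neighbours, integrated exactly) takes a particularly simple form on a uniform grid. The Python sketch below is a stripped-down version of that idea, without the stretched grid used near stagnation points; the exponential-integral test value serves only as a sanity check.

    import numpy as np
    from scipy.integrate import trapezoid

    def pv_integral(f_vals, t, i0):
        # pv of int f(t)/(t - t[i0]) dt on a uniform grid: trapezoid rule away
        # from the singular node, plus the exact principal value of the quadratic
        # interpolant through (t0-h, t0, t0+h), which works out to f(t0+h) - f(t0-h).
        with np.errstate(divide="ignore", invalid="ignore"):
            g = f_vals / (t - t[i0])
        far = trapezoid(g[:i0], t[:i0]) + trapezoid(g[i0 + 1:], t[i0 + 1:])
        near = f_vals[i0 + 1] - f_vals[i0 - 1]
        return far + near

    t = np.linspace(0.0, 2.0, 4001)
    print(pv_integral(np.exp(t), t, 2000))   # PV int_0^2 e^t/(t-1) dt = e*(Ei(1)-Ei(-1)) ~ 5.748

On a uniform grid the exact principal value of the local quadratic reduces to the difference of the two neighbouring function values, so the treatment costs no more than the trapezoid rule itself.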
both numerical amplitude measurements (stars), as well as the asymptotic prediction (upper dashed line), are shown in fig. [ fig : numerics ].

[ figure fig : numerics, caption fragment : the truncated linear model, and the truncated nonlinear model. for the two truncated models, the amplitude is multiplied by two to account for the analytic continuation. the dashed lines correspond to leading-order asymptotic approximations of the full nonlinear model (top) and the simplified nonlinear model (bottom). ]

the and truncated models can be solved as initial-value problems. due to the singular nature of the stagnation point, where , we use the coordinate transformation , and solve the associated differential equations in . the asymptotic behaviour is used to provide the initial value for at a point near . in the simulations, we typically used , and the resultant amplitudes are verified to be independent of the initial condition. the combined results are shown in fig. [ fig : numerics ]. the leading-order asymptotic approximations fit the data closely, and we see that both the model with simplified nonlinear ( , shown as circles) and linear ( , shown as stars) formulations duplicate the requisite behaviours reviewed in table [ tab : models ]. throughout our analysis of [ sec : reduce ] to [ sec : numerics ], we have chosen to stray from tulin's formulation, which is encapsulated in the study of . this mode of presentation was out of necessity; while the broad outline of tulin's reduction is ultimately correct, the use of the unknown -function renders the equation impractical for most applications. moreover, the substitution and subsequent truncation of the nonlinear does not make it clear what inaccuracies are introduced by the reduction process (we have concluded, for example, that the pre-factor of the wave will be incorrect). we have provided an extended discussion of tulin's -function in appendix [ sec : qfunc ], and the connections with our own formulation in [ sec : tulinconnect ] and appendix [ sec : tulinconnect2 ]. while tulin's work may have been unappreciated since its inception, the work was, in fact, ahead of its time. indeed, the proposal of the complex-variable reduction of the water-wave equations, and the subsequent simplification of the hilbert transform, would anticipate many of the more sophisticated asymptotic approaches that would later develop independently ( _ e.g. _ in ). as we have reviewed in [ sec : otherworks ], others have proposed integral formulations of the low-froude problem (see _ e.g. _ the collection of models in ), but such models typically depended on _ ad-hoc _ linearizations of the two-dimensional potential flow equations. also at the forefront of our motivation was to better understand tuck's series of papers, positing simplified toy models that could eliminate the hilbert transform, while preserving essential details of the waves. during the brief exchange between tulin and tuck, as quoted in 1.2, tuck had indicated that it was unclear whether his reductions were related, via the complex-plane, with tulin's model. the answer is that they are indeed related. both tulin and tuck's formulations are intended to be (but are ultimately incomplete) truncated models valid at low froude numbers. both studies attempt to produce an approximate wave solution independent of the hilbert transform.
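To give a sense of how the truncated models are marched forward in practice, the following Python sketch integrates a schematic first-order model of the same shape as the reduced equation. The coefficient c and forcing r are placeholders standing in for the bracketed operator and right-hand side (they are not the exact expressions behind the figures), and q0 is an assumed leading-order profile with a stagnation point at phi = 0; only the mechanics of starting near the singular point and reading off a far-field amplitude are being illustrated.

    import numpy as np
    from scipy.integrate import solve_ivp

    eps = 0.5
    q0 = lambda phi: (phi / (phi + 1.0)) ** (1.0 / 3.0)   # assumed leading-order speed
    c = lambda phi: 3.0j / q0(phi) ** 3                   # placeholder for the bracketed coefficient
    r = lambda phi: -0.01 * q0(phi)                       # placeholder forcing

    def rhs(phi, y):
        q = y[0] + 1j * y[1]
        dq = (r(phi) - c(phi) * q) / eps
        return [dq.real, dq.imag]

    sol = solve_ivp(rhs, [1.0e-4, 40.0], [0.0, 0.0],
                    rtol=1.0e-8, atol=1.0e-10, dense_output=True)
    phi_far = np.linspace(20.0, 40.0, 2001)
    qbar = sol.sol(phi_far)[0] + 1j * sol.sol(phi_far)[1]
    print("illustrative far-field amplitude:", 2.0 * np.abs(qbar).max())  # doubled for the conjugate

In the actual computations the integration variable is first transformed to remove the singular behaviour at the stagnation point, and the initial data come from the asymptotic behaviour of the regular series near that point, as described above.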
through our corrected reduction and study using steepest descents, we were able to explain why the hilbert transform was crucial in some cases ( determining ) , but negligible in others ( determining ) .this idea of being able to study and visualize wave - structure interactions using the method of steepest descents is a powerful one , and we believe that it has wide applicability to further developing analytical theory for more complicated geometries than the ones we have considered here .another main result of our work relates to the presentation of table [ tab : models ] , which unifies the various truncated models under consideration .we have shown that in the limit , the exponentially small water waves are of the form plus its complex conjugate . in order to obtain the correct ( andthus preserve some of the most important aspects ) , we can solve the one - term truncated model this is the simplest reduction of the water wave problem . despite the fact that it will incorrectly predict , it still serves as a useful toy model since the steepest descent argument remains unchanged .if we wish to obtain the correct functional form of , then we can instead solve the equation , {\bar{q}}= r(w ; { \hat{{\mathscr{h}}}}[{\bar{\theta}}]),\ ] ] where will be truncated to error , as detailed in table [ tab : models ] .it is difficult to propose any reduced equation that allows us to determine the correct numerical pre - factor , , since this involves inclusion of terms up to optimal truncation in and .since this optimal truncation term tends to infinity as , then we must do as we have done in _e.g. _ , and approximate using its divergent form .this is the connection with previous approaches that have used exponential asymptotics .the _ correct _ model that is to say , the one that predicts the leading - order exponential , up to the numerical pre - factor does indeed require accounting for the divergent nature of the asymptotic approximations .tulin s paradoxical comment regarding the validity of the low speed limit , quoted on p. of our introduction , will be resolved once the specific problem geometry and integrand functions of is considered using the method of steepest descents . in this case , we have seen through the methodology of [ sec : steep ] that , for the case of ship flow , in the limit , the wave contributions arise from deformation of the integration contours about the saddle points , which corresponds to the corner of the ship geometry .while the generated waves are unbounded near these critical points , they remain exponentially small order everywhere along the free surface .indeed , it was precisely this argument that had allowed us to neglect the hilbert transform . therefore , there is no issue with taking the limit within the physical domain .tulin s comment , however , has an important consequence in the case of bow flows . in this case, we see that as the solution is analytically continued in the direction of the bow , the generated exponential , of order , will tend to infinite amplitude at the stagnation point .thus there is no bounded solution in the limit for bow flow .this situation was formally discussed in , but it has been known [ see _ e.g. _ ] that the numerical problem is not well - posed when a stagnation point attachment is assumed for the case of incoming flow .we have thus demonstrated this , here , for the limiting case of . 
in an age where there are a bevy of tools and packages that can perform full numerical computations of the nonlinear water - wave problem, the reader may be justified in wondering whether there is still applicability in studying the significance of historical works by , , and this paper itself . as summarized by table [ tab : models ], the differences between various truncated models are subtle , and given the current state of computation , it seems more difficult to unravel such subtleties than it would be to to solve the full model .however , while methods of computation have improved significantly since tulin s 1982 paper , many theoretical aspects of free - surface wave - body flows are still a mystery , as evident by the review in .for example , there is virtually no analytical theory that can distinguish between waves produced by surface - piercing bodies with sudden angular changes ( corners ) versus bodies that are smooth [ _ c.f . _ the discussion in ] .the classic treatments using linearized theory , as it appears in _ e.g. _ and , are limited to asymptotically small bodies rather than the bluff bodies we consider in this paper . in a forthcoming work, we will demonstrate how the steepest descent methodology developed in this paper , can be applied to the study of smooth - bodied obstructions .the techniques and reductions presented in this paper , along with further developments in the theory of exponential asymptotics , provides hope that analytical progress can be made on the subject of time - dependent and three - dimensional wave - body problems [ _ c.f . _ recent work by , , , on this topic ] .the individual components , , , and , that make up the factorial - over - power divergence in can be derived by examining the governing equations at ) . in the limit , the leading - order contribution gives the _ singulant _ , , and since at the singularities , we write where is a particular singularity of the leading - order solution .similarly , it can be shown that ,\ ] ] where is a constant of integration and is an arbitary point chosen wherever the integral is defined . the pre - factor ,is then related to using the value of is derived by matching the local behaviour of in with leading - order near the singularity , .if we assume that near the singularity , then for most nonlinear problems , the value of in embeds the nonlinearity of the governing equations near the singularity , and must be found through a numerical solution of a recurrence relation . a detailed derivation of the above quantities , including numerical values of , can be found in 3.1 of and 4 of .we also refer the reader to more general reviews of exponential asymptotics by , , and .the most difficult aspect of tulin s work concerns section vi of the manuscript , which seeks to understand the nature of the analytically continued function , . we will attempt to follow the same argument as tulin ( with adjustments for changes in notation and flow geometry ) .tulin had split the form of into a contribution from the uniform flow and a contribution from the geometry . 
in his notation, our is equal to .the situation of an imposed pressure distribution was also considered in his work , but we shall ignore this effect .tulin had then written in terms of a cauchy integral over the solid boundary and its image reflected about the free surface in the potential plane .this is analogous to applying cauchy s integral theorem to either or the hodograph variable along a counterclockwise circular contour of radius with a slit about the negative real axis ( see fig . [fig : cauchy ] , right ) .this yields \ , { \operatorname{d\!}{}}{t},\ ] ] where the integral along the outer circle tends to zero as by the boundary conditions and the integral over the slit involves and , the limiting values from the upper or lower half - planes .however , in , only the values of on one side are known ( being the physical angle of the solid boundary ) .for instance , in the case of a step , is given by .the problem , however , is that the other values , , and are only known through analytic continuation , and it is impossible to go further with without additional information .[ fig : cauchy],scaledwidth=80.0% ] for most practical implementations , it is preferable to instead apply cauchy s theorem to an integral over the solid boundary and the free surface ( see fig . [fig : cauchy ] , left ) , which was the formulation in [ sec : bdint ] .bernoulli s equation is required in order to provide a relationship between and on the free surface , and thus close the system .tulin had posited that the value of might be found through a theoretically posited surrogate body whose singularities on the physical boundary alone would generate the flow ( rather than the physical boundary and its reflected image ) .this appears as eqn ( 60 ) in his work .however , for a given physical geometry ( _ e.g. _ for the step and ship geometries of [ sec : form ] ) , it is unclear how this surrogate body could ever be determined in an _ a priori _ fashion .thus in the end , tulin s should rather be written as , as it involves the solution itself .this creates a problematic argument if the intention is to treat as an ordinary differential equation to be integrated exactly , for the solution appears on both sides of the formulation .indeed , this is precisely the issue that tuck ( [ sec : introtuck ] ) had wrestled with , in seeking a reduction of the global hilbert transform operator . in the approach to follow , we will resume our study of the formulation in [ sec : analytic ] , and we will return to discuss the connection to tulin s work in [ sec : discuss ] .as explained in [ sec : tulinconnect ] , we have chosen to stray from tulin s formulation , which uses the combined analytic function , , and the unknown right hand - side , . our formulation separates the analytic continuations of the and variables , and is self contained .in contrast , tulin s formulation requires the specification of the -function , which requires additional information .we now wish to show how tulin s equation is related to the equation for the exponential in [ sec : hankel ] , given by {\bar{q}}\sim -{\epsilon}^n q_{n-1}',\ ] ] that is , the reduced integro - differential model with the right hand - side . 
in order to relate the two formulations , we multiply through by andobtain we expand the unknown functions in the above equation into a regular perturbation expansion and an error term , using where for simplicity , we expand the product rather than the individual factors .substitution into gives where we have introduced the error in the bernoulli equation , which can be compared to .the expansion of , can also be written in terms of the expansions for and .using , and expanding , we find [ g0g1 gb ] + { \epsilon}\bigl [ -9 { \mathrm{i}}{\mathrm{j}}q_0 ^ 2 q_1 { \bar{\theta}}- 9 { \mathrm{i}}{\mathrm{j}}q_0 ^ 2 \theta_1 { \bar{q}}+ 6 q_0 q_1 { \bar{q}}- 9 q_0 ^ 3 \theta_1 { \bar{\theta}}\bigr ] + \ldots . \label{gbexpand}\end{gathered}\ ] ] following the discussion in [ sec : tulinconnect ] and appendix [ sec : qfunc ] , we emphasize that , , , and the low - order terms are derived independently from that is , the single equation for is insufficient to close the system without inclusion of the hilbert transform .instead , we assume that the low - order terms in are known and that provides an equation for .the regular part of , given by , follows from expansion of the left hand - side of . only including up to terms , we obtain + { \mathcal{o}}({\epsilon}^2).\ ] ] we also note that in the limit , the optimal truncation point of the regular series expansion tends to infinity , , and the error in bernoulli s equation is replaced by the divergent term which is analogous to the argument leading to .the result now follows by using for and , for , for , and for the right hand - side of .we are left with { \bar{q}}\sim -{\epsilon}^n q_{n-1}',\ ] ] or , as desired .thus we have shown how tulin s formulation will exactly preserve the exponentially small surface waves .derivation of the full relationship of tulin s equation to the full system can be similarly done , but the algebra ( in expanding and , and returning to the formulation with the embedded hilbert transform ) becomes unwieldy .2010 exponential asymptotics and stokes line smoothing for generalized solitary waves . in _asymptotic methods in fluid mechanics : survey and recent advances _ ( ed. h. steinrck ) , pp .springerwiennewyork .1990 water non - waves . in _mini - conference on free and moving boundary and diffusion problems _proceedings of the centre for mathematics & its applications ) , pp . 109127 .canberra : centre for mathematics and its applications , australian national university. 1977 computation of near - bow or stern flows using series expansion in the froude number . in _2nd international conference on numerical ship hydrodynamics_. berkeley , california : university of california , berkeley .
in 1982 , marshall p. tulin published a report proposing a framework for reducing the equations for gravity waves generated by moving bodies into a single nonlinear differential equation solvable in closed form [ _ proc . 14th symp . on naval hydrodynamics _ , 1982 , pp.1951 ] . several new and puzzling issues were highlighted by tulin , notably the existence of weak and strong wave - making regimes , and the paradoxical fact that the theory seemed to be applicable to flows at low speeds , _ but not too low speeds"_. these important issues were left unanswered , and despite the novelty of the ideas , tulin s report fell into relative obscurity . now thirty years later , we will revive tulin s observations , and explain how an asymptotically consistent framework allows us to address these concerns . most notably , we will explain , using the asymptotic method of steepest descents , how the production of free - surface waves can be related to the arrangement of integration contours connected to the shape of the moving body . this approach provides an intuitive and visual procedure for studying nonlinear wave - body interactions . surface gravity waves , wave - structure interactions , waves / free - surface flows
[ figure caption fragment : the lines represent the numerical results for the delta function ( i.e. , all nodes have the same activity potential ) and power-law activity distributions . ]
social contact networks underlying epidemic processes in humans and animals are highly dynamic . the spreading of infections on such temporal networks can differ dramatically from spreading on static networks . we theoretically investigate the effects of concurrency , the number of neighbors that a node has at a given time point , on the epidemic threshold in the stochastic susceptible - infected - susceptible dynamics on temporal network models . we show that network dynamics can suppress epidemics ( i.e. , yield a higher epidemic threshold ) when nodes concurrency is low , but can also enhance epidemics when the concurrency is high . we analytically determine different phases of this concurrency - induced transition , and confirm our results with numerical simulations . _ introduction : _ social contact networks on which infectious diseases occur in humans and animals or viral information spreads online and offline are mostly dynamic . switching of partners and ( usually non - markovian ) activity of individuals , for example , shape network dynamics on such temporal networks . better understanding of epidemic dynamics on temporal networks is needed to help improve predictions of , and interventions in , emergent infectious diseases , to design vaccination strategies , and to identify viral marketing opportunities . this is particularly so because what we know about epidemic processes on static networks is only valid when the timescales of the network dynamics and of the infectious processes are well separated . in fact , temporal properties of networks , such as long - tailed distributions of inter - contact times , temporal and cross - edge correlation in inter - contact times , and entries and exits of nodes , considerably alter how infections spread in a network . in the present study , we focus on a relatively neglected component of temporal networks , i.e. , the number of concurrent contacts that a node has . even if two temporal networks are the same when aggregated over a time horizon , they may be different as temporal networks due to different levels of concurrency . concurrency is a long - standing concept in epidemiology , in particular in the context of monogamy / polygamy affecting sexually transmitted infections . modeling studies to date largely agree that a level of high concurrency ( e.g. , polygamy as opposed to monogamy ) enhances epidemic spreading in a population . however , this finding , while intuitive , lacks theoretical underpinning . first , some models assume that the mean degree , or equivalently the average contact rate , of nodes increases as the concurrency increases . in these cases , the observed enhancement in epidemic spreading is an obvious outcome of a higher density of edges rather than a high concurrency . second , other models that vary the level of concurrency while preserving the mean degree are numerical . in the present study , we build on the analytically - tractable activity - driven model of temporal networks to explicitly modulate the size of the concurrently active network with the structure of the aggregate network fixed . with this machinery , we show that the dynamics of networks can either enhance or suppress infection , depending on the amount of concurrency that individual nodes have . 
note that analysis of epidemic processes driven by discrete pairwise contact events , which is a popular approach , does not address the problem of concurrency because we must be able to control the number of simultaneously active links possessed by a node in order to examine the role of concurrency without confounding with other aspects . _ model : _ we consider the following continuous - time susceptible - infected - susceptible ( sis ) model on a discrete - time variant of activity - driven networks , which is a generative model of temporal networks . the number of nodes is denoted by . each node is assigned an activity potential , drawn from a probability density . activity potential is the probability with which node is activated in a window of constant duration . if activated , node creates undirected links each of which connects to a randomly selected node ( fig . [ f1 ] ) . if two nodes are activated and send edges to each other , we only create one edge between them . however , for large and relatively small , such events rarely occur . after a fixed time , all edges are discarded . then , in the next time window , each node is again activated with probability , independently of the activity in the previous time window , and connects to randomly selected nodes by undirected links . we repeat this procedure . therefore , the network changes from one time window to another and is an example of a switching network . a large implies that network dynamics are slow compared to epidemic dynamics . in the limit of , the network blinks infinitesimally fast , enabling the dynamical process to be approximated on a time - averaged static network , as in . .,width=326 ] for the sis dynamics , each node takes either the susceptible or infected state . at any time , each susceptible node contracts infection at rate per infected neighboring node . each infected node recovers at rate irrespectively of the neighbors states . changing to is equivalent to changing and to and , respectively , whilst leaving unchanged . therefore , we set without loss of generality . _ analysis : _ we calculate the epidemic threshold as follows . for the sake of the analysis , we assume that star graphs generated by an activated node , which we call the hub , are disjoint from each other . because a star graph with hub node overlaps with another star graph with probability , where is the mean activity potential , we impose . we denote by the probability that a node with activity is infected at time . the fraction of infected nodes in the entire network at time is given by . let be the probability with which the hub in an isolated star graph is infected at time , when the hub is the only infected node at time and the network has switched to a new configuration right at time . let be the probability with which the hub is infected at when only a single leaf node is infected at . the probability that a hub with activity potential is infected after the duration of the star graph , denoted by , is given by in deriving eq . ( [ eq : rho1 ] ) , we considered the situation near the epidemic threshold such that at most one node is infected in the star graph at time ( and hence ) . 
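Before continuing with the star-graph calculation, it may help to see the model written out concretely. The Python sketch below simulates the discrete-time activity-driven network with window length tau and SIS dynamics inside each window, then scans the infection rate for the point where the long-run prevalence becomes non-negligible. All parameter values are illustrative, the within-window dynamics are discretised crudely with a small time step, and the quasistationary-state method used for the figures in this paper is not reproduced.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_sis(a, m, tau, beta, mu=1.0, t_max=100.0, i0=0.05, dt=0.05):
        # activity-driven temporal network: in each window of length tau, node i
        # is activated with probability a[i] and links to m uniformly chosen
        # partners (duplicate edges, rare for large n, are not removed); sis
        # dynamics with infection rate beta per infected neighbour and recovery
        # rate mu are stepped forward in increments dt within the window.
        n = len(a)
        infected = rng.random(n) < i0
        steps = max(1, int(round(tau / dt)))
        for _ in range(int(t_max / tau)):
            active = np.flatnonzero(rng.random(n) < a)
            edges = [(i, j) for i in active
                     for j in rng.choice(n, size=m, replace=False) if j != i]
            for _ in range(steps):
                new_state = infected.copy()
                for i, j in edges:
                    if infected[i] and not infected[j] and rng.random() < beta * dt:
                        new_state[j] = True
                    if infected[j] and not infected[i] and rng.random() < beta * dt:
                        new_state[i] = True
                new_state[infected & (rng.random(n) < mu * dt)] = False
                infected = new_state
                if not infected.any():
                    return 0.0
        return infected.mean()

    n, m = 1000, 5
    a = np.full(n, 0.1)                     # all nodes share the same activity potential
    betas = np.linspace(0.25, 3.0, 8)
    prev = [simulate_sis(a, m, tau=0.5, beta=b) for b in betas]
    print("prevalence:", np.round(prev, 3))
    print("rough threshold:", next((b for b, p in zip(betas, prev) if p > 0.01), "not reached"))

Depending on tau and m the prevalence can remain at zero over the whole scanned range, which is the stochastic die-out behaviour analysed below.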
the probability that a leaf with activity potential that has a hub neighbor with activity potential is infected after time is analogously given by where , , and are the probabilities with which a leaf node with activity potential is infected after duration when only that leaf node , the hub , and a different leaf node is infected at time , respectively . we derive formulas for in the supplemental material . the probability that an isolated node with activity potential is infected after time is given by . by combining these contributions , we obtain to analyze eq . ( [ eq : rho ] ) further , we take a generating function approach . by multiplying eq . ( [ eq : rho ] ) by and averaging over , we obtain {g^{}(z ) } , \label{eq : theta}\end{aligned}\ ] ] where , , , , , is the probability generating function of , , and throughout the paper the superscript represents the -th derivative with respect to . we expand as a maclaurin series as follows : let be the fraction of initially infected nodes , which are uniformly randomly selected , independently of . we represent the initial condition as . epidemic dynamics near the epidemic threshold obey linear dynamics given by by substituting and in eq . ( [ eq : theta ] ) , we obtain a positive prevalence ( i.e. , a positive fraction of infected nodes in the equilibrium state ) occurs only if the largest eigenvalue of exceeds . therefore , we get the following implicit function for the epidemic threshold , denoted by : where , , , , and ( see supplemental material for the derivation ) . note that is a function of ( ) through , , , and , which are functions of . in general , we obtain by numerically solving eq . ( [ eq : implicit eq ] ) , but some special cases can be determined analytically . in the limit , eq . ( [ eq : implicit eq ] ) gives ^{-1} ] . we simulate the stochastic sis dynamics using the quasistationary state method , as in , and calculate the prevalence averaged over realizations after discarding the first time steps . we set the step size in ( a)(d ) and in ( e ) and ( f).,width=326 ] the results shown in figs . [ f2](a ) and [ f2](b ) are qualitatively simillar when the activity potential is power - law distributed ( figs . [ f2](c ) and [ f2](d ) ) and when is constructed from empirical data obtained from the sociopatterns project ( figs . [ f2](e ) and [ f2](f ) ) . to illuminate the qualitatively different behaviors of the epidemic threshold as increases , we determine a phase diagram for the epidemic threshold . we focus our analysis on the case in which all nodes share the activity potential value , noting that qualitatively similar results are also found for power - law distributed activity potentials ( fig . [ f3](b ) ) . we calculate the two boundaries partitioning different phases as follows . first , we observe that the epidemic threshold diverges for . in the limit , infection starting from a single infected node in a star graph immediately spreads to the entire star graph , leading to . by substituting in eq . ( [ eq : implicit eq ] ) , we obtain , where when , infection always dies out even if the infection rate is infinitely large . this is because , in a finite network , infection always dies out after sufficiently long time due to stochasticity . second , although eventually diverges as increases , there may exist such that at is smaller than the value at . motivated by the comparison between the behaviour of at and ( fig . [ f2 ] ) , we postulate that ( ) exists only for . then , we obtain at . the derivative of eq . 
( [ eq : implicit eq ] ) gives . because at , we obtain , which leads to when , network dynamics ( i.e. , finite ) always reduce the prevalence for any ( figs . [ f2](a ) , [ f2](c ) , and [ f2](e ) ) . when , a small raises the prevalence as compared to ( i.e. , static network ) but a larger reduces the prevalence ( figs . [ f2](b ) , [ f2](d ) , and [ f2](f ) ) . the phase diagram based on eqs . ( [ eq : tau star ] ) and ( [ eq : mc ] ) is shown in fig . [ f3](a ) . the values numerically calculated by solving eq . ( [ eq : implicit eq ] ) are also shown in the figure . it should be noted that the parameter values are normalized such that has the same value for all at . we find that the dynamics of the network may either increase or decrease the prevalence , depending on the number of connections that a node can simultaneously have , extending the results shown in fig . [ f2 ] . these results are not specific to the activity - driven model . the phase diagram is qualitatively the same for a different model in which an activated node induces a clique instead of a star ( fig . s2 ) , modeling a group conversation event as some temporal network models do . , when the activity potential is ( a ) equal to for all nodes , or ( b ) obeys a power - law distribution with exponent ( ) . we set at and adjust the value of and such that takes the same value for all at . in the `` die out '' phase , infection eventually dies out for any finite . in the `` suppressed '' phase , is larger than the value at . in the `` enhanced '' phase , is smaller than the value at . the solid and dashed lines represent ( eq . ( [ eq : tau star ] ) ) and , respectively . the color bar indicates the values . in the gray regions , .,width=326 ] _ discussion : _ our analytical method shows that the presence of network dynamics boosts the prevalence ( and decreases the epidemic threshold ) when the concurrency is large and suppresses the prevalence ( and increases ) when is small , for a range of values of the network dynamic timescale . this result lends theoretical support to previous claims that concurrency boosts epidemic spreading . the result may sound unsurprising because a large value implies that their exists a large connected component at any given time . however , our finding is not trivial because a large component consumes many edges such that other parts of the network at the same time or the network at other times would be more sparsely connected as compared to the case of a small . our results confirm that monogamous sexual relationship or a small group of people chatting face - to - face , as opposed to polygamous relationships or large groups of conversations , hinders epidemic spreading , where we compare like with like by constraining the aggregate ( static ) network to be the same in all cases . for general temporal networks , immunization strategies that decrease concurrency ( e.g. , discouraging polygamy ) may be efficient . restricting the size of the concurrent connected component ( e.g. , size of a conversation group ) may also be a practical strategy . another important contribution of the present study is the observation that infection dies out for a sufficiently large , regardless of the level of concurrency . as shown in figs . 3 and s1 , the transition to the `` die out '' phase occurs at values of that correspond to network dynamics and epidemic dynamics having comparable timescales . 
this is a stochastic effect and can not be captured by existing approaches to epidemic processes on temporal networks that neglect stochastic dying out, such as differential equation systems for pair formation-dissolution models and individual-based approximations. our analysis methods explicitly consider such stochastic effects, and are therefore expected to be useful beyond the activity-driven model (or the clique-based temporal networks analyzed in the supplemental material) and the sis model. we thank leo speidel for discussion. we thank the sociopatterns collaboration (http://www.sociopatterns.org) for providing the data set. t.o. acknowledges the support provided through jsps research fellowship for young scientists. j.g. acknowledges the support provided through science foundation ireland (grant numbers 15/spp/e3125 and 11/pi/1026). n.m. acknowledges the support provided through jst, crest, and jst, erato, kawarabayashi large graph project.
there is already a considerable but widely varying literature on the application of category theory to the life and cognitive sciences such as the work of robert rosen ( , ) and his followers as well as andre ehresmann and jean-paul vanbremeersch and their commentators. the approach taken here is based on a specific use of the characteristic concepts of category theory, namely universal mapping properties. one such approach in the literature is that of françois magnan and gonzalo reyes which emphasizes that `` category theory provides means to circumscribe and study what is universal in mathematics and other scientific disciplines. '' their intended field of application is cognitive science.

`` we may even suggest that universals of the mind may be expressed by means of universal properties in the theory of categories and much of the work done up to now in this area seems to bear out this suggestion.... by discussing the process of counting in some detail, we give evidence that this universal ability of the human mind may be conveniently conceptualized in terms of this theory of universals which is category theory. ''

another current approach that emphasizes universal mapping properties (`` universal constructions '') is that of s. phillips, w. h. wilson, and g. s. halford ( , , ). in addition to the focus on universals, the approach here is distinctive in the use of heteromorphisms, which are object-to-object morphisms between objects of different categories, in contrast to the usual homomorphisms or homs between objects in the same category. by explicitly adding heteromorphisms to the usual homs-only presentation of category theory, this approach can directly represent interactions between the objects of different categories (intuitively, between an organism and the environment). but it is still early days, and many approaches need to be tried to find out `` where theory lives. '' before developing the concept of a brain functor, we need to consider the related concept of a pair of adjoint functors, an adjunction. the developers of category theory, saunders maclane and samuel eilenberg, famously said that categories were defined in order to define functors, and functors were defined in order to define natural transformations. a few years later, the concept of universal constructions or universal mapping properties was isolated ( and ). adjoints were defined a decade later by daniel kan and the realization of their ubiquity (`` adjoint functors arise everywhere '', maclane) and their foundational importance has steadily increased over time (lawvere and lambek).
now it would perhaps not be too much of an exaggeration to see categories, functors, and natural transformations as the prelude to defining adjoint functors. as steven awodey put it:

`` the notion of adjoint functor applies everything that we have learned up to now to unify and subsume all the different universal mapping properties that we have encountered, from free groups to limits to exponentials. but more importantly, it also captures an important mathematical phenomenon that is invisible without the lens of category theory. indeed, i will make the admittedly provocative claim that adjointness is a concept of fundamental logical and mathematical importance that is not captured elsewhere in mathematics. ''

other category theorists have given similar testimonials.

`` to some, including this writer, adjunction is the most important concept in category theory. ''
`` the isolation and explication of the notion of adjointness is perhaps the most profound contribution that category theory has made to the history of general mathematical ideas. ''

`` nowadays, every user of category theory agrees that [adjunction] is the concept which justifies the fundamental position of the subject in mathematics. ''

how do the ubiquitous and important adjoint functors relate to the universal constructions? mac lane and birkhoff succinctly state the idea of the universals of category theory and note that adjunctions can be analyzed in terms of those universals.

`` the construction of a new algebraic object will often solve a specific problem in a universal way, in the sense that every other solution of the given problem is obtained from this one by a unique homomorphism. the basic idea of an adjoint functor arises from the analysis of such universals. ''

we can use some old language from plato's theory of universals to describe those universals of category theory (ellerman) that solve a problem in a universal or paradigmatic way so that `` every other solution of the given problem is obtained from this one '' in a unique way. in plato's theory of ideas or forms ( ), a property has an entity associated with it, the universal, which uniquely represents the property. an object has the property, i.e., , if and only if (iff) the object _participates_ in the universal. let (from or _methexis_) represent the participation relation so `` '' reads as `` participates in ''.
given a relation , an entity is said to be a _universal _ for the property ( with respect to ) if it satisfies the following universality condition : for any , if and only if .a universal representing a property should be in some sense unique .hence there should be an equivalence relation ( ) so that universals satisfy a uniqueness condition : if and are universals for the same , then .the two criteria for a _ theory of universals _ is that it contains a binary relation and an equivalence relation so that with certain properties there are associated entities satisfying the following conditions : \(1 ) _ universality condition _ :for any , iff , and \(2 ) _ uniqueness condition _ : if and are universals for the same [ i.e. , satisfy ( 1 ) ] , then .a universal is said to be _ non - self - predicative _ if it does not participate in itself , i.e. , .a universal is _ self - predicative _ if it participates in itself , i.e. , . for the sets in an iterative set theory ( boolos ) , set membership is the participation relation , set equality is the equivalence relation , and those sets are never - self - predicative ( since the set of instances of a property is always of higher type or rank than the instances ) .the universals of category theory form the `` other bookend '' as always - self - predicative universals .the set - theoretical paradoxes arose from trying to have _one _ theory of universals ( `` frege s paradise '' ) where the universals could be _ either _ self - predicative or non - self - predicative , instead of having two opposite `` bookend '' theories , one for never - self - predicative universals( set theory ) and one for always always - self - predicative universals ( category theory ) . for the self - predicative universals of category theory ( see or for introductions ) ,the participation relation is the _ uniquely - factors - through _ relation .it can always be formulated in a suitable category as : `` '' means `` there exists a unique arrow '' .then is said to _ uniquely factor through _ , and the arrow is the unique factor or participation morphism . in the universality condition , for any , if and only if , the existence of the identity arrow is the self - participation of the self - predicative universal that corresponds with , the self - predication of the property to . in category theory , the equivalence relation used in the uniqueness condition is the isomorphism ( ) .we will later use a specific heterodox treatment of adjunctions , first developed by pareigis and later rediscovered and developed by ellerman ( , ) , which shows that adjoints arise by gluing together in a certain way two universals ( left and right representations ) .but for illustration , we start with the standard hom - set definition of an adjunction . the category has all sets as objects and all functions between sets as the homomorphisms so for sets and , is the set of functions . 
in the product category ,the objects are ordered pairs of sets and homomorphism is just a pair of functions where and .for an example of an adjunction , consider the _ product functor _ which takes a pair of sets to their cartesian product ( set of ordered pairs of elements from and ) and takes a homomorphism to where for and , .the maps in go from one set to one set and the maps in go from a pair of sets to a pair of sets .there is also the idea of a _ cone _ : c\rightarrow\left ( a , b\right ) ] that is universal in the following sense .given any other cone : c\rightarrow(a , b) ] to from any set .the canonical cone of projections : a\timesb\rightarrow ( a , b) ] is defined as `` uniquely factoring through '' ( as in figure 1 ) .the universal mapping property of the product can then be restated as the universality condition : for any cone ] if and only if ] and it is universal in the following sense . for any cone : c\rightarrow\left ( a , b\right ) ] that goes from an object in to an object in .the hets are contrasted with the homs or homomorphisms between objects in the same category . to keep them separate in our notation, we will henceforth use single arrows for hets and double arrows for homs . ) while the homomorphisms or homs between objects in the same category are represented by double arrows ( ) .the functors between whole categories are also represented by single arrows ( ). one must be careful not to confuse a functor from a category to a category with its action on an object which would be symbolized .moreover since a functor often has a canonical definition , there may well be a canonical het or but such hets are no part of the definition of the functor itself .] then the ump for the product functor can be represented as follows .fig3-right-rep-product.eps figure 3 : ump for the product functor it should be particularly noted that this het - formulation of the ump for the product does not involve the diagonal functor . if we associate with each and each , the set of cones or hets : c\rightarrow\left ( a , b\right ) ] , so that the -valued functor is said to be _ represented on the right _ by the -valued : right representation of the hets by the homs .the trivial ump for the diagonal functor can also be stated in terms of the cone - hets without reference to the product functor .fig4-left-rep-diagonal.eps figure 4 : ump for the diagonal functor this ump for the diagonal functor gives a natural isomorphism based on the pairing % , with no resultant understanding while the words in a familiar language prompt an internal process of generating a meaning so that we understand the words .thus it could be said that `` understanding a language '' means there is a left representation for the heard statements in that language , but there is no such internal re - cognition mechanism for the heard auditory inputs in a strange language .dually , there are also hets going the other way from the `` organism '' to the `` environment '' and there is a similar distinction between mere behavior ( e.g. , a reflex ) and an action that expresses an intention .mathematically that is described by dualizing or turning the arrows around which gives an acting brain presented as a right representation .fig8-brain-right-rep.eps figure 8 : acting brain as a right representation . in the heteromorphic treatment of adjunctions ,an adjunction arises when the hets from one category to another category , for and , have a right representation , , and a left representation , . 
butinstead of taking the same set of hets as being represented by two different functors on the right and left , suppose we consider a single functor that represents the hets on the left : , and represents the hets [ going in the opposite direction ] on the right : .if the hets each way between two categories are represented by the same functor as left and right representations , then that functor is said to be a _brain functor_. thus instead of a pair of functors being adjoint , we have a single functor with values within one of the categories ( the `` organism '' ) as representing the two - way interactions , `` perception '' and `` action , '' between that category and another one ( the `` environment '' ) .the use of the adjective `` brain '' is quite deliberate ( as opposed to say `` mind '' ) since the universal hets going each way between the `` organism '' and `` environment '' are part of the definition of left and right representations .in particular , it should be noted how the `` turn - around - the - arrows '' category - theoretic duality provides a mathematical model for the type of `` duality '' between : * sensory or afferent systems ( brain furnishing the left representation of the environment to organism heteromorphisms ) , and * motor or efferent systems ( brain furnishing the right representation of the organism to environment heteromorphisms ) . in view of this application ,those two universal hets , representing the afferent and efferent nervous systems , might be denoted and as in the following diagrams for the two representations .fig9-brain-landr.eps figure 9 : left and right representation diagrams for the brain functor .we have seen how the adjunctive square diagram for an adjunction can be obtained by gluing together the left and right representation diagrams at the common diagonal .the diagram for a brain functor is obtained by gluing together the diagrams for the left and right representations at the common values of the brain functor .if we think of the diagram for a representation as right triangle , then the adjunctive square diagram is obtained by gluing two triangles together on the hypotenuses , and the diagram for the brain functor is obtained by gluing two triangles together at the right angle vertices to form the _ butterfly diagram ._ fig10-butterfly-xa.eps figure 10 : butterfly diagram combining two representations at the common if both the triangular `` wings '' could be filled - out as adjunctive squares , then the brain functor would have left and right adjoints . thus all functors with both left and right adjoints are brain functors ( although not vice - versa ) .the previous example of the diagonal functor is a brain functor since the product functor is the right adjoint , and the coproduct or disjoint union functor is the left adjoint .the underlying set functor ( see appendix ) that takes a group to its underlying set is a rather trivial example of a brain functor that does not arise from having both a left and right adjoint .it has a left adjoint ( the free group functor ) so provides a right representation for the set - to - group maps or hets .also it trivially provides a left representation for the hets but has no right adjoint . 
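as a concrete (and purely illustrative) rendering of the statement above that the diagonal functor is a brain functor, the sketch below works in the category of sets, modelled with python functions: the product supplies the mediating map <f,g> (the `` perception '' side integrates a pair of inputs), and the coproduct supplies the mediating map [f,g] (the `` action '' side does case analysis on a tagged input). the function names are mine, not the paper's.

```python
# pairing: the mediating map of the product UMP; for any cone (f: C -> A, g: C -> B)
# there is a unique <f,g>: C -> A x B with fst . <f,g> = f and snd . <f,g> = g
def pair(f, g):
    return lambda c: (f(c), g(c))

fst = lambda ab: ab[0]
snd = lambda ab: ab[1]

# copairing: the mediating map of the coproduct (disjoint union) UMP; for any cocone
# (f: A -> C, g: B -> C) there is a unique [f,g]: A + B -> C with [f,g] . inl = f, etc.
def inl(a): return ("left", a)
def inr(b): return ("right", b)

def copair(f, g):
    return lambda tagged: f(tagged[1]) if tagged[0] == "left" else g(tagged[1])

# checks of the two factorization properties on sample data
f, g = (lambda n: n + 1), (lambda n: n * n)
assert fst(pair(f, g)(3)) == f(3) and snd(pair(f, g)(3)) == g(3)
h, k = (lambda a: len(a)), (lambda b: b % 7)
assert copair(h, k)(inl("abc")) == h("abc") and copair(h, k)(inr(30)) == k(30)
```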
in the butterfly diagram below, we have labelled the diagram for the brain as the language faculty for understanding and producing speech. fig11-butterfly-language-faculty.eps figure 11: brain functor butterfly diagram interpreted as language faculty. wilhelm von humboldt recognized the symmetry between the speaker and listener, which in the same person is abstractly represented as the dual functions of the `` selfsame power '' of the language faculty in the above butterfly diagram.

`` nothing can be present in the mind (seele) that has not originated from one's own activity. moreover understanding and speaking are but different effects of the selfsame power of speech. speaking is never comparable to the transmission of mere matter (stoff). in the person comprehending as well as in the speaker, the subject matter must be developed by the individual's own innate power. what the listener receives is merely the harmonious vocal stimulus. ''

a non-trivial mathematical example of a brain functor is provided by the functor taking a finite set of vector spaces over the same field (or -modules over a ring) to the product of the vector spaces. such a product is also the coproduct and that space may be written as the biproduct: . the het from a set of spaces to a single space is a _cocone_ of vector space maps and the canonical such het is the set of canonical injections (taking the `` brain '' as a coproduct) with the `` brain '' at the point of the cocone. the perception left representation then might be taken as conceptually representing the function of the brain as integrating multiple sensory inputs into an interpreted perception.
fig12-biprod-left-rep.eps figure 12 : brain as integrating sensory inputs into a perception .dually , a het from single space to a set of vector spaces is a _ cone _ the single space at the point of the cone , and the canonical het is the set of canonical projections ( taking the `` brain '' as a product ) with the `` brain '' as the point of the cone : . the action right representation then might be taken as conceptually representing the function of the brain as integrating or coordinating multiple motor outputs in the performance of an action .fig13-biprod-right-rep.eps figure 13 : brain as coordinating motor outputs into an action . putting the two representations togethergives the butterfly diagram for a brain .fig14-biprod-brain.eps figure 14 : conceptual model of a perceiving and acting brain .this gives a conceptual model of a single organ that integrates sensory inputs into a perception and coordinates motor outputs into an action , i.e. , a brain .in view of the success of category theory in modern mathematics , it is perfectly natural to try to apply it in the life and cognitive sciences .many different approaches need to be tried to see which ones , if any , will find `` where theory lives '' ( and will be something more than just applying biological names to bits of pure math ) . the approach developed here differs from other approaches in several ways , but the most basic difference is the use of heteromorphisms to represent interactions between quite different entities ( i.e. , objects in different categories ) .heteromorphisms also provide the natural setting to formulate universal mapping problems and their solutions as left or right representations of hets . in spite of abounding in the wilds of mathematical practice ,hets are not recognized in the orthodox presentations of category theory .one consequence is that the notion of an adjunction appears as one atomic concept that can not be factored into separate parts .but that is only a artifact of the homs - only treatment .the heteromorphic treatment shows that an adjunction factors naturally into a left and right representation of the hets going from one category to another where , in general , one representation might exist without the other .one benefit of this heteromorphic factorization is that the two atomic concepts of left and right representations can then be recombined in a new way to form the cognate recombinant concept of a brain functor .the main conclusion of the paper is that this concept of a brain functor seems to fit very well as an abstract and conceptual but non - trivial description of the dual universal functions of a brain , perception ( using the sensory or afferent systems ) and action ( using the motor or efferent systems ) .since the concept of a brain functor requires hets for its formulation , it is important to consider the role of hets in category theory . the homomorphisms or homs between the objects of a category given by a hom bifunctor .in the same manner , the heteromorphisms or hets from the objects of a category to the objects of a category are given by a het bifunctor .-valued _ profunctors _ , _ distributors _ , or _ correspondences _ are formally the same as het bifunctors . 
the -bifunctor gives the rigorous way to handle the composition of a het in [thin arrows for hets] with a homomorphism or hom in [thick arrows for homs] and a hom in . for instance, the composition is the het that is the image of under the map: . similarly, the composition is the het that is the image of under the map: . this is all perfectly analogous to the use of -functors to define the composition of homs. since both homs and hets (e.g., injection of generators into a group) are common morphisms used in mathematical practice, both types of bifunctors formalize standard mathematical machinery. the homs-only orientation may go back to the original conception of category theory `` as a continuation of the klein erlanger programm, in the sense that a geometrical space with its group of transformations is generalized to a category with its algebra of mappings '' (eilenberg and maclane). while chimeras do not appear in the orthodox `` ontological zoo '' of category theory, they abound in the wilds of mathematical practice. in spite of the reference to `` working mathematician '' in the title of maclane's text, one might seriously doubt that any working mathematician would give, say, the universal mapping property of free groups using the `` device '' of the underlying set functor instead of the traditional description given in the left representation diagram (which does not even mention ), as can be seen in most any non-category-theoretic text that treats free groups. for instance, consider the following description in nathan jacobson's text:

`` to summarize: given the set we have obtained a map of into a group such that if is any group and , is any map of into then we have a unique homomorphism of into , making the following diagram commutative: ''

in jacobson's diagram, only the morphism is a group homomorphism; the vertical and diagonal arrows are called `` maps '' and are set-to-group hets so it is the diagram for a left representation. the notion of a homomorphism is so general that hets can always be recast as `` homs '' in a larger category variously called a _directly connected category_ (since pareigis calls the het bifunctor a `` connection ''), a _cograph_ category, or, more colloquially, a _collage_ category (since it combines quite different types of objects and morphisms into one category in total disregard of any connection to the erlangen program).
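before turning to the collage-category repackaging, here is a small computational illustration of the ump just quoted from jacobson. to keep the code short it uses the free monoid (words over the generators) instead of the free group; the universal statement has exactly the same shape but no word reduction is needed, and all names below are illustrative.

```python
from functools import reduce

def eta(x):
    """Canonical insertion of a generator x into the free monoid X* (words over X)."""
    return (x,)

def extend(f, op, unit):
    """Free-monoid analogue of the free-group UMP: given any map f from the generators
    X into a monoid (M, op, unit), return the unique monoid homomorphism
    f_star: X* -> M with f_star(eta(x)) == f(x)."""
    def f_star(word):
        return reduce(op, (f(x) for x in word), unit)
    return f_star

# example: generators mapped into the additive monoid of integers
f = {'a': 1, 'b': 10}.get
f_star = extend(f, lambda m, n: m + n, 0)
assert f_star(('a', 'b', 'a')) == 12
assert f_star(eta('b')) == f('b')          # the diagram commutes on the generators
```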
the _collage category_ of a het bifunctor, denoted , has as objects the disjoint union of the objects of and . the _homs_ of the collage category are defined differently according to the two types of objects. for and in , the homs are the elements of , the hom bifunctor for , and similarly for objects and in , the homs are the elements of . for the different types of objects such as from and from , the `` homs '' are the elements of and there are no homs in the other direction in the collage category. does the collage category construction show that `` hets '' are unnecessary in category theory and that homs suffice? since all the information given in the het bifunctor has been repackaged in the collage category, any use of hets can always be repackaged as a use of the `` -to- homs '' in the collage category. in any application, like the previous example of the universal mapping property (ump) of the free-group functor as a left representation, one must distinguish between the two types of objects and the three types of `` homs '' in the collage category. suppose in jacobson's example, one wanted to `` avoid '' having the different `` maps '' and group homomorphisms by formulating the left representation in the collage category formed from the category of , the category of groups, and the het bifunctor , , for set-to-group maps. since the ump does not hold for arbitrary objects and homs in the collage category , , one would have to differentiate between the `` set-type objects '' and the `` group-type objects '' as well as between the `` mixed-type homs '' in and the `` pure-type homs '' in . then the left representation ump of the free-group functor could be formulated in the het-free collage category as follows.

`` for every set-type object , there is a group-type object and a mixed-type hom such that for any mixed-type hom from the set-type object to any group-type object , there is a unique pure-type hom such that . ''

thus the answer to the question `` are hets really necessary? '' is `` no! '' since one can always use sufficient circumlocutions with the _different_ types of `` homs '' in a collage category. jokes aside, the collage category formulation is essentially only a reformulation of the left representation ump using clumsy circumlocutions.
working mathematicians use phrases like `` mappings '' or `` morphisms '' to refer to hets in contrast to homomorphisms, and `` mixed-type homs '' does not seem to be improved phraseology for hets. there is, however, a more substantive point, i.e., the general umps of left or right representations show that the hets between objects of different categories can be represented by homs _within_ the codomain category or _within_ the domain category, respectively. if one conflates the hets and homs in a collage category, then the point of the representation is rather obscured (since it is then one set of `` homs '' in a collage category being represented by another set of homs in the same category). there is another het-avoidance device afoot in the homs-only treatment of adjunctions. for instance, the left-representation ump of the free-group functor can, for each , be formulated as the natural isomorphism: . but if we fix and use the underlying set functor , then there is trivially the right representation: . putting the two representations together, we have the heteromorphic treatment of an adjunction first formulated by pareigis: without any mention of hets. moreover, the het-avoidance device of the underlying set functor allows the ump of the free group functor to be reformulated with sufficient circumlocutions to avoid mentioning hets.

`` for each set , there is a group and a set hom such that for any set hom from the set to the underlying set of any group , there is a unique group hom over in the other category such that the set hom image of the group hom back in the original category satisfies . ''
such het-avoidance circumlocutions have no structural significance since there is a general adjunction representation theorem (ellerman) that _all_ adjoints can be represented, up to isomorphism, as arising from the left and right representations of a het bifunctor. even though the homs-only formulation of an adjunction only ignores the underlying hets (due to the adjunction representation theorem), is that formulation sufficient to give all umps? or are there important universal constructions that are not either left or right adjoints? probably the most important example is the tensor product. the universal mapping property of the tensor product is particularly interesting since it is a case where the heteromorphic treatment of the ump is forced (under one disguise or another). the tensor product functor is _not_ a left adjoint so the usual device of using the other functor (e.g., a forgetful or diagonal functor) to avoid mentioning hets is not available. for modules (over some commutative ring), one category is the product category where the objects are ordered pairs of -modules and the other category is just the category of -modules. the values of the -bifunctor are the bilinear functions. then the tensor product functor given by gives a left representation:

$\begin{array}{ccc}\langle a,b\rangle & & \\ \eta_{\langle a,b\rangle}\downarrow & \searrow^{f} & \\ a\otimes b & \underset{\exists!\,f_{\ast}}{\longrightarrow} & c\end{array}$

for instance, in maclane and birkhoff's _algebra_ textbook, they explicitly use hets (bilinear functions) starting with the special case of an -module (for a commutative ring) and then stating the universal mapping property of the tensor product using the left representation diagram like any other working mathematicians. for any -module , there is an -module and a canonical bilinear het such that given any bilinear het to an -module , there is a unique -module hom such that the following diagram commutes.
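the tensor-product ump above can be checked numerically for finite-dimensional real vector spaces: a bilinear map is fixed by its values on basis pairs, and the induced linear map on the tensor product acts on kronecker products. the dimensions and maps below are made up for illustration and are not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
dim_a, dim_b, dim_c = 3, 4, 2

# a bilinear map h: A x B -> C is fixed by its values on basis pairs,
# i.e. by a (dim_c, dim_a, dim_b) array T with h(a, b)_k = sum_ij T[k,i,j] a_i b_j
T = rng.normal(size=(dim_c, dim_a, dim_b))
def h(a, b):
    return np.einsum('kij,i,j->k', T, a, b)

# the induced *linear* map h_*: A (x) B -> C acts on the tensor product space,
# here modelled as R^(dim_a * dim_b) with the canonical bilinear map eta(a, b) = kron(a, b)
H = T.reshape(dim_c, dim_a * dim_b)
def h_star(t):
    return H @ t

a, b = rng.normal(size=dim_a), rng.normal(size=dim_b)
# the UMP triangle commutes: h equals h_* composed with the canonical bilinear map
assert np.allclose(h(a, b), h_star(np.kron(a, b)))
```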
lambek, j. 1981. the influence of heraclitus on modern mathematics. in _scientific philosophy today: essays in honor of mario bunge_, edited by j. agassi and r. s. cohen, 111-21. boston: d. reidel publishing co.

louie, a. h. 1985. categorical system theory. in _theoretical biology and complexity: three essays on the natural philosophy of complex systems_, edited by robert rosen, 68-163. orlando fl: academic press.

magnan, francois, and gonzalo e. reyes. 1994. category theory as a conceptual tool in the study of cognition. in _the logical foundations of cognition_, edited by john macnamara and gonzalo e. reyes, 57-90. new york: oxford university press.

makkai, michael. structuralism in mathematics. in _language, logic, and concepts: essays in memory of john macnamara_, edited by r. jackendoff, p. bloom, and k. wynn, 43-66. cambridge: mit press (a bradford book).

phillips, steven, and william h. wilson. 2014. chapter 9: a category theory explanation for systematicity: universal constructions. in _systematicity and cognitive architecture_, edited by p. calvo and j. symons, 227-49. cambridge, ma: mit press.

wood, richard j. 2004. ordered sets via adjunctions. in _categorical foundations. encyclopedia of mathematics and its applications vol. 97_, edited by maria cristina pedicchio and walter tholen, 5-47. cambridge: cambridge university press.
there is some consensus among orthodox category theorists that the concept of adjoint functors is the most important concept contributed to mathematics by category theory . we give a heterodox treatment of adjoints using heteromorphisms ( object - to - object morphisms between objects of different categories ) that parses an adjunction into two separate parts ( left and right representations of heteromorphisms ) . then these separate parts can be recombined in a new way to define a cognate concept , the brain functor , to abstractly model the functions of perception and action of a brain . the treatment uses relatively simple category theory and is focused on the interpretation and application of the mathematical concepts . the mathematical appendix is of general interest to category theorists as it is a defense of the use of heteromorphisms as a natural and necessary part of category theory .
a central goal of the gaia mission is to teach us how the galaxy functions and how it was assembled .we can only claim to understand the structure of the galaxy when we have a dynamical model galaxy that reproduces the data .therefore the construction of a satisfactory dynamical model is in a sense a primary goal of the gaia mission , for this model will encapsulate the understanding of galactic structure that we have gleaned from gaia .preliminary working models that are precursors of the final model will also be essential tools as we endeavour to make astrophysical sense of the gaia catalogue .consequently , before launch we need to develop a model - building capability , and with it produce dynamical models that reflect fairly fully our current state of knowledge .the modern era of galaxy models started in 1980 , when the first version of the bahcall - soneira model appeared .this model broke new ground by assuming that the galaxy is built up of components like those seen in external galaxies .earlier work had centred on attempts to infer three - dimensional stellar densities by directly inverting the observed star counts . however , the solutions to the star - count equations are excessively sensitive to errors in the assumed obscuration and the measured magnitudes , so in practice it is essential to use the assumption that our galaxy is similar to external galaxies to choose between the infinity of statistically equivalent solutions to the star - count equations .bahcall & soneira showed that a model inspired by data for external galaxies that had only a dozen or so free parameters could reproduce the available star counts . did not consider kinematic data , but updated the classical work on mass models by fitting largely kinematic data to a mass model that comprised a series of components like those seen in external galaxies .these data included the oort constants , the tangent - velocity curve , the escape velocity at the sun and the surface density of the disk near the sun . were the first to fit both kinematic and star - count data to a model of the galaxy that was inspired by observations of external galaxies .they broke the disk down into seven sub - populations by age .then they assumed that motion perpendicular to the plane is perfectly decoupled from motion within the plane , and further assumed that as regards vertical motion , each subpopulation is an isothermal component , with the velocity dispersion determined by the observationally determined age - velocity dispersion relation of disk stars .each sub - population was assumed to form a disk of given functional form , and the thickness of the disk was determined from the approximate formula /\sigma^2\} ] ensures that a discrepancy between and impacts only in so far as the orbit contributes to .the right side starts with a minus sign to ensure that is decreased if and the orbit tends to increase . recently demonstrated the value of the syer & tremaine algorithm by using it to construct a dynamical model of the inner galaxy in the pre - determined potential of .n - body simulations have been enormously important for the development of our understanding of galactic dynamics . 
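the force-of-change equation for the orbital weights is garbled in this copy of the text, so the sketch below is only a schematic weight-adjustment step in the spirit of the syer & tremaine scheme described above: weights are nudged according to the normalized discrepancies between model and target observables, and an orbit is only affected through observables to which it actually contributes. the step size, the normalization by sigma and the toy data are my choices, not the paper's.

```python
import numpy as np

def m2m_weight_update(w, K, y_target, sigma, eps=1e-2, dt=1.0):
    """One Euler step of a made-to-measure weight adjustment: the model observables
    are y_j = sum_i K[j, i] * w_i; the minus sign decreases w_i when orbit i tends
    to increase an observable that is already too large, and K[j, i] = 0 means the
    discrepancy in observable j does not touch orbit i at all."""
    y_model = K @ w                                   # current model observables
    delta = (y_model - y_target) / sigma              # normalized discrepancies
    dw_dt = -eps * w * (K.T @ delta)                  # force of change on the weights
    return np.clip(w + dt * dw_dt, 0.0, None)         # keep the weights non-negative

# toy usage with made-up kernels and targets (purely illustrative)
rng = np.random.default_rng(2)
K = rng.random((5, 50))                               # 5 observables, 50 orbits
w = np.full(50, 1.0 / 50)
y_target = K @ (np.full(50, 1.0 / 50) * rng.uniform(0.5, 1.5, 50))
sigma = 0.05 * np.abs(y_target) + 1e-6
for _ in range(2000):
    w = m2m_weight_update(w, K, y_target, sigma)
```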
to date, n-body simulations have been of rather limited use in modelling specific galaxies, because the structure of an n-body model has been determined in an obscure way by the initial conditions from which it is started. in fact, a major motivation for developing other modelling techniques has been the requirement for initial conditions that will lead to n-body models that have a specified structure (e.g. ). notwithstanding this difficulty, was able to find an n-body model that qualitatively fits observations of the inner galaxy. it will be interesting to see whether the syer & tremaine algorithm can be used to refine a model like that of fux until it matches all observational constraints. when trying to understand something that is complex, it is best to proceed through a hierarchy of abstractions: first we paint a broad-brush picture that ignores many details. then we look at areas in which our first picture clearly conflicts with reality, and understand the reasons for this conflict. armed with this understanding, we refine our model to eliminate these conflicts. then we turn to the most important remaining areas of disagreement between our model and reality, and so on. the process terminates when we feel that we have nothing new or important to learn from residual mismatches between theory and measurement. this logic is nicely illustrated by the dynamics of the solar system. we start from the model in which all planets move on kepler ellipses around the sun. then we consider the effect on planets such as the earth of jupiter's gravitational field. to this point we have probably assumed that all bodies lie in the ecliptic, and now we might consider the non-zero inclinations of orbits. one by one we introduce disturbances caused by the masses of the other planets. then we might introduce corrections to the equations of motion from general relativity, followed by consideration of effects that arise because planets and moons are not point particles, but spinning non-spherical bodies. as we proceed through this hierarchy of models, our orbits will proceed from periodic, to quasi-periodic, to chaotic. models that we ultimately reject as oversimplified will reveal structure that was previously unsuspected, such as bands of unoccupied semi-major axes in the asteroid belt. the chaos that we will ultimately have to confront will be understood in terms of resonances between the orbits we considered in the previous level of abstraction. the impact of hipparcos on our understanding of the dynamics of the solar neighbourhood gives us a flavour of the complexity we will have to confront in the gaia catalogue. when the density of stars in the plane was determined, it was found to be remarkably lumpy, and the lumps contained old stars as well as young, so they could not be just dissolving associations, as the classical interpretation of star streams supposed. now that the radial velocities of the hipparcos survey stars are available, it has become clear that the hyades-pleiades and sirius moving groups are very heterogeneous as regards age. evidently these structures do not reflect the patchy nature of star formation, but have a dynamical origin.
they are probably generated by transient spiral structure , so they reflect departures of the galaxy from both axisymmetry and time - independence .such structures will be most readily understod by perturbing a steady - state , axisymmetric galaxy model .a model based on torus mapping is uniquely well suited to such a study because its orbits are inherently quasi - periodic structures with known angle - action coordinates .consequently , we have everything we need to use the powerful techniques of canonical perturbation theory . even in the absence of departures from axisymmetry or time - variation in the potential, resonances between the three characteristic frequencies of a quasi - periodic orbit can deform the orbital structure from that encountered in analytically integrable potentials .important examples of this phenomenon are encountered in the dynamics of triaxial elliptical galaxies , where resonant ` boxlets ' almost entirely replace box orbits when the potential is realistically cuspy , and in the dynamics of disk galaxies , where the 1:1 resonance between radial and vertical oscillations probably trapped significant numbers of thick - disk stars as the mass of the thin disk built up . has shown that such families of resonant orbits may be very successfully modelled by applying perturbation theory to orbits obtained by torus mapping .if the resonant family is exceptionally large , one may prefer to obtain its orbits directly by torus mapping rather than through perturbation theory .figures [ fig1 ] and [ fig2 ] show examples of each approach to a resonant family .both figures show surfaces of section for motion in a planar bar . in figure [ fig1 ]a relatively weak resonance is successfuly handled through perturbation theory , while in figure [ fig2 ] a more powerful resonance that induces significant chaos is handled by directly mapping isochrone orbits into the resonant region .these examples demonstrate that if we obtain orbits by torus mapping , we will be able to discover what the galaxy would look like in the absence of any particular resonant family or chaotic region , so we will be able to ascribe particular features in the data to particular resonances and chaotic zones .this facility will make the modelling process more instructive than it would be if we adopted a simple orbit - based technique .a dynamical model galaxy will consist of a gravitational potential together with distribution functions for each of several stellar populations .each distribution function may be represented by a set of orbital weights , and the populations will consist of probability distributions in mass , metallicity and age that a star picked from the population has the specified characteristics .thus a galaxy model will contain an extremely large number of parameters , and fitting these to the data will be a formidable task .since so much of the galaxy will be hidden from gaia by dust , interpretation of the gaia catalogue will require a knowledge of the three - dimensional distribution of dust .such a model can be developed by the classical method of comparing measured colours with the intrinsic colours of stars of known spectral type and distance . at large distances from the sun , even gaia s small parallax errors will give rise to significantly uncertain distances , and these uncertainties will be an important limitation on the reliability of any dust model that one builds in this way . 
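a toy example may make the angle-action picture behind torus mapping concrete. for a two-dimensional harmonic oscillator the angle-action description is exact, so a quasi-periodic orbit and a crude surface of section can be generated directly from actions and frequencies; for a realistic galactic potential the torus map has to be constructed numerically, and the numbers below are purely illustrative.

```python
import numpy as np

def torus_orbit(J, omega, theta0, t):
    """Quasi-periodic orbit on the torus labelled by the actions J: the angles advance
    linearly, theta_i(t) = theta0_i + omega_i * t, and for a 2-d harmonic oscillator
    the map back to phase space is exact: x_i = sqrt(2 J_i / omega_i) cos(theta_i),
    v_i = -sqrt(2 J_i omega_i) sin(theta_i)."""
    theta = (theta0[:, None] + omega[:, None] * t) % (2 * np.pi)
    x = np.sqrt(2 * J / omega)[:, None] * np.cos(theta)
    v = -np.sqrt(2 * J * omega)[:, None] * np.sin(theta)
    return theta, x, v

J = np.array([1.0, 0.4])                    # actions (conserved labels of the torus)
omega = np.array([1.0, np.sqrt(2)])         # incommensurate frequencies: the orbit never closes
theta0 = np.zeros(2)
t = np.linspace(0.0, 2000.0, 400_000)
theta, x, v = torus_orbit(J, omega, theta0, t)

# crude surface of section: record (x_1, v_1) whenever theta_2 wraps through zero;
# for a quasi-periodic orbit these points fill out a smooth closed curve
crossing = np.where(np.diff(theta[1]) < 0)[0]
section = np.column_stack([x[0, crossing], v[0, crossing]])
```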
dynamical modelling offers the opportunity to refine our dust model because newton s laws of motion enable us to predict the luminosity density in obscured regions from the densities and velocities that we see elsewhere , and hence to detect obscuration without using colour data .moreover , they require that the luminosity distributions of hot components are intrinsically smooth , so fluctuations in the star counts of these populations at high spatial frequencies must arise from small scale structure in the obscuring dust .therefore , we should solve for the distribution of dust at the same time as we are solving for the potential and the orbital weights .in principle one would like to fit a galaxy model to the data by predicting from the model the probability density of detecting a star at given values of the catalogue variables , such as celestial coordinates , parallax , and proper motons , and then evaluating the likelihood , where the product runs over stars in the catalogue and with the measured values and the associated uncertainties .unfortunately , it is likely to prove difficult to obtain the required probability density from an orbit - based model , and we will be obliged to compare the real catalogue to a pseudo - catalogue derived from the current model . moreover , standard optimization algorithms are unlikely to find the global maximum in without significant astrophysical input from the modeller . in any event , evaluating for each of observed stars is a formidable computational problem . devising efficient ways of fitting models to the dataclearly requires much more thought .fine structure in the galaxy s phase space may provide crucial assistance in fitting a model to the data .two inevitable sources of fine structure are ( a ) resonances , and ( b ) tidal streams .resonances will sometimes be marked by a sharp increase in the density of stars , as a consequence of resonant trapping , while other resonances show a deficit of stars .suppose the data seem to require an enhanced density of stars at some point in action space and you suspect that the enhancement is caused by a particular resonance .by virtue of errors in the adopted potential , the frequencies will not actually be in resonance at the centre of the enhancement . by appropriate modification of will be straightforward to bring the frequencies into resonance . 
by reducing the errors in the estimated actions of orbits , a successful update of probably enhance the overdensity around the resonance .in fact , one might use the visibility of density enhancements to adjust very much as the visibility of stellar images is used with adaptive optics to configure the telescope optics .a tidal stream is a population of stars that are on very similar orbits the actions of the stars are narrowly distributed around the actions of the orbit on which the dwarf galaxy or globular cluster was captured .consequently , in action space a tidal stream has higher contrast than it does in real space , where the stars diverging angle variables gradually spread the stars over the sky .errors in will tend to disperse a tidal stream in action space , so again can be tuned by making the tidal stream as sharp a feature as possible .dynamical galaxy models have a central role to play in attaining gaia s core goal of determining the structure and unravelling the history of the milky way .even though people have been building galaxy models for over half a century , we are still only beginning to construct fully dynamical models , and we are very far from being able to build multi - component dynamical models of the type that the gaia will require .at least three potentially viable galaxy - modelling technologies can be identified .one has been extensively used to model external galaxies , one has the distinction of having been used to build the currently leading galaxy model , while the third technology is the least developed but potentially the most powerful . at this pointwe would be wise to pursue all three technologies .once constructed , a model needs to be confronted with the data . on account of the important roles in this confrontation that will be played by obscuration and parallax errors , there is no doubt in my mind that we need to project the models into the space of gaia s catalogue variables .this projection is simple in principle , but will be computationally intensive in practice .the third and final task is to change the model to make it fit the data better .this task is going to be extremely hard , and it is not clear at this point what strategy we should adopt when addressing it .it seems possible that features in the action - space density of stars associated with resonances and tidal streams will help us to home in on the correct potential .there is much to do and it is time we started doing it if we want to have a reasonably complete box of tools in hand when the first data arrive in 20122013 .the overall task is almost certainly too large for a single institution to complete on its own , and the final galaxy - modelling machinery ought to be at the disposal of the wider community than the dynamics community since it will be required to evaluate the observable implications of any change in the characteristics or kinematics of stars or interstellar matter throughout the galaxy .therefore , we should approach the problem of building galaxy models as an aspect of infrastructure work for gaia , rather than mere science exploitation .i hope that in the course of the next year interested parties will enter into discussions about how we might divide up the work , and define interface standards that will enable the efforts of different groups to be combined in different combinations .it is to be hoped that these discussions lead before long to successful applications to funding bodies for the resources that will be required to put the necessary infrastructure in 
place by 2012 .
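as a concrete illustration of the action-space sharpening idea discussed above, the toy sketch below generates a mock 'stream' of stars that share an orbit in a kepler potential (chosen only because its radial action is analytic) and shows that the spread of the radial action is smallest when the trial potential matches the one that generated the stream. the potential, the stream parameters and the figure of merit are assumptions for illustration, not a model of a real stream or of the galactic potential.

```python
# Toy illustration: sharpening a "tidal stream" in action space by tuning the
# potential.  A spherical Kepler potential is assumed purely because its radial
# action is analytic (J_r = GM/sqrt(-2E) - L); all numbers are illustrative.
import numpy as np

GM_TRUE = 1.0
E0, L0 = -0.5, 0.9                      # common energy and angular momentum

def mock_stream(n=200, seed=1):
    """Stars sharing (E0, L0) under GM_TRUE, spread over orbital phase."""
    rng = np.random.default_rng(seed)
    a = -GM_TRUE / (2.0 * E0)                           # semi-major axis
    ecc = np.sqrt(1.0 + 2.0 * E0 * L0**2 / GM_TRUE**2)  # eccentricity
    r = rng.uniform(a * (1 - ecc) * 1.001, a * (1 + ecc) * 0.999, n)
    vt = L0 / r
    vr = np.sqrt(np.maximum(2.0 * (E0 + GM_TRUE / r) - vt**2, 0.0))
    vr *= rng.choice([-1.0, 1.0], n)
    return r, vr, vt

def radial_action(gm, r, vr, vt):
    E = 0.5 * (vr**2 + vt**2) - gm / r
    L = r * vt
    return gm / np.sqrt(-2.0 * E) - L    # Kepler radial action

r, vr, vt = mock_stream()
for gm in (0.8, 0.9, 1.0, 1.1, 1.2):
    jr = radial_action(gm, r, vr, vt)
    print(f"GM = {gm:4.2f}   spread of J_r = {np.std(jr):.4f}")
```

in this toy set-up the action-space spread collapses at the generative value of the potential parameter, which is the behaviour one would exploit when tuning a trial galactic potential against a real stream.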
techniques for the construction of dynamical galaxy models should be considered essential infrastructure that should be put in place before gaia flies . three possible modelling techniques are discussed . although one of these seems to have significantly more potential than the other two , at this stage work should be done on all three . a major effort is needed to decide how to make a model consistent with a catalogue such as that which gaia will produce . given the complexity of the problem , it is argued that a hierarchy of models should be constructed , of ever increasing complexity and quality of fit to the data . the potential that resonances and tidal streams have to indicate how a model should be refined is briefly discussed .
the analysis of dynamic processes taking place in complex networks is a major research area with a wide range of applications in social , biological , and technological systems .the spread of information in online social networks , the evolution of an epidemic outbreak in human contact networks , and the dynamics of cascading failures in the electrical grid are relevant examples of these processes .while major advances have been made in this field , most modeling and analysis techniques are specifically tailored to study dynamic processes taking place in static networks .however , empirical observations in social , biological , and financial networks illustrate how real - world networks are constantly evolving over time .unfortunately , the effects of temporal structural variations in the dynamics of networked systems remain poorly understood . in the context of temporal networks , we are specially interested in the interplay between the dynamics on networks ( i.e. , the dynamics of processes taking place in the network ) and the dynamics of networks ( i.e. , the temporal evolution of the network structure ) . although the dynamics on and of networks are usually studied separately , there are many cases in which the evolution of the network structure is heavily influenced by the dynamics of processes taking place in the network .one of such cases is found in the context of epidemiology , since healthy individuals tend to avoid contact with infected individuals in order to protect themselves against the disease a phenomenon called _ social distancing_ . as a consequence of social distancing, the structure of the network adapts to the dynamics of the epidemics taking place in the network .similar adaptation mechanisms have been studied in the context of power networks , biological and neural networks and on - line social networks . despite the relevance of network adaptation mechanisms , their effects on the network dynamics are not well understood . in this research direction , we find the seminal work by gross et al . in , where a simple adaptive rewiring mechanism was proposed in the context of epidemic models . in this model , a susceptible node can cut edges connecting him to infected neighbors and form new links to _ any _ randomly selected susceptible nodes without structural constraint in the formation of new links . despite its simplicity, this adaptation mechanism induces a complex bifurcation diagram including healthy , oscillatory , bistable , and endemic epidemic states .several extensions of this work can be found in the literature , where the authors assume homogeneous infection and recovery rates in the network .another model that is specially relevant to our work is the adaptive susceptible - infected - susceptible ( asis ) model proposed in . in this model , edges in a given contact network can be temporarily removed in order to prevent the spread of the epidemic .an interesting feature of the asis model is that , in contrast with gross model , it is able to account for arbitrary contact patterns , since links are constrained to be part of a given contact graph . despite its modeling flexibility ,analytical results for the asis model are based on the assumption of homogeneous contact patterns ( i.e. , the contact graph is complete ) , as well as homogeneous node and edge dynamics ( i.e. , nodes present the same infection and recovery rates , and edges share the same adaptation rates ) . 
as a consequence of the lack of tools to analyze network adaptation mechanisms, there is also an absence of effective methodologies for actively utilizing adaptation mechanisms for containing spreading processes. although we find in the literature a few attempts in this direction, most of them rely on extensive numerical simulations, on assuming homogeneous contact patterns, or on assuming homogeneous node and edge dynamics. in contrast, for controlling epidemic processes over static networks we find a plethora of tools based on game theory or convex optimization.

in this paper, we study adaptation mechanisms over arbitrary contact networks. in particular, we derive an explicit expression for a lower bound on the epidemic threshold of the asis model for arbitrary networks, as well as heterogeneous node and edge dynamics. in the case of homogeneous node and edge dynamics, we show that the lower bound is proportional to the epidemic threshold of the standard sis model over a static network. furthermore, based on our results, we propose an efficient algorithm for optimally tuning the adaptation rates of an arbitrary network in order to eradicate an epidemic outbreak in the asis model. we confirm the tightness of the proposed lower bounds with several numerical simulations and compare our optimal adaptation rates with popular centrality measures.

in this section, we describe the adaptive susceptible-infected-susceptible (asis) model over _arbitrary_ networks with _heterogeneous_ node and edge dynamics (heterogeneous asis model for short). we start our exposition by considering a spreading process over a time-varying contact graph, where the node set is fixed and the edge set varies in time. for any time t, we write a_{ij}(t) = 1 if the edge {i, j} is present at time t, and a_{ij}(t) = 0 otherwise. we assume that the initial contact graph is strongly connected. edges in the initial graph appear and disappear over time according to the following markov processes:

\[
\begin{aligned}
p\big(a_{ij}(t+h) = 0 \mid a_{ij}(t) = 1\big) &= \phi_{ij}\, x_i(t)\, h + \phi_{ji}\, x_j(t)\, h + o(h), &\text{(cut)}\\
p\big(a_{ij}(t+h) = 1 \mid a_{ij}(t) = 0\big) &= a_{ij}(0)\, \psi_{ij}\, h + o(h), &\text{(rewire)}
\end{aligned}
\]

where x_i(t) indicates whether node i is infected at time t, and the parameters \phi_{ij} and \psi_{ij} are called the _cutting_ and _reconnecting_ rates. notice that the transition rate in (cut) depends on x_i(t) and x_j(t), inducing an adaptation mechanism of the network structure to the state of the epidemics. the transition probability in (cut) can be interpreted as a protection mechanism in which edge {i, j} is stochastically removed from the network if either node i or node j is infected. more specifically, because of the first summand (respectively, the second summand) in (cut), whenever node i (respectively, node j) is infected, edge {i, j} is removed from the network according to a poisson process with rate \phi_{ij} (respectively, rate \phi_{ji}). on the other hand, the transition probability in (rewire) describes a mechanism by which a `cut' edge is `reconnected' into the network according to a poisson process with rate \psi_{ij} (see figure [fig:adaptive]). notice that we include the term a_{ij}(0) in (rewire) to guarantee that only edges present in the initial contact graph can be added later on by the reconnecting process. in other words, we constrain the set of edges in the adaptive network to be a part of the arbitrary contact graph.

in this section, we derive a lower bound on the epidemic threshold for the heterogeneous asis model. for each of the node and edge transitions above, we introduce a poisson counter with the corresponding rate.
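before turning to the threshold analysis, a minimal discrete-event (gillespie-type) simulation of the adaptation mechanism defined above is sketched below, with homogeneous rates for brevity. the contact graph, all rate values and the initial condition are illustrative assumptions, not the heterogeneous setting analyzed in this paper.

```python
# Minimal Gillespie-type simulation of adaptive SIS (ASIS) dynamics with
# homogeneous rates.  The contact graph, all rate values and the initial
# condition are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_asis(edges, n, beta=0.6, delta=1.0, phi=0.5, psi=0.4,
                  t_max=50.0, i0=2):
    """edges: list of (i, j) pairs of the initial contact graph G(0)."""
    active = {tuple(e): True for e in edges}    # a_ij(t); only edges of G(0)
    x = np.zeros(n, dtype=int)
    x[rng.choice(n, size=i0, replace=False)] = 1
    t = 0.0
    while t < t_max and x.any():
        rates, events = [], []
        for i in range(n):                       # recoveries
            if x[i]:
                rates.append(delta); events.append(("rec", i))
        for (i, j), on in active.items():
            if on:
                if x[i] != x[j]:                 # infection across an active edge
                    rates.append(beta); events.append(("inf", i if x[j] else j))
                if x[i] or x[j]:                 # social-distancing cut
                    rates.append(phi * (x[i] + x[j])); events.append(("cut", (i, j)))
            else:                                # reconnection of a cut edge
                rates.append(psi); events.append(("rej", (i, j)))
        total = sum(rates)
        t += rng.exponential(1.0 / total)
        kind, target = events[rng.choice(len(events), p=np.array(rates) / total)]
        if kind == "rec":
            x[target] = 0
        elif kind == "inf":
            x[target] = 1
        elif kind == "cut":
            active[target] = False
        else:
            active[target] = True
    return t, int(x.sum()), sum(active.values())

if __name__ == "__main__":
    # illustrative contact graph: a ring of 30 nodes plus a few random chords
    n = 30
    ring = [tuple(sorted((i, (i + 1) % n))) for i in range(n)]
    chords = [tuple(sorted(rng.choice(n, 2, replace=False))) for _ in range(10)]
    edge_list = sorted(set(ring) | set(chords))
    t, infected, edges_on = simulate_asis(edge_list, n)
    print(f"stopped at t = {t:.1f}: {infected} infected, {edges_on} active edges")
```

for the static contact graph, the classical sis threshold is approximately \beta/\delta = 1/\lambda_{\max}(a); raising the cutting rate in the simulation effectively pushes the adaptive threshold above this value, which is the qualitative effect that the bound derived below makes precise.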
in what follows , we assume all poisson counters to be stochastically independent .then , from the two equations in , the evolution of the nodal states can be described by the following set of stochastic differential equations for all .similarly , from and , the evolution of the edges can be described by the following set of stochastic differential equations : for all . by, the expectation ] and 12 & 12#1212_12%12[1][0] link:\doibase 10.1038/30918 [ * * , ( ) ] _ _( , ) _ _ ( , ) link:\doibase 10.1007/s00779 - 005 - 0046 - 3 [ * * , ( ) ] * * ( ) in link:\doibase 1102.0629v1 [ _ _ ] ( ) pp .link:\doibase 10.1093/bib / bbp057 [ * * , ( ) ] link:\doibase 10.1038/nature02555 [ * * , ( ) ] link:\doibase 10.1038/nbt.1522 [ * * , ( ) ] link:\doibase 10.1140/epjb / e20020151 [ * * , ( ) ] link:\doibase 10.1016/j.physrep.2012.03.001 [ * * , ( ) ] link:\doibase 10.3201/eid1201.051371 [ * * , ( ) ] link:\doibase 10.1098/rsif.2010.0142 [ * * , ( ) ] link:\doibase 10.1209/epl / i2004 - 10533 - 6 [ * * , ( ) ] link:\doibase 10.1038/304158a0 [ * * , ( ) ] link:\doibase 10.1161/01.atv.0000069625.11230.96 [ * * , ( ) ] link:\doibase 10.1186/s40649 - 015 - 0023 - 6 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.96.208701 [ * * , ( ) ] link:\doibase 10.1098/rsif.2007.1229 [ * * , ( ) ] link:\doibase 10.1007/s10867 - 008 - 9060 - 9 [ * * , ( ) ] link:\doibase 10.1103/physreve.82.036116 [ * * , ( ) ] link:\doibase 10.1103/physreve.83.026102 [ * * , ( ) ] link:\doibase 10.1088/1742 - 5468/2012/08/p08018 [ * * , ( ) ] link:\doibase 10.1007/s00285 - 012 - 0555 - 4 [ * * , ( ) ] link:\doibase 10.1103/physreve.90.022801 [ * * , ( ) ] link:\doibase 10.1103/physreve.88.042802 [ * * , ( ) ] link:\doibase 10.1103/physreve.92.030801 [ * * , ( ) ] link:\doibase 10.1103/physreve.88.042801 [ * * , ( ) ] link:\doibase 10.1186/1471 - 2458 - 12 - 679 [ * * , ( ) ] link:\doibase 10.1103/physreve.85.036108 [ * * , ( ) ] link:\doibase 10.1109/tcns.2015.2426755 [ * * , ( ) ] link:\doibase 10.1016/j.jcss.2006.02.003 [ * * , ( ) ] link:\doibase 10.1109/mcs.2015.2495000 [ * * , ( ) ] link:\doibase 10.1109/tcns.2014.2310911 [ * * , ( ) ] link:\doibase 10.1109/tnet.2008.925623 [ * * , ( ) ] _ _( , ) _ _ ( , ) _ _ ( , ) link:\doibase 10.1103/physreve.87.062816 [ * * , ( ) ] link:\doibase 10.1007/s11081 - 007 - 9001 - 7 [ * * , ( ) ] in link:\doibase 10.1109/cdc.2013.6761078 [ _ _ ] ( ) pp . _ _ ( , ) ( )we show that the matrix defined in is irreducible , that is , there is no similarity transformation that transforms into a block upper - triangular matrix . for this purpose ,define where , , and . since the rates and are positive , if , then for all distinct and .therefore , to prove the irreducibility of , it is sufficient to show that is irreducible . in order to show that is irreducible , we shall show that the directed graph on the nodes , defined as the graph having adjacency matrix , is strongly connected .we identify the nodes , , and variables , , , ( ) , , ( ) .then , the upper - right block of the matrix indicates that the graph contains the directed edge for all and .similarly , from the matrices and in , we see that contains the directed edges and for all and . using these observations , let us first show that has a directed path from to for all . since is strongly connected , it has a path such that and .therefore , from the above observations , we see that contains the directed path . 
in the same way, we can show that also contains the directed path for every .these two types of directed paths in guarantees that is strongly connected . hence the matrix is irreducible , as desired .in the homogeneous case , the matrix takes the form in what follows , we show that if and only if holds true .since is strongly connected by assumption , its adjacency matrix is irreducible and therefore has an entry - wise positive eigenvector corresponding to the eigenvalue ( see ) . define the positive vector .then , the definition of in shows and therefore . in the same manner , we can show that .moreover , it is straightforward to check that . therefore , for a real number , it follows that hence , if a real number satisfies the following equations : then is an eigenvector of . since is irreducible ( shown in appendix [ appx : pf : irreducibility ] ) , by perron - frobenius theory , if then ( see ( * ? ? ? * theorem 17 ) ) .this , in particular , shows that if and only if there exist and such that holds . the two equations in have two pairs of solutions and such that , and .therefore , we need to show if and only if holds true . to see this, we notice that and are the solutions of the quadratic equation following from . since , we have if and only if the constant term of the quadratic equation is positive , which is indeed equivalent to .this completes the proof of the extinction condition stated in .we first give a brief review of geometric programs .let , , denote positive variables and define .we say that a real function is a _ monomial _ if there exist and such that .also , we say that a function is a _ posynomial _ if it is a sum of monomials of ( we point the readers to for more details ) . given a collection of posynomials , , and monomials , , , the optimization problem is called a _ geometric program_. a constraint of the form with being a posynomial is called a posynomial constraint .although geometric programs are not convex , they can be efficiently converted into equivalent convex optimization problems . in the following , we rewrite the optimization problem into a geometric program using the new variables and .by , these variables should satisfy the constraints also , using these variables , the cost function can be written as , where are posynomials . in order to rewrite the constraint ,we first define the matrices we also introduce the positive constants , , . now ,adding to both sides of , we equivalently obtain where is given by summarizing , we have shown that the optimization problem is equivalent to the following optimization problem with ( entry - wise ) positive variables : in this optimization problem , the objective function is a posynomial in the variables , , and .also , the box constraints , , and can be written as posynomial constraints .finally , since each entry of the matrix is a posynomial in the variables , , and , the vector - constraint yields posynomial constraints .therefore , the optimization problem is a geometric program , as desired .furthremore , a standard estimate on the computational complexity of solving geometric program ( see , e.g. , ( * ? ? ?* proposition 3 ) ) shows that the computational complexity of solving the optimization problem in is given by .finally we remark that contains both the terms and so that we can not use as the decision variable in the geometric program due to the positivity constraint on decision variables . 
for this reason, we cannot design the reconnecting rates under the framework presented in this paper. this issue is left as an open problem.
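as a concrete illustration of the kind of geometric program obtained above, the sketch below uses the geometric-programming mode of cvxpy to trade off the cost of increasing the cutting rates against a posynomial containment constraint. the cost function, the single coupling constraint and all constants are simplified stand-ins (assumptions) for the actual matrix constraint derived in this appendix.

```python
# Simplified geometric-programming sketch (cvxpy in gp=True mode).  The cost,
# the single posynomial constraint and all constants are illustrative
# assumptions standing in for the spectral/matrix constraint derived above.
import cvxpy as cp
import numpy as np

n = 5
rng = np.random.default_rng(0)
cost_coef = rng.uniform(1.0, 3.0, n)     # price of raising each cutting rate
weight = rng.uniform(0.2, 0.6, n)        # coefficients of the coupling constraint
phi_min, phi_max = 0.1, 5.0

phi = cp.Variable(n, pos=True)           # cutting rates (positive variables)

adaptation_cost = cp.sum(cp.multiply(cost_coef, phi))           # posynomial
containment = cp.sum(cp.multiply(weight, phi ** -1)) <= 1.0     # posynomial <= 1

problem = cp.Problem(cp.Minimize(adaptation_cost),
                     [containment, phi >= phi_min, phi <= phi_max])
problem.solve(gp=True)                   # solve as a geometric program

print("optimal cost :", round(problem.value, 4))
print("optimal rates:", np.round(phi.value, 3))
```

the structure mirrors the appendix: a posynomial cost in the adaptation rates is minimized subject to posynomial constraints and box bounds, and the log-log transformation performed internally by the solver is what makes the problem efficiently solvable.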
in this paper , we study the dynamics of epidemic processes taking place in adaptive networks of arbitrary topology . we focus our study on the adaptive susceptible - infected - susceptible ( asis ) model , where healthy individuals are allowed to temporarily cut edges connecting them to infected nodes in order to prevent the spread of the infection . in this paper , we derive a closed - form expression for a lower bound on the epidemic threshold of the asis model in arbitrary networks with heterogeneous node and edge dynamics . for networks with homogeneous node and edge dynamics , we show that the resulting lower bound is proportional to the epidemic threshold of the standard sis model over static networks , with a proportionality constant that depends on the adaptation rates . furthermore , based on our results , we propose an efficient algorithm to optimally tune the adaptation rates in order to eradicate epidemic outbreaks in arbitrary networks . we confirm the tightness of the proposed lower bounds with several numerical simulations and compare our optimal adaptation rates with popular centrality measures .
the complexity of a large variety of systems , from infrastructures to the cell , is rooted in a network of interactions between their constituents .quantifying the robustness of complex networks is one of the main challenges of network of networks with implications in fields as different as biology or policy making and risk assessment . in the last fifteen yearsit has been shown that the structure of a single network is strictly related to its robustness .but only recently , attention has been drawn toward a previously neglected aspects of complex systems , namely the interactions between several complex networks .rarely single networks are isolated , while it is usually the case that several networks are interacting and interdependent on each other .for example , in infrastructures , the banking systems are interdependent with the internet and the electric power - grid , public transport , such as subway , is dependent on the power - grid , which relies on its turn on the water supply system to cool the power - plants , etc . in the cellthe situation is much similar : all cellular networks , such as the metabolic networks , the protein - protein interaction networks , the signaling networks , and the gene transcription networks are all dependent on each other , and the cell is only alive if all these networks are functional. these are examples of network of networks , i.e. , networks formed by several interdependent networks . a special class of network of networksare multiplex networks , which are multilayer structures in which each layer is formed by the same set of nodes interconnected by different kinds of links for different layers . in other words , these are graphs with all nodes of one kind and with links of different colors .multiplex networks are attracting great interest as they represent a large variety of systems such as social networks where people can be linked by different types of relationships ( friendship , family tie , collaboration , citations , etc . ) or , for example , in transportation networks , where different places can be linked by different types of transportation ( train , flight connections , flight connection of different airline companies , etc . ) .multiplex network datasets are starting to be analysed , several modelling framework for these networks have been proposed and the characterization of a large variety of dynamical processes is getting a momentum . at this pointwe emphasize the principal difference between the interdependent and so - called interconnected networks .this difference is not about structural organization of connections between these networks but rather about the function of these interlinks .interlinks connecting different interdependent networks ( interdependencies ) show the pairs of nodes that can not exist without each other . in the present paper we consider variations of this kind of complex networks . on the other hand , in the interconnected networks , the interlinks play the same role as links in single networks , enabling one to consider , e.g. 
, various percolation problems, disease spreading, etc.

major progress in understanding the robustness of multilayer interdependent networks has been made in a series of seminal papers, where it has been proposed that a natural measure for evaluating the robustness of these structures to random failure is the size of a mutually connected giant component. the mutually connected giant component is the component that remains after breakdowns propagate back and forth between different interdependent networks (layers), generating a cascade of failure events. a node is in the mutually connected component of a multilayer network if all the nodes on which it depends are also in the mutually connected component and if at least one neighbor node in its own network (layer) belongs to the mutually connected component. clearly, the giant mutually connected component naturally generalizes the giant connected component (percolation cluster) of a single network. the robustness properties of multiplex networks are by now well understood, including the effects of degree correlations, of the overlap of the links, and of antagonistic effects in this novel type of percolation problem. as the fraction of removed nodes (``igniters'') increases, multiplex networks are affected by cascading failures, until they reach a point where the network abruptly collapses and the size of the mutually connected component shows a discontinuous transition. in this case, if a small fraction of nodes in the multiplex are not interdependent, then the transition can change from discontinuous to continuous. although the issue of interest in the present article is the giant mutually connected component, other special giant components can be introduced for these networks. here we mention only the so-called giant viable cluster, between each two nodes of which there is a complete set of interconnecting paths running through every layer. it is easy to see that the viable cluster is a subgraph of the mutual component.

[figure: schematic of the configuration model of a network of networks. interdependencies (interlinks between nodes from different layers) are shown by black dashed lines; intralinks between nodes within layers are shown as solid red lines. in each individual layer, all nodes have the same number of interlinks (superdegree). interlinks connect only nodes with the same label in different layers, forming ``local supernetworks''. each of these local supernetworks is an uncorrelated random graph with a given superdegree sequence, defined as the standard configuration model (uniformly random interconnections for a given sequence of superdegrees).]

several works have considered afterwards a more general (than multiplex) case of a network of networks in which the layers are formed by the same number of nodes, where each node has a replica node in each other network and might depend only on its replica nodes. despite the major interest in the topic, only recently has the following key circumstance become clear: when the interdependencies between the nodes are such that, if a network depends on another network, all its nodes depend on the replica nodes of the other network, it turns out that the mutually connected component coincides with that in the corresponding fully connected network of networks, which is actually a particular case of a multiplex network. here we show, nevertheless, that the situation changes if the interdependence links are distributed between layers more randomly, i.e.
, if we remove the above constraint that all nodes in each layer are dependent on their replicas in the same set of other layers .we consider the situation in which a superdegree is assigned to each layer , so that each node of the network depends only on other replica nodes , but now these replica nodes are chosen randomly and independently from different layers , see fig .we call this construction the configuration model of a network of networks .the reason for this term is that for each set of nodes with given , consisting of nodes in all the layers , our definition provides the standard configuration model of a random network with a given sequence of superdegrees , , ( `` local supernetwork '' ) .let us compare this construction to the model of ref . .while in ref . , all `` local supernetworks '' coincided , in the configuration model of network of networks defined in the present work , the `` local supernetworks '' differ from each other ; only their superdegrees coincide in the particular case of .depending on the superdegree sequence , `` local supernetworks '' can contain a set of finite connected components and a giant one .we consider specifically the case in which all the layers have the same internal degree distribution and where the superdegree distribution is .we derive equations for the order parameter of this problem . solving them numerically and analytically we show for this network of networks that if , the layers with increasing superdegree have a percolation transition at different values of , where is the fraction of not damaged nodes in the network , see fig .[ fig : fig1a ] . in other words , as increases , in layers with higher and higher , giant clusters of the mutual component emerge progressively , or , one can also say , the mutual component expands to layers with higher , see fig .all these transition are abrupt but also they are characterized by a singularity of the order parameter . finally we show that for , i.e. when interlinks between layers are removed with probability , each of these transitions involving the layers with can become continuous for . in the case of , nodes in the same layer have different number of interlinks to other layers , which essentially generalizes the problem .a network of networks is formed by networks ( layers ) , , each formed by nodes , , where is infinite .in fact , in this work we also set , which enables us to use an analytical technique developed for the standard configuration model .these calculations exploit the locally tree - like structure of infinite uncorrelated networks .a more realistic case of a finite is a challenging problem .every node is connected with a number of nodes in the same layer and with a number of its `` replica nodes '' in other layers . 
for two layers ,this framework was proposed in ref .it was generalized to an arbitrary number of layers in refs .we consider the situation in which each layer is interdependent on other networks , so that each node in layer has exactly interlinks , which connect node only to its replica nodes in of other layers .we call the superdegree of the nodes of layer .we stress that the superdegree is associated with the nodes of a layer , and each node in a layer has the same superdegree .interlinks connecting replica nodes within different sets , say , interlink and interlink are assumed to be independent ( uncorrelated ) .for example , if a node of layer is interdependent on node of layer , then although another node of layer may in principle occur interdependent on node , in general , it depends on replica nodes sitting in any layers . following refs . , we define the network of networks with a super - adjacency matrix of elements if there is a link between node and node and zero otherwise . in these networkswe have always if both and . for each node and all its replicas ,we introduce a `` local supernetwork '' , whose nodes are the layers and the links are interdependencies within this set of replicas . this network was discussed in refs .this local supernetwork is determined by the adjacency matrix parametrized by the node .this network may consist of a number of connected components .connected components from different local supernetworks are connected with each other through links within individual layers . in this workwe explore this complicated system of interconnected components , which is necessary to describe the emergence of the giant mutually connected component and its expansion ( percolation ) over different layers .we define the mutually connected component as the following .each node is in the mutually connected component if it has at least one neighbor which belongs to the mutually connected component and if all the linked nodes in the interdependent networks are also in the mutually connected component . in these problems ,the giant ( i.e. , containing a finite fraction of nodes ) mutually connected component is single. it immediately follows from this definition that , remarkably , within each connected component of any local supernetwork , all its replica nodes either together belong to the mutually connected component or not . 
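the definition of the mutually connected component is easiest to visualise in the simplest special case of a two-layer multiplex in which every node depends on its replica in the other layer. the sketch below computes the mutual giant component in that special case by the standard iterative pruning to the largest cluster of each layer; the layer topologies, sizes and damage fractions are illustrative assumptions, and the heterogeneous configuration model studied here additionally requires keeping track of the local supernetworks, which is what the message-passing treatment below handles.

```python
# Illustration of the mutually connected giant component in the simplest
# special case: a two-layer multiplex with full interdependence between
# replica nodes.  Layer topologies, sizes and damage fractions are assumptions.
import networkx as nx
import numpy as np

def mutual_giant_component(layers, active):
    """Iteratively restrict `active` to the largest cluster of every layer."""
    active = set(active)
    while True:
        new_active = set(active)
        for g in layers:
            sub = g.subgraph(new_active)
            if sub.number_of_nodes() == 0:
                return set()
            giant = max(nx.connected_components(sub), key=len)
            new_active &= giant
        if new_active == active:
            return active
        active = new_active

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n, mean_degree = 2000, 4.0
    layers = [nx.gnp_random_graph(n, mean_degree / n, seed=s) for s in (1, 2)]
    for p in (0.5, 0.7, 0.9):                  # fraction of undamaged nodes
        active = [i for i in range(n) if rng.random() < p]
        mcc = mutual_giant_component(layers, active)
        print(f"p = {p:.1f}: mutual giant component = {len(mcc)/n:.3f} of nodes")
```

the abrupt jump of the printed fraction as p crosses the critical value is the discontinuous transition discussed above; in the heterogeneous configuration model the analogous pruning must respect which replicas are actually interdependent, layer by layer.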
in our considerations, we will essentially exploit this strong consequence .given a network of networks it is easy to construct a message passing algorithm determining if node is in the mutually connected component .let us denote by the message within a layer , from node to node and indicating ( ) if node is in the mutually connected component when we remove the link in network .furthermore , let us denote by the message between the `` replicas '' and of node in layers and .the message indicates if the node is in the mutually connected component when we remove the link between node and node .in addition to that we assume that the node can be damaged and permanently removed from the network .this removal will launch an avalanche of failures ( removals of nodes ) spreading over the layers .note that , of course , the node removal retains its interdependence links .we indicate with a node that is damaged , otherwise we have .the message passing equations for these messages are given by _ij&=&s_i__i()s_ii + & & , + s_ii&=&s_i__i()s_ii + & & , [ mp1 ] where indicates the set of nodes which are neighbors of node in network , and indicates the layers such that the nodes are interdependent on the node . finally indicates if a node is in the mutually interdependent network ( ) and this indicator function can be expressed in terms of the messages as s_i&=&s_i__i()s_ii + & & .[ s ] the solution of the message passing equations is given by the following closed expression , _ij&= & _ ( i,)\{s_i } + & & s_i , [ mesf ] where is the connected cluster of the `` local supernetwork '' of node , i.e. , the network between layers determined by the adjacency matrix parametrized by the node .finally is given by s_i&=&s_i_(i , ) .[ s2 ] for detailed derivation and explanation of this solution see our work and appendix .we assume here that each network ( layer ) is generated from a configuration model with the same degree distribution , and that each node is connected to other `` replica '' nodes chosen uniformly randomly .moreover we assume that the degree sequence in each layer is and that the degrees of the replicas of node are uncorrelated .this implies that we are considering a network of networks ensemble , such that every network of networks with a super - adjacency matrix has a probability given by p(*a*)&=&_=1^m_i=1^n\{(k^_i,_j=1^na_i , j)(q_,_=1^ma_i , i ) .+ & & . } , [ pa ] where indicates the kronecker delta and is a normalization constant .moreover we assume that nodes are removed with probability , i.e. , we consider the following expression for the probability of the variables p(\{s_i})=_=1^m_i=1^n p^s_i(1-p)^1-s_i .[ ps ] in order to quantify the expected size of the mutually connected component in this ensemble , we can average the messages over this ensemble of the network of networks .the message passage equations for this problem are given by eqs .( [ mesf ] ) .therefore the equations for the average message within a layer are given in terms of the parameter and the generating functions given by g_0^k(z)=_k p(k ) z^k , & g_1^k(z)=_k z^k-1 , + g_0^q(z)=_q p(q ) z^q , & g_1^q(z)=_q z^q-1 .in particular , if we indicate by the average messages within a layer of degree we obtain _ q&=&p_sp(s|q)^s-1 + & & , [ sq ] where indicates the probability that a node in layer with is in a connected component of the local supernetwork of cardinality ( number of nodes ) . 
similarly , the probability that a node in a layer with superdegree is in the mutually connected component is given by s_q&=&p_sp(s|q)^s-1 + & & .[ sq ] equations ( [ sq ] ) and ( [ sq ] ) are valid for any network of networks ensemble described by eqs .( [ pa])([ps ] ) . in the following we study in particular the limiting casein which the number of layers , and the local supernetwork is sparse . in order to find a solution for eqs .( [ sq ] ) , we define as b= p _ q [ 1-g_0(1-_q ) ] . inserting this expression in eq .( [ sq ] ) we get _ q&=&p _ sp(s|q)b^s-1 [ 1-g_1(1-_q ) ] .[ sq2 ] from the definition of we see that and that only if both and .this implies that in all the cases in which the layers are not all formed by a single giant component or in which , we have .moreover if , we can neglect in eq .( [ sq ] ) the contribution coming from the giant component of the local supernetworks in the large limit .therefore in eq .( [ sq2 ] ) we can replace with the probability that a node of degree belongs to a finite component of size in the supernetwork . note that in our model , the statistics of all local supernetworks coincide .let us consider the quantity (s|q) ] .here ] with and , minimal superdegree and maximal superdegree .the supernetwork with is above the ordinary percolation phase transition in the supernetwork ( i.e. , it has a giant component ) while the supernetwork with is below the ordinary percolation phase transition ( i.e. , it consists only of finite components ) .notice that if the supernetwork has no giant component , then , otherwise , .the order parameter displays a series of discontinuous jumps corresponding to the transitions in which layers with increasing values of start to percolate . in other words , in layers with higher and higher superdegree ,giant clusters of the mutually connected component emerge progressively . in order to see this, we observe that for all values of such that ( note that for power - law superdegree distributions the minimal degree is always ) . in fig .[ fig : fig1a ] we plot the maximal superdegree of the percolating layers as ] indicates the integer part . from fig .[ fig : fig1a ] it is clear that the discontinuities in the curve correspond to the percolation transitions of layers of increasing superdegree . for the first , second and third transitions occur at , , and .we found these values by solving numerically eqs .( [ op ] ) and ( [ e220 ] ) .one can see that for the activations of layers of increasing superdegree become much more rapid .for these parameters the first , transitions occur at . for the case of , which we discussed above, we plot the observables ( fraction of nodes in a layer of superdegree , belonging to the mutual component ) vs. for different values of , see fig .this figure demonstrates how giant clusters of the mutually connected component emerge progressively in layers with higher and higher as we increase the control parameter .note that , as is natural , each discontinuous emergence of a fraction of the mutual component nodes in layers of superdegree is accompanied by discontinuities of the dependencies for all smaller superdegrees . as a second example of the configuration model, we have taken a network of networks ensemble with a poisson distribution characterized by and a poisson distribution , where we take ] vs. 
.the transition points and the corresponding values of the order parameter are obtained by solving numerically eqs .( [ op ] ) and ( [ e220 ] ) .for the first transitions are , , and .for the first transitions are , , and . for a range of the network of networks parameters ,the giant mutual component is absent at any value of , including . in fig .[ fig : fig3a ] we plot the phase diagram containing a phase in which ( white region ) and a phase in which ( shaded region ) for .this phase diagram was obtained by numerical analysis of the equation for the order parameter . in particular , the phase diagram is plotted for a scale - free supernetwork with power - law exponent and superdegrees ] other randomly chosen layers . in this casethe message passing equations are given by _( i,)\{s_i } + & & s_i , where is the connected cluster of the local supernetwork of node when we consider only interdependencies , i.e. , only the interlinks that are not damaged .we can therefore indicate by the average messages within a layer of degree , obtaining _ q&=&p_s_n=0^qr^n(1-r)^q - np(s|n)b^s-1 + & & .[ sqpr ] here , indicates the probability that a node in layer with and interdependent layers is in a connected component of the local supernetwork of interdependent layers of cardinality .moreover , is defined as b= p_q [ 1-g_0^k(1-_q ) ] .[ b2 ] similarly , the probability that a node in a layer with superdegree is in the mutually connected component is given by s_q&=&p_s_n=0^qr^n(1-r)^q - n p(s|n)b^s-1 + & & .[ sqpr ] if the layers are not all formed by a single giant component or if we have , and therefore we can neglect in eqs .( [ sqpr])([sqpr ] ) the contribution coming from the giant component of the local supernetworks in the large limit .therefore in eq .( [ sqpr ] ) we can replace with the probability that a node of degree belongs to a finite component of size in the supernetwork . let us consider the quantity \sum_{n=0}^q\binom{q}{n}r^q ( 1-r)^{q - n}p_f(s|n) ] satisfying the following equation = p^q(1-e^-c ) . for an arbitrary distribution , the problem defined in eqs .( [ ps ] ) still has a single order parameter . as in section [ s3 ] , the first equation in eqs .( [ ps ] ) has solution expressed in terms of the principal value of the lambert function , namely , _ q = .inserting this solution back in the equation for in eqs .( [ ps ] ) we find = g_1^q(r+1-r ) .[ op2 ] similarly to section [ s3 ] , we write this equation as . by imposing and we can find the set of critical points at which the function turns out to be discontinuous .the equation reads as = _ q(1-e^-c_q)_q [ r1-r]^q-2 + cp g_1^q(r1-r ) _ q < q_max e^-c_q . where $ ] .note that in contrast to the case of , for all values of the minimal superdegree provide non - trivial network of networks .indeed , for , even if , local supernetworks have finite connected components .in addition to the abrupt phase transitions , this model displays also a set of continuous phase transition for different , where the order parameter acquires a non - zero value .these transitions occur at p = p_c=. 
these transitions are only stable for below some special value , and at they become discontinuous .the value can be obtained by solving simultaneously the following set of equations & = & p , + f_2(,p , r)&=&0 , + & = & 0 .in this paper we have characterized the robustness properties of the configuration model of network of networks by evaluating the size of the mutually connected component when a fraction of nodes have been damaged and removed .the configuration model of network of networks is an ensemble of multilayer networks in which each layer is formed by a network of nodes , and where each node might depend only on its `` replica nodes '' on the other layers .we assign to each layer a superdegree indicating the number of interdependent nodes of each individual node of the layer and take these `` replica nodes '' in uniformly randomly chosen layers , independently for each node .we have shown that percolation in this ensemble of networks of networks demonstrate surprising features . specifically , for low values of , only the layers with low enough value of the superdegree are percolating , and as we raise the value of several discontinuous transition can occur in which layers of increasing value of superdegree begin percolate . herewe observe a sharp contrast to ordinary percolation in which nodes of high degree belong to the giant connected component ( percolation cluster ) with higher probability .this principal difference is explained by the definition of the mutual component according to which a node is in a mutual component only if all the nodes interdependent with this node belong to the mutual component .this condition makes more difficult the entrance into the mutual component for layers with a high superdegree .the non - trivial point here is that a layer of each given degree enters the mutual component not smoothly but through a discontinuous transition .the networks of networks which we considered in this paper differ from those we studied in our previous work in one key aspect . in the present work ,interdependence links of different nodes of a layer are not lead to the same other layers as in ref . , but they are distributed over the other layers essentially more randomly , independently for different nodes of a layer . we have found that this additional randomness dramatically change results and leads to new effects . we obtained our results assuming that the number of layers in the network is infinite .a more realistic case of finite is a challenging problem .we would also like to stress that multiple discontinuous phase transitions in models for complex networks is a rear but not unique phenomenon .for example , multiple discontinuous transitions were recently reported in another network model .one should note that interlinking of only `` replica nodes '' is actually a great , very convenient simplification which has enabled us to solve the problem analytically .moreover , we have first assumed that all nodes in the same layer have equal superdegree ( number of interdependencies ) , which is a strong constraint . on the next step ,we however have removed this restriction by introducing the probability that an interdependence link is removed .we found that when a fraction of interdependent links are removed , each of these specific transitions can change from discontinuous to continuous . 
in summary ,we have found novel percolation phenomena in a more general model of a network of networks than the network models which were considered previously .the multiple transitions , accompanying the expansion of mutual component over the layers of such a network of networks , are in dramatic contrast to ordinary percolation and to more simple interdependent and multiplex networks , e.g. , for a pair of interdependent networks . we suggest that our findings should be valid even for more general networks of networks ._ note added in proof _ we recently considered a network of networks in which interlinks between each two layers connect randomly selected nodes and not only `` replica nodes''.remarkably , it turned out that the results for this model are exactly the same as in the present article .this work was partially supported by the fct project ptdc / mat/114515/2009 and the fet proactive ip project multiplex number 317532 .in ref . the percolation transition in a network of network in which all the local adjacency matrices are the same , i.e. , , was considered , implying that the local supernetwork is the same for every node . in thissetting , the percolation properties are determined by the message passing eqs .( [ mp1 ] ) with indicating the set of layers which are neighbors of layer in any local supernetwork . for this network of networks , it was shown in ref . that eqs .( [ mp1 ] ) can be written as + s_ii & = & _ ( ) \{s_i } , + _ ij&= & _ ( ) \{s_i } + & & s_i , [ msfa ] where indicates the connected component of the supernetwork to which layer belongs .moreover in ref . it has been shown that is given by s_i&=&s_i _( ) . [ s2a ]now we observe that all the steps performed in ref . to obtain eqs .( [ msfa ] ) and ( [ s2a ] ) are in fact only operations acting in the local supernetwork of node .it follows immediately that for the network of networks coming from the configuration models , the same equations should be valid , where we replace with the connected component of the local supernetwork of node passing through layer .therefore for the configuration model of a network of networks the solution to the message passing eqs .( [ mp1 ] ) and ( [ s ] ) is given by eqs .( [ mesf ] ) and eqs .( [ s2 ] ) .
recently much attention has been paid to the study of the robustness of interdependent and multiplex networks and , in particular , networks of networks . the robustness of interdependent networks can be evaluated by the size of a mutually connected component when a fraction of nodes have been removed from these networks . here we characterize the emergence of the mutually connected component in a network of networks in which every node of a network ( layer ) is connected with its randomly chosen replicas in some other networks and is interdependent of these nodes with probability . we find that when the superdegrees of different layers in the network of networks are distributed heterogeneously , multiple percolation phase transition can occur . we show that , depending on the value of , these transition are continuous or discontinuous .
electronically tuned microwave oscillators are key components used in a wide variety of microwave communications systems . the phase of the output signal exhibits fluctuations in time about the steady state oscillations giving rise to phase noise a very important characteristic that influences the overall performance especially at higher microwave frequencies . in order to understand the oscillator phase behaviour ,a statistical model for a non - linear oscillating circuit has to be developed and presently , no accurate theoretical model for phase noise characterization is available because of the particularly difficult nature of this problem .this is due to the hybrid nature of non - linear microwave oscillator circuits where distributed elements ( pertaining usually to the associated feeding or resonator circuits ) and non - linear elements ( pertaining usually to the amplifiying circuit ) have to be dealt with simultaneously .the main aim of this report is to establish a theoretical framework for dealing with the noise sources and non- linearities present in these oscillators , introduce a new methodology to calculate the resonance frequency and evaluate the time responses ( waveforms ) for various voltages and currents in the circuit without or with the noise present .once this is established , the phase noise spectrum is determined and afterwards the validity range of the model is experimentally gauged with the use of different types of microwave oscillators .this report is organised in the following way : section ii covers the theoretical analysis for the oscillating circuit , reviews noise source models and earlier approches .section iii presents results of the theoretical analysis and highlights the determination of the resonance frequency for some oscillator circuits without noise . in section iv ,phase noise spectra are determined for several oscillator circuits and section v contains the experimental results .the appendix contains circuit diagrams and corresponding state equations for several non - linear oscillator circuits .in standard microwave analysis , it is difficult to deal with distributed elements in the time domain and difficult to deal with non - linear elements in the frequency domain .non- linear microwave oscillator circuits have simultaneously non- linear elements in the amplifying part and distributed elements in the resonating part [ non - linearity is needed since it is well known that only non - linear circuits have stable oscillations ] . before we tackle , in detail , the determination of the phase noise , let us describe the standard procedure for dealing with the determination of resonance frequency of non - linear oscillator circuits : * the first step is to develop a circuit model for the oscillator device and the tuning elements .the equivalent circuit should contain inherently noiseless elements and noise sources that can be added at will in various parts of the circuit .this separation is useful for pinpointing later on the precise noise source location and its origin .the resulting circuit is described by a set of coupled non- linear differential equations that have to be written in a way such that a linear sub - circuit ( usually the resonating part ) is coupled to another non - linear sub - circuit ( usually the oscillating part ) . *the determination of the periodic response of the non- linear circuit . 
*the third step entails performing small signal ac analysis ( linearization procedure ) around the operating point .the result of the ac analysis is a system matrix which is ill - conditioned since a large discrepency of frequencies are present simultaneously ( one has a factor of one million in going from khz to ghz frequencies ) .the eigenvalues of this matrix have to be calculated with extra care due to the sensitivity of the matrix elements to any numerical roundoff .we differ from the above analysis , by integrating the state equations directly with standard / non- standard runge - kutta methods adapted to the non - stiff / stiff system of ordinary differential equations .the resonance frequency is evaluated directly from the waveforms and the noise is included at various points in the circuit as johnson or shot noise .this allows us to deal exclusively with time domain methods for the noiseless / noisy non - linear elements as well as the distributed elements .the latter are dealt with through an equivalence to lumped elements at a particular frequency .as far as point 3 is concerned , the linearization procedure method is valid only for small - signal analysis whereas in this situation , we are dealing with the large signal case .previously , several methods have been developed in order to find the periodic response .the most well established methods are the harmonic balance and the piecewise harmonic balance methods .schwab has combined the time - domain ( for the non - linear amplifier part ) with the frequency domain ( for the linear resonating part ) methods and transformed the system of equations into a boundary value problem that yields the periodic response of the system .for illustration and validation of the method we solve 6 different oscillator circuits ( the appendix contains the circuit diagrams and the corresponding state equations ) : * the standard van der pol oscillator . * the amplitude controlled van der pol oscillator . * the clapp oscillator . 
* the colpitts oscillator .* model i oscillator .* model ii oscillator .we display the time responses ( waveforms ) for various voltages and currents in the attached figures for each of the six oscillators .all oscillators reach periodic steady state almost instantly except the amplitude controlled van der pol ( acvdp ) and the colpitts circuits .for instance , we need , typically , several thousand time steps to drive the acvdp circuit into the oscillatory steady state whereas several hundred thousand steps are required for the colpitts circuit .typically , the rest of the circuits studied reached the periodic steady state in only less a couple of hundred steps .once the oscillating frequency is obtained , device noise is turned on and its effect on the oscillator phase noise is evaluated .all the above analysis is performed with time domain simulation techniques .finally , fourier analysis is applied to the waveform obtained in order to extract the power spectrum as a function of frequency .very long simulation times ( on the order of several hundred thousand cycles ) are needed since one expects inverse power - law dependencies on the frequency .we use a special stochastic time integration method namely the 2s-2o-2 g runge - kutta method developed by klauder and peterson , and we calculate the psd ( power spectral density ) from the time series obtained .it is worth mentioning that our methodology is valid for any type of oscillator circuit and for any type of noise ( additive white as it is in johnson noise of resistors , mutiplicative and colored or with arbitrary as it is for shot noise stemming from junctions or imperfections inside the device ) .in addition , the approach we develop is independent of the magnitude of the noise . regardless of the noise intensity we evaluate the time response and later on the power spectrum without performing any perturbative development whatsoever .recently , kartner developed a perturbative approach to evaluate the power spectrum without having to integrate the state equations .his approach is valid for weak noise only and is based on an analytical expression for the power spectrum .nevertheless one needs to evaluate numerically one fourier coefficient the spectrum depends on .microwave oscillators are realised using a very wide variety of circuit configurations and resonators .we plan to design , fabricate and test microstrip oscillators with gaas mesfet devices with coupled lines and ring resonators .the measured phase noise of these oscillators will be compared with the theoretical prediction from the above analysis .we also plan to apply the above analysis to the experimental phase results obtained from various electronically tuned oscillators that have been already published in the literature .+ * acknowledgments * : the author would like to thank fx kartner and w. anzill for sending several papers , reports and a thesis that were crucial for the present investigation .thanks also to s.o .faried who made several circuit drawings and s. kumar for suggesting two additional circuits ( model i and ii ) to test the software .v. gngerich , f. zinkler , w. anzill and p. russer , `` noise calculations and experimental results of varacytor tunable oscillators with significantly reduced phase noise , '' ieee transactions on microwave theory and techniques * mtt-43 * 278 ( 1995 ) .s. heinen , j. kunisch and i. 
wolff, ``a unified framework for computer-aided noise analysis of linear and non-linear microwave circuits,'' ieee transactions on microwave theory and techniques *mtt-39* 2170 (1991).

in addition, we have: .

* state-space equations of the clapp oscillator*:

$$\begin{aligned}
\frac{dv_{cte}}{dt} &= \frac{1}{c_{te}}\Big[-i_{p} - \frac{v_{cte}-v_{ce}}{r_{e}}\Big]\\
\frac{dv_{cta}}{dt} &= \frac{1}{c_{ta}}\Big[i_{p} - \frac{v_{cta}-v_{ca}}{r_{a}}\Big]\\
\frac{dv_{ca}}{dt} &= \frac{1}{c_{a}}\Big[i_{q} + \frac{v_{cta}-v_{ca}}{r_{a}} - \frac{v_{ca}}{r_{l}}\Big]\\
\frac{di_{p}}{dt} &= j_{p}
\end{aligned}$$

* state-space equations of the colpitts oscillator*:

$$\begin{aligned}
\frac{dv_{2}}{dt} &= \frac{1}{c_{2}}\Big[i_{2} - i_{b} + \frac{v_{0}-v_{2}-v_{3}}{r_{2}}\Big]\\
\frac{dv_{3}}{dt} &= \frac{1}{c_{3}}\Big[-i_{b} + \frac{v_{0}-v_{2}-v_{3}}{r_{2}}\Big]\\
\frac{dv_{4}}{dt} &= \frac{v_{1}-v_{4}}{r_{l}c_{4}}
\end{aligned}$$
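the following is a minimal illustrative sketch, not the software package developed in this work, of the direct time-domain approach described above: a classical runge-kutta integration of the standard van der pol state equations, an optional additive white (johnson-like) noise term injected euler-maruyama style rather than via the 2s-2o-2g stochastic runge-kutta scheme, and an fft-based estimate of the oscillation frequency and power spectrum. all parameter values and function names are placeholders.

.... 
# Minimal sketch: direct time-domain integration of the van der Pol
# oscillator followed by an FFT-based power-spectrum estimate.
import numpy as np

def vdp(state, mu=1.0):
    """State equations dx/dt = y, dy/dt = mu*(1 - x^2)*y - x."""
    x, y = state
    return np.array([y, mu * (1.0 - x * x) * y - x])

def rk4_step(f, state, dt):
    """One classical 4th-order Runge-Kutta step (non-stiff case)."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(n_steps=200_000, dt=1e-3, noise_rms=0.0, seed=0):
    """Integrate the waveform; optional additive white noise is injected
    into the current equation at every step (Euler-Maruyama style)."""
    rng = np.random.default_rng(seed)
    state = np.array([1.0, 0.0])
    xs = np.empty(n_steps)
    for i in range(n_steps):
        state = rk4_step(vdp, state, dt)
        if noise_rms > 0.0:
            state[1] += noise_rms * np.sqrt(dt) * rng.standard_normal()
        xs[i] = state[0]
    return xs

def oscillation_frequency_and_psd(xs, dt):
    """Estimate the oscillation frequency from the spectral peak and return
    a crude one-sided power spectrum of the steady-state waveform."""
    xs = xs[len(xs) // 2:]                      # discard the transient
    spec = np.abs(np.fft.rfft(xs - xs.mean())) ** 2
    freqs = np.fft.rfftfreq(len(xs), d=dt)
    psd = spec * dt / len(xs)
    return freqs[np.argmax(spec[1:]) + 1], freqs, psd

if __name__ == "__main__":
    xs = simulate(noise_rms=1e-3)
    f0, freqs, psd = oscillation_frequency_and_psd(xs, dt=1e-3)
    print("estimated oscillation frequency:", f0)
 ....

in practice the stochastic integrator and much longer runs (hundreds of thousands of cycles) would be needed to resolve the inverse power-law part of the phase noise spectrum, as discussed above.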
we have developed a new methodology and a time-domain software package for the estimation of the oscillation frequency and the phase noise spectrum of non-linear noisy microwave circuits, based on the direct integration of the system of stochastic differential equations representing the circuit. our theoretical evaluations can be used to make detailed comparisons with experimental measurements of phase noise spectra in selected oscillating circuits.
let be independent and identically distributed ( i.i.d . ) -variate random vectors generated from the following model : where is a -dimensional unknown vector of means , and are i.i.d .random vectors with zero mean and common covariance . for the sample, is a sequence of weakly stationary dependent random variables with zero mean and variances .motivated by the high - dimensional applications arising in genetics , finance and other fields , the current paper focuses on testing high - dimensional hypotheses the specifications for the sparsity and faintness in the above are the following .there are nonzero s ( signals ) for a , which are sparse since the signal bearing dimensions constitute only a small fraction of the total dimensions . also under the ,the signal strength is faint in that the nonzero for .these specification of the have been the most challenging `` laboratory '' conditions in developing novel testing procedures under high dimensionality . pioneered the theory of the higher criticism ( hc ) test which was originally conjectured in , and showed that the hc test can attain the optimal detection boundary established by for uncorrelated gaussian random vectors ( ) .the optimal detection boundary is a phase - diagram in the space of , the two quantities which define the sparsity and the strength of nonzero s under the , such that if lies above the boundary , there exists a test which has asymptotically diminishing probabilities of the type i and type ii errors simultaneously ; and if is below the boundary , no such test exists .hall and jin ( , ) investigated the impacts of the column - wise dependence on the hc test .in particular , found that the hc test is adversely affected if the dependence is of long range dependent .if the dependence is weak , and the covariance matrix is known or can be estimated reliably , the dependence can be utilized to enhance the signal strength of the testing problem so as to improve the performance of the hc test .the improvement is reflected in lowering the needed signal strength by a constant factor . evaluated the hc test under a nonparametric setting allowing column - wise dependence , and showed that the detection boundary of for the hc test can be maintained under weak column - wise dependence . showed that the standard hc test based on the normality assumption can perform poorly when the underlying data deviate from the normal distribution and studied a version of the hc test based on the -statistics formulation . considered detecting gaussian mixtures which differ from the null in both the mean and the variance .arias - castro , bubeck and lugosi ( ) established the lower and upper bounds for the minimax risk for detecting sparse differences in the covariance .we show in this paper that there are alternative test procedures for weakly dependent sub - gaussian data with unknown covariance which attain the same detection boundary as the hc test established in for gaussian distributed data with .the alternative test statistics are obtained by first constructing , for and , which threshold with respect to at a level for , where , is the sample mean of the margin of the data vectors and is the indicator function .we note that and correspond to the and versions of the thresholding statistics , respectively ; and corresponds to the hc test statistic . 
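as a concrete illustration of the construction just described, the sketch below computes the thresholding statistics from a vector of marginal z- or t-statistics. it assumes the thresholding level is lambda_p(s) = 2 s log p, the standard choice in this literature (the displayed definition above is partly elided); gamma = 0, 1, 2 give the counting (hc-type), l1 and l2 versions. names are illustrative, not from the paper.

.... 
# Sketch of the thresholding statistics, assuming a vector `y` of marginal
# z- or t-statistics is already available.
import numpy as np

def threshold_level(p, s):
    """lambda_p(s) = 2 * s * log(p) (assumed standard choice)."""
    return 2.0 * s * np.log(p)

def thresholding_statistic(y, s, gamma):
    """Sum of |Y_j|^gamma over coordinates whose squared statistic exceeds
    lambda_p(s): gamma = 0 counts exceedances (HC-type), gamma = 1 gives the
    L1 ('clipping') and gamma = 2 the L2 ('hard') thresholding statistic."""
    p = y.size
    keep = (y ** 2) > threshold_level(p, s)
    return np.sum(np.abs(y[keep]) ** gamma)
 ....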
in the literature ,the statistic is called the hard thresholding in and , and the statistic is called the clipping thresholding in .we then maximize standardized versions of with respect to over , a subset of , which results in the following maximal -thresholding statistics : where and are , respectively , estimators of the mean and standard deviation of under , whose forms will be given later in the paper . by developing the asymptotic distributions of ,the maximal -thresholding tests are formulated for and with the maximal -test being equivalent to the hc test .an analysis on the relative power performance of the three tests reveals that if the signal strength parameter , the maximal -thresholding test is at least as powerful as the maximal -thresholding test , and both the and -thresholding tests are at least as powerful as the hc test . if we allow a slightly stronger signal so that , the differential power performance of the three tests is amplified with the maximal -test being the most advantageous followed by the maximal -test .in addition to the connection to the hc test , the maximal -thresholding test , by its nature of formulation , is related to the high - dimensional multivariate testing procedures , for instance , the tests proposed by and . while these tests can maintain accurate size approximation under a diverse range of dimensionality and column - wise dependence , their performance is hampered when the nonzero means are sparse and faint .the proposed test formulation is also motivated by a set of earlier works including for selecting significant wavelet coefficients , and who considered testing for the mean of a random vector with i.i.d .normally distributed components .we note that the second step of maximization with respect to is designed to make the test adaptive to the underlying signals strength and sparsity , which is the essence of the hc procedure in , as well as that of .the rest of the paper is organized as follows . in section [ sec2 ]we provide basic results on the -thresholding statistic via the large deviation method and the asymptotic distribution of the single threshold statistic .section [ sec3 ] gives the asymptotic distribution of as well as the associated test procedure .power comparisons among the hc and the maximal and -thresholding tests are made in section [ sec4 ] .section [ sec5 ] reports simulation results which confirm the theoretical results .some discussions are given in section [ sec6 ] .all technical details are relegated to the .let be an independent random sample from a common distribution , and , where is the vector of means and is a vector consisting of potentially dependent random variables with zero mean and finite variances .the dependence among is called the column - wise dependence in .those nonzero are called `` signals . ''let , and be the sample variance for the margin .the signal strength in the margin can be measured by the -statistics or the -statistics if is known . for easy expedition ,the test statistics will be constructed based on the -statistics by assuming is known and , without loss of generality , we assume . 
using the -statistics actually leads to less restrictive conditions for the underlying random variables since the large deviation results for the self - normalized -statistics can be established under weaker conditions to allow heavier tails in the underlying distribution as demonstrated in , and .see for analysis on the sparse signal detection using the -statistics .we assume the following assumptions in our analysis : the dimension as and .there exists a positive constant such that , for any , for \times[-h , h] 1 \le p_1 < p_2 < \cdots < p_m \le p ] such that , define as the -algebra generated by and define the -mixing coefficients see for comprehensive discussions on the mixing concept .the following is a condition regarding the dependence among . for any ,the sequence of random variables is -mixing such that for some and a positive constant .the requirement of being -mixing for each is weaker than requiring the original data columns being -mixing , whose mixing coefficient can be similarly defined as ( [ eq : mixing ] ) .this is because , according to theorem 5.2 in , the following theorem reports the asymptotic normality of under both and . [ th1 ] assume .then , for any , from ( [ eq : meantn0 ] ) and ( [ eq : vartn0 ] ) , define the leading order terms of and , respectively , \phi\bigl({\lambda^{1/2}_p(s)}\bigr)+6\bar{\phi}\bigl ( { \lambda ^{1/2}_p(s)}\bigr)\bigr\}.\end{aligned}\ ] ] it is clear that the asymptotic normality in theorem [ th1](i ) remains if we replace by . to formulate a test procedure based on the thresholding statistic , we need to estimate by a , say .ideally , if the first part of theorem [ th1 ] remains valid if we replace with .an obvious choice of is , which is known upon given and . indeed ,if are the standard normally distributed , we have implying the leading order is exactly for the gaussian data . hence , if we take , ( [ eq : cri1 ] ) is satisfied for the gaussian data . for non - gaussian observations ,the difference between and may not be a smaller order of . specifically , from ( [ eq : meantn0 ] ) and ( [ eq : vartn0 ] ) , we have to make the above ratio diminishing to zero , the strategy of can be adopted by restricting and for a positive , where if and if . under this circumstance , clearly , for a not so high dimension with , ( [ eq : restrict ] ) holds for all , and satisfies ( [ eq : cri1 ] ) . for higher dimensions with , the thresholding level has to be restricted to ensure ( [ eq : restrict ] ) .the restriction can alter the detection boundary of the test we will propose in the next section .this echoes a similar phenomena for the hc test given in . 
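for standard normal marginals the leading-order null means of the thresholding statistics can be written with elementary gaussian tail identities; the sketch below evaluates them and checks the l2 case by monte carlo. the identities used here (e.g. E[Y^2 1(|Y|>t)] = 2(t phi(t) + Phibar(t)) for Y ~ N(0,1)) are standard gaussian facts and are not claimed to reproduce the paper's expression ([eq:meantn0]) verbatim.

.... 
# Sketch of Gaussian leading-order null means used to centre the
# thresholding statistics, based on standard normal tail identities:
#   E[1(|Y|>t)]     = 2*Phibar(t)
#   E[|Y| 1(|Y|>t)] = 2*phi(t)
#   E[Y^2 1(|Y|>t)] = 2*(t*phi(t) + Phibar(t))
import numpy as np
from scipy.stats import norm

def null_mean(p, s, gamma):
    """Leading-order E[T_{gamma,n}(s)] under H_0 with N(0,1) marginals."""
    t = np.sqrt(2.0 * s * np.log(p))
    if gamma == 0:
        per_coord = 2.0 * norm.sf(t)
    elif gamma == 1:
        per_coord = 2.0 * norm.pdf(t)
    elif gamma == 2:
        per_coord = 2.0 * (t * norm.pdf(t) + norm.sf(t))
    else:
        raise ValueError("gamma must be 0, 1 or 2")
    return p * per_coord

if __name__ == "__main__":
    # quick Monte Carlo sanity check for gamma = 2
    rng = np.random.default_rng(1)
    p, s = 2000, 0.3
    lam = 2.0 * s * np.log(p)
    y = rng.standard_normal((2000, p))
    mc = np.mean(np.sum(np.where(y ** 2 > lam, y ** 2, 0.0), axis=1))
    print(mc, null_mean(p, s, 2))
 ....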
to expedite our discussion , we assume in the rest of the paper that ( [ eq : cri1 ] )is satisfied by the .we note such an arrangement is not entirely unrealistic , as a separate effort may be made to produce more accurate estimators .assuming so allows us to stay focused on the main agenda of the testing problem .the asymptotic normality established in theorem [ th1 ] allows an asymptotic test that rejects if where is the upper quantile of the standard normal distribution .while the asymptotic normality of in theorem [ th1 ] ensures the single thresholding level test in ( [ eq : test1 ] ) a correct size asymptotically , the power of the test depends on , the underlying signal strength and the sparsity .a test procedure is said to be able to separate a pair of null and alternative hypotheses asymptotically if the sum of the probabilities of the type i and type ii errors converges to zero as .let be a sequence of the probabilities of type i error , which can be made converging to zero as .the sum of the probabilities of the type i and type ii errors for the test given in ( [ eq : test1 ] ) with nominal size is approximately which is attained based on the facts that ( i ) the size is attained asymptotically and ( ii ) and are sufficiently accurate estimators in the test procedure ( [ eq : test1 ] ) .our strategy is to first make such that for an arbitrarily small and a constant .the second term on the right - hand side of ( [ eq : errors ] ) is \\[-8pt ] \nonumber & & \hspace*{34pt}\qquad\leq z_{\alpha_n } \frac { \sigma_{t_{2n } , 0}(s)}{\sigma_{t_{2n } , 1}(s)}-\frac{\mu_{t_{2n},1}(s)-\mu_{t_{2n},0}(s ) } { \sigma_{t_{2n } , 1}(s ) } \biggr).\end{aligned}\ ] ] because is slowly varying , and is stochastically bounded , a necessary and sufficient condition that ensures is from proposition [ chap4-cor2 ] , it follows that , up to a factor , where , , and .let as demonstrated in and , the phase diagram is the optimal detection boundary for testing the hypotheses we are considering in this paper when the data are gaussian and . herethe optimality means that for any , there exists at least one test such that the sum of the probabilities of the type i and type ii errors diminishes to zero as ; but for , no such test exists . for correlated gaussian data such that , found that the detection boundary may be lowered by transforming the data via the inverse of cholesky factorization such that .more discussion on the optimality is given in section [ sec6 ] . from the expression of given above ,it can be shown ( see the proof of theorem [ detect - upper - bound ] in the ) that if there exists at least one for each pair of such that ( [ detectable - condition ] ) is satisfied and , hence , the thresholding test would be powerful .this is the key for the maximal -thresholding test that we will propose later to attain the detection boundary .it is clear that we have to make the thresholding level adaptive to the unknown and .one strategy is to use a range of thresholding levels , say , , so that the underlying can be `` covered . ''this is the very idea of the hc test .let be the standardized version of .define the maximal thresholding statistic where ] for an arbitrary small and is the same as the maximal -thresholding statistic . using the same argument for the maximal -thresholding statistic, it can be shown that attains its maximum value on given in ( [ eq : sn ] ) as well . 
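the maximal thresholding construction can be illustrated as follows: standardize the thresholding statistic over a grid of levels s in (0, 1 - eta] and take the maximum. in this sketch the null mean and standard deviation at each level are calibrated by monte carlo under independent N(0,1) marginals instead of the analytic leading-order formulas, so it illustrates the construction rather than reproducing the paper's estimators.

.... 
# Sketch of the maximal thresholding statistic over a grid of levels.
import numpy as np

def t_stat(y, s, gamma):
    lam = 2.0 * s * np.log(y.size)
    keep = y ** 2 > lam
    return np.sum(np.abs(y[keep]) ** gamma)

def null_calibration(p, s_grid, gamma, n_rep=1000, seed=0):
    """Monte Carlo estimates of the null mean/sd of T_{gamma,n}(s)."""
    rng = np.random.default_rng(seed)
    sims = np.empty((n_rep, len(s_grid)))
    for i in range(n_rep):
        y0 = rng.standard_normal(p)
        sims[i] = [t_stat(y0, s, gamma) for s in s_grid]
    return sims.mean(axis=0), sims.std(axis=0, ddof=1)

def maximal_thresholding_statistic(y, s_grid, gamma, mu0, sd0):
    vals = np.array([t_stat(y, s, gamma) for s in s_grid])
    return np.max((vals - mu0) / sd0)

if __name__ == "__main__":
    p, eta = 2000, 0.05
    s_grid = np.linspace(0.01, 1.0 - eta, 40)
    mu0, sd0 = null_calibration(p, s_grid, gamma=2)
    y = np.random.default_rng(7).standard_normal(p)   # data under H_0
    print(maximal_thresholding_statistic(y, s_grid, gamma=2, mu0=mu0, sd0=sd0))
 ....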
according to , under , with the same normalizing sequences as those in theorem [ asy - gumbel ] .let be the same as that of the maximal -thresholding test given in ( [ eq : l2test ] ) .an level hc test rejects if let us introduce the maximal -thresholding test statistic . recall that it can be shown that the mean and variance of * * under are , respectively , define where is a sufficiently accurate estimator of in a similar sense to ( [ eq : cri1 ] ) and .the maximal -thresholding statistic is where , again , ] , asymptotically , hence , when lies just above the detection boundary , the three functions are the same . if moves further away from the detection boundary so that , there will be a clear ordering among the functions .the following theorem summarizes the relative power performance .[ th4 ] assume and ( [ eq : cri1 ] ) hold .for any given significant level , the powers of the hc , the maximal and -thresholding tests under as specified in satisfy , as , and are asymptotic equivalent for ] , the three tests have asymptotically equivalent powers . in the latter case , comparing the second order terms of may lead to differentiations among the powers of the three tests .however , it is a rather technical undertaking to assess the impacts of the second order terms .the analysis conducted in theorem [ th4 ] is applicable to the setting of gaussian data with and satisfying ( c.3 ) , which is the setting commonly assumed in the investigation of the detection boundary for the hc test [ ; and arias - castro , bubeck and lugosi ( ) ] .specifically , the power ordering among the three maximal thresholding tests in theorem [ th4 ] remains but under lesser conditions ( c.3)(c.5 ) .condition ( c.1 ) is not needed since the gaussian assumption allows us to translate the problem to since the sample mean is sufficient .condition ( c.2 ) is automatically satisfied for the gaussian distribution .the condition ( [ eq : cri1 ] ) is met for the gaussian data , as we have discussed in section [ sec2 ] .we report results from simulation experiments which were designed to evaluate the performance of the maximal and -thresholding tests and the hc test .the purpose of the simulation study is to confirm the theoretical findings that there is an ordering in the power among the three tests discovered in theorem [ th4 ] .independent and identically distributed -dim random vectors were generated according to where is a stationary random vector and have the same marginal distribution . in the simulation , generated from a -dimensional multivariate gaussian distribution with zero mean and covariance , where for and , respectively .the simulation design on had the sparsity parameter and , respectively , and the signal strength and , respectively .we chose two scenarios on the dimension and sample size combinations : ( a ) a large , small setting and ( b ) both and are moderately large .for scenario ( a ) , we chose , where and so that the dimensions were 2000 and 20,000 , and the sample sizes were and 100 , respectively .we note that under the setting , there were only and 7 nonzero means , respectively , among the 2000 and 20,000 dimensions . andthose for were and , respectively , and those for were and , respectively .these were quite sparse .for scenario ( b ) , we chose such that and .the maximal -test statistic was constructed using and given in ( [ eq : meantn0 ] ) and ( [ eq : vartn0 ] ) , respectively , as the mean and standard deviation estimators . 
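a minimal sketch of the sparse-and-faint data-generating model used in the simulation: floor(p^(1-beta)) randomly placed signals whose standardized strength is sqrt(2 r log p), with weak column-wise dependence. the AR(1) dependence and the default parameter values below are placeholders and do not reproduce the paper's exact design.

.... 
# Sketch of the sparse-and-faint simulation model with weak column-wise
# dependence (AR(1) used here purely as a placeholder).
import numpy as np

def generate_data(n, p, beta, r, rho=0.4, seed=0):
    rng = np.random.default_rng(seed)
    # sparse, faint mean vector: p^(1-beta) signals of standardized
    # strength sqrt(2*r*log p), i.e. mean sqrt(2*r*log(p)/n) per observation
    k = int(np.floor(p ** (1.0 - beta)))
    mu = np.zeros(p)
    mu[rng.choice(p, size=k, replace=False)] = np.sqrt(2.0 * r * np.log(p) / n)
    # weakly dependent noise columns
    z = rng.standard_normal((n, p))
    eps = np.empty((n, p))
    eps[:, 0] = z[:, 0]
    for j in range(1, p):
        eps[:, j] = rho * eps[:, j - 1] + np.sqrt(1.0 - rho ** 2) * z[:, j]
    return mu + eps, mu

def marginal_t_stats(x):
    """Standardised sample means used to build the thresholding statistics."""
    n = x.shape[0]
    return np.sqrt(n) * x.mean(axis=0) / x.std(axis=0, ddof=1)
 ....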
the maximal test statistic and the hc test statistic , and ,were constructed similarly using the leading order mean and standard deviation under .the set of thresholding level was chosen to be ] .it can be shown that if , thus , for every , , which implies that for each , can be partitioned into finitely many sets satisfying let be the bracketing number , the smallest number of functions in such that for each in there exists an ( ) satisfying .applying theorem 2.2 in , if the following two conditions hold for an even integer and a real number such that we have for large enough . invoking the maximal inequality of , it follows that now using the markov inequality , we get for large enough hence , the condition ( [ equaconti ] ) holds and is asymptotically tight . it remains to show ( [ condi-1 ] ) and ( [ condi-2 ] ) hold . for ( [ condi-2 ] ), we note that is a v - c class for each .this is because is a v - c class with vc index 2 .let . then is a v - c class by lemma 2.6.18 in van der vaart and wellner ( ) .let be the envelop function for class .clearly , we can take .it is easy to see that for a constant . applying a result on covering number of v - c classes [ theorem 2.6.7 , van der vaart and wellner ( ) ], we get for a universal constant .it can be verified that if , then ( [ condi-2 ] ) holds .the condition ( [ condi-1 ] ) follows from the assumption that . as a result , converge to a zero mean gaussian process with \biggr)\ ] ] for .it can be shown that there exists an ornstein uhlenbeck ( o u ) process with mean zero 0 and such that .therefore , by a result for the o u process in leadbetter , lindgren and rootzn [ ( ) , page 217 ] , where , , and . from ( [ eq : vartn0 ] ) , we have . since \\ & & { } + \frac{a(\tau_n)}{a(\log p)}b^*(\log p)-b^*(\tau_n),\end{aligned}\ ] ] and \\ & & { } + b^*(\tau_n)\biggl[\frac{a(\tau_n)}{a(\log p)}-1\biggr]\to-\log \frac { ( 1-\eta)}{2},\end{aligned}\ ] ] we have finally , note that .this finishes the proof of theorem [ asy - gumbel ] .proof of theorem [ detect - upper - bound ] ( i ) . the proof is made under four cases . for each case , we find the corresponding detectable region and the union of the four regions are the overall detectable region of the thresholding test .basically , we show for any above within one of the four cases , there exists at least one threshold level such that is detectable . for notation simplification ,we only keep the leading order terms for , , and ._ case _ 1 : and . in this case , and . hence , so to make , we need .it follows that the detectable region for this case is .specifically , if we select , we arrive at the best divergence rate for of order ._ case _ 2 : and . in this case , , , and .then , so the detectable region in the plane is . in this region ,the best divergence rate of is of order for any ._ case _ 3 : and .the case is equivalent to and , . then to ensure ( [ 4:eq:3 ] ) diverging to infinity , we need thus , the detectable region must satisfy this translates to _ case _ 4 : and is equivalent to . in this case , , .then hence , it requires that in order to find an , we need . if , namely , , the above inequality is obviously true . if , then is equivalent to .so the detectable region is in this case . in summary of cases 14 ,the union of the detectable regions in the above four cases is , as illustrated in figure [ figure5 ]. 
threshold test .case 1 : the union of \{i , ii , iii , iv } ; case 2 , the region is i ; case 3 : the union of \{ii , iii , iv , v , vi , vii } ; case 4 : the union of \{i , ii , iii , vi , vii}. ] now we are ready to prove the theorem .we only need to show that the sum of type i and ii errors of the maximal test goes to 0 when .because the maximal test is of asymptotic level , it suffices to show that the power goes to 1 in the detectable region as and .recall that the level rejection region is . from theorem [ asy - gumbel] , we notice that .then , it is sufficient if at every in the detectable region . since for any , therefore , ( [ 4:infty ] ) is true if for any point in the detectable region , there exists a such that therefore , we want to show because and , ( [ 4:infty:1 ] ) is true if .as we have shown in the early proof , for every in the detectable region , there exists an such that for any slow varying function .this concludes ( [ t2ninfinity ] ) and hence ( [ 4:infty ] ) , which completes the proof of part ( i ) .( ii ) note that where and are defined in ( [ th4decom ] ) and if , then and . hence , it is also noticed that implies that .therefore , for all , .if , then . hence , as . if and , then .it follows that , for all , if and , then for all , hence , as . in summary, we have and if .therefore , together with assumption ( [ eq : cri1 ] ) , .we note that , by employing the same argument of theorem [ asy - gumbel ] , it can be shown that where is defined just above ( [ tn1order ] ). then the power of the test thus , the sum of type i and ii errors goes to 1 .this completes the proof of part ( ii ) .proof of theorem [ th4 ] we first prove that , which will be proved in two parts : where . to show ( [ th4part1 ] ) , note the decomposition for in ( [ th4decom ] ) .let .we can first show that because of the following inequality : under condition ( [ eq : cri1 ] ) , that is , , hence , .second , we can show . note the following inequality : under conditions ( c.1)(c.4 ) , .so we have in summary , we have .therefore , .the path leading to ( [ th4part2 ] ) is the following .first of all , it can be shown using an argument similar to the one used in the proof of theorem [ asy - gumbel ] that where .thus , for and , equations ( [ delta0-sigma - ratio - l2 - 1 ] ) to ( [ delta0-sigma - ratio - l2 - 8 ] ) in the following reveal that for all and , we can classify into two sets and such that where `` '' means that for some . because is above the detection boundary , there exists at least one such that .hence , namely , the maximum of is reached on set where diverges at a much faster rate than that of , if the latter ever diverges .let . combining ( [ tn1order ] ) and ( [ delta - r - order ] ), we have this implies that .together with the following inequality : we conclude that ( [ th4part2 ] ) holds .it remains to show the existence of and in arriving at ( [ delta - r - order ] ) .we only prove it for the test . to complete that, we compare the relative order between and for three regions above the detection boundary : ( i ) ( ii ) ] . in regions ( i ) and ( ii ) with , we can show that in region ( ii ) with , we have \\[-8pt ] & & \eqntext{\mbox{and }.}\end{aligned}\ ] ] for $ ] in region ( iii ) . 
if , define and .then it may be shown that if , define and .then , it can be shown that the results in ( [ delta0-sigma - ratio - l2 - 1])([delta0-sigma - ratio - l2 - 8 ] ) indicate that in each region listed above , will be attained in situations covered by ( [ delta0-sigma - ratio - l2 - 1 ] ) , ( [ delta0-sigma - ratio - l2 - 3 ] ) , ( [ delta0-sigma - ratio - l2 - 6 ] ) and ( [ delta0-sigma - ratio - l2 - 8 ] ) , which together imply ( [ delta - r - order ] ) . next , we compute for the hc ( ) and the ( ) test . for the hc test ,let . under assumptions ( c.1)(c.2 ), applying the large deviation results [ ] , it may be shown that the mean and variance of under are and respectively .the mean and variance of under the as specified in ( c.4 ) are , respectively , these imply that , up to a factor , and hence , \\[-8pt ] \nonumber & & { } + ( s\pi\log p)^{1/4}p^{1/2-\beta+s/2}i(r > s).\end{aligned}\ ] ] for the test , the mean and variances of under specified in ( c.4 ) are , respectively , up to a factor , it follows that , up to a factor , \\[-8pt ] \nonumber & & { } + ( \sqrt{2r\log p } ) p^{1-\beta}i(r > s ) \ ] ] and therefore , \\[-8pt ] \nonumber & & { } + ( s\pi\log p)^{{1}/{4}}(r / s)^{{1}/{4 } } p^{1/2-\beta + s/2}i(r > s).\end{aligned}\ ] ] replicating the above proof for the test , it can be shown that , for and 1 , at last , we will compare for and 2 when .let be a threshold in that is closest to .then the maximal value of over is attained at .note that such exists with probability 1 . to show this point , it is enough to show that , which is equivalent to showing that .let be a sub - sequence such that and .let . by mixing assumption ( c.5 ) and the triangle inequality, it can be seen that as .then it follows that where we used for all . comparing ( [ l2 - 100 ] ) , ( [ hc-100 ] ) and( [ l1 - 100 ] ) , we see that .it follows that , for , therefore , asymptotically with probability 1 , , which results in .this completes the proof .the authors thank the editor , an associate editor and two referees for insightful and constructive comments which have improved the presentation of the paper .we are also very grateful to dr .jiashun jin and dr .cun - hui zhang for stimulating discussions .
we consider two alternative tests to the higher criticism test of donoho and jin [ _ ann . statist . _ * 32 * ( 2004 ) 962994 ] for high - dimensional means under the sparsity of the nonzero means for sub - gaussian distributed data with unknown column - wise dependence . the two alternative test statistics are constructed by first thresholding and statistics based on the sample means , respectively , followed by maximizing over a range of thresholding levels to make the tests adaptive to the unknown signal strength and sparsity . the two alternative tests can attain the same detection boundary of the higher criticism test in [ _ ann . statist . _ * 32 * ( 2004 ) 962994 ] which was established for uncorrelated gaussian data . it is demonstrated that the maximal -thresholding test is at least as powerful as the maximal -thresholding test , and both the maximal and -thresholding tests are at least as powerful as the higher criticism test . ,
when imperative programmers think of lists , they commonly choose doubly linked lists , instead of the singly linked lists that logic and functional programmers use . in the same way ,it is extremely common for trees to be given parent links , whether they are really needed or not .a typical c example might be .... typedef int datum ; typedef struct treerec * treeptr ; struct treerec { treeptr left , right , up , down ; datum datum ; } ; .... where ` down ' points to the first child of a node , ` up ' to its parents , and the children of a node form a doubly linked list with ` left ' and ` right ' pointers .essentially this representation is required by the document object model , for example .cyclically linked trees in imperative languages such as java provide constant time navigation in any of the four directions ( up , down , left , right ) and also constant time and constant space editing ( insert , delete , replace ) .they do so at a price : each element is rigidly locked into place , so that any kind of space sharing ( such as hash consing ) is made impossible .some logic programming languages have been designed to support cyclically linked terms . that does provide constant time navigation , but not editing .the locking into place that is a nuisance in imperative languages is a very serious difficulty in logic programming languages .additionally , reasoning about the termination of programs that traverse cyclic graphs is harder than reasoning about programs that traverse trees , whether in prolog dialects or in lazy functional languages , so it is useful to get by with trees if we can .this article has two parts .in the first part , i present `` fat pointers '' that can be used to navigate around pure trees .the tree itself remains unmodified throughout .the main predicates i define have the form from_to(from , to ) . if one of the arguments is ground and the other is uninstantiated , the time and space cost is o(1 ) per solution . in the second part , i present `` edit pointers '' that can be used to navigate around trees and edit them , in o(1 ) amortised time and space per step and edit . the type declarations are mycroft / okeefe type declarations using the syntax of mercury .the predicate declarations are also mycroft / okeefe declarations giving argument types and modes .the code has been type - checked by the mercury compiler .the clauses are edinburgh prolog .this paper provides evidence that using different definitions for different modes is useful , but that is difficult in mercury , so the modes were not converted to mercury syntax and the code does not pass mercury mode - checking .this is a generalisation of a method for o(1 ) left and right navigation in a list shown to me by david h. d. warren in 1983 , in a text editor he wrote in prolog .a companion paper presents this technique in a functional context .it was rejected on the grounds that the data structure had already been published by huet as the zipper in . however , the two data structures described in this paper and in are different from the zipper , and the issues discussed here are different .the key idea is to distinguish between a tree and a pointer into a tree .the usual c / java approach blurs this distinction , and that has misled some logic programmers into thinking that cyclically linked trees are necessary in order to obtain a certain effect in pointers .a tree just has to represent certain information ; a pointer has to know how to move .a suitable data type for trees is .... 
: - type tree(datum ) --- > node(datum , list(tree(datum ) ) ) .: - pred tree_datum(tree(d ) , d ) .tree_datum(node(datum , _ ) , datum ) .: - pred tree_children(tree(d ) , list(tree(d ) ) ) .tree_children(node(_,children ) , children ) ..... like a c pointer , a `` fat pointer '' points to a specific ( sub)tree ; unlike a c pointer , a `` fat pointer '' carries a context : the ( sub)tree s left siblings ( ordered from nearest to furthest ) , its right siblings ( ordered from nearest to furthest ) , and a parent fat pointer , if this is not the root . .... : - type pointer(d ) --- > ptr(tree(d ) , list(tree(d ) ) , list(tree(d ) ) , pointer(d ) ) ; no_ptr ..... the predicates we define will never be true of a ` no_ptr ' argument ..... : - pred top_pointer(tree(d ) , pointer(d ) ) .top_pointer(tree , ptr(tree,[],[],no_ptr ) ) .: - pred pointer_tree(pointer(d ) , tree(d ) ) .pointer_tree(ptr(tree , _ , _ , _ ) , tree ) .: - pred pointer_datum(pointer(d ) , d ) .pointer_datum(ptr(tree , _ , _ , _ ) , datum ) : - tree_datum(tree , datum ) .: - pred at_left(pointer ( _ ) ) .at_left(ptr ( _ , [ ] , _ , _ ) ) .: - pred at_right(pointer ( _ ) ) .at_right(ptr ( _ , _ , [ ] , _ ) ) .: - pred at_top(pointer ( _ ) ) .at_top(ptr(_,_,_,no_ptr ) ) . :- pred at_bottom(pointer ( _ ) ) .at_bottom(ptr(tree , _ , _ , _ ) ) : - tree_children(tree , [ ] ) .: - pred left_right(pointer(d ) , pointer(d ) ) .left_right(ptr(t , l,[n|r],a ) , ptr(n,[t|l],r , a ) ) .: - pred up_down_first(pointer(d ) , pointer(d ) ) .up_down_first(p , ptr(t,[],r , p ) ) : - % p = ptr(tree(_,[t|r ] ) , _ , _ , _ ) .p = ptr(tree , _ , _ , _ ) , tree_children(tree , [ t|r ] ) . .... the ` top_pointer/2 ' predicate may be used to make a fat pointer from a tree , or to extract a tree from a fat pointer positioned at_top .the ` at_/1 ' predicates recognise whether a fat pointer is positioned at a boundary .the query ` left_right(left , right ) ' is true when left and right are pointers to adjacent siblings , left on the left , and right on the right .the query ` up_down_first(up , down ) ' is true when up is a pointer and down is a pointer to up s first child ; it is o(1 ) time and space in either direction provided that the list of children of a node is stored in that node and not recomputed .the query ` up_down(up , down ) ' is to be true when up is a pointer and down is a pointer to any of up s children .it uses mode - dependent code . .... : - pred up_down(pointer(d ) , pointer(d ) ) .up_down(p , ptr(t , l , r , a ) ) : - ( var(p ) - > a = ptr ( _ , _ , _ , _ ) , % not no_ptr , that is .p = a ; a = p , p = ptr(tree , _ , _ , _ ) , tree_children(tree , children ) , % split children++ [ ] into reverse(l)++[t]++r split_children(children , [ ] , l , t , r ) ) .: - pred split_children(list(t ) , list(t ) , list(t ) , t , list(t ) ) .split_children([t|r ] , l , l , t , r ) .split_children([x|s ] , l0 , l , t , r ) : - split_children(s , [ x|l0 ] , l , t , r ) . .... we could write this clearly enough as ....up_down_specification(p , ptr(t , l , r , p ) ) : - p = ptr(tree(_,children ) , _ , _ , _ ) , split_children(children , [ ] , l , t , r ) ..... but then the cost of moving up would not be o(1 ) .this is an interesting predicate , because getting efficient behaviour in multiple modes is not just a matter of subgoal ordering . in the ( + , - ) mode, it is acceptable to call ` split_children/5 ' , because that will be o(1 ) space and time per solution . 
in the ( -,+ ) mode ,we must bypass that call .if the code is encapsulated in a module and type checked , it is clear that the bypassed call must succeed , but that would not be obvious to a compiler .queries in sgml and xml often ask for ( not necessarily adjacent ) preceding or following siblings of a node . testingwhether one node is a following sibling of another does not require mode - dependent code .however , in order to prevent unbounded backtracking in the reverse direction , the code below uses the technique of passing a variable ( l2 ) twice : once for its value , and once so that its length can act as a depth bound . .... : - pred siblings_before_after(pointer(d ) , pointer(d ) ) .siblings_before_after(ptr(t1,l1,r1,a ) , ptr(t2,l2,r2,a ) ) : - right_move([t1|l1 ] , r1 , l2 , [ t2|r2 ] , l2 ) .: - pred right_move(list(t ) , list(t ) , list(t ) , list(t ) , list(t ) ) .right_move(l , r , l , r , _ ) .right_move(l1 , [ x|r1 ] , l2 , r2 , [ _ |b ] ) : - right_move([x|l1 ] , r1 , l2 , r2 , b ) ..... moving from left to right , this costs per following sibling . moving from right to left , it costs per preceding sibling because the second clause builds up patterns of average length and the first clause unifies them against the known lists for each solution .the ` left_right_star/2 ' predicate below fixes this using mode - dependent code .i do not know whether cost per solution can be obtained with bidirectional code .next we have transitive ( _ plus ) and reflexive transitive ( _ star ) closures of the basic predicates . once again, we want to execute different code for the ( + , - ) and ( -,+ ) modes of the ` * _ plus/2 ' predicates . .... : - pred left_right_star(pointer(d ) , pointer(d ) ) .left_right_star(l , r ) : - ( l = r ; left_right_plus(l , r ) ) .: - pred left_right_plus(pointer(d ) , pointer(d ) ) .left_right_plus(l , r ) : - ( var(l ) - > left_right(x , r ) , left_right_star(l , x ) ; left_right(l , x ) , left_right_star(x , r ) ) .: - pred up_down_star(pointer(d ) , pointer(d ) ) .up_down_star(a , d ) : - ( a = d ; up_down_plus(a , d ) ) .: - pred up_down_plus(pointer(d ) , pointer(d ) ) .up_down_plus(a , d ) : - ( var(a ) - > up_down(x , d ) , up_down_star(a , x ) ; up_down(a , x ) , up_down_star(x , d ) ) .a simple benchmark is to build a large tree and traverse it . .... : - pred run .run : - mk_tree(10 , t ) , time((direct_datum(t , x ) , atom(x ) ) ) , time((any_pointer_datum(t , x ) , atom(x ) ) ) , time(labels(t , _ ) ) , time(collect(t , _ ) ) .: - pred mk_tree(integer , tree(integer ) ) .mk_tree(d , node(d , c ) ) : - ( d > 0 - > d1 is d - 1 , c = [ t , t , t , t ] , mk_tree(d1 , t ) ; c = [ ] ) .: - pred direct_datum(tree(d ) , d ) .direct_datum(node(d , _ ) , d ) .direct_datum(node(_,c ) , d ) : - member(n , c ) , direct_datum(n , d ) .: - pred any_pointer_datum(tree(d ) , d ) .any_pointer_datum(t , d ) : - top_pointer(t , p ) , up_down_star(p , n ) , pointer_datum(n , d ) .: - pred labels(tree(d ) , list(d ) ) .labels(tree , labels ) : - labels(tree , labels , [ ] ) .: - pred labels(tree(d ) , list(d ) , list(d ) ) .labels(node(label , children ) ) -- > [ label ] , labels_list(children ) .: - pred labels_list(list(tree(d ) ) , list(d ) , list(d ) ) .labels_list ( [ ] ) -- > [ ] . labels_list([tree|trees ] ) -- >labels(tree ) , labels_list(trees ) . 
:- pred collect(tree(d ) , list(d ) ) .collect(tree , labels ) : - top_pointer(tree , ptr ) , collect(ptr , labels , [ ] ) .: - pred collect(pointer(d ) , list(d ) , list(d ) ) .collect(ptr , [ datum|labels1 ] , labels ) : - pointer_datum(ptr , datum ) , ( at_bottom(ptr ) - > labels1 = labels2 ; up_down_first(ptr , child ) , collect(child , labels1 , labels2 ) ) , ( at_right(ptr ) - > labels2 = labels ; left_right(ptr , sibling ) , collect(sibling , labels2 , labels ) ) . .... this builds a tree with 1,398,101 nodes , and traverses it using a backtracking search ( both directly and using `` fat pointers '' ) , and by building a list ( both directly and using `` fat pointers '' ) ..traversal times in seconds , by methods and languages [ cols= " > , > , > , > ,< " , ] table [ traversal - table ] shows the times for this benchmark , as measured on an 84 mhz sparcstation 5 .the last two rows of table [ traversal - table ] refer to lazy functional languages : haskell ( ghc ) , and clean ( clm ) .both of those compilers use whole - program analysis , inlining , and strong type information , unlike the incremental , direct , and untyped prolog compilers .the first draft of this article included measurements for findall/3 .they were horrifying .this data structure is bad news for findall/3 and its relatives . to understand why ,let s consider three queries : .... q1(p , l ) : - findall(q , p1(p , q ) , l ) .q2(p , l ) : - findall(t , p2(p , t ) , l ) .q3(p , l ) : - findall(d , p3(p , d ) , l ) .p1(p , q ) : - up_down_star(p , q ) .p2(p , t ) : - p1(p , q ) , q = ptr(t , _ ) .p3(p , d ) : - p1(p , q ) , pointer_datum(q , d ) . .... for concreteness ,suppose that p points to a complete binary tree with nodes , and that the node data are one - word constants .query q3 requires space to hold the result .query q2 requires space to hold the result .this is because fragments of the tree are repeatedly copied .query q1 requires at least space to hold the result .every pointer holds the entire tree to simplify movement upwards , so the entire tree and its fragments are repeatedly copied .if findall/3 copied terms using some kind of hash consing , the space cost could be reduced to , but not the time cost , because it would still be necessary to test whether a newly generated solution could share structure with existing ones .one referee suggested that mercury s ` solutions/2 ' would be cleverer . a test in the 0.10 release showed that it is not yet clever enough .this is related to the problem of costly pointer unification : a fat pointer contains the entire original tree and many of its fragments as well as the fragment it directly refers to .but that is only a problem when you try to copy a fat pointer or unify it with something .the ` siblings_before_after/2 ' predicate is efficient if one of the arguments is ground and the other uninstantiated . with two ground arguments ,it is expensive , because because testing whether two fat pointers are the same fat pointer is expensive in this representation .two fat pointers are the same if and only if they unify , but that may involve comparing the source tree with itself ( and its subtrees with themselves , perhaps repeatedly ) . 
after reading an earlier draft of this ,thomas conway suggested a variant in which unifying fat pointers would be instead of .i have not adopted his code , because it makes moving up take more time and turn over more memory .he stores just enough of a parent to reconstruct it , given the child , instead of the entire parent .conway s code is very nearly huet s zipper .thomas conway has pointed out that it would be useful to form sets of ( pointers to ) nodes , and that fat pointers are not suitable for that because they are so costly to unify .sets of nodes are required for the hytime query language and by other query languages , including xpath .when one wants to form a set of subtrees , one might want a set of tree values , or a set of occurrences of tree values . using a pointerforces you to work with occurrences rather than values , and that is not always appropriate .the comparison of subtrees can be made more efficient by preprocessing the main tree so that every node contains a hash code calculated from its value .preprocessing the main tree so that every node has a unique number would make pointer comparison fast too .the ` pair ' type here is standard in mercury , tracing back to ` keysort/2 ' in dec-10 prolog . .... : - type ntree(d ) = tree(pair(integer , d ) ) .: - type npointer(d ) = pointer(pair(integer , d ) ) .: - pred number_tree(tree(d ) , ntree(d ) ) .number_tree(tree , numbered ) : - number_tree(tree , numbered , 0 , _ ) .: - pred number_tree(tree(d ) , ntree(d ) , integer , integer ) .number_tree(node(datum , children ) , node(n0-datum , numberedchildren ) , n0 , n ) : - n1 is 1 + n0 , number_children(children , numberedchildren , n1 , n ) .: - pred number_children(list(tree(d ) ) , list(ntree(d ) ) , integer , integer ) . number_children ( [ ] , [ ] , n , n ) .number_children([c|cs ] , [ d|ds ] , n0 , n ) : - number_tree(c , d , n0 , n1 ) , number_children(cs , ds , n1 , n ) .: - pred same_pointer(npointer(d ) , npointer(d ) ) .same_pointer(p1 , p2 ) : - pointer_datum(p1 , n- _ ) , pointer_datum(p2 , n- _ ) .: - pred compare_pointers(order , npointer(d ) , npointer(d ) ) .compare_pointers(o , p1 , p2 ) : - pointer_datum(p1 , n1- _ ) , pointer_datum(p2 , n2-_ ) , compare(o , n1 , n2 ) . ....prolog , having no type classes , would require special purpose set operations for sets of fat pointers using this technique .mercury , having type classes , would not require special code .the data structure described in the previous section implements pointers that point at particular nodes in trees , may exist in large numbers , and may be stepped in any direction at o(1 ) cost .it is useful for navigating around trees that are not to be changed .if you are writing an editor , it is a blunder to try to point _ at _ nodes .instead , it is advisable to point _ between _ nodes , as emacs does .we need a different data structure , and different operations .the real challenge , as huet understood well in , is to show that declarative languages can handle editing effectively .the title of this article promised reversible navigation ; it did not promise reversible editing .reversible editing is attainable , and so is editing .my attempts to discover a data structure that supports reversible editing have so far failed . because these operations are not reversible, i show input ( + ) and output ( - ) modes in the : - pred declarations . 
the normal way in prolog to `` change '' a node at depth in a tree is to build a new tree sharing as much as possible with the old one ; that takes time and space . the trick that permits cost edit operations is not to rebuild the new tree at once , but to interleave rebuilding with editing and navigation .an ` edit_ptr ` records the subtrees to either side of the current position , whether the sequence at this level has been edited , and what the parent position was . to `` change '' trees ,we need a new operation that creates a new node with the same information as an old one , except for a new list of children . .... : - pred tree_rebuild(+list(tree(d ) ) , + tree(d ) , -tree(d ) ) .tree_rebuild(c , node(d , _ ) , node(d , c ) ) . .... the ` extract/2 ' predicate extracts the edited tree from an ` edit_ptr ` . since these predicates are not reversible , mode information is shown in the : -pred declarations ..... : - type nlr(p ) --- > no_parent % this is the root ; left_parent(p ) % came down on the left ; right_parent(p ) .% came down on the right .: - type changed --- >n ; y. : - type edit_ptr(d ) --- > edit_ptr(list(tree(d ) ) , list(tree(d ) ) , changed , nlr(edit_ptr(d ) ) ) ..... an edit pointer contains a list of the nodes preceding the current position ( in near - to - far order ) , a list of the nodes following the current position ( in near - to - far order ) , a ` changed ' flag , and a parent pointer tagged so that we know whether we came down on the left or the right .one of the differences between this data structure and the zipper is that the zipper builds new retained structure as it moves , while this data structure uses a ` changed ' flag to revert to the original structure if nothing has happened but movement . .... : - pred start(+tree(d ) , -edit_ptr(d ) ) .start(t , edit_ptr([],[t],n , no_parent ) ) .: - pred at_top(+edit_ptr ( _ ) ) .at_top(edit_ptr(_,_,_,no_parent ) ) . :- pred at_left(+edit_ptr ( _ ) ) .at_left(edit_ptr ( [ ] , _ , _ , _ ) ) .: - pred left_datum(+edit_ptr(d ) , ?left_datum(edit_ptr([t| _ ] , _ , _ , _ ) , d ) : - tree_datum(t , d ) .: - pred left(+edit_ptr(d ) , -edit_ptr(d ) ) .left(edit_ptr([t|l],r , c , p ) , edit_ptr(l,[t|r],c , p ) ) .: - pred left_insert(+tree(d ) , + edit_ptr(d ) , -edit_ptr(d ) ) .left_insert(t , edit_ptr(l , r,_,p ) , edit_ptr([t|l],r , y , p ) ) .: - pred left_delete(-tree(d ) , + edit_ptr(d ) , -edit_ptr(d ) ) .left_delete(t , edit_ptr([t|l],r,_,p ) , edit_ptr(l , r , y , p ) ) . :- pred left_replace(+tree(d ) , + edit_ptr(d ) , -edit_ptr(d ) ) .left_replace(t , edit_ptr([_|l],r,_,p ) , edit_ptr([t|l],r , y , p ) ) . : - pred left_down(+edit_ptr(d ) , -edit_ptr(d ) ) .left_down(p , edit_ptr([],k , n , left_parent(p ) ) ) : - p = edit_ptr([t| _ ] , _ , _ , _ ) , tree_children(t , k ) . 
: - pred left_promote_children(+edit_ptr(d ) , -edit_ptr(d ) ) .left_promote_children(edit_ptr([t|l],r,_,p ) , edit_ptr(l1,r , y , p ) ) : - tree_children(t , k ) , reverse_append(k , l , l1 ) .: - pred reverse_append(+list(t ) , + list(t ) , -list(t ) ) .reverse_append ( [ ] , r , r ) .reverse_append([x|s ] , r0 , r ) : - reverse_append(s , [ x|r0 ] , r ) .: - pred at_right(+edit_ptr ( _ ) ) .at_right(edit_ptr ( _ , [ ] , _ , _ ) ) .: - pred right_datum(+edit_ptr(d ) , -d ) .right_datum(edit_ptr(_,[t| _ ] , _ , _ ) , d ) : - tree_datum(t , d ) .: - pred right(+edit_ptr(d ) , -edit_ptr(d ) ) .right(edit_ptr(l,[t|r],c , p ) , edit_ptr([t|l],r , c , p ) ) .: - pred right_insert(+tree(d ) , + edit_ptr(d ) , -edit_ptr(d ) ) .right_insert(t , edit_ptr(l , r,_,p ) , edit_ptr(l,[t|r],y , p ) ) .: - pred right_delete(-tree(d ) , + edit_ptr(d ) , -edit_ptr(d ) ) .right_delete(t , edit_ptr(l,[t|r],_,p ) , edit_ptr(l , r , y , p ) ) .: - pred right_replace(+tree(d ) , + edit_ptr(d ) , -edit_ptr(d ) ) .right_replace(t , edit_ptr(l,[_|r],_,p ) , edit_ptr(l,[t|r],y , p ) ) .: - pred right_down(+edit_ptr(d ) , -edit_ptr(d ) ) .right_down(p , edit_ptr([],k , n ,right_parent(p ) ) ) : - p = edit_ptr(_,[t| _ ] , _ , _ ) , tree_children(t , k ) .: - pred right_promote_children(+edit_ptr(d ) , -edit_ptr(d ) ) .right_promote_children(edit_ptr(l,[t|r],_,p ) , edit_ptr(l , r1,y , p ) ) : - tree_children(t , k ) , append(k , r , r1 ) .: - pred up(+edit_ptr(d ) , -edit_ptr(d ) ) .up(edit_ptr(_,_,n , left_parent(p ) ) , p ) .up(edit_ptr(_,_,n , right_parent(p ) ) , p ) .up(edit_ptr(x , y , y , left_parent(edit_ptr([t|l],r,_,p ) ) ) , edit_ptr([n|l],r , y , p ) ) : - reverse_append(x , y , k ) , tree_rebuild(k , t , n ) .up(edit_ptr(x , y , y , right_parent(edit_ptr(l,[t|r],_,p ) ) ) , edit_ptr(l,[n|r],y , p ) ) : - reverse_append(x , y , k ) , tree_rebuild(k , t , n ) .: - pred extract(+edit_ptr(d ) , -tree(d ) ) .extract(edit_ptr(_,[t|_],_,no_parent ) , tree ) : - ! , tree = t. extract(p , tree ) : - up(p , p1 ) , extract(p1 , tree ) . ....it is easy to see that each of these operations except ` up ' and ` extract ' requires time and space , assuming that ` tree_children ' is .moving ` up ' from a position with nodes to its left takes time and space , but that position must have been reached by a minimum of ` right/2 ' and/or ` left_insert/3 ' operations , so in the single - threaded case all the basic operations are amortised time and space .essentially , because the editing operations destroy information . we can easily reverse the operations `` move right '' or `` move down '' , because the only thing that has changed is the position , and the name of the predicate tells us what the old position must have been .suppose , however , we tried to make insertion and deletion the same operation : .... : - pred indel_right(d , edit_ptr(d ) , edit_ptr(d ) ) .indel_right(t , edit_ptr(l , r , fs , p ) , edit_ptr(l,[t|r],fw , p ) ) : - ?.... we would not know how to define fs and fw .the edit_pointer data structure may not have reversible operations , but it is a persistent data structure , so that an editor using it can easily support unbounded undo .imagine an application to html , where we might have .... : - type html_info --- > element(atom , list(pair(atom , string ) ) ) ; pcdata(string ) .: - type html_tree = = tree(html_info ) ..... the ` < font > ` tag is almost always misused , and is not allowed in `` strict '' html4 or xhtml .so we might want to replace every ` < font > ` element by its contents . .... 
: - pred unfont(+html_tree , -html_tree ) .unfont(html0 , html ) : - start(html0 , ptr0 ) , unfont_loop(ptr0 , ptr ) , extract(ptr , html ) .: - pred unfont_loop(+edit_ptr(html_info ) , -edit_ptr(html_info ) ) .unfont_loop(p0 , p ) : - ( at_right(p0 )- > p = p0 ; right_datum(p0 , element(font , _ ) ) - > right_promote_children(p0 , p1 ) , unfont_loop(p1 , p ) ; right_down(p0 , p1 ) , unfont_loop(p1 , p2 ) , up(p2 , p3 ) , right(p3 , p4 ) , unfont_loop(p4 , p ) ) . ....we do nt need cyclic links or mutable objects in declarative languages to support efficient multiway traversal and editing of trees , including models of xml . 4 conway , t. , ( 2000 ) . private communication .huet , g. ( 1997 ) .functional pearl : `` the zipper '' , _ j. functional programming _ * 7 * ( 5 ) : pp .549554 , september 1997 .mycroft , a. and okeefe , r. a. ( 1984 ) a polymorphic type system for prolog ._ artificial intelligence _ , vol.23 , pp295307 .okeefe , r. ( 2000 ) .`` lifting the curse of dom '' , submitted for publication .somogyi , z. , henderson , f. , and conway , t. ( 1995 ) .`` mercury : + an efficient purely declarative logic programming language '' , in _ proceedings of the australian computer science conference _ , glenelg , australia ,february 1995 , pp 499512 .http://www.cs.mu.oz.au/mercury/ has this and other mercury papers .world wide web consortium ( 1998 ). `` document object model level 1 recommendation '' .url : ` http://www.w3.org/tr/rec-dom-level-1/ ` world wide web consortium ( 1999 ) .`` xml path language ( xpath ) version 1.0 '' .url : ` http://www.w3.org/tr/xpath/ ` world wide web consortium ( 2000 ) .`` document object model level 2 recommendation '' , november 2000 .url : ` http://www.w3.org/tr/rec-dom-level-2/ `
imperative programmers often use cyclically linked trees in order to achieve o(1) navigation time to neighbours. some logic programmers believe that cyclic terms are necessary to achieve the same in logic-based languages. an old but little-known technique provides o(1) time and space navigation without cyclic links, in the form of reversible predicates. a small modification provides o(1) amortised time and space editing.
as hetnet provides flexible and efficient topology to boost spectral efficiency , it has recently aroused immense interest in both academia and industry . as illustrated in fig .[ fig : hetnet_2tiers ] , a hetnet consists of a diverse set of regular macro base stations ( bs ) overlaid with low power pico bss .since this overlaid structure may lead to severe interference problem , it is extremely critical to control interference via rrm in hetnet .there has been much research conducted on rrm optimization for traditional cellular networks . in ,the authors considered power and user scheduling in single - carrier cellular networks . in ,the game theoretical approaches are proposed for distributed resource allocation . in , the authors proposed a dynamic fractional frequency reuse scheme to combat the inter - sector interference under a game - based optimization by each sector .the coordinated multipoint transmission ( comp ) is another important technique to handle the inter - cell interference .for example , in , the authors exploited the uplink - downlink duality to do joint optimization of power allocation and beamforming vectors . in ,a wmmse algorithm is proposed to find a stationary point of the weighted sum - rate maximization problem for multi - cell downlink systems .while the above algorithms achieve comparably good performance , they require global channel state information ( csi ) for centralized implementation or over - the - air iterations and global message passing for distributed implementation .it is quite controversial whether comp is effective or not in lte systems due to large signaling overhead , signaling latency , inaccurate csit , and the complexity of the algorithm . on the other hand ,solutions for traditional cellular networks can not be applied directly to hetnet due to the unique difference in hetnet topology .first , the inter - cell interference in hetnet is more complicated , e.g. , there is _ co - tier interference _ among the pico bss and among the macro bss as well as the _ cross - tier interference _ between the macro and pico bss .furthermore , due to load balancing , some of the mobiles in hetnet may be assigned to a pico bs which is not the strongest bs and the mobiles in the pico cell may suffer from strong interference from the macro bss . to solve these problems , some eicic techniques , such as the abs control ,have been proposed in lte and lte - a . in ,the authors analyzed the performance for abs in hetnet under different cell range extension ( re ) biases .however , they focused on numerical analysis for the existing heuristic eicic schemes , which are the baselines of this paper .in , the authors proposed an algorithm for victim pico user partition and optimal synchronous abs rate selection .however , they used a universal abs rate for the whole network , and as a result , their scheme could not adapt to dynamic network loading for different macro cells . in this paper , we focus on the resource optimization in the downlink of a hetnet without comp .we consider _ dynamic abrb _ for interference control and dynamic user scheduling to exploit _ multi - user diversity_. the abrb is similar to the abs but it is scheduled over both time and frequency domain .unlike , we do not restrict the abrb rate to be the same for all macro bss and thus a better performance can be achieved .however , this also causes several new technical challenges as elaborated below . 
* * exponential complexity for dynamic abrb : * optimization of abrb patterns is challenging due to the combinatorial nature and exponentially large solution space .for example , in a hetnet with macro bss , there are different abrb pattern combinations . hence , brute force solutions are highly undesirable . * * complex interactions between dynamic user scheduling and dynamic abrb * : there is complex coupling between the dynamic user scheduling and abrb control .for instance , the abrb pattern will affect the user sets eligible for user scheduling .furthermore , the optimization objective of abrb control depends on user scheduling policy and there is no closed form characterization . * * challenges in rrm architecture : * most existing solutions for resource optimization of hetnet requires global knowledge of csi and centralized implementations . yet ,such designs are not scalable for large networks and they are not robust with respect to ( w.r.t . )latency in backhaul . to address the above challenges ,we propose a two timescale control structure where the long term controls , such as dynamic abrb , are adaptive to the large scale fading . on the other hand ,the short term control , such as the user scheduling , is adaptive to the local csi within a pico / macro bs .such a multi - timescale structure allows _ hierarchical rrm _ design , where the long term control decisions can be implemented on a rrm server for inter - cell interference coordination .the short - term control decisions can be done locally at each bs with only local csi .such design has the advantages of low signaling overhead , good scalability , and robustness w.r.t .latency of backhaul signaling . while there are previous works on two timescale rrm , those approaches are heuristic ( i.e. the rrm algorithms are not coming from a single optimization problem ) .our contribution in this paper is a formal study of two timescale rrm algorithms for hetnet based on optimization theory . to overcome the exponential complexity for abrb control, we exploit the sparsity in the _ interference graph _ of the hetnet topology and derive structural properties for the optimal abrb control .based on that , we propose a _ two timescale alternative optimization _solution for user scheduling and abrb control .the algorithm has low complexity and is asymptotically optimal at high snr .simulations show that the proposed solution has significant performance gain over various baselines ._ notations _ : let denote the indication function such that if the event is true and otherwise .for a set , denotes the cardinality of .consider the downlink of a two - tier hetnet as illustrated in fig .[ fig : hetnet_2tiers ]. there are macro bss , pico bss , and users , sharing ofdm subbands .denote the set of the macro bss as , and denote the set of the pico bss as . a two - tier heterogeneous network with macro and pico base stations , width=332 ] the hetnet topology ( i.e. , the network connectivity and csi of each link ) is represented by a topology graph as defined below .[ hetnet topology graph]define the _ topology graph _ of the hetnet as a bipartite graph , where denotes the set of all macro and pico bs nodes , denotes the set of all user nodes , and is the set of all edges between the bss and users .an edge between bs node and user node represents a wireless link between them .each edge is associated with a csi label , where represents the channel coefficient between bs and user on subband . 
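a minimal sketch of the topology graph defined above: a bipartite structure between bs nodes (macro and pico) and user nodes, with per-subband channel coefficients attached to the edges and the neighbour sets exposed for later use. class and field names are illustrative, not from the paper.

.... 
# Sketch of the HetNet topology graph: bipartite graph between BS nodes
# (macro + pico) and user nodes, edges carrying per-subband channels.
from collections import defaultdict

class TopologyGraph:
    def __init__(self):
        self.macro_bss = set()
        self.pico_bss = set()
        self.edges = {}                        # (bs, user) -> per-subband channel coefficients
        self.neighbors_of_user = defaultdict(set)
        self.users_of_bs = defaultdict(set)

    def add_edge(self, bs, user, channel_per_subband):
        """Add a BS-user link whose interference is non-negligible."""
        self.edges[(bs, user)] = channel_per_subband
        self.neighbors_of_user[user].add(bs)
        self.users_of_bs[bs].add(user)

    def neighbor_macros(self, user):
        return self.neighbors_of_user[user] & self.macro_bss

    def neighbor_picos(self, user):
        return self.neighbors_of_user[user] & self.pico_bss
 ....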
for each bsnode , let denote the set of associated users . for each usernode , define as the set of neighbor macro bss and as the set of neighbor pico bss . in the topology graph, means that the path gain between user and bs is sufficiently small compared to the direct link path gain , and thus the interference from bs will have negligible effect on the data rate of user .we have the following assumption on the channel fading process .[ asm : fory]the channel fading coefficient has a two timescale structure given by .the small scale fading process is identically distributed w.r.t .the subframe and subband indices ( ) , and it is i.i.d .user and bs indices ( ) .moreover , for given , is a continuous random variable .the large scale fading process is assumed to be a slow ergodic process ( i.e. , remains constant for several _ _ super - frames _ _ subframes ] . ) according to a general distribution .the two timescale fading model has been adopted in many standard channel models .the large scale fading is usually caused by path loss and shadow fading , which changes much slowly compared to the small scale fading .we consider the following _ biased cell selection _ mechanism to balance the loading between macro and pico bss .let denote the serving bs of user .let denote the cell selection bias and let denote the transmit power of bs on a single sub - band .let and respectively be the strongest macro bs and pico bs for user . if , user will be associated to pico cell , i.e. , ; otherwise .if a user only has a single edge with its serving bs , it will not receive inter - cell interference from other bss and thus its performance is noise limited ; otherwise , it will suffer from strong inter - cell interference if any of its neighbor bss is transmitting data and thus its performance is interference limited .this insight is useful in the control algorithm design later and it is convenient to formally define the interference and noise limited users . [interference / noise limited user][def : inuser]if a user has a single edge with its serving bs only , i.e. , , then it is called a _ noise limited user _ (n - user ) ; otherwise , it is called an _ interference limited user _ ( i - user ) . fig .[ fig : abs_intro ] illustrates an example of the hetnet topology graph . in fig .[ fig : abs_intro](a ) , an arrow from a bs to a user indicates a direct link and the dash circle indicates the coverage area of each bs .an i - user which lies in the coverage area of a macro bs is connected to this macro bs , while a n - user does not have connections with the neighbor macro bss in the topology graph as illustrated in fig .[ fig : abs_intro](b ) .we consider a two timescale hierarchical rrm control structure where the control variables are partitioned into _ long - term _ and _ short - term _ control variables .the long - term control variables are adaptive to the large scale fading and they are implemented at the radio resource management server ( rrms ) .the short - term control variables are adaptive to the instantaneous csi and they are implemented locally at each macro / pico bs .abs is introduced in lte systems for interference mitigation among control channels in hetnet .it can also be used to control the _ co - tier _ and _ cross - tier interference _ among the data channels . in lte systems ,abs is only scheduled over time domain . in this paper, we consider dynamic abrb control for interference coordination .the abrb is similar to abs but it is scheduled over both time and frequency domain . 
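returning to the biased cell selection mechanism and the i-user / n-user distinction described above, the following hedged python sketch makes the comparison explicit; all symbols (powers, gains, bias) are illustrative assumptions rather than the paper's exact notation.

```python
import numpy as np

def select_cells(gain_macro, gain_pico, p_macro, p_pico, bias):
    """gain_macro: (K, M) large-scale gains user->macro BS; gain_pico: (K, P)
    gains user->pico BS; bias: range-extension bias applied to the pico side."""
    serving = []
    for k in range(gain_macro.shape[0]):
        rx_m = p_macro * gain_macro[k]           # received power from each macro BS
        rx_p = p_pico * gain_pico[k]             # received power from each pico BS
        m_star, p_star = int(rx_m.argmax()), int(rx_p.argmax())
        # biased comparison: associate with the pico cell when the biased pico
        # power exceeds the strongest macro power, otherwise with the macro
        if bias * rx_p[p_star] > rx_m[m_star]:
            serving.append(("pico", p_star))
        else:
            serving.append(("macro", m_star))
    return serving

def is_interference_limited(neighbours, serving_bs):
    # an I-user keeps at least one edge in the topology graph besides the one
    # to its serving BS; otherwise it is noise limited (an N-user)
    return len(set(neighbours) - {serving_bs}) > 0
```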
it is a generalization of abs and enables more fine - grained resource allocation .when an abrb is scheduled in a macro bs , a rb with blank payload will be transmitted at a given frequency and time slice and this eliminates the interference from this macro bs to the pico bss and the adjacent macro bss .hence , as illustrated in fig .[ fig : abs_intro ] , scheduling abrb over both time and frequency domain allows us to control both the _ macro - macro _ bs and _ macro - pico _ bs interference .we want to control the abrb dynamically w.r.t .the large scale fading because the optimal abrb pattern depends on the hetnet topology graph .for example , when there are a lot of pico cell i - users , we should allocate more abrbs at the macro bs to support more pico cell i - users . on the other hand , when there are only a few pico cell i - users, we should allocate less abrbs to improve the spatial spectrum efficiency .for any given subframe , define to indicate if abrb is scheduled ( ) for subband at macro bs .let ^{t}\in\mathcal{a} ] be the associated vectorized variable .the set of all feasible user scheduling vectors at bs for the -th subbands with abrb pattern is given by where is the set of users that can not be scheduled on a type a subband under abrb pattern ; and .the physical meaning of is elaborated below .first , if a macro bs is transmitting abrb , none of its associated users can be scheduled for transmission . moreover , due to large cross - tier interference from macro bss , a pico cell i - user can not be scheduled for transmission if any of its neighbor macro bss is transmitting data subframe ( i.e. , ) .as will be seen in section [ sub : structural - properties - ofpa ] , explicitly imposing this user scheduling constraint for the pico cell i - users is useful for the structural abrb design .the user scheduling policy of the -th sub - band is defined below .[ user scheduling policy][def : randomized - user - scheduling ] a user scheduling policy of the -th bs and -th sub - band is a mapping : , where is the csi space .specifically , under the abrb pattern and csi realization , the user scheduling vector of bs is given by .let denote the overall user scheduling policy on sub - band .assuming perfect csi at the receiver ( csir ) and treating interference as noise , the instantaneous data rate of user is given by : where , ; is the mutual information of user contributed by the -th subband ; and is the interference - plus - noise power at user on subband . for a given policy and large scale fading state ,the average data rate of user is given by : =\sum_{m\in\mathcal{m}\left(k\right)}\overline{\mathcal{i}}_{m , k}\left(q_{m},\pi_{m}\right),\ ] ] where the average mutual information on subband is and ] when there is no ambiguity .the performance of the hetnet is characterized by a utility function , where ] .define two sets of abrb patterns and for each bs . for macro bs , is the set of abrb patterns under which macro bs is transmitting data .for pico bs , is the set of abrb patterns under which all of its neighbor macro bss is transmitting abrb . using the configuration in example [ exm - problema ] , we have ,\left[1,1\right]\right\ } ] and ,\left[1,0\right]\right\ } ] , denoted by , has the _ _ synchronous abrb structure__. ] where the transmissions of abrb at the macro bss are aligned as much as possible . 
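to make the user scheduling constraint and the instantaneous rate expression introduced above concrete, the following python sketch gives one possible reading of the eligibility rule (a macro user is silenced when its own bs blanks; a pico i-user is silenced when any neighbour macro transmits data) together with a log2(1+sinr) rate that treats interference as noise; the names and indexing are our assumptions.

```python
import numpy as np

def eligible_users(bs, users, abrb, neighbour_macros, is_macro):
    """Users of BS `bs` that may be scheduled under ABRB pattern `abrb`
    (abrb[m] == 1 means macro m sends a blank resource block)."""
    out = []
    for k in users:
        if is_macro[bs]:
            if abrb[bs] == 0:                      # own macro must transmit data
                out.append(k)
        else:
            # pico user: blocked unless every neighbour macro is blanking;
            # an N-user has no neighbour macros, so it is always eligible
            if all(abrb[m] == 1 for m in neighbour_macros[k]):
                out.append(k)
    return out

def instantaneous_rate(k, b_serv, active_bs, h, p_tx, noise_power):
    """log2(1 + SINR) of user k served by BS b_serv on one subband, with the
    interference from every other actively transmitting BS treated as noise."""
    signal = p_tx[b_serv] * abs(h[(k, b_serv)]) ** 2
    interference = sum(p_tx[b] * abs(h[(k, b)]) ** 2
                       for b in active_bs if b != b_serv)
    return float(np.log2(1.0 + signal / (interference + noise_power)))
```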
as a result , there are only active abrb patterns ,[0,1],[1,1]\right\ } ] , the average mutual information region will be maximized if we can simultaneously maximize for all bss . for macro bss , we have , which is fixed for given . for pico bss , we have , and the equality holds if and only if has the synchronous abrb structure in fig .[ fig : abs_reduction ] .similarly , we can reduce the user scheduling policy space using observation [ clm : effect - of - abrb - mi ] .[ clm : policy - space - reductionpia]consider for the configuration in example [ exm - problema ] . for given csi and abrb pattern ,the optimal user scheduling at bs is given by by observation [ clm : effect - of - abrb - mi ] , if solves the maximization problem ( [ eq : argmaxruo ] ) for certain , it solves ( [ eq : argmaxruo ] ) for all .hence , it will not loss optimality by imposing an additional constraint on the user scheduling such that ( or ) . for convenience ,let denote the set of all feasible user scheduling policies satisfying the above constraint in observation [ clm : policy - space - reductionpia ] ( the formal definition of is given in theorem [ thm : policy - space - reductionpia ] ) .then for given , and under the synchronous abrb , the corresponding objective function of can be rewritten as , where and is the corresponding average mutual information given in ( [ eq : ibarak ] ) of appendix [ sub : spageneral ] . as a result, the subproblem can be transformed into a simpler problem with solution space .[ equivalent problem transformation of ][cor : equivalent - problem - pa]let denote the optimal solution of the following joint optimization problem . where ,\forall j\right\ } ] , the user scheduling vector of bs is given by , where and are the csi and abrb pattern at the -th subframe .* step 2 * ( long timescale abrb optimization ) : * * find the optimal solution of problem ( [ eq : plare ] ) under fixed using e.g. , ellipsoid method .let .* * return to step 1 until * * or the maximum number of iterations is reached . while ( [ eq : plare ] ) is a bi - convex problem and ao algorithmis known to converge to local optimal solutions only , we exploit the hidden convexity of the problem and show below that algorithm ao_a can converge to the global optimal solution under certain conditions .[ global convergence of algorithm ao_a][thm : global - convergence - la]let =\mathcal{f}\left(\left[\mathbf{q}_{a}^{(t)},\pi_{a}^{(t-1)}\right]\right) ] \right)\right\ } ] .then : 1 .algorithm ao_a converges to a fixed point \in\delta ] is a globally optimal solution of problem ( [ eq : plare ] ). please refer to appendix [ sub : proof - of - theoremla ] for the proof .step 1 of algorithm ao_a requires the knowledge of the average data rate under and .we adopt a reasonable approximation on using a moving average data rate ] with denoting the probability that the abrb pattern is used .hence can be reformulated as similar to ( [ eq : plare ] ) , we propose a two timescale ao to solve for problem ( [ eq : plbequ ] ) w.r.t . and ._ algorithm ao_b _ ( two timescale ao for ) : * * initialization * * : choose proper initial , . set . * * step 1 * * ( short timescale user scheduling optimization ) : for fixed , let , where is given by where is the average data rate of user under and user scheduling policy . 
for each subframe ] with and .for example , for the hetnet in fig .[ fig : ifc_graph1 ] , we have and thus , where ] and ] ..[tab : flowpb]top level flow of finding a good abrb profile [ cols= " < " , ] fig .[ fig : conva ] shows the utility in ( [ eq : plare ] ) of the type a users versus the number of super - frames .the utility increases rapidly and then approaches to a steady state after only updates .the figure demonstrates a fast convergence behavior of algorithm ao_a .similar convergence behavior was also observed for algorithm ao_b and the simulation result is not shown here due to limited space .utility of the type a users versus the number of super - frames , width=321 ] we compare the complexity of the baselines and proposed rrm algorithms .the complexity can be evaluated in the following 3 aspects .\1 ) for the short term user scheduling , the proposed scheme and baseline 1 - 3 have the same complexity order of , while the baseline 4 has a complexity of , where is a proportionality constant that corresponds to some matrix and vector operations with dimension , and is the number of bss in each cooperative cluster .\2 ) for the long term abrb control variables and , as they are updated by solving standard convex optimization problems in step 2 of algorithm ao_a and ao_b respectively , the complexities are polynomial w.r.t .the number of the associated optimization variables .specifically , for control variable , the complexity is polynomial w.r.t . the number of macro bss . for controlvariable , the complexity is polynomial w.r.t .the size of the abrb profile : .in addition , they are only updated once in each super - frame .\3 ) the abrb profile is computed using algorithm b2 in every several super - frames to adapt to the large scale fading . in step 1 of algorithm b2 ,the complexity of solving the convex problem ( [ eq : fixthetapdet ] ) is polynomial w.r.t . . in step 2 of algorithm b2 ,if the mwis algorithm in is used to solve problem ( [ eq : mwisp ] ) , the complexity is , where is the number of edges in the interference graph .we propose a two - timescale hierarchical rrm for hetnet with dynamic abrb . to facilitate structural abrb design for cross - tier and co - tier interference ,the subbands are partitioned into type a and type b subbands .consequently , the two timescale rrm problem is decomposed into subproblems and which respectively optimizes the abrb control and user scheduling for the type a and type b subbands .both subproblems involve non - trivial multi - stage optimization with exponential large solution space w.r.t . the number of macro bss .we exploit the sparsity in the hetnet interference graph and derive the structural properties to reduce the solution space .based on that , we propose two timescale ao algorithm to solve and .the overall solution is asymptotically optimal at high snr and has low complexity , low signaling overhead as well as robust w.r.t .latency of backhaul signaling .define the average rate region as \in\mathbb{r}_{+}^{k}:\ : x_{k}\le\overline{r}_{k}\left(\lambda\right),\forall k\right\ } .\ ] ] for any utility function that is concave and increasing w.r.t . 
to the average data rates , the optimal policy of must achieve a pareto boundary point of .hence , we only need to show that any pareto boundary point of can be achieved by a symmetric policy .define the average rate region under fixed as \in\mathbb{r}_{+}^{k}:\\ & & x_{k}\le\overline{r}_{k}\left(\left\ { q_{s},\left\ { q_{m}\right\ } , \left\ { \pi_{m}\right\ } \right\ } \right),\forall k\bigg\}.\end{aligned}\ ] ] then we only need to show that any pareto boundary point of can be achieved by a symmetric policy .define the average mutual information region _ _ for subband as : {\forall k\in\mathcal{u}_{a}}\in\mathbb{r}_{+}^{\left|\mathcal{u}_{a}\right|}:\nonumber \\ & & x_{k}\le\overline{\mathcal{i}}_{m , k}\left(q_{m},\pi_{m}\right),\forall k\in\mathcal{u}_{a}\bigg\}.\label{eq : ratereg1g-1}\end{aligned}\ ] ] define the average mutual information region __ for subband as : {\forall k\in\mathcal{u}_{b}}\in\mathbb{r}_{+}^{\left|\mathcal{u}_{b}\right|}:\nonumber \\ & & x_{k}\le\overline{\mathcal{i}}_{m , k}\left(q_{m},\pi_{m}\right),\forall k\in\mathcal{u}_{b}\bigg\}.\label{eq : ratereg2g-1}\end{aligned}\ ] ] it can be verified that is a convex region in and is a convex region in . moreover , due to the statistical symmetry of the subbands , we have let and . from the convexity of and ( [ eq : ra]-[eq : rb ] ) , we have hence , for any pareto boundary point ^{t} ] is a pareto boundary point of and {k\in\mathcal{u}_{b}}\in\mathbb{r}_{+}^{\left|\mathcal{u}_{b}\right|} ] is a pareto boundary point of , it follows that {\forall k\in\mathcal{u}_{a}\cap\mathcal{u}_{n}} ] is a pareto boundary point of .hence , there exists user scheduling policy satisfying and for all .then it follows that {\forall k\in\mathcal{u}_{a}} ] .the rest is to prove that any \in\delta ] of the following convex optimization problem ,\forall j\:\textrm{and}\:\sum_{j=1}^{\left|\theta^{(i)}\right|}\check{q}_{j}=1,\nonumber \end{aligned}\ ] ] where {j=1, ...,\left|\theta^{(i)}\right|}$ ] , and is the mis in .[ asymptotically optimal abrb profile][thm : asymptotic - equivalence - ofpb]algorithm b2 always converges to an abrb profile with .furthermore , the converged result is asymptotically optimal for high snr .i.e. , where for some positive constants s , and is the optimal objective value of . consider problem which is the same as except that there are two differences : 1 ) the fading channel is replaced by a _ deterministic channel _ with the channel gain between bs and user given by the corresponding large scale fading factor ; 2 ) an additional constraint is added to the user scheduling policy such that any two links having an edge ( i.e. , ) in the interference graph can not be scheduled for transmission simultaneously .it can be shown that the optimal solution of problem is asymptotically optimal for at high snr . moreover , using the fact that the achievable mutual information region in the deterministic channel is a convex polytope with as the set of _ pareto boundary _ vertices , it can be shown that is equivalent to the following problem where is the mis in . to complete the proof of theorem [ thm : asymptotic - equivalence - ofpb ] , we only need to further prove that algorithm b2 converges to the optimal solution of problem ( [ eq : pdet ] ) . 
using the fact that any point in a -dimensional convex polytope can be expressed as a convex combination of no more than vertices , it can be shown that there are at most non - zero elements in in step 1 of algorithm b2 .hence .moreover , it can be verified that if is not optimal for ( [ eq : pdet ] ) . combining the above and the fact that is upper bounded by , algorithm b2 must converge to the optimal solution of ( [ eq : pdet ] ) .this completes the proof . in step 2 of algorithm b2 , problem ( [ eq : mwisp ] ) is equivalent to finding a _ maximum weighted independent set _ ( mwis ) in the interference graph with the weights of the vertex nodes given by .the mwis problem has been well studied in the literature .although it is in general np hard , there exists low complexity algorithms for finding near - optimal solutions .although the asymptotic global optimality of algorithm b2 is not guaranteed when step 2 is replaced by a low complexity solution of ( [ eq : mwisp ] ) , we can still prove its monotone convergence .d. gesbert , s. kiani , a. gjendemsj _ et al ._ , `` adaptation , coordination , and distributed resource allocation in interference - limited wireless networks , '' _ proceedings of the ieee _ , vol . 95 , no . 12 , pp . 23932409 , 2007 .a. l. stolyar and h. viswanathan , `` self - organizing dynamic fractional frequency reuse in ofdma systems , '' in _infocom 2008 .the 27th conference on computer communications .ieee_.1em plus 0.5em minus 0.4emieee , 2008 , pp .691699 .r. irmer , h. droste , p. marsch , m. grieger , g. fettweis , s. brueck , h. mayer , l. thiele , and v. jungnickel , `` coordinated multipoint : concepts , performance , and field trial results , '' _ ieee communications magazine _ , vol .49 , no . 2 ,pp . 102111 , 2011 .q. shi , m. razaviyayn , z .- q . luo , and c. he , `` an iteratively weighted mmse approach to distributed sum - utility maximization for a mimo interfering broadcast channel , '' _ ieee trans . signal processing _59 , no . 9 , pp .43314340 , sept .2011 .y. wang and k. i. pedersen , `` performance analysis of enhanced inter - cell interference coordination in lte - advanced heterogeneous networks , '' in _ vehicular technology conference ( vtc spring ) , 2012 ieee 75th_. 1em plus 0.5em minus 0.4emieee , 2012 , pp .j. pang , j. wang , d. wang , g. shen , q. jiang , and j. liu , `` optimized time - domain resource partitioning for enhanced inter - cell interference coordination in heterogeneous networks , '' in _ wireless communications and networking conference ( wcnc ) , 2012 ieee_.1em plus 0.5em minus 0.4emieee , 2012 , pp .16131617 .m. hong , r .- y .sun , h. baligh , and z .- q .luo , `` joint base station clustering and beamformer design for partial coordinated transmission in heterogenous networks , '' 2012 .[ online ] .available : http://arxiv.org/abs/1203.6390 r. ghaffar and r. knopp , `` fractional frequency reuse and interference suppression for ofdma networks , '' in _ proceedings of the 8th international symposium on modeling and optimization in mobile , ad hoc and wireless networks _ , 2010 , pp .273277 .o. somekh , o. simeone , y. bar - ness , a. haimovich , and s. shamai , `` cooperative multicell zero - forcing beamforming in cellular downlink channels , '' _ ieee trans .inf . theory _55 , no . 7 , pp .32063219 , 2009 .
interference is a major performance bottleneck in heterogeneous network ( hetnet ) due to its multi - tier topological structure . we propose almost blank resource block ( abrb ) for interference control in hetnet . when an abrb is scheduled in a macro bs , a resource block ( rb ) with blank payload is transmitted and this eliminates the interference from this macro bs to the pico bss . we study a two timescale hierarchical radio resource management ( rrm ) scheme for hetnet with _ dynamic abrb _ control . the long term controls , such as dynamic abrb , are adaptive to the large scale fading at a rrm server for co - tier and cross - tier interference control . the short term control ( user scheduling ) is adaptive to the local channel state information within each bs to exploit the _ multi - user diversity_. the two timescale optimization problem is challenging due to the exponentially large solution space . we exploit the sparsity in the _ interference graph _ of the hetnet topology and derive structural properties for the optimal abrb control . based on that , we propose a _ two timescale alternative optimization _ solution for the user scheduling and abrb control . the solution has low complexity and is asymptotically optimal at high snr . simulations show that the proposed solution has significant gain over various baselines . heterogeneous network , dynamic abrb control , two timescale rrm
[ sec:1 ] * remark .* notation is equivalent to where } _ { n\times 1 } \ , & \rho & = & \rho_{ij } & = & \underbrace { \left [ \begin{array}{c } \rho_{1j } \\ \vdots \\ \rho_{nj } \\ \end{array } \right ] } _ { n \times 1 } \ . \end{array}\ ] ] notation is equivalent to where }_{n \times n } } \ .\end{array}\ ] ] let us consider ( e.g. for ) variance of the difference of two random variables and , where , in terms of covariance introducing the estimation statistics in terms of correlation function if and or if and the unbiasedness constraint ( i condition ) is equivalent to the minimization constraint where produces equations in the unknowns : kriging weights and a lagrange parameter ( ii condition ) }_{n\times(n+1 ) } } & \cdot & \underbrace { \left [ \begin{array}{c } \omega_j^1\\ \vdots \\ \omega_j^n \\ \mu_j \\ \end{array } \right ] } _ { ( n+1)\times 1 } & = & \underbrace { \left [ \begin{array}{c } \rho_{1j } \\ \vdots \\ \rho_{nj } \\ \end{array } \right ] } _ { n \times 1 } \end{array}\ ] ] multiplied by and substituted into ^ 2\}-\underbrace{e^2\{v_j-\hat{v}_j\}}_0 \\ & = & e\{[(v_j - m)-(\hat{v}_j - m)]^2\ } \\ & = & e\{[v_j - m]^2\}-2(e\{v_j\hat{v}_j\}-m^2)+e\{[\hat{v}_j - m]^2\ } \\ & = & \sigma^2 -2 \sigma^2 |\omega^i_j \rho_{ij}| + \sigma^2 |\omega^i_j \rho_{ii } \omega^i_j| \\ & = & \sigma^2 \pm 2 \sigma^2 \omega^i_j \rho_{ij } \mp \sigma^2 \omega^i_j \rho_{ii } \omega^i_j \end{array}\ ] ] give the minimized variance of the field under estimation ^ 2\ } = \sigma^2 ( 1 \pm ( \omega^i_j \rho_{ij } + \mu_j ) ) \ ] ] and these two conditions produce equations in the unknowns }_{(n+1)\times(n+1 ) } } & \cdot & \underbrace { \left [ \begin{array}{c } \omega_j^1 \\ \vdots \\ \omega_j^n \\ \mu_j \\ \end{array } \right ] } _ { ( n+1)\times 1 } & = & \underbrace { \left [ \begin{array}{c } \rho_{1j } \\ \vdots \\ \rho_{nj } \\ 1 \\ \end{array } \right ] } _ { ( n+1 ) \times 1 } \ . 
\end{array}\ ] ]since then and ( since ) then the minimized variance of the field under estimation ^ 2\ } = \sigma^2 ( 1\pm(\omega^i_j \rho_{ij } + \mu_j))\ ] ] has known asymptotic property ^ 2\ } = \lim_{n \rightarrow \infty } e\{[v_j-\omega^i_j v_i]^2\ } = e\{[v_j - m]^2\ } = \sigma^2 \ .\ ] ]let us consider the field under estimation where for auto - estimation holds with minimized variance of the estimation statistics ^ 2\ } & = & cov\{(\omega^i_j v_i)(\omega^i_j v_i)\ } \\ & = & \sum_i\sum_l\omega^i_j \omega^l_j cov\{v_i v_l\ } \\ & = & \sigma^2 |\omega^i_j \rho_{ii } \omega^i_j| \\ & = & \mp\sigma^2(\omega^i_j \rho_{ij}-\mu_j ) \ , \end{array}\ ] ] where for auto - estimation holds ^ 2\ } = e\{[v_i - m]^2\ } = \sigma^2\ ] ] that means outcoming of input value is unknown for mathematical model , with minimized variance of the field under estimation ^ 2\ } & = & \sigma^2(1\pm(\omega^i_j \rho_{ij } + \mu_j ) ) \end{array}\ ] ] where for auto - estimation holds ^ 2\ } = \underbrace{e\{[v_i - m]^2\}}_{\sigma^2 } - \underbrace{2(e\{v_i\hat{v}_i\}-m^2)}_{2\sigma^2 } + \underbrace{e\{[\hat{v}_i - m]^2\}}_{\sigma^2 } = 0\ ] ] that means variance of the field is equal to variance of the ( auto-)estimation statistics ( not that auto - estimation matches observation ) .for } _ { n \times 1 } = \xi \underbrace { \left [ \begin{array}{c } 1 \\\vdots \\ 1 \\ \end{array } \right ] } _ { n \times 1 } \qquad \xi \rightarrow 0 ^ - ~(\mbox{or } ~\xi \rightarrow 0^+ ) \ ] ] and a disjunction of the minimized variance of the field under estimation ^ 2\ } - \underbrace{(e\{v_j\hat{v}_j\}-m^2)}_{\mp\sigma^2\xi } + \underbrace{e\{\hat{v}_j[\hat{v}_j - v_j]\}}_{\mp\sigma^2\xi } \quad \mbox{if } \quad \rho_{ij } \omega^i_j + \mu_j = \xi+ \mu_j=0\ ] ] which fulfills its asymptotic property the kriging system }_{(n+1)\times(n+1 ) } } & \cdot & \underbrace { \left [ \begin{array}{c } \omega^1 \\ \vdots \\ \omega^n \\ - \xi \\ \end{array } \right ] } _ { ( n+1)\times 1 } & = & \underbrace { \left [ \begin{array}{c } \xi \\ \vdots \\ \xi \\ 1 \\ \end{array } \right ] } _ { ( n+1 ) \times 1 } & \end{array}\ ] ] equivalent to and where : , , , has the least squares solution and with a mean squared error of mean estimation ^ 2\ } = \mp\sigma^2 2\xi \ .\ ] ]for white noise ^ 2\ } & = & e\{[v_j - m]^2\}+e\{[\hat{v}_j - m]^2\ } \\ * remark .* precession of arithmetic mean can not be identical to cause a straight line fitted to high - noised data by ordinary least squares estimator can not have the slope identical to .for this reason the estimator of an unknown constant variance in fact is the lower bound for precession of the minimized variance of the field under estimation ^ 2\ } = \sigma^2\left(1+\frac{1}{n}\right)\ ] ] to increase ` a bit ' the lower bound for we can effect on weight and reduce total counts by because is the closest positive integer number to so it is easy to find the closest weight such that then the so - called unbiased variance in fact is the simplest estimator of minimized variance of the field under estimation .
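for readers who wish to reproduce the numbers, the bordered kriging system written above (the correlation matrix bordered by ones, solved for the weights and the lagrange parameter) can be solved directly; the sketch below is a minimal numerical illustration under standard ordinary-kriging conventions, and the sign convention of the minimized variance follows the usual textbook form, which may differ from the one intended in the garbled display above.

```python
import numpy as np

def ordinary_kriging(rho, rho_0):
    """rho: (n, n) correlation matrix among the observed points;
    rho_0: length-n correlations between the observed points and the target.
    Returns the kriging weights and the Lagrange parameter mu."""
    n = len(rho_0)
    a = np.ones((n + 1, n + 1))
    a[:n, :n] = rho
    a[n, n] = 0.0                      # border enforcing the unbiasedness constraint
    b = np.append(rho_0, 1.0)
    sol = np.linalg.solve(a, b)
    return sol[:n], sol[n]             # kriging weights, Lagrange multiplier

def kriging_variance(sigma2, weights, rho_0, mu):
    # standard textbook sign convention; the excerpt above keeps the sign ambiguous
    return sigma2 * (1.0 - (np.dot(weights, rho_0) + mu))
```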
we present statistics ( s - statistics ) based only on the random variables themselves ( not their realized values ) , with the mean squared error of the mean estimation taken as the concept of error .
social media has evolved from friendship based networks to become a major source for the consumption of news ( nist , 2008 ) . on social media, news is decentralised as it provides everyone the means to efficiently report and spread information .in contrast to traditional news wire , information on social media is spread without intensive investigation , fact and background checking .the combination of ease and fast pace of sharing information provides a fertile breeding ground for rumours , false- and disinformation .social media users tend to share controversial information in - order to verify it , while asking about for the opinions of their followers ( zhao et .al , 2015 ) .this further amplifies the pace of a rumour s spread and reach .rumours and deliberate disinformation have already caused panic and influenced public opinion .+ the cases in germany and austria in 2016 , show how misleading and false information about crimes committed by refugees negatively influenced the opinion of citizens .+ detecting these rumours allows debunking them to prevent them from further spreading and causing harm .the further a rumour has spread , the more likely it is to be debunked by users or traditional media ( liu et .al , 2015 ) . however , by then rumours might have already caused harm .this highlights the importance and necessity of recognizing rumours as early as possible - preferably instantaneously .+ rumour detection on social media is challenging due to the short texts , creative lexical variations and high volume of the streams .the task becomes even harder if we attempt to perform rumour detection on - the - fly , without looking into the future .we provide an effective and highly scalable approach to detect rumours instantly after they were posted with zero delay .we introduce a new features category called novelty based features .novelty based features compensate the absence of repeated information by consulting additional data sources - news wire articles .we hypothesize that information not confirmed by official news is an indication of rumours .additionally we introduce pseudo feedback for classification . in a nutshell , documents that are similar to previously detected rumours are considered to be more likely to also be a rumour .the proposed features can be computed in constant time and space allowing us to process high - volume streams in real - time ( muthukrishnan , 2005 ) .our experiments reveal that novelty based features and pseudo feedback significantly increases detection performance for early rumour detection .+ the contributions of this paper include : + * novelty based features * + we introduced a new category of features for instant rumour detection that harnesses trusted resources .unconfirmed ( novel ) information with respect to trusted resources is considered as an indication of rumours .+ * pseudo feedback for detection / classification * + pseudo feedback increases detection accuracy by harnessing repeated signals , without the need of retrospective operation . 
before rumour detection, scientists already studied the related problem of information credibility evaluation ( castillo et .; richardson et .al , 2003 ) .recently , automated rumour detection on social media evolved into a popular research field which also relies on assessing the credibility of messages and their sources .the most successful methods proposed focus on classification harnessing lexical , user - centric , propagation - based ( wu et .al , 2015 ) and cluster - based ( cai et .al , 2014 ; liu et . al , 2015 ; zhao et . al , 2015 ) features .+ many of these context based features originate from a study by castillo et .al ( 2011 ) , which pioneered in engineering features for credibility assessment on twitter ( liu et .al , 2015 ) .they observed a significant correlation between the trustworthiness of a tweet with context - based characteristics including hashtags , punctuation characters and sentiment polarity .when assessing the credibility of a tweet , they also assessed the source of its information by constructing features based on provided urls as well as user based features like the activeness of the user and social graph based features like the frequency of re - tweets .a comprehensive study by castillo et .al ( 2011 ) of information credibility assessment widely influenced recent research on rumour detection , whose main focuses lies upon improving detection quality .+ while studying the trustworthiness of tweets during crises , mendoza et .al ( 2010 ) found that the topology of a distrustful tweet s propagation pattern differs from those of news and normal tweets .these findings along with the fact that rumours tend to more likely be questioned by responses than news paved the way for future research examining propagation graphs and clustering methods ( cai et .al , 2014 ; zhao et .al , 2015 ) .the majority of current research focuses on improving the accuracy of classifiers through new features based on clustering ( cai et .al , 2014 ; zhao et .al , 2015 ) , sentiment analysis ( qazvinian et . al , 2011 ; wu et .al , 2015 ) as well as propagation graphs ( kwon , et .al , 2013 ; wang et .al , 2015 ) .+ recent research mainly focuses on further improving the quality of rumour detection while neglecting the increasing delay between the publication and detection of a rumour .the motivation for rumour detection lies in debunking them to prevent them from spreading and causing harm .unfortunately , state - of - the - art systems operate in a retrospective manner , meaning they detect rumours long after they have spread .the most accurate systems rely on features based on propagation graphs and clustering techniques .these features can only detect rumours after the rumours have spread and already caused harm .+ therefore , researchers like liu et .al ( 2015 ) , wu et .al ( 2015 ) , zhao et .al ( 2015 ) and zhou et . 
al ( 2015 ) focus on early rumour - detection while allowing a delay up to 24 hours .their focus on latency aware rumour detection makes their approaches conceptually related to ours .al ( 1015 ) found clustering tweets containing enquiry patterns as an indication of rumours .also clustering tweets by keywords and subsequently judging rumours using an ensemble model that combine user , propagation and content - based features proved to be effective ( zhou et .al , 2015 ) .although the computation of their features is efficient , the need for repeated mentions in the form of response by other users results in increased latency between publication and detection .the approach with the lowest latency banks on the wisdom of the crowd ( liu et .al , 2015 ) .in addition to traditional context and user based features they also rely on clustering micro - blogs by their topicality to identify conflicting claims , which indicate increased likelihood of rumours .although they claim to operate in real - time , they require a cluster of at least 5 messages to detect a rumour .+ + in contrast , we introduce new features to detect rumours as early as possible - preferably instantly , allowing them to be debunked before they spread and cause harm .rumour detection is a challenging task , as it requires determining the truth of information ( zhao et .al , 2015 ) . the cambridge dictionary , defines a rumour as information of doubtful or unconfirmed truth .we rely on classification using an svm , which is the state - of - the - art approach for novelty detection .numerous features have been proposed for rumour detection on social media , many of which originate from an original study on information credibility by castillo et .al ( 2011 ) .unfortunately , the currently most successful features rely on information based on graph propagation and clustering , which can only be computed retrospectively .this renders them close to useless when detecting rumours early on .we introduce two new classes of features , one based on novelty , the other on pseudo feedback . both feature categories improve detection accuracy early on , when information is limited .we frame the real - time rumour detection task as a classification problem that assesses a document s likelihood of becoming a future rumour at the time of its publication .consequently , prediction takes place in real - time with a single pass over the data .+ + more formally , we denote by the document that arrives from stream at time . upon arrival of document we compute its corresponding feature vector . given and the previously obtained weigh vector we compute the rumour score .the rumour prediction is based on a fixed thresholding strategy with respect to .we predict that message is likely to become a rumour if its rumour score exceeds the detection threshold .the optimal parameter setting for weight vector and detection threshold are learned on a test to maximise prediction accuracy . to increase instantaneous detection performance , we compensate for the absence of future information by consulting additional data sources . in particular, we make use of news wire articles , which are considered to be of high credibility .this is reasonable as according to petrovic et .al ( 2013 ) , in the majority of cases , news wires lead social media for reporting news .when a message arrives from a social media stream , we build features based on its novelty with respect to the confirmed information in the trusted sources . 
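a minimal sketch of this single-pass decision rule, assuming the svm-trained weight vector and the detection threshold have already been learned on the training set, is given below; all names are illustrative.

```python
import numpy as np

def rumour_score(features, weights):
    # linear combination of the feature vector with the SVM-trained weights
    return float(np.dot(weights, features))

def detect(stream, weights, threshold, featurize):
    """Single pass over the message stream: a message is flagged the moment
    its rumour score exceeds the fixed detection threshold."""
    for message in stream:
        x = featurize(message)
        if rumour_score(x, weights) > threshold:
            yield message
```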
in a nutshell ,the presence of information unconfirmed by the official media is construed as an indication of being a rumour .note that this closely resembles the definition of what a rumour is .high volume streams demand highly efficient feature computation .this applies in particular to novelty based features since they can be computationally expensive .we explore two approaches to novelty computation : one based on vector proximity , the other on kterm hashing .+ computing novelty based on traditional vector proximity alone does not yield adequate performance due to the length discrepancy between news wire articles and social media messages . to make vector proximity applicable, we slide a term - level based window , whose length resembles the average social media message length , through each of the news articles .this results in sub - documents whose length resembles those of social media messages .novelty is computed using term weighted tf - idf dot products between the social media message and all news sub - documents .the inverse of the minimum similarity to the nearest neighbour equates to the degree of novelty .+ the second approach to compute novelty relies on kterm hashing ( wurzer et .al , 2015 ) , a recent advance in novelty detection that improved the efficiency by an order of magnitude without sacrificing effectiveness .kterm hashing computes novelty non - comparatively . instead of measuring similarity between documents , a single representation of previously seen information is constructed . for each document , all possible kterms are formed and hashed onto a bloom filter .novelty is computed by the fraction of unseen kterms .kterm hashing has the interesting characteristic of forming a collective memory , able to span all trusted resources .we exhaustively form kterm for all news articles and store their corresponding hash positions in a bloom filter .this filter then captures the combined information of all trusted resources .a single representation allows computing novelty with a single step , instead of comparing each social media message individually with all trusted resources .+ + when kterm hashing was introduced by wurzer et .al ( 2015 ) for novelty detection on english tweets , they weighted all kterm uniformly .we found that treating all kterms as equally important , does not unlock the full potential of kterm hashing .therefore , we additionally extract the top 10 keywords ranked by and build a separate set of kterms solely based on them .this allows us to compute a dedicated weight for kterms based on these top 10 keywords .the distinction in weights between kterms based on all versus keyword yields superior rumour detection quality , as described in section [ section_featureanalysis ] .this leaves us with a total of 6 novelty based features for kterm hashing - kterms of length 1 to 3 for all words and keywords .+ + apart from novelty based features , we also apply a range of 51 context based features .the full list of features can be found in table [ tab : allfeatures ] .the focus lies on features that can be computed instantly based only on the text of a message to keep the latency of our approach to a minimum .most of these 51 features overlap with previous studies ( castillo et .al , 2011 ; liu et .al , 2015 ; qazvinian et .al , 2011 ; yang et .al , 2012 ; zhao et .al , 2015 ) .this includes features based on the presence or number of urls , hash - tags and user - names , pos tags , punctuation characters as well as 8 different categories of sentiment and 
emotions .+ + on the arrival of a new message from a stream , all its features are computed and linearly combined using weights obtained from an svm classifier , yielding the rumour score. we then judge rumours based on an optimal threshold strategy for the rumour score .in addition to novelty based features we introduce another category of features - dubbed pseudo - feedback ( pf ) feature - to boost detection performance . the feature is conceptually related to pseudo relevance feedback found in retrieval and ranking tasks in ir .the concept builds upon the idea that documents , which reveal similar characteristics as previously detected rumours are also likely to be a rumour . during detection , feedback about which of the previous documents describes a rumour is not available .therefore , we rely on pseudo feedback and consider all documents whose rumour score exceeds a threshold as true rumours. + the pf feature describes the maximum similarity between a new document and those documents previously considered as rumour .similarities are measured by vector proximity in term space .conceptually , pf passes on evidence to repeated signals by increasing the rumour score of future documents if they are similar to a recently detected rumour .note that this allows harnessing information from repeated signals without the need of operating retrospectively . + + * training pseudo feedback features * + the trainings routine differs from the standard procedure , because the computation of the pf feature requires two training rounds as we require a model of all other features to identify pseudo rumours .in a first training round a svm is used to compute weights for all features in the trainings set , except the pf features .this provides a model for all but the pf features .then the trainings set is processed to computing rumour scores based on the model obtained from our initial trainings round .this time , we additionally compute the pf feature value by measuring the minimum distance in term space between the current document vector and those previous documents , whose rumour score exceeds a previously defined threshold .since we operate on a stream , the number of documents previously considered as rumours grows without bound . to keep operation constant in time and space ,we only compare against the k most recent documents considered to be rumours .once we obtained the value for the pf feature , we compute its weight using the svm .the combination of the weight for the pf feature with the weights for all other features , obtained in the initial trainings round , resembles the final model . [ cols="^,<",options="header " , ] previous approaches to rumour detection rely on repeated signals to form propagation graphs or clustering methods . beside causing a detection delay these methodsare also blind to less popular rumours that do nt go viral .in contrast , novelty based feature require only a single message enabling them to detect even the smallest rumours .examples for such small rumours are shown in table [ tab_rumourlist ] . 
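the two feature families just described are simple to prototype. the two hedged sketches below are our own simplifications, not the authors' implementation: a plain python set stands in for the bloom filter of kterm hashing (trading constant memory for exactness), and cosine similarity is an assumed proximity measure for the pseudo-feedback feature.

```python
from itertools import combinations

def kterms(tokens, k_max=3):
    # unordered combinations of up to k_max distinct terms; for long news
    # articles one would restrict to sentences or windows to keep the count small
    terms = sorted(set(tokens))
    for k in range(1, k_max + 1):
        yield from combinations(terms, k)

def build_trusted_memory(news_docs, k_max=3):
    memory = set()                      # stand-in for the Bloom filter
    for doc in news_docs:
        memory.update(kterms(doc, k_max))
    return memory

def novelty_feature(message_tokens, memory, k_max=3):
    ks = list(kterms(message_tokens, k_max))
    unseen = sum(1 for t in ks if t not in memory)
    return unseen / max(len(ks), 1)     # fraction of unseen k-terms
```

and a bounded buffer for the pseudo-feedback feature, so that time and space stay constant on the stream:

```python
from collections import deque
import numpy as np

class PseudoFeedback:
    """Keeps the k most recent messages whose rumour score exceeded the
    feedback threshold; the feature is the maximum similarity to them."""
    def __init__(self, k=100):
        self.recent = deque(maxlen=k)

    def feature(self, vec):
        if not self.recent:
            return 0.0
        def cos(a, b):
            return float(np.dot(a, b) /
                         (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        return max(cos(vec, r) for r in self.recent)

    def update(self, vec, score, threshold):
        if score > threshold:           # pseudo label: treated as a rumour
            self.recent.append(vec)
```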
to demonstrate the high efficiency of computing novelty and pseudo feedback features, we implement a rumour detection system and measure its throughput when applied to 100k weibos .we implement our system in c and run it using a single core on a 2.2ghz intel core i7 - 4702hq .we measure the throughput on an idle machine and average the observed performance over 5 runs .figure [ fig : throughput ] presents performance when processing more and more weibos .the average throughput of our system is around 7,000 weibos per second , which clearly exceeds the average volume of the full twitter ( 5,700 tweets / sec . ) and sina weibo ( 1,200 weibos / sec . )since the number of news articles is relatively small , we find no difference in terms of efficiency between computing novelty features based on kterm hashing and vector similarity .figure [ fig : throughput ] also illustrates that our proposed features can be computed in constant time with respect to the number of messages processed .this is crucial to keep operation in a true streaming environment feasible .approaches , whose runtime depend on the number of documents processed become progressively slower , which is inapplicable when operating on data streams .our experiments show that the proposed features perform effectively and their efficiency allows them to detect rumours instantly after their publication .we introduced two new categories of features which significantly improve instantaneous rumour detection performance .novelty based features consider the increased presence of unconfirmed information within a message with respect to trusted sources as an indication of being a rumour .pseudo feedback features consider messages that are similar to previously detected rumours as more likely to also be a rumour .pseudo feedback and its variant , recursive pseudo feedback , allow harnessing repeated signals without the need of operating retrospectively .our evaluation showed that novelty and pseudo feedback based features perform significantly more effective than other real - time and early detection baselines , when detecting rumours instantly after their publication .this advantage vanishes when allowing an increased detection delay .we also showed that the proposed features can be computed efficiently enough to operate on the average twitter and sina weibo stream while keeping time and space requirements constant .+ + + + + plainnat.bst james allan , victor lavrenko and hubert jin .first story detection in tdt is hard . in proceedings of the ninth international conference on information and knowledge management .acm , 2000 james allan .topic detection and tracking : event - based information organization .kluwer academic publishers , norwell , 2002 g. cai , h. wu , r. lv , rumours detection in chinese via crowd responses , asonam 2014 , beijing , china , 2014 castillo c , mendoza m , poblete b. information credibility on twitter[c ] , the 20th international conference on world wide web , hyderabad , india , 2011 s. kwon , m. cha , k. jung , w. chen and y. wang , `` prominent features of rumor propagation in online social media '' , data mining ( icdm ) , ieee , 2013 x. liu , a. nourbakhsh , q. li , r. fang , and s. shah , real - time rumor debunking on twitter , in proceedings of the 24th acm international conference on information and knowledge management .acm , 2015 m. mendoza , poblete b , castillo c , twitter under crisis : can we trust what we rt ?, the 1st workshop on social media analytics , soma , 2010 s. 
muthukrishnan .data streams : algorithms and applications .now publishers inc , 2005 s. petrovic , m. osborne , r. mccreadie , c. macdonald , i. ounis , and l. shrimpton .can twitter replace newswire for breaking news ? in proc .of icwsm , 2013 v. qazvinian , e. rosengren , d. r. radev , q. mei , rumour has it : identifying misinformation in microblogs , emnlp , july 27 - 31 , 2011 , edinburgh , uk , 2011 richardson m , agrawal r , domingos p , trust management for the semantic web ; lecture notes in computer science , 2003 : 351 - 368 .fensel d , sycara k , mylopoulos j , the semantic web - iswc 2003 , heidelberg : springer berlinheidelberg , 2003 s. sun , h. liu , j. he , x , du , detecting event rumours on sina weibo automatically , apweb 2013 tdt by nist - 1998 - 2004 . http://www.itl.nist.gov/iad/mig/tests/tdt/resources.html ( last update : 2008 ) k. wu , s. yang , k. zhu , false rumours detection on sina weibo by propagation structures , in the proceedings of icde , 2015 shihan wang , takao terano , detecting rumour patterns in streaming social media , guimi , ieee , 2015 dominik wurzer , victor lavrenko , miles osborne .tracking unbounded topic streams . in the proceedings of the 53rd annual meeting of the association for computational linguistics , acl , 2015 dominik wurzer , victor lavrenko , miles osborne .twitter - scale new event detection via k - term hashing . in the proceedings of the conference on empirical methods in natural language processing , emnlp , 2015 fan yang , xiaohui yu , yang liu , min yang ( 2012 ) automatic detection of rumor on sina weibo .august 12 , 2012 z. zhao , p. resnick , and q. mei enquiring minds : early detection of rumors in social media from enquiry posts . in the proceedings of www , 2015 x. zhou , j. cao , z. jin , x. fei , y. su , j. zhang , d. chu , and x cao .realtime news certification system on sina weibo .
rumour detection is hard because the most accurate systems operate retrospectively , only recognising rumours once they have collected repeated signals . by then the rumours might have already spread and caused harm . we introduce a new category of features based on novelty , tailored to detect rumours early on . to compensate for the absence of repeated signals , we make use of news wire as an additional data source . unconfirmed ( novel ) information with respect to the news articles is considered as an indication of rumours . additionally we introduce pseudo feedback , which assumes that documents that are similar to previous rumours , are more likely to also be a rumour . comparison with other real - time approaches shows that novelty based features in conjunction with pseudo feedback perform significantly better , when detecting rumours instantly after their publication .
for reliable realization of quantum feedback control , it is indispensable to take into consideration some real - world limitations , such as incomplete knowledge of the physical systems and poor performance of the control devices .various efforts on these issues have been undertaken in these few years , see e.g. , for the system parameter uncertainty . among such limitations , time delays in the feedback loop , which happen due to the finite computational speed of _ classical _ controller devices ,are extremely serious , since their effect may completely lose the benefit of feedback control . to avoid the time delays, one can think to use the markovian feedback control , in which the measurement results are directly fed back .however , while these experimental simplification has been extensively studied , theoretical ways to evaluate the effect of the time delays have not been proposed so far . in this paper, we investigate the effect of the time delays on the control performance , which is defined in terms of the cost function optimized by feedback control .this investigation provides theoretical guidelines for the feedback control experiment . as the controlled object, the linear quantum systems are considered . in order to prepare the tool for the analysis, we first consider the optimal lqg control problem subject to the constant time delay .the optimal controller is obtained via the existing results in the classical control theory .further , these results allow us to obtain the formula for the optimal value of the cost .the obtained formula enables us to examine the relation between the optimal control performance and the time delay both in an analytical and a numerical ways .then , the intrinsic stability of the systems is dominant for the performance degradation effect .if the system is stable , the degradation effect converges to some value in the large time delay limit .otherwise , the performance monotonically deteriorates as the delay length becomes larger .based on this fact , we perform the analysis stated above for several physical systems that possess different stability properties .in addition to the controller design , we examine the relationship between the measurement apparatus and the best achievable performance . based on this, we propose a detector parameter tuning policy for feedback control of the time - delayed systems .this paper is organized as follows .linear quantum control systems are introduced in the next section . in section iii , we state the control problem for dealing with the time delay issue , and provide its optimal solution . in section iv , we investigate the effect of the time delay in quantum feedback control based on two typical examples possessing different stability properties .section v concludes the paper .we use the following notation . for a matrix , , and defined by , and , respectively , where the matrix element may be an operator and denotes its adjoint .the symbols and denote the real and imaginary parts of , respectively , i.e. , and . 
all the rules above are applied to any rectangular matrix .consider a quantum system which interacts with a vacuum electromagnetic field through the system operator where {^{\sf t}} ] .when the system hamiltonian is denoted by , this interaction is described by a unitary operator obeying the following quantum stochastic differential equation called the _ hudson - parthasarathy equation _ : ,\ ] ] where is the identity operator .the field operators and are the creation and annihilation operator processes , which satisfy the following quantum it rule : further , suppose that the system is trapped in a harmonic potential , and that a linear potential is an input to the system .the system hamiltonian at time is given by where is the control input at time , the system parameters and are a symmetric matrix and a column vector , and is given by }.\ ] ] then , by defining {^{\sf t}}=[u_t q u_t^\dagger , u_t p u_t^\dagger]{^{\sf t}} ] and the quantum it formula , we obtain the following linear equation : where ] , {^{\sf t}}$ ] and satisfies by combining this with ( [ bap ] ) , we obtain the first claim .it should be emphasized that contributes to and only through hence , it is sufficient to show the second claim that ( [ coeff ] ) does not depend on .when defining ,\ ] ] simple calculation yields then , we obtain the following : this completes the proof. ( 70,46 ) ( 7.5,3 ) , , , respectively.,title="fig:",width=219,height=143 ] ( 3,2.5) ( 3,7.2) ( 3,12) ( 3,16.6) ( 3,21.3) ( 3,25.9) ( 3,30.2) ( 3,34.8) ( 3,39.6) ( 7.3,0) ( 18.4,0) ( 29.7,0) ( 41.2,0) ( 52.8,0) ( 63.4,0) ( 35.5,-3) ( -5,30) roughly speaking , the first statement says that increases linearly with the oscillation as the delay length becomes large .this is a natural result of the fact that the matrix has only pure imaginary eigenvalues . on the other hand ,the second statement gives us a nontrivial insight : the growth rate and the oscillation amplitude are independent of the detector parameter .hence , depending on the delay length , the importance of the choice of the measurement apparatus differs .if the delay length is small , the improvable performance level is sensitive to the value of . on the other hand ,if the system suffers from the large delay , the apparatus adjustment is not significant , since the improvable performance level is relatively small compared to the value of .thus , if the time delay can be made sufficiently small , experimenters have to adjust the detector parameter depending on the resulting delay length . for the illustration ,see fig.[fig : harmonic ] for , which illustrates the optimal performance curve .in this paper , we investigated performance degradation effects due to time delays in optimal control of linear quantum systems .the analysis was performed by the optimal control performance formula from the classical control theory .the obtained remarks are strongly related to the intrinsic stability of the physical systems . in particular, we performed intimate evaluations for two typical systems with different types of stability .these results are expected to give useful guidelines for the future experiments .consider problem [ problem ] . 
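the displayed equations in this section were damaged during text extraction. purely for orientation, the generic form of a linear quantum system with an lqg-type cost is sketched below in our own notation; the coefficient matrices and weights are placeholders and need not coincide with the paper's.

```latex
% hedged sketch: linear dynamics driven by the field noise, a linear
% measurement record, and a quadratic (LQG) cost; A, B, b, C, M, N, S are
% placeholders, not values taken from this paper.
dx_t = A x_t\,dt + B u_t\,dt + b\,dW_t, \qquad
dy_t = C x_t\,dt + dV_t, \qquad x_t = [\,q_t,\; p_t\,]^{\mathsf T},
\\[4pt]
J = \mathbb{E}\!\left[\int_0^T \big(x_t^{\mathsf T} M x_t
      + u_t^{\mathsf T} N u_t\big)\,dt + x_T^{\mathsf T} S\, x_T\right].
```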
with the same notation as that in theorem [ maintheorem ] ,the optimal feedback controller is given by and the finite - time integration system it should be noted that the implementation of this controller involves infinite - dimensional elements .thus , computers with the finite memory can not implement it in a precise sense .fortunately , however , it is known that the approximation method proposed in permits the control with high accuracy .m. r. james , phys .a * 69 * , 032108 ( 2004 ) .m. r. james , j. opt .b : quantum semiclass* 7 * , 198 ( 2005 ) .m. r. james , h. i. nurdin , and i. r. petersen , arxiv : quant - ph/0703150v2 ( 2007 ) .n. yamamoto , phys .a * 74 * , 032107 ( 2006 ) .r. van handel , j. k. stockton , and h. mabuchi , ieee trans .automatic control * 50 * , 768 ( 2005 ) .d. a. steck , k. jacobs , h. mabuchi , s. habib , and t. bhattacharya , phys .a * 74 * , 012322 ( 2006 ) .j. stockton , m. armen , and h. mabuchi , j. opt .b * 19 * , 3019 ( 2002 ) .h. m. wiseman and g. j. milburn , phys .lett . * 70 * , 548 ( 1993 ) .h. m. wiseman , s. mancini , and j. wang , phys .. a * 66 * , 013807 ( 2002 ) .l. mirkin , ieee trans .automatic control * 48 * , 543 ( 2003 ) .r. l. hudson and k. r. parthasarathy , commun .* 93 * , 301 ( 1984 ) .l. bouten , r. van handel , and m. r. james , siam j. control and optimization * 46 * , 2199 ( 2007 ) .k. zhou , j. c. doyle , and k. glover , _ robust optimal control _( prentice - hall , 1995 ) .h. m. wiseman and a. c. doherty , phys .lett . * 94 * , 070405 ( 2005 ) .l. mirkin , system and control letters * 51 * , 331 ( 2004 ) .
we investigate feedback control of linear quantum systems subject to feedback - loop time delays . in particular , we examine the relation between the potentially achievable control performance and the time delays , and provide theoretical guidelines for the future experimental setup in two physical systems , which are typical in this research field . the evaluation criterion for the analysis is given by the optimal control performance formula , the derivation of which is from the classical control theoretic results about the input - output delay systems .
the hungarian - made automated telescope network ( hatnet ; bakos et al .2004 ) survey , has been one of the main contributors to the discovery of transiting exoplanets ( teps ) , being responsible for approximately a quarter of the confirmed teps discovered to date ( fig . 1 ) .it is a wide - field transit survey , similar to other projects such as super - wasp ( pollaco et al .2006 ) , xo ( mccullough et al .2005 ) , and tres ( alonso et al .the teps discovered by these surveys orbit relatively _ bright _ stars ( ) which allows for precise parameter determination ( e.g. mass , radius and eccentricity ) and enables follow - up studies to characterize the planets in detail ( e.g. studies of planetary atmospheres , or measurements of the sky - projected angle between the orbital axis of the planet and the spin axis of its host star ) .since 2006 , hatnet has announced twenty - six teps .below we highlight some of the exceptional properties of these planets ( section 2 ) , we then describe the procedures which we followed to discover them ( section 3 ) , and we conclude by summarizing what hatnet provides to the tep community with each discovery ( section 4 ) .hatnet - detected teps span a wide range of physical properties , including : two neptune - mass planets ( hat - p-11b , bakos et al . 2010a ; and -26b , hartman et al .2010b ) ; planets with masses greater than ( -2b , bakos et al . 2007 ; and -20b , bakos et al .2010b ) ; compact planets ( -2b , and -20b ) ; inflated planets ( -7b , pl et al . 2008; -8b , latham et al .2009 ; -12b , hartman et al . 2009 ; -18b , and -19b , hartman et al .2010a ) ; a planet with a period of just over one day ( -23b , bakos et al .2010b ) ; planets with periods greater than 10 days ( -15b , kovcs et al . 2010 ; and -17b , howard et al .2010 ) ; multi - planet systems ( -13b , c , bakos et al . 2009 ; and -17b , c ) ; and a number of eccentric planets ( -2b ; -11b ; -14b , torres et al . 2010 ; -15b ; -17b ; and -21b , bakos et al .we have also provided evidence for outer planets for 4 systems : hat - p-11c , -13c , -17c ( the latter two with almost closed orbits ) , and hat - p-19c .some of these discoveries were the first of their kind , and thus were important landmarks in exoplanet science .this includes : the first transiting heavy - mass planet ( -2b ) ; the first retrograde planet ( -7b ; narita et al .2009 , winn et al .2009 ) ; two of the first four transiting neptunes ; the first inflated saturn ( -12b ) ; the first and second multi - planet systems with transiting inner planets ; and two of the first six planets with periods longer than 10 days .the 26 hatnet teps were identified from a shortlist of 1300 hand - selected transit _ candidates _ culled from millions of light curves , which were , in turn , the result of diverse activities ranging from remote hardware operations to data analysis . herewe briefly describe this process .hatnet utilizes 6 identical instruments , each with an 11 cm aperture f/1.8 lens and a front - illuminated ccd with 9pixels ( yielding a wide , field ) , attached to a horseshoe mount , protected by a clam - shell dome , and with all devices controlled by a single pc .each instrument , called a hat ( bakos et al .2002 ) , can obtain per - image photometric precision reaching 4mmag at 3.5-min cadence on the bright end at , and 10mmag at . 
by collecting a light curve with or more points in transit , a transit with a depth of only a few mmag may be detected . we note that the original hatnet hardware employed front illuminated detectors with cousins -band filters . this was replaced by front - illuminated ccds and cousins filters in 2007 september , and the filter was changed to sloan in 2008 july . four hat instruments are located at the smithsonian astrophysical observatory s ( sao ) fred lawrence whipple observatory ( flwo ) , and an additional two instruments are on the roof of the hangar servicing the antennae of sao s submillimeter array , at mauna kea observatory ( mko ) in hawaii . the network with its current longitude coverage has significant advantages in detecting teps with periods longer than a few days . the instruments are truly autonomous in the sense that they are aware of the observing schedule and the weather conditions , they prepare all the devices ( ccds , dome , telescope ) for the observations , acquire ample calibration frames ( biases , darks , skyflats ) , and then proceed to the science program of the night . for the purpose of monitoring bright stars for transits , the sky has been split up into 838 non - overlapping fields . fields are chosen for observation based on several factors such as optimal visibility at the given time of the year , proximity of the field to solar system objects , and various other factors . to date hatnet has observed fields ( 29% of the northern sky ) . typically a field is monitored for 3 months ; a given instrument will begin observations of the field after evening twilight and observe it continuously at a cadence of 3.5 minutes until the field sets . the instrument will then target a second field and continue observing it until morning twilight . all time between dusk and dawn is spent exposing on the selected fields . a single field is typically assigned to a flwo instrument as well as a mko instrument to increase the duty cycle of the observations . based on operations since 2003 , we find that the effective duty cycle of hatnet is . the images are calibrated using standard techniques that take into account the problems raised by the wide fov , such as strong vignetting , distortions , sky - background changes , etc . the entire data set flows to the cfa via fast internet . the astrometric solution is determined following pl & bakos ( 2006 ) , based on the two micron all sky survey ( 2mass ; skrutskie et al . 2006 ) catalog . we then perform aperture photometry at the fixed positions of the 2mass stars . an initial ensemble magnitude calibration is performed , and the remaining systematic variations are removed from the light curves by decorrelating against external parameters ( i.e. parameters describing the point spread function , the position of the star on the image , and others ) and by making use of the trend filtering algorithm ( tfa ; kovcs et al . 2005 ) . the resulting light curves typically reach a per - point photometric precision ( at 3.5min cadence ) of for the brightest non - saturated stars . we search the trend filtered light curves for periodic transit events using the box - fitting least squares algorithm ( bls ; kovcs et al . ) . we then subject potential transit candidates to a number of automatic filters to select reliable detections which are consistent with a transiting planet - size object , and are not obviously eclipsing binary star systems or other types of variables .
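As a toy illustration of the transit-search step, the sketch below implements a crude phase-folding box search on synthetic data. It is not the BLS algorithm of Kovács et al. used by the pipeline, and all numbers (cadence, noise level, transit depth and period, trial-period grid) are assumptions chosen only so the example runs end to end.

```python
import numpy as np

def box_search(time, flux, periods, n_bins=200):
    """Crude box search: phase-fold at each trial period, bin the folded light
    curve, and score the deepest bin against the scatter of the binned curve."""
    best_period, best_score = None, -np.inf
    for period in periods:
        phase = (time % period) / period
        bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
        counts = np.bincount(bins, minlength=n_bins)
        sums = np.bincount(bins, weights=flux, minlength=n_bins)
        binned = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
        depth = np.nanmedian(binned) - np.nanmin(binned)
        score = depth / np.nanstd(binned)
        if score > best_score:
            best_period, best_score = period, score
    return best_period, best_score

# Synthetic light curve: 90 days at 3.5-minute cadence, 10 mmag white noise,
# and a 12 mmag, 3-hour transit every 3.1 days (all values are made up).
rng = np.random.default_rng(1)
time = np.arange(0.0, 90.0, 3.5 / (24 * 60))
flux = 1.0 + rng.normal(scale=0.010, size=time.size)
flux[(time % 3.1) < 3.0 / 24.0] -= 0.012
period, score = box_search(time, flux, np.linspace(2.5, 3.5, 400))
print(f"best trial period ~ {period:.3f} d  (score {score:.1f})")
```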
to go from the lengthy list of potential transit candidates to a much smaller number of confirmed and well - characterized teps ,we first manually select a list of candidates to consider for follow - up observations , we gather and analyze spectroscopic and photometric observations to reject false positives and confirm bona fide teps , and we finally analyze and publish the confirmed tep systems . the automated transit candidate selection procedure provides a manageable list of potential candidates which must then be inspected by eye to select the most promising targets for follow - up .typically a few hundred to one thousand potential candidates per field are identified by the automated procedures , which are then narrowed down to a few dozen candidates deemed worthy of follow - up . at the end of this procedure ,relative priorities are assigned to these candidates and appropriate facilities for follow - up are identified .candidates selected for follow - up then undergo a procedure consisting of three steps : reconnaissance spectroscopy , photometric follow - up observations , and high - precision spectroscopy .we have found that a very efficient method to reject the majority of false positive transit detections is to first obtain one or more high - resolution , low signal - to - noise ratio ( s / n per resolution element ) spectra using a m telescope ( e.g. latham et al .double - lined eclipsing binaries , giant stars ( where the detected transit is most likely a blend between an eclipsing binary and the much brighter giant ) , and rapidly rotating stars or excessively hot stars ( where confirming a planetary orbit would be very difficult ) may be immediately rejected based on one spectrum .stars that are confirmed to be slowly rotating dwarfs are observed a second time at the opposite quadrature phase to look for a significant velocity variation .stars where the rv amplitude implies a stellar companion ( typically ) are rejected . in some cases changes in the shape of the spectral line profilemay also be detected , enabling us to reject the target as a stellar triple .we find that a significant fraction of the initial candidates selected for follow - up are rejected by this reconnaissance spectroscopy ( rs ) procedure , saving time on precious resources that we use for the final follow - up .to carry out the rs work we primarily use the flwo 1.5 m telescope , previously with the digital speedometer ( ds ) , and now with the tillinghast reflector echelle spectrograph ( tres ) , and to some extent the fiber - fed echelle spectrograph ( fies ) on the nordic optical telescope ( not ) at la palma .we have also made use of the echelle spectrograph on the du pont 2.5 m telescope at lco , the echelle spectrograph on the anu 2.3 m telescope at sso , and the coralie echelle spectrograph on the 1.2 m euler telescope at la silla observatory ( lso ) in chile .if a candidate passes the rs step , we then schedule photometric observations of the candidate over the course of a transit to confirm that the transit is real , and to confirm that the shape of the transit light curve is consistent with a tep .we also note that these follow - up light curves are essential for obtaining precise measurements of system parameters , such as the planet - star radius ratio or the transit impact parameter . in some cases the candidateis subjected to photometry follow - up without rs or with incomplete rs , when , e.g. 
, there is evidence that the star is a dwarf ( colors , proper motion , parallax ) , or when the transit events are rare ( long period ) .we primarily use the keplercam instrument on the flwo 1.2 m telescope , but have also made use of faulkes telescope north ( ftn ) of the las cumbres observatory global telescope ( lcogt ) on mauna haleakala , hawaii , and occasionally telescopes in hungary and israel .the final step in the confirmation follow - up procedure is to obtain high - resolution , high - s / n spectra with sufficient velocity precision to detect the orbital variation of the star due to the planet , confirm that the system is not a subtle blend configuration , and measure the effective temperature , surface gravity and metallicity of the star . by this stagewe have already excluded the majority of false positives , so that roughly half of the candidates that reach this step are confirmed as teps .false positives that reach this stage generally are rejected after only a few spectra are obtained , so that of the time is spent observing teps .we primarily use the high resolution echelle spectrometer ( hires ) with the iodine cell on the 10 m keck i telescope at mko .we have also used the fies / not facility , the high dispersion spectrograph ( hds ) with the iodine cell on the subaru 8.2 m telescope at mko , and the sophie instrument on the 1.93 m telescope at the observatoire de haute - provence ( ohp ) , in france . once a planet is confirmed , we conduct a joint analysis of the available high - precision rv observations and photometric observations to determine the system parameters , including in particular the masses and radii of the star and planet(s ) ( e.g. bakos et al. 2010a ) . in cases where the spectral line bisector spans are inconclusive, we must also carry out a detailed blend - model of the system concurrently with the tep modeling to definitively prove that the object is a tep .the hatnet project strives to provide the following to the community for a given tep discovery : 1 . hatnet discovery data .2 . high - precision , often multi - band , photometry follow - up .3 . high - precision radial velocity follow - up .access to all of the data via the online tables , including raw and detrended values .characterization of the host star : stellar atmospheric and fundamental parameters ( from isochrone fitting ). 6 . a blend analysis . 7 .accurate parameters of the planetary system .8 . a publication available on arxiv with all the above when the planet is announced
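To make the spectroscopic vetting and mass-determination steps more concrete, the snippet below evaluates the standard Keplerian radial-velocity semi-amplitude for a circular orbit. This is a generic textbook relation rather than HATNet analysis code, and the example masses and period are assumptions; it illustrates why a stellar-mass companion produces a signal of tens of km/s that one or two reconnaissance spectra can already reject, whereas a hot Jupiter requires the high-precision spectroscopy described above.

```python
import numpy as np

G = 6.674e-11           # m^3 kg^-1 s^-2
M_SUN = 1.989e30        # kg
M_JUP = 1.898e27        # kg
DAY = 86400.0           # s

def rv_semi_amplitude(m_companion, period_days, m_star,
                      inclination_deg=90.0, ecc=0.0):
    """Radial-velocity semi-amplitude (m/s) induced on the host star by a
    companion of mass m_companion (kg) on an orbit of period period_days."""
    p = period_days * DAY
    sin_i = np.sin(np.radians(inclination_deg))
    return ((2.0 * np.pi * G / p) ** (1.0 / 3.0) * m_companion * sin_i
            / (m_star + m_companion) ** (2.0 / 3.0) / np.sqrt(1.0 - ecc ** 2))

# A Jupiter-mass planet on a 3-day orbit around a solar-mass star ...
print(f"1 M_Jup   : K ~ {rv_semi_amplitude(M_JUP, 3.0, M_SUN):6.1f} m/s")
# ... versus a 0.2 M_Sun stellar companion on the same orbit, the kind of
# false positive that the reconnaissance spectroscopy step filters out.
print(f"0.2 M_Sun : K ~ {rv_semi_amplitude(0.2 * M_SUN, 3.0, M_SUN) / 1e3:6.1f} km/s")
```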
we summarize the contribution of the hatnet project to extrasolar planet science , highlighting published planets ( hat - p-1b through hat - p-26b ) . we also briefly discuss the operations , data analysis , candidate selection and confirmation procedures , and we summarize what hatnet provides to the exoplanet community with each discovery .
instrumentation is a technique whereby existing code is modified in order to observe or modify its behaviour . it has a lot of different applications , such as profiling , coverage analysis and cache simulations . one of its most interesting features , however , is the ability to perform automatic debugging , or at least assist in debugging complex programs . after all , instrumentation code can intervene in the execution at any point and examine the current state , record it , compare it to previously recorded information and even modify it . debugging challenges that are extremely suitable for analysis through instrumentation include data race detection and memory management checking . these are typically problems that are very hard to solve manually . however , since they can be described perfectly using a set of rules ( e.g. the memory must be allocated before it is accessed , or no two threads must write to the same memory location without synchronising ) , they are perfect candidates for automatic verification . instrumentation provides the necessary means to insert this verification code with little effort on the side of the developer . the instrumentation can occur at different stages of the compilation or execution process . when performed prior to the execution , the instrumentation results in changes in the object code on disk , which makes them a property of a program or library . this is called static instrumentation . if the addition of instrumentation code is postponed until the program is loaded in memory , it becomes a property of an execution . in this case , we call it dynamic instrumentation . examples of stages where static instrumentation can be performed are directly in the source code , in the assembler output of the compiler , in the compiled objects or programs ( e.g. eel , atom , alto ) . the big advantage of static instrumentation is that it must be done only once , after which one can perform several executions without having to reinstrument the code every time . this means that the cost of instrumenting the code can be relatively high without making such a tool practically unusable . the largest disadvantage of static instrumentation is that it requires a complex analysis of the target application to detect all possible execution paths , which is not always possible . additionally , the user of a static instrumentation tool must know which libraries are loaded at run time by programs he wants to observe , so that he can provide instrumented versions of those . finally , every time a new type of instrumentation is desired , the application and its libraries must be reinstrumented . most of the negative points of static instrumentation are solved in its dynamic counterpart . in this case , the instrumentation is not performed in advance , but gradually at run time as more code is executed . since the instrumentation can continue while the program is running , no prior analysis of all possible execution paths is required . it obviously does mean that the instrumentation must be redone every time the program is executed . this is somewhat offset by having to instrument only the part of the application and its libraries that is covered by a particular execution though . one can even apply dynamic optimization techniques to further reduce this overhead .
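As a language-level analogy to the instrumentation idea (DIOTA itself rewrites machine code at run time, which the following does not attempt), here is a minimal dynamic instrumentation example in Python: the interpreter's tracing hook observes every executed line of an otherwise unmodified program and collects a line-count profile. It is offered purely as an illustration of inserting observation code without touching the program on disk.

```python
import sys
from collections import Counter

line_counts = Counter()

def tracer(frame, event, arg):
    # Invoked by the interpreter for call and line events; this plays the role
    # of the instrumentation code observing the otherwise unmodified program.
    if event == "line":
        line_counts[(frame.f_code.co_name, frame.f_lineno)] += 1
    return tracer

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

sys.settrace(tracer)
fib(10)
sys.settrace(None)

for (name, lineno), count in sorted(line_counts.items()):
    print(f"{name}:{lineno} executed {count} times")
```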
when using dynamic instrumentation , the code on disk is never modified .this means that a single uninstrumented copy of an application and its libraries suffices when using this technique , no matter how many different types of instrumentation one wants to perform .another consequence is that the code even does not have to exist on disk .indeed , since the original code is read from memory and can be instrumented just before it is executed , even dynamically loaded and generated code pose no problems .however , when the program starts modifying this code , the detection and handling of these modifications is not possible using current instrumentation techniques . yet, being able to instrument self - modifying code becomes increasingly interesting as run time systems that exhibit such behaviour gain more and more popularity .examples include java virtual machines , the .net environment and emulators with embedded just - in - time compilers in general .these environments often employ dynamic optimizing compilers which continuously change the code in memory , mainly for performance reasons .instrumenting the programs running in such an environment is often very easy .after all , the dynamic compiler or interpreter that processes said programs can do the necessary instrumentation most of the time . on the other hand ,observing the interaction of the environments themselves with the applications on top and with the underlying operating system is much more difficult .nevertheless , this ability is of paramount importance when analysing the total workload of a system and debugging and enhancing these virtual machines . even when starting from a system that can already instrument code on the fly , supporting self - modifying code is a quite complex undertaking .first of all , the original program code must not be changed by the instrumentor , since otherwise the program s own modifications may conflict with these changes later on .secondly , the instrumentor must be able to detect changes performed by the program before the modified code is executed , so that it can reinstrument this code in a timely manner .finally , the reinstrumentation itself must take into account that an instruction may be changed using multiple write operations , so it could be invalid at certain points in time . in this paperwe propose a novel technique that can be used to dynamically instrument self - modifying code with an acceptable overhead .we do this by using the hardware page protection facilities of the processor to mark pages that contain code which has been instrumented as read - only . when the program later on attempts to modify instrumented code , we catch the resulting protection faults which enables us to detect those changes and act accordingly . the described method has been experimentally evaluated using the _ diota _ ( dynamic instrumentation , optimization and transformation of applications ) framework on the linux / x86 platform by instrumenting a number of javagrande benchmarks running in the sun 1.4.0 java virtual machine .the paper now proceeds with an overview of dynamic instrumentation in general and _ diota _ in particular .next , we show how the detection of modified code is performed and how to reinstrument this code .we then present some experimental results of our implementation of the described techniques and wrap up with the conclusions and our future plans .dynamic instrumentation can be be done in two ways .one way is modifying the existing code , e.g. 
by replacing instructions with jumps to routines which contain both instrumentation code and the replaced instruction .this technique is not very usable on systems with variable - length instructions however , as the jump may require more space than the single instruction one wants to replace .if the program later on transfers control to the second instruction that has been replaced , it will end up in the middle of this jump instruction .the technique also wreaks havoc in cases of data - in - code or code - in - data , as modifying the code will cause modifications to the data as well .the other approach is copying the original code into a separate memory block ( this is often called _ cloning _ ) and adding instrumentation code to this copy .this requires special handling of control - flow instructions with absolute target addresses , since these addresses must be relocated to the instrumented version of the code . on the positive side ,data accesses still occur correctly without any special handling , even in data - in - code situations .the reason is that when the code is executed in the clone , only the program counter ( pc ) has a different value in an instrumented execution compared to a normal one .this means that when a program uses non - pc - relative addressing modes for data access , these addresses still refer to the original , unmodified copy of the program or data .pc - relative data accesses can be handled at instrumentation time , as the instrumentor always knows the address of the instruction it is currently instrumenting .this way , it can replace pc - relative memory accesses with absolute memory accesses based on the value the pc would have at that time in a uninstrumented execution ._ diota _ uses the cloning technique together with a cache that keeps track of already translated instruction blocks .it is implemented as a shared library and thus resides in the same address space as the program it instruments . by making use of the ld_preload environment variable under linux, the dynamic linker ( ld.so ) can be forced to load this library , even though an application is not explicitly linked to it .the init routines of all shared libraries are executed before the program itself is started , providing _diota _ an opportunity to get in control .as shown in figure [ fig : diota_operation ] , the instrumentation of a program is performed gradually .first , the instructions at the start of the program are analysed and then copied , along with the desired instrumentation code , to the _ clone _ ( a block of memory reserved at startup time , also residing in the program s address space ) . during this process ,direct jumps and calls are followed to their destination .the instrumentation stops when an instruction is encountered of which the destination address can not be determined unequivocally , such as an indirect jump . at this point ,a _ trampoline _ is inserted in the clone .this is a small piece of code which will pass the actual target address to _ diota _ every time the corresponding original instruction would be executed .for example , in case of a jump with the target address stored in a register , the trampoline will pass the value of that specific register to _ diota _ every time it is executed .diota _ is entered via such a trampoline , it will check whether the code at the passed address has already been instrumented . 
if that is not the case , it is instrumented at that point . next , the instrumented version is executed . figure [ fig : translationtable ] shows how _ diota _ keeps track of which instructions it has already instrumented and where the instrumented version can be found . a marker consisting of illegal opcodes is placed after every block of instrumented code ( aligned to a 4-byte boundary ) , followed by the translation table . such a translation table starts with two 32 bit addresses : the start of the block in the original code and its counterpart in the clone . next , pairs of 8 bit offsets between two successive instructions in the respective blocks are stored , with an escape code to handle cases where the offset is larger than 255 bytes ( this can occur because we follow direct calls and jumps to their destination ) . in addition to those tables , an avl tree is constructed . the keys of its elements are the start and stop addresses of the blocks of original code that have been instrumented . the values are the start addresses of the translation tables of the corresponding instrumented versions . every instruction is instrumented at most once , so the keys never overlap . this means that finding the instrumented version of an instruction boils down to first searching for its address in the avl tree and , if found , walking the appropriate translation table . to speed up this process , a small hash table is used which keeps the results of the latest queries . a very useful property of this system is that it also works in reverse : given the address of an instrumented instruction , it is trivial to find the address of the corresponding original instruction . first , the illegal opcodes marker is sought starting from the queried address and next the table is walked just like before until the appropriate pair is found . this ability of doing two - way translations is indispensable for the self - modifying code support and proper exception handling . since the execution is followed as it progresses , code - in - data and code loaded or generated at run time can be handled without any problems . when a trampoline passes an address to _ diota _ of code it has not yet instrumented , it will simply instrument it at that time . it is irrelevant where this code is located , when it appeared in memory and whether or not it doubles as data . _ diota _ has several modes of operation , each of which can be used separately , but most can be combined as well . through the use of so - called backends , the different instrumentation modes can be activated and the instrumentation parameters can be modified . these backends are shared libraries that link against _ diota _ and which can ask to intercept calls to arbitrary dynamically linked routines based on name or address , to have a handler called whenever a memory access occurs , when a basic block completes or when a system call is performed ( both before and after the system call , with the ability to modify its parameters or return value ) . several backends can be used at the same time . other features of the _ diota _ framework include the ability to handle most extensions to the 80x86 isa ( such as mmx , 3dnow ! and sse ) and an extensible and modular design that allows easy implementation of additional backends and support for newly introduced instructions . this paper describes the support for instrumenting self - modifying code in _ diota _ .
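A much simplified, higher-level analogue of the translation tables just described can be sketched as follows. The sketch keeps the two essential properties, per-instruction offset pairs inside a block and lookups in both directions, but replaces DIOTA's packed 8-bit offset tables, AVL tree and hash cache with ordinary Python containers; the addresses in the usage example are made up.

```python
import bisect

class TranslationMap:
    """Maps instruction addresses between the original code and its
    instrumented copy in the clone, in both directions."""

    def __init__(self):
        self._starts = []    # sorted original start addresses of the blocks
        self._blocks = []    # (orig_start, clone_start, [(d_orig, d_clone), ...])

    def add_block(self, orig_start, clone_start, offsets):
        """offsets: one (offset_in_original, offset_in_clone) pair per
        instrumented instruction of the block."""
        i = bisect.bisect(self._starts, orig_start)
        self._starts.insert(i, orig_start)
        self._blocks.insert(i, (orig_start, clone_start, offsets))

    def to_clone(self, orig_addr):
        i = bisect.bisect(self._starts, orig_addr) - 1
        if i < 0:
            return None
        orig_start, clone_start, offsets = self._blocks[i]
        for d_orig, d_clone in offsets:
            if orig_start + d_orig == orig_addr:
                return clone_start + d_clone
        return None

    def to_original(self, clone_addr):
        for orig_start, clone_start, offsets in self._blocks:
            for d_orig, d_clone in offsets:
                if clone_start + d_clone == clone_addr:
                    return orig_start + d_orig
        return None

# A three-instruction block at 0x1000 whose instrumented copy starts at 0x8000
# and is larger because instrumentation code was inserted between instructions.
tmap = TranslationMap()
tmap.add_block(0x1000, 0x8000, [(0, 0), (2, 7), (5, 16)])
print(hex(tmap.to_clone(0x1005)))     # -> 0x8010 (third instruction)
print(hex(tmap.to_original(0x8007)))  # -> 0x1002 (second instruction)
```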
for other technical details about _ diota _ we refer to .an aspect that is of paramount importance to the way we handle self - modifying code , is the handling of exceptions ( also called signals under linux ) .the next section will describe in more detail how we handle the self - modifying code , but since it is based on marking the pages containing code that has been instrumented as read - only , it is clear that every attempt to modify such code will cause a protection fault ( or _ segmentation fault _ ) exception . these exceptions and those caused by other operations must be properly distinguished , in order to make sure that the program still receives signals which are part of the normal program execution while not noticing the other ones .this is especially important since the java virtual machine that we used to evaluate our implementation uses signals for inter - thread communication .when a program starts up , each signal gets a default handler from the operating system .if a program wants to do something different when it receives a certain signal , it can install a signal handler by performing a system call .this system call gets the signal number and the address of the new handler as arguments .since we want to instrument these user - installed handlers , we have to intercept these system calls .this can be achieved by registering a system call analyser routine with _diota_. this instructs _ diota _ to insert a call to this routine after every system call in the instrumented version of the program .if such a system call successfully installed a new signal handler , the analyser records this handler and then installs a _ diota _handler instead .next , when a signal is raised , _ diota _s handler is activated .one of the arguments passed to a signal handler contains the contents of all processor registers at the time the signal occurred , including those of the instruction pointer register .since the program must not be able to notice it is being instrumented by looking at at that value , it is translated from a clone address to an original program address using the translation tables described previously .finally , the handler is executed under control of _ diota _ like any other code fragment .once the execution arrives at the sig_return or sig_rt_return system call that ends this signal s execution , _ diota _ replaces the instruction pointer in the signal context again .if the code at that address is not yet instrumented , the instruction pointer value in the context is replaced with the address of a trampoline which will transfer control back to _ diota _ when returning from the signal s execution .otherwise , the clone address corresponding to the already instrumented version is used .dynamically generated and loaded code can already be handled by a number of existing instrumentors .the extra difficulty of handling self - modifying code is that the instrumentation engine must be able to detect modifications to the code , so that it can reinstrument the new code .even the reinstrumenting itself is not trivial , since a program may modify an instruction by performing two write operations , which means the intermediate result could be invalid .there are two possible approaches for dealing with code changes .one is to detect the changes as they are made , the other is to check whether code has been modified every time it is executed . 
given the fact that in general code is modified far less than it is executed , the first approach was chosen . the hardware page protection facilities of the processor are used to detect the changes made to a page . once a page contains code that has been instrumented , it will be write - protected . the consequence is that any attempt to modify such code will result in a segmentation fault . an exception handler installed by _ diota _ will intercept these signals and take the appropriate action . since segmentation faults must always be caught when using our technique to support self - modifying code , _ diota _ installs a dummy handler at startup time and whenever a program installs the default system handler for this signal ( which simply terminates the process if such a signal is raised ) , or when it tries to ignore it . apart from that , no changes to the exception handling support of _ diota _ have been made , as shown in figure [ fig : exceptions ] . whenever a protection fault occurs due to the program trying to modify some previously instrumented code , a naive implementation could unprotect the relevant page , perform the required changes to the instrumented code inside the signal handler , reprotect the page and continue the program at the next instruction . there are several problems with this approach however : * on a cisc architecture , most instructions can access memory , so decoding the instruction that caused the protection fault ( to perform the change that caused the segmentation fault in the handler ) can be quite complex . * it is possible that an instruction is modified by means of more than one memory write operation . trying to reinstrument after the first write operation may result in encountering an invalid instruction . * in the context of a jit - compiler , generally more than one write operation occurs to a particular page . an example is when a page was already partially filled with code which was then executed and thus instrumented , after which new code is generated and placed on that page as well . a better way is to make a copy of the accessed page , then mark it writable again and let the program resume its execution . this way , it can perform the changes it wanted to do itself . after a while , the instrumentor can compare the contents of the unprotected page and the buffered copy to find the changes . so the question then becomes : when is this page checked for changes , how long will it be kept unprotected and how many pages will be kept unprotected at the same time . all parameters are important for performance , since keeping pages unprotected and checking them for changes requires both processing and memory resources . the when - factor is also important for correctness , as the modifications must be incorporated in the clone code before it is executed again . on architectures with a weakly consistent memory model ( such as the sparc and powerpc ) , the program must make its code changes permanent by using an instruction that synchronizes the instruction caches of all processors with the current memory contents . these instructions can be intercepted by the instrumentation engine and trigger a comparison of the current contents of a page with the previously buffered contents .
on other architectures , heuristics have to be used depending on the target application that one wants to instrument to get acceptable performance . for example , when using the sun jvm 1.4.0 running on an 80x86 machine under linux , we compare the previously buffered contents of a page to the current contents whenever the thread that caused the protection fault does one of the following : * it performs a kill system call . this means the modifier thread is sending a signal to another thread , which may indicate that it has finished modifying the code and that it tells the other thread that it can continue . * it executes a ret or other instruction that requires a lookup to find the appropriate instrumentation code . this is due to the fact that sometimes the modifying and executing threads synchronise using a spinlock . the assumption here is that before the modifying thread clears the spinlock , it will return from the modification routine , thus triggering a flush . although this method is by no means a guarantee for correct behaviour in the general case , in our experience it always performs correctly in the context of instrumenting code generated by the sun jvm 1.4.0 . the unprotected page is protected again when it has been checked n successive times without any changes having been made to it , or when another page has to be unprotected due to a protection fault . note that this optimisation only really pays off in combination with only checking the page contents in the thread that caused the initial protection fault . the reason is that this ensures that the checking limit is not reached prematurely . otherwise , the page is protected again too soon and a lot of extra page faults occur , nullifying any potential gains . finally , it is possible to vary the number of pages that are being kept unprotected at the same time . possible strategies are keeping just one page unprotected for the whole program in order to minimize resources spent on buffering and comparing contents , keeping one page unprotected per thread , or keeping several pages unprotected per thread to reduce the amount of protection faults . which strategy performs best depends on the cost of a page fault and the time necessary to do a page compare . different code fragments in the clone are often interconnected by direct jumps . for example , when , while instrumenting , we arrive at an instruction which was already instrumented before , we generate a direct jump to this previously instrumented version instead of instrumenting that code again . this not only improves efficiency , but it also makes the instrumentation of modified code much easier , since there is only one location in the clone we have to adapt in case of a code modification . because of these direct jump interconnections , merely generating an instrumented version of the modified code at a different location in the clone is not enough . even if every lookup for the instrumented version of the code in that fragment returns one of the new addresses in the clone , the old code is still reachable via the direct jumps from other fragments . removing the direct jumps and replacing them with lookups results in a severe slowdown . another solution would be keeping track of which other fragments each fragment refers to and adapting the direct jumps in case of changes . this requires a lot of bookkeeping however , and changing one fragment may result in a cascade effect , requiring a lot of additional changes elsewhere in the clone .
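The change-detection and re-protection policy described above can be condensed into a small bookkeeping sketch. Only the decision logic is modeled: snapshotting a page on the first write fault, comparing it at the chosen synchronisation points, and protecting it again after n clean checks. The actual unprotecting and reprotecting of pages, which DIOTA performs through the hardware page-protection mechanism, is only indicated by comments, and the 16-byte pages of the usage example are an artificial simplification.

```python
class PageMonitor:
    """Bookkeeping for pages that were unprotected after a write fault:
    snapshot on the fault, compare at the chosen synchronisation points,
    and re-protect after `n_clean` successive checks without changes."""

    def __init__(self, n_clean=4, on_change=None):
        self.n_clean = n_clean
        self.on_change = on_change or (lambda page: None)
        self.watched = {}          # page id -> (snapshot bytes, clean-check count)

    def on_write_fault(self, page_id, current_bytes):
        # here the real implementation would mark the page writable again
        self.watched[page_id] = (bytes(current_bytes), 0)

    def on_check_event(self, page_id, current_bytes):
        if page_id not in self.watched:
            return
        snapshot, clean = self.watched[page_id]
        if bytes(current_bytes) != snapshot:
            self.on_change(page_id)                     # reinstrument the new code
            self.watched[page_id] = (bytes(current_bytes), 0)
        elif clean + 1 >= self.n_clean:
            del self.watched[page_id]                   # write-protect it again
        else:
            self.watched[page_id] = (snapshot, clean + 1)

mon = PageMonitor(n_clean=2, on_change=lambda p: print(f"page {p:#x} changed"))
page = bytearray(16)                    # stand-in for a code page
mon.on_write_fault(0x4000, page)        # first write fault: snapshot the page
page[3] = 0xCC                          # the program modifies its own code
mon.on_check_event(0x4000, page)        # change detected -> reinstrumentation
mon.on_check_event(0x4000, page)        # clean check 1
mon.on_check_event(0x4000, page)        # clean check 2 -> page re-protected
print("still watched:", 0x4000 in mon.watched)
```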
for the reasons given above concerning the direct - jump interconnections , we opted for the following three - part strategy . the optimal way to handle the modifications is to reinstrument the code in - place . this means that the previously instrumented versions of the instructions in the clone are simply replaced by the new ones . this only works if the new code has the same length as ( or is shorter than ) the old code however , which is not always the case . a second way to handle modifications can be applied when the instrumented version of the previous instruction at that location was larger than the size of an immediate jump . in this case , it is possible to overwrite the previous instrumented version with a jump to the new version . at the end of this new code , another jump can transfer control back to the rest of the original instrumentation code . finally , if there is not enough room for an immediate jump , the last resort is filling the room originally occupied by the instrumented code with breakpoints . the instrumented version of the new code will simply be placed somewhere else in the code . whenever the program then arrives at such a breakpoint , _ diota _ 's exception handler is entered . this exception handler has access to the address where the breakpoint exception occurred , so it can use the translation table at the end of the block to look up the corresponding original program address . next , it can look up where the latest instrumented version of the code at that address is located and transfer control there .

we evaluated the described techniques by implementing them in the _ diota _ framework . the performance and correctness were verified using a number of tests from the javagrande benchmark , running under the sun jvm 1.4.0 on a machine with two intel celeron processors clocked at 500mhz . the operating system was redhat linux 7.3 with version 2.4.19 of the linux kernel . several practical implementation issues were encountered . the stock kernel that comes with redhat linux 7.3 , which is based on version 2.4.9 of the linux kernel , contains a number of flaws in the exception handling that cause it to lock up or reboot at random times when a lot of page protection exceptions occur . another problem is that threads in general only have limited stack space and although _ diota _ does not require very much , the exception frames together with _ diota _ 's overhead were sometimes large enough to overflow the default stacks reserved by the instrumented programs . therefore , at the start of the main program and at the start of every thread , we now instruct the kernel to execute signal handlers on an alternate stack . _ diota _ 's instrumentation engine is not re - entrant and as such is protected by locks .
since a thread can send a signal to another thread at any time , another problem we experienced was that sometimes a thread got a signal while it held the instrumentation lock . if the triggered signal handler was not yet instrumented at that point , _ diota _ deadlocked when it tried to instrument this handler . disabling all signals before acquiring a lock and re - enabling them afterwards solved this problem . the problem with only parts of instructions being modified , which happens considerably more often than replacing whole instructions , was solved by adding a routine that finds the start of the instruction in which a certain address lies and starting the reinstrumentation from that address . most modifications we observed were changes to the target addresses of direct calls . the heuristics regarding only checking for changes in the thread that caused the initial unprotection of the page reduced the slowdown caused by the instrumentation by 43% relative to a strategy where pages are checked for changes every time a system call occurs and every time a lookup is performed , regardless of the involved threads . the limit on the number of checks done before a page is protected again ( with n set between 3 and 5 ) provided an additional speed increase of 22% . a large part of the overhead stems from the fact that the elf binary format ( which is used by all modern linux applications ) permits code and data to be on the same page . the result is that once a code fragment on such a page has been instrumented , a lot of page faults occur and unnecessary comparisons have to be performed whenever such data is modified afterwards . a possible solution is not marking pages belonging to the elf binary and the standard libraries loaded at startup time as read - only , but this could compromise the correctness of the described technique . however , it could be done if execution speed is of great concern and if one is certain that no such code will be modified during the execution . table [ tab : jvg ] shows the measured timings when running a number of tests from sections 2 and 3 of the sequential part of the javagrande benchmark v2.0 , all using the sizea input set . the first column shows the name of the test program . the second and third columns show the used cpu time ( as measured by the time command line program , expressed in seconds ) of an uninstrumented resp . instrumented execution , while the fourth column shows the resulting slowdown factor . the fifth column contains the number of protection faults divided by the uninstrumented execution time , so it is an indication of the degree to which the program writes to pages that contain already executed code .
the last column shows the number of lookups per second of uninstrumented execution time , where a lookup equals a trip to _ diota _ via a trampoline to get the address of the instrumented code at the queried target address .the results have been sorted on the slowdown factor .regression analysis shows us that the overhead due to the lookups is nine times higher than that caused by the protection faults ( and page compares , which are directly correlated with the number of protection faults , since every page is compared n times after is unprotected due to a fault ) .this means that the page protection technique has a quite low overhead and that most of the overhead can be attributed to the overhead of keeping the program under control .the cause for the high cost of the lookups comes from the fact that the lookup table must be locked before it can be consulted , since it is shared among all threads .as mentioned before , we have to disable all signals before acquiring a lock since otherwise a deadlock might occur .disabling and restoring signals is an extremely expensive operation under linux , as both operations require a system call .we have verified this by instrumenting a program that does not use use any signals nor self - modifying code using two different versions of _ diota _ : one which does disable signals before acquiring a lock and one which does not .the first version is four times slower than the second one .we have described a method which can be used to successfully instrument an important class of programs that use self - modifying code , specifically java programs run in an environment that uses a jit - compiler .the technique uses the hardware page protection mechanism present in the processor to detect modifications made to already instrumented code .additionally , a number of optimisations have already been implemented to reduce the overhead , both by limiting the number of protection faults that occurs and the number of comparisons that must to be performed . in the near future, a number of extra optimisations will be implemented , such as keeping more than one page unprotected at a time and the possibility to specify code regions that will not be modified , thus avoiding page protection faults caused by data and code being located on the same page .additionally , we are also adapting the _ diota _ framework in such a way that every thread gets its own clone and lookup table .this will greatly reduce the need for locking and disabling / restoring signals , which should also result in a significant speedup for the programs that perform a large number of lookups .derek bruening , evelyn duesterwald , and saman amarasinghe .design and implementation of a dynamic optimization framework for windows . in _ proceedings of the 4th acm workshop on feedback - directed and dynamic optimization ( fddo-4 ) _ , austin , texas , december 2001 .s. eggers , d. keppel , e. koldinger , and h. levy .techniques for efficient inline tracing on a shared - memory multiprocessor . in _sigmetrics conference on measurement and modeling of computer systems _ , volume 8 , may 1990 .barton p. miller , mark d. callaghan , jonathan m. cargille , jeffrey k. hollingsworth , r. bruce irvin , karen l. karavanic , krishna kunchithapadam , and tia newhall . .28(11):3744 , november 1995 .special issue on performance evaluation tools for parallel and distributed computer systems .jonas maebe , michiel ronsse , and koen de bosschere . 
: dynamic instrumentation , optimization and transformation of applications . in _ compendium of workshops and tutorials , held in conjunction with pact02: international conference on parallel architectures and compilation techniques _ , charlottesville , virginia , usa , september 2002 .michiel ronsse and koen de bosschere .jiti : a robust just in time instrumentation technique .volume 29 of _ series computer architecture news _ , chapter proceedings of workshop on binary translation - 2000 , pages 4354 .acm press , march 2001 .k. scott , n. kumar , s. velusamy , b. childers , j. w. davidson , and m. l. soffa .retargetable and reconfigurable software dynamic translation . in_ proceedings of the international symposium on code generation and optimization 2003 _ , san francisco , california , march 2003 .
adding small code snippets at key points to existing code fragments is called instrumentation . it is an established technique to debug certain otherwise hard - to - solve faults , such as memory management issues and data races . dynamic instrumentation can already be used to analyse code which is loaded or even generated at run time . with the advent of environments such as the java virtual machine with optimizing just - in - time compilers , a new obstacle arises : self - modifying code . in order to instrument this kind of code correctly , one must be able to detect modifications and adapt the instrumentation code accordingly , preferably without incurring a high speed penalty . in this paper we propose an innovative technique that uses the hardware page protection mechanism of modern processors to detect such modifications . we also show how an instrumentor can adapt the instrumented version depending on the kind of modifications , and we present an experimental evaluation of these techniques . elis , ghent university , sint - pietersnieuwstraat 41 , 9000 gent , belgium
in a bidirectional relay network , two users exchange information via a relay node .several protocols have been proposed for such a network under the practical half - duplex constraint , i.e. , a node can not transmit and receive at the same time and in the same frequency band .the simplest protocol is the traditional two - way relaying protocol in which the transmission is accomplished in four successive point - to - point phases : user 1-to - relay , relay - to - user 2 , user 2-to - relay , and relay - to - user 1 .in contrast , the time division broadcast ( tdbc ) protocol exploits the broadcast capability of the wireless medium and combines the relay - to - user 1 and relay - to - user 2 phases into one phase , the broadcast phase .thereby , the relay broadcasts a superimposed codeword , carrying information for both user 1 and user 2 , such that each user is able to recover its intended information by self - interference cancellation .another existing protocol is the multiple access broadcast ( mabc ) protocol in which the user 1-to - relay and user 2-to - relay phases are also combined into one phase , the multiple - access phase . in the multiple - access phase , both user 1 and user 2 simultaneously transmit to the relay which is able to decode both messages .generally , for the bidirectional relay network without a direct link between user 1 and user 2 , six transmission modes are possible : four point - to - point modes ( user 1-to - relay , user 2-to - relay , relay - to - user 1 , relay - to - user 2 ) , a multiple access mode ( both users to the relay ) , and a broadcast mode ( the relay to both users ) , where the capacity region of each transmission mode is known , . using this knowledge ,a significant research effort has been dedicated to obtaining the achievable rate region of the bidirectional relay network - .specifically , the achievable rates of most existing protocols for two - hop relay transmission are limited by the instantaneous capacity of the weakest link associated with the relay .the reason for this is the fixed schedule of using the transmission modes which is adopted in all existing protocols , and does not exploit the instantaneous channel state information ( csi ) of the involved links . for one - way relaying ,an adaptive link selection protocol was proposed in where based on the instantaneous csi , in each time slot , either the source - relay or relay - destination links are selected for transmission . to this end, the relay has to have a buffer for data storage .this strategy was shown to achieve the capacity of the one - way relay channel with fading .moreover , in fading awgn channels , power control is necessary for rate maximization .the highest degree of freedom that is offered by power control is obtained for a joint average power constraint for all nodes .any other power constraint with the same total power budget is more restrictive than the joint power constraint and results in a lower sum rate .therefore , motivated by the protocols in and , our goal is to utilize all available degrees of freedom of the three - node half - duplex bidirectional relay network with fading , via an adaptive mode selection and power allocation policy . 
in particular , given a joint power budget for all nodes , we find a policy which in each time slot selects the optimal transmission mode from the six possible modes and allocates the optimal powers to the nodes transmitting in the selected mode , such that the sum rate is maximized . adaptive mode selection for bidirectional relaying was also considered in and . however , the selection policy in does not use all possible modes , i.e. , it only selects from two point - to - point modes and the broadcast mode , and assumes that the transmit powers of all three nodes are fixed and identical . although the selection policy in considers all possible transmission modes for adaptive mode selection , the transmit powers of the nodes are assumed to be fixed , i.e. , power allocation is not possible . interestingly , mode selection and power allocation are mutually coupled and the modes selected with the protocol in for a given channel are different from the modes selected with the proposed protocol . power allocation can considerably improve the sum rate by optimally allocating the powers to the nodes based on the instantaneous csi , especially when the total power budget in the network is low . moreover , the proposed protocol achieves the maximum sum rate in the considered bidirectional network . hence , the sum rate achieved with the proposed protocol can be used as a reference for other low complexity suboptimal protocols . simulation results confirm that the proposed protocol outperforms existing protocols . finally , we note that the advantages of buffering come at the expense of an increased end - to - end delay . however , with some modifications to the optimal protocol , the average delay can be bounded , as shown in , which causes only a small loss in the achieved rate . the delay analysis of the proposed protocol is beyond the scope of the current work and is left for future research .

in this section , we first describe the channel model . then , we provide the achievable rates for the six possible transmission modes . we consider a simple network in which user 1 and user 2 exchange information with the help of a relay node as shown in fig . we assume that there is no direct link between user 1 and user 2 , and thus , user 1 and user 2 communicate with each other only through the relay node . we assume that all three nodes in the network are half - duplex . furthermore , we assume that time is divided into slots of equal length and that each node transmits codewords which span one time slot or a fraction of a time slot as will be explained later . we assume that the user - to - relay and relay - to - user channels are impaired by awgn with unit variance and block fading , i.e.
, the channel coefficients are constant during one time slot and change from one time slot to the next . moreover , in each time slot , the channel coefficients are assumed to be reciprocal such that the user 1-to - relay and the user 2-to - relay channels are identical to the relay - to - user 1 and relay - to - user 2 channels , respectively . let and denote the channel coefficients between user 1 and the relay and between user 2 and the relay in the -th time slot , respectively . furthermore , let and denote the squares of the channel coefficient amplitudes in the -th time slot . and are assumed to be ergodic and stationary random processes with means and , respectively , where denotes expectation . since the noise is awgn , in order to achieve the capacity of each mode , nodes have to transmit gaussian distributed codewords . therefore , the transmitted codewords of user 1 , user 2 , and the relay consist of symbols which are gaussian distributed random variables with variances , and , respectively , where is the transmit power of node in the -th time slot . for ease of notation , we define . in the following , we describe the transmission modes and their achievable rates . in the considered bidirectional relay network only six transmission modes are possible , cf . fig . [ figmodes ] . the six possible transmission modes are denoted by , and denotes the transmission rate from node to node in the -th time slot . let and denote two infinite - size buffers at the relay in which the received information from user 1 and user 2 is stored , respectively . moreover , denotes the amount of normalized information in bits / symbol available in buffer in the -th time slot . using this notation , the transmission modes and their respective rates are presented in the following : : user 1 transmits to the relay and user 2 is silent . in this mode , the maximum rate from user 1 to the relay in the -th time slot is given by , where . the relay decodes this information and stores it in buffer . therefore , the amount of information in buffer increases to . : user 2 transmits to the relay and user 1 is silent . in this mode , the maximum rate from user 2 to the relay in the -th time slot is given by , where . the relay decodes this information and stores it in buffer . therefore , the amount of information in buffer increases to . : both users 1 and 2 transmit to the relay simultaneously . for this mode , we assume that multiple access transmission is used , see . thereby , the maximum achievable sum rate in the -th time slot is given by , where . since user 1 and user 2 transmit independent messages , the sum rate , , can be decomposed into two rates , one from user 1 to the relay and the other one from user 2 to the relay . moreover , these two capacity rates can be achieved via time sharing and successive interference cancellation . thereby , in the first fraction of the -th time slot , the relay first decodes the codeword received from user 2 and considers the signal from user 1 as noise . then , the relay subtracts the signal received from user 2 from the received signal and decodes the codeword received from user 1 . a similar procedure is performed in the remaining fraction of the -th time slot , but now the relay first decodes the codeword received from user 1 and treats the signal of user 2 as noise , and then decodes the codeword received from user 2 .
therefore , for a given , we decompose as and the maximum rates from users 1 and 2 to the relay in the -th time slot are and , respectively . and are given by the relay decodes the information received from user 1 and user 2 and stores it in its buffers and , respectively .therefore , the amounts of information in buffers and increase to and , respectively . : the relay transmits the information received from user 2 to user 1 .specifically , the relay extracts the information from buffer , encodes it into a codeword , and transmits it to user 1 .therefore , the transmission rate from the relay to user 1 in the -th time slot is limited by both the capacity of the relay - to - user 1 channel and the amount of information stored in buffer .thus , the maximum transmission rate from the relay to user 1 is given by , where .therefore , the amount of information in buffer decreases to . : this mode is identical to with user 1 and 2 switching places . the maximum transmission rate from the relay to user 2is given by , where and the amount of information in buffer decreases to . : the relay broadcasts to both user 1 and user 2 the information received from user 2 and user 1 , respectively .specifically , the relay extracts the information intended for user 2 from buffer and the information intended for user 1 from buffer .then , based on the scheme in , it constructs a superimposed codeword which contains the information from both users and broadcasts it to both users .thus , in the -th time slot , the maximum rates from the relay to users 1 and 2 are given by and , respectively .therefore , the amounts of information in buffers and decrease to and , respectively .our aim is to develop an optimal mode selection and power allocation policy which in each time slot selects one of the six transmission modes , , and allocates the optimal powers to the transmitting nodes of the selected mode such that the average sum rate of both users is maximized . to this end, we introduce six binary variables , , where indicates whether or not transmission mode is selected in the -th time slot .in particular , if mode is selected and if it is not selected in the -th time slot .furthermore , since in each time slot only one of the six transmission modes can be selected , only one of the mode selection variables is equal to one and the others are zero , i.e. , holds . in the proposed framework, we assume that all nodes have full knowledge of the csi of both links .thus , based on the csi and the proposed protocol , cf .theorem [ adaptprot ] , each node is able to individually decide which transmission mode is selected and adapt its transmission strategy accordingly .in this section , we first investigate the achievable average sum rate of the network . then , we formulate a maximization problem whose solution is the sum rate maximizing protocol .we assume that user 1 and user 2 always have enough information to send in all time slots and that the number of time slots , , satisfies .therefore , using , the user 1-to - relay , user 2-to - relay , relay - to - user 1 , and relay - to - user 2 average transmission rates , denoted by , , , and , respectively , are obtained as lll[ratreg123 ] & & |r_1r = _ i = 1^n + & & |r_2r = _ i = 1^n + & & |r_r1 = _ i = 1^n \{c_r1(i),q_2(i-1 ) } + & & |r_r2 = _ i = 1^n \{c_r2(i),q_1(i-1)}. the average rate from user 1 to user 2 is the average rate that user 2 receives from the relay , i.e. 
, .similarly , the average rate from user 2 to user 1 is the average rate that user 1 receives from the relay , i.e. , . in the following theorem, we introduce a useful condition for the queues in the buffers of the relay leading to the optimal mode selection and power allocation policy .[ queue ] the maximum average sum rate , , for the considered bidirectional relay network is obtained when the queues in the buffers and at the relay are at the edge of non - absorbtion .more precisely , the following conditions must hold for the maximum sum rate lll[ratregapp456-buffer ] |r_1r=|r_r2 = _i = 1^n c_r2(i ) + where and are given by ( [ ratreg123]a ) and ( [ ratreg123]b ) , respectively. please refer to ( * ? ? ?* appendix a ) . using this theorem , in the following , we derive the optimal transmission mode selection and power allocation policy. the available degrees of freedom in the considered network in each time slot are the mode selection variables , the transmit powers of the nodes , and the time sharing variable for multiple access .herein , we formulate an optimization problem which gives the optimal values of , , and , for , , and , such that the average sum rate of the users is maximized .the optimization problem is as follows cll[adaptprob ] & |r_1r+|r_2r + & : |r_1r=|r_r2 + & : |r_2r=|r_r1 + & : |p_1+|p_2+|p_r p_t + & : _ k = 1 ^ 6 q_k ( i ) = 1 , i + & : q_k(i ) [ 1-q_k(i ) ] = 0 , i , k + & : p_j(i)0 , i , j + & : 0t(i)1 , i where is the total average power constraint of the nodes and , and denote the average powers consumed by user 1 , user 2 , and the relay , respectively , and are given by lll |p_1= _i = 1^n ( q_1(i)+q_3(i))p_1 ( i ) + in the optimization problem given in ( [ adaptprob ] ) , constraints and are the conditions for sum rate maximization introduced in theorem [ queue ]. constraints and are the average total transmit power constraint and the power non - negativity constraint , respectively .moreover , constraints and guarantee that only one of the transmission modes is selected in each time slot , and constraint specifies the acceptable interval for the time sharing variable . furthermore , we maximize since , according to theorem 1 ( and constraints and ) , and hold . in the following theorem , we introduce a protocol which achieves the maximum sum rate .[ adaptprot ] assuming , the optimal mode selection and power allocation policy which maximizes the sum rate of the considered three - node half - duplex bidirectional relay network with awgn and block fading is given by lll[seleccrit ] q_k^*(i)= 1 , & k^*= \{_k(i ) } + 0 , & where is referred to as _ selection metric _ and is given by lll[selecmet ] _1(i ) = ( 1-_1)c_1r(i ) - p_1(i ) |_p_1(i)=p_1^_1(i ) + _ 2(i ) = ( 1-_2)c_2r(i ) - p_2(i)|_p_2(i)=p_2^_2(i ) + _ 3(i ) = ( 1-_1)c_12r(i)+(1-_2)c_21r(i ) + - ( p_1(i)+p_2(i))|_p_2(i)=p_2^_3(i)^p_1(i)=p_1^_3(i ) + _ 6(i ) = _ 1 c_r2(i)+_2 c_r1(i ) - p_r(i)|_p_r(i)=p_r^_6(i ) where denotes the optimal transmit power of node for transmission mode in the -th time slot and is given by lll[optpower ] p_1^_1 ( i ) = ^+ + p_2^_2 ( i ) = ^+ + p_1^_3 ( i ) = ^+ , _ 1_2 + ^+ , + p_2^_3 ( i ) = ^+ , _ 1 _ 2 + ^+ , + p_r^_6 ( i ) = ^+ where ^+=\max\{x,0\} ] [ l][c][0.45]}\,(\omega_1=1) ] , to , and later in appendix [ appbinrelax ] , we prove that the binary relaxation does not affect the maximum average sum rate . 
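since the closed-form metric and power expressions of the theorem do not survive this transcription intact, the following is a minimal numerical sketch (in python) of the general structure of such a policy rather than the paper's exact formulas: in each fading block, every candidate transmission mode is scored by a lagrangian selection metric of the form "weighted rate minus a price on the transmit power", the power is chosen to maximize that metric, and the mode with the largest metric is selected. the weights mu1 and mu2 and the power price gamma are fixed placeholders here; in the paper they are lagrange multipliers tuned so that the buffer-stability and average total power constraints hold. the relay-to-single-user modes are omitted because, as shown later in the proof, they are never selected under the optimal multipliers.

```python
import numpy as np

def cap(x):                          # awgn capacity in bits/symbol, unit noise variance
    return np.log2(1.0 + x)

def best_power_1d(metric, p_max=10.0, steps=400):
    """coarse line search for the transmit power maximizing a 1-d metric"""
    grid = np.linspace(0.0, p_max, steps)
    vals = np.array([metric(p) for p in grid])
    i = int(np.argmax(vals))
    return grid[i], vals[i]

def select_mode(s1, s2, mu1=0.4, mu2=0.4, gamma=0.15, p_max=10.0):
    """return (selected mode, metric value, powers) for one fading block;
    s1, s2 are the squared gains of the user1-relay and user2-relay links"""
    cand = {}
    # mode 1: user 1 -> relay, metric (1-mu1)*rate - gamma*power
    p, v = best_power_1d(lambda p: (1 - mu1) * cap(p * s1) - gamma * p, p_max)
    cand['user1->relay'] = (v, {'p1': p})
    # mode 2: user 2 -> relay
    p, v = best_power_1d(lambda p: (1 - mu2) * cap(p * s2) - gamma * p, p_max)
    cand['user2->relay'] = (v, {'p2': p})
    # mode 3: multiple access (one fixed SIC order shown; the paper also
    # time-shares between the two decoding orders)
    grid = np.linspace(0.0, p_max, 80)
    best = (-np.inf, None)
    for p1 in grid:
        for p2 in grid:
            r1 = cap(p1 * s1 / (1.0 + p2 * s2))      # user 1 decoded first
            r2 = cap(p2 * s2)                        # user 2 decoded interference-free
            v = (1 - mu1) * r1 + (1 - mu2) * r2 - gamma * (p1 + p2)
            if v > best[0]:
                best = (v, {'p1': p1, 'p2': p2})
    cand['multiple access'] = best
    # mode 6: broadcast from the relay to both users with a common power
    p, v = best_power_1d(lambda p: mu1 * cap(p * s2) + mu2 * cap(p * s1) - gamma * p, p_max)
    cand['broadcast'] = (v, {'pr': p})
    k = max(cand, key=lambda m: cand[m][0])
    return k, cand[k][0], cand[k][1]

# one rayleigh block-fading realization
rng = np.random.default_rng(0)
print(select_mode(rng.exponential(1.0), rng.exponential(1.0)))
```

the coarse grid searches above simply stand in for the closed-form water-filling-type power expressions of the theorem; they are intended to make the coupling between mode selection and power allocation visible, not to reproduce the exact solution.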
in the following ,we investigate the karush - kuhn - tucker ( kkt ) necessary conditions for the relaxed optimization problem and show that the necessary conditions result in a unique sum rate and thus the solution is optimal .to simplify the usage of the kkt conditions , we formulate a minimization problem equivalent to the relaxed maximization problem in ( [ adaptprob ] ) as follows cll[adaptprobmin ] & -(|r_1r+|r_2r ) + & : |r_1r-|r_r2=0 + & : |r_2r-|r_r1=0 + & : |p_1+|p_2+|p_r - p_t0 + & : _ k = 1 ^ 6 q_k ( i ) - 1 = 0 , i + & : q_k(i)-1 0 , i , k + & : -q_k(i ) 0 , i , k + & : -p_j(i ) 0 , i , k + & : t(i)-1 0 , i + & : -t(i ) 0 , i. the lagrangian function for the above optimization problem is provided in ( [ kkt function ] ) at the top of the next page where , and are the lagrange multipliers corresponding to constraints , and , respectively .the kkt conditions include the following : l[kkt function ] = + - ( |r_1r+|r_2r ) + _ 1(|r_1r-|r_r2 ) + _ 2(|r_2r-|r_r1 ) + ( |p_1+|p_2+|p_r - p_t ) + + _ i = 1^n ( i ) ( _ k = 1 ^ 6 q_k ( i ) - 1 ) + _ i = 1^n _ k = 1 ^ 6 _ k ( i ) ( q_k ( i ) - 1 ) - _ i = 1^n _ k = 1 ^ 6 _ k ( i ) q_k ( i ) + -_i = 1^n + _ i = 1^n _ 1(i ) ( t(i)-1 ) - _ i = 1^n _ 0(i ) t(i ) * 1 ) * stationary condition : the differentiation of the lagrangian function with respect to the primal variables , , and , is zero for the optimal solution , i.e. , cccl[stationary condition ] & = & 0 , & i , k + & = & 0 , & i , j + & = & 0 , & i. * 2 ) * primal feasibility condition : the optimal solution has to satisfy the constraints of the primal problem in ( [ adaptprobmin ] ) . * 3 ) * dual feasibility condition : the lagrange multipliers for the inequality constraints have to be non - negative , i.e. , lll[dual feasibility condition ] _k(i)0 , & i , k + _ k(i)0,&i , k + 0 , & + _ j(i ) 0 , & i , j + _ l(i ) 0 , & i , l .* 4 ) * complementary slackness : if an inequality is inactive , i.e. , the optimal solution is in the interior of the corresponding set , the corresponding lagrange multiplier is zero .thus , we obtain lll[complementary slackness ] _ k ( i ) ( q_k ( i ) - 1 ) = 0,&i , k + _ k ( i ) q_k ( i ) = 0 , & i , k + ( |p_1+|p_2+|p_r - p_t ) = 0 & + _ j(i ) p_j(i ) = 0 , & i , j + _ 1(i ) ( t(i)-1 ) = 0 , & i + _ 0(i ) t(i ) = 0 , & i. a common approach to find a set of primal variables , i.e. , and lagrange multipliers , i.e. , , which satisfy the kkt conditions is to start with the complementary slackness conditions and see if the inequalities are active or not . combining these results with the primal feasibility and dual feasibility conditions , we obtain various possibilities .then , from these possibilities , we obtain one or more candidate solutions from the stationary conditions and the optimal solution is surely one of these candidates . in the following subsections , with this approach , we find the optimal values of and . in order to determine the optimal selection policy , , we must calculate the derivatives in ( [ stationary condition]a ) .this leads to lll[stationary mode ] = - ( 1-_1)c_1r(i)+(i)+_1(i)-_1(i ) + p_1(i)=0 + = -(1-_2)c_2r(i)+(i)+_2(i)-_2(i ) + p_2(i)=0 + = -[(1-_1)c_12r(i)+(1-_2)c_21r(i)]+(i ) + _ 3(i)-_3(i)+(p_1(i)+p_2(i))=0 + = - _ 2 c_r1(i)+(i)+_4(i)-_4(i)+p_r(i)=0 + = -_1 c_r2(i)+(i)+_5(i)-_5(i)+p_r(i)=0 + = -[_1 c_r2(i)+_2 c_r1(i)]+(i)+_6(i)-_6(i)+p_r(i)=0 . without loss of generality ,we first obtain the necessary condition for and then generalize the result to . 
if , from constraint in ( [ adaptprobmin ] ) , the other selection variables are zero , i.e. , .furthermore , from ( [ complementary slackness ] ) , we obtain and . by substituting these values into ( [ stationary mode ] ) , we obtain lll[met ] ( i)+_1(i ) = ( 1-_1)c_1r(i ) -p_1(i ) _1(i ) + ( i)-_2(i ) = ( 1-_2)c_2r(i ) -p_2(i ) _2(i ) + ( i)-_3(i ) = ( 1-_1)c_12r(i)+(1-_2)c_21r(i ) -(p_1(i)+p_2(i ) ) _3(i ) + ( i)-_4(i ) = _ 2 c_r1(i ) -p_r(i ) _4(i ) + ( i)-_5(i ) = _ 1 c_r2(i ) -p_r(i ) _5(i ) + ( i)-_6(i ) = _ 1 c_r2(i)+_2 c_r1(i ) -p_r(i ) _6(i ) , where is referred to as selection metric . by subtracting ( [ met]a ) from the rest of the equations in ( [ met ] ) , we obtain rcl[eq_2_1 ] _1(i ) - _ k(i ) = _ 1(i)+_k(i ) , k=2,3,4,5,6 . from the dual feasibility conditions given in ( [ dual feasibility condition]a ) and ( [ dual feasibility condition]b ) , we have . by inserting in ( [ eq_2_1 ] ), we obtain the necessary condition for as lll _1(i ) \ { _ 2(i ) , _ 3(i ) , _4(i ) , _ 5(i ) , _6(i ) } . repeating the same procedure for , we obtain a necessary condition for selecting transmission mode in the -th time slot as follows lll[optmet ] _k^*(i ) \{_k(i ) } , where the lagrange multipliers , and are chosen such that , and in ( [ adaptprobmin ] ) hold and the optimal value of in and is obtained in the next subsection .we note that if the selection metrics are not equal in the -th time slot , only one of the modes satisfies ( [ optmet ] ) .therefore , the necessary conditions for the mode selection in ( [ optmet ] ) is sufficient .moreover , in appendix [ appbinrelax ] , we prove that the probability that two selection metrics are equal is zero due to the randomness of the time- continuous channel gains .therefore , the necessary condition for selecting transmission mode in ( [ optmet ] ) is in fact sufficient and is the optimal selection policy . in order to determine the optimal , we have to calculate the derivatives in ( [ stationary condition]b ) .this leads to [ stationary power ] \nonumber \\ & + \gamma \frac{1}{n } ( q_1(i)+q_3(i ) ) - \nu_1(i ) = 0 \quad \tag{\stepcounter{equation}\theequation}\\ \frac{\partial\mathcal{l}}{\partial p_2(i ) } { \hspace{-0.7mm}=\hspace{-0.7mm}}&-\frac{1}{n\mathrm{ln}2 } \big [ \big \ { ( 1{\hspace{-0.7mm}-\hspace{-0.7mm}}\mu_2)q_2(i ) { \hspace{-0.7mm}+\hspace{-0.7mm}}(1{\hspace{-0.7mm}-\hspace{-0.7mm}}t(i))(\mu_1{\hspace{-0.7mm}-\hspace{-0.7mm}}\mu_2)q_3(i ) \big\ } \nonumber \\ & \times\frac{s_2(i)}{1{\hspace{-0.7mm}+\hspace{-0.7mm}}p_2(i)s_2(i ) } { \hspace{-0.7mm}+\hspace{-0.7mm}}\big\{t(i)(\mu_1{\hspace{-0.7mm}-\hspace{-0.7mm}}\mu_2){\hspace{-0.7mm}+\hspace{-0.7mm}}1{\hspace{-0.7mm}-\hspace{-0.7mm}}\mu_1 \big\}q_3(i ) \nonumber \\ & \times\frac{s_2(i)}{1+p_1(i)s_1(i)+p_2(i)s_2(i ) } \big]\nonumber \\ & + \gamma \frac{1}{n } ( q_2(i)+q_3(i ) ) - \nu_2(i)=0 \quad\tag{\stepcounter{equation}\theequation}\\ \frac{\partial\mathcal{l}}{\partial p_r(i ) } = & -\frac{1}{n\mathrm{ln}2 } \big [ \mu_2\left(q_4(i)+q_6(i)\right)\frac{s_1(i)}{1+p_r(i)s_1(i)}\nonumber \\ & + \mu_1\left(q_5(i)+q_6(i)\right)\frac{s_2(i)}{1+p_r(i)s_2(i ) } \big]\nonumber \\ & + \gamma \frac{1}{n } ( q_4(i)+q_5(i)+q_6(i ) ) - \nu_r(i)=0 \tag{\stepcounter{equation}\theequation}\end{aligned}\ ] ] the above conditions allow the derivation of the optimal powers for each transmission mode in each time slot .for instance , in order to determine the transmit power of user 1 in transmission mode , we assume . 
from constraint in ( [ adaptprobmin ] ) , we obtained that the other selection variables are zero and therefore . moreover ,if is selected then and thus from ( [ complementary slackness]d ) , we obtain . substituting these results in ( [ stationary power]a ) , we obtain lll [ eq_11 ] p_1^_1 ( i ) = ^+ , where ^+=\max\{0,x\}$ ] . in a similar manner , we obtain the optimal powers for user 2 in mode , and the optimal powers of the relay in modes and as follows : lll[p245 ] p_2^_2 ( i ) = ^+ + p_r^_4 ( i ) = ^+ + p_r^_5 ( i ) = ^+ in order to obtain the optimal powers of user 1 and user 2 in mode , we assume . from in ( [ adaptprobmin ] ) , we obtain that the other selection variables are zero , and therefore and .we note that if one of the powers of user 2 and user 1 is zero mode is identical to modes and , respectively , and for that case the optimal powers are already given by ( [ eq_11 ] ) and ( [ p245]a ) , respectively . for the case when and , we obtain and from ( [ complementary slackness]d ) .furthermore , for , we will show in appendix [ appkkt].c that can only take the boundary values , i.e. , zero or one , and can not be in between .hence , if we assume , from ( [ stationary power]a ) and ( [ stationary power]b ) , we obtain lll[powerm3 ] & - + = 0 + & - + = 0 by substituting ( [ powerm3]a ) in ( [ powerm3]b ) , we obtain and then we can derive from ( [ powerm3]a ) .this leads to lll [ pm3-t0 ] p_1^_3 ( i ) = p_1^_1 ( i ) , s_2 + ^+ , + p_2^_3 ( i ) = p_2^_2 ( i ) , s_2 s_1 + ^+ , similarly , if we assume , we obtain lll [ pm3-t1 ] p_1^_3 ( i ) = p_1^_1 ( i ) , s_2 s_1 + ^+ , + p_2^_3 ( i ) = p_2^_2 ( i ) , s_2 + ^+ , we note that when , we obtain which means that mode is identical to mode .thus , there is no difference between both modes so we select . in figs[ figsregion ] a ) and [ figsregion ] b ) , the comparison of , and is illustrated in the space of .moreover , the shaded area represents the region in which the powers of users 1 and 2 are zero for , and .[ c][c][0.6] [ c][c][0.6] [ c][c][0.6] [ c][c][0.6] [ c][c][0.6] [ c][c][0.6] [ c][c][0.6] [ c][c][0.6] [ c][c][0.75] [ c][b][0.7] [ c][c][0.7] [ c][c][0.75] [ c][c][0.75] [ c][c][0.75] [ c][c][0.65] [ c][t][0.65] [ c][c][0.75] [ c][c][0.75] [ c][c][0.75] [ c][c][0.75]a ) [ c][c][0.75]b ) [ c][c][0.75]c ) for mode , we assume . from constraint in ( [ adaptprobmin ] ) , we obtain that the other selection variables are zero and therefore and .moreover , if then and thus from ( [ complementary slackness]d ) , we obtain . using these results in ( [ stationary power]c ), we obtain lll[powerm6 ] _ 2 + _ 1 = 2 the above equation is a quadratic equation and has two solutions for .however , since we have , we can conclude that the left hand side of ( [ powerm6 ] ) is monotonically decreasing in .thus , if , we have a unique positive solution for which is the maximum of the two roots of ( [ powerm6 ] ) .thus , we obtain lll[pm6 ] p_r^_6 ( i ) = ^+ , where , and . in fig .[ figsregion ] c ) , the comparison between selection metrics and is illustrated in the space of .we note that and hold and the inequalities hold with equality if and , respectively , which happen with zero probability for time - continuous fading . to prove , from ( [ met ] ) ,we obtain rcl _6(i ) & = & _ 1c_r2(i ) + _ 2c_r1(i ) -p_r(i ) |_p_r(i)=p_r^_6(i ) + & & _1c_r2(i ) + _ 2c_r1(i ) -p_r(i)|_p_r(i)=p_r^_4(i ) + & & _2c_r1(i ) -p_r(i ) |_p_r(i)=p_r^_4(i ) = _ 4(i ) , where follows from the fact that maximizes and follows from . 
the two inequalities and hold with equality only if which happens with zero probability in time - continuous fading or if .however , in appendix [ appmuregion ] , is shown to lead to a contradiction .therefore , the optimal policy does not select and and selects only modes , and . to find the optimal , we assume and calculate the stationary condition in ( [ stationary condition]c ) .this leads to lll [ stationary t ] = & -(_1-_2 ) + & + _1(i)-_0(i)=0 now , we investigate the following possible cases for : * case 1 : * if then from ( [ complementary slackness]e ) and ( [ complementary slackness]f ) , we have . therefore ,from ( [ stationary t ] ) and , we obtain .then , from ( [ stationary power]a ) and ( [ stationary power]b ) , we obtain rcl[contradict ] - ( 1-_1 ) + = 0 + - ( 1-_1 ) + = 0 in appendix [ appmuregion ] , we show that , therefore , the above conditions can be satisfied simultaneously only if , which , considering the randomness of the time - continuous channel gains , occurs with zero probability .hence , the optimal takes the boundary values , i.e. , zero or one , and not values in between .* case 2 : * if , then from ( [ complementary slackness]e ) , we obtain and from ( [ dual feasibility condition]e ) , we obtain . combining these results into ( [ stationary t ] ) , the necessary condition for is obtained as .* case 3 : * if , then from ( [ complementary slackness]f ) , we obtain and from ( [ dual feasibility condition]e ) , we obtain . combining these results into ( [ stationary t ] ) ,the necessary condition for is obtained as .we note that if , we obtain either or .therefore , mode is not selected and the value of does not affect the sum rate . moreover , from the selection metrics in ( [ met ] ) , we can conclude that and correspond to and , respectively .therefore , the optimal value of is given by lll t^*(i ) = 0 , & _ 1 _ 2 + 1 , & _1 < _ 2 now , the optimal values of , and are derived based on which theorem [ adaptprot ] can be constructed .this completes the proof .in this appendix , we prove that the optimal solution of the problem with the relaxed constraint , , selects the boundary values of , i.e. , zero or one . therefore , the binary relaxation does not change the solution of the problem .if one of the , adopts a non - binary value in the optimal solution , then in order to satisfy constraint in ( [ adaptprob ] ) , there has to be at least one other non - binary selection variable in that time slot .assuming that the mode indices of the non - binary selection variables are and in the -th time slot , we obtain from ( [ complementary slackness]a ) , and and from ( [ complementary slackness]b ) . then , by substituting these values into ( [ stationary mode ] ) , we obtain lll[binrelax ] ( i ) = _ k(i ) + ( i)= _ k(i ) + ( i)-_k(i ) = _ k(i ) , kk , k . from ( [ binrelax]a ) and ( [ binrelax]b ) , we obtain and by subtracting ( [ binrelax]a ) and ( [ binrelax]b ) from ( [ binrelax]c ) , we obtain rcl _k(i ) - _ k(i ) & = & _ k(i ) , kk , k + _ k(i ) - _ k(i ) & = & _ k(i ) , kk , k . from the dual feasibility condition given in ( [ dual feasibility condition]b ), we have which leads to .however , as a result of the randomness of the time - continuous channel gains , holds for some transmission modes and , if and only if we obtain or which leads to a contradiction as shown in appendix [ appmuregion ] .this completes the proof .in this appendix , we find the intervals which contain the optimal value of and . 
we note that for different values of and , some of the optimal powers derived in ( [ eq_11 ] ) , ( [ p245 ] ) , ( [ pm3-t0 ] ) , ( [ pm3-t1 ] ) , and ( [ pm6 ] ) are zero for all channel realizations .for example , if , we obtain from ( [ eq_11 ] ) .[ figmuregion ] illustrates the set of modes that can take positive powers with non - zero probability in the space of ( ) . in the following ,we show that any values of and except and can not lead to the optimal sum rate or violate constraints or in ( [ adaptprobmin ] ) .* case 1 : * sets and lead to selection of either the transmission from the users to the relay or the transmission from the relay to the users , respectively , for all time slots .this leads to violation of constraints and in ( [ adaptprobmin ] ) and thus the optimal values of and are not in this region .* case 2 : * in set , both modes and need the transmission from user 2 to the relay which can not be realized in this set .thus , this set leads to violation of constraint in ( [ adaptprobmin ] ) .similarly , in set , both modes and require the transmission from user 1 to the relay which can not be selected in this set .thus , this region of and leads to violation of constraint in ( [ adaptprobmin ] ) .* case 3 : * in set , there is no transmission from user 2 to the relay .therefore , the optimal values of and have to guarantee that modes and are not selected for any channel realization .however , from ( [ met ] ) , we obtain where follows from the fact that maximizes and follows from . the two inequalities and hold with equality only if which happens with zero probability for time - continuous fading , or which is not included in this region . therefore , mode is selected in this region which leads to violation of constraint in ( [ adaptprobmin ] ) .a similar statement is true for set .thus , the optimal values of and can not be in these two regions . where inequality comes from the fact that and the equality holds when which happens with zero probability , or .inequality holds since maximizes and holds with equality only if and consequently .if , mode is not selected and there is no transmission from the relay to user 2 .therefore , the optimal values of and have to guarantee that modes and are not selected for any channel realization .thus , we obtain which is not contained in this region .if , from ( [ met ] ) , we obtain where both inequalities and hold with equality only if . if , modes and are not selectedthus , there is no transmission from the relay to the users which leads to violation of and in ( [ adaptprobmin ] ) .if , we obtain , thus mode can not be selected and either or , thus mode can not be selected either . since both modes and require the transmission from user 2 to the relay , and both modes and are not selected , constraint in ( [ adaptprobmin ] ) is violated and and can not be optimal .a similar statement is true for set .therefore , the optimal values of and are not in this region .s. j. kim , n. devroye , p. mitran , and v. tarokh , `` achievable rate regions and performance comparison of half duplex bi - directional relaying protocols , '' _ ieee trans .inf . theory _ ,57 , no .6405 6418 , oct .2011 .n. zlatanov and r. schober , `` capacity of the state - dependent half - duplex relay channel without source - destination link , '' _ submitted ieee transactions on information theory _ , 2013 .[ online ] .available : http://arxiv.org/abs/1302.3777 v. jamali , n. zlatanov , a. ikhlef , and r. 
schober, ``adaptive mode selection in bidirectional buffer-aided relay networks with fixed transmit powers,'' _submitted in part to eusipco'13_, 2013. [online]. available: http://arxiv.org/abs/1303.3732
in this paper , we consider the problem of sum rate maximization in a bidirectional relay network with fading . hereby , user 1 and user 2 communicate with each other only through a relay , i.e. , a direct link between user 1 and user 2 is not present . in this network , there exist six possible transmission modes : four point - to - point modes ( user 1-to - relay , user 2-to - relay , relay - to - user 1 , relay - to - user 2 ) , a multiple access mode ( both users to the relay ) , and a broadcast mode ( the relay to both users ) . most existing protocols assume a fixed schedule of using a subset of the aforementioned transmission modes , as a result , the sum rate is limited by the capacity of the weakest link associated with the relay in each time slot . motivated by this limitation , we develop a protocol which is not restricted to adhere to a predefined schedule for using the transmission modes . therefore , all transmission modes of the bidirectional relay network can be used adaptively based on the instantaneous channel state information ( csi ) of the involved links . to this end , the relay has to be equipped with two buffers for the storage of the information received from users 1 and 2 , respectively . for the considered network , given a total average power budget for all nodes , we jointly optimize the transmission mode selection and power allocation based on the instantaneous csi in each time slot for sum rate maximization . simulation results show that the proposed protocol outperforms existing protocols for all signal - to - noise ratios ( snrs ) . specifically , we obtain a considerable gain at low snrs due to the adaptive power allocation and at high snrs due to the adaptive mode selection .
a global economic crisis , such as the recent 2008 - 2009 crisis , is certainly due to a large number of factors . in todays global economy , with strong economic relations between countries , it is important to investigate how a crisis propagates from the country of origin to other countries in the world .indeed , several significant crises in the past few decades have been originated in a single country .however , it is still not clear how and to what extent domestic economies of other countries may be affected by this spreading , due to the inter - dependence of economies . here, we use a statistical physics approach to deal with the modern economy , as it has been done successfully in the recent years for the case of financial markets and currencies .more precisely , we view the global economy by means of a complex network , where the nodes of the network correspond to the countries and the links to their economic relations . for generating the economic network we use two databases , in order to avoid any bias due to the network selection . a global corporate ownership network ( con )is extracted from a database of the 4000 world corporations with the highest turnover , obtained from the _ bureau van dijk _ .this database includes all the corporate ownership relations to their 616000 direct or indirect subsidiaries for the year 2007 .the trade conducted by these companies , in terms of import / export , is a large fraction of the total world trade .furthermore , the network of subsidiaries is a direct measure of the investment of large corporations in order to grow .foreign investment is a key factor for the development of global and local economies while , as recent economic crises suggest , the role of large corporations to the spreading of crisis in the global economy is yet not clearly understood .the second network , the international trade network ( itn ) , is extracted by the 2007 version of the chelem database obtained by _ bureau van dijk _ , which contains detailed information about international trade , and gdp values for 82 countries in million us dollars .this database provides us with an economic network based on import / export relations between countries . for both networks we are able to locate a nucleus of countries that are the most likely to start a global crisis , and to sort the remaining countries crisis spreading potential according to their `` centrality '' .initially , a crisis is triggered in a country and propagates from this country to others .the propagation probability depends on the strength of the economic ties between the countries involved and on the strength of the economy of the target country .our results show that , besides the large economies , even smaller countries have the potential to start a significant crisis outbreak .the con is a network that connects 206 countries around the globe , using as links the ownership relations within large companies .if companies listed in country a have subsidiary corporations in country b , there is a link connecting these two countries directed from country a to country b. the weight of the link , , equals the number of the subsidiary corporations in country b controlled by companies of country a. next , if companies from country b have subsidiary corporations in country c , then again there is a weighted link , , connecting these two countries directed from b to c , and so on . this way we obtain a network with total 2886 links among 206 nodes ( countries ) . 
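as an illustration of this construction, the short python sketch below builds such a directed, weighted ownership network with networkx from a toy edge list (the actual bureau van dijk data are proprietary, so the country codes and subsidiary counts here are invented), and also computes the symmetric total link weights and total node weights that are introduced in the next paragraph.

```python
import networkx as nx

# hypothetical ownership counts: (owner country, host country, number of subsidiaries)
edges = [("US", "DE", 120), ("DE", "US", 80), ("US", "BE", 40),
         ("BE", "NL", 25), ("NL", "BE", 30), ("DE", "FR", 60), ("FR", "DE", 55)]

con = nx.DiGraph()
con.add_weighted_edges_from(edges)

# symmetric total link weight: w_ab = w(a->b) + w(b->a)
total = nx.Graph()
for a, b, d in con.edges(data=True):
    back = con[b][a]["weight"] if con.has_edge(b, a) else 0
    total.add_edge(a, b, weight=d["weight"] + back)

# total node weight: sum of the total weights of all links attached to a country
node_weight = dict(total.degree(weight="weight"))
print(sorted(node_weight.items(), key=lambda kv: -kv[1]))
```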
of these links685 are bi - directional , meaning that if there is a link from node to , as well as a link from node to , and the rest 1516 are one directional only .we assume that the total link weight between a pair of nodes ( countries ) is the sum of all links independently of their direction , .the total link weight represents the strength of the economic ties between two countries in the network .we quantify the total economic strength of a country by its total node weight , , i.e. , summing the weights of all links of node .the probability density distributions of the total node weights and of the total link weights is skewed and heavy tailed , as shown in fig .s1 in the supplementary information .we find an almost linear relation between and the gdp of country , ( as shown in supplementary fig .s2 ) which indicates that the total weight of a country in our network is strongly correlated to a traditional economic measure .the itn is calculated from the second database after we aggregate the trade actions between all pairs of countries .using the trading relations between each pair of countries e.g. , a and b , we can create a bi - directional network where represents the export of a to b , and represents the export of b to a. of course is equal to , which stands for the imports of b from a. in accordance to the above notations , the total link weight is given by , but the total node weight which quantifies the economic strength of a node equals to its gdp value .to identify the uneven roles of different countries in the global economic network , we use the -shell decomposition and assign a shell index , , to each node .the -shell is a method identifying how central is a node in the network , the higher its the more central role the node is considered to have as a spreader .the nodes in the highest shell , called the nucleus of the network , represent the most central countries . to determine the -shell structure we start by removing all nodes having degree , and we repeat this procedure until we are left only with nodes having .these nodes constitute shell . in a similar way, we remove all nodes with until we are left only with nodes having degree .these nodes constitute .we apply this procedure until all nodes of the network have been assigned to one of the -shells . with this approachwe view the network as a layered structure .the outer layers ( small ) include the loosely connected nodes at the boundary of the network , while in the deeper layers we are able to locate nodes that are more central .an illustration of this structure is shown in fig .[ fig1 ] . to identify the nucleus of con we consider in the -shell the network having only links with weights above a cut - off threshold .by using different threshold values we locate different nuclei of different sizes , as shown in fig . [fig2](a ) .however , for the whole range of the threshold values used , namely for ] , where 0 means that in all runs this country was not infected and 1000 means that in all runs it was infected .we set a threshold value at 80% , so if a country has score then it is considered as infected .if it has score then it is considered as not infected , and if it is in the range we say that we can not conclude about its status .we find that the average percentage of infected countries is 90.6% , while the worst case scenario which is given by the maximum number of infected countries in all our runs is 96.6% . 
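the monte-carlo procedure behind these infection scores can be sketched as follows. the snippet assumes a simple functional form for the transmission probability, p_ij = min(1, m * w_ij / W_j), i.e., growing with the tie strength w_ij and shrinking with the economic strength W_j of the target country, scaled by a crisis magnitude m; this form, the toy network, and the parameter values are illustrative assumptions rather than the paper's exact calibration.

```python
import random
import networkx as nx

def sir_run(g, seed, magnitude=1.0, recover_after=1, rng=random):
    """one SIR realization on a weighted graph; returns the set of ever-infected nodes"""
    strength = dict(g.degree(weight="weight"))
    infected, recovered = {seed}, set()
    timer = {seed: recover_after}
    while infected:
        new = set()
        for i in list(infected):
            for j in g.neighbors(i):
                if j in infected or j in recovered:
                    continue
                p = min(1.0, magnitude * g[i][j]["weight"] / strength[j])
                if rng.random() < p:
                    new.add(j)
        for i in list(infected):                 # recovery step
            timer[i] -= 1
            if timer[i] == 0:
                infected.remove(i)
                recovered.add(i)
        for j in new:                            # newly infected countries
            infected.add(j)
            timer[j] = recover_after
    return recovered

def infection_scores(g, seed, runs=1000, **kw):
    """score in [0, runs] per country; >= 0.8*runs counts as infected, as in the text"""
    counts = {node: 0 for node in g}
    for _ in range(runs):
        for node in sir_run(g, seed, **kw):
            counts[node] += 1
    return counts

# toy weighted network standing in for the CON / ITN
g = nx.Graph()
g.add_weighted_edges_from([("US", "DE", 200), ("US", "BE", 40), ("DE", "BE", 60),
                           ("BE", "NL", 55), ("DE", "FR", 115), ("FR", "IT", 70)])
print(infection_scores(g, seed="BE", runs=1000, magnitude=0.5))
```

seeding the runs from a nucleus country versus a low-shell country reproduces, qualitatively, the kind of contrast in outbreak size discussed next in the text.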
for comparison purposes, if we start a crisis of the same magnitude in countries of lower shells, we find a much lower percentage. for example, if we start the crisis in russia (ks = 6), the average fraction of infected countries is 3.34%, while the worst case scenario, given by the maximum value of the fraction of infected countries, is 18.9%.

[figure caption, panel (d): probability of the correct prediction of the model as a function of the 2009 vs. 2008 change in the gdp percentage rate. the curve is smoothed using a 6-point moving average. the trend shown by the linear fit (red curve) indicates quantitatively that the more affected a country is by the real crisis of 2008 (represented by a larger change in gdp), the higher the probability of our model yielding a correct prediction.]
* we model the spreading of a crisis by constructing a global economic network and applying the susceptible - infected - recovered ( sir ) epidemic model with a variable probability of infection . the probability of infection depends on the strength of economic relations between the pair of countries , and the strength of the target country . it is expected that a crisis which originates in a large country , such as the usa , has the potential to spread globally , like the recent crisis . surprisingly we show that also countries with much lower gdp , such as belgium , are able to initiate a global crisis . using the _ k_-shell decomposition method to quantify the spreading power ( of a node ) , we obtain a measure of `` centrality '' as a spreader of each country in the economic network . we thus rank the different countries according to the shell they belong to , and find the 12 most central countries . these countries are the most likely to spread a crisis globally . of these 12 only six are large economies , while the other six are medium / small ones , a result that could not have been otherwise anticipated . furthermore , we use our model to predict the crisis spreading potential of countries belonging to different shells according to the crisis magnitude . *
hidden in batse s superb gamma - ray burst lightcurves in different energy bands are temporal and spectral signatures of the fundamental physical processes which produced the observed emission .various techniques have been applied to the batse data to extract these signatures , such as : auto- and crosscorrelations of lightcurves in different energies ; fourier transforms ; lightcurve averaging ; cross - fourier transforms and pulse fitting . herewe propose to use linear state space models ( lssm ) to study the gamma - ray burst lightcurves .lssm estimates a time series underlying autoregressive ( ar ) process in the presence of observational noise .an ar process assumes that the real time series is a linear function of its past values ( `` autoregression '' ) in addition to `` noise , '' a stochastic component of the process .since the noise adds information to the system , it is sometimes called the `` innovation'' .a moving average of the previous noise terms is equivalent to autoregression , and therefore these models are often called arma ( autoregressive , moving average ) processes .while arma processes are simply mathematical models of a time series , the resulting model can be interpreted physically , which is the purpose of their application to astrophysical systems .for example , the noise may be the injection of energy into an emission region , while the autoregression may be the response of the emission region to this energy injection , such as exponential cooling .the application of lssm to burst lightcurves can be viewed as an exploration of burst phenomenology devoid of physical content : how complicated an ar process is necessary to model burst lightcurves ? can all bursts be modeled with the same ar process ? however , because different types of ar processes can be interpreted as the response of a system to a stochastic excitation , characterizing bursts in terms of ar processes has physical implications .since we have lightcurves in different energy bands , we can compare the response at different energies .for example , the single coefficient in the ar[1 ] process ( the nomenclature is described below ) is a function of an exponential decay constant .if the lightcurves in all energy bands can be modeled by ar[1 ] then we have decay constants for every energy band .since most bursts undergo hard - to - soft spectral evolution and temporal structure is narrower at high energy than at low energy , we expect the decay constants to be shorter for the high energy bands .the purpose of the lssm methodology is to recover the hidden ar process . if the time series is an ar[p ] process then where time is assumed to advance in integral units .the `` noise '' ( or `` innovation '' ) is uncorrelated and possesses a well - defined variance ; the noise is usually assumed to be gaussian . 
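to make the hidden-process picture concrete, the short python sketch below simulates an ar(2) process (a stochastically driven, damped oscillator) observed through additive measurement noise, which is the generative model assumed by the lssm analysis. the period, relaxation time, and noise levels are arbitrary illustration values, not fitted burst parameters, and the naive least-squares fit at the end shows how the observational noise biases a direct ar estimate, which is the problem the em-based lssm fit is designed to overcome.

```python
import numpy as np

rng = np.random.default_rng(1)

# damped-oscillator parametrization of an ar(2) process (one common convention)
T, tau = 20.0, 50.0                                  # period and relaxation time, in bins
a1 = 2.0 * np.exp(-1.0 / tau) * np.cos(2.0 * np.pi / T)
a2 = -np.exp(-2.0 / tau)

n, sigma_eps, sigma_eta = 1000, 1.0, 2.0
x = np.zeros(n)                                      # hidden ar(2) process
for t in range(2, n):
    x[t] = a1 * x[t-1] + a2 * x[t-2] + rng.normal(0.0, sigma_eps)

y = x + rng.normal(0.0, sigma_eta, size=n)           # observed series with measurement noise

# naive ar(2) fit on the *noisy* data (least squares on lagged values); the
# observational noise attenuates these estimates, which the lssm/em fit avoids
A = np.column_stack([y[1:-1], y[:-2]])
a_hat, *_ = np.linalg.lstsq(A, y[2:], rcond=None)
print("true (a1, a2):", (round(a1, 3), round(a2, 3)), " naive fit:", np.round(a_hat, 3))
```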
since the burst count rate can not be negative , we expect the noise also can not be negative .a kolmogorov - smirnov test is used to determine when p is large enough to model the system adequately .if p=1 , the system responds exponentially to the noise with a decay constant , and the p=2 system is a damped oscillator with period and relaxation time , thus , the lowest order ar processes lend themselves to obvious physical interpretations .unfortunately , we do not detect directly , but a quantity which is a linear function of and observational noise : where in our case is an irrelevant multiplicative factor and is a zero - mean noise term with variance ; is also often assumed to be gaussian .the lssm code uses the expectation - maximization algorithm .we have thus far applied our lssm code to 17 gamma - ray bursts .we used the 4-channel batse lad discriminator lightcurves extracted from the discsc , preb , and discla datatypes , which have 64 ms resolution ; the energy ranges are 2550 , 50100 , 100300 and 3002000 kev .each channel was treated separately , resulting in 68 lightcurves . of these lightcurves ,52 could be modeled by ar[1 ] , 13 by ar[2 ] and 3 by ar[4 ] .thus there is a preference for the simplest model , ar[1 ] .note that chernenko found an exponential response to a source function in their soft component .figure 1 presents the normalized relaxation time constants for the bursts in our sample , as well as their average . even for models more complicated that ar[1 ] a relaxation time constant can be identified .as expected , the averages of these time constants become shorter as the energy increases from channel 1 to channel 4 , consistent with the trend found in quantitative studies of spectral evolution and the qualitative inspection of burst lightcurves . in figure 2we present the analysis of grb 940217 , the burst with an 18 gev photon 90 minutes after the lower energy gamma - ray emission ended . as can be seen ,the residuals are much smaller than the model and are consistent with fluctuations around 0 ; plots for the data and the model are indistinguishable , and only one is presented .the amplitude of the residuals increases as the count rate increases ( attributable in part to counting statistics ) , but there is no net deviation from 0 .we plan to apply the lssm code to a large number of bursts .we will compare the order of the underlying ar process and the resulting coefficients obtained for the different energy lightcurves of the same burst and for different bursts . in this way we can search for hidden classes of bursts and explore the universality of the physical processes .the `` noise '' might be a measure of the energy supplied to the emission region ( although which physical processes are the noise and which the response is model dependent ) . therefore characterizing may probe a deeper level of the burst phenomenon .the lightcurves for the different energy bands should be related ; we expect major events to occur at the same time in all the energy bands , although the relative intensities may differ . many of the bursts consist of well - separated spikes or complexes of spikes .we will apply the lssm code to each part of the burst to determine whether the same order ar process characterizes the entire burst , and if so , whether the ar process has the same coefficients .this will test whether the physical processes remain the same during the burst .d. 
band's gamma-ray burst research is supported by the _cgro_ guest investigator program and nasa contract nas8-36081.

band, d. l., _ap. j._ *486*, 928 (1997).
shaviv, n., ph.d. thesis, technion (1996).
mitrofanov, i., _ap. j._ *459*, 570 (1996).
kouveliotou, c., in _gamma-ray bursts, aip conf. 265_, eds. w. paciesas and g. fishman (new york: aip), 299 (1992).
norris, j. p., _ap. j._ *459*, 393 (1996).
scargle, j. d., _ap. j. suppl._ *45*, 1 (1981).
ford, l. a., _ap. j._ *439*, 307 (1995).
fenimore, e. e., _ap. j. lett._ *448*, l101 (1995).
honerkamp, j., _stochastic dynamical systems_ (new york: vch publ.) (1993).
könig, m., and timmer, j., _astron. astrophys._ *124*, 589 (1997).
chernenko, a., these proceedings (1998).
hurley, k., _nature_ *372*, 652 (1994).
linear state space modeling determines the hidden autoregressive (ar) process in a noisy time series; for an ar process, the time series' current value is the sum of the current stochastic ``noise'' and a linear combination of its previous values. we present preliminary results from modeling a sample of 4-channel batse lad lightcurves. we determine the order of the ar process necessary to model the bursts. the comparison of decay constants for different energy bands shows that structure decays more rapidly at high energy. the resulting models can be interpreted physically; for example, they may reveal the response of the burst emission region to the injection of energy.
multiple - input multiple - output ( mimo ) systems with large number ( e.g. , tens ) of transmit and receive antennas , referred to as ` large - mimo systems , ' are of interest because of the high capacities / spectral efficiencies theoretically predicted in these systems , .research in low - complexity receive processing ( e.g. , mimo detection ) techniques that can lead to practical realization of large - mimo systems is both nascent as well as promising . for e.g. ,ntt docomo has already field demonstrated a v - blast system operating at 5 gbps data rate and 50 bps / hz spectral efficiency in 5 ghz band at a mobile speed of 10 km / hr .evolution of wifi standards ( evolution from ieee 802.11n to ieee 802.11ac to achieve multi - gigabit rate transmissions in 5 ghz band ) now considers mimo operation ; see mimo indoor channel sounding measurements at 5 ghz reported in for consideration in wifi standards .also , mimo channel sounding measurements at 5 ghz in indoor environments have been reported in .we note that , while rf / antenna technologies / measurements for large - mimo systems are getting matured , there is an increasing need to focus on low - complexity algorithms for detection in large - mimo systems to reap their high spectral efficiency benefits . in the above context , in our recent works, we have shown that certain algorithms from machine learning / artificial intelligence achieve near - optimal performance in large - mimo systems at low complexities - . in - , a local neighborhood searchbased algorithm , namely , a _ likelihood ascent search _( las ) algorithm , was proposed and shown to achieve close to maximum - likelihood ( ml ) performance in mimo systems with several tens of antennas ( e.g. , and mimo ) .subsequently , in , , another local search algorithm , namely , _ reactive tabu search _ ( rts ) algorithm , which performed better than the las algorithm through the use of a local minima exit strategy was presented . in ,near - ml performance in a mimo system was demonstrated using a _ gibbs sampling _ based detection algorithm , where the symbols take values from .more recently , we , in , proposed a factor graph based _ belief propagation _( bp ) algorithm for large - mimo detection , where we adopted a gaussian approximation of the interference ( gai ) .the motivation for the present work arises from the following two observations on the rts and bp algorithms in , and : rts works for general -qam . although rts was shown to achieve close to ml performance for 4-qam in large dimensions , significant performance improvement was still possible for higher - order qam ( e.g. , 16- and 64-qam ) . bp also was shown to achieve near - optimal performance for large dimensions , but only for alphabet . in this paper, we improve the large - mimo detection performance of higher - order qam signals by using a hybrid algorithm that employs rts and bp . 
in particular , we observed that when a detection error occurs at the rts output , the least significant bits ( lsb ) of the symbols are mostly in error .motivated by this observation , we propose to first reconstruct and cancel the interference due to bits other than the lsbs at the rts output and feed the interference cancelled received signal to the bp algorithm to improve the reliability of the lsbs .the output of the bp is then fed back to the rts for the next iteration .our simulation results show that the proposed rts - bp algorithm achieves better uncoded as well as coded ber performance compared to those achieved by rts in large - mimo systems with higher - order qam ( e.g. , rts - bp performs better by about 3.5 db at uncoded ber and by about 2.5 db at rate-3/4 turbo coded ber in v - blast with 64-qam ) at the same order of complexity as rts .the rest of this paper is organized as follows . in sec .[ sec2 ] , we introduce the rts and bp algorithms in , and and the motivation for the current work . the proposed hybrid rts - bp algorithm and its performanceare presented in secs .[ sec3 ] and [ sec4 ] .conclusions are given in sec .consider a v - blast mimo system whose received signal vector , , is of the form where is the symbol vector transmitted , is the channel gain matrix , and is the noise vector whose entries are modeled as i.i.d .assuming rich scattering , we model the entries of as i.i.d .each element of is an -pam or -qam symbol .-pam symbols take values from , where , and -qam is nothing but two pams in quadrature . as in , we convert ( [ eqn1 ] ) into a real - valued system model , given by where , , , . for -qam , ] .let denote the -pam signal set from which takes values , .defining a -dimensional signal space to be the cartesian product of to , the ml solution vector , , is given by whose complexity is exponential in .the rts algorithm in , is a low - complexity algorithm , which minimizes the ml metric in ( [ mldetection ] ) through a local neighborhood search . a detailed description of the rts algorithm for large - mimo detection is available in , .here , we present a brief summary of the key aspects of the algorithm , and its 16- and 64-qam performance that motivates the current work . the rts algorithm starts with an initial solution vector , defines a neighborhood around it ( i.e. , defines a set of neighboring vectors based on a neighborhood criteria ) , and moves to the best vector among the neighboring vectors ( even if the best neighboring vector is worse , in terms of likelihood , than the current solution vector ; this allows the algorithm to escape from local minima ) .this process is continued for a certain number of iterations , after which the algorithm is terminated and the best among the solution vectors in all the iterations is declared as the final solution vector . in defining the neighborhood of the solution vector in a given iteration ,the algorithm attempts to avoid cycling by making the moves to solution vectors of the past few iterations as ` tabu ' ( i.e. , prohibits these moves ) , which ensures efficient search of the solution space .the number of these past iterations is parametrized as the ` tabu period . 'the search is referred to as fixed tabu search if the tabu period is kept constant .if the tabu period is dynamically changed ( e.g. 
, increase the tabu period if more repetitions of the solution vectors are observed in the search path ) , then the search is called reactive tabu search .we consider reactive tabu search because of its robustness ( choice of a good fixed tabu period can be tedious ) .the per - symbol complexity of rts for detection of v - blast signals is .figure [ fig1 ] shows the uncoded ber performance of rts using the algorithm parameters optimized through simulations for 4- , 16- , and 64-qam in a v - blast system . as lower bounds on the error performance in mimo , the siso awgn performance for 4- , 16- , and 64-qam are also plotted .it can be seen that , in the case of 4-qam , the rts performance is just about 0.5 db away from the siso awgn performance at ber .however , the gap between rts performance and siso awgn performance at ber widens for 16-qam and 64-qam ; the gap is 7.5 db for 16-qam and 16.5 db for 64-qam .this gap can be viewed as a potential indicator of the amount of improvement in performance possible further. a more appropriate indicator will be the gap between rts performance and the ml performance .since simulation of sphere decoding ( sd ) of v - blast with 16- and 64-qam ( 64 real dimensions ) is computationally intensive , we do not show the sd ( ml ) performance .nevertheless , the widening gap of rts performance from siso awgn performance for 16- and 64-qam seen in fig .[ fig1 ] motivated us to explore improved algorithms to achieve better performance than rts performance for higher - order qam .= 7.2 cm = 9.60 cm [ fig1 ] in , we presented a detection algorithm based on bp on factor graphs of mimo systems . in ( [ eqn2 ] ) , each entry of the vector is treated as a function node ( observation node ) , and each symbol , , as a variable node . a key ingredient in the bp algorithm in , which contributes to its low complexity , is the gaussian approximation of interference ( gai ) , where the interference plus noise term , , in is modeled as with , and , where is the element in . with s ,the log - likelihood ratio ( llr ) of at observation node , denoted by , is the llr values computed at the observation nodes are passed to the variable nodes ( as shown in fig . [ fig2 ] ) . using these llrs , the variable nodes compute the probabilities and pass them back to the observation nodes ( fig .[ fig2 ] ) .this message passing is carried out for a certain number of iterations . at the end , is detected as it has been shown in that this bp algorithm with gai , like las and rts algorithms , exhibits ` large - system behavior , ' where the bit error performance improves with increasing number of dimensions .= 3.5 cm = 4.2 cm = 3.5 cm = 4.2 cm in fig .[ fig1 ] , the uncoded ber performance of this bp algorithm for 4-qam ( input data vector of size with elements from ) in v - blast is also plotted .we can see that the performance is almost the same as that of rts . in terms of complexity , the bp algorithm has the advantage of no need to compute an initial solution vector and , which is required in rts .the per - symbol complexity of the bp algorithm for detection in v - blast is .a limitation with this bp approach is that it is not for general -qam .however , its good performance with alphabet at lower complexities than rts can be exploited to improve the higher - order qam performance of rts , as proposed in the following section .[ sec3 ] in this section , we highlight the rationale behind the hybrid rts - bp approach and present the proposed algorithm . _ why hybrid rts - bp ? 
_the proposed hybrid rts - bp approach is motivated by the the following observation we made in our rts ber simulations .we observed that , at moderate to high snrs , when an rts output vector is in error , the least significant bits ( lsb ) of the data symbols are more likely to be in error than other bits .an analytical reasoning for this behavior can be given as follows .let be the transmit vector and be the corresponding output of the rts detector .let denote the -pam alphabet that s take values from . consider the symbol - to - bit mapping , where we can write the value of each entry of as a linear combination of its constituent bits as where and .we note that the rts algorithm outputs a local minima as the solution vector .so , , being a local minima , satisfies the following conditions : where , and denotes the column of the identity matrix . defining , , and denoting the column of as , the conditions in ( [ localminima ] ) reduce to where denotes the element of . under moderate to high snr conditions , ignoring the noise , ( [ reducedcondition ] ) can be further reduced to where denotes the column of * f*. for rayleigh fading , is chi - square distributed with degrees of freedom with mean . approximating the distribution of to be normal with mean zero and variance for by central limit theorem , we can drop the in ( [ finalcondn1 ] ) . using the fact that the minimum value of is 2 , ( [ finalcondn1 ] )can be simplified as where .also , if , by the normal approximation in the above now , the lhs in ( [ finalcondn2 ] ) being normal with variance proportional to and the rhs being positive , it can be seen that , take smaller values with higher probability . hence , the symbols of are nearest euclidean neighbors of their corresponding symbols of the global minima with high probability s and s take values from -pam alphabet , is said to be the euclidean nearest neighbor of if . ] .now , because of the symbol - to - bit mapping in ( [ linearcomb ] ) , will differ from its nearest euclidean neighbors certainly in the lsb position , and may or may not differ in other bit positions .consequently , the lsbs of the symbols in the rts output are least reliable .the above observation then led us to consider improving the reliability of the lsbs of the rts output using the bp algorithm in , and iterate between rts and bp as follows .= 2.50 cm = 8.4 cm [ fig3 ] _ proposed hybrid rts - bp algorithm : _ figure [ fig3 ] shows the block schematic of the proposed hybrid rts - bp algorithm .the following four steps constitute the proposed algorithm .* _ step 1 : _ obtain using the rts algorithm .obtain the output bits , , , from and ( [ linearcomb ] ) . *_ step 2 : _ using the s from step 1 , reconstruct the interference from all bits other than the lsbs ( i.e. 
, interference from all bits other than s ) as where ^t$ ] .cancel the reconstructed interference in ( [ intf ] ) from * y * as * _ step 3 : _ run the bp - gai algorithm in sec .[ sec_bp ] on the vector in step 2 , and obtain an estimate of the lsbs .denote this lsb output vector from bp as .now , using from the bp output , and the , from the rts output in step 1 , reconstruct the symbol vector as * _ step 4 : _ repeat steps 1 to 3 using as the initial vector to the rts algorithm .the algorithm is stopped after a certain number of iterations between rts and bp .our simulations showed that two iterations between rts and bp are adequate to achieve good improvement ; more than two iterations resulted in only marginal improvement for the system parameters considered in the simulations .since the complexity of bp part of rts - bp is less than that of the rts part , the order of complexity of rts - bp is same as that of rts .= 7.3 cm = 9.8 cm [ fig4 ][ sec4 ] in this section , we present the uncoded and coded ber performance of the proposed rts - bp algorithm evaluated through simulations. perfect knowledge of is assumed at the receiver ._ performance in large v - blast systems : _ figure [ fig4 ] shows the uncoded ber performance of v - blast with 16- and 64-qam . performance of both rts - bp as well as rts are shown .it can be seen that , at an uncoded ber of , rts - bp performs better than rts by about 3.6 db for 64-qam and by about 1.6 db for 16-qam .this illustrates the effectiveness of the proposed hybrid rts - bp approach .also , this improvement in uncoded ber is found to result in improved coded ber as well , as illustrated in fig .[ fig5 ] . in fig .[ fig5 ] , we have plotted the turbo coded ber of rts - bp and rts in v - blast with 64-qam for rate-1/2 ( 96 bps / hz ) and rate-3/4( 144 bps / hz ) turbo codes .it can be seen that , at a coded ber of , rts - bp performs better than rts by about 1.5 db at 96 bps / hz and by about 2.5 db at 144 bps / hz . = 7.3 cm = 9.8 cm [ fig5 ] _ performance in large non - orthogonal stbc mimo systems : _ we also evaluated the ber performance of large non - orthogonal stbc mimo systems with higher - order qam using rts - bp detection .figure [ fig6 ] shows the uncoded ber of and non - orthogonal stbc from cyclic division algebra for 16-qam . hereagain , we can see that rts - bp achieves better performance than rts ._ performance in frequency - selective large v - blast systems : _ we note that the performance plots in figs . [ fig4 ] to [ fig6 ] are for frequency - flat fading , which could be the fading scenario in mimo - ofdm systems where a frequency - selective fading channel is converted to frequency - flat channels on multiple subcarriers .rts - bp , rts , and las algorithms , being suited to work well in large dimensions , can be applied to equalize signals in frequency - selective channels in large - mimo systems . 
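before presenting those frequency-selective results, the four-step hybrid procedure described above can be summarized in code. in the python sketch below, a zero-forcing initialization followed by a coordinate-wise likelihood-ascent search stands in for the full reactive tabu search, and a hard-decision zero-forcing step on the interference-cancelled observation stands in for the bp-gai refinement of the lsbs; only the symbol-to-bit decomposition, the cancellation of the non-lsb interference, and the feedback of the reconstructed vector mirror the actual algorithm, so this is a structural sketch rather than the paper's detector.

```python
import numpy as np

def pam_points(M):
    """M-PAM alphabet: odd integers -(M-1), ..., M-1"""
    return np.arange(-(M - 1), M, 2).astype(float)

def bits_from_symbols(x, M):
    """b[:, j] in {-1,+1} with x = sum_j 2**j * b[:, j]  (lsb is j = 0)"""
    nbits = int(np.log2(M))
    b = np.zeros((len(x), nbits))
    r = x.astype(float)
    for j in range(nbits - 1, -1, -1):
        b[:, j] = np.where(r >= 0, 1.0, -1.0)
        r -= b[:, j] * 2 ** j
    return b

def local_search(y, H, M, x0):
    """coordinate-wise likelihood-ascent search (stand-in for reactive tabu search)"""
    x, pts = x0.copy(), pam_points(M)
    improved = True
    while improved:
        improved = False
        for k in range(len(x)):
            costs = [np.sum((y - H @ np.r_[x[:k], p, x[k+1:]]) ** 2) for p in pts]
            best = pts[int(np.argmin(costs))]
            if best != x[k]:
                x[k], improved = best, True
    return x

def hybrid_lsb_refinement(y, H, M, iters=2):
    pts = pam_points(M)
    # zero-forcing initialization, quantized to the PAM alphabet
    x = pts[np.argmin(np.abs(np.linalg.lstsq(H, y, rcond=None)[0][:, None] - pts), axis=1)]
    for _ in range(iters):
        x = local_search(y, H, M, x)                 # step 1: search (rts placeholder)
        b = bits_from_symbols(x, M)
        x_msb = x - b[:, 0]                          # contribution of all non-lsb bits
        y_tilde = y - H @ x_msb                      # step 2: cancel that interference
        z = np.linalg.lstsq(H, y_tilde, rcond=None)[0]
        b0 = np.where(z >= 0, 1.0, -1.0)             # step 3: lsb refinement (bp placeholder)
        x = x_msb + b0                               # step 4: reconstruct and feed back
    return x

# toy example: 4x4 real-valued system, 4-PAM per real dimension
rng = np.random.default_rng(2)
H = rng.normal(size=(4, 4))
x_true = rng.choice(pam_points(4), size=4)
y = H @ x_true + 0.1 * rng.normal(size=4)
print(x_true, hybrid_lsb_refinement(y, H, 4))
```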
following the equivalent real - valued system model of the form in ( [ eqn2 ] ) for frequency - selective mimo systems developed in , we evaluated the performance of rts - bp , rts , and las algorithms in v - blast with 16-qam on a frequency selective channel with equal energy multipath components and symbols per frame .figure [ fig7 ] shows the superior performance of the rts - bp algorithm over the rts and las algorithms in this frequency - selective large - mimo system with 16-qam .= 7.3 cm = 10.0 cm [ fig6 ] = 7.3 cm = 10.0 cm [ fig7 ][ sec5 ] we proposed a hybrid algorithm that exploited the good features of the rts and bp algorithms to achieve improved bit error performance and nearness to capacity performance for -qam signals in large - mimo systems at practically affordable low complexities .we illustrated the performance gains of the proposed hybrid approach over the rts algorithm in flat - fading as well as frequency - selective fading for large v - blast as well as large non - orthogonal stbc mimo systems .we note ( e.g. , from the performance plots for 64-qam in figs .[ fig1 ] and [ fig5 ] ) that further improvement in performance beyond what is achieved by the proposed hybrid rts - bp algorithm could be possible .investigation of alternate detection strategies to achieve this possible improvement is a subject for further investigation .
low - complexity near - optimal detection of large - mimo signals has attracted recent research . recently , we proposed a local neighborhood search algorithm , namely _ reactive tabu search _ ( rts ) algorithm , as well as a factor - graph based _ belief propagation _ ( bp ) algorithm for low - complexity large - mimo detection . the motivation for the present work arises from the following two observations on the above two algorithms : rts works for general -qam . although rts was shown to achieve close to optimal performance for 4-qam in large dimensions , significant performance improvement was still possible for higher - order qam ( e.g. , 16- and 64-qam ) . bp also was shown to achieve near - optimal performance for large dimensions , but only for alphabet . in this paper , we improve the large - mimo detection performance of higher - order qam signals by using a hybrid algorithm that employs rts and bp . in particular , motivated by the observation that when a detection error occurs at the rts output , the least significant bits ( lsb ) of the symbols are mostly in error , we propose to first reconstruct and cancel the interference due to bits other than lsbs at the rts output and feed the interference cancelled received signal to the bp algorithm to improve the reliability of the lsbs . the output of the bp is then fed back to rts for the next iteration . our simulation results show that in a v - blast system , the proposed rts - bp algorithm performs better than rts by about 3.5 db at uncoded ber and by about 2.5 db at rate-3/4 turbo coded ber with 64-qam at the same order of complexity as rts . we also illustrate the performance of large - mimo detection in frequency - selective fading channels .
one of the most pervasive tendencies of humans is putting things in ranking order . in human societies these tendencies are reflected in their social interactions and networks being hierarchical in many respects .hierarchies and ranks emerge due to individuals subjective perceptions that some other individuals are in some respect better .then a relevant research question is whether or not the formation and structure of hierarchies in human societies can be understood by making the assumption that the dominant driving force of people in social interactions is to enhance their own `` value '' or `` status '' relative to others .we call this assumption `` better than - hypothesis '' ( bth ) and note that it is closely related to the thinking of the school of individual psychology founded by adler in the early 1900s , which , while starting with the assumption that human individuals universally strive for `` superiority '' over others , emphasizes inferiority avoidance as a motive for many human actions . further studies of this kind of individuals status - seeking behaviour , especially concerning consumer behaviour and economics , include the canonical references by veblen , duesenberry and packard ( see also refs ) .in addition there is a closely related sociological model called social dominance theory , which proposes that the construction and preservation of social hierarchies is one of the main motivations of humans in their social interactions and networks .however , the most relevant observational facts concerning bth come from the field of experimental economics , especially from the results of experiments on the so - called `` ultimatum game '' , where the human players have been shown to reject too unequal distributions of money .the concept of _ inequity aversion _ , that is the observed social phenomenon of humans preferring equal treatment in their societies , is often invoked to explain these observations .recently some models featuring inequity aversion have been proposed in refs . .all of these models , although from different fields of study , have something to do with the relative standings between different human individuals and groups , and so they could all be considered to emerge from or be based on a single principle such as bth .it is this generality which makes bth an intriguing and interesting object of study .there are even some studies on economic data , such as , that suggest a link between relative social standings and human well - being , and considerations of social status have measurable effects on brain functions , as shown in e.g. .these studies imply that bth could well be something fundamental to human nature . the competition for a better hierarchical position among humans can be intense and sometimes even violent .however , humans have other characteristics including egalitarianism as well as striving for fairness .these traits could be interpreted in the context of bth by remarking that people need to live in societies and make diverse social bonds , which in turn would contribute to their social status .this means that the members of society when they make decisions , need to take the feelings of others into account .hence the behavioral patterns of individuals in social networks should then be characterised by sensitivity to the status of the other individuals in the network .this sensitivity manifests itself as inequity aversion and treating others fairly . 
to find out what in this context are the plausible and relevant mechanisms of human sociality driving societal level community formation we will focus on improving the bth - based approach by using the frame of agent - based models and studying the emergence of social norms in such social systems , following the tradition presented in refs . . in this studywe use an agent - based network model applying bth - based approach to simulate social interactions dependent on societal values and rank , to get insight to their global effects on the structure of society .we find that in such a model society with a given constant ranking system the social network forms a degree hierarchy on top of the ranking system under bth , such that the agents degrees tend to increase , the further away their rank is from the average .the structure of the paper is as follows . in section [ model ]we motivate the basics of bth using the simple and well - researched ultimatum game as an example , and in section [ modelv1 ] we show how the findings from this can be utilised as a part of agent - based models .in section [ nc ] we present the numerical results of the simulations from the model , and in section [ meanfield ] we analyse them .the two final sections discuss the possible interpretations of the results and present the conclusions .in this section we describe the theoretical basis for our model .we start by analysing the ultimatum game first proposed in , as it allows us to derive a basic form for the social gain function in our model .the ultimatum game is a game with two players , where one player has the task to make a proposal to the other player about how a given sum of money should be divided between them .the second player then gets to choose if the proposal is acceptable or not ; if it is , the money is divided as proposed .if not , neither player gets anything .experiments show that humans playing this game normally do not accept deals that are perceived to be unfair , i.e. in situations in which the proposer gets too large a share of the money ( see , e.g. refs .this is a classic problem in the mainstream economics , where humans are assumed to be rational and , therefore , accept something rather than nothing .we implement bth in the ultimatum game by interpreting the money used in a deal as a way of comparing the status between the one who accepts the proposal ( called from now on the accepter ) and its proposer .we denote the change of `` status '' of the accepter as , which takes into account it s own monetary gain , and the gain in relation to the proposer .therefore , the simplest expression for is , \\ & -[r_a(t_0 ) - r_p(t_0 ) ] , \end{aligned } \label{perusta}\ ] ] where and stand for the monetary reserves ( in the context of the game ) of the accepter and proposer , respectively , at time , with being the time before the deal and the time after the deal . in terms of economic theory , would be called the accepter s change of utility , which is ordinarily assumed to consist of the term or the absolute payoff of the accepter . the additional terms -[r_a(t_0 ) - r_p(t_0)] ] and is a small increment . in the spirit of simulated annealing techniques ,the magnitude of the change is larger at the beginning of the dynamics and falls linearly with time to a minimum value , the maximum and minimum values being and , respectively , and the time period to reach the minimum is 1000 time steps . 
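A minimal sketch of the hill-climbing opinion update just described is given below. It assumes the total social value of an agent is supplied as a callable (its explicit form, eq. (totsosval), is not recoverable from this extraction), leaves the maximum and minimum increments, elided in the text, as free parameters, and uses an assumed clipping range for the opinion variables.

```python
import numpy as np

def trial_increment(t, delta_max, delta_min, t_anneal=1000):
    """Step size of the hill-climbing move: decreases linearly from delta_max
    to delta_min over t_anneal time steps, then stays at delta_min."""
    frac = min(t / float(t_anneal), 1.0)
    return delta_max + (delta_min - delta_max) * frac

def hill_climb_opinions(x, social_value, t, delta_max, delta_min,
                        x_bounds=(-1.0, 1.0)):
    """One sweep of the opinion update: each agent keeps a trial change of its
    opinion only if it increases its own total social value.

    social_value(i, x) is a placeholder for eq. (totsosval);
    x_bounds is an assumed clipping range for the opinion variables."""
    delta = trial_increment(t, delta_max, delta_min)
    for i in range(x.size):
        best = social_value(i, x)
        for step in (+delta, -delta):         # try moving the opinion up, then down
            trial = x.copy()
            trial[i] = np.clip(x[i] + step, *x_bounds)
            gain = social_value(i, trial)
            if gain > best:                   # greedy acceptance (hill climbing)
                x, best = trial, gain
    return x
```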
in general , the links between the agents in the social network may change in time for which purpose we use the following rewiring scheme of ref .the social network of the agents is initially random , but will change periodically , i.e. at every time steps of the dynamical eq .( [ dm ] ) . given the definition of the total social value of an agent in eq .( [ totsosval ] ) , the gain function eq.([altdel ] ) can be used to calculate the loss or gain in total social status when forming or breaking new social bonds . in this study , we take any positive gain as sufficient to justify the rearrangement of social relations between the agents . when agent considers cutting an existing bond with agent , the gain function has the form where is the current number of neighbours of agent .similarly , when agent considers forming a new bond with agent , the social gain function reads since any positive change indicated by the functions above leads to rewiring , it is the sign of these functions that determines whether or not links between agents are broken or created .for instance , if , the link between agents and will be cut , and preserved if . in the same vein , a link between agents and will be created if , and not created otherwise .it should be noted that when forming links the opinions of both agents are taken into account : the relation formation only succeeds if and are both positive .the agents will form all the relationships they can in a rewiring cycle .the numerical simulations of the models of social system described in the previous sections are performed as follows .first , the initial state of the system is set at random with the agents given a relatively small initial opinion between and , a ranking parameter between and maximum value of and initial connections to other agents , with initial average degree of .the opinion and ranking parameters are chosen using a random number generator , which returns a flat distribution .the dynamics are then run for time steps , which is , according to our test runs , sufficient for the general structure of the network to settle .however , the dynamics of the opinion variables do not have a set stopping point , so they may experience fluctuations even when such fluctuations do not have an effect on the network structure anymore .to obtain reliable statistics , the same simulations are repeated times with random initial values , and averages are calculated from these repeated tests for the quantities under study . the rewiring timescale is fixed to in our simulations , since this value lies in the range where communities are formed in the opinion formation model of ref .the main parameter whose effect is studied here is the number of simulated agents , .the main objective of this research is to study the structure of the social networks created under bth assumption in the case of a rigid ranking system , which we perform using the model explained in section [ modelv1 ] .the most interesting properties of the system are then associated with assortativity , or the tendency of agents with high degrees connecting to other highly connected agents , and homophily , or the inclination of similar agents forming connections between each other . in the context of this study, homophily refers to agents with similar ranking parameters forming connections with each other .the averaged numerical results extracted from the simulations consist then of the standard network properties , i.e. 
degree the shortest path , the average clustering coefficient , the mean number of second neighbours , susceptibility and average assortativity coefficient , and a homophily coefficient .susceptibility here refers to average cluster size , which is calculated as the second moment of the number of sized clusters , : as customary in percolation theory , the largest connected component of the network is not counted in calculating . for the assortativity coefficient we use the definition given in , and the homophily coefficientis defined using pearson s product moment coefficient , which measures the goodness of a linear fit to a given data . for a sampleit can be defined as where and are vectors containing the value parameters of agents linked by link , and are the mean values of these vectors , respectively , and is the total number of links . more specifically , if agents and are connected by link , then and . the links are indexed as follows : the links involving the first agent are given the first indices , then follow the links involving the second agent but not the first , and so on , without repeating links that have already been indexed .it should be noted , however , that only measures linear correlation between the ranking parameters of linked agents , it does not indicate how steep these trends are . to check whether the system is truly homophilic , then , one needs to make a linear fit to the data : the closer the obtained linear coefficients are to , the greater the homophily .= 0.85 the average network properties of the system , with graphs illustrating the behaviour of the system are shown in fig .[ fig : np100r ] as functions of the population size , which is varied between and .the main observations that can be made about the graphs in fig.[fig : np100r](a ) are that at lower population levels they show a tendency of breaking apart into many subcomponents of different sizes , while for larger population sizes they tend to consist of a single large component and possibly some smaller separate clusters . a noteworthy fact about these clustersis that they consist of agents with similar values of the ranking pressure , which means that the network exhibits homophily in this case .the largest clusters are found at extreme values , and they become smaller when one approaches , which also corresponds to average ranking parameters . in the high population casethe picture becomes more complicated due to the emergence of clusters that contain agents with opposing opinions as well .these new clusters tend to be less connected than the previously described homogeneous ones , and they tend to connect to the large subgraphs , thus forming a single giant graph .a closer look reveals that these agents generally have opinion variables and ranking pressures with opposite signs and are depicted as triangles in fig .[ fig : np100r ] and named ` contrarians' from now on .a naive analysis would indicate that the agents with positive ranking pressures should always support the ascending hierarchy , and the agents with negative ranking pressures should always support the descending hierarchy .however , the contrarian agents exhibit opposite preferences .the reason why this behaviour is status - wise profitable can be found by looking into the connections of the contrarian agents , details of which are shown in figs .[ fig:1bis ] and [ fig : scatter2 ] .as it turns out , most of the connections they form are to other similar agents but with opposite `` polarity '' to theirs , i.e. 
the contrarian agent with negative ranking pressure forms connections mostly with contrarian agents of positive ranking pressure , and vice versa .an important quantity in the model is the social value , which is a product of the opinion and ranking pressure , as it is seen in eq .[ totsosval ] .if and have opposite signs then the `` self esteem '' part of is negative , which in itself does not mean that the agent can not develop a contrarian opinion , since the second sum in the equation could be positive because it depends on the opinions of the neighbours .this allows the contrarian to be able to make connections with agents of the same or opposite ranking pressure .additionally , by looking at all connections among contrarians we find that they mostly have of the same sign as , as illustrated in the example of fig .[ fig:1bis ] .= 0.85 this situation can be status - wise beneficial to all parties involved , since the small penalty to an agent s self - esteem is more than compensated by the respect that the agent will gain in this case from other agents .the fact that the agents could find this strategy using as primitive an intelligence setup as hill climbing is astounding .another interesting thing about the contrarians is that they appear mostly as connections between clusters that are defined as communities of `` normal '' agents .the various kinds of behaviour exhibited by the social networks have a marked effect on the network properties also shown in fig .[ fig : np100r](b ) .the most obvious is the gradually rising normalised maximum cluster size ( ) , which is about 40% of total population size for , and over 95% for . from the figure it seems that the maximum cluster size reaches 50% of the population size for approximately ,after which point we may assume that the contrarian behavioural patterns start to become progressively more pronounced .the susceptibility at first rises pretty much linearly , which is not too surprising because of the tendency of network to break into smaller subgraphs at low population sizes. however , once the population size reaches about , the susceptibility starts to decay , most likely due to the main component of the network becoming more prominent , with a decay pattern that is almost piecewise linear itself , apart from fairly large fluctuations . while fairly high throughout , the homophily and clustering coefficients gradually fall as functions of population size , almost certainly due to the proliferation of contrarians .there are no great changes in other network properties , as in the average assortativity coefficient ( ) , although some faint systematic tendencies can be discerned , a slight rising of the average path length ( ) , as well as slightly decreasing average number of clusters ( ) .the rest of the properties per agent are nearly constant , a slight rise of the average number number of second neighbours ( ) , and a just perceptible decrease of the average cluster size ( ) and average degree ( ) . 
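For reference, the homophily coefficient and the susceptibility used above can be computed directly from a simulated network. The sketch below follows the definitions given earlier; where the original susceptibility formula is garbled, the standard percolation normalisation (second moment of the finite-cluster sizes divided by their first moment, giant component excluded) is assumed.

```python
import numpy as np
import networkx as nx

def homophily_coefficient(G, rank):
    """Pearson product-moment coefficient between the ranking parameters of the
    two endpoints of every link; values near 1 indicate strong homophily."""
    v1 = np.array([rank[i] for i, j in G.edges()])
    v2 = np.array([rank[j] for i, j in G.edges()])
    return np.corrcoef(v1, v2)[0, 1]

def homophily_slope(G, rank):
    """Slope of the linear fit of rank[j] against rank[i] over all links, used
    above to check that the correlation is genuinely homophilic."""
    v1 = np.array([rank[i] for i, j in G.edges()])
    v2 = np.array([rank[j] for i, j in G.edges()])
    return np.polyfit(v1, v2, 1)[0]

def susceptibility(G):
    """Average cluster size in the percolation sense, excluding the largest
    connected component (normalisation sum s^2 n_s / sum s n_s assumed)."""
    sizes = sorted((len(c) for c in nx.connected_components(G)), reverse=True)
    finite = np.asarray(sizes[1:], dtype=float)   # drop the giant component
    return 0.0 if finite.size == 0 else (finite ** 2).sum() / finite.sum()
```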
a way to illustrate the homophily of the system is to make a scatterplot of the ranking parameters of linked agents .as it is seen in fig.[fig : scatter2 ] , the correlation turns out to be very homophilous , as the ranking parameters of linked agents correspond very closely to one other .the emergence of the contrarians is also clearly seen : for , the percentage of contrarians in 10 realisations is 5.4% , while for is 13.8% , for is 24.9% and for one realisation in in a network of 1000 agents is 38.5% .there is also a clear decreasing linear trend due to the contrarians .the rising trend is without doubt caused by the normal agents , who tend to associate with agents of similar rank , and the decreasing trend is likewise due to the contrarians .both trends have a similar tendency to form square - like patterns along the diagonals , with each `` square '' corresponding to some of the many visible communities of the graphs .= 0.85 + as explained above , it is necessary to check whether the correspondence of the rankings is truly homophilic . in table[ tab:1 ] we show the value of the homophily coefficient of normal and contrarian agents for networks of various sizes .the data were taken from 10 different numerical realisations in each case .observe that normal agents have values very near one , and contrarians are around one half .also in the table we show the slope of the regression of vs. .normal agents are very close to one , indicating high degree of homophily , and the contrarians are negative and around 0.6 , indicating that they mostly form connection with agents that have opposite signs of the ranking pressure , and are much less homophilic ..homophily measurement from 10 numerical realisations of networks with different sizes . [ cols="<,^,^,^,^,^,^,^,^ " , ] from condition([expla4 ] ) it follows that the largest community in comprises of those agents with ranking parameters over , which means that the community will have approximately members , when one takes into account the fact that the ranking parameters are uniformly distributed .the second largest community , likewise , consists of those agents with ranking parameters between and , and has about members .the nth ( ) largest group will have ranking parameters between and , and have members . while ranking parameters naturally vary from agent to agent within the communities , the average value of the ranking parameters of each group falls approximately to the middle point of each ranking range due to the uniform distribution of the parameters : from figs .[ fig : scatter2 ] and [ fig : p1000 ] we see that only a maximum of four to five of these communities exist in practice at any one time , so we limit our approximation to these groups . by assuming the groups to be fully connected , as they seem to be in the graphs , and approximating sums of the products of the degrees and ranking parameters with the products of their average values we get where denotes the average ranking parameter and the number of members of the largest group . substituting the values given in table [ t:1 ] we get which in turn can be inserted into ( [ cgsimple ] ) : finally , using the definition of the condition ( [ cgsimple2 ] ) can be written in the form where . from condition ( [ cgsimple3 ] ) we can see that the probability of agent following the conventional wisdom diminishes with rising and decreasing , which is what we saw happening in our simulations judging from the results shown in the previous section . 
to take an example , for the case the largest community of agents with positive ranking pressures comprises of about agents .this means , according to the inequality ( [ cgsimple3 ] ) , that the agents with could definitively be expected to always choose to have positive opinion variables . from fig .[ fig : scatter2 ] we can tell that the real threshold is closer to , which is , however , in remarkably good agreement with the approximate value of when one takes into the consideration the fact that the appearance of the contrarians themselves was not taken into account in the derivation of ( [ cgsimple3 ] ) , and that for they are already very prominent . if one were to derive the condition equivalent to ( [ cgsimple3 ] ) with contrarian strategies taken into account , one would need to consider the effect that the contrarians have on their neighbours total social value .thus , condition ( [ cgsimple3 ] ) will most likely not hold for networks with larger .the last question we need to address as regards to the contrarians is the fact that they are often embedded in the groups of normal agents .so why would it be status - wise beneficial for an agent using normal strategy to retain , let alone form , a link with a contrarian agent ?let us consider a situation where an agent pursues contrarian strategies in a group of normal agents .returning to inequality ( [ expla1 ] ) , we see that it is acceptable for a normal agent with and maximal to form a link with a contrarian agent with and minimal if and we assume that all the neighbours of agents and also have the maximal opinion parameters .the striking fact about this relation is that it is always fulfilled in this case , meaning that would always find formation of links with contrarians acceptable . from the point of view of agent , however , linking to is only acceptable if which leads to the very same result as before .i.e. will not form a link with unless , if and are to belong to the same group .thus the contrarian agents would behave and be treated as normal agents when forming relations , which is surprising considering that their contribution to the total social value of other agents is negative .the latter fact is demonstrated in the simulations with some of the most counterintuitive behaviours of the model , namely , relations between agents being first broken and immediately reinstated .let us use eq .( [ cutgainij ] ) to determine , whether the agent from the previous calculations would benefit from cutting the link with agent , even when expression ( [ cnrel ] ) says that would also form a link with in the event that such a link did not exist . with the previously stated assumptions , we arrive to the following condition for the link to be cut . from ( [ cncut ] )we see that unless , agent will cut its ties with the contrarians .since for the largest groups , the inequality ( [ cncut ] ) is likely to be true most of the time , leading to links between agents and being cut and immediately reformed repeatedly , since inequality ( [ cnrel ] ) also holds . 
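The cut-and-reform oscillation just described follows directly from the sign-based rewiring rule introduced earlier, which is applied every rewiring cycle of the dynamics. The skeleton below encodes only that decision logic; the gain functions are passed in as placeholders because their explicit expressions are not recoverable here, and the reading that cutting is unilateral while formation requires a positive gain for both endpoints is an assumption based on the text.

```python
import itertools
import networkx as nx

def rewiring_cycle(G, cut_gain, form_gain):
    """One rewiring cycle driven purely by the signs of the gain functions.

    cut_gain(i, j, G)  -> status change for agent i if link (i, j) is cut
    form_gain(i, j, G) -> status change for agent i if link (i, j) is formed
    Both are placeholders for the paper's (garbled) gain expressions."""
    # break every existing bond that at least one endpoint gains from cutting
    for i, j in list(G.edges()):
        if cut_gain(i, j, G) > 0 or cut_gain(j, i, G) > 0:
            G.remove_edge(i, j)
    # form every missing bond that both endpoints gain from creating
    for i, j in itertools.combinations(G.nodes(), 2):
        if not G.has_edge(i, j) and form_gain(i, j, G) > 0 and form_gain(j, i, G) > 0:
            G.add_edge(i, j)
    return G
```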
as suggested above, we have observed this behavioural pattern in our simulations , and to some extent it can be observed in fig .[ fig:1bis ] , in which it is seen that only the connections between contrarian and normal agents and can the signs of and being of opposite signs .it should again be stressed , however , that the calculations above do not take into account the existence of more than one contrarian .having more contrarians in the system allows them to form links between each other , which has a sizeable effect on the overall structure of the network . in summary, it could be said that while conditions ( [ expla3 ] ) and ( [ cnrel2 ] ) provide surprisingly well fitting approximations as to how a given agent chooses to link with other agents , the conditions ( [ cgsimple3 ] ) , ( [ cnrel ] ) and ( [ cncut ] ) ( though pointing to the right direction ) only give vague qualitative explanations for the behaviour of contrarians and can not be expected to yield precise numerical predictions .the interpretation of the ranking parameter serves as the key to find possible parallels between our model and the real world .as it describes a single property of an agent , the links between the agents only correspond to exchanges of opinion on whether the agents with larger are `` better '' than the agents with smaller , or vice versa .agents could be considered as being embedded a larger social context , and in this context they could , in principle , have other social connections . in this case , the results presented in the previous section are best interpreted in terms of echo chambers , which means that agents prefer such a social hierarchy in which they have better relative rank , and seek to communicate their opinion to others .the agents whose ranking parameters are further away from the average , are more vocal in broadcasting their views and gather supporters , since they rank highly in their chosen hierarchy and , therefore , would benefit from their hierarchy becoming more widely accepted . on the other ,the agents with average ranking parameters are much more reluctant to take part in the conversation at all , since they do not rank highly in either of the hierarchies .then the end result for small system sizes is that agents divide themselves according to their ranking pressure into two or more distinct communities supporting opposing hierarchies , in which the agents with similar rankings lump together and refuse to communicate with those that disagree .it is the shutting out of the opposing point of view that makes this system s behaviour reminiscent of echo chambers found in reality .however , with increasing system size the agents develop more nuanced positions on their preferred hierarchies due to mounting social competition , as is seen in the emergence of the contrarians .there is , however , an alternative way to interpret the ranking parameter .it could be taken to represent an aggregate of all the social properties of an agent , thereby representing its total standing in the societal status measures .in this case the connections could represent the totality of the agents social interactions , and the opinion variables the agents attitude to the ( current ) state society at large . 
with this interpretation the rupture between the different communities observed for smaller system sizes would actually represent a real disintegration of the society .this might have implications concerning early human migrations , as they could easily have been influenced by social pressures as well as material needs .if the environmental pressures define a minimum group size necessary for a comfortable life for a tribe , and this minimum is smaller than the limit at which the tribe is forced to be adopting more advanced strategies to enhance social stability , as exemplified by the contrarians in our simulations , the tribe may well split , with splinter groups migrating elsewhere .other than the different economic games , bth can also shed some light into the well known paradox of value , also known as diamond - water paradox , which refers to the fact that diamonds are far more valued in monetary terms than water , even though water is necessary for life and diamonds not . from the bth viewpoint the solution to this paradox is obvious : water , being necessary requirement for life , has to be available in sufficient quantities to all living humans , which means that owning water or its source does not set an individual apart from others , that is , an individual can not really compare favorably to others on grounds of having water .diamonds , on the other hand , are relatively rare , and thus can not be owned by everyone . therefore , an individual possessing diamonds is compared favourably to others , and so diamonds acquire a relatively high value in comparison to water in the minds of humans , in a very similar manner with which the veblen goods become valuable .then bth , in a sense , contains in itself a natural definition of value , although further work is needed to determine how exactly this status - value relates to other forms of value , such as value derived from usefulness or necessity .in section [ model ] we only analysed the behaviour of the accepter , since this is straightforward in comparison to predicting the behaviour of the proposer .the experiments on the ultimatum game often find that the proposers tend to offer fair shares to accepters , which is easily explained in the context of bth by the desire of the proposer to have the proposal accepted : the proposers only offer shares that they would accept themselves , and in this way eq.([accepted ] ) also restricts the proposers offers , although it can not tell the exact amount of money offered . to be able to give a better estimate for the offers one would need to study the learning processes that shape the proposers experience on how uneven treatment people are usually willing to tolerate .this is , however , outside the scope of this paper .the behaviour of dictators in the dictator game is somewhat more difficult to analyse using bth .the dictator game is similar to the ultimatum game , the only difference being that the other player does not even get to make a choice , and only receives what the first player , or dictator , endows .it has been observed that in this game the dictators tend to be rather generous , which is difficult but not impossible to explain in the context of bth , if one takes into account the effect of reputation and other `` social goods '' .the nature of such influence on the behaviour of the dictator will be studied in a later work . 
however , there are some indications that bth could very well be applied to the dictator game when all the social effects are taken into account .it has been reported that when the rules of the dictator game are modified so that instead of giving money to the other players , the dictator gets to take some or all of the money given to the other players ( thus turning the game into a `` taking game '' ) , the dictator s behaviour changes from egalitarian to self serving , i.e. taking often the majority or even all of the available money . from the bth point of view , the dictator s observed behaviour change can potentially be explained in terms of social norms . in the ordinary dictator gamethe dictator may still feel bound by the usual norms of the society , while in the `` taking game '' it is encouraged to go against these norms .this sets the `` taker '' apart from the other player in particular , and other members of the society in general .hence the dictator feels `` better '' than the others when breaking the norms with impunity , and act on this feeling by taking money from the other players .the fact that bth can possibly lead to formation of norms as well as rebellion against these norms is well worth of further studies .relating to the known results of the ultimatum game , we have formulated a hypothesis explaining the observed behaviour of humans in terms of superiority maximization , or `` better than''-hypothesis , and presented a simple agent - based model to implement this hypothesis .the model describes agents with constant ranking parameters and raises the question whether the agents with larger ranks are `` better '' than agents with smaller ranks or the other way around .we have found that the social system produced by our model , features homophily , meaning that agents forming social ties with other agents with similar ranking parameters , and assortativity , describing the tendency of highly / lowly connected agents forming links with other highly / lowly connected agents .in addition we find community formation , both in terms of there being communities with opposing opinions and in terms of the communities with the same opinion fracturing into smaller ones according to their ranking parameters .furthermore , we have observed the formation of a hierarchy , in the sense of a connectivity hierarchy being formed on top of the one defined by the ranking parameters , with the agents with extreme ranking parameters presenting higher connectivity than the agents with average ranking parameters .moreover , we have found that the resulting social networks tend to be disconnected for small system sizes , but mostly connected for larger system sizes .this fact may have some relevance for research of early human migrations , hinting of the effects of social pressure in shaping the social network .j.e.s . acknowledges financial support from niilo helander s foundation , g.i . acknowledges a visiting fellowship from the aalto science institute , and k.k .acknowledges financial support by the academy of finland research project ( cosdyn ) no .276439 and eu horizon 2020 fet open ria project ( ibsen ) no .r.a.b . wants to thank aalto university for kind hospitality during the development of this work .rab acknowledges financial support from conacyt through project 799616 .we acknowledge the computational resources provided by the aalto science - it project .fagundes , m. s. , ossowski , s. , cerquides , j. & noriega , p. 
( 2016 ) design and evaluation of norm - aware agents based on normative markov decision processes . _ international journal of approximate reasoning _ , * 78 * , 33 - 61 . henrich , j. , boyd , r. , bowles , s. , camerer , c. , fehr , e. & gintis , h. ( 2004 ) _ foundations of human sociality : economic experiments and ethnographic evidence from fifteen small - scale societies _ . oxford university press .
in human societies , people s willingness to compete and strive for better social status as well as being envious of those perceived in some way superior lead to social structures that are intrinsically hierarchical . here we propose an agent - based , network model to mimic the ranking behaviour of individuals and its possible repercussions in human society . the main ingredient of the model is the assumption that the relevant feature of social interactions is each individual s keenness to maximise his or her status relative to others . the social networks produced by the model are homophilous and assortative , as frequently observed in human communities and most of the network properties seem quite independent of its size . however , it is seen that for small number of agents the resulting network consists of disjoint weakly connected communities while being highly assortative and homophilic . on the other hand larger networks turn out to be more cohesive with larger communities but less homophilic . we find that the reason for these changes is that larger network size allows agents to use new strategies for maximizing their social status allowing for more diverse links between them . community formation , opinion formation , social hierarchy
in recent years , gaussian filters have assumed a central role in image filtering and techniques for accurate measurement .the implementation of the gaussian filter in one or more dimensions has typically been done as a convolution with a gaussian kernel , that leads to a high computational cost in its practical application .computational efforts to reduce the gaussian convolution complexity are discussed in .more advantages may be gained by employing a _ spatially recursive filter _ , carefully constructed to mimic the gaussian convolution operator . + recursive filters ( rfs ) are an efficient way of achieving a long impulse response , without having to perform a long convolution . initially developed in the context of timeseries analysis , they are extensively used as computational kernels for numerical weather analysis , forecasts , digital image processing .recursive filters with higher order accuracy are very able to accurately approximate a gaussian convolution , but they require more operations .+ in this paper , we investigate how the rf mimics the gaussian convolution in the context of variational data assimilation analysis .variational data assimilation ( var - da ) is popularly used to combine observations with a model forecast in order to produce a _ best _ estimate of the current state of a system and enable accurate prediction of future states .here we deal with the three - dimensional data assimilation scheme ( 3d - var ) , where the estimate minimizes a weighted nonlinear least - squares measure of the error between the model forecast and the available observations .the numerical problem is to minimize a cost function by means of an iterative optimization algorithm .the most costly part of each step is the multiplication of some grid - space vector by a covariance matrix that defines the error on the forecast model and observations .more precisely , in 3d - var problem this operation may be interpreted as the convolution of a covariance function of background error with the given forcing terms .+ here we deal with numerical aspects of an oceanographic 3d - var scheme , in the real scenario of oceanvar .ocean data assimilation is a crucial task in operational oceanography and the computational kernel of oceanvar software is a linear system resolution by means of the conjugate gradient ( gc ) method , where the iteration matrix is relate to an errors covariance matrix , having a gaussian correlation structure .+ in , it is shown that a computational advantage can be gained by employing a first order rf that mimics the required gaussian convolution .instead , we use the 3rd - rf to compute numerically the gaussian convolution , as how far is only used in signal processing , but only recently used in the field of var - da problems .+ in this paper we highlight the main sources of error , introduced by these new numerical operators .we also investigate the real benefits , obtained by using 1-st and 3rd - rfs , through a careful error analysis .theoretical aspects are confirmed by some numerical experiments . finally , we report results in the case study of the oceanvar software . + the rest of the paper is organized as follows . in the next section we recall the three - dimensional variational data assimilation problem and we remark some properties on the conditioning for this problem .besides , we describe our case study : the oceanvar problem and its numerical solution with cg method . 
in section iii , we introduce the -th order recursive filter and how it can be applied to approximate the discrete gaussian convolution . in section iv, we estimate the effective error , introduced at each iteration of the cg method , by using 1st - rf and 3rd - rf instead of the gaussian convolution . in sectionv , we report some experiments to confirm our theoretical study , while the section vi concludes the paper .the aim of a generic variational problem ( var problem ) is to find a best estimate , given a previous estimate and a measured value . with these notations ,the var problem is based on the following regularized constrained least - squared problem : where is defined in a grid domain .the objective function is defined as follows : where measured data are compared with the solution obtained from a nonlinear model given by .+ in ( [ ls pb ] ) , we can recognize a quadratic data - fidelity term , the first term and the general regularization term ( or penalty term ) , the second one .when and the regularization term can be write as : we deal with a three - dimensional variational data assimilation problem ( 3d - var da problem ) .the purpose is to find an optimal estimate for a vector of states ( called the analysis ) of a generic system , at each time given : * a prior estimate vector ( called the background ) achieved by numerical solution of a forecasting model , with error ; * a vector of observations , related to the nonlinear model by that is an effective measurement error : at each time t , the errors in the background and the errors in the observations are assumed to be random with mean zero and covariance matrices and , respectively .more precisely , the covariance of observational error is assumed to be diagonal , ( observational errors statistically independent ) .the covariance of background error is never assumed to be diagonal as justified in the follow .to minimize , with respect to and for each , the problem becomes : in explicit form , the functional cost of ( [ da pb ] ) problem can be written as : it is often numerically convenient to approximate the effects on of small increments of , using the linearization of . for small increments ,follows , it is : where the linear operator is the matrix obtained by the first order approximation of the jacobian of evaluated at . +now let be the _misfit_. then the function in ( [ da1 pb ] ) takes the following form in the increment space : at this point , at each time , the minimum of ( [ da2 pb ] ) is obtained by requiring .this gives rise to the linear system : or equivalently : for each time , iterative methods , able to converge toward a practical solution , are needed to solve the linear system ( [ system1 ] ) .however this problem , so as formulated , is generally very ill conditioned . more precisely , by following , and assuming that [ lambda_m ] is a diagonal matrix , it can be proved that the conditioning of is strictly related to the conditioning of the matrix ( the covariance matrix ) . in general , the matrix is a block - diagonal matrix , where each block is related to a single state of vector and it is ill conditioned . +this assertion is exposed in starting from the expression of for one - state vectors as : where is the background error variance and is a matrix that denotes the correlation structure of the background error . 
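The ill-conditioning argument above can be checked numerically: for a homogeneous correlation structure the matrix is Toeplitz, and taking the conventional Gaussian shape for the correlation (an assumption of this sketch) its condition number grows rapidly with the correlation length-scale.

```python
import numpy as np

def gaussian_correlation_matrix(m, L, dx=1.0):
    """Homogeneous correlation matrix: entries depend only on the distance
    between grid points (Toeplitz structure), taken here with the conventional
    Gaussian form exp(-d^2 / (2 L^2)), L being the correlation length-scale."""
    idx = np.arange(m)
    d = np.abs(idx[:, None] - idx[None, :]) * dx
    return np.exp(-d ** 2 / (2.0 * L ** 2))

# the conditioning of B = sigma_b^2 * C worsens rapidly as the correlation
# length-scale grows relative to the grid spacing
for L in (1.0, 2.0, 4.0):
    C = gaussian_correlation_matrix(100, L)
    print(L, np.linalg.cond(C))
```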
assuming that the correlation structure of matrix is homogeneous and depends only on the distance between states and not on positions , an expression of as a symmetric matrix with a circulant form is given ; i. e. as a toeplitz matrix . by means of a spectral analysis of its eigenvalues , the ill - conditioning of the matrix is checked . as in , it follows that is ill - conditioned and the matrix , of the linear system ( [ system1 ] ) , too . a well - known technique for improving the convergence of iterative methods for solving linear systems is to _ preconditioning _ the system and thus reduce the condition number of the problem .+ in order to precondition the system in ( [ system1 ] ) , it is assumed that can be written in the form , where is the square root of the background error covariance matrix . because is symmetric gaussian , is uniquely defined as the symmetric ( ) gaussian matrix such that . + as explained in , the cost function ( [ da2 pb ] ) becomes : \ \\quad \!=\ !\frac{1}{2 } ( d_t \!- \ ! { \bf h } \delta x_t)^t { \bf r}^{-1 } ( d_t \!- \ !{ \bf h } \delta x_t ) + \frac{1}{2 } \delta x_t^t ( { \bf v^t})^{-1 } { \bf v}^{-1}\delta x_t \end{array}\ ] ] now , by using a new control variable , defined as , at each time and observing that we obtain a new cost function : equation ( [ da3 pb ] ) is said the _ dual problem _ of equation ( [ da2 pb ] ) .finally , to minimize the cost function in ( [ da3 pb ] ) leads to the new linear system : upper and lower bounds on the condition number of the matrix are shown in . in particularit holds that : moreover , under some special assumptions , it can be proved that is very well - conditioned ( ) . as described in , at each time , oceanvar software implements an oceanographic three - dimensional variational da scheme ( 3d var - da ) to produce forecasts of ocean currents for the mediterranean sea .the computational kernel is based on the resolution of the linear system defined in ( [ system2 ] ) . to solve it , the conjugate gradient ( cg ) methodis used and a basic outline is described in * algorithm 1*. [ cg ] ; , the initial guess ; + ; + ; + ; ; ; + ; ; + ; ; + [ cga ] we focus our attention on step 5 . :at each iterative step , a matrix - vector product is required , where is the residual at step and depends on the number of observations and is characterized by a bounded norm ( see for details ) .more precisely , we look to the matrix - vector product which can be schematized as shown in * algorithm 2*. [ five_gaussian ] ; + ; + ; + ; [ pcg ] the steps 1 . and 3 .in * algorithm 2 * consist in a matrix - vector product .these products , as detailed in next section , can be considered discrete gaussian convolutions and the matrix , for one - dimensional state vectors , has gaussian structure . even for state vectors defined on two ( or more ) dimensions , the matrix can be represented as product of two ( or more ) gaussian matrices . since a single matrix - vector product of this form becomes prohibitively expensive if carried out explicitly , a computational advantage is gained by employing gaussian rfs to mimic the required gaussian convolution operators . + in the previous oceanvar scheme , it was implemented a 1st - rf algorithm , as described in . 
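A compact way to see what step 5 of Algorithm 1 and its decomposition in Algorithm 2 amount to is the following one-dimensional sketch of the matrix-vector product q = (I + V H^T R^{-1} H V) r, assuming V symmetric and R diagonal as stated above. The application of V is done here with an off-the-shelf Gaussian smoother purely as a stand-in: this is exactly the operation that the recursive filters of the next section are meant to replace.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def apply_V(r, sigma):
    """Square root of the background-error covariance applied as a Gaussian
    convolution; this is the operator the recursive filters approximate."""
    return gaussian_filter1d(r, sigma, mode="constant")

def matvec(r, H, R_inv_diag, sigma):
    """Step 5 of Algorithm 1, decomposed as in Algorithm 2:
    q = (I + V H^T R^{-1} H V) r, with V symmetric and R diagonal."""
    w = apply_V(r, sigma)                  # step 1: Gaussian convolution
    z = H.T @ (R_inv_diag * (H @ w))       # step 2: observation-space terms
    u = apply_V(z, sigma)                  # step 3: Gaussian convolution
    return r + u                           # step 4: add the identity part

# usage sketch (hypothetical names): wrap matvec for SciPy's conjugate gradient
# from scipy.sparse.linalg import LinearOperator, cg
# A = LinearOperator((n, n), matvec=lambda r: matvec(r, H, R_inv_diag, sigma))
# rhs = apply_V(H.T @ (R_inv_diag * d), sigma)
# v_hat, info = cg(A, rhs)
```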
here , we study the 3rd - rf introduction , based on .+ the aim of the following sections is to precisely reveal how the -th order recursive filters are defined and , through the error analysis , to investigate on their effect in terms of error estimate and perfomences .in this section we describe gaussian recursive filters as approximations of the discrete gaussian convolution used in steps 1 . and 3 .of * algorithm 2*. let denote by the normalized gaussian function and by the square matrix whose entries are given by now let be a vector ; the discrete gaussian convolution of is a new vector defined by means of the matrix - vector product the discrete gaussian convolution can be considered as a discrete representation of the continuous gaussian convolution .as is well known , the continuous gaussian convolution of a function with the normalized gaussian function is a new function defined as follows : (x ) = \int_{-\infty}^{+\infty } g(x- \tau ) s^0(\tau ) d \tau.\ ] ] discrete and continuous gaussian convolutions are strictly related .this fact could be seen as follows .let assume that is a grid of evaluation points and let set for by assuming that is outside of $ ] and by discretizing the integral ( [ convgauss ] ) with a rectangular rule , we obtain an optimal way for approximating the values is given by gaussian recursive filters .the -order rf filter computes the vector as follows : s_i^k=\beta_i p_i^{k}+ \displaystyle{\sum_{j=1}^n } \alpha_{i , j } s_{i+j}^k \,\,\,\i = m,\ldots,1 \end{array } \right .. \ ] ] the iteration counter goes from to , where is the total number of filter iterations . observe that values are computed taking in the sums terms provided that .analogously values are computed taking in the sums terms provided that . the values and , at each grid point , are often called _ smoothing coefficients _ and they obey to the constraint in this paper we deal with first - order and third - order rfs . the first - order rf expression ( )becomes : s_m^k=\beta_m p_m^{k } , \\\ , s_i^k=\beta_i p_i^{k}+ \alpha_i s_{i+1}^k \qquad i = m-1,\ldots,1 .\end{array } \right.\ ] ] if is the correlation radius at , by setting coefficients e are given by : the third - order rf expression ( ) becomes : s_i^k=\beta_i p_i^{k}+ \displaystyle{\sum_{j=1}^3 } \alpha_{i , j } s_{i+j}^k \,\,\,\ i = m,\ldots,1 . \end{array } \right.\ ] ] third - order rf coefficients and , for one only filter iteration ( ) , are computed in .if the coefficients expressions are : \alpha_{i,2}=-(3.382473 \sigma_i^2 + 3 \sigma_i^3)/a_i\\[.2 cm ] \alpha_{i,3}= \sigma_i^3 /a_i\\[.2 cm ] \beta_i=1-(\alpha_{i,1}+\alpha_{i,2}+\alpha_{i,3})=3.738128/a_i .\end{array}\ ] ] in is proposed the use of a value instead of .the value is : 3.97156 - 4.14554 \sqrt{1 - 0.26891\sigma_i } \quad \text{oth . }\end{array } \right.\ ] ] in order to understand how gaussian rfs approximate the discrete gaussian convolution it is useful to represent them in terms of matrix formulation . asexplained in , the -order recursive filter computes from as the solution of the linear system where matrices and are respectively lower and upper band triangular with nonzero entries by formally inverting the linear system ( [ sysl - u ] ) it results where .a direct expression of and its norm could be obtained , for instance , for the first order recursive filter in the homogenus case ( ) . however , in the following , it will be shown that has always bounded norm , i.e. 
observe that is the matrix operator that substitutes the gaussian operator in ( [ gauss_oper ] ) , then a measure of how well approximates can be derived in terms of the operator distance ideally one would expect that goes to ( and ) as approaches to , yet this does not happen due to the presence of edge effects . in the next sections we will investigate about the numerical behaviour of the distance for some case study and we will show its effects in the cg algorithm .here we are interested to analyze the error introduced on the matrix - vector operation at step 5 . of * algorithm 1 * , when the gaussian rf is used instead of the discrete gaussian convolution .as previously explained , in terms of matrices , this is equivalent to change the matrix operator , then * algorithm 2 * can be rewritten as shown in * algorithm 3*. [ five_rf ] ; + ; + ; + ; [ pcg2 ] now we are able to give the main result of this paper : indeed the following theorem furnishes an upper bound for the error , made at each single iteration of the cg ( * algorithm 1 * ) .this bound involves the operator norms the distance and the error accumulated on at previous iterations .+ let be , , , as in * algorithm 2 * and * algorithm 3*. let be and let denote by the difference between values and . then it holds \ \+ \vert { \bf f_n^{(k ) } } \!-\!{\bf v } \vert \!\cdot\ ! \vert{\bf\psi}\vert \!\cdot \!\big(\vert{\bf v } \vert \!+\!\vert { \bf f_n^{(k)}}\vert\big ) \!\cdot\ !\vert\widetilde{\rho}_k\vert.\end{aligned}\ ] ] * proof : * a direct proof follows by using the values and introduced in algorithm 2 and in algorithm 3 .it holds : then , for the difference , we get the bound hence , for the difference , we obtain in the second - last inequality we used the fact that finally , observing that and taking the upper bound of , the thesis is proved .+ previous theorem shows that , at each iteration of the cg algorithm , the error bound on the computed value at step 5 ., is characterized by two main terms : the first term can be considered as the contribution of the standard forward error analysis and it is not significant , if is small ; the second term highlights the effect of the introduction of the rf .more in detail , at each iteration step , the computed value is biased by a quantity proportional to three factors : * the distance between the original operator ( the gaussian operator * v * ) and its approximation ( the operator ) ; * the norm of ; * the sum of the operator norms and .* table 1 * : operator norms + [ cols="<,^,^",options="header " , ] these case study shows that , neglecting the edge effects , the 3-rd rf filter is more accurate the the 1st - rf order with few iterations .this fact is evident by observing the results in figure [ fig : pr ] and the operator norms in table 3 .finally , we remark that the 1-st order rf has to use 100 iteration in order to obtain the same accuracy of the 3-rd order rf .this is a very interesting numerical feature of the third order filter .the theoretical considerations of the previous sections are useful to understand the accuracy improvement in the real experiments on ocean var .the preconditioned cg is a numerical kernel intensively used in the model minimizations .implementing a more accurate convolution operators gives benefits on the convergence of gc and on the overall data assimilation scheme .here we report experimental results of the 3rd - rf in a global ocean implementation of oceanvar that follows .these results are extensively discussed in the report . 
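For completeness, a sketch of the first-order recursive filter in the homogeneous case is given below. The forward (advection) and backward (smoothing) sweeps follow the recursions quoted above with beta = 1 - alpha; the formula linking alpha to the correlation radius sigma is not reproduced because the coefficient expressions are garbled in this extraction, so alpha is taken as an input.

```python
import numpy as np

def rf1(s0, alpha, K=1):
    """First-order Gaussian recursive filter, homogeneous case, K iterations.

    One iteration is a left-to-right sweep followed by a right-to-left sweep,
    with beta = 1 - alpha:
        p[0]   = beta * s[0],      p[i] = beta * s[i] + alpha * p[i-1]
        q[M-1] = beta * p[M-1],    q[i] = beta * p[i] + alpha * q[i+1]
    """
    beta = 1.0 - alpha
    s = np.asarray(s0, dtype=float).copy()
    M = s.size
    for _ in range(K):
        p = np.empty(M)
        p[0] = beta * s[0]
        for i in range(1, M):                 # advection (forward) sweep
            p[i] = beta * s[i] + alpha * p[i - 1]
        q = np.empty(M)
        q[-1] = beta * p[-1]
        for i in range(M - 2, -1, -1):        # smoothing (backward) sweep
            q[i] = beta * p[i] + alpha * q[i + 1]
        s = q
    return s
```

Because the filter is linear, applying rf1 to each unit vector yields the columns of the operator it defines, so comparing those columns with the corresponding columns of the discrete Gaussian operator gives a direct numerical estimate of the operator distance discussed above, edge effects included.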
in real scenarios scientific libraries and an high performance computing environments are needed .the case study simulations were carried - out on an ibm cluster using 64 processors .the model resolution was about degree and the horizontal grid was tripolar , as described in .this configuration of the model was used at cmcc for global ocean physical reanalyses applications ( see ) .the model has 50 vertical depth levels .the three - dimensional model grid consists of 736141000 grid - points .the comparison between the 1st - rf and 3rd - rf was carried out for a realistic case study , where all in - situ observations of temperature and salinity from expendable bathythermographs ( xbts ) , conductivity , temperature , depth ( ctds ) sensors , argo floats and tropical mooring arrays were assimilated .the observational profiles are collected , quality - checked and distributed by .the global application of the recursive filter accounts for spatially varying and season - dependent correlation length - scales ( clss ) .correlation length - scale were calculated by applying the approximation given in to a dataset of monthly anomalies with respect to the monthly climatology , with inter - annual trends removed. the analysis increments from a 3dvar applications that uses the 1st - rf with 1 , 5 and 10 iterations and the 3rd - rf are shown in figure 5 with a zoom in the same area of western pacific area as in figure 5 , for the temperature at 100 m of depth .the figure also displays the differences between the 3rd - rf and the 1st - rf with either 1 or 10 iterations .the patterns of the increments are closely similar , although increments for the case of 1st - rf ( k=1 ) are generally sharper in the case of both short ( e.g. off japan ) or long ( e.g. off indonesian region ) clss .the panels of the differences reveal also that the differences between 3rd - rf and the 1st - rf ( k=10 ) are very small , suggesting once again that the same accuracy of the 3rd - rf can be achieved only with a large number of iterations for the first order recursive filter .finally , in [ arxiv ] was also observed that the 3rd - rf compared to the 1st - rf ( k=5 ) and the 1st - rf ( k=10 ) reduces the wall clock time of the software respectively of about 27% and 48% .recursive filters ( rfs ) are a well known way to approximate the gaussian convolution and are intensively applied in the meteorology , in the oceanography and in forecast models . in this paper , we deal with the oceanographic 3d - var scheme oceanvar .the computational kernel of the oceanvar software is a linear system solved by means of the conjugate gradient ( gc ) method .the iteration matrix is related to an error covariance matrix , with a gaussian correlation structure . in other words , at each iteration , a gaussian convolution is required .generally , this convolution is approximated by a first order rf . in this work, we introduced a 3rd - rf filter and we investigated about the main sources of error due to the use of 1st - rf and 3rd - rf operators .moreover , we studied how these errors influence the cg algorithm and we showed that the third order operator is more accurate than the first order one .finally , theoretical issues were confirmed by some numerical experiments and by the reported results in the case study of the oceanvar software .99 m. abramowitz , i. stegun - _ handbook of mathematical functions_. dover , new york , 1965 .m. belo pereira , l. 
berre - _ the use of an ensemble approach to study the background - error covariances in a global nwp model_. mon .2466 - 2489 , 2006 . c. cabanes , a. grouazel , k. von schuckmann , m. hamon , v. turpin , c. coatanoan , f. paris , s. guinehut , c. bppne , n. ferry , c. de boyer montgut , t. carval , g. reverding , s. puoliquen , p.y. l. traon - _ the cora dataset : validation and diagnostics of in - situ ocean temperature and salinity measurements_. ocean sci 9 , pp . 1 - 18 , 2013. s. cuomo , a. galletti , g. giunta and a. starace-_surface reconstruction from scattered point via rbf interpolation on gpu _ , federated conference on computer science and information systems ( fedcsis ) , 2013 , pp .433 - 440 .g. dahlquist and a. bjorck - _ numerical methods_. prentice hall , 573 pp .j. derber , a. rosati - _ a global oceanic data assimilation system_. journal of phys .1333 - 1347 , 1989. l. d amore , r. arcucci , l. marcellino , a. murli- _ hpc computation issues of the incremental 3d variational data assimilation scheme in oceanvarsoftware_. journal of numerical analysis , industrial and applied mathematics , 7(3 - 4 ) , pp 91 - 105 , 2013 .r. deriche - _ separable recursive filtering for efficient multi - scale edge detection_. proc .workshop machine vision machine intelligence , tokyo , japan , pp 18 - 23 , 1987 s. dobricic , n. pinardi - _ an oceanographic three - dimensional variational data assimilation scheme_. ocean modeling 22 , pp 89 - 105 , 2008 .r. farina , s. cuomo , p. de michele , f. piccialli-_a smart gpu implementation of an elliptic kernel for an ocean global circulation model _, applied mathematical sciences , 7 ( 61 - 64 ) , 2013 pp.3007 - 3021 .r. farina , s. dobricic , s. cuomo-_some numerical enhancements in a data assimilation scheme _, aip conference proceedings 1558 , 2013 , doi : 10.1063/1.4826017 .n. ferry , b. barnier , g. garric , k. haines , s. masina , l. parent , a. storto , m. valdivieso , s. guinehut , s. mulet - _ nemo : the modeling engine of global ocean reanalysis_. mercator ocean quaterly newsletter 46 , pp 60 - 66 , 2012 .s. haben , a. lawless , n. nichols - _ conditioning of the 3dvar data assimilation problem_. university of reading , dept . of mathematics ,math report series 3 , 2009 ; s. haben , a. lawless , n. nicholos - _ conditioning and preconditioning of the variational data assimilation problem_. computers and fluids 46 , pp 252 - 256 , 2011 .l. haglund - _ adaptive multidimensional filtering_. linkping university , sweden , 1992 .lorenc - _ iterative analysis using covariance functions and filters_. quartely journal of the royal meteorological society 1 - 118 , pp 569 - 591 , 1992 .lorenc - _ development of an operational variational assimilation scheme_. journal of the meteorological society of japan 75 , pp 339 - 346 , 1997 .g. madec , m. imbard - _ a global ocean mesh to overcome the north pole singularity_. clim .dynamic 12 , pp 381 - 388 , 1996 .purser , w .- s .parish , n.m .roberts - _ numerical aspects of the application of recursive filters to variational statistical analysis .part ii : spatially inhomogeneous and anisotropic covariances_. monthly weather review 131 , pp 1524 - 1535 , 2003 . c. hayden ,r. purser - _ recursive filter objective analysis of meteorological field : applications to nesdis operational processing_. journal of applied meteorology 34 , pp 3 - 15 , 1995 .a. storto , s. dobricic , s. masina , p. d. 
pietro - _ assimilating along - track altimetric observations through local hydrostatic adjustments in a global ocean reanalysis system_. mon .139 , pp 738 - 754 , 2011 .vliet , i. young , p.verbeek - _ recursive gaussian derivative filters_. international conference recognition , pp 509 - 514 , 1998 .van vliet , p.w .verbeek - _ estimators for orientation and anisotropy in digitized images_. proc .asci95 , heijen ( netherlands ) , pp 442 - 450 , 1995 .a. t. weaver , p. courtier - _ correlation modelling on the sphere using a generalized diffusion equation_. quarterly journal of the royal meteorological society 127 , pp 1815 - 1846 , 2001 .a. witkin - _ scale - space filtering_. proc .joint conf . on artificial intelligence , karlsruhe , germany , pp 1019 - 1021 , 1983 .young , l.j .van vliet - _ recursive implementation of the gaussian filter_. signal processing 44 , pp 139 - 151 , 1995 .
the computational kernel of the three - dimensional variational data assimilation ( 3d - var ) problem is a linear system , generally solved by means of an iterative method . the most costly part of each iterative step is a matrix - vector product with a very large covariance matrix having gaussian correlation structure . this operation may be interpreted as a gaussian convolution , which is a very expensive numerical kernel . recursive filters ( rfs ) are a well known way to approximate the gaussian convolution and are intensively applied in meteorology , in oceanography and in forecast models . in this paper , we deal with an oceanographic 3d - var data assimilation scheme , named oceanvar , in which the linear system is solved by means of the conjugate gradient ( cg ) method and the gaussian convolution is replaced , at each step , by rfs . here we give theoretical results on the discrete convolution approximation with a first order ( 1st - rf ) and a third order ( 3rd - rf ) recursive filter . numerical experiments confirm the given error bounds and show the benefits , in terms of accuracy and performance , of the 3rd - rf .
the contact pattern among individuals in a population is an essential factor for the spread of infectious diseases . in deterministic models, the transmission is usually modelled using a contact rate function , which depends on the contact pattern among individuals and also on the probability of disease transmission . the contact function among individuals with different ages , for instance ,may be modelled using a contact matrix or a continuous function .however , using network analysis methods , we can investigate more precisely the contact structure among individuals and analyze the effects of this structure on the spread of a disease .the degree distribution is the fraction of vertices in the network with degree .scale - free networks show a power - law degree distribution where is a scaling parameter .many real world networks are scale - free .in particular , a power - law distribution of the number of sexual partners for females and males was observed in a network of human sexual contacts .this finding is consistent with the preferential - attachment mechanism ( ` the rich get richer ' ) in sexual - contact networks and , as mentioned by liljeros et al . , may have epidemiological implications , because epidemics propagate faster in scale - free networks than in single - scale networks .epidemic models such as the susceptible infected ( si ) and susceptible infected susceptible ( sis ) models have been used , for instance , to model the transmission dynamics of sexually transmitted diseases and vector - borne diseases , respectively .many studies have been developed about the dissemination of diseases in scale - free networks and in small - world and randomly mixing networks .scale - free networks present a high degree of heterogeneity , with many vertices with a low number of contacts and a few vertices with a high number of contacts . in networks of human contacts or animal movements , for example , this heterogeneity may influence the potential risk of spread of acute ( e.g. influenza infections in human and animal networks , or foot - and - mouth disease in animal populations ) and chronic ( e.g. tuberculosis ) diseases .thus , simulating the spread of diseases on these networks may provide insights on how to prevent and control them . in a previous publication , we found that networks with the same degree distribution may show very different structural properties .for example , networks generated by the barabsi - albert ( ba ) method are more centralized and efficient than the networks generated by other methods . in this work, we studied the impact of different structural properties on the dynamics of epidemics in scale - free networks , where each vertex of the network represents an individual or even a set of individuals ( for instance , human communities or animal herds ) .we developed routines to simulate the spread of acute ( short infectious period ) and chronic ( long infectious period ) infectious diseases to investigate the disease prevalence ( proportion of infected vertices ) levels and how fast these levels would be reached in networks with the same degree distribution but different topological structure , using si and sis epidemic models .this paper is organized as follows . in section [ sec : hypothetical ] , we describe the scale - free networks generated . 
in section [ sec : model ] , we show how the simulations were carried out .the results of the simulations are analyzed in section [ sec : results ] .finally , in section [ sec : conclusions ] , we discuss our findings .we generated scale - free networks following the barabsi - albert ( ba ) algorithm , using the function barabasi.game( , , directed ) from the r package igraph , varying the number of vertices ( = , and ) , the number of edges of each vertex ( = 1 , 2 and 3 ) and the parameter that defines if the network is directed or not ( directed = true or false ) . for each combination of and ,10 networks were generated .then , in order to guarantee that all the generated networks would follow the same degree distribution and that the differences on the topological structure would derive from the way the vertices on the networks were assembled , we used the degree distribution from ba networks as input , to build the other networks following the method a ( ma ) , method b ( mb ) , molloy - reed ( mr ) and kalisky algorithms , all of which were implemented and described in detail in ref .as mentioned above , these different networks have distinct structural properties .in particular , the networks generated by mb are decentralized and with a larger number of components , a smaller giant component size , and a low efficiency when compared to the centralized and efficient ba networks that have all vertices in a single component .the other three models ( ma , mb and kalisky ) generate networks with intermediate characteristics between mb and ba models .the element of the adjacency matrix of the network , , is defined as if there is an edge between vertices and and as , otherwise .we also define the elements of the vector of infected vertices , . if vertex is infected , then , and , if it is not infected , .the result of the multiplication of the vector of infected vertices , , by the adjacency matrix , , is a vector , , whose element corresponds to the number of infected vertices that are connected to the vertex and may transmit the infection using matlab , the spread of the diseases with hypothetical parameters along the vertices of the network was simulated using the following algorithm : 1 . a proportion ( ) of the vertices is randomly chosen to begin the simulation infected . for our simulations , , since we are interested in the equilibrium state and this proportion guarantees that the disease would not disappear due to the lack of infected vertices at the beginning of the simulations .2 . in the sis ( susceptible infected susceptible ) epidemic model , a susceptible vertex can get infected , returning , after the infectious period , to the susceptible state . for each time step : 1 . we calculate the probability ( ) of a susceptible vertex , that is connected to infected vertices , to get infected , using the following equation : where is the probability of disease spread .[ item2.b ] we determine which susceptible vertices were infected in this time step : if (0,1 ) ,the susceptible vertex becomes infected . foreach vertex infected , we generate the time ( ) that the vertex will be infected following a uniform distribution : uniform ( , ) , where and are , respectively , the minimum and the maximum time of the duration of the disease .3 . decrease in 1 time step the duration of the disease on the vertices that were already infected , verifying if any of them returned to the susceptible state ; 4 .update the status of all vertices . 
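the simulation loop above can be condensed into a short script . the sketch below uses python with networkx and numpy instead of the r / igraph and matlab routines actually employed ; it generates a ba network , seeds the infection and iterates the sis dynamics with the infection probability of step 2.1 and a uniformly distributed infectious period , and all parameter values are illustrative .

```python
# minimal sis simulation on a barabasi-albert network; parameters are illustrative
# and the script mirrors the steps listed above rather than the original r/matlab code.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

n_vertices, m_edges = 10_000, 2        # network size and edges added per new vertex
lam = 0.10                             # probability of disease spread per infected neighbour
t_min, t_max = 2, 10                   # bounds of the uniform infectious period
p0, t_total = 0.01, 1000               # initial infected fraction and number of time steps

g = nx.barabasi_albert_graph(n_vertices, m_edges, seed=0)
a = nx.adjacency_matrix(g)             # sparse adjacency matrix a_ij

infected = rng.random(n_vertices) < p0                                   # step 1
time_left = np.where(infected, rng.integers(t_min, t_max + 1, n_vertices), 0)

prevalence = []
for _ in range(t_total):
    # step 2.1: (a @ i) gives the number of infected neighbours of each vertex,
    # and the infection probability is 1 - (1 - lam)**n
    n_inf = a @ infected.astype(float)
    p_inf = 1.0 - (1.0 - lam) ** n_inf
    # step 2.2: draw which susceptible vertices become infected, with a uniform duration
    newly = (~infected) & (rng.random(n_vertices) < p_inf)
    time_left[newly] = rng.integers(t_min, t_max + 1, np.count_nonzero(newly))
    # step 3: decrease the remaining infectious time and recover expired vertices
    time_left[infected] -= 1
    recovered = infected & (time_left <= 0)
    # step 4: update the status of all vertices
    infected = (infected & ~recovered) | newly
    prevalence.append(infected.mean())

print("average prevalence over the last 100 steps:", np.mean(prevalence[-100:]))
```

replacing the barabasi - albert generator by any of the other constructions built from the same degree sequence ( ma , mb , molloy - reed , kalisky ) leaves the loop unchanged .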
for the si ( susceptible infected ) epidemic model , we chose in order to guarantee that an infected vertex remains infected until the end of the simulation . varying the values of the parameter , we simulated the behaviour of hypothetical acute and chronic diseases , using different values of , considering that once a vertex gets infected it would remain in this state during an average fixed time ( an approach that can be used when we lack more accurate information about the duration of the disease in a population ) or that there would be a variation in this period , representing more realistically the process of detection and treatment of individuals ( table [ tab : table1 ] shows the diseases simulated and the values of assumed ) . [cols="^,^,^,^,^ " , ] we adopted a total time of simulation ( ) of 1000 arbitrary time steps . for each spreading model , we carried out 100 simulations for each network , calculating the prevalence of the simulated disease for each time step .then we calculated the average of the prevalence of these simulations on each network .after that we grouped the simulations by network algorithm .finally , we calculated the average prevalence of these network models .on the undirected networks , we observed that the disease spreads independently of the value of used , and that an increase in leads to an increase in the prevalence of the infection ( figure [ fig : figure1 ] ) .also , we observed that the prevalence tends to stabilize approximately in the same level despite the addition of vertices ( figure not shown ) .when we increase the number of edges of each vertex , there is an increase in the prevalence of the infection .a result that stands out is that , when , there is a great difference in the equilibrium level of the prevalence in each network .however , as we increase the value of , the networks tend to show closer values of equilibrium ( figure [ fig : figure3 ] ) . among the undirected networks, the networks generated using the mb algorithm presented the lowest values of prevalence in the spreading simulations ( figure [ fig : figure4 ] ) .on the directed networks , we observed that , despite the simulations of acute diseases , the disease spreads independently of the value of used and , as in the undirected networks , an increase in leads to an increase in the prevalence of the infection ( figure [ fig : figure5 ] ) . also , similarly to what was observed for undirected networks, the prevalence tends to stabilize approximately in the same level despite the addition of vertices ( figure not shown ) .when we increase the number of edges of each vertex , there is an increase in the prevalence of the infection ( figure [ fig : figure7 ] ) .a result that stands out is that , when , the prevalence in the kalisky networks tend to stabilize in a level a little bit higher than the other ones . among the networks , those generated using the ba algorithm presented the lowest values of prevalence in the spreading simulations ( figure [ fig : figure8 ] ) . to compare the numerical results of the simulation with a theoretical approach ,it is possible to deduce , for an undirected scale - free network assembled following the ba algorithm , the equilibrium prevalence , given by . \label{eq:3}\ ] ] this expression applies to the ba undirected network with and a fixed infectious period of one time unit .for instance , for and , we obtain . 
in figure [ fig : figure9 ] , we observe that the equilibrium prevalence in the simulation reaches the value predicted by equation ( [ eq:3 ] ) .our approach , focusing on different networks with the same degree distribution , allows us to show how the topological features of a network may influence the dynamics on the network . analyzing the results of the spreading simulations, we have , as expected , that the variation in the number of vertices of the hypothetical networks had little influence in the prevalence of the diseases simulated , a result that is consistent with the characteristics of the scale - free complex networks as observed by pastor - satorras and vespignani . with respect to the effect of the increase of the probability of spreading on the prevalence in undirected networks ( figure [ fig : figure1 ] ), we observed that the prevalence reaches a satured level .for undirected networks , if the probability of infection is high , there is a saturation of infection on the population for chronic diseases and therefore no new infections can occur . regarding the variation in the number of edges , the increase in the prevalencewas also expected since it is known that the addition of edges increases the connectivity on the networks studied , allowing a disease to spread more easily . about the effect of considering the networks directed or undirected , we have that the diseases tend to stay in the undirected networks independently of the spreading model and the value of considered , while in the directed cases due to the limitations imposed by the direction of the movements , when , the acute diseases tend to disappear on some of the networks . also due to the direction of the links , in directed networks ,the disease may not reach or may even disappear from parts of the network , explaining to some degree why the prevalence in directed networks ( figure [ fig : figure3 ] ) is smaller than in undirected networks ( figure [ fig : figure7 ] ) . in the si simulations , we could observe what would be the average maximum level of prevalence of a disease on a network and how fast this level would be reached . in the sis simulations , the oscillations on the stability of the prevalence observed result from the simultaneous recovery of a set of vertices . with a fixed time of infection ,the set of vertices that simultaneously recover is greater than in the case of a variable time of infection , since in the latter , due to the variability of the disease duration , the vertices form smaller subsets that will recover in different moments of the simulation .a result that called attention is that , in the directed networks with , when we simulated the chronic diseases using a fixed time , the equilibrium levels achieved were lower than the ones achieved when we used a variable time . examining the results of the simulations on each network model, we have that among the undirected ones , the mb network has the lowest prevalence , with a plausible cause for this being how this network is composed , since there is a large number of vertices that are not connected to the most connected component of the network . 
among the directed networks , the ba network has the lowest prevalence observed , which is also probably due to the topology of this network , since it is composed of many vertices with outgoing links only and a few vertices with many incoming and few outgoing links , thus preventing the spread of a disease . using the methodology of networks , it is possible to analyze more clearly the effects that the heterogeneity in the connections between vertices has on the spread of infectious diseases , since we observed different prevalence levels in networks generated with the same degree distribution but with different topological structures . moreover , considering that the increase in the number of edges led to an increase in the prevalence of the diseases on the networks , we have indications that the intensification of the interaction between vertices may promote the spread of diseases . so , as expected , in cases of sanitary emergency , the prevention of potentially infectious contacts may contribute to controlling a disease . this work was partially supported by fapesp and cnpq .
the transmission dynamics of some infectious diseases is related to the contact structure between individuals in a network . we used five algorithms to generate contact networks with different topological structure but with the same scale - free degree distribution . we simulated the spread of acute and chronic infectious diseases on these networks , using si ( susceptible infected ) and sis ( susceptible infected susceptible ) epidemic models . in the simulations , our objective was to observe the effects of the topological structure of the networks on the dynamics and prevalence of the simulated diseases . we found that the dynamics of spread of an infectious disease on different networks with the same degree distribution may be considerably different . * raul ossada , josé h. h. grisi - filho , fernando ferreira and marcos amaku * faculdade de medicina veterinária e zootecnia , universidade de são paulo , são paulo , sp , 05508 - 270 , brazil . * keywords : * scale - free network , power - law degree distribution , dynamics of infectious diseases
entanglement is considered to be the key resource of the rapidly developing fields of quantum information processing and quantum computation , for an introduction see e.g. . among the early proposals concerning the applications of entangled statesare those for quantum key distribution , quantum dense coding , and quantum teleportation . despite the resulting great interest in entangled quantum states ,their complete characterization is still an unsolved problem .it is known that entanglement can be fully identified by applying all positive but not completely positive ( pncp ) maps to a given state .the problem of this approach , however , consists in the fact that the general form of the pncp maps is essentially unknown . the presently best studied pncp map is the partial transposition ( pt ) .it is known that pt gives a complete characterization of entanglement in hilbert spaces of dimension and .bipartite entanglement can also be completely characterized via pt in infinite - dimensional hilbert spaces , as long as only gaussian states are considered , whose moments up to second order fully describe their properties . by using higher - order moments ,a complete characterization has been given for those entangled states which exhibit negativities after application of the pt map .this approach gives no insight into bound entangled states remaining positive after pt .to overcome this limitation , to the matrices of moments other kinds of pcnp maps have been applied , including kossakowsky , choi and breuer maps .the identification of bound entanglement in this way , however , turned out to be a cumbersome problem. an equivalent approach of identifying entanglement is based on special types of hermitian operators , called entanglement witnesses .the witnesses were introduced as a class of linear operators , whose mean values are non - negative for separable states but can become negative for entangled states .presently only some classes of entanglement witnesses are available .once a witness is known , an optimization can be performed .also nonlinear witnesses have been studied , which may increase the number of entangled states to be identified by a single witness , in comparison with a given linear witness . however ,if one is able to construct the general form of the linear witnesses , the problem of identifying entanglement is completely solved . in the present contributionwe show that any entanglement witness can be expressed in terms of completely positive hermitian operators , whose general form is well known . on this basiswe derive entanglement inequalities , which are formulated solely in terms of general hermitian operators .we also provide an approach for optimizing such inequalities , by introducing a separability eigenvalue problem .our method is a powerful tool for analyzing experimental data , to verify any kind of entanglement .one may also identify general bound entangled states whose density operator has a positive partial transposition ( ppt ) .the paper is structured as follows . in sec .[ sec : ii ] we derive the most general form of entanglement conditions in terms of hermitian operators .this leads us to an optimization problem the separability eigenvalue problem which is studied in sec .[ sec : iii ] . in sec .[ sec : iv ] this problem is solved for a class of projection operators and a general numerical implementation of entanglement tests for arbitrary quantum states is given . 
a method for the identification of bound entangled statesis considered in sec .[ sec : v ] . finally , in sec .[ sec : vi ] we give a brief summary and some conclusions .let us consider two systems and , represented by arbitrary hilbert spaces and with orthonormal bases being and respectively , with and being arbitrary sets .note that the hilbert spaces are not necessarily finite or separable .even spaces with an uncountable bases are under study .an entanglement witness is a bounded hermitian operator , which has positive expectation values for separable states and it has negative eigenvalues . for our purposesa generalization of the class of entanglement witnesses is useful .one can think of bounded hermitian operators , which have positive expectation values for separable states : all operators fulfilling the conditions ( [ cond1 ] ) and ( [ cond2 ] ) shall define the set , operators in this set are called partial positive operators .all operators fulfilling the conditions together with eq .( [ cond2 ] ) , with in place of , shall denote the set of positive semi - definite operators .so all entanglement witnesses are elements of the difference of these sets : .it was shown by the horodeckis , that for any entangled state there exists an entanglement witness , so that the expectation value becomes negative : .for this inseparability theorem only linear entanglement witnesses were used , which are sufficient to identify all entangled states . for this reasonwe restrict our considerations to linear witnesses , which are elements of the set .let us consider the important example of witnesses based on the pt map .recently it has been shown , that for any state with a negative pt ( npt ) there exists an operator , such that these operators have been studied in detail as functions of the annihilation and creation operators of two harmonic oscillators : .all the resulting are examples of elements of , in particular they include all entanglement witnesses for npt states .now we will turn to the construction of entanglement witnesses in their most general form . as outlined above, the problem of finding all entanglement witnesses via pncp maps is very difficult . herewe will introduce a different but equivalent approach to entanglement witnesses , which requires the class of operators only .a hermitian operator is positive , if and only if it can be written as in the first step we will now generate any entanglement witness out of a difference of positive operators . for any entanglement witness a real number and a positive hermitian operator so that can be written as . proof .: : the bounded operator in spectral decomposition is , with being a projection - valued measure and the bounded set of eigenvalues .let the supremum of all eigenvalues be .for all separable quantum states , must be a positive map , which implies . which is the demanded form .to formulate a new entanglement theorem for positive hermitian operators , we need the definition of optimal entanglement witnesses as given by lewenstein _et al . _an entanglement witness is finer than , if and only if the entanglement of any state detected by is also detected by ( beside other entangled states , which are not detected by ) .an entanglement witness is optimal , if and only if no witness is finer than .therefore a state is separable , if and only if for all optimal entanglement witnesses the expectation value is positive . 
to find these witnesses , we need the function , which maps a general hermitian operator to its maximal expectation value for a separable state : it is obvious , that a hermitian operator is a general element of , if and only if . andit is optimal , if and only if . [ theo1 ] a state is entangled , if and only if there exists : . proof .: : let be an optimal witness , which detects the entanglement of : the other way around , a state is separable , if and only if for all : .this is a kind of distance criterion .our entanglement theorem [ theo1 ] does no longer require the explicit form of any entanglement witness .entanglement is completely verified by hermitian operators , which are given by eq .( [ nonneg ] ) .the needed functions are readily obtained from eq .( [ distfct ] ) .let us now consider a bounded hermitian operator , which can always be expressed in terms of a positive operator and a real number , it is obvious , that all bounded hermitian operators can be written in the form of .this can be used to further simplify the theorem [ theo1 ] .[ theo2 ] a state is entangled , if and only if there exists a hermitian operators : . proof .: : note that .the function is from theorem [ theo1 ] follows : from theorem [ theo2 ] entanglement can be numerically tested .the set and the set of bounded hermitian operators have the same cardinality .now the construction of positive hermitian operators , eq .( [ nonneg ] ) , becomes superfluous and the number of tests does not increase .let us consider a simple implication of theorem [ theo2 ] .the entanglement test for a bounded hermitian operator reads as the entanglement condition for the state with the operator is | a , b \rangle \ } & < { \rm tr}(\hat{\varrho}[-\hat{a}])\\ \leftrightarrow \quad \inf\ { \langle a , b | \hat{a } | a , b \rangle \ } & > { \rm tr}(\hat{\varrho}\hat{a}).\label{eq:2ndcond}\end{aligned}\ ] ] equation ( [ eq:2ndcond ] ) is a second entanglement condition for the operator and it is equivalent to the original condition ( [ eq : cond2 ] ) for the operator .entanglement witnesses of the form had been considered before , see .here we gave the proof that any entanglement witness can be given in this form , which is a much stronger statement .this has been done for arbitrary dimensional hilbert spaces .let us consider the calculation of the function . the hermitian operator has the following projections : = \langle a | \hat{a } | a\rangle , \\\hat{a}_b&={\rm tr}_b [ \hat{a } \left ( \hat{1}_a\otimes |b\rangle\langle b| \right ) ] = \langle b | \hat{a } | b\rangle.\end{aligned}\ ] ] now the extrema of can be obtained . from eq .( [ distfct ] ) , the extremum of the function is calculated under the constraints and .the functional derivatives are denoted by and .this leads to two lagrange multipliers and the conditions this can be written as : multiplying the first equation with and the second with , the lagrange multipliers are obtained as .this leads to the equations with the constraints , are called separability eigenvalue equations . the separability eigenvalue problem can be solved by computers or , in simple cases , by hand .the closed set is bounded in .the smooth function defined on this set is bounded by .therefore a solution of the separability eigenvalue equation exists . 
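in finite - dimensional hilbert spaces the function defined in eq . ( [ distfct ] ) and the separability eigenvalue equations can be evaluated numerically by alternating maximization : for a fixed vector of system b the optimal vector of system a is the eigenvector of the reduced operator with the largest eigenvalue , and vice versa , so that every sweep increases the expectation value and the iteration stops at a solution of the separability eigenvalue equations . the sketch below ( python / numpy , with illustrative dimensions and a maximally entangled two - qubit projector as a test operator ) approximates the largest separability eigenvalue by this procedure ; being a local optimization it returns a lower bound on the supremum , so in practice many random restarts are used to approach the global value required by the entanglement test .

```python
# numerical sketch of the separability eigenvalue problem in finite dimensions:
# alternating maximisation of <a,b|A|a,b> over normalised product vectors.
# each sweep is monotone, so the iteration ends in a solution of the separability
# eigenvalue equations; random restarts approximate the global maximum.
import numpy as np

def f_sep(a_op, dim_a, dim_b, restarts=20, iters=200, seed=1):
    """approximate largest separability eigenvalue of a hermitian a_op on C^dim_a (x) C^dim_b."""
    rng = np.random.default_rng(seed)
    a_op = a_op.reshape(dim_a, dim_b, dim_a, dim_b)   # indices (i,j,k,l) = <ij|A|kl>
    best = -np.inf
    for _ in range(restarts):
        b = rng.normal(size=dim_b) + 1j * rng.normal(size=dim_b)
        b /= np.linalg.norm(b)
        for _ in range(iters):
            # a_b = <b|A|b> acts on the first factor; take its top eigenvector
            a_b = np.einsum("j,ijkl,l->ik", b.conj(), a_op, b)
            w, v = np.linalg.eigh(a_b)
            a = v[:, -1]
            # a_a = <a|A|a> acts on the second factor
            a_a = np.einsum("i,ijkl,k->jl", a.conj(), a_op, a)
            w, v = np.linalg.eigh(a_a)
            b = v[:, -1]
        best = max(best, w[-1].real)                  # current value of <a,b|A|a,b>
    return best

# illustrative test operator: projector onto a maximally entangled two-qubit state,
# for which the largest separability eigenvalue is the largest squared schmidt
# coefficient, here 1/2
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
proj = np.outer(psi, psi.conj())
print(f_sep(proj, 2, 2))             # approximately 0.5

# a state rho is then certified entangled whenever tr(rho A) exceeds f_sep(A)
rho = proj                            # the pure maximally entangled state itself
print(np.trace(rho @ proj).real)      # = 1 > 0.5, so rho is detected as entangled
```

for the projector example the routine returns approximately one half , in agreement with the analytical solution for projection operators derived below in sec . [ ssec : a ] , and the last two lines illustrate the test of theorem [ theo2 ] : the state is detected as entangled because its expectation value exceeds the largest separability eigenvalue of the chosen operator .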
according to the calculation ( [ beginlem2 ] ) ( [ endlem2 ] ) we can formulate the following the function be written as with being eigenvalues of the separability eigenvalue equations .: : see the calculations ( [ beginlem2 ] ) ( [ endlem2 ] ) and eq .( [ distfct ] ) .we also obtain , with separability eigenvalue . in the following we want to consider some properties of the solution of the separability eigenvalue equations .let , be a solution of the separability eigenvalue problem of the bounded hermitian operator vector has a schmidt decomposition , see , with the term , 2 . if , ( or ) is another solution then ( or ) .if , is another solution then and are linearly independent . proof . : : 1 .the general form of the vector is . thus , . 2 .the first part of separability eigenvalue equations read as these are eigenvalue equations for the hermitian operator acting on for different eigenvalues .thus , . 3 .assume linear dependence , . since , we obtain .this would imply , which is a contradiction to .these properties of the separability eigenvalue equations can be easily seen for the solution of the example given in sec .[ ssec : a ] .in the following we want to study two aspects of the results obtained so far . in sec . [ssec : a ] , an analytical solution of the separability eigenvalue problem will be derived for a special class of projection operators . in sec . [ssec : b ] , a general entanglement test for arbitrary quantum states is under study . for example , let us solve the separability eigenvalue equations for the special class of operators of the form .the normalized vector can be expanded as in the same way the vectors and can be written as and .the separability eigenvalue equations can be written for each component as inserting eqs .( [ exse1 ] ) and ( [ exse2 ] ) into each other and using eq .( [ beginlem2 ] ) in the form for ( being a trivial case ) we can separate eq .( [ exse1 ] ) from eq .( [ exse2 ] ) , with the interpretation we get the positive and compact operators and can be given in spectral decomposition as thus the non - trivial separability eigenvalues are . using and , the state reads as and are orthonormal in each hilbert space .this is the schmidt decomposition of , cf .e.g. . by the above calculations we get the solutions for the state is factorized , and does not detect any entanglement . in all other cases , is useful to identify entanglement .now we can write the special condition for entanglement , by use of in theorem [ theo2 ] , as let us consider the example of two harmonic oscillators in a mixture of a superposition of coherent states , , with vacuum , , where and ^{-1/2} ] . .from the npt side , a typical witness is ., width=283 ] note that an arbitrary operator is a hermitian operator as well .since we can shift any operator without changing the entanglement witness , any entanglement test can be performed with an operator of the form .all entangled states which remain non - negative under pt are bound entangled ones , see .the following characterization of ppt bound entangled states can be given : the first condition refers to ppt .the second condition identifies entanglement .the difference between as a witness for npt entanglement and for entanglement in general is equal to the minimal separability eigenvalue , see fig . 2 .in the present paper we have proven the general form of entanglement witnesses . 
on this basis we have derived necessary and sufficient conditions for bipartite entanglement . optimal entanglement inequalities have been given in the most general form . they have been formulated with arbitrary hermitian operators , which are easy to handle because of their well known structure , and they are useful for applications in experiments . separability eigenvalue equations have been formulated . they serve for the optimization of the entanglement conditions for any chosen hermitian operator . some properties of the solutions of these equations have been analyzed . the separability eigenvalue problem resembles the ordinary eigenvalue problem of hermitian operators , with the additional restriction that the solution is a factorizable vector . we have analytically solved the separability eigenvalue equations for a special class of projection operators . using these solutions , we could demonstrate entanglement of a mixed state given in terms of continuous variables . a general entanglement test of the proposed form can be implemented numerically , with its error being related to the available experimental precision . the method under study can also be used to identify any bound entangled state with a positive partial transposition . this requires testing the given state for the negativity of its partial transposition . it turned out that the separability eigenvalues remain unchanged under partial transposition . eventually , bound entanglement can be demonstrated by a combination of a general entanglement test and a test for the negativity of the partial transposition of the state under study .
necessary and sufficient conditions for bipartite entanglement are derived , which apply to arbitrary hilbert spaces . motivated by the concept of witnesses , optimized entanglement inequalities are formulated solely in terms of arbitrary hermitian operators , which makes them useful for applications in experiments . the needed optimization procedure is based on a separability eigenvalue problem , whose analytical solutions are derived for a special class of projection operators . for general hermitian operators , a numerical implementation of entanglement tests is proposed . it is also shown how to identify bound entangled states with positive partial transposition .
dynamic traffic assignment ( dta ) is the descriptive modeling of time - varying flows on traffic networks consistent with traffic flow theory and travel choice principles .dta models describe and predict departure rates , departure times and route choices of travelers over a given time horizon .it seeks to describe the dynamic evolution of traffic in networks in a fashion consistent with the fundamental notions of traffic flow and travel demand ; see for some review on dta models and recent developments . _ dynamic user equilibrium _ ( due ) of the open - loop type , which is one type of dta , remains a major modern perspective on traffic modeling that enjoys wide scholarly support .it captures two aspects of driving behavior quite well : departure time choice and route choice . within the due model , travel cost for the same trip purpose is identical for all utilized route - and - departure - time choices .the relevant notion of travel cost is _ effective travel delay _ , which is a weighted sum of actual travel time and arrival penalties .in the last two decades there have been many efforts to develop a theoretically sound formulation of dynamic network user equilibrium that is also a canonical form acceptable to scholars and practitioners alike .there are two essential components within the due models : ( 1 ) the mathematical expression of nash - like equilibrium conditions ; and ( 2 ) a network performance model , which is , in effect , an embedded _ dynamic network loading _( dnl ) problem .the dnl model captures the relationships among link entry flow , link exit flow , link delay and path delay for any given set of path departure rates .the dnl gives rise to the notion of _ path delay operator _, which is viewed as a mapping from the set of feasible path departure rates to the set of path travel times or , more generally , path travel costs .properties of the path delay operator are of fundamental importance to due models .in particular , continuity of the delay operators plays a key role in the existence and computation of due models .the existence of dues is typically established by converting the problem to an equivalent mathematical form and applying some version of brouwer s fixed - point existence theorem ; examples include and .all of these existence theories rely on the continuity of the path delay operator . on the computational side of analytical due models ,every established algorithm requires the continuity of the delay operator to ensure convergence ; an incomplete list of such algorithms include the fixed - point algorithm , the route - swapping algorithm , the descent method , the projection method , and the proximal point method it has been difficult historically to show continuity of the delay operator for general network topologies and traffic flow models . over the past decade , only a few continuity results were established for some specific traffic flow models . use the link delay model to show the continuity of the path delay operator .their result relies on the _ a priori _ boundedness of the path departure rates , and is later improved by a continuity result that is free of such an assumption . in ,continuity of the delay operator is shown for networks whose dynamics are described by the lwr - lax model , which is a variation of the lwr model that does not capture vehicle spillback .their result also relies on the _ a priori _ boundedness of path departure rates . 
vickrey s point queue model and show the continuity of the corresponding path delay operator for general networks without invoking the boundedness on the path departure rates .all of these aforementioned results are established for network performance models that do not capture vehicle spillback . to the best of our knowledge , there has not been any rigorous proof of the continuity result for dnl models that allow queue spillback to be explicitly captured . on the contrary ,some existing studies even show that the path travel times may depend discontinuously on the path departure rates , when physical queue models are used .for example , uses the cell transmission model and signal control to show that the path travel time may depend on the path departure rates in a discontinuous way .such a finding suggests that the continuity of the delay operator could very well fail when spillback is present .this has been the major hurdle in showing the continuity or identifying relevant conditions under which the continuity is guaranteed .this paper bridges this gap by articulating these conditions and providing accompanying proof of continuity .this paper presents , for the first time , a rigorous continuity result for the path delay operator based on the lwr network model , which explicitly captures physical queues and vehicle spillback . in showing the desired continuity ,we propose a systematic approach for analyzing the well - posedness of two specific junction models : a merge and a diverge model , both originally presented by .the underpinning analytical framework employs the _ wave - front tracking _methodology and the technique of _ generalized tangent vectors _ . a major portion of our proof involves the analysis of the interactions between kinematic waves and the junctions , which is frequently invoked for the study of well - posedness of junction models ;see for more details .such analysis is further complicated by the fact that vehicle turning ratios at a diverge junction are determined endogenously by drivers route choices within the dnl procedure . 
as a result ,special tools are developed in this paper to handle this unique situation .as we shall later see , a crucial step of the process above is to estimate and bound from below the minimum network supply , which is defined in terms of local vehicle densities in the same way as in .in fact , if the supply somewhere tends to zero ( that is , when traffic approaches the jam density ) , the well - posedness of the diverge junction may fail , as we demonstrate in section [ subsubsecexample ] .this has also been confirmed by the earlier study of , where a wave of jam density is triggered by a signal red light and causes spillback at the upstream junction , leading to a jump in the path travel times .remarkably , in this paper we are able to show that ( 1 ) if the supply is bounded away from zero ( that is , traffic is bounded away from the jam density ) , then the diverge junction model is well posed ; and ( 2 ) the desired boundedness of the supply is a natural consequence of the dynamic network loading procedure that involves only the merge and diverge junction models we study here .this is a highly non - trivial result because it not only plays a role in the continuity proof , but also implies that gridlock can never occur in the network loading procedure in any finite time horizon .the final continuity result is presented in section [ subseccontinuity ] , following a number of preliminary results set out in previous sections .although our continuity result is established only for networks consisting of simple merge and diverge nodes , it can be extended to networks with more complex topologies using the procedure of decomposing any junction into several simple merge and diverge nodes .moreover , the analytical framework employed by this paper can be invoked to treat other and more general junction topologies and/or merging and diverging rules , and the techniques employed to analyze wave interactions will remain valid .the main contributions made in this paper include : * formulation of the lwr - based dynamic network loading ( dnl ) model with spillback as a system of _ partial differential algebraic equations _ ( pdaes ) ; * a continuity result for the path delay operator based on the aforementioned dnl model ; * a novel method for estimating the network supply , which shows that gridlock can never occur within a finite time horizon .the rest of this paper is organized as follows .section [ seclwrdnl ] recaps some essential knowledge and notions regarding the lwr network model and the dnl procedure .section [ secpdae ] articulates the mathematical contents of the dnl model by formulating it as a pdae system .section [ subsecwellposed ] introduces the merge and diverge models and establishes their well - posedness . section [ seccontinuityproof ] provides a final proof of continuity and some discussions. finally , we offer some concluding remarks in section [ secconclusion ] .throughout this paper , the time interval of analysis is a single commuting period expressed as ] : ~\rightarrow~\mathbb{r}_+\ ] ] where denotes the set of non - negative real numbers .each path departure rate is interpreted as a time - varying path flow measured at the entrance of the first arc of the relevant path , and the unit for is _ vehicles per unit time_. we next define to be a vector of departure rates .therefore , can be viewed as a vector - valued function of , the departure time . instead of to denote path flow vectors . 
]the single most crucial ingredient t is the path delay operator , which maps a given vector of departure rates to a vector of path travel times .more specifically , we let ,\quad \forall p\in \mathcal{p}\ ] ] be the path travel time of a driver departing at time and following path , given the departure rates associated with all the paths in the network .we then define the path delay operator by letting , which is a vector consisting of time - dependent path travel times .we recap the network extension of the lwr model , which captures the formation , propagation , and dissipation of spatial queues and vehicle spillback .discussion provided below relies on general assumptions on the fundamental diagram and the junction model , and involves no _ ad hoc _ treatment of flow propagation , flow conservation , link delay , or other critical model features .we consider a road link expressed as a spatial interval \subset \mathbb{r} ] expresses vehicle flow at as a function of , where denotes the jam density , and denotes the flow capacity . throughout this paper, we impose the following mild assumption on : * ( f ) . *the fundamental diagram is continuous , concave and vanishes at and .an essential component of the network extension of the lwr model is the specification of boundary conditions at a road junction .derivation of the boundary conditions should not only obey physical realism , such as that enforced by entropy conditions , but also reflect certain behavioral and operational considerations , such as vehicle turning preferences , driving priorities , and signal controls .articulation of a junction model is facilitated by the notion of _ riemann problem _, which is an initial value problem at the junction with constant initial condition on each incident link .there exist a number of junction models that yield different solutions of the same riemann problem .in one line of research , an entropy condition is defined based on a minimization / maximization problem . in another line of research, the boundary conditions are defined using link _ demand _ and _ supply _ , which represent the link s sending and receiving capacities . models following this approach include and .the solution of a riemann problem is given by the _riemann solver _( rs ) , to be defined below .we consider a general road junction with incoming roads and outgoing roads , as shown in figure [ chapdnl : figlwrjunction ] .incoming links and outgoing links . ]we denote by the incoming links and by the outgoing links .in addition , for every , the dynamic on is governed by the lwr model \times [ a_i,\,b_i]\ ] ] where link is expressed as the spatial interval , and we always use the subscript ` ' to indicate dependence on the link . 
the initial condition for this conservation law is \ ] ] notice that the above initial value problems are coupled together via the boundary conditions to be specified at the junction .such a system of coupling equations is commonly analyzed using the riemann problem .[ chapdnl : rpdef]*(riemann problem ) * the riemann problem at the junction is defined to be an initial value problem on a network consisting of the single junction with incoming links and outgoing links , all extending to infinity , such that the initial densities are constants on each link : ,\qquad \qquad i\in\{1,\,\ldots,\,m\ } \\\rho_j(0,\,x)~\equiv~\hat\rho_j\qquad & x\in [ a_j,\,+\infty ) , \qquad\qquad j\in\{m+1,\,\ldots,\,m+n\ } \end{cases}\ ] ] where ] , which represents , in every unit of flow , the fraction associated with path .we call these variables _ path disaggregation variables _ ( pdv ) . for each car moving along the link , its surrounding traffic can be distinguished by path ( e.g. 20% following path , 30% following path , 50% following path ) . as thiscar moves , such a composition will not change since its surrounding traffic all move at the same speed under the first - in - first - out ( fifo ) principle ( i.e. no overtaking is allowed ) . in mathematical terms , this means that the path disaggregation variables , , are constants along the trajectories of cars in the space - time diagram , where is the trajectory of a moving car on link .that is , which , according to the chain rule , becomes which further leads to another set of partial differential equations on link : here , is the solution of .the following obvious identity holds where means path contains ( or traverses ) link " , and the summation appearing in is with respect to all such . by convention , if , then for all .for reason that will become clear later , we introduce the concept of an _ordinary node_. an ordinary node is neither the origin nor the destination of any trip .we use the notation to represent the set of ordinary nodes in the network . as mentioned earlier ,the partial differential equations on links incident to are all coupled together through a given junction model , i.e. , a riemann solver . a common prerequisite for applying the riemann solveris the determination of the flow distribution ( turning ratio ) matrix , which relies on knowledge of the pdvs for all .we define the time - dependent flow distribution matrix associated with : ^{|\mathcal{i}^j| + |\mathcal{o}^j|}\ ] ] where by convention , we use subscript to indicate incoming links , and to indicate outgoing links .each element represents the turning ratios of cars discharged from that enter downstream link .then , for all that traverses , the following holds . it can be easily verified that ] denotes the -th component of the mapping , . intuitively , - mean that , given the current traffic states and adjacent to the junction , the riemann solver specifies , for each incident link or , the corresponding boundary conditions or . 
in prose, at each time instance the riemann solver inspects the traffic conditions near the junction , and proposes the discharging ( receiving ) flows of its incoming ( outgoing ) links .such a process is based on the flow distribution matrix and often reflects traffic control measures at junctions .furthermore , the riemann solver operates with knowledge of every link incident to the junction , thus the boundary condition of any relevant link is determined jointly by all the links connected to the same junction .therefore , the lwr equations on all the links are coupled together through this mechanism .for this reason the lwr - based dnl models are highly challenging , both theoretically and computationally .the upstream boundary conditions associated with pdes are : where the numerator expresses the exit flow on link associated with path , which , by flow conservation , is equal to the entering flow of link associated with the same path ; the denominator represents the total entering flow of link .unlike the density - based pde , the pdv - based pde does not have any downstream boundary condition due to the fact that the traveling speeds of the pdvs are the same as the car speeds ( they can be interpreted as lagrangian labels that travel with the cars ) ; thus information regarding the pdvs does not propagate backwards or spills over to upstream links .we consider a node that is either the origin or the destination of some path .one immediate observation is that the flow conservation constraint no longer holds at such a node since vehicles either are ` generated ' ( if is an origin ) or ` vanish ' ( if is a destination ) . a simple and effective way to circumvent this issue is to introduce a _virtual link_. a virtual link is an imaginary road with certain length and fundamental diagram , and serves as a buffer between an ordinary node and an origin / destination ; see figure [ chapdnl : figvirtuallink ] for an illustration . by introducing virtual links to the original network, we obtain an augmented network in which all road junctions are ordinary , and hence fall within the scope of the previous section . .right : a virtual link connecting a destination ( t ) to an ordinary junction . ]let us denote by the set of origins in the augmented network .for any , we denote by the set of paths that originate from , and by the virtual link incident to this origin . for each denote by the departure rate ( path flow ) along .it is expected that a buffer ( point ) queue may form at in case the receiving capacity of the downstream is insufficient to accommodate all the departure rates . 
for this buffer queue ,denoted , we employ a vickrey - type dynamic ; that is , where denotes the supply of the virtual link .the only difference between and vickrey s model is the time - varying downstream receiving capacity provided by the virtual link .it remains to determine the dynamics for the path disaggregation variables ( pdv ) .more precisely , we need to determine for the virtual link where , and is the upstream boundary of .this will be achieved using the vickrey - type dynamic and the fifo principle .specifically , we define the queue exit time function where denotes the time at which drivers depart and join the point queue , if any ; expresses the time at which the same group of drivers exit the point queue .clearly , fifo dictates that where the two integrands on the left and right hand sides of the equation are flow entering the queue and flow leaving the queue , respectively .we may determine the path disaggregation variables as : notice that , if , then the flow leaving the point queue at time is also zero ; thus there is no need to determine the path disaggregation variables .therefore , the identity is well defined and meaningful . with all preceding discussions, we may finally express the path travel times , which are the outputs of a complete dnl model .the path travel time consists of link travel times plus possible queuing time at the origin .mathematically , the link exit time function for any is defined , in a way similar to , as for a path expressed as , the time to traverse it is calculated as where means the composition of two functions .this is due to the assumption that cars leaving the previous link ( or queue ) immediately enter the next link without any delay .we are now ready to present a generic pdae system for the dynamic network loading procedure .let us begin by summarizing some key notations .* the original network with link set and node set ; * the set of virtual links ; * the augmented network including virtual links ; * the set of origins in ; * the set of paths originating from ; * the set of ordinary junctions in ; * the set of incoming links of a junction ; * the set of outgoing links of a junction ; * the flow distribution matrix associated with junction ; * the riemann solver for junction , which depends on . we also list some key variables of the pdae system below . * the path departure rate along ; * the vehicle density on link ; * the proportion of flow on link associated with path ( path disaggregation variable ) ; * the point queue at the origin ; * the point queue exit time function at origin . 
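before stating the pdae system , it may help to see the kind of discrete - time computation that these quantities encode . the sketch below ( python , with illustrative parameters and a triangular fundamental diagram as one admissible instance of assumption ( f ) ) performs a godunov - type update of the cell densities of a single link : interior fluxes are the minimum of the upstream demand and the downstream supply , the upstream boundary is fed by a vickrey - type origin queue , and the downstream boundary is throttled by the supply of the next junction , which is how spillback enters the model . it is only an illustration of the mechanics , not the discretization analyzed in this paper .

```python
# discrete-time illustration of the dnl building blocks on a single link fed by an
# origin queue. the triangular fundamental diagram is one admissible choice under
# assumption (f); all numbers are illustrative and the virtual link of the text is
# collapsed into the first cell. this is not the scheme analyzed in the paper.
import numpy as np

v_free, w_back, rho_jam = 1.0, 0.5, 1.0          # triangular fundamental diagram
rho_crit = rho_jam * w_back / (v_free + w_back)
q_max = v_free * rho_crit                         # flow capacity

def demand(rho):                                  # sending capacity of a cell
    return np.minimum(v_free * rho, q_max)

def supply(rho):                                  # receiving capacity of a cell
    return np.minimum(q_max, w_back * (rho_jam - rho))

n_cells, dx = 20, 0.05
dt = 0.8 * dx / max(v_free, w_back)               # cfl condition
rho = np.zeros(n_cells)                           # initially empty link
queue = 0.0                                       # vickrey point queue at the origin
downstream_supply = 0.3 * q_max                   # restrictive junction downstream

def departures(t):                                # sum of path departure rates at the origin
    return 0.8 * q_max if t < 8.0 else 0.0

for step in range(250):                           # ten time units
    t = step * dt
    # origin: flow entering the first cell is limited by the cell supply,
    # the remainder of the departures accumulates in the point queue
    inflow = min(departures(t) + queue / dt, supply(rho[0]))
    queue += dt * (departures(t) - inflow)
    # interior godunov fluxes: minimum of upstream demand and downstream supply
    flux = np.minimum(demand(rho[:-1]), supply(rho[1:]))
    # downstream boundary: exit flow limited by what the next junction can receive
    outflow = min(demand(rho[-1]), downstream_supply)
    all_flux = np.concatenate(([inflow], flux, [outflow]))
    rho += dt / dx * (all_flux[:-1] - all_flux[1:])

print("origin queue:", round(queue, 3), "  max density:", round(rho.max(), 3))
```

with the restrictive downstream supply chosen here the congestion wave eventually reaches the upstream boundary , the supply of the first cell drops and the departures start to accumulate in the origin queue , which is the spillback mechanism that the pdae system below describes in continuous form .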
given any vector of path departure rates , the proposed pdae system for calculating path travel times is summarized as follows .=0\qquad & ( t,\,x)\in[0,\,t]\times [ a_i,\,b_i ] \\ \label{chapdnl : lwrpdaeeqn3 } \partial_t\mu_i^p\big(t,\,x\big)+v_i\big(\rho_i(t,\,x)\big)\cdot \partial_x\mu_i^p\big(t,\,x\big)=0\qquad & ( t,\,x)\in[0,\,t]\times [ a_i,\,b_i ] \\ \label{chapdnl : lwrpdaeeqn65 } \mu_s^p\big(\lambda_s(t),\,a_s\big)~=~ { h_p(t)\over \sum_{q\in\mathcal{p}^s}h_q(t ) } \qquad & \forall s\in\mathcal{s},\ , p\in\mathcal{p}_s \\ \label{chapdnl : lwrpdaeeqn66 } \mu_{j}^{p}(t,\ , a_j)~=~{f_{i}\big(\rho_{i}(t,\ , b_i)\big)\cdot \mu_{i}^p(t,\ , b_i ) \over f_{j}\big(\rho_{j}(t,\,a_j)\big ) } \qquad & \forall p \supset \{i_i,\,i_j\ } \\ \label{chapdnl : lwrpdaeeqn4 } a^j(t)=\big\{\alpha^j_{ij}(t)\big\},\quad \alpha^j_{ij}(t)=\sum_{p\ni i_i,\,i_j } \mu_{i}^p(t,\,b_i ) \qquad & \forall i_i\in\mathcal{i}^j,\ , i_j\in\mathcal{o}^j \\ \label{chapdnl : lwrpdaeeqn5 } \rho_{k}(t,\,b_k)~=~rs^{a^j}_k\left[\big(\rho_{i}(t,\ , b_i-)\big)_{i_i\in\mathcal{i}^j}~,~\big(\rho_{j}(t,\,a_j+)\big)_{i_j\in\mathcal{o}^j}\right]\qquad & \forall i_k \in \mathcal{i}^j \\ \label{chapdnl : lwrpdaeeqn6 } \rho_{l}(t,\,a_l)~=~rs^{a^j}_l\left[\big(\rho_{i}(t,\ , b_i-)\big)_{i_i\in\mathcal{i}^j}~,~\big(\rho_{j}(t,\,a_j+)\big)_{i_j\in\mathcal{o}^j}\right]\qquad & \forall i_l\in \mathcal{o}^j \\ \label{chapdnl : lwrpdaeeqn7 } d_p(t,\,h)~=~\lambda_s \circ \lambda_1 \circ \lambda_2 \ldots \circ\lambda_{m(p ) } ( t ) \qquad & \forall p\in\mathcal{p},\quad \forall t\in[0,\,t]\end{aligned}\ ] ] eqn describes the ( potential ) queuing process at each origin .eqns and express the queue exit time function for a point queue , and the link exit time function for a link , respectively .eqns - express the link dynamics in terms of car density and pdv ; eqns - specifies the upstream boundary conditions for the pdv as these variables can only propagate forward in space .eqns - determine the boundary conditions at junctions .finally , eqn determines the path travel times .the above pdae system involves partial differential operators and .solving such a system requires solution techniques from the theory of numerical _ partial differential equations _ ( pde ) such as finite difference methods and finite element methods .in mathematical modeling , the term _ well - posedness _ refers to the property of having a unique solution , and the behavior of that solution hardly changes when there is a slight change in the initial / boundary conditions .examples of well - posed problems include the initial value problem for scalar conservation laws , and the initial value problem for the hamilton - jacobi equations . in the context of traffic network modeling ,well - posedness is a desirable property of network performance models capable of supporting analyses and computations of dta models .it is also closely related to the continuity of the path delay operator , which is the main focus of this paper .this section investigates the well - posedness ( i.e. continuous dependence on the initial / boundary conditions ) of two specific junction models .these two junctions are depicted in figure [ figtwojunc ] , and the corresponding merge and diverge rules are proposed initially by in a discrete - time setting with fixed vehicle turning ratios and driving priority parameters .we first consider the diverge junction shown on the left part of figure [ figtwojunc ] , with one incoming link and two outgoing links and . 
the demand and the supplies , and , are defined by and respectively .the riemann solver for this junction relies on the following two conditions .* cars leaving advance to and according to some turning ratio which is determined by the pdv in the dnl model . *subject to ( a1 ) , the flow through the junction is maximized . in the original diverge model , the vehicle turning ratios , denoted and with obvious meaning of notations , are constants known _a priori_. this is not the case in a dynamic network loading model since they are determined endogenously by drivers route choices , as expressed mathematically by eqn . the diverge junction model , described by ( a1 ) and ( a2 ) ,can be more explicitly written as : where denotes the exit flow of link , and denotes the entering flow on link , .we now turn to the merge junction in figure [ figtwojunc ] , with two incoming links and and one outgoing link . in view of this merge junction , assumption ( a1 ) becomes irrelevant as there is only one outgoing link ; and assumption ( a2 ) can not determine a unique solution . to address this issue , we consider a _right - of - way _parameter and the following priority rule : + ( r1 ) the actual link exit flows satisfy .+ ) are the optimal solution sets of .left : rule ( r1 ) is compatible with ( a2 ) ; and there exists a unique point satisfying both ( a2 ) and ( r1 ) .right : ( r1 ) is incompatible with ( a2 ) ; in this case , the model selects point within the set that is closest to the ray ( ) from the origin with slope . ]notice that ( r1 ) may be incompatible with assumption ( a2 ) ; and we refer the reader to figure [ figmerge ] for an illustration .whenever there is a conflict between ( r1 ) and ( a2 ) , we always respect ( a2 ) and relax ( r1 ) so that the solution is chosen to be the point that is closest to the line among all the points yielding the maximum flow .clearly , such a point is unique .mathematically , we let be the set of points that solves the following maximization problem : moreover , we define the ray .then the solution of the merge model is defined to be the projection of onto ; that is , \ ] ] where ] .our finding is that the jam density can never be reached within any finite time horizon , and the supply anywhere in the network is bounded away from zero . as a consequence , grid lock will never occur in the dynamic network .we denote by the set of destinations in the augmented network with virtual links ( see section [ subsecvirtuallink ] ) ; that is , every destination is incident to a virtual link that connects to the rest of the network ; see figure [ figflowintod ] . for each , we introduce its supply , denoted , to be the maximum rate at which cars can be discharged from the virtual link connected to .effectively , there exists a bottleneck between the virtual link and the destination ; and the supply of the destination is equal to the flow capacity of this bottleneck ; see figure [ figflowintod ] . notice that in some literature such a bottleneck is completely ignored and the destination is simply treated as a sink with infinite receiving capacity .this is of course a special case of ours once we set the supply to be infinity .however , such a supply may be finite and even quite limited under some circumstances due to , for example , ramp metering , limited parking spaces , or the fact that the destination is an aggregated subnetwork that is congested . .] we introduce below a few more concepts and notations . 
: : : the minimum link length in the network , including virtual links . : : : the minimum link flow capacity in the network , including virtual links . : : : the maximum backward wave speed in the network . : : , where is the priority parameter for the merge junction ( see section [ subsecmergemodel ] ) , denotes the set of merge junctions in the network . : : : the minimum supply among all the destination nodes . }\inf_{x\in[a_i,\,b_i]}s_i\left(\rho_i(t,\,x)\right) ] are uniformly bounded below by some constant that depends only on , although such a constant may decay exponentially as increases .given that any dynamic network loading problem is conceived in a finite time horizon , we immediately obtain a lower bound on the supplies , as shown in the corollary below .[ cordelta ] under the same setup as theorem [ thmdelta ] , we have }s_i\big(\rho_i(t,\,x)\big)~\geq~\bar p^{\lfloor { t\over l/\lambda}\rfloor } \left\{\delta^{\mathcal{d}}~,~\bar p c^{min}\right\}\qquad\forall t\in[0,\,t]\ ] ] and }\inf_{t\in[0,\,t]}s_i\big(\rho_i(t,\,x)\big)~\geq~\bar p^{\lfloor { t\lambda\over l}\rfloor}\min\left\{\delta^{\mathcal{d}}~,~\bar p c^{min}\right\}\ ] ] where the operator rounds its argument to the nearest integer from below .both inequalities are immediately consequences of .corollary [ cordelta ] shows that , for any network consisting of the merge and diverge junctions , the supply values are uniformly bounded from below at any point in the spatial - temporal domain . in particular , a jam density can never occur .moreover , such a network is free of complete gridlock .is formed somewhere in the network and the static queues do not dissipate in finite time . ] in this section we establish some properties of the path disaggregation variable , which serve to validate the second hypothesis of theorem [ divwpthm ] .[ lemma : ben ] assume that there exists and so that the following hold : 1 . for all links ,the fundamental diagrams are uniformly linear near the zero density ; more precisely , is constant for ] .the departure rates are uniformly bounded , have bounded total variation ( tv ) , and bounded away from zero when they are non - zero ; i.e. and ] . then the path disaggregation variables are either zero , or uniformly bounded away from zero . moreover , they have bounded variation .the proof is moved to [ subsecapplemma : ben ] .notice that assumptions 1 and 2 of lemma [ lemma : ben ] are satisfied by the newell - daganzo ( triangular ) fundamental diagrams , where both the free - flow and congested branches of the fd are linear .moreover , given an arbitrary fundamental diagram , one can always make minimum modifications at and to comply with conditions 1 and 2 .assumption 3 of lemma [ lemma : ben ] is satisfied by any departure rate that is a result of a finite number of cars entering the network . and ,again , any departure rate can be adjusted to satisfy this condition with minor modifications . in this sectionwe discuss the continuous dependence of the queues and the solutions and with respect to the path departure rates , . the analysis below relies on knowledge of the wave - front tracking algorithm and generalized tangent vector , for which an introduction is provided in [ secappemb ] .following , we introduce a generalized tangent vector for the triplet where is a scalar shift of the queue , i.e. 
the shifted queue is , while the tangent vectors of and are defined in the same way as in [ secgtv ] .the tangent vector norm of is simply its absolute value .[ lemmabenqueue ] assume that the departure rates are piecewise constant , and let be a tangent vector defined via shifting the jumps in .then the tangent vector is well defined , and its norm is equal to that of and bounded for all times . the proof is postponed until [ subsecapplemmabenqueue ] . at the end of this paper , we are able to prove the continuity result for the delay operator based on a series of preliminary results presented so far . *( continuity of the delay operator ) * consider a network consisting of merge and diverge junctions described in section [ secmergediverge ] , under the same assumptions stated in lemma [ lemma : ben ] , the path delay operator , as a result of the dynamic network loading model - , is continuous .we have shown that at each node ( origin , diverge node , or merge node ) , the solution depends continuously on the initial and boundary values .in addition , between any two distinct nodes , the propagation speeds of either -waves or -waves are uniformly bounded .thus such well - posedness continues to hold on the network level .consequently , the vehicle travel speed for any depends also continuously on the departure rates .we thus conclude that the path travel times depend continuously on the departure rates .the assumption that the network consists of only merge and diverge nodes is not restrictive since junctions with general topology can be decomposed into a set of elementary junctions of the merge and diverge type . in addition , junctions that are also origins / destinations can be treated in a similar way by introducing virtual links . here, we would like to comment on the assumptions made in the proof of the continuity .in a recent paper , a counterexample of uniqueness and continuous dependence of solutions is provided under certain conditions .more precisely , the authors construct the counterexample by assuming that the path disaggregation variables have infinite total variation .this shows the necessity of the second assumption of theorem [ divwpthm ] .moreover , again in , the authors prove that for density oscillating near zero the solution may not be unique ( even in the case of bounded total variation ) , which shows the necessity of the third assumption in lemma [ lemma : ben ] . provides an example of discontinuous dependence of the path travel times on the path departure rates using the cell transmission model representation of a signal - controlled network .in particular , the author showed that when a queue generated by the red signal spills back into the upstream junction , the experienced path travel time jumps from one value to another .this , however , does not contradict our result presented here for the following reason : the jam density caused by the red signal in does not exist in our network , which has only merge and diverge junctions ( without any signal controls ) . 
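for completeness, the uniform lower bound of corollary [cordelta] is straightforward to evaluate; the sketch below assumes, in line with the notation introduced above, that the constant p-bar is the smallest of p and 1-p taken over all merge junctions, and the numbers in the usage comment are placeholders only.

```python
# minimal evaluation of the supply lower bound of corollary [cordelta]
from math import floor

def min_supply_bound(T, L_min, lam_max, c_min, delta_dest, priorities):
    """T          : length of the loading horizon
       L_min      : minimum link length (including virtual links)
       lam_max    : maximum backward wave speed
       c_min      : minimum link flow capacity
       delta_dest : minimum supply among the destination nodes
       priorities : right-of-way parameters of the merge junctions"""
    p_bar = min(min(p, 1.0 - p) for p in priorities)
    exponent = floor(T * lam_max / L_min)
    return p_bar ** exponent * min(delta_dest, p_bar * c_min)

# e.g. a 2 h horizon, 0.5 km shortest link, 20 km/h backward waves, capacities
# and destination supply of 1800 veh/h:
# min_supply_bound(2.0, 0.5, 20.0, 1800.0, 1800.0, [0.5, 0.4])
```

as the corollary warns, the bound decays exponentially with the horizon length, but it remains strictly positive for any finite horizon.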
indeed , as shown in corollary [ cordelta ] , the supply functions at any location in the network are uniformly bounded below by a positive constant , and thus the jam density never occurs in a finite time horizon .the reader is reminded of the example presented in section [ subsubsecexample ] , where the ill - posedness of the diverge model is caused precisely by the presence of a jam density .the counterexample from is constructed essentially in the same way as our example , by using signal controls that create the jam density .we offer some further insights here on the extension of the continuity result to second - order traffic flow models . to the best of our knowledge , second - order models on traffic networks are mainly based on the aw - rascle - zhang model and the phase - transition model .solutions of the aw - rascle - zhang system may present the vacuum state , which , in general , may prevent continuous dependence . on the other hand , the phase transition model on networksbehaves in a way similar to the lwr scalar conservation law model .however , continuous dependence may be violated for density close to maximal ; see .therefore , extensions of the continuity result to second - order models appear possible only for the phase - transition type models with appropriate assumptions on the maximal density achievable on the network , but this is beyond the scope of this paper .this paper presents , for the first time , a rigorous continuity result for the path delay operator based on the lwr network model with spillback explicitly captured .this continuity result is crucial to many dynamic traffic assignment models in terms of solution existence and computation .similar continuity results have been established in a number of studies , all of which are concerned with non - physical queue models . as we show in section [ subsubsecexample ] , the well - posedness of a diverge modelmay not hold when spillback occurs .this observation , along with others made in previous studies , have been the major source of difficulty in showing continuity of the delay operator . in this paper , we bridge this gap through rigorous mathematical analysis involving the wave front tracking method and the generalized tangent vectors . 
in particular , by virtue of the finite propagation speeds of -waves and -waves ,the continuity of the delay operator boils down to the well - posedness of nodal models , including models for the origins , diverge nodes , and merge nodes .minor assumptions are made on the fundamental diagram and the path departure rates in order to provide an upper bound on the total variations of the density and the path disaggregation variables , which subsequently leads to the desired well - posedness of the nodal models and eventually the continuity of the operator .a crucial step of the above process is to estimate and bound from below the minimum network supply , which is defined in terms of local vehicle densities .in fact , if the supply of some link tends to zero , the well - posedness of the diverge junction may fail as we demonstrate in section [ subsubsecexample ] .this has also been confirmed by an earlier study , where a wave of jam density is triggered by a signal red light and causes spillback at the upstream junction , leading to a jump in the path travel times .remarkably , in this paper we are able to show that ( 1 ) if the supply is bounded away from zero , then the diverge junction model is well posed ; and ( 2 ) the desired boundedness of the supply is a natural consequence of the dynamic network loading procedure that involves only the simple merge and diverge junction models .this is a highly non trivial result because it not only plays a role in the continuity proof , but also implies that gridlock can never occur in the network loading procedure in a finite time horizon .however , we note that in numerical computations gridlock may very well occur due to finite approximations and numerical errors , while our no - gridlock result is conceived in a continuous - time and analytical ( i.e. , non - numerical ) framework .it is true that , despite the correctness of our result regarding the lower bound on the supply , zero supply ( or gridlock ) can indeed occur in real - life traffic networks as a result of signal controls .in fact , this is how constructs the counter example of discontinuity .nevertheless , signal control is an undesirable feature of the dynamic network loading model for far more obvious reasons : the on - and - off signal control creates a lot of jump continuities in the travel time functions , making the existence of dynamic user equilibria ( due ) almost impossible .given that the main purpose of this paper is to address fundamental modeling problems pertinent to dues , it is quite reasonable for us to avoid junction models that explicitly involve the on - and - off signal controls .nevertheless , there exist a number of ways in which the on - and - off signal control can be approximated using continuum ( homogenization ) approaches .some examples are given in and .regarding the exponential decay of the minimum supply , we point out that , as long as the jam density ( gridlock ) does not occur , our result is still applicable . 
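since the simple merge and diverge models carry the whole construction, a compact restatement in code may help fix ideas; the sketch below is the usual demand/supply formulation of rules (a1)-(a2) and (r1), with the projection of figure [figmerge] implemented as the standard clipping of the priority split (all function and variable names are illustrative).

```python
# minimal sketches of the diverge and merge riemann solvers of section
# [secmergediverge]; d* are demands, s* are supplies, p the right-of-way.

def diverge(d1, s2, s3, alpha2, alpha3):
    """one incoming link, two outgoing links, fixed turning ratios (a1)+(a2)."""
    q1 = min(d1,
             s2 / alpha2 if alpha2 > 0 else float('inf'),
             s3 / alpha3 if alpha3 > 0 else float('inf'))
    return q1, alpha2 * q1, alpha3 * q1   # exit flow and the two link inflows

def merge(d1, d2, s3, p):
    """two incoming links, one outgoing link; (r1) relaxed to the projection
       onto the flow-maximising set whenever it conflicts with (a2)."""
    if d1 + d2 <= s3:                     # junction not supply-constrained
        return d1, d2
    q1, q2 = p * s3, (1.0 - p) * s3       # try the priority split on q1+q2=s3
    if q1 > d1:                           # infeasible for link 1: clip to d1
        q1, q2 = d1, s3 - d1
    elif q2 > d2:                         # infeasible for link 2: clip to d2
        q1, q2 = s3 - d2, d2
    return q1, q2
```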
from a practical point of viewmost networks reach congestion during day time but are almost empty during night time .in other words , the asymptotic gridlock condition is never reached in reality .therefore , we think that our result is a reasonable approximation of reality for due problems .we would also like to comment on the continuity result for other types of junction models .prove that lipschitz continuous dependence on the initial conditions may fail in the presence of junctions with at least two incoming and two outgoing roads .notably , the counterexample that they use to disprove the lipschitz continuity is valid even when the turning ratio matrix is constant and not dependent on the path disaggregation variables .it is interesting to note that , in the same book , continuity results for general networks are provided , but only for riemann solvers at junctions that allow re - directing traveling entities a feature not allowed in the dynamic network loading model but instead is intended for the modeling of data networks .the findings made in this paper has the following important impacts on dta modeling and computation .( 1 ) the analytical framework proposed for proving or disproving the well - posedness of junction models and the continuity property can be applied to other junction types and riemann solvers , leading to a set of continuity / discontinuity results of the delay operators for a variety of networks .( 2 ) the established continuity result not only guarantees the existence and computation of continuous - time dues based on the lwr model , but also sheds light on discrete - time due models and provides useful insights on the numerical computations of dues .in particular , the various findings made in this paper provide valuable guidance on numerical procedures for the dnl problem based on the cell transmission model , the link transmission model , or the kinematic wave model , such that the resulting delay operator is continuous on finite - dimensional spaces . as a result, the existence and computation of discrete - time dues will also benefit from this research .the wave - front tracking ( wft ) method was originally proposed by as an approximation scheme for the following initial value problem where the initial condition is assumed to have _ bounded variation _ ( bv ) .the wft approximates the initial condition using _ piecewise constant _ ( pwc ) functions , and approximates using _ piecewise affine _ ( pwa ) functions .it is an event - based algorithm that resolves a series of wave interactions , each expressed as a riemann problem ( rp ) .the wft method is primarily used for showing existence of weak solutions of conservation laws by successive refinement of the initial data and the fundamental diagram ; see and . extend the wft to treat the network case and show the existence of the weak solution on a network .we provide a brief description of this procedure below .fix a riemann solver ( rs ) for each road junction in the network .consider a family of piecewise constant approximations of the initial condition on each link and a family of piecewise affine approximation of the fundamental diagrams , where is a parameter such that and as .a wft approximate solution on the network is constructed as follows . 1 . within each link ,solve a riemann problem at each discontinuity of the pwc initial data .at each junction , solve a riemann problem with the given rs .2 . 
construct the solution by gluing together the solutions of individual rps up to the first time when two traveling waves interact , or when a wave interacts with a junction .3 . for each interaction , solve a new rp and prolong the solution up to the next time of any interaction .4 . repeat the processes 2 - 3 . to show that the procedure described above indeed produces a well - defined approximate solution on the network, one needs to ensure that the following three quantities are bounded : ( 1 ) the total number of waves ; ( 2 ) the total number of interactions ( including wave - wave and wave - junction interactions ) ; and ( 3 ) the _ total variation _ ( tv ) of the piecewise constant solution at any point in time .these quantities are known to be bounded in the single conservation law case ; in fact , they all decrease in time .however , for the network case , one needs to proceed carefully in estimating these quantities as they may increase as a result of a wave interacting with a junction , which may produce new waves in all other links incident to the same junction .the reader is referred to for more elaborated discussion on these interactions . for a sequence of approximate wft solutions , , if one can show that the total variation of is uniformly bounded , then as a weak entropy solution on the network is obtained .the generalized tangent vector is a technique proposed by to show the well - posedness of conservation laws , and is used later by to show the well - posedness of junction models in connection with the lwr model .its mathematical contents are briefly recapped here . given a piecewise constant function \to \mathbb{r} ] , ] .it is also useful to keep in mind that densities beyond the critical density always propagate backwards in space . the proof is divided into several steps .+ * [ step 1 ] .* .we have .since the network is initially empty , all the supplies , ] .let be the time of the interaction .in view of , if the minimum is attained at , then the interaction does not bring any increase in density at the downstream end of , hence no decrease in the supply there .+ if the minimum is attained at , we deduce in a similar way as that the last inequality is due to the fact that a backward wave such as must be created at the downstream end of at a time earlier than , thus its supply value must be bounded below by . if the minimum is attained at , we have the last inequality is because any lower - than - maximum supply on must be created in the previous time interval . + to sum up ,the supply values corresponding to the backward waves generated at , if any , are bounded below by . * * case ( 5 ) .* for the merge junction , we begin with the case illustrated in figure [ figmerge](a ) . 
assuming the backward wave that interacts with the merge junction from has the density value ] for every and .therefore , at diverging junctions , the coefficients and satisfy the same properties .+ * [ step 4 ] .* let us now turn to the total variation of the path disaggregation variables .we know , from * step 2 * , that has bounded variation on virtual links incident to the origins .we also know from * step 3 * that is bounded away from zero whenever it is non - zero ; and the same holds for the turning ratios at diverge junctions .consider a -wave , then its variation can change only upon interaction with diverge junctions .more precisely , denote by and the wave respectively before and after the interaction , we have where or and is bounded away from zero .since the -waves travel only forward on the links with uniformly bounded speed , we must have that the interactions with diverge junctions can occur only finite number of times . thus we conclude that has bounded total variation .let be the shift of the -th jump of , which occurs at time , then the expression of is possibly affected on the interval $ ] ( assuming ) . more precisely , if then no wave is generated on the virtual link while a shift in the queue is generated with , where is the jump in occurring at time . on the other hand ,if then a wave is produced at time on the virtual link with shift where is the speed of the wave .we can then compute : moreover , , thus the norm of the tangent vector generated is the same as before . to prove that the norm of the tangent vector is bounded for all times , let us first consider a backward - propagating wave interacting with the queue , and with a shift .then we can write : where and are the time - derivatives of before and after the interaction , respectively .therefore , denoting the speed of the wave , we get : thus the norm of the tangent vector is bounded .aboudolas , k. , papageorgiou , m. , komsmatopoulous , e. , 2009 .store - and - forward based methods for the signal control problem in large - scale congested urban road networks .transportation research part c , 17 ( 2 ) , 163 - 174 .han , k. , friesz , t.l . ,szeto , w.y . ,liu , h. , 2015a .elastic demand dynamic network user equilibrium : formulation , existence and computation .transportation research part b , doi : 10.1016/j.trb.2015.07.008 .han , k. , friesz , t.l . , yao , t. , 2013a . a partial differential equation formulation of vickrey s bottleneck model , part i : methodology and theoretical analysis . transportation research part b 49 , 55 - 74 .han , k. , piccoli , b. , szeto , w.y .continuous - time link - based kinematic wave model : formulation , solution existence , and well - posedness .transportmetrica b : transport dynamics , doi : 10.1080/21680566.2015.1064793 .yperman , i. , logghe , s. , immers , l. , 2005 .the link transmission model : an efficient implementation of the kinematic wave theory in traffic networks , advanced or and ai methods in transportation , proc .10th ewgt meeting and 16th mini - euro conference , poznan , poland , 122 - 127 , publishing house of poznan university of technology .
this paper establishes the continuity of the path delay operators for _dynamic network loading_ (dnl) problems based on the lighthill-whitham-richards model, which explicitly captures vehicle spillback. the dnl procedure describes and predicts the spatial-temporal evolution of traffic flow and congestion on a network in a way that is consistent with established route and departure time choices of travelers. the lwr-based dnl model is first formulated as a system of _partial differential algebraic equations_ (pdaes). we then investigate the continuous dependence of the merge and diverge junction models on their initial/boundary conditions, which leads to the continuity of the path delay operator through the _wave-front tracking_ methodology and the _generalized tangent vector_ technique. as part of the analysis leading up to the main continuity result, we also provide an estimate of the minimum network supply without resorting to any numerical computation. in particular, it is shown that gridlock can never occur in a finite time horizon in the dnl model. keywords: path delay operator, continuity, dynamic network loading, lwr model, spillback, gridlock
within the last few years _ relational database management systems _ ( rdbms or db in short ) have become essential for control system configuration . with availability of generic applications and hardware standards the interplay of components as well asthe adaptation to site specific needs has become crucial .proper networking has been the central problem of the past .today s challenge is a central repository of reference and configuration data as well as an appropriate standard suite of db applications .target is a management system providing consistency in a programmable , comprehensible and automatic way for development , test and production phases of the control system . instead of careful bookkeeping of innumerable hand - edited filesthe ever needed modifications of the facility require ` only ' change of atomic and unique configuration data in the db and eventually adaptation of structures in the db ( applications ) .the update of tool configurations is then consistently accomplished by direct db connection , renewal of snapshot files generated by extraction scripts etc .very early a device oriented approach has been chosen for the description of the 3rd generation light source bessy ii .a naming convention was developed for easy identification and parsing of classifying properties like installation location , device family , type and instance . around the ` bootstrap ' information contained in the devicenames a first reference database has been set up , describing wiring , calibrations , geometries etc .utilisation of the epics control system toolkit implicates the network protocol called _ channel access _ for the i / o of process variables . at bessy the device model results in a scheme : < channel>. generation of the epics _ real time databases _ ( rtdb ) for device classes with high multiplicity and simple i / o ( power supplies , vacuum system , timings ) is based on two components : device class specific data are stored in the db .functionality and logic of is modelled with a graphical editor and stored in a template file .generating scripts merge both into the actual rtdb . for unique and complex systems ( rf , insertion devices ) [ rf ] structuring takes place in where no common naming convention has been defined so far .contrary to the previous approach of complex template / atomic substitution here all channels involved are assigned to sub - units and hierarchies .the structure is fully implemented in the db and for rtdb generation only connected with simple atomic templates .intermediate systems ( scraper , gp - ib devices ) are either adapted to one of these approaches or simply set up by ad - hoc created files .documentation of and transitions between different facility operation conditions are handled by a save / restore / compare tool that is working on a set of snapshot files .coarse configuration is provided by sql retrievals of name lists .hierarchies and partitioning are derived from device - name patterns and mapped into directories and files .this is feasible only because the relevant channels are restricted to setpoint , readback and status .deviations of this scheme are few and easily maintainable by hand .information required by modelling tools for conversion between engineering units of device i / o and the physics views ( magnet function , length , position , conversion factor etc . )are fully available in the db . 
therefore the linear optics correction tools ( orbit , tune ) are instantaneously consistently configured provided the installed hardware matches the entries in the db .as long as the alarm handler has to monitor only hardware trips db retrieval of device name collections and script based interpretation of device name patterns are helpful at least for the simple devices with high multiplicity : typical channels are on / off status .again grouping and hierarchies can be derived from the class description embedded in the device name .for the complex devices ( rf ) control functionality and logic has already been mapped into db structures ( see [ rf ] ) . heregenuine db calls produce alarm handler configuration files with sophisticated error reporting capabilities . in a similar fashion creation of configuration files for the data collector engine(s ) of the archiving systemis simplified by db calls : for each device class a limited number of signals and associated frequencies are of interest for long term monitoring .the db basically serves as source for device name collections .high level software using the cdev api ( configured with a _ device description language _ file ) and ` foreign ' networks ( connected by the _ channel access gateway _ ) benefit also from the db : for cdev access definition and permission to selected i / o channels for each device class is defined by appropriate prototype descriptions ( in analogy to the rtdb templates ) .the associated lists of devices are compiled with db calls ._ ca _ gateway takes advantage of the naming convention by regular expression evaluation .facility modifications typically change the device inventory .the db supports consistent propagation of innovations : during an installation campaign devices are added or deleted in the db .in addition , new device classes are modelled within script logics and template ( prototype ) descriptions . running the configuration scripts on all control system levelsupdates the configurations within development environment , test system and production area .a number of restrictions imposed by the present device oriented db model are solvable by minor structural modifications and consequent introduction of channels necessary for the adequate description of essential device properties .dependent on the specific point of view , definition of device classes and assignment of equipment comprises ambiguities .e.g. 
the device class _magnet_ (m) is tightly bound to the model aspect of the storage ring, whereas the class _power supply_ (p) covers the engineering aspects of the current converters. pulsed elements (kicker, septa) are very similar devices, but do not fit into this scheme - neither into the model nor the i/o aspects. the decision to assign a single dedicated device class (k) introduces a new pattern (fig. [devnew]) and breaks the naming structure. the examples in fig. [devnew] include: a horizontal corrector (windings) in sextupole 4; its power supply; the insertion device ue56 as a complex device with lots of _internal_ hardware units; and the power supply of a horizontal ue56 bending magnet (_external access!_). difficulties get worse for complex pieces of equipment: insertion devices have to be treated as `units'. they can be completely replaced by different entities of similar complexity. typically they consist of a number of devices, some belonging to already described device classes (like bipolar power supplies). the straightforward solution here is the extension of the device naming convention: substructures and naming rules have to be introduced also for the (up to now monolithic) device-property-describing name element (the `genome chart'), and corresponding db structures have to be implemented. similarly, naming rules have to be developed where structuring takes place in (fig. [rfnew]). in summary, the distinction and boundary between device and channel are dictated by the specifics of the i/o connectivity and are arbitrary with respect to the required db structure; they do not necessarily coincide with the most adequate classification aspects.

several global states are well defined for the whole facility (e.g. `shutdown', `machine development', `user service' etc.) or for sub-sections or device collections: `injection running', `wavelength shifter on' etc. the role of devices depends on these states: in the context of `shutdown', an insufficient liquid-helium level at super-conducting devices has to generate a high-severity alarm, while for power supplies even off states are then no failure.
but during ` user service ' alarms should notify the operator already when power supply readbacks are out of meaningful bounds .similar case distinctions have to be made for all conditioning applications ( reload constraints for save / restore , active / inactive elements for modelling ) , for data archiving ( active periods , frequency / monitor mode ) , for sequencing programs etc .today only the most general conditions are statically configured and available from the database / configuration scripts .exceptions are partly coded within the applications or have to be handled by the operators an error - prone situation .a clearly laid out man - machine interface provides an effective protection against accidental maloperation .presentation details and accessibility of devices should be tailored to the user groups addressed : operators , device experts , accelerator physicists .the required attributes attached to the device channels should be easily retrievable from the db .starting from the device model a very specific view of the control system structure has been mapped into db structure : device classes correspond to tailored set of tables .deviating aspects that do not fit into the scheme are modelled by relations , constraints , triggers and presented to the applications by pre - built views .the result is not very flexible , hard to extend and complicated to maintain in a consistent state free from redundancies .numerous device attributes are scattered over template / prototype files and configuration creating scripts in an implicit format without clear and explicit relation to the data within the db .configurations of archiver retrieval tools are much more demanding than those of data collector processes . in search of correlationsarbitrary channel names have to be detected by their physical dimension : for a drift analysis the data sources for temperatures [ c ] , rf frequency [ mhz ] , bpm deviations [ mm ] and corrector kicks [ mrad ] have to be discovered . any non - static , not pre - configured retrieval interface to the most important performance analysis tool of the facility archived data requires clues to signal names , meaning , functionality , dimensions , attributes . with the rtdb template approachthe channel attributes are hidden within these files and not available to the db .consequently , no db based data relations are available for correlations ( comparable dimensions ) , projections ( user , expert , device responsible ) or dependencies ( triggered by ) .there is no browsable common data source that allows to coordinate consistency of configurations requiring channel attributes ( that go beyond simple and clear assignments to well defined device classes ) .the device model is tailored for modelling applications . for the archiver retrievalit is useless continuously causing errors and inconsistencies .in a kind of clean - up effort the existing fragments used for rtdb generation will be put into a new scheme .sacrificing the feasibility to generate complex rtdb templates with a graphical editor , now device class tables , templates and i / o software modules ( _ records _ ) will be put together as a new db core . in view of the extendibility for high level applicationsa purely db based approach has been taken even though for rtdb generation the framework of xml seems to be an attractive and well suited alternative . a new entity _gadget _ is introduced connecting the data spaces _ device _ , _ channel _ and _ iostruct_. 
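to illustrate the intended direction (and only as a hypothetical sketch - the actual table and column names will differ), the new core could look roughly as follows, with configuration scripts reduced to simple joins over the gadget relation.

```python
# purely illustrative sketch of a device / channel / iostruct core tied
# together by the new 'gadget' entity; schema and names are hypothetical.
import sqlite3

DDL = """
create table device   (device_id  integer primary key, name text, class text);
create table channel  (channel_id integer primary key, name text, unit text);
create table iostruct (io_id      integer primary key, record_type text,
                       template   text);
create table gadget   (gadget_id  integer primary key,
                       device_id  integer references device(device_id),
                       channel_id integer references channel(channel_id),
                       io_id      integer references iostruct(io_id));
"""

con = sqlite3.connect(":memory:")
con.executescript(DDL)

# a configuration script then collapses to a query, e.g. all channels and
# record templates belonging to one device class:
rows = con.execute("""
    select d.name, c.name, i.template
      from gadget g
      join device d   on d.device_id  = g.device_id
      join channel c  on c.channel_id = g.channel_id
      join iostruct i on i.io_id      = g.io_id
     where d.class = ?""", ("power_supply",)).fetchall()
```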
these db sections have to be set up consistently in 3rd normal form, i.e. all device-class properties, channel attributes and i/o specifics have to be modelled in data and relations, not in table structures. in the blueprint, the resulting data model looks abstract and (intimidatingly) complex, but promising with respect to db flexibility, extendibility and cleanness. the expectation that today's complicated configuration scripts collapse to simple sequences of db queries seems to be justified. the most difficult problem of a comprehensive configuration management system is the determination of `natural' data structures and breaking them down into localised data areas, dependencies and relations. for a relatively small installation like bessy ii, the expected reward in stability and configuration consistency does not clearly justify the effort and risk of the envisaged re-engineering task. one could argue that the present mixture of db-based and hand-edited configuration is an efficient compromise. on the other hand, the persistent and boring workload caused by manual configuration updates is a constant source of motivation to pursue this re-engineering project.
the reference rdbms for bessy ii has been set up with a device-oriented data model. this has proven adequate for, e.g., template-based rtdb generation, modelling etc. but since the assigned i/o channels have been stored outside the database, (a) numerous specific conditions had to be maintained within the scripts generating configuration files and (b) several generic applications could not be set up automatically by scripts. in a larger re-design effort the i/o channels are introduced into the rdbms. this modification allows us to generate a larger set of rtdbs, map specific conditions into database relations and maintain application configurations by relatively simple extraction scripts.
quantum key distribution ( qkd ) enables an unconditionally secure means of distributing secret keys between two spatially separated parties , alice and bob .the security of qkd has been rigorously proven based on the laws of quantum mechanics .nevertheless , owing to the imperfections in real - life implementations , a large gap between its theory and practice remains unfilled .in particular , an eavesdropper ( eve ) may exploit these imperfections and launch specific attacks .this is commonly called quantum hacking . the first successful quantum hacking against a commercial qkd system was the time - shift attack based on a proposal in .more recently , the phase - remapping attack and the detector - control attack have been implemented against various practical qkd systems .also , other attacks have appeared in the literature .these results suggest that quantum hacking is a major problem for the real - life security of qkd . to close the gap between theory and practice , a natural attempt was to characterize the specific loophole and find a countermeasure . for instance ,yuan , dynes and shields proposed an efficient countermeasure against the detector - control attack . once an attack is known , the prevention is usually uncomplicated .however , unanticipated attacks are most dangerous , as it is impossible to fully characterize real devices and account for _ all _ loopholes .hence , researchers moved to the second approach ( full ) device - independent qkd .it requires no specification of the internal functionality of qkd devices and offers nearly perfect security .its legitimate users ( alice and bob ) can be treated as a _black box by assuming no memory attacks .nevertheless , device - independent qkd is not really practical because it requires near - unity detection efficiency and generates an extremely low key rate .therefore , to our knowledge , there has been no experimental paper on device - independent qkd .fortunately , lo , curty and qi have recently proposed an innovative scheme measurement - device - independent qkd ( mdi - qkd ) that removes all detector side - channel attacks , the most important security loophole in conventional qkd implementations . as an example of a mdi - qkd scheme ( see fig .[ fig : model ] ) , each of alice and bob locally prepares phase - randomized signals ( this phase randomization process can be realized using a quantum random number generator such as ) in the bb84 polarization states and sends them to an _ untrusted _ quantum relay , charles ( or eve ) .charles is supposed to perform a bell state measurement ( bsm ) and broadcast the measurement result .since the measurement setting is only used to post - select entanglement ( in an equivalent virtual protocol ) between alice and bob , it can be treated as a _ true _ black box .hence , mdi - qkd is inherently immune to all attacks in the detection system .this is a major achievement as mdi - qkd allows legitimate users to not only perform secure quantum communications with untrusted relays but also out - source the manufacturing of detectors to untrusted manufactures .conceptually , the key insight of mdi - qkd is _time reversal_. this is in the same spirit as one - way quantum computation .more precisely , mdi - qkd built on the idea of a time - reversed epr protocol for qkd . 
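to make the post-selection step concrete, one common convention for the classical sifting and bit-flip rule is sketched below; the event format, the outcome labels and the bit convention are assumptions of this illustration rather than a description of any particular implementation.

```python
# minimal sketch of mdi-qkd sifting: keep only events with a successful
# bell-state measurement and matching bases, then apply the bit-flip rule.

def sift(events):
    """events: iterable of (basis_a, bit_a, basis_b, bit_b, bsm_outcome),
       with basis in {'Z', 'X'} and bsm_outcome in {'psi+', 'psi-', None}."""
    key_a, key_b = [], []
    for basis_a, bit_a, basis_b, bit_b, outcome in events:
        if outcome is None or basis_a != basis_b:
            continue                      # failed bsm or basis mismatch
        if basis_a == 'Z':
            bit_b ^= 1                    # psi+ and psi- are anti-correlated in z
        elif outcome == 'psi-':
            bit_b ^= 1                    # in x only psi- is anti-correlated
        key_a.append(bit_a)
        key_b.append(bit_b)
    return key_a, key_b
```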
by combining the decoy - state method with the time - reversed epr protocol , mdi - qkd gives both good performance and good security .mdi - qkd is highly practical and can be implemented with standard optical components .the source can be a non - perfect single - photon source ( together with the decoy - state method ) , such as an attenuated laser diode emitting weak coherent pulses ( wcps ) , and the measurement setting can be a simple bsm realized by linear optics .hence , mdi - qkd has attracted intensive interest in the qkd community .a number of follow - up theoretical works have already been reported in . meanwhile ,experimental attempts on mdi - qkd have also been made by several groups .nonetheless , before it can be applied in real life , it is important to address a number of practical issues .these include : 1 .modelling the errors : an implementation of mdi - qkd may involve various error sources such as the mode mismatch resulting in a non - perfect hong - ou - mandel ( hom ) interference .thus , the first question is : how will these errors affect the performance of mdi - qkd ? or , what is the physical origin of the quantum bit error rate ( qber ) in a practical implementation ? 2 .finite decoy - state protocol and finite - key analysis : as mentioned before , owing to the lack of true single - photon sources , qkd implementations typically use laser diodes emitting wcps and single - photon contributions are estimated by the decoy - state protocol . in addition , a real qkd experiment is completed in finite time , which means that the length of the output keys is finite .thus , the estimation of relevant parameters suffers from statistical fluctuations .this is called the finite - key effect .hence , the second question is : how can one design a practical finite decoy - state protocol and perform a finite - key analysis in mdi - qkd ? 3 .choice of intensities : an experimental implementation needs to know the optimal intensities for the signal and decoy states in order to optimize the system performance .previously , and have independently discussed the finite decoy - state protocol .however , the high computational cost of the numerical approach proposed in , together with the lack of a rigorous discussion of the finite - key effect in both and , makes the optimization of parameters difficult .thus , the third question is : how can one obtain these optimal intensities ? 4 .asymmetric mdi - qkd : as shown in fig .[ fig : general ] , in real life , it is quite common that the two channels connecting alice and bob to charles have different transmittances .we call this situation _ asymmetric _ mdi - qkd . importantly ,this asymmetric scenario appeared naturally in a recent proof - of - concept experiment , where a tailored length of fiber was intentionally added in the short arm to balance the two channel transmittances .since an additional loss is introduced into the system , it is unclear whether this solution is optimal .hence , the final question is : how can one optimize the performance of this asymmetric case ?the second question has already been discussed in and solved in . in this paper , we offer additional discussions on this point and answer the other questions .our contributions are summarized below . 1 . to better understand the physical origin of the qber , we propose generic models for various error sources . 
in particular , we investigate two important error sources polarization misalignment and mode mismatch .we find that in a polarization - encoding mdi - qkd system , polarization misalignment is the major source contributing to the qber and mode mismatch ( in the time or frequency domain ) , however , does not appear to be a major problem .these results are shown in fig .[ misalignment : asymptotic ] and fig .[ timejitter : simulation ] .moreover , we provide a mathematical model to simulate a mdi - qkd system .this model is a useful tool for analyzing experimental results and performing the optimization of parameters .although this model is proposed to study mdi - qkd , it is also useful for other non - qkd experiments involving quantum interference , such as entanglement swapping and linear optics quantum computing .this result is shown in [ app : qande ] .a previous method to analyze mdi - qkd with a finite number of decoy states assumes that alice and bob can prepare a vacuum state . here, however , we present an analytical approach with two _ general _ decoy states , _i.e. _ , without the assumption of vacuum .this is particularly important for the practical implementations , as it is usually difficult to create a vacuum state in decoy - state qkd experiments .the different intensities are usually generated with an intensity modulator , which has a finite extinction ratio ( _ e.g. _ , around 30 db ) .additionally , we also simulate the expected key rates numerically and thus present an optimized method with two decoy states .ignoring for the moment the finite - key effect , experimentalists can directly use this method to obtain a rough estimation of the system performance .table [ tab : finite : equations ] contains the main results for this point .by combining the system model , the finite decoy - state protocol , and the finite - key analysis of , we offer a general framework to determine the optimal intensities of the signal and decoy states . notice that this framework has already been adopted and verified in the experimental demonstration reported in .these results are shown in fig .[ fig : flowchart ] and fig .[ fig : optimal : intensity ] .4 . finally ,we model and evaluate the performance of an _ asymmetric _ mdi - qkd system .this allows us to study its properties and determine the experimental configuration that maximizes its secret key rate .these results are shown in table [ tab : opt : parameters ] and fig .[ fig : advantage : key ] .the secure key rate of mdi - qkd in the asymptotic case ( _ i.e. _ , assuming an infinite number of decoy states and signals ) is given by -q_{z}f_{e}(e_{z})h_{2}(e_{z}),\ ] ] where and are , respectively , the yield ( _ i.e. _ , the probability that charles declares a successful event ) in the rectilinear ( ) basis and the error rate in the diagonal ( ) basis given that both alice and bob send single - photon states ( denotes this probability in the basis ) ; is the binary entropy function given by = ; and denote , respectively , the gain and qber in the basis and is the error correction inefficiency function .here we use the basis for key generation and the basis for testing only . in practice , and directly measured in the experiment , while and can be estimated using the finite decoy - state method .next , we introduce some additional notations .we consider one signal state and two weak decoy states for the finite decoy - state protocol .the parameter is the intensity ( _ i.e. 
_ , the mean photon number per optical pulse ) of the signal state . ] . and are the intensities of the two decoy states , which satisfy .the sets \{,, } and \{,, } contain respectively alice s and bob s intensities .the sets \{,, } and \{,, } denote the optimal intensities that maximize the key rate . and ( and )denote the channel distance and transmittance from alice ( bob ) to charles . in the case of a fiber - based system , = with denoting the channel loss coefficient ( .2 db / km for a standard telecom fiber ) . is the detector efficiency and is the background rate that includes detector dark counts and other background contributions .the parameters , , and denote , respectively , the errors associated with the polarization misalignment , the time - jitter and the total mode mismatch ( see definitions below ) .in this section , we consider the original mdi - qkd setting , _ i.e. _ , the symmetric case with = . the asymmetric case will be discussed in section [ sec : asymmetric ] . to model the practical error sources , we focus on the fiber - based polarization - encoding mdi - qkd system proposed in and demonstrated in .notice , however , that with some modifications , our analysis can also be applied to other implementations such as free - space transmission , the phase - encoding scheme and the time - bin - encoding scheme .see also and respectively for models of phase - encoding and time - bin - encoding schemes .a comprehensive list of practical error sources is as follows . 1 . polarization misalignment ( or rotation ) .mode mismatch including time - jitter , spectral mismatch and pulse - shape mismatch .fluctuations of the intensities ( modulated by alice and bob ) at the source .4 . background rate .asymmetry of the beam splitter . here, we primarily analyze the first two error sources , _i.e. _ , polarization misalignment and mode mismatch .the other error sources present minor contributions to the qber in practice , and are discussed in [ app : othererrors ] .polarization misalignment ( or rotation ) is one of the most significant factors contributing to the qber in not only the polarization - encoding bb84 system but also the polarization - encoding mdi - qkd system . since mdi - qkd requires two transmitting channels and one bsm ( instead of one channel and a simple measurement as in the bb84 protocol ), it is cumbersome to model its polarization misalignment . here, we solve this problem by proposing a simple model in fig .[ fig : model ] .one of the polarization beam splitters ( pbs2 in fig .[ fig : model ] ) is defined as the fundamental measurement basis .three unitary operators , \{ , , } , are considered to model the polarization misalignment of each channel .the operator ( ) represents the misalignment of alice s ( bob s ) channel transmission , while models the misalignment of the other measurement setting , pbs1 . for simplicity, we consider a simplified model with a 2-dimensional unitary matrix and , and the outgoing modes by and , then the unitary operator yields an evolution of the form and .this unitary matrix is a simple form rather than the general one ( see section i.a in ) .nonetheless , we believe that the result for a more general unitary transformation will be similar to our simulation results . ] where =1 , 2 , 3 and ( polarization - rotation angle ) is in the range of [ , . 
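numerically, the unitary used in this misalignment model is just a rotation of the horizontal/vertical modes; as a minimal sketch (the angle below is an arbitrary placeholder), a horizontally prepared state leaks into the orthogonal output port with probability sin^2(theta), which can be read as the misalignment error contributed by that single rotation.

```python
# minimal numerical sketch of the polarization-rotation model
import numpy as np

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

h = np.array([1.0, 0.0])          # jones vector of a horizontally polarized state
theta = 0.05                      # polarization-rotation angle (radians), placeholder
out = rotation(theta) @ h
p_wrong = abs(out[1]) ** 2        # = sin(theta)**2, the leakage into the v port
```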
for each value of , we define the polarization misalignment error = and the total error = .note that is equivalent to the systematic qber in a polarization - encoding bb84 system ..list of practical parameters for all numerical simulations .these experimental parameters , including the detection efficiency , the total misalignment error and the background rate , are from the 144 km qkd experiment reported in .since two spds are used in , the background rate of each spd here is roughly half of the value there .we assume that the four spds in mdi - qkd ( see fig . [fig : model ] ) have identical and .the parameter is the total mode mismatch that is quantified from the experimental values of . [cols="^,^,^,^,^",options="header " , ] the _ optimal choice _ ( indicated by optimum in fig .[ fig : advantage : key ] and table [ tab : opt : parameters ] ) that maximizes the key rate can be determined from numerical optimizations .here we perform such optimizations and also analyze the properties of asymmetric mdi - qkd .our main results are : 1 . in the asymptotic case ,the optimal choice for and does not always satisfy = , but the ratio is near 1 . for , [0.3 , 1 ) ; for , [1 , 3.5 ] ; this result can be seen from fig .[ simu : mumu : withdark ] ) . in the practical case with the two decoy - state protocol and finite - key analysis , \{ , } and \{ , }satisfy a similar condition with the ratio ( or ) near 1 , while \{ , } are optimized at their smallest value .see table [ tab : opt : parameters ] for further details .2 . in an asymmetric system with =0.1 ( 50 km length difference for two standard fiber links ) ,the advantage of the optimal choice is shown in fig . [fig : advantage : key ] , where the key rate with the optimal choice is around 80% larger than that with the symmetric choice in both asymptotic and practical cases .we remark that when is far from 1 , this advantage is more significant .for instance , with =0.01 ( 100 km length difference ) , the key rate with the optimal choice is about 150% larger than that with the symmetric choice .3 . in the asymptotic case , at a short distance where background counts can be ignored : and are _ only _ determined by instead of or ( see the optimal intensities in table [ tab : opt : parameters ] and theorem [ theorem1 ] in [ app : asymptotic ] ) ; assuming a fixed , and can be analytically derived and the optimal key rate is quadratically proportional to ( see [ app : asymptotic ] ) .finally , notice that the channel transmittance ratio in calgary s asymmetric system is near 1 ( =0.752 ) , hence the optimal choice can slightly improve the key rate compared to the symmetric choice ( around 2% improvement ) .however , in tokyo s asymmetric system ( =0.017 ) , the optimal choice can significantly improve the key rate by over 130% .a key assumption in mdi - qkd is that alice and bob trust their devices for the state preparation , _i.e. _ , they can generate ideal quantum states in the bb84 protocol .one approach to remove this assumption is to quantify the imperfections in the state preparation part and thus include them into the security proofs .we believe that this assumption is practical because alice s and bob s quantum states are prepared by themselves and thus can be experimentally verified in a fully protected laboratory environment outside of eve s interference .for instance , based on an earlier proposal , c. c. w. 
lim _ et al ._ have introduced another interesting scheme in which each of alice and bob uses an entangled photon source ( instead of wcps ) and quantifies the state - preparation imperfections via random sampling .that is , alice and bob randomly sample parts of their prepared states and perform a local bell test on these samples .such a scheme is very promising , as it is in principle a fully device - independent approach .it can be applied in short - distance communications . in conclusion , we have presented an analysis for practical aspects of mdi - qkd . to understand the physical origin of the qber, we have investigated various practical error sources by developing a general system model . in a polarization - encoding mdi - qkd system ,polarization misalignment is the major source contributing to the qber .hence , in practice , an efficient polarization management scheme such as polarization feedback control can significantly improve the polarization stabilization and thus generate a higher key rate .we have also discussed a simple analytical method for the finite decoy - state analysis , which can be directly used by experimentalists to demonstrate mdi - qkd .in addition , by combining the system model with the finite decoy - state method , we have presented a general framework for the optimal intensities of the signal and decoy states .furthermore , we have studied the properties of the asymmetric mdi - qkd protocol and discussed how to optimize its performance .our work is relevant to both qkd and general experiments on quantum interference .we thank w. cui , s. gao , l. qian for enlightening discussions and v. burenkov , z. liao , p. roztocki for comments on the presentation of the paper . support from funding agenciesnserc , the crc program , european regional development fund ( erdf ) , and the galician regional government ( projects cn2012/279 and cn 2012/260 , consolidation of research units : atlanttic " ) is gratefully acknowledged .f. xu would like to thank the paul biringer graduate scholarship for financial support .here , we discuss other practical error sources and show that their contribution to the qber is not very significant in a practical mdi - qkd system . for this reason ,they are ignored in our simulations .the intensity fluctuations of the signal and decoy states at the source are relatively small ( 0.1 db ) .additionally , alice and bob can in principle locally and precisely quantify their own intensities .therefore , this error source can be mostly ignored in the theoretical model that analyzes the performance of practical mdi - qkd ( but one could easily include it in the analysis ) .the threshold single photon detector ( spd ) can be modeled by a beam splitter with transmission and ( 1- ) reflection .the transmission part is followed by a unity efficiency detector , while the reflection part is discarded . is defined as the detector efficiency .background counts can be treated to be independent of the incoming signals . for simplicity, the system model discussed in section [ app : qande ] assumes that the four spds ( see fig . [ fig : model ] ) are identical and have a detection efficiency and a background rate .note , however , that if this condition is not satisfied ( _ i.e. 
_ , there is some detection efficiency mismatch ) our system model can be adapted to take care also of this case .all the simulations reported in the main text already consider a background rate of = ( see table [ tab : exp : parameters ] ) .[ detector : darkcounts ] simulates more general cases of the asymptotic key rates at different background count rates . at 0 km ,the mdi - qkd system can tolerate up to ( per pulse ) background counts . in practice , for telecom wavelengths ,the asymmetry of the beam splitter ( bs ) ( _ i.e. _ , not 50:50 ) is usually small .for instance , the wavelength dependence of the fiber - based bs in our lab ( newport-13101550 - 5050 fiber coupler ) is experimentally quantified in fig .[ fig : bsratio ] .if the laser wavelength is 1542 nm , the bs ratio is 0.5007 , which introduces negligible qber ( below 0.01% ) in a mdi - qkd system .hence , this error source can also be ignored in the theoretical model of mdi - qkd .in this section , we discuss an analytical method to model a polarization - encoding mdi - qkd system .that is , we calculate , , and and thus estimate the expected key rate from eq .( [ eqn : key : formula ] ) . to simplify our calculation , we make two assumptions about the practical error sources : a ) since most practical error sources do not contribute significantly to the system performance , we only consider the polarization misalignment , the background count rate and the detector efficiency ; b ) for the model of the polarization misalignment , we consider only two unitary operators , and , to represent respectively the polarization misalignment of alice s and bob s channel transmission , _i.e. _ , set = in the generic model of sec .[ sec : polarization ] . for simplicity , a more rigorous derivation with not shown here , but it can be easily completed following our procedures discussed below . in the asymptotic case , we assume that and in eq .( [ eqn : key : formula ] ) can be perfectly estimated with an infinite number of signals and decoy states .thus , they are given by ,\end{aligned}\ ] ] where = .importantly , we can see that ignoring the imperfections of polarization misalignment and background counts ( _ i.e. _ , , ) , is zero , while ( = ) can be maximized with = .thus , the optimal choice of intensities is ==1 .however , in practice , it is inevitable to have certain practical errors , which result in this optimal choice being a _ function _ of the values of practical errors. now , let us calculate and , which are eventually given by eq .( [ qrect : erect ] ) . to further simplify our discussion , we use \{horizonal , vertical,45-degree,135-degree } to represent the bb84 polarization states .also , \{ } will denote alice s and bob s encoding modes .we define the following notations : first , both alice and bob encode their states in the h mode ( symmetric to v mode ) .we assume that and ( see eq .( [ unitary ] ) ) rotate the polarization in the _ same _ direction , i.e. . the discussion regarding rotation in the opposite direction ( i.e. ) is in [ qhh : oppositeangle ] . in charles s lab , after the bs and pbs ( see fig . [fig : model ] ) , the optical intensities received by each spd are given by where denotes the relative phase between alice s and bob s weak coherent states .thus , the detection probability of each threshold spd is where .then , the coincident counts are where and denote , respectively , the probability of the projection on the triplet = ) and the singlet = ) . here from fig . 
[fig : model ] , triplet means the coincident detections of \{ch & cv } or \{dh & dv } ; singlet means the coincident detections of \{ch & dv } or \{cv & dh}. after averaging over the relative phase ( integration over [ 0,2 ) , we have , \\ \nonumber q_{z}^{hh,\psi^{-}}&=2e^{-\frac{\gamma}{2}}(1-y_{0})^{2}[i_{0}(\beta-2\beta e_{d1})+(1-y_{0})^{2}e^{-\frac{\gamma}{2 } } \\\nonumber & -(1-y_{0})e^{-\frac{\gamma(1-e_{d1})}{2}}i_{0}(e_{d1}\beta)-(1-y_{0})e^{-\frac{\gamma e_{d1}}{2}}i_{0}(\beta - e_{d1}\beta)],\end{aligned}\ ] ] [ qhh : final ] where is the modified bessel function .therefore , is given by here , to simplify eq .( [ qhh : final ] ) , we ignore background counts , _i.e. _ , , and use a 2nd order approximation ( as both and are typically on the order of ) such that then , eq . ( [ qhh : final ] ) can be estimated as and is given by alice ( bob ) encodes her ( his ) state in the h ( v ) mode ( symmetric to v ( h ) ) .we also assume . at charless side , the optical intensities received by each spd are given by the detection probability of each spd is described by eq .( [ qhh : detection : prob ] ) . and can be calculated similarly to eq .( [ qhh : model : final ] ) . after averaging over ,the results are ,\\ \nonumber q_{z}^{hv,\psi^{-}}&=2e^{-\frac{\gamma}{2}}(1-y_{0})^{2}[1+(1-y_{0})^{2}e^{-\frac{\gamma}{2}}\\ \nonumber & -(1-y_{0})e^{-\frac{\omega}{2}}i_{0}(\lambda)-(1-y_{0})e^{-\frac{\gamma-\omega}{2}}i_{0}(\lambda)].\end{aligned}\ ] ] therefore , is given by to simplify eq .( [ qhv : final ] ) we once again ignore the background counts and take a 2nd order approximation .( [ qhv : final ] ) can be estimated as and is given by finally , and can be expressed as where the different terms on the r.h.s . of this equation are given by eqs .( [ qhh : final ] , [ qhh : finalfinal ] , [ qhv : final ] , [ qhv : finalfinal ] ) . therefore , together with eq .( [ q11e11:nonideal : final ] ) , we could derive the analytical key rate of eq .( [ eqn : key : formula ] ) . if we ignore background counts and take the 2nd order approximation from eqs .( [ qhh : final : estimation : two ] , [ qhv : final : estimation : two ] ) , and can be written as when and rotate the polarization in the _ opposite _ direction , _i.e. _ , , eq . ( [ qhh : meanphotonnumbers : hh ] ) changes to after performing similar procedures to those of section [ sec : qhh ] , eq .( [ qhh : final : estimation : two ] ) is altered to since the qber is mainly determined by , by comparing eq .( [ qhh : final : estimation : two ] ) to ( [ qhh : opposite : final : estimation : two ] ) , we conclude that : : projection on results in a larger qber than that on : : projection on results in a smaller qber than that on an equivalent analysis can also be applied to following section [ sec : qhv ] , and thus eq . 
( [ qhv : final : estimation : two ] ) is altered to therefore , the key rates of ( projections on the triplet ) and ( projections on the singlet ) are _ correlated _ with the relative direction of the rotation angles , while the overall key rate ( ) is _ independent _ of the relative direction of the rotation angles .we finally remark that in a practical polarization - encoding mdi - qkd system , the polarization rotation angle of each quantum channel ( or ) can be modeled by a gaussian distribution with a standard deviation of ( ) , which means that both and ( mostly ) distribute in the range of [ , and the relative direction between them also randomly distributes between and .hence , the effect of the polarization misalignment is the same for and , _i.e. _ , both and are independent of the total polarization misalignment .we can experimentally choose to measure either the singlet or the triplet by using only two detectors ( but sacrificing half of the total key rate ) , such as in the experiments of .here we discuss the properties of a practical asymmetric mdi - qkd system . for this , we derive an analytical expression for the estimated key rate and we optimize the system performance numerically . the estimated key rate is defined under the condition that background counts can be ignored . notethat this is a reasonable assumption for a short distance transmission .[ theorem1 ] and only depend on rather than on or ; under a fixed , is quadratically proportional to . _ proof _ : when is ignored , and are given by ( see eq .( [ q11e11:nonideal : final ] ) ) if we take the 2nd order approximation , and are estimated as ( see eq .( [ qrect : erect : final : estimation ] ) ) }{4 } , \\ \nonumber e_{z}=\frac{(\mu_{b}+x\mu_{a})^{2}(2e_{d}-e_{d}^{2})}{2[2x\mu_{a}\mu_{b}+(\mu_{b}^{2}+x^{2}\mu_{a}^{2})(2e_{d}-e_{d}^{2})]}.\end{aligned}\ ] ] by combining the above two equations with eq .( [ eqn : key : formula ] ) , the overall key rate can be written as where has the form \\ \nonumber & -\frac{2x\mu_{a}\mu_{b}+(\mu_{b}^{2}+x^{2}\mu_{a}^{2})(2e_{d}-e_{d}^{2})}{2}\times f_e h_2(e_z),\end{aligned}\ ] ] where is given by eq .( [ qrect : erect : final : estimation : xmuamub ] ) and is also a function of .therefore , optimizing is equivalent to maximizing and the optimal values , and , are _ only _ determined by . under a fixed , the optimal key rate is quadratically proportional to . for a given , the maximization of can be done by calculating the derivatives over and and verified using the jacobian matrix .we numerically study the properties of an asymmetric mdi - qkd system . in our simulations below , the asymptotic key rate , denoted by , is rigorously calculated from the key rate formula given by eq .( [ eqn : key : formula ] ) in which each term is shown in [ app : qande ] . denotes the estimated key rate from eq .( [ key : formula : xmuamub ] ) .the practical parameters are listed in table [ tab : exp : parameters ] .we used the method of for the finite - key analysis .firstly , fig .[ simu : key : comparision ] simulates the key rates of and at different channel lengths . for short distances ( _ i.e._ , total length km ) , the overlap between and demonstrates the accuracy of our estimation model of eq .( [ key : formula : xmuamub ] ) .therefore , in the short distance range , we could focus on to understand the behaviors of the key rate .moreover , from the curve of =1 m , we have that this asymmetric system can tolerate up to =0.004 ( 120 km length difference for standard fiber links ) . 
secondly , fig .[ simu : mumu : withdark ] shows and , when both and are scanned from 1 m to 100 km .these parameters numerically verify theorem [ theorem1 ] : at short distances ( .5 ) , and depend _ only _ on , while at long distances ( .5 ) , background counts contribute significantly and result in non - smooth behaviors . and are both in . finally , we simulate the optimal key rates under two _ fixed _ in fig . [fig : key:2decoy : fluc ] . 1 .solid curves are the asymptotic keys : at short distances ( + km ) , the maximal is fixed with a fixed ( see eq .( [ key : formula : gxmuamub ] ) ) . taking the logarithm with base 10 of and writing = , eq .( [ key : formula : xmuamub ] ) can be expressed as hence , the scaling behavior between the logarithm ( base 10 ) of the key rate and the channel distance is linear , which can be seen in the figure . here , db / km ( standard fiber link ) results in a slope of -0.4 .dotted curves are the two decoy - state key rates with the finite - key analysis : we consider a total number of signals and a security bound of ; for the dotted curve with =0.1 , the optimal intensities satisfy , which means that the ratios for the optimal and are roughly the same and this ratio is mainly determined by . even taking the finite - key effect into account ,the system can still tolerate a total fiber link of 110 km .10 bennett c h and brassard g 1984 quantum cryptography : public key distribution and coin tossing " _ proc .conf . on computers , systems and signal processing _ ( ieee new york ) pp .175 - 179 mayers d 2001 _ j. acm _ * 48 * 351 ; lo h - k and chau h f 1999 _ science _ * 283 * 2050 ; shor p and preskill j 2000 _ phys . rev .lett . _ * 85 * 441 ; scarani v , bechmann - pasquinucci h , cerf n j , duek m , ltkenhaus n and peev m 2009 _ rev .modern phys . _ * 81 * 1301 fung c - h f , qi b , tamaki k and lo h - k 2007 phase - remapping attack in practical quantum - key - distribution systems " _ phys .rev . a _ * 75 * 32314 ; xu f , qi b and lo h - k 2010 experimental demonstration of phase - remapping attack in a practical quantum key distribution system " _ new j. phys . _* 113026 lydersen l , wiechers c , wittmann c , elser d , skaar j and makarov v 2010 _ nat .photonics _ , * 4 * 686 ; gerhardt i , liu q , lamas - linares a , skaar j , kurtsiefer c and makarov v 2011 _ nat . communications _ ,* 2 * 349 weier h , krauss h , rau m , fuerst m , nauerth s and weinfurter h 2011 _ new j. phys ._ * 13 * 073024 ; jain n , wittmann c , lydersen l , wiechers c , elser d , marquardt c , makarov v and g. leuchs 2011 _ phys .lett . _ * 107 * 110501 ; h .- w .li , s. wang , j .- z .huang , w. chen , z .- q .yin , f .- y . li , z. zhou , d. liu , y. zhang , g .- c .guo , w .- s .bao , and z .- f .han 2011 _ phys .a _ * 84 * 062308 ; sun s , jiang m and liang l , 2011 _ phys .a _ * 83 * 062331 hwang w y 2003 _ phys . rev. lett . _ * 91 * 057901 ; lo h - k , ma x and chen k 2005 _ phys .* 94 * 230504 ; wang x - b 2005 _ phys .lett . _ * 94 * 230503 ; ma x , qi b , zhao y and lo h - k 2005 _ phys .rev . a _ * 72 * 012326 rubenok a , slater j - a , chan p , lucio - martinez i and tittel w 2013 real - world two - photon interference and proof - of - principle quantum key distribution immune to detector attacks " _ phys .lett . 
_ * 111 * 130501 liu y , chen t - y , wang l - j , liang h , shentu g - l , wang j , cui k , yinh - l , liun - l , li l , ma x , pelc j s , feje r m m , zhang q and pan j - w 2012 experimental measurement - device - independent quantum key distribution " _ physlett . _ * 111 * 130502 from the security aspect , one can in principle operate a mdi - qkd system without knowing the exact origin of the observed qber . however , from the performance aspect , it is important to study the origin of the qber in order to maximize the secure key rate. rosenberg d , harrington j w , rice p r , hiskett p a , peterson c g , hughes r j , lita a e , nam s w and nordholt j e 2007 _ phys ._ * 98 * 010503 ; dixon a , yuan z , dynes j , sharpe a and shields a 2008 _ opt .express _ * 16 * 18790 rosenberg d , peterson c , harrington j , rice p , dallmann n , tyagi k , mccabe k , nam s , baek b , hadeld r , hughes r j and nordholt j e 2009 practical long - distance quantum key distribution system using decoy levels " _ new j. phys . _ * 11 * 045009 this list does not consider the state - preparation error , because a strict discussion about this problem is related to the security proof of mdi - qkd , which will be considered in future publications .if we denote the two incoming modes in the horizontal and vertical polarization by the creation operators and , and the outgoing modes by and , then the unitary operator yields an evolution of the form and .this unitary matrix is a simple one rather than the most general one .nonetheless , we believe that the result for a more general unitary transformation will be similar to our simulation results .ursin r , _ et al _ 2007 entanglement - based quantum communication over 144 km " _ nat .physics _ , * 3 * 481 two remarks for the distribution of the three unitary operators : a ) we assume that the two channel transmissions , _ i.e. _ , and , introduce much larger polarization misalignments than the other measurement basis , ( pbs1 in fig . [ fig : model ] ) , because pbs1 is located in charles s local station and can be carefully aligned ( in principle ) . hence , we choose ==0.475 and =0.05 .b ) notice that the simulation result is more or less _ independent _ of the distribution of . suppose both alice and bob encode their optical pulses in the same mode of horizontal polarization in the basis .ignoring the polarization misalignment , the interference result will _ only _ generate a click on the horizontal detectors rather than create coincident detections .this holds both for the cases of perfect interference ( =0 ) and non - interference ( =1 ) .thus , =1 does not increase the qber in the basis . in the derivation of and with mode mismatch , the non - overlapping modes can be essentially treated as background counts increasing the background count rate of each detector .the final result is a summation over the overlapping and non - overlapping modes . according to , it is also possible for other two combinations of intensities : 1 ) choosing intensity pairs from \{ , , , } and \{ , , , } , substituting with for in eq .( [ y11:2decoy : qrectm ] ) , and then performing similar calculations ; 2 ) choosing intensity pairs from \{ , , , } and \{ , , , } , substituting with for and substituting with for in eq .( [ y11:2decoy : qrectm ] ) , and performing similar calculations . herewe numerically found that the optimal intensity choice is to choose from \{ , , , } and \{ , , , }. 
similar to the estimation of , according to , two other combinations are also possible , _ i.e. _ , \{ , , , } or \{ , , , }. we numerically found that the optimal choice is to choose from \{ , , , }. in theory , . thus , we can decide to implement the standard decoy - state method only in the basis and estimate from , while in the basis , we implement only the decoy states instead of the signal state . the advantage of such an implementation is that it increases the key rate . see also for a similar discussion . we assume that the intensity of the signal state is about 0.5 and that the maximum extinction ratio of a practical intensity modulator is around 30 db . thus , the lowest intensity that can be modulated is . xavier g b , walenta n , de faria g v , temporo g p , gisin n , zbinden h and von der weid j p 2009 experimental polarization encoded quantum key distribution over optical fibres with real - time continuous birefringence compensation " _ new j. phys . _ * 11 * 045015
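As a quick check of the footnote above: with a signal intensity of about 0.5 and a modulator extinction ratio of roughly 30 db, the smallest intensity that can still be modulated follows by simple arithmetic. The resulting value is the reader's computation, since the number itself is illegible above.

```python
signal_intensity = 0.5      # mean photon number of the signal state (from the text)
extinction_db = 30.0        # maximum extinction ratio of the intensity modulator (from the text)

# attenuating the signal by the full extinction ratio gives the smallest settable intensity
lowest_intensity = signal_intensity / 10 ** (extinction_db / 10)
print("lowest modulable intensity:", lowest_intensity)   # about 5e-4 photons per pulse
```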
a novel protocol , measurement - device - independent quantum key distribution ( mdi - qkd ) , removes all attacks from the detection system , the most vulnerable part in qkd implementations . in this paper , we present an analysis for practical aspects of mdi - qkd . to evaluate its performance , we study various error sources by developing a general system model . we find that mdi - qkd is highly practical and thus can be easily implemented with standard optical devices . moreover , we present a simple analytical method with only two ( general ) decoy states for the finite decoy - state analysis . this method can be used directly by experimentalists to demonstrate mdi - qkd . by combining the system model with the finite decoy - state method , we present a general framework for the optimal choice of the intensities of the signal and decoy states . furthermore , we consider a common situation , namely _ asymmetric _ mdi - qkd , in which the two quantum channels have different transmittances . we investigate its properties and discuss how to optimize its performance . our work is of interest not only to experiments demonstrating mdi - qkd but also to other non - qkd experiments involving quantum interference .
quantum information processing has attracted a lot of interest in recent years , following deutsch s investigations concerning the potentiality of a quantum computer , i.e. , a computer where information is stored and processed in quantum systems .their application as quantum information carriers gives rise to outstanding possibilities , like secret communication ( quantum cryptography ) and the implementation of quantum networks and quantum algorithms that are more efficient than classical ones .many investigations concern the transmission of quantum information from one party ( usually called alice ) to another ( bob ) through a communication channel . in the most basic configurationthe information is encoded in qubits .if the qubits are perfectly protected from environmental influence , bob receives them in the same state prepared by alice . in the more realistic case , however , the qubits have a nontrivial dynamics during the transmission because of their interaction with the environment . therefore , bob receives a set of distorted qubits because of the disturbing action of the channel . up tonow investigations have focused mainly on two subjects : determination of the channel capacity and reconstruction schemes for the original quantum state under the assumption that the action of the quantum channel is known . herewe focus our attention on the problem that precedes , both from a logical and a practical point of view , all those schemes : the problem of determining the properties of the quantum channel .this problem has not been investigated so far , with the exception of very recent articles .the reliable transfer of quantum information requires a well known intermediate device .the knowledge of the behaviour of a channel is also essential to construct quantum codes . in particular, we consider the case when alice and bob use a finite amount of qubits , as this is the realistic case .we assume that alice and bob have , if ever , only a partial knowledge of the properties of the quantum channel and they want to estimate the parameters that characterize it .the article is organized as follows . in section [ generaldescript ]we shall give the basic idea of quantum channel estimation and introduce the notation as well as the tools to quantify the quality of channel estimation protocols .we shall then continue with the problem of parametrizing quantum channels appropriately in section [ parametrization ] .then we are in a position to envisage the estimation protocol for the case of one parameter channels in section [ oneparameter ] .in particular , we shall investigate the optimal estimation protocols for the depolarizing channel , the phase damping channel and the amplitude damping channel .we shall also give the estimation scheme for an arbitrary qubit channel . in section [ qubitpauli ]we explore the use of entanglement as a powerful nonclassical resource in the context of quantum channel estimation .section [ quditpauli ] deals with higher dimensional quantum channels before we conclude in section [ conclude ] .the determination of all properties of a quantum channel is of considerable importance for any quantum communication protocol . 
in practicesuch a quantum channel can be a transmission line , the storage for a quantum system , or an uncontrolled time evolution of the underlying quantum system .the behaviour of such channels is generally not known from the beginning , so we have to find methods to gain this knowledge .this is in an exact way only possible if one has infinite resources , which means an infinite amount of well prepared quantum systems .the influence of the channel on each member of such an ensemble can then be studied , i.e. , the corresponding statistics allows us to characterize the channel . in a pratical application , however , such a condition will never be fulfilled .instead we have to come along with low numbers of available quantum systems .we therefore can not determine the action of a quantum channel perfectly , but only up to some accuracy .we therefore speak of channel estimation rather than channel determination , which would be the case for infinite resources .a quantum channel describes the evolution affecting the state of a quantum system .it can describe effects like decoherence or interaction with the environment as well as controlled or uncontrolled time evolution occuring during storage or transmission . in mathematical terms a quantum channel is a completely positive linear map ( cp - map ) , which transforms a density operator to another density operator each quantum channel can be parametrized by a vector with components . for a specific channel we shall therefore write throughout the paper . depending on the initial knowledge about the channel, the number of parameters differs .the goal of channel estimation is to specify the parameter vector .the protocol alice and bob have to follow in order to estimate the properties of a quantum channel is depicted in figure [ figurescheme ] .alice and bob agree on a set of quantum states , which are prepared by alice and then sent through the quantum channel .therefore , bob receives the states .he can now perform measurements on them . from the results he has to deduce an estimated vector which should be as close as possible to the underlying parameter vector of the quantum channel .quantum state to bob .the channel maps these states onto the states , on which bob can perform arbitrary measurements .note that bob s measurements are designed with the knowledge of the original quantum states .his final aim will be to present an estimated vector being as close as possible to the underlying parameter vector .,width=302 ] how can we quantify bob s estimation ? to answer this we introduce two errors or cost functions which describe how good the channel is estimated .the first obvious cost function is the _ statistical error_ c_s(n,)_=1^l ( _ -_^est(n))^2 [ ciesse ] in the estimation of the parameter vector .note that the elements of the estimated parameter vector strongly depend on the available resources , i.e. the number of systems prepared by alice .we also emphasize that describes the error for one single run of an estimation protocol .however , we are not interested in the single run error ( [ ciesse ] ) but in the average error of a given protocol. therefore , we sum over all possible measurement outcomes to get the _ mean statistical error _ while keeping the number of resources fixed . though this looks as a good benchmark to quantify the quality of an estimation protocol it has a major drawback .the cost function strongly depends on the parametrization of the quantum channel . 
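The statement that a channel is a completely positive, trace-preserving map acting on density operators can be made concrete with a Kraus representation. The sketch below uses a bit-flip channel purely as a generic one-parameter example (it is not one of the channels analysed later) and checks the trace-preservation condition sum_k K_k^dagger K_k = 1.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def kraus_channel(rho, kraus_ops):
    """Apply a CP map given by Kraus operators: rho -> sum_k K rho K^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

lam = 0.2                                           # single channel parameter (illustrative)
kraus = [np.sqrt(1 - lam) * I2, np.sqrt(lam) * X]   # bit-flip channel as an example

# trace preservation requires sum_k K^dagger K = identity
assert np.allclose(sum(K.conj().T @ K for K in kraus), I2)

rho_in = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # a valid qubit state
rho_out = kraus_channel(rho_in, kraus)
print("input trace :", np.trace(rho_in).real, " output trace:", np.trace(rho_out).real)
```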
while this is not so important if one compares different protocols using the same parametrization it anyhow would be much better if we could give a cost function which is independent of any specifications .we define such a cost function with the help of the average overlap ( c_1,c_2 ) _ [ channelfidelity ] between two quantum channels and , where we average the fidelity f(_1,_2)^2 [ mixedstatefidelity ] between two mixed states and over all possible pure quantum states .since the fidelity ranges from zero to one , the _ fidelity error _ is given by c_f(n,)1-f(c_,c_^est(n ) ) [ costfct_f ] which now is zero for identical quantum channels .again we average over all possible measurement outcomes to get the _mean fidelity error _ _ f(n,)c_f(n,)_j [ costfct_fav ] which quantifies the whole protocol and not a specific single run . in the first part of this paper we are only dealing with qubitsdescribed by the density operator 12(1+s ) [ blochvector ] with bloch vector and the pauli matrices .the action of a channel is then completely described by the function s=c(s ) , [ mapbloch ] which maps the bloch vector to a new bloch vector . in particular , the mixed state fidelity equation ( [ mixedstatefidelity ] ) , for two qubits with bloch vectors and , see equation ( [ blochvector ] ) , then simplifies to f(s_1,s_2)=12[fidss1 ] which leads to an average channel overlap , equation ( [ channelfidelity ] ) , ( c_1,c_2 ) f(c_1(n),c_2(n ) ) [ fidss2 ] for the two qubit channels and . as emphasized above we only average over pure input states with unit bloch vector and bloch sphere element . by inserting equations ( [ fidss1 ] ) and ( [ fidss2 ] ) into equations ( [ costfct_f ] ) and ( [ costfct_fav ] )we get a cost function for the comparison of two qubit channels which is independent of the chosen parametrization .as we have already mentioned in section [ generaldescript ] , the qubits that alice sends to bob are fully characterized by their bloch vector .therefore , the disturbing action of the channel modifies the initial bloch vector according to equation ( [ mapbloch ] ) .it has been shown that for completely positive operators the action of the quantum channel is given by an affine transformation where denotes a invertible matrix and is a vector .the transformation is thus described by 12 parameters , 9 for the matrix and 3 for the vector .these 12 parameters have to fullfill some constraints to guarantee the complete positivity of .the definition of the parameters is somewhat arbitrary .we will not use the parameters as defined in , but adopt a different parametrization in terms of the parameter vector .in general , the choice of the best parametrization of quantum channels depends on the relevant features of the channel and also on which observables can be measured .the reason for the choice of the parametrization , equation ( [ ssprime ] ) , will become clear at the end of the section [ oneparameter ] , in which a protocol for the general channel is described .although the characterization of the general quantum channel requires the determination of 12 parameters , only a smaller number of parameters must be determined in practice for a given class of quantum channels .indeed , the knowledge of the properties of the physical devices used for quantum communication gives information on some parameters and allows to reduce the number of parameters to be estimated .we shall now examine in detail some known channels described by only one parameter .the first channel we consider is the depolarizing 
channel .the relation ( [ eq12 ] ) between the bloch vectors and of alice s and bob s qubit , respectively , reduces to the simple form where is the only parameter that describes this quantum channel .this channel is a good model when quantum information is encoded in the photon polarization that can change along the transmission fiber via random rotations of the polarization direction .if alice prepares the qubit in the pure state , the qubit bob receives is described by the state where denotes the state orthogonal to .we note that the depolarizing channel has no preferred basis. therefore , its action is isotropic in the direction of the input state .this means that the action of the channel is described by equation ( [ dep ] ) even after changing the basis of states .bob must estimate , which ranges between 0 ( noiseless channel ) and 1/2 ( total depolarization ) .the estimation protocol is the following : alice prepares qubits in the pure state which denotes spin up in the direction , equation ( [ enne ] ) , in terms of eigenvectors and of .the state is sent to bob through the channel .bob knows the direction and measures the spin of the qubit he receives along .the outcome probabilities are since alice sends a finite number of qubits , bob can only determine frequencies of measurements instead of probabilities .after bob has measured qubits with spin down and the remaining with spin up , his estimate of is .note that in a single run of the probabilistic estimation method we can get results , which is outside the range of the depolarizing channel . however , for the calculation of the average errors we do not have to take that into account . the cost function , equation ( [ ciesse ] ) , is thus the average error is easily obtained when one considers that bob s frequencies of measurements occur according to a binomial probability distribution , since each of the measurements of spin down occurs with probability and each of the spin up measurements occurs with probability .therefore , the mean statistical error reads this function , shown in figure [ depolarizingfigure ] a ) , scales with the available finite resources .it vanishes when the channel faithfully preserves the polarization , .the largest average error occurs for , when the two actions of preserving and changing the polarization have the same probability to occur. equation ( [ eq16 ] ) , and b ) the fidelity mean error , equation ( [ eqdepolarizingcf ] ) , for a depolarizing channel with parameter , are shown for different values of the number of qubits n.,width=453 ] the cost function is obtained from equations ( [ costfct_f ] ) and ( [ fidss2 ] ) ^ 2,\ ] ] which leads to the fidelity mean error which is shown in figure [ depolarizingfigure ] b ) . although does not show a simple , it clearly decreases for increasing values of the number of qubits . fora given quality or one can use figure [ depolarizingfigure ] and read off the number of needed resources .the phase - damping channel acts only on two components of the bloch vector , leaving the third one unchanged : here is the damping parameter .this channel , contrarily to the depolarizing channel , has a preferred basis . in terms of density matrices, it transforms the initial state where , etc ., into we note that here the parameter only appears in the off - diagonal terms . 
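Returning to the depolarizing-channel protocol above: the estimate is a binomial frequency, so its mean statistical error is lambda(1-lambda)/N, and, with the Bloch-sphere fidelity quoted earlier, the average overlap of the true and estimated depolarizing channels for pure inputs reduces to (sqrt(lambda*lambda_est) + sqrt((1-lambda)(1-lambda_est)))^2. The closed form and the Monte Carlo below are a reader's reconstruction under the parametrization in which lambda = 1/2 means complete depolarization; they are meant to reproduce the scaling, not the paper's exact equations.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_runs(lam, n, runs=100_000):
    """Simulate many runs of the protocol: Bob sees a spin flip with probability
    lam and estimates lam as the observed flip frequency."""
    return rng.binomial(n, lam, size=runs) / n

def fidelity_error(lam, lam_est):
    """1 - average channel overlap of two depolarizing channels for pure inputs,
    in the parametrization where lam = 1/2 is total depolarization (reader's derivation)."""
    return 1.0 - (np.sqrt(lam * lam_est) + np.sqrt((1 - lam) * (1 - lam_est))) ** 2

lam, n = 0.2, 100
est = estimate_runs(lam, n)
print("mean statistical error:", np.mean((est - lam) ** 2),
      " vs lam(1-lam)/N =", lam * (1 - lam) / n)
print("mean fidelity error   :", np.mean(fidelity_error(lam, est)))
```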
for this reason ,the phase damping channel is a good model to describe decoherence .indeed , a repeated application of this channel leads to a vanishing of the off diagonal terms in , whereas its diagonal terms are preserved . since the parameter of the phase damping channel appears in the off - diagonal elements of , the protocol is the following : alice sends qubits with the bloch vector in the plane ; for instance , qubits in the state can be used .this would correspond to a density operator whose matrix elements are all equal to 1/2 , can be used .the density matrix of bob s qubit is then given by now he measures the spin in the direction .the theoretical probabilities are and we denote the frequency of a spin down measurement as , leading to .the mean statistical error has again the form as for the depolarizing channel . for the fidelity , equation ( [ fidss1 ] ), we obtain \left(s_{1}^{2}+s_{2}^{2}\right)\end{aligned}\ ] ] and after averaging over the bloch surface , according to equation ( [ fidss2 ] ) , therefore the fidelity mean error reads \nonumber \\= \frac{4}{3}\left [ \lambda \left ( 1-\lambda\right ) -\frac{1}{n}\sum_{i=0}^{n } { n \choose i } \lambda^{i}\left ( 1-\lambda \right)^{n - i } \sqrt{\lambda(1-\lambda)i(n - i ) } \right ] \label{eqphasecf}\end{aligned}\ ] ] which is shown in figure [ phasedampingfigure ] b ) .this mean error is very similar to that obtained for the depolarizing channel ., equation ( [ eq7 ] ) , and b ) the fidelity mean error , equation ( [ eqphasecf ] ) , for a phase damping channel with parameter , are shown for different values of the number of qubits n.,width=453 ] the amplitude damping channel affects all components of the bloch vector according to where is the damping parameter .the density matrix , equation ( [ denmat ] ) , is transformed into this channel is a good model for spontaneous decay from an atomic excited state to the ground state . repeated applications of this channel cause all elements but one of the density matrix to vanish .now the parameter appears in all the elements of and the channel clearly possesses a preferred basis .if alice and bob know that they are using an amplitude damping channel , alice sends all qubits in the state .the density operator of the qubit received by bob is he measures the spin along the direction ( the diagonal elements of are the probabilities to find spin down and spin up , respectively ) .we denote the frequency of spin down measurements with .the statistical cost function is again using equation ( [ eq28 ] ) we obtain {3 } \nonumber \\ \left .+ \lambda \lambda^{\rm est}+ ( 1-s_{3})^2\sqrt{\lambda(1-\lambda ) } \sqrt{\lambda^{\rm est}(1-\lambda^{\rm est } ) } \right]\end{aligned}\ ] ] for the fidelity cost function , equation ( [ fidss1 ] ) , and \end{aligned}\ ] ] for the averaged fidelity ( [ fidss2 ] ) .thus \nonumber \\ = \frac{1}{3 } \left [ 1+\lambda(1 - 2\lambda)-\sqrt{\frac{1-\lambda}{n } } \sum_{i=0}^{n } { n \choose i } \lambda^{n - i}(1-\lambda)^{i}\sqrt{i } \right . \nonumber \\ \left .-\frac{2}{n}\sqrt{\lambda(1-\lambda ) } \sum_{i=0}^{n } { n \choose i } \lambda^{i}(1-\lambda)^{n - i}\sqrt{i(n - i ) } \right ] \label{eqamplitudecf}\end{aligned}\ ] ] which is illustrated by figure [ amplitudedampingfigure ] . 
, equation ( [ eq9 ] ) , and b ) the fidelity mean error , equation ( [ eqamplitudecf ] ) , for the amplitude damping channel with parameter , are shown for different values of the number of qubits n.,width=453 ] we see from the figures ( [ depolarizingfigure])([amplitudedampingfigure ] ) that for a fixed number of resources the mean error has a local maximum for small values of the parameter .for the amplitude damping channel there is also a second local maximum for large values of .we now come to the problem of estimating the 12 parameters of the general quantum channel .the protocol is summarized in table 1 : alice prepares 12 sets of qubits , divided into 4 groups .bob measures the spin of the three sets in each group along , , and , respectively . from the measurements on each set he gets an estimate of one of the parameters .the parametrization ( [ ssprime ] ) has been chosen for this purpose .the statistical cost function for the general channel is thus a generalization of the cost functions for one parameter channels , \sum_{j=1}^{12}\left ( \lambda_{j}-\lambda_{j}^{\rm{est } } \right)^2 \nonumber \\ & = & \frac{1}{n/12}\sum_{j=1}^{12}\lambda_{j}(1-\lambda_{j}).\end{aligned}\ ] ] the mean fidelity error can be calculated numerically .however , we do not give the expression here as it gives no particular further insight .up to now we have only considered estimation methods based on measuring single qubits sent through the quantum channel .however , we are not restricted to these estimation schemes . instead of sending single qubits we could use entangled qubit pairs . in this sectionwe will demonstrate the superiority of such entanglement - based estimation methods for the estimation of the so called pauli channel by comparing both schemes ( for a different use of entanglement as a powerful resource when using pauli channels , see ) .the pauli channel is widely discussed in the literature , especially in the context of quantum error correction .its name originates from the error operators of the channel .these error operators are the pauli spin matrices , and . the operators define the quantum mechanical analogue to bit errors in a classical communication channel since causes a bit flip ( transforms into and vice versa ) , causes a phase flip ( transforms into ) and results in a combined bit and phase flip . the pauli channel is described by three probabilities for the occurrence of the three errors . if an initial quantum state is sent through the channel , the pauli channel transforms into thus we see that the density operator remains unchanged with probability , whereas with probability the qubit undergoes a bit flip , with probability there occurs a phase flip , and with probability both a bit flip and a phase flip take place. 
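The Pauli-channel transformation just described can be written out explicitly and checked on a Bloch vector: each component is shrunk by one minus twice the total probability of the two Pauli errors that flip the corresponding spin direction. The assignment of the three probabilities to X, Y and Z below is a labeling convention chosen for the sketch.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_channel(rho, l1, l2, l3):
    """rho -> (1-l1-l2-l3) rho + l1 X rho X + l2 Y rho Y + l3 Z rho Z."""
    return ((1 - l1 - l2 - l3) * rho + l1 * X @ rho @ X
            + l2 * Y @ rho @ Y + l3 * Z @ rho @ Z)

def bloch(rho):
    """Bloch vector of a qubit density matrix."""
    return np.real([np.trace(rho @ P) for P in (X, Y, Z)])

rho = 0.5 * (np.eye(2) + (X + Y + Z) / np.sqrt(3))   # pure state along (1,1,1)/sqrt(3)
out = pauli_channel(rho, 0.05, 0.10, 0.15)
print("input  Bloch vector:", bloch(rho))
print("output Bloch vector:", bloch(out))   # components shrunk by 0.5, 0.6 and 0.7 here
```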
we will first describe the single qubit estimation scheme for the pauli channel , before we switch to the entanglement - based one in the next subsection .following the general estimation scheme presented in section [ generaldescript ] the protocol to estimate the parameters of the pauli channel , equation ( [ pau ] ) , requires the preparation of three different quantum states with spin along three orthogonal directions .alice sends ( i ) qubits in the state , ( ii ) qubits in the state , and ( iii ) qubits in the state .bob measures their spins along the direction of spin - preparation , namely the , , and axes , respectively .the measurement probabilities of spin down along those three directions are then given by the estimated parameter values can be calculated via \\\lambda_{2}^{\rm{est } } & = & \frac{1}{2}\left [ \frac{i_{1}}{m}-\frac{i_{2}}{m } + \frac{i_{3}}{m}\right ] \\\lambda_{3}^{\rm{est } } & = & \frac{1}{2}\left [ \frac{i_{2}}{m}-\frac{i_{3}}{m } + \frac{i_{1}}{m}\right]\end{aligned}\ ] ] where , , denote the frequencies of spin down results along the directions , and .we note that , although the probabilities are positive or vanish , their estimated values may be negative .this occurs because in the present case the measured frequencies are not the estimates of the parameters .nonetheless , the average cost functions can always be evaluated . for the statistical cost functionwe find \nonumber \\ & = & \frac{9}{2n}\left [ \lambda_{1}(1-\lambda_{1}-\lambda_{2})+\lambda_{2}(1-\lambda_2-\lambda_3 ) \right .\nonumber \\ & & \left .+ \lambda_{3}(1-\lambda_3-\lambda_1)\right ] .\label{ave}\end{aligned}\ ] ] for the average error of the estimation with separable qubits .for fixed the average error has a maximum at , in which case all acting operators occur with the same probability . on the other hand, the average error vanishes when faithful transmission or one of the errors occurs with certainty .instead of using the statistical cost function we can also use the fidelity based cost function ( [ costfct_f ] ) . the average error of the estimation via separable qubits is then given by the cost function can not be calculated analytically .we shall compare the results ( [ ave ] ) and ( [ eqpaulicfsep ] ) with the same mean errors for a different estimation scheme , where entangled pairs are used , that we are now going to illustrate .all the protocols that we have considered so far are based on single qubits prepared in pure states .these qubits are sent through the channel one after another .however , one can envisage estimation schemes with different features .an interesting and powerful alternative scheme is based on the use of entangled states . in this casethe estimation scheme requires alice and bob to share entangled qubits in the bell state .thus from initial qubits alice and bob can prepare bell states .alice sends her qubits of the entangled pairs through the pauli channel , which transforms the entangled state into the mixed state latexmath:[\ ] ] if we use the fidelity based cost function we can also write down the average error for the entangled estimation scheme . it reads and can be evaluated numerically . 
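Both estimation schemes lend themselves to a quick Monte Carlo comparison at equal total resources: the single-qubit scheme spends N/3 qubits per basis and recovers the error probabilities from linear combinations of flip frequencies, while the entangled scheme uses N/2 Bell pairs whose four Bell-measurement outcomes sample the multinomial distribution (1-l1-l2-l3, l1, l2, l3) directly. In the sketch the Bell-state analyser is abstracted to an ideal multinomial draw, and the mapping between measurement bases and error types is the convention assumed here.

```python
import numpy as np

rng = np.random.default_rng(7)

def single_qubit_error(lams, n_qubits, runs=20_000):
    """Single-qubit scheme: n_qubits/3 probes per basis; each basis is blind to
    exactly one error type, so flip probabilities are pairwise sums of the lambdas."""
    l1, l2, l3 = lams
    m = n_qubits // 3
    fz = rng.binomial(m, l1 + l2, size=runs) / m   # z eigenstates flipped by X and Y
    fx = rng.binomial(m, l2 + l3, size=runs) / m   # x eigenstates flipped by Y and Z
    fy = rng.binomial(m, l1 + l3, size=runs) / m   # y eigenstates flipped by X and Z
    est = 0.5 * np.stack([fz + fy - fx, fz + fx - fy, fx + fy - fz], axis=1)
    return np.mean(np.sum((est - lams) ** 2, axis=1))

def entangled_error(lams, n_qubits, runs=20_000):
    """Entangled scheme: n_qubits/2 Bell pairs; the four Bell outcomes are a
    multinomial sample with probabilities (1 - sum(lams), l1, l2, l3)."""
    pairs = n_qubits // 2
    probs = np.array([1.0 - sum(lams), *lams])
    counts = rng.multinomial(pairs, probs, size=runs)
    est = counts[:, 1:] / pairs
    return np.mean(np.sum((est - lams) ** 2, axis=1))

lams, n = np.array([0.05, 0.10, 0.15]), 300
print("single qubits :", single_qubit_error(lams, n))
print("Bell pairs    :", entangled_error(lams, n))
```

For these illustrative parameters the entangled scheme returns a mean statistical error roughly a factor of two smaller, in line with the advantage discussed around the figure above.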
in figure [ deltafigure ]we show the difference between the average error obtained with single qubits and entangled qubits .we compare the two average errors when the same number of qubits are used .the figure shows that the use of entangled pairs always leads to an enhanced estimation and therefore we can consider entanglement as a nonclassical resource for this application .equation ( [ eqdelta ] ) between the fidelity mean errors , equation ( [ eqpaulicfsep ] ) , and , equation ( [ eqpaulicfent ] ) for the qubit pauli channel . is plotted as a function of and , while keeping and fixed ., width=302 ] before ending this section we want to add some comments about our findings . we have found that the mean statistical error has the form whenever the estimates of the parameters are directly given by the frequencies of measurements . this occurs also for the relevant case of the entanglement based protocol for the pauli channel , but not for the qubit based protocol .although the parametrization we have used is better indicated since the represent the probabilities of occurence of the logic errors , one might be tempted to use a different parametrization that gives again the expression ( [ univ ] ) . indeed , this can be done and actually the errors for the are smaller .nonetheless , one can show that in spite of this improvement , the scheme based on entangled pairs still gives an enhanced estimation even with the new parameters .this is not the case of the one parameter channels , where the use of entangled pairs does not lead to enhanced estimation .in the previous section we have proposed an entanglement based method for estimating the parameters which define the pauli channel . as we shall show in this section , this method can be easily extended to the case of quantum channels defined on higher dimensional hilbert spaces .let us start by considering the most general possible trace preserving transformation of a quantum system described initially by a density operator .this transformation can be written in terms of quantum operations as with . according to the stinespring theorem is the most general form of a completely positive linear map .the set of operators can be interpreted as error operators which characterize the action of a given quantum channel onto a quantum system and the set of parameters as probabilities for the action of error operators .in particular , we can consider the action of the quantum channel , equation ( [ general quantum channel ] ) , onto only one , say the second , of the subsystems of a bipartite quantum system .for sake of simplicity we assume that the two subsystems are systems .if only the second particle is affected by the quantum channel , equation ( [ general quantum channel ] ) , then the final state is given by where is the identity operator .in the special case of an initially pure state , i. e. , the final state becomes where we have defined .our aim is the estimation of the parameter values which define the quantum channel .this can be done by projecting the state of the composite quantum system onto the set of states . 
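Before moving to the explicit operators of the next paragraphs, the projection-based strategy just outlined can be stated compactly: if the error operators map the shared state onto mutually orthogonal states, then a projective measurement onto that set returns each error probability directly. A small sketch for the qubit Pauli channel acting on one half of a Bell pair, where the four images of the state are the four Bell states; the input state and the error probabilities are illustrative.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)    # Bell state (|00>+|11>)/sqrt(2)
errors = [I2, X, Y, Z]
probs = np.array([0.70, 0.05, 0.10, 0.15])

# channel acting on the second qubit only
rho = sum(p * np.outer(np.kron(I2, E) @ psi, (np.kron(I2, E) @ psi).conj())
          for p, E in zip(probs, errors))

# project onto the images (1 x E_k)|psi>, which are mutually orthogonal here
recovered = [np.real(v.conj() @ rho @ v) for v in (np.kron(I2, E) @ psi for E in errors)]
print("recovered probabilities:", np.round(recovered, 3))   # matches probs
```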
as in the previous protocols ,the probabilities can be inferred from the relative frequencies of each state and the quality of the estimation can be quantified with the help of a cost function .however , these states can be perfectly distinguished if and only if they are mutually orthogonal .hence , we impose the condition on the initial state .it means that the initial state of the bipartite quantum system should be chosen in such a way that it is mapped onto a set of mutually orthogonal states by the error operators .it is worth to remark the close analogy between this estimation strategy and quantum error correction .a non - degenerate quantum error correcting code corresponds to a hilbert subspace which is mapped onto mutually orthogonal subspaces under the action of the error operators . in this sense ,a state satisfying the condition ( [ condition ] ) is a one - dimensional non - degenerated error correcting code . a necessary but not sufficient condition for the existence of a state satisfying equation([condition ] ) is given by the inequality where is the total number of error operators which describe the quantum channel and is the dimension of the hilbert space .this inequality shows why the use of an entangled pair enhances the estimation scheme .although the first particle is not affected by the action of the noisy quantum channel , it enlarges the dimension of the total hilbert space in such a way that the condition ( [ condition ] ) can be fulfilled .it is also clear from this inequality that if the state is a separable one , a set of orthogonal states can only be designed in principle if the number of error operators is smaller than .for instance , in the case of the pauli channel for qubits considered in the previous section , condition ( [ bound ] ) does not hold if we use single qubits for the estimation .the action of the pauli channel is defined by a set of error operators ( including the identity ) acting onto qubits ( ) .let us now apply our consideration to the case of a generalized pauli channel .the action of this channel is given by where the kraus operators are defined by the operation is the generalization of a phase flip in dimensions .the bit flip is given by these errors occur with probabilities , which are normalized , . the states form a -dimensional basis of the hilbert space of the quantum system .this extension of the pauli channel to higher dimensional hilbert spaces has been studied previously in the context of quantum error correction , quantum cloning machines and entanglement .now we will apply the estimation strategy based on condition ( [ condition ] ) to this particular channel .thus , we must look for a state of the bipartite quantum system which satisfies the set of conditions ( [ condition ] ) , for the error operators , i.e. it can be easily shown that such a state is where and , , are basis states for the two particles .in fact , the action of the operators onto state generates the basis latexmath:[\[|\psi_{\alpha,\beta}\rangle\equiv { \bf 1}\otimes u_{\alpha,\beta}|\psi_{0,0}\rangle= \frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}e^{i\frac{2\pi}{d}\alpha k } of maximally entangled states for the total hilbert space .thereby , the action of the generalized pauli channel yields .the particular values of the coefficients can now be obtained by projecting onto the states . 
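The generalized bit-flip and phase-flip operators and the maximally entangled basis they generate can be constructed explicitly. The sketch below verifies the Weyl relation ZX = omega XZ and the mutual orthogonality of the d^2 states (1 x Z^a X^b)|psi_00>, which is exactly the property that makes the projective estimation described above possible; the operator ordering Z^a X^b is a convention, since the precise definition of U_{alpha,beta} is not legible above.

```python
import numpy as np

def weyl_operators(d):
    """Generalized bit flip X (|k> -> |k+1 mod d>) and phase flip Z (|k> -> w^k |k>)."""
    omega = np.exp(2j * np.pi / d)
    X = np.roll(np.eye(d), 1, axis=0)
    Z = np.diag(omega ** np.arange(d))
    return X, Z, omega

def bell_basis(d):
    """|psi_{a,b}> = (1 x Z^a X^b)|psi_00>, with |psi_00> = sum_k |k>|k>/sqrt(d)."""
    X, Z, _ = weyl_operators(d)
    psi00 = np.eye(d).reshape(d * d) / np.sqrt(d)
    vecs = []
    for a in range(d):
        for b in range(d):
            U = np.linalg.matrix_power(Z, a) @ np.linalg.matrix_power(X, b)
            vecs.append(np.kron(np.eye(d), U) @ psi00)
    return np.array(vecs)

d = 3
X, Z, omega = weyl_operators(d)
assert np.allclose(Z @ X, omega * X @ Z)            # Weyl commutation relation
B = bell_basis(d)
print("orthonormal basis of size d^2:", np.allclose(B.conj() @ B.T, np.eye(d * d)))
```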
the quality of this estimation strategy according to the statistical erroris given by with the number of pairs of quantum systems used in the estimation .we have examined the problem of determining the parameters that describe a noisy quantum channel with finite resources .we have given simple protocols for the determination of the parameters of several classes of quantum channels .these protocols are based on measurements made on qubits that are sent through the channel .we have also introduced two cost functions that estimate the quality of the protocols . in the most simple protocols measurementsare performed on each qubit .we have also shown that more complex schemes based on entangled pairs can give a better estimate of the parameters of the pauli channel .our investigations stress once more the usefulness of entanglement in quantum information .we acknowledge support by the dfg programme `` quanten - informationsverarbeitung '' , by the european science foundation qit programme and by the programmes `` qubits '' and `` quest '' of the european commission .m a c wishes to thank v buek for interesting discussions and remarks .99 for recent books on the topic , see deutsch d 1985 _ proc .a _ * 400 * 97 gruska j 1999 _ quantum computing _( london : mcgraw hill ) ; nielsen m a and chuang i l 2000 _ quantum computation and quantum information _( cambridge : cambridge university press ) ; bouwmeester d , ekert a and zeilinger a 2000 _ the physics of quantum information _( berlin : springer ) ; g. alber _ et al_. 2001 _ quantum information _( berlin : springer ) schumacher b 1996 _ phys . rev .a _ * 54 2614 ; schumacher b and nielsen m a 1996 _ phys .rev . a _ * 54 2629 ; bennett c h , divincenzo d p , smolin j a and wootters w k 1996 _ phys .a _ * 54 3824 ; lloyd s 1997 _ phys .rev . a _ * 55 1613 ; bennett c h , divincenzo d p , and smolin j a 1997 _ phys .lett . _ * 78 3217 ; schumacher b and westmoreland m d 1997 _ phys .a _ * 56 131 ; adami c and cerf n j 1997 _ phys . rev .a _ * 56 3470 ; barnum h , nielsen m a and schumacher b 1998 _ phys .a _ * 57 4153 ; divincenzo d p , shor p w and smolin j a 1998 _ phys .rev . a _ * 57 830 mack h , fischer d g and freyberger m 2000 _ phys . rev . a _ * 62 * 042301 fujiwara a 2001 _ phys .a _ * 63 * 042304 fischer d g , mack h , cirone m a and freyberger m 2001 _ phys .a _ * 64 * 022309 shor p w 1995 _ phys .a _ * 52 r2493 ; calderbank a r and shor p w 1996 _ phys . rev .a _ * 54 1098 ; laflamme r , miquel c , paz j p and zurek w h 1996 _ phys .lett . _ * 77 198 ; steane a m 1996 _ phys . rev . lett . _ * 77 793 knill e and laflamme r 1997 _ phys .rev . a _ * 55 900 kraus k 1983 _ states , effects , and operations _ , lecture notes in physics vol .190 ( berlin : springer ) stinespring w f 1955 _ proc .soc . _ * 6 * 211 jozsa r 1994 _ j. mod* 41 * 2315 fujiwara a and algoet p 1999 _ phys .a _ * 59 * 3290 for recent experiments on decoherence , see brune m , hagley e , dreyer j , maitre x , maali a , wunderlich c , raimond j m and haroche s 1996 _ phys .lett . _ * 77 * 4887 ; myatt c j , king b e , turchette q a , sackett c a , kielpinski d , itano w m , monroe c and wineland d j 2000 _ nature _ * 403 * 269 ; kokorowski d a cronin a d roberts t d and pritchard d e 2001 _ phys . rev . lett . _ * 86 * 2191 ; see also giulini d _et al_. 
1996 decoherence and the appearance of a classical world in quantum theory ( berlin : springer ) carmichael h 1993 _ an open system approach to quantum optics _( berlin : springer ) ; carmichael h j 1999 _ statistical methods in quantum optics i _( berlin : springer ) ; macchiavello c and palma g m 2001 _ preprint _ quant - ph/0107052 bennett c h and wiesner s j 1992 _ phys .lett . _ * 69 * 2881 knill e 1998 _ preprint _ quant ph/9808049 gottesmann d 1998 _ preprint _ quant ph/9802007 cerf n j 2000 _ j. mod_ * 47 * 187 fivel d i 1995 _ phys .* 74 * 835 * * * * * * * * * * * * * *
we investigate the problem of determining the parameters that describe a quantum channel . it is assumed that the users of the channel have at best only partial knowledge of it and make use of a finite amount of resources to estimate it . we discuss simple protocols for the estimation of the parameters of several classes of channels that are studied in the current literature . we define two different quantitative measures of the quality of the estimation schemes , one based on the standard deviation , the other one on the fidelity . the possibility of protocols that employ entangled particles is also considered . it turns out that the use of entangled particles as a new kind of nonclassical resource enhances the estimation quality of some classes of quantum channel . further , the investigated methods allow us to extend them to higher dimensional quantum systems .
the detection of gravitational waves ( g.w . ) is one of the most fascinating and challenging subjects in physics research nowadays . besides checking the general relativity theory , the detection of this phenomenon will mark the beginning of a new phase in the comprehension of astrophysical phenomena by the use of gravitational wave astronomy .although these waves were predicted at the beginning of the century , the research on their detection only started around 1960 , with the studies of joseph weber .the major obstacle to this detection is the tiny amplitude the g.w .have .even though the more sensitive detector now operating is capable to detect amplitudes near , this value must be decreased by several orders of magnitude so that impulsive waves can be detected regularly . on the other hand ,the discovery of pulsars with periods lying in the milliseconds range stimulated the investigations on the detection of gravitational waves of periodic origin .although these waves generally have amplitudes even smaller than those emitted by impulsive sources , periodic sources are continuously emitting gravitational waves in space and they can be detected as soon as the correct sensitivity is reached . since many of the resonant mass antennae now operating are designed to detect frequencies near 1000 hz , the millisecond pulsars will probably be detected if these antennae ever become sensitive to amplitudes .this value is bigger if we consider the crab pulsar ( ) : .there is a resonant mass detector with a torsional type antenna ( crab iv ) being developed by the tokyo group to detect gravitational waves emitted by the crab pulsar .this group expects to reach soon .the main purpose of this paper is a contribution towards the increase in sensitivity of resonant mass continuous gravitational wave detectors looking at the use of adequate filters .we study two kinds of filters , the first optimizes the signal - to - noise ratio ( snr ) , and is normally used in the detection of impulsive waves .the second filter reproduces the wave with minimum error .both filters apparently were not investigated in the continuous gravitational wave context yet .linear , stationary filters obey the relation is the impulse response function that characterizes the filter , is the input at the filter and is the filter output . generally has a useful part , , and an unwanted part , : .we have a similar relation for the filter output , given by .it is well known from noise theory that the filter that optimizes snr at its output represents the average value of .] , must have the following transfer function : with is the instant in which the observation takes place , is the fourier transform of ( * denotes complex conjugation ) and is the noise power spectrum density : the maximum snr at the optimal filter output is given by the expression from ( [ 2 ] ) and ( [ 3 ] ) we conclude that a very weak signal will leave the filter when the noise is much stronger than the useful signal at the relevant frequency range .equation ( [ 2 ] ) is valid as long as is well behaved .for example , if were a strictly monochromatic wave like it would be difficult to build this filter since . in order to use the optimal filter ( [ 2 ] ) in continuous gravitational wave detectorswe will describe these waves as _ quasi - monochromatic _ useful signals .it means that the waves that reach the antenna will be of the form " . 
] the constant _ a _ is related to the signal spectral density bandwidth , , and the corresponding spectral density is of the form real and . ].\ ] ] the signal ( [ 6 ] ) is quasi - monochromatic whenever , being its central frequency . note that when we recover ( [ 5 ] ) , the monochromatic case .the continuous gravitational waves emitted by periodic sources can be regarded as quasi - monochromatic waves .the frequency of the crab pulsar , for example , which is centered near , has a slow down rate of .besides , the orbital motion of the earth causes a maximum variation of , and the spinning motion of earth implies a maximum variation of . for future use in the optimal filter expression , ( [ 2 ] ) , we write the fourier transform of the quasi - monochromatic signal ( [ 6 ] ) : . \label{8}\ ] ]a resonant mass detector can be represented by the scheme of figure [ figure 1 ] . in this model represents the gravitational interaction force between the g.w . and the antenna .the two - port circuit is related to the massive antenna and the transducer , and it is described by its admittance matrix , which relates the force and the velocity at the input port to the current and the voltage at the output port : the transducer and the amplifier have force and velocity noise generators represented by the stochastic , stationary functions and , respectively . [ is the spectral density of [ ] .we will assume that these functions are not correlated , so that . ( 400,250 ) ( 0,0)(400,250 ) in this model the optimal filter follows the lock - in amplifier .in figure [ figure 2 ] the elements that precede the optimal filter in the detector are redrawn . ( 300,200 ) ( 0,0)(300,200 ) in this figure is the antenna effective mass , its elastic constant and the damping constant . represents the mechanical dissipation at the antenna , is associated to the transducer back - reaction on the antenna and represents the wideband , serial noise introduced by the amplifier . the equation of motion of the system in given by -f(t),\ ] ] where represents the displacement .however , in the calculations we will deal with the velocity ; at the amplifier output we have . in the absence of can obtain the velocity noise spectral density : at the denominator of this expression , the minus signal is used when , and the plus signal is used when . since the thermal and the back - reaction noises are white noises , the force spectral densities generated by them obey the following nyquist relations : and . in these expressions , and the time constants of mechanical loss at the antenna and of electrical loss at the transducer and amplifier , respectively .they are related to the energy decay time of the antenna , , according to the expression , where is the antenna quality factor . is the antenna temperature and is the back - reaction noise temperature . is the boltzmann constant .the function that appears in ( [ 10 ] ) represents a serial white noise introduced in the useful signal by the electrical network , and it has the following expression : , where comes from ( [ 9 ] ) . 
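the optimal-filter prescription of the preceding paragraphs can be turned into a few lines of code. the sketch below is a minimal numerical illustration, not the authors' implementation: it assumes a quasi-monochromatic signal with a lorentzian spectrum around the central frequency and a flat noise spectral density, builds a transfer function proportional to the conjugate signal spectrum divided by the noise spectrum, and evaluates the output signal-to-noise ratio by direct numerical integration. all constants are placeholders and the normalisation convention is noted in the comments.

```python
import numpy as np

# illustrative placeholders: a quasi-monochromatic signal of central angular
# frequency w0 and bandwidth a, against an assumed flat noise spectral density
w0, a, amplitude = 2 * np.pi * 60.0, 1e-3, 1e-24
s_noise = 1e-50

w = np.linspace(w0 - 50 * a, w0 + 50 * a, 20001)
dw = w[1] - w[0]

# fourier transform of h(t) = amplitude * exp(-a|t|) * cos(w0 t), keeping only
# the term peaked near +w0 (the term at -w0 is negligible in this narrow band)
f_signal = amplitude * a / (a**2 + (w - w0)**2)

# optimal (matched) filter transfer function, up to the phase factor exp(-i w t0)
k_optimal = np.conj(f_signal) / s_noise
print("filter gain at resonance:", np.max(np.abs(k_optimal)))

# output snr^2 ~ (1/2pi) * integral |F(w)|^2 / S(w) dw ; the overall factor
# depends on the one-sided/two-sided convention, which is glossed over here
snr_sq = np.sum(np.abs(f_signal) ** 2 / s_noise) * dw / (2 * np.pi)
print("maximum snr (illustrative):", np.sqrt(snr_sq))
```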
is the real part of the impedance at the transducer output and is the circuit noise temperature .we assume that the amplifier has a large but limited bandwidth , as it occurs in practice , given by .generally introducing the antenna s _ equivalent temperature_ , , defined by the relation , equation ( [ 10 ] ) becomes this is the complete expression for the total noise spectral density at the filter input .the velocity that the g.w .( [ 6 ] ) generates at the antenna in the absence of the noises and has the following fourier transform : is the fourier transform of the g.w .force on the antenna , which is given by the relation is the dynamic part of the mass quadrupole tensor of the antenna .it is a matrix of constant elements which depend on the antenna s geometry and mass distribution . for our calculations we use , where is the characteristic length and is the density of the antenna . equation ( [ 13 ] ) is the useful signal contribution at the filter input , which corresponds to the spectral density henceforth . ] is the spectral density of the force ( [ 13a ] ) and it has the form ( [ 2 ] ) , ( [ 12 ] ) and ( [ 13 ] ) and adopting we obtain the transfer function of the filter that optimizes snr for the model considered in the preceding section : [1-\imath \tau_0 ( \omega -\omega_0 ) ] } } { \frac{n_p}{1+(\omega-\omega_0)^2\tau_0 ^ 2}+\frac{n_s}{1+(\omega-\omega_0)^2 \tau_a^2}}. \label{18}\ ] ] to simplify this expression we introduced the following definitions : , and .the maximum snr related to this filter is obtainable from ( [ 3 ] ) , ( [ 12 ] ) and ( [ 14 ] ) , and it corresponds to note that when , this expression becomes assuming , for simplicity , that , we find by imposing we obtain the following condition on the parameters of the detector : to illustrate the use of the filter ( [ 18 ] ) we will adopt two different realistic detectors . in both cases we will assume that and ( see equation ( [ 11 ] ) .the first of them , detector a , is designed to detect crab pulsar ( , and .this bandwidth arises from the slow down of the pulsar after 100 milliseconds of observation ) ; this detector has the following characteristics : , , , , . under these conditions , ( [ 18 ] )shows a very narrow peak centered in .it implies a maximum snr at its output given by .if the signal bandwidth is smaller ( implying a smaller observation time ) , we have .on the other hand , if detector a has a lower equivalent temperature we attain with a longer observation time .for instance , we obtain this result if , and .such bandwidth corresponds to a 10 seconds observation time of the crab pulsar s slow down .the other detector considered , detector b , is designed to detect the millisecond pulsar psr 1937 + 214 ( , ) . since we did not find any information about the bandwidth of the g.w .emitted by this pulsar , we will assume that it has the value .this detector is characterized by , , , and ; these are typical values of several ultracryogenic cylindrical antennae. for detector b , ( [ 18 ] ) also shows a very narrow peak centered in , implying . 
like detectora , is greater if the signal bandwidth is smaller .after the g.w .is detected we have to determine its shape with minimum error .this can be accomplished with the help of an adequate linear filter , , designed to reproduce the useful signal with the greatest possible accuracy ( depending on the noise and the useful signal present at its input ) .this accuracy is characterized by the mean square error , , which is obtained from the instantaneous reproduction error , , defined by is the desired signal at the filter output . in a simple filtering process , as the one we are considering in this work, must be equal to the useful signal at the filter input , .we obtain the transfer function of the filter by imposing .this condition implies supposing there is no cross - correlation between and .the corresponding mean square error is from this equation it is evident that the error becomes smaller if so becomes the noise . on the other hand , if the noise is too strong ( ) it results and no signal leaves the filter .if we define the _ total _ power of a signal by , the snr at the filter input will be ) is different from ( [ 1 ] ) .this happens because we are now interested on the total spectrum of the useful signal , while in the analysis of the first kind of filter we were interested only on the maximum amplitude of this signal . ] at the filter output the signal will have the following total power : using this relation we can find the snr at the filter output , now consider the particular case of noise spectral density given by ( [ 12 ] ) and useful signal spectral density given by ( [ 13 ] ) . in this casethe filter that reproduces with minimum error has the following transfer function : [1+\tau_0 ^ 2(\omega-\omega_0)^2 ] } } \right ) ^{-1}. \label{27}\ ] ] the minimum error introduced by this filter is obtained from ( [ 23 ] ) using ( [ 12 ] ) and ( [ 14 ] ) : [1+\tau_0 ^ 2(\omega-\omega_0)^2 ] } { { \cal g}^2a } + \frac{1 } { \frac{n_p}{1+(\omega-\omega_0)^2\tau_0 ^ 2 } + \frac{n_s}{1+(\omega-\omega_0)^2\tau_a^2 } } \right ) ^{-1 } d\omega . \label{29}\ ] ] in detector a the total noise power at the filter input is .if we use filter ( [ 27 ] ) in this detector we obtain , which is times smaller than . comparing ( [ 25 ] ) and ( [ 25a ] ) for this case , we find and , so that .on the other hand , using filter ( [ 27 ] ) in detector b we obtain an output error of , which is almost times smaller then the input noise , . in this case , and , which imply .if the crab bandwidth were we would obtain ; this bandwidth also implies .we would find the same if the bandwidth of psr1937 + 214 were , corresponding to .we have derived expressions for the transfer functions of two kinds of filters , both designed to detect continuous monochromatic waves . the first filter , , optimizes snr at its output ( equation ( [ 1 ] ) ) and is important for a first detection of the wave .the second filter , , reproduces the wave with minimum error and should be used when we intend to know the complete shape of the wave .in the study of we have first analysed the detection of crab pulsar .supposing ( see section ) we concluded that this pulsar could be detected if its signal were as monochromatic as ; it means a sampling time of the order of 100 msec .we have also analysed a possible detection of psr 1937 + 214 and suggested a maximum limit for its bandwidth , allowing its detection by third generation resonant mass detectors . 
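as a complement to the summary above, here is an equally schematic sketch of the minimum-error (wiener) filter discussed in this section: its transfer function is the ratio of the signal spectral density to the total (signal plus noise) spectral density, and the residual mean square error is the integral of the product of the two densities divided by their sum, which is the standard form of the wiener result. the lorentzian signal spectrum and the flat noise level used below are illustrative assumptions, not the parameters of detectors a or b.

```python
import numpy as np

# assumed central frequency, bandwidth and flat noise level (placeholders only)
w0, a = 2 * np.pi * 60.0, 1e-3
w = np.linspace(w0 - 50 * a, w0 + 50 * a, 20001)
dw = w[1] - w[0]

s_signal = a / (a**2 + (w - w0)**2)   # lorentzian signal spectral density
s_noise = np.full_like(w, 1e2)        # assumed white noise spectral density

# wiener filter: close to 1 where the signal dominates, close to 0 elsewhere
w_filter = s_signal / (s_signal + s_noise)
print("filter gain at resonance:", w_filter.max())

# standard minimum mean-square error of the wiener filter
mse = np.sum(s_signal * s_noise / (s_signal + s_noise)) * dw / (2 * np.pi)
print("mean square reproduction error (illustrative):", mse)
```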
with the continuous optimization of present detectors, the condition of monochromaticity of the signal becomes weaker. for example, if , the bandwidths of the signal can be larger than those we obtained in this paper. besides, the inequality ( [ 211 ] ) can be used as a reference to optimize resonant mass continuous gravitational wave detectors. note that continuous sources with high frequency, small bandwidth and high amplitude are the most favourable for detection, requiring the least improvement of the detector. on the other hand, the detector should have a small equivalent temperature, and materials with high quality factor and density should be preferred for the antenna body, which should be as large as possible. thanks cnpq (brasília-df, brazil) and fapesp (são paulo-sp, brazil) for financial support, and c.o.e. thanks cnpq for partial support. we are grateful to o.d. aguiar and c. frajuca for fruitful discussions.
we determine the transfer functions of two kinds of filters that can be used in the detection of continuous gravitational radiation. the first one optimizes the signal-to-noise ratio, and the second reproduces the wave with minimum error. we analyse the behaviour of these filters in connection with actual detection schemes.
the recent complete dna sequences of many organisms are available to systematically search of genome structure . for the large amount of dna sequences ,developing methods for extracting meaningful information is a major challenge for bioinformatics . to understand the one - dimensional symbolic sequences composed of the four letters ` a ' , ` c ' , ` g ' and ` t ' ( or ` u ' ) , some statistical and geometrical methods were developed . in special , chaos game representation ( cgr) , which generates a two - dimensional square from a one - dimensional sequence , provides a technique to visualize the composition of dna sequences .the characteristics of cgr images was described as genomic signature , and classification of species in the whole bacteria genome was analyzed by making an euclidean metric between two cgr images . based on the genomic signature ,the distance between two dna sequences depending on the length of nucleotide strings was presented and the horizontal transfers in prokaryotes and eukaryotes were detected and charaterized .recently , a one - to - one metric representation of the dna sequences , which was borrowed from the symbolic dynamics , makes an ordering of subsequences in a plane .suppression of certain nucleotide strings in the dna sequences leads to a self - similarity of pattern seen in the metric representation of dna sequences .self - similarity limits of genomic signatures were determined as an optimal string length for generating the genomic signatures . moreover , by using the metric representation method , the recurrence plot technique of dna sequences was established and employed to analyze correlation structure of nucleotide strings . as a eukaryotic organism ,yeast is one of the premier industrial microorganisms , because of its essential role in brewing , baking , and fuel alcohol production .in addition , yeast has proven to be an excellent model organism for the study of a variety of biological problems involving the fields of genetics , molecular biology , cell biology and other disciplines within the biomedical and life sciences . in april 1996, the complete dna sequence of the yeast ( saccharomyces cevevisiae ) genome , consisting of 16 chromosomes with 12 million basepairs , had been released to provide a resource of genome information of a single organism .however , only 43.3% of all 6000 predicted genes in the saccharomyces cerevisiae yeast were functionally characterized when the complete sequence of the yeast genome became available .moreover , it was found that dna transposable elements have ability to move from place to place and make many copies within the genome via the transposition .therefore , the yeast complete dna sequence remain a topic to be studied respect to its genome architecture structure in the whole sequence . in this paper , using the metric representation and recurrence plot methods , we analyze global transposable characteristics in the yeast complete dna sequence , i.e. , 16 chromosome sequences .for a given dna sequence ( ) , a plane metric representation is generated by making the correspondence of symbol to number or and calculating values ( , ) of all subsequences ( ) defined as follows where is 0 if or 1 if and is 0 if or 1 if .thus , the one - dimensional symbolic sequence is partitioned into subsequences and mapped in the two - dimensional plane ( ) .subsequences with the same ending -nucleotide string , which are labeled by , correspond to points in the zone encoded by the -nucleotide string . 
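a short sketch of the metric representation may help fix ideas. the exact letter-to-bit assignment was lost in the extraction of the formulas above, so the grouping chosen below is one common convention and an assumption on our part; the weighting of positions by inverse powers of two follows the usual symbolic-dynamics construction, with later letters carrying the larger weights so that subsequences sharing the same ending string fall into the same zone of the plane.

```python
def metric_representation(seq, k):
    """map every k-letter subsequence of a dna string to a point in the unit square.

    the letter-to-bit grouping (a/c -> 0, g/t -> 1 on one axis; a/g -> 0,
    c/t -> 1 on the other) is an assumed convention, since the exact
    assignment was garbled in the source text.
    """
    bit_x = {'a': 0, 'c': 0, 'g': 1, 't': 1}
    bit_y = {'a': 0, 'g': 0, 'c': 1, 't': 1}
    points = []
    for end in range(k, len(seq) + 1):
        sub = seq[end - k:end]
        # the last letter gets weight 1/2, the one before 1/4, and so on
        x = sum(bit_x[s] * 2.0 ** -(k - i + 1) for i, s in enumerate(sub, start=1))
        y = sum(bit_y[s] * 2.0 ** -(k - i + 1) for i, s in enumerate(sub, start=1))
        points.append((x, y))
    return points

print(metric_representation("acgtacgt", 3)[:3])
```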
taking a subsequence , we calculate where is the heaviside function [ , if ; , if and is a subsequence ( ) . when , i.e. , , a point is plotted in a plane .thus , repeating the above process from the beginning of one - dimensional symbolic sequence and shifting forward , we obtain a recurrence plot of the dna sequence . for presenting correlation structure in the recurrence plot plane ,a correlation intensity is defined at a given correlation distance the quantity displays the transference of -nucleotide strings in the dna sequence . to further determine positions and lengths of the transposable elements , we analyze the recurrent plot plane .since and , the transposable element has the length at least . from the recurrence plot plane, we calculate the maximal value of to satisfy i.e. , and .thus , the transposable element with the correction distance has the length .the transposable element is placed at the position and .the saccharomyces cevevisiae yeast has 16 chromosomes , which are denoted as yeast i to xvi . using the metric representation and recurrence plot methods, we analyze correlation structures of the 16 dna sequences . according to the characteristics of the correlation structures , we summarize the results as follows : \(1 ) the correlation distance has a short period increasing .the yeast i , ix and xi have such characteristics .let me take the yeast i as an example to analyze .fig.1 displays the correlation intensity at different correlation distance with .a local region is magnified in the figure .it is clearly evident that there exist some equidistance parallel lines with a basic correlation distance .( 4 ) , we determine positions and lengths of the transposable elements in table i , where their lengths are limited in .many nucleotide strings have correlation distance , which is the integral multiple of .they mainly distribute in two local regions of the dna sequence ( 25715 - 26845 ) and ( 204518 - 206554 ) or ( 11.2 - 11.7% ) and ( 88.8 - 89.7% ) expressed as percentages .the yeast ix and xi have similar behaviors .the yeast ix has the basic correlation distance .many nucleotide strings ( ) with the integral multiple of mainly distribute in a local region of the dna sequence ( 391337 - 393583 ) or ( 89.0 - 89.5% ) expressed as percentages. the yeast xi has the basic correlation distance .many nucleotide strings ( ) with the integral multiple of mainly distribute in a local region of the dna sequence ( 647101 - 647783 ) or ( 97.1 - 97.2% ) expressed as percentages .\(2 ) the correlation distance has a long major value and a short period increasing .the yeast ii , v , vii , viii , x , xii , xiii , xiv , xv and xvi have such characteristics .let me take the yeast ii as an example to analyze .fig.2 displays the correlation intensity at different correlation distance with .the maximal correlation intensity appears at the correlation distance .a local region is magnified in the figure .it is clearly evident that there exist some equidistance parallel lines with a basic correlation distance . in table ii, positions and lengths ( ) of the transposable elements are given .the maximal transposable elements mainly distribute in two local regions of the dna sequence ( 221249 - 224565 , 259783 - 263097 ) or ( 27.2 - 27.6% , 31.9- 32.4% ) expressed as percentage .near the positions , there also exist some transposable elements with approximate values for . 
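the recurrence plot and the correlation intensity defined earlier in this section can be computed with a brute-force scan over all pairs of equal-length nucleotide strings. the sketch below is ours and is written for clarity rather than speed (analysing whole chromosomes would require a more efficient implementation); indices are 0-based here, whereas the paper counts positions from 1.

```python
def recurrence_points(seq, k):
    """positions (i, j), i < j, where the k-strings starting at i and j coincide."""
    pts = []
    for i in range(len(seq) - k + 1):
        for j in range(i + 1, len(seq) - k + 1):
            if seq[i:i + k] == seq[j:j + k]:
                pts.append((i, j))
    return pts

def correlation_intensity(points):
    """number of recurrent k-strings at each correlation distance j - i."""
    intensity = {}
    for i, j in points:
        intensity[j - i] = intensity.get(j - i, 0) + 1
    return intensity

pts = recurrence_points("acgtacgtttacgt", 4)
print(correlation_intensity(pts))
```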
moreover , many nucleotide strings have correlation distance , which is the integral multiple of .they mainly distribute in a local region of the dna sequence ( 391337 - 393583 ) or ( 89.0 - 89.5% ) expressed as percentages . in the other 9 dna sequences , the yeast v , x , xii , xiii , xiv , xv and xvi have the same basic correlation distance and similar behaviors with different major correlation distance , 5584 , 9137 , 12167 , 5566 , 447110 and 45988 , respectively .the yeast vii and viii have different basic correlation distance and 135 , and similar behaviors with the major correlation distance and 1998 , respectively .\(3 ) the correlation distance has a long quasi - period increasing .the yeast iii has such characteristics .3 displays the coherence intensity at different correlation distance with .the correlation intensity has the maximal value at the correlation distance and two vice - maximal values at the correlation distance and .since , the coherence distance has a quasi - period increasing .a local region is magnified in the figure .these does not exist any clear short period increasing of the correlation distance . using eq .( 4 ) , we determine positions and lengths ( ) of the transposable elements in table iii .the maximal and vice - maximal transposable elements mainly distribute in local regions of the dna sequence ( 11499 - 13810 , 197402 - 199713 ) , ( 198171 - 199796 , 291794 - 293316 ) and ( 12268 - 12932 , 291794 - 292460 ) or ( 3.6 - 4.4% , 62.6 - 63.6% ) , ( 62.8 - 63.4% , 92.5 - 93.0% ) and ( 3.9 - 4.1% , 92.5 - 92.7% ) expressed as percentage .\(4 ) the correlation distance has a long major value and a long quasi - period and two short period increasing .the yeast iv has such characteristics .4 displays the coherence intensity at different correlation distance with .the maximal coherence intensity appears at the correlation distance .there also exist three vice - maximal values at the correlation distance , and , which forms a long quasi - period increasing of the correlation distance , i.e. , .a local region is magnified in the figure .it is clearly evident that there exist two short period increasing with and in the correlation distance . in tableiv , positions and lengths ( ) of the transposable elements are determined by using eq . ( 4 ) . all correlation distance with the long major value and the long quasi - period and two short period increasing are denoted .the transposable elements with , , , , and mainly distribute in local regions of the dna sequence ( 527570 - 538236 ) , ( 871858 - 876927 , 981207 - 986276 ) , ( 645646 - 651457 , 878346 - 884257 ) , ( 646379 - 651032 , 987600 - 992253 ) , ( 1307733 - 1308591 ) and ( 758135 - 759495 ) or ( 34.4 - 35.1% ) , ( 56.9 - 57.2% , 64.0 - 64.4% ) , ( 42.1 - 42.5% , 57.3 - 57.7% ) , ( 42.2 - 42.5% , 64.4 - 64.8% ) , ( 85.36 - 85.41% ) and ( 49.5 - 49.6% ) expressed as percentages .\(5 ) the dna sequence is hardly relevant .the yeast vi has such characteristics .5 displays the coherence intensity at different correlation distance with .the maximal coherence intensity appears at the correlation distance .a local region is magnified in the figure .the sequence has not a short period increasing of the coherence distance . 
in tablev , positions and lengths ( ) of the transposable elements are given .only one nucleotide string with the length 337 has the correlation distance .the yeast vi is almost never relevant , so the yeast vi approaches a random sequence .global transposable characteristics in the yeast complete dna sequence is determined by using the metric representation and recurrence plot methods .positions and lengths of all transposable nucleotide strings in the 16 chromosome dna sequences of the yeast are determined . in the form of the correlation distance of nucleotide strings ,the fundamental transposable characteristics displays a short period increasing , a long quasi - period increasing , a long major value and hardly relevant .the 16 chromosome sequences are divided into 5 groups , which have one or several of the 4 kinds of the fundamental transposable characteristics .p. j. deschavanne , a. giron , j. vilain , g. fagot , and b. fertil , genomic signature : characterization and classification of species assessed by chaos game representation of sequences .* 16 * ( 1999 ) 1391 .
global transposable characteristics in the complete dna sequence of the saccharomyces cerevisiae yeast are determined by using the metric representation and recurrence plot methods. in terms of the correlation distance of nucleotide strings, the 16 chromosome sequences of the yeast, which fall into 5 groups, display 4 kinds of fundamental transposable characteristics: a short-period increase, a long quasi-period increase, a long major value, and hardly any correlation. * keywords * yeast, dna sequences, coherence structure, metric representation, recurrence plot
dans le modle cosmologique , dit de concordance car il est en conformit avec tout un ensemble de donnes observationnelles , la matire ordinaire do nt sont constitus les toiles , le gaz , les galaxies , etc .( essentiellement sous forme baryonique ) ne forme que 4% de la masse - nergie totale , ce qui est dduit de la nuclosynthse primordiale des lments lgers , ainsi que des mesures de fluctuations du rayonnement du fond diffus cosmologique ( cmb ) le rayonnement fossile qui date de la formation des premiers atomes neutres dans lunivers .nous savons aussi quil y a 23% de matire noire sous forme _non baryonique _ et do nt nous ne connaissons pas la nature .et les 73% qui restent ?et bien , ils sont sous la forme dune mystrieuse nergie noire , mise en vidence par le diagramme de hubble des supernovae de type ia , et do nt on ignore lorigine part quelle pourrait tre sous la forme dune constante cosmologique .le contenu de lunivers grandes chelles est donc donn par le `` camembert '' de la figure [ fig1 ] do nt 96% nous est inconnu !la matire noire permet dexpliquer la diffrence entre la masse dynamique des amas de galaxies ( cest la masse dduite du mouvement des galaxies ) et la masse de la matire lumineuse qui comprend les galaxies et le gaz chaud intergalactique .mais cette matire noire ne fait pas que cela !nous pensons quelle joue un rle crucial dans la formation des grandes structures , en entranant la matire ordinaire dans un effondrement gravitationnel , ce qui permet dexpliquer la distribution de matire visible depuis lchelle des amas de galaxies jusqu lchelle cosmologique .des simulations numriques trs prcises permettent de confirmer cette hypothse .pour que cela soit possible il faut que la matire noire soit non relativiste au moment de la formation des galaxies .on lappelera matire noire _ froide _ou cdm selon lacronyme anglais , et il y a aussi un nom pour la particule associe : un wimp pour `` weakly interacting massive particle '' .il ny a pas dexplication pour la matire noire ( ni pour lnergie noire ) dans le cadre du modle standard de la physique des particules .mais des extensions au - del du modle standard permettent de trouver des bons candidats pour la particule ventuelle de matire noire . par exemple dans un modle de super - symtrie ( qui associe tout fermion un partenaire super - symtrique qui est un boson et rciproquement ) lun des meilleurs candidats est le _ neutralino _ , qui est un partenaire fermionique super - symtrique dune certaine combinaison de bosons du modle standard ., qui fut introduit dans une tentative pour rsoudre le problme de la violation cp en physique des particules , est une autre possibilit .il y a aussi les tats de kaluza - klein prdits dans certains modles avec dimensions suplmentaires .quant lnergie noire , elle apparat comme un milieu de densit dnergie _ constante _ au cours de lexpansion , ce qui implique une violation des `` conditions dnergie '' habituelles avec une pression ngative .lnergie noire pourrait tre la fameuse constante cosmologique queinstein avait introduite dans les quations de la relativit gnrale afin dobtenir un modle dunivers statique , puis quil avait abandonne lorsque lexpansion fut dcouverte .depuis zeldovich on interprte comme lnergiedu vide associe lespace - temps lui - mme .le problme est que lestimation de cette nergie en thorie des champs donne une valeur fois plus grande que la valeur observe ! 
on ne comprend donc pas pourquoi la constante cosmologique est si petite .malgr lnigme de lorigine de ses constituents , le modle -cdm est plein de succs , tant dans lajustement prcis des fluctuations du cmb que dans la reproduction fidle des grandes structures observes .une leon est que la matire noire apparat forme de particules ( les wimps ) grande chelle .la matire noire se manifeste de manire clatante dans les galaxies , par lexcs de vitesse de rotation des toiles autour de ces galaxies en fonction de la distance au centre cest la clbre courbe de rotation ( voir la figure [ fig2 ] ) .les mesures montrent qu partir dune certaine distance au centre la courbe de rotation devient pratiquement plate , cest - - dire que la vitesse devient constante .daprs la loi de newton la vitesse dune toile sur une orbite circulaire ( keplerienne ) de rayon est donne par o est la masse contenue dans la sphre de rayon .pour obtenir une courbe de rotation plate il faut donc supposer que la masse crot proportionnellement ( et donc que la densit dcrot comme ) , ce qui nest certainement pas le cas de la matire visible . on est oblig dinvoquer lexistence dun gigantesque halode matire noire invisible ( qui ne rayonne pas ) autour de la galaxie et do nt la masse dominerait celle des toiles et du gaz .cette matire noire peut - elle tre faite de la mme particule que celle suggre par la cosmologie ( un wimp ) ?des lments de rponse sont fournis par les simulations numriques de cdm en cosmologie qui sont aussi valables lchelle des galaxies , et qui donnent un profil de densit universel pour le halo de matire noire . a grande distancece profil dcroit en soit plus rapidement que ce quil faudrait pour avoir une courbe plate , mais ce nest pas trs grave car on peut supposer que la courbe de rotation est observe dans un rgime intermdiaire avant de dcrotre . plus grave est la prdiction dun pic central de densit au centre des galaxies , o les particules de matire noire tendent sagglomrer cause de la gravitation , avec une loi en pour petit . or les courbes de rotation favorisent plutt un profil de densit sans divergence , avec un coeur de densit constante .dautres problmes rencontrs par les halos simuls de cdm sont la formation dune multitude de satellites autour des grosses galaxies , et la loi empirique de tully et fisher qui nest pas explique de faon naturelle .cette loi montre dans la figure [ fig3 ] relie la luminosit des galaxies leur vitesse asymptotique de rotation ( qui est la valeur du plateau dans la figure [ fig2 ] ) par .noter que cette loi ne fait pas rfrence la matire noire !la vitesse et la luminosit sont bien sr celles de la matire ordinaire , et la matire noire semble faire ce que lui dicte la matire visible .mais le dfi le plus important de cdm est de pouvoir rendre compte dune observation tonnante appele _ loi de milgrom _ , selon laquelle la matire noire intervient uniquement dans les rgions o le champ de gravitation ( ou , ce qui revient au mme , le champ dacclration ) est plus _ faible _ quune certaine acclration critique mesure la valeur `` universelle '' . 
tout se passe comme si dans le rgime des champs faibles , la matire ordinaire tait acclre non par le champ newtonien mais par un champ donn simplement par .la loi du mouvement sur une orbite circulaire donne alors une vitesse _ constante _ et gale .ce rsultat nous rserve un bonus important : puisque le rapport masse - sur - luminosit est approximativement le mme dune galaxie lautre , la vitesse de rotation doit varier comme la puissance de la luminosit , en accord avec la loi de tully - fisher ! pour avoir une rgle qui nous permette dajuster les courbes de rotation des galaxies il nous faut aussi prendre en compte le rgime de champ fort dans lequel on doit retrouver la loi newtonienne .on introduit une fonction dinterpolation dpendant du rapport et qui se ramne lorsque , et qui tend vers 1 quand .notre rgle sera donc ici dsigne la norme du champ de gravitation ressenti par les particules dpreuves .une formule encore plus oprationnelle est obtenue en prenant la divergence des deux membres de ce qui mne lquation de poisson modifie , o est le laplacien et le potentiel newtonien local .loprateur appliqu une fonction scalaire est le gradient , appliqu un vecteur cest la divergence : .par convention , on note les vecteurs en caractres gras . ] : = -4 \pi \ , g\,\rho_\text{b } \,,\ ] ] do nt la source est la densit de matire baryonique ( le champ gravitationnel est irrotationnel : ) .on appellera lquation la formule mond pour `` modified newtonian dynamics '' .le succs de cette formule ( on devrait plus exactement dire cette _ recette _ ) dans lobtention des courbes de rotation de nombreuses galaxies est impressionnant ; voir la courbe en trait plein dans la figure [ fig2 ] .cest en fait un ajustement un paramtre libre , le rapport de la galaxie qui est donc _ mesur _ par notre recette .on trouve que non seulement la valeur de est de lordre de 1 - 5 comme il se doit , mais quelle est remarquablement en accord avec la couleur observe de la galaxie .beaucoup considrent la formule mond comme `` exotique '' et reprsentant un aspect mineur du problme de la matire noire .on entend mme parfois dire que ce nest pas de la physique .bien sr ce nest pas de la physique _ fondamentale _ cette formule ne peut pas tre considre comme une thorie fondamentale , mais elle constitue de lexcellente physique !elle capture de faon simple et puissante tout un ensemble de faits observationnels .au physicien thoricien dexpliquer pourquoi .la valeur numrique de se trouve tre trs proche de la constante cosmologique : .cette concidence cosmique pourrait nous fournir un indice !elle a aliment de nombreuses spculations sur une possible influence de la cosmologie dans la dynamique locale des galaxies .face la `` draisonnable efficacit '' de la formule mond , trois solutions sont possibles . 1 . laformule pourrait sexpliquer dans le cadre cdm .mais pour rsoudre les problmes de cdm il faut invoquer des mcanismes astrophysiques compliqus et effectuer un ajustement fin des donnes galaxie par galaxie .2 . 
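as a purely numerical illustration of the mond recipe described above, the sketch below computes the circular velocity of a point-mass model galaxy in both newtonian gravity and mond, using the so-called "simple" interpolating function mu(x) = x/(1+x); the choice of interpolating function, the point-mass model and the mass value are our assumptions, not the paper's. it shows the velocity flattening towards (G M a0)^{1/4} at large radii, where the acceleration drops below a0.

```python
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
a0 = 1.2e-10           # m s^-2, milgrom's acceleration scale
M = 1.0e41             # kg, assumed baryonic mass of a toy point-mass galaxy

def mond_acceleration(g_newton, a0):
    # solve mu(g/a0) * g = g_newton with mu(x) = x / (1 + x);
    # this closed form follows from the quadratic equation it implies
    return 0.5 * g_newton + np.sqrt(0.25 * g_newton**2 + g_newton * a0)

r = np.logspace(19, 21, 5)                  # radii in metres (~0.3 to ~30 kpc)
g_newton = G * M / r**2
v_newton = np.sqrt(g_newton * r) / 1e3      # km/s
v_mond = np.sqrt(mond_acceleration(g_newton, a0) * r) / 1e3

for ri, vn, vm in zip(r, v_newton, v_mond):
    print(f"r = {ri:.2e} m   v_newton = {vn:6.1f} km/s   v_mond = {vm:6.1f} km/s")
```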
on esten prsence dune modification de la loi de la gravitation dans un rgime de champ faible .cest lapproche traditionnelle de mond et de ses extensions relativistes .la gravitation nestpas modifie mais la matire noire possde des caractristiques particulires la rendant apte expliquer la phnomnologie de mond .cest une approche nouvelle qui se prte aussi trs bien la cosmologie .la plupart des astrophysiciens des particules et des cosmologues des grandes structures sont partisans de la premire solution .malheureusement aucun mcanisme convainquant na t trouv pour incorporer de faon naturelle la constante dacclration dans les halos de cdm .dans la suite nous considrerons que la solution 1 .est dores et dj exclue par les observations .les approches 2 .de gravitation modifie et 3 .que lon peut qualifier de _ matire noire modifie _ croient toutes deux dans la pertinence de mond , mais comme on va le voir sont en fait trs diffrentes .notez que dans ces deux approches il faudra expliquer pourquoi la matire noire semble tre constitue de wimps lchelle cosmologique .cette route , trs dveloppe dans la littrature , consiste supposer quil ny a pas de matire noire , et que reflte une violation fondamentale de la loi de la gravitation .cest la proposition initiale de milgrom un changement radical de paradigme par rapport lapproche cdm .pour esprer dfinir une thorie il nous faut partir dun lagrangien . or il est facile de voir que dcoule dun lagrangien , celui - ci ayant la particularit de comporter un terme cintique non standard pour le potentiel gravitationnel , du type $ ] au lieu du terme habituel , o est une certaine fonction que lon relie la fonction .ce lagrangien a servi de point de dpart pour la construction des thories de la gravitation modifie . on veut modifier larelativit gnrale de faon retrouver mond dans la limite non - relativiste , cest - - dire quand la vitesse des corps est trs faible par rapport la vitesse de la lumire . en relativit gnrale la gravitation est dcrite par un champ tensoriel deux indices appel la mtrique de lespace - temps .cette thorie est extrmement bien vrifie dans le systme solaire et dans les pulsars binaires , mais peu teste dans le rgime de champs faibles qui nous intresse ( en fait la relativit gnrale est le royaume des champs gravitationnels forts ) .la premire ide qui vient lesprit est de promouvoir le potentiel newtonien en un champ scalaire ( sans indices ) et donc de considrer une thorie _ tenseur - scalaire _ dans laquelle la gravitation est dcrite par le couple de champs .on postule , de manire _ ad - hoc _ , un terme cintique non standard pour le champ scalaire : o est reli , et on choisit le lagrangien deinstein - hilbert de la relativit gnrale pour la partie concernant la mtrique .tout va bien pour ce qui concerne le mouvement des toiles dans une galaxie , qui reproduit mond .mais notre thorie tenseur - scalaire est une catastrophe pour le mouvement des photons ! 
en effet ceux - ci ne ressentent pas la prsence du champ scalaire cens reprsenterla matire noire .dans une thorie tenseur - scalaire toutes les formes de matire se propagent dans un espace - temps de mtrique _ physique _ qui diffre de la mtrique deinstein par un facteur de proportionalit dpendant du champ scalaire , soit .une telle relation entre les mtriques est dite conforme et laisse invariants les cnes de lumire de lespace - temps .les trajectoires de photons seront donc les mmes dans lespace - temps physique que dans lespace - temps deinstein ( cela se dduit aussi de linvariance conforme des quations de maxwell ) .comme on observe dnormes quantits de matire noire grce au mouvement des photons , par effet de lentille gravitationnelle , la thorie tenseur - scalaire est limine .pour corriger cet effet dsastreux du mouvement de la lumire on rajoute un nouvel lment notre thorie .puisque cest cela qui cause problme on va transformer la relation entre les mtriques et .une faon de le faire est dy insrer ( encore de faon _ ad - hoc _ ) un nouveau champ qui sera cette fois un vecteur avec un indice .on aboutit donc une thorie dans laquelle la gravitation est dcrite par le triplet de champs .cest ce quon appelle une thorie _ tenseur - vecteur - scalaire _ ( teves ) .la thorie teves a t mise au point par bekenstein et sanders .comme dans la thorie tenseur - scalaire on aura la partie deinstein - hilbert pour la mtrique , plus un terme cintique non standard pour le champ scalaire .quant au champ vectoriel on le munit dun terme cintique analogue celui de llectromagntisme , mais dans lequel le rle du potentiel lectromagntique est tenu par notre champ .la thorie teves rsultante est trs complique et pour linstant non relie de la physique microscopique .il a t montr que cest un cas particulier dune classe de thories appeles thories einstein-_ther _ dans lesquelles le vecteur joue le rle principal , en dfinissant un rfrentiel priviligi un peu analogue lther postul au xix sicle pour interprter la non - invariance des quations de maxwell par transformation de galile .si elle est capable de retrouver mond dans les galaxies , la thorie teves a malheureusement un problme dans les amas de galaxies car elle ne rend pas compte de toute la matire noire observe .cest en fait un problme gnrique de toute extension relativiste de mond .cependant ce problme peut tre rsolu en supposant lexistence dune composante de matire noire _ chaude _ sous la forme de neutrinos massifs , ayant la masse maximale permise par les expriences actuelles soit environ .rappelons que toute la matire noire ne peut pas tre sous forme de neutrinos : dune part il ny aurait pas assez de masse , et dautre part les neutrinos tant relativistes auraient tendance lisser lapparence des grandes structures , ce qui nest pas observ .nanmoins une pince de neutrinos massifs pourrait permettre de rendre viables les thories de gravitation modifie .de ce point de vue les expriences prvues qui vont dterminer trs prcisment la masse du neutrino ( en vrifiant la conservation de lnergie au cours de la dsintgration dune particule produisant un neutrino dans ltat final ) vont jouer un rle important en cosmologie .teves a aussi des difficults lchelle cosmologique pour reproduire les fluctuations observes du cmb .l aussi une composante de neutrinos massifs peut aider , mais la hauteur du troisime pic de fluctuation , qui est caractristique de la prsence de matire noire sans pression , reste difficile ajuster .une alternative logique la gravit modifie est de supposer 
quon est en prsence dune forme particulire de matire noire ayant des caractristiques diffrentes de cdm .dans cette approche on a lambition dexpliquer la phnomnologie de mond , mais avec une philosophie nouvelle puisquon ne modifie pas la loi de la gravitation : on garde la relativit gnrale classique , avec sa limite newtonienne habituelle .cette possibilit merge grce lanalogue gravitationnel du mcanisme physique de polarisation par un champ extrieur et quon va appeler `` polarisation gravitationnelle '' .la motivation physique est une analogie frappante ( et peut - tre trs profonde ) entre mond , sous la forme de lquation de poisson modifie , et la physique des milieux dilectriques en lectrostatique .en effet nous apprenons dans nos cours de physique lmentaire que lquation de gauss pour le champ lectrique ( cest lune des quations fondamentales de maxwell ) , est modifie en prsence dun milieu dilectrique par la contribution de la polarisation lectrique ( voir lappendice ) . de mme , mond peut - tre vu comme la modification de lquation de poisson par un milieu `` digravitationnel '' .explicitons cette analogie .on introduit lanalogue gravitationnel de la susceptibilit , soit qui est reli la fonction mond par .la `` polarisation gravitationnelle '' est dfinie par la densit des `` masses de polarisation '' est donne par la divergence de la polarisation soit .avec ces notations lquation devient qui apparat maintenant comme une quation de poisson ordinaire , mais do nt la source est constitue non seulement par la densit de matire baryonique , mais aussi par la contribution des masses de polarisation .il est clair que cette criture de mond suggre que lon est en prsence non pas dune modification de la loi gravitationnelle , mais dune forme nouvelle de matire noire de densit , cest - - dire faite de moments dipolaires aligns dans le champ de gravitation .ltape suivante serait de construire un modle microscopique pour des diples gravitationnels ( tels que ) .lanalogue gravitationnel du diple lectrique serait un vecteur sparant deux masses . on se heurte donc un problme svre : le milieu dipolaire gravitationnel devrait contenir des masses ngatives !ici on entend par masse lanalogue gravitationnel de la charge , qui est ce quon appelle parfois la masse grave .ce problme des masses ngatives rend _ a priori _ le modle hautement non viable .nanmoins , ce modle est intressant car il est facile de montrer que le coefficient de susceptibilit gravitationnelle doit tre ngatif , , soit loppos du cas lectrostatique . or cest prcisment ce que nous dit mond : comme la fonction interpole entre le rgime mond o et le rgime newtonien o , on a et donc bien .il est donc tentant dinterprter le champ gravitationnel plus intense dans mond que chez newton par la prsence de `` masses de polarisation '' qui _ anti - crantent _ le champ des masses gravitationnel ordinaires , et ainsi augmentent lintensit effective du champ gravitationnel ! dans le cadre de ce modle on peut aussi se convaincre quun milieu form de diples gravitationnels est intrinsquement instable , car les constituants microscopiques du diple devraient se repousser gravitationnellement .il faut donc introduire une force interne dorigine _ non - gravitationnelle _ , qui va supplanter la force gravitationnelle pour lier les constituants dipolaires entre eux . 
on pourrait qualifier cette nouvelle interaction de `` cinquime force '' .pour retrouver mond , on trouve de faon satisfaisante que ladite force doit dpendre du champ de polarisation , et avoir en premire approximation la forme dun oscillateur harmonique .par leffet de cette force , lquilibre , le milieu dipolaire ressemble une sorte d``ther statique '' , un peu limage du dilectrique do nt les sites atomiques sont fixes .les arguments prcdents nous laissent penser que mond a quelque chose voir avec un effet de polarisation gravitationnelle .mais il nous faut maintenant construire un modle cohrent , reproduisant lessentiel de cette physique , et _ sans _ masses graves ngatives , donc respectant le principe dquivalence .il faut aussi bien sr que le modle soit _ relativiste _ ( en relativit gnrale ) pour pouvoir rpondre des questions concernant la cosmologie ou le mouvement de photons . on va dcrire le milieu comme un fluide relativiste de quadri - courant ( o est la densit de masse ) , et muni dun quadri - vecteur jouant le rle du moment dipolaire .le vecteur de polarisation est alors .on dfinit un principe daction pour cette matire dipolaire , que lon rajoute laction deinstein - hilbert , et la somme des actions de tous les champs de matire habituels ( baryons , photons , etc ) .on inclue dans laction une fonction potentielle dpendant de la polarisation et cense dcrire une force interne au milieu dipolaire .par variation de laction on obtient lquation du mouvement du fluide dipolaire , ainsi que lquation dvolution de son moment dipolaire .on trouve que le mouvement du fluide est affect par la force interne , et diffre du mouvement godsique dun fluide ordinaire .ce modle ( propos dans ) reproduit bien la phnomnologie de mond au niveau des galaxies .il a t construit pour !mais il a t aussi dmontr quil donne satisfaction en cosmologie o lon considre une perturbation dun univers homogne et isotrope .en effet cette matire noire dipolaire se conduit comme un fluide parfait sans pression au premier ordre de perturbation cosmologique et est donc indistinguable du modle cdm . en particulier le modle est en accord avec les fluctuations du fond diffus cosmologique ( cmb ) . en cesens il permet de rconcilier laspect particulaire de la matire noire telle quelle est dtecte en cosmologie avec son aspect `` modification des lois '' lchelle des galaxies .de plus le modle contient lnergie noire sous forme dune constante cosmologique . il offre une sorte dunification entre lnergie noire et la matire noire _ la _ mond . en consquencede cette unification on trouve que lordre de grandeur naturel de doit tre compatible avec celui de lacclration , cest - - dire que , ce qui est en trs bon accord avec les observations .le modle de matire noire dipolaire contient donc la physique souhaite .son dfaut actuel est de ne pas tre connect de la physique microscopique fondamentale ( _ via _ une thorie quantique des champs ) .il est donc moins fondamental que cdm qui serait motiv par exemple par la super - symtrie .ce modle est une description effective , valable dans un rgime de champs gravitationnels faibles , comme la lisire dune galaxie ou dans un univers presque homogne et isotrope .lextrapolation du modle au champ gravitationnel rgnant dans le systme solaire nest pas entirement rsolue .dun autre ct le problme de comment tester ( et ventuellement falsifier ) ce modle en cosmologie reste ouvert .m. milgrom , astrophys .j. * 270 * , 365 ( 1983 ) .bekenstein , phys . rev .d * 70 * , 083509 ( 2004 ) .sanders , mon . 
not .363 * , 459 ( 2005 ) .l. blanchet , class .* 24 * , 3529 ( 2007 ) .l. blanchet and a. le tiec , phys .d * 78 * , 024031 ( 2008 ) ; and submitted , arxiv:0901.3114 ( 2009 ) .un dilectrique est un matriau isolant , qui ne laisse pas passer les courants , car tous les lectrons sont rattachs des sites atomiques .nanmoins , les atomes du dilectriqueragissent la prsence dun champ lectrique extrieur : le noyau de latome charg positivement se dplace en direction du champ lectrique , tandis que le barycentre des charges ngatives cest - - dire le nuage lectronique se dplace dans la direction oppose .on peut modliser la rponse de latome au champ lectrique par un diple lectrique qui est une charge spare dune charge par le vecteur , et align avec le champ lectrique .la densit des diples nous donne la polarisation .le champ cre par les diples se rajoute au champ extrieur ( engendr par des charges extrieures ) et a pour source la densit de charge de polarisation qui est donne par la divergence de la polarisation : .ainsi lquation de gauss ( qui scrit normalement ) devient en prsence du dilectrique en utilisant les conventions habituelles , avec .on introduit un coefficient de susceptibilit lectrique qui intervient dans la relation de proportionalit entre la polarisation et le champ lectrique : , ainsi : .la susceptibilit est positive , , ce qui implique que le champ dans un dilectrique est plus faible que dans le vide .cest leffet d_crantage _ de la charge par les charges de polarisation .ainsi garnir lespace intrieur aux plaques duncondensateur avec un matriau dilectrique diminue lintensit du champ lectrique , et donc augmente la capacit du condensateur pour une tension donne .
to the astrophysicist tackling the dark matter puzzle, dark matter appears under two different aspects: on the one hand in cosmology, that is at very large scales, where it seems to be made of a bath of particles, and on the other hand at the scale of galaxies, where it is described by a set of very peculiar phenomena which appear incompatible with a description in terms of particles, and which lead some to say that we are facing a modification of the law of gravitation. reconciling these two distinct aspects of dark matter within a single theoretical formalism is a major challenge that might perhaps lead to new physics at work on astronomical scales.
given an input sequence , _ segmentation _ is the problem of identifying and assigning tags to its subsequences .many natural language processing ( nlp ) tasks can be cast into the segmentation problem , like named entity recognition , opinion extraction , and chinese word segmentation . properly representing _ segment _ is critical for good segmentation performance .widely used sequence labeling models like conditional random fields represent the contextual information of the segment boundary as a proxy to entire segment and achieve segmentation by labeling input units ( e.g. words or characters ) with boundary tags .compared with sequence labeling model , models that directly represent segment are attractive because they are not bounded by local tag dependencies and can effectively adopt segment - level information .semi - markov crf ( or semi - crf ) is one of the models that directly represent the entire segment . in semi - crf ,the conditional probability of a semi - markov chain on the input sequence is explicitly modeled , whose each state corresponds to a subsequence of input units , which makes semi - crf a natural choice for segmentation problem. however , to achieve good segmentation performance , conventional semi - crf models require carefully hand - crafted features to represent the segment .recent years witness a trend of applying neural network models to nlp tasks .the key strengths of neural approaches in nlp are their ability for modeling the compositionality of language and learning distributed representation from large - scale unlabeled data . representinga segment with neural network is appealing in semi - crf because various neural network structures have been proposed to compose sequential inputs of a segment and the well - studied word embedding methods make it possible to learn entire segment representation from unlabeled data . in this paper, we combine neural network with semi - crf and make a thorough study on the problem of representing a segment in neural semi - crf . proposed a segmental recurrent neural network ( srnn ) which represents a segment by composing input units with rnn .we study alternative network structures besides the srnn .we also study segment - level representation using _ segment embedding _ which encodes the entire segment explicitly .we conduct extensive experiments on two typical nlp segmentation tasks : named entity recognition ( ner ) and chinese word segmentation ( cws ) .experimental results show that our concatenation alternative achieves comparable performance with the original srnn but runs 1.7 times faster and our neural semi - crf greatly benefits from the segment embeddings . in the ner experiments , our neural semi - crf model with segment embeddingsachieves an improvement of 0.7 f - score over the baseline and the result is competitive with state - of - the - art systems . in the cws experiments ,our model achieves more than 2.0 f - score improvements on average . on the pku and msr datasets ,state - of - the - art f - scores of 95.67% and 97.58% are achieved respectively .we release our code at https://github.com/expresults/segrep-for-nn-semicrf .figure [ fig : ne - and - cws ] shows examples of named entity recognition and chinese word segmentation . for the input word sequence in the ner example , its segments ( _ `` michael jordan'':per , `` is'':none , `` a'':none , `` professor'':none , `` at'':none , `` berkeley'':org _ )reveal that `` michaels jordan '' is a person name and `` berkeley '' is an organization . 
in the cws example , the subsequences ( utf8gkai``/pudong '' , `` /development '' , `` /and '' , `` /construction '' ) of the input character sequence are recognized as words .both ner and cws take an input sequence and partition it into disjoint subsequences . formally , for an input sequence of length , let denote its subsequence .segment _ of is defined as which means the subsequence is associated with label .a _ segmentation _ of is a _ segment _ sequence , where and .given an input sequence , the _ segmentation _ problem can be defined as the problem of finding s most probable _ segment _ sequence .0.241 0.241 0.241 0.241semi - markov crf ( or semi - crf , figure [ fig : std ] ) models the conditional probability of on as where is the feature function , is the weight vector and is the normalize factor of all possible _ segmentations _ over . by restricting the scope of feature function within a segment and ignoring label transition between segments ( 0-order semi - crf ), can be decomposed as where maps segment into its representation .such decomposition allows using efficient dynamic programming algorithm for inference . to find the best segmentation in semi - crf ,let denote the best segmentation ends with ^th^ input and is recursively calculated as where is the maximum length manually defined and is the transition weight for in which .previous semi - crf works parameterize as a sparse vector , each dimension of which represents the value of corresponding feature function . generally , these feature functions fall into two types : 1 ) the _ crf style features _ which represent input unit - level information such as `` the specific words at location '' 2 ) the _ semi - crf style features _ which represent segment - level information such as `` the length of the segment '' . proposed the segmental recurrent neural network model ( srnn , see figure [ fig : rnn ] ) which combines the semi - crf and the neural network model . in srnn , is parameterized as a bidirectional lstm ( bi - lstm ) .for a segment , each input unit in subsequence is encoded as _ embedding _ and fed into the bi - lstm .the rectified linear combination of the final hidden layers from bi - lstm is used as . pioneers in representing a segment in neural semi - crf .bi - lstm can be regarded as `` neuralized '' _ crf style features _ which model the input unit - level compositionality .however , in the srnn work , only the bi - lstm was employed without considering other input unit - level composition functions .what is more , the _ semi - crf styled _ segment - level information as an important representation was not studied . in the following sections , we first study alternative input unit - level composition functions ( [ sec : alt - inp - rep ] ) .then , we study the problem of representing a segment at segment - level ( [ sec : seg - rep ] ) . besides recurrent neural network ( rnn ) andits variants , another widely used neural network architecture for composing and representing variable - length input is the convolutional neural network ( cnn ) . in cnn ,one or more filter functions are employed to convert a fix - width segment in sequence into one vector .with filter function `` sliding '' over the input sequence , contextual information is encoded .finally , a pooling function is used to merge the vectors into one . 
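stepping back to the semi-crf inference recurrence given earlier in this section, it translates directly into a short dynamic program. the following sketch is our own illustration of 0-order semi-crf decoding: the segment scoring function is left abstract (in the model it would be the learned weight vector applied to the segment representation plus the label term), and the toy scorer at the end exists only to make the example runnable.

```python
def semicrf_decode(n, labels, max_len, score):
    """0-order semi-crf viterbi over an input of length n.

    `score(start, end, label)` returns the (model-dependent) score of the
    segment covering positions start..end-1 with the given label.
    """
    NEG_INF = float("-inf")
    best = [0.0] + [NEG_INF] * n          # best[i]: best segmentation of the first i units
    back = [None] * (n + 1)
    for i in range(1, n + 1):
        for l in range(1, min(max_len, i) + 1):
            for y in labels:
                s = best[i - l] + score(i - l, i, y)
                if s > best[i]:
                    best[i], back[i] = s, (i - l, y)
    # recover the segment sequence by walking the back-pointers
    segments, i = [], n
    while i > 0:
        j, y = back[i]
        segments.append((j, i, y))
        i = j
    return list(reversed(segments))

# toy usage: prefer 2-unit segments labelled "B", everything else "O"
toy_score = lambda j, i, y: 1.0 if (i - j == 2 and y == "B") else 0.1
print(semicrf_decode(6, ["B", "O"], 3, toy_score))
```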
in this paper, we use a filter function of width 2 and max - pooling function to compose input units of a segment .following srnn , we name our cnn segment representation as scnn ( see figure [ fig : cnn ] ) .however , one problem of using cnn to compose input units into segment representation lies in the fact that the max - pooling function is insensitive to input position .two different segments sharing the same vocabulary can be treated without difference . in a cws example , utf8gkai`` '' ( racket for sell ) and `` '' ( ball audition ) will be encoded into the same vector in scnn if the vector of utf8gkai`` '' that produced by filter function is always preserved by max - pooling .concatenation is also widely used in neural network models to represent fixed - length input .although not designed to handle variable - length input , we see that in the inference of semi - crf , a maximum length is adopted , which make it possible to use padding technique to transform the variable - length representation problem into fixed - length of . meanwhile , concatenation preserves the positions of inputs because they are directly mapped into the certain positions in the resulting vector . in this paper, we study an alternative concatenation function to compose input units into segment representation , namely the sconcate model ( see figure [ fig : concate ] ) .compared with srnn , sconcate requires less computation when representing one segment , thus can speed up the inference . for segmentation problems ,a segment is generally considered more informative and less ambiguous than an individual input . incorporating segment - level featuresusually lead performance improvement in previous semi - crf work .segment representations in section [ sec : alt - inp - rep ] only model the composition of input units .it can be expected that the segment embedding which encodes an entire subsequence as a vector can be an effective way for representing a segment . in this paper, we treat the segment embedding as a lookup - based representation , which retrieves the embedding table with the surface string of entire segment . with the entire segment properly embed , it is straightforward to combine the segment embedding with the composed vector from the input so that multi - level information of a segment is used in our model ( see figure [ fig : with - seg ] ) .however , how to obtain such embeddings is not a trivial problem .a natural solution for obtaining the segment embeddings can be collecting all the `` correct '' segments from training data into a lexicon and learning their embeddings as model parameters .however , the in - lexicon segment is a strong clue for a subsequence being a correct segment , which makes our model vulnerable to overfitting .unsupervised pre - training has been proved an effective technique for improving the robustness of neural network model . to mitigate the overfitting problem , we initialize our segment embeddings with the pre - trained one . 
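to make the multi-level segment representation just described concrete, here is a toy numpy sketch of the concatenation (sconcate) composition combined with a segment-embedding lookup and a label embedding. the dimensions, the random stand-ins for the bi-lstm unit representations, the zero vector used for out-of-vocabulary segments and all parameter values are assumptions chosen only for illustration; none of them are the paper's hyper-parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, max_len = 8, 4                    # toy sizes, not the paper's settings

def sconcate(unit_vectors, max_len, dim):
    # concatenate the unit representations of a segment, zero-padded to
    # max_len so that every segment maps to a fixed-size vector
    padded = np.zeros((max_len, dim))
    padded[:len(unit_vectors)] = unit_vectors
    return padded.reshape(-1)

# toy lookup tables standing in for learned parameters / pre-trained embeddings
segment_embeddings = {"michael jordan": rng.normal(size=dim)}
label_embeddings = {"PER": rng.normal(size=dim), "NONE": rng.normal(size=dim)}
W = rng.normal(size=(dim, max_len * dim + 2 * dim)) * 0.1
b = np.zeros(dim)

def segment_representation(unit_vectors, surface, label):
    comp = sconcate(unit_vectors, max_len, dim)                  # input-unit composition
    emb = segment_embeddings.get(surface, np.zeros(dim))         # segment embedding (zero if oov)
    feats = np.concatenate([comp, emb, label_embeddings[label]])
    return np.tanh(W @ feats + b)

units = rng.normal(size=(2, dim))   # stand-ins for bi-lstm outputs of "michael", "jordan"
print(segment_representation(units, "michael jordan", "PER").shape)
```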
word embedding gains a lot of research interest in recent years and is mainly carried on english texts which are naturally segmented .different from the word embedding works , our segment embedding requires large - scale segmented data , which can not be directly obtained .following which utilize automatically segmented data to enhance their model , we obtain the auto - segmented data with our neural semi - crf baselines ( srnn , scnn , and sconcate ) and use the auto - segmented data to learn our segment embeddings .another line of research shows that machine learning algorithms can be boosted by ensembling _heterogeneous _ models .our neural semi - crf model can take knowledge from heterogeneous models by using the segment embeddings learned on the data segmented by the heterogeneous models . in this paper, we also obtain the auto - segmented data from a conventional crf model which utilizes hand - crafted sparse features . once obtaining the auto - segmented data , we learn the segment embeddings in the same with word embeddings .a problem that arises is the fine - tuning of segment embeddings .fine - tuning can learn a task - specific segment embeddings for the segments that occur in the training data , but it breaks their relations with the un - tuned out - of - vocabulary segments .figure [ fig : wo - ft ] illustrates this problem . since oov segments can affect the testing performance , we also try learning our model without fine - tuning the segment embeddings . in this section ,we describe the detailed architecture for our neural semi - crf model .following , we use a bi - lstm to represent the input sequence . to obtain the input unit representation ,we use the technique in and separately use two parts of input unit embeddings : the pre - trained embeddings without fine - tuning and fine - tuned embeddings . for the input , and merged together through linear combination and form the input unit representation + b^\mathcal{i})\ ] ] where the notation of $ ] equals to s linear combination and is the bias . after obtaining the representation for each input unit ,a sequence is fed to a bi - lstm .the hidden layer of forward lstm and backward lstm are combined as +b^\mathcal{h})\ ] ] and used as the ^th^ input unit s final representation .given a segment , a generic function scomp stands for the segment representation that composes the input unit representations . in this work, scomp is instantiated with three different functions : srnn , scnn and sconcate . besides composing input units, we also employ the segment embeddings as segment - level representation .embedding of the segment is denoted as a generic function semb which converts the subsequence into its embedding through a lookup table . at last, the representation of segment is calculated as +b^\mathcal{s})\ ] ] where is the embedding for the label of a segment ..hyper - parameter settings[ cols= " > , < " , ] table [ tbl : cws - stoa ] shows the comparison with the state - of - the - art cws systems .the first block of table [ tbl : cws - stoa ] shows the neural cws models and second block shows the non - neural models .our neural semi - crf model with multi - level segment representation achieves the state - of - the - art performance on pku and msr data . on ctb6 data ,our model s performance is also close to which uses semi - supervised features extracted auto - segmented unlabeled data . 
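the input-unit and segment combination steps described earlier in this section can be mimicked with plain numpy as below. the relu non-linearity, the dimensions and the use of random matrices in place of learned parameters are illustrative assumptions, and the bi-lstm over the input sequence is omitted for brevity.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def input_unit_repr(e_fixed, e_tuned, w_i, b_i):
        # x_k = relu(w_i [e_fixed ; e_tuned] + b_i): linear combination of the
        # pre-trained (not fine-tuned) and fine-tuned embeddings of one input unit
        return relu(w_i @ np.concatenate([e_fixed, e_tuned]) + b_i)

    def segment_repr(composed, seg_emb, label_emb, w_s, b_s):
        # final segment vector combining the composed input units (scomp), the
        # lookup-based segment embedding (semb) and the label embedding
        return relu(w_s @ np.concatenate([composed, seg_emb, label_emb]) + b_s)

    rng = np.random.default_rng(1)
    x = input_unit_repr(rng.normal(size=50), rng.normal(size=50),
                        rng.normal(size=(64, 100)), np.zeros(64))
    s = segment_repr(rng.normal(size=64), rng.normal(size=32), rng.normal(size=16),
                     rng.normal(size=(64, 112)), np.zeros(64))
    print(x.shape, s.shape)    # (64,) (64,)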
according to ,significant improvements can be achieved by replacing character embeddings with character - bigram embeddings .however we did nt employ this trick considering the unification of our model .semi - crf has been successfully used in many nlp tasks like information extraction , opinion extraction and chinese word segmentation .its combination with neural network is relatively less studied . to the best of our knowledge ,our work is the first one that achieves state - of - the - art performance with neural semi - crf model .domain specific knowledge like capitalization has been proved effective in named entity recognition . segment - level abstraction like whether the segment matches a lexicon entry also leads performance improvement . to keep the simplicity of our model, we did nt employ such features in our ner experiments .but our model can easily take these features and it is hopeful the ner performance can be further improved . utilizing auto - segmented data to enhancechinese word segmentation has been studied in .however , only statistics features counted on the auto - segmented data was introduced to help to determine segment boundary and the entire segment was not considered in their work .our model explicitly uses the entire segment .in this paper , we systematically study the problem of representing a segment in neural semi - crf model .we propose a concatenation alternative for representing segment by composing input units which is equally accurate but runs faster than srnn .we also propose an effective way of incorporating segment embeddings as segment - level representation and it significantly improves the performance . experiments on named entity recognition and chinese word segmentation show that the neural semi - crf benefits from rich segment representation and achieves state - of - the - art performance .this work was supported by the national key basic research program of china via grant 2014cb340503 and the national natural science foundation of china ( nsfc ) via grant 61133012 and 61370164 .chris dyer , miguel ballesteros , wang ling , austin matthews , and noah a. smith .transition - based dependency parsing with stack long short - term memory . in _ acl-2015 _ , pages 334343 ,beijing , china , july 2015 .acl .wenbin jiang , meng sun , yajuan l , yating yang , and qun liu .discriminative learning with natural annotations : word segmentation as a case study . in _acl-2013 _ , pages 761769 , sofia , bulgaria , august 2013 .john d. lafferty , andrew mccallum , and fernando c. n. pereira .conditional random fields : probabilistic models for segmenting and labeling sequence data . in _icml 01 _ , pages 282289 , san francisco , ca , usa , 2001 .daisuke okanohara , yusuke miyao , yoshimasa tsuruoka , and junichi tsujii . improving the scalability of semi - markov conditional random fields for named entity recognition . in _ acl-2006 _ , pages 465472 ,sydney , australia , july 2006 .acl .xu sun , yaozhong zhang , takuya matsuzaki , yoshimasa tsuruoka , and junichi tsujii . a discriminative latent variable chinese segmenter with hybrid word / character information . innaacl-2009 _ , pages 5664 , boulder , colorado , june 2009 .yiou wang , junichi kazama , yoshimasa tsuruoka , wenliang chen , yujie zhang , and kentaro torisawa .improving chinese word segmentation and pos tagging with semi - supervised methods using large auto - analyzed data . in _ ijcnlp-2011 _ , pages 309317 , chiang mai , thailand , november 2011 .
many natural language processing ( nlp ) tasks can be generalized into a segmentation problem . in this paper , we combine the semi - crf with neural networks to solve nlp segmentation tasks . our model represents a segment both by composing its input units and by embedding the entire segment . we thoroughly study different composition functions and different segment embeddings . we conduct extensive experiments on two typical segmentation tasks : named entity recognition ( ner ) and chinese word segmentation ( cws ) . experimental results show that our neural semi - crf model benefits from representing the entire segment and achieves state - of - the - art performance on the cws benchmark datasets and competitive results on the conll03 dataset .
nowadays , digital images and other multimedia files can become very large in size and , therefore , occupy a lot of storage space .in addition , owing to their size , it takes more time to move them from place to place and a larger bandwidth to download and upload them on the internet .so , digital images may pose problems if we regard the storage space as well as file sharing . to tackle this problem , _image compression _ which deals with reducing the size of an image ( or any other multimedia ) file can be used .image compression actually refers to the reduction of the amount of image data ( bits ) required for representing a digital image without causing any major degradation of the image quality . by eliminating redundant data and efficiently optimizing the contents of a file image , provided that as much basic meaning as possible is preserved , image compression techniques , make image files smaller and more feasible to share and store .the study of digital image compression has a long history and has received a great deal of attention especially with respect to its many important applications .references for theory and practice of this method are and , to name but a few .image compression , as well as other various fields of digital image processing , benefits from the theory of linear algebra as a helpful tool .in particular , singular value decomposition ( svd ) is one of the most useful tools for image compression .the matrix can be written in the form of , where and are unitary matrices , i.e. , , where denotes complex conjugate transpose and is identity matrix .the matrix is an diagonal matrix in such a way that its nonnegative entries are ordered in a non - increasing order ( see for example , theorem 7.3.5 of ) . with respect to the influences of singular values of in compressing an image , and considering the important pointthat the singular values of are the positive square roots of the eigenvalues of matrices and , the present study concerns itself with the eigenvalue of the normal matrices and on the purpose of establishing certain techniques for image compression that are efficient , lead to desirable results and need fewer calculations . in the next section ,we briefly present some definitions and concepts about normal matrices .section [ comp ] consists of two subsections in which the proposed image compression methods are explained . in section [ exp ] ,the validity rates of the presented image compression schemes are investigated and compare their efficiencies by experimental results .in this section , we review the definition and some properties of normal matrices . see and the references mentioned there as the suggested sources on a series of conditions on normal matrices . in the next section , we will describe the proposed method on the basis of these presented properties .a matrix is called _ normal _ if . assuming as an -square normal matrix , there exists an orthonormal basis of that consists of eigenvectors of , and is unitarily diagonalizable .that is , let the scalars , counted according to multiplicity , be eigenvalues of the normal matrix and let be its corresponding orthonormal eigenvectors .then , the matrix can be factored as the following : ,\ ] ] where the matrix satisfies . maintaining the generality ,assume that eigenvalues are ordered in a non - ascending sequence of magnitude , i.e. 
, .it is to be noticed that , if all the elements of the matrix are real , then , where refers to the transpose of the matrix .a square matrix is called _ symmetric _ if and called _ skew - symmetric _ if .that symmetric and skew - symmetric matrices are normal is easy to see . also , the whole set of the eigenvalues of a real symmetric matrix are real , but all the eigenvalues of a real skew - symmetric matrix are purely imaginary .a general square matrix satisfies , for which the symmetric matrix is called the _ symmetric part _ of and ,similarly , the skew - symmetric matrix is called the _ skew - symmetric _ part of . as a consequence, every square matrix may be written as the sum of two normal matrices : a symmetric matrix and a skew - symmetric one .we specially use this point in the proposed image compression techniques .this section consists of two subsections where methods for image compression are presented using normal matrices . to this purpose, the matrix representing the image is transformed into the space of normal matrices .next , the properties of its eigenvalue decomposition are utilized , and some less significant image data are deleted . finally , by returning to the original space , the compressed image can be constructed .let be an matrix to represent the image .two distinct methods are taken into account .first , the symmetric parts of are dealt with to establish an image compression scheme .this procedure can be performed in the same way for the skew - symmetric parts of the matrix .next , another technique is explained using both symmetric and skew - symmetric parts of the matrix .what is noticeable is that finding the eigenvalues and eigenvectors of a matrix requires fewer calculations than finding its singular values and singular vectors .moreover , it is possible to calculate the eigenvalues and eigenvectors of a normal ( especially symmetric or skew - symmetric ) matrix by explicit formulas and , therefore , may yet again need less computation . in this subsection , a technique of image compression comes into focus on the basis of the eigenvalue decomposition of the symmetric parts of the matrix .this can be performed in the same way for the skew - symmetric parts of .it is to be noted that , for the same image , the results obtained by these two techniques ( i.e. using symmetric or skew - symmetric part ) may be different .assume as the symmetric part of the matrix .the normal matrix can be factored as in the following : now , bearing in mind that the eigenvalues are sequenced in a non - ascending order of magnitude , compress the symmetric part of the image by wiping off the small enough eigenvalues of .if of the larger eigenvalues remains , then there is where the total storage for is .here comes the stage of reconstructing the compressed image from its symmetric part . to this purpose ,all the elements located above the main diagonal of the matrix are needed , and this calls for storage spaces . 
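the scheme of this subsection can be sketched with numpy as follows. the rank k, the random test image and the helper names are assumptions made for illustration only; the reconstruction step follows the relation, noted above, that an element below the main diagonal can be recovered by subtracting the stored element above the diagonal from twice the corresponding element of the symmetric part.

    import numpy as np

    def compress_symmetric_part(x, k):
        # rank-k approximation of the symmetric part of x, keeping the k
        # eigenvalues of largest magnitude of s = (x + x^t)/2
        s = 0.5 * (x + x.T)
        vals, vecs = np.linalg.eigh(s)
        idx = np.argsort(np.abs(vals))[::-1][:k]
        return (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T

    x = np.random.default_rng(2).uniform(0, 255, size=(64, 64))
    s_k = compress_symmetric_part(x, k=8)

    # reconstruction: keep the strictly upper triangle of the original image, take
    # the diagonal from the compressed symmetric part, and recover the strictly
    # lower triangle from x_ij = 2*s_ij - x_ji
    upper = np.triu(x, k=1)
    diag = np.diag(np.diag(s_k))
    lower = np.tril(2.0 * s_k - upper.T, k=-1)
    x_hat = upper + diag + lower
    print(np.abs(x - x_hat).mean())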
because can be considered as an acceptable approximation of , let us take all the elements above the main diagonal of as the elements of .obviously , the elements located below the main diagonal of the matrix should be determined too .in addition , it is inferred from the fact that the elements located below the main diagonal of can be obtained by subtracting the elements below the main diagonal of from the elements of .see the following equation where and denote the given and unknown entries , respectively .}\limits_{\tilde{x } } + \mathop { \left [ { \begin{array}{*{20}c } \times & { } & \times \\ { } & \ddots & { }\\ \checkmark & { } & \times \\ \end{array } } \right]}\limits_{\tilde{x}^t } .\ ] ] it is clear that the main diagonal elements of are the same as those on the main diagonal of .it is also to be noted that , by this procedure , only the elements located below the main diagonal of are modified , and its other elements are remain untouched .moreover , to reconstruct the compressed image , the elements located below the main diagonal of may be reserved instead of those above the main diagonal , and then a procedure similar to what has been performed newly is to be followed .indeed , can be partitioned into several segments , and those segments may be reserved provided that is reconstructed .this may be useful specially when some segments of the image have more significance or unchanging ( or uncompressing ) some partitions of the image is desirable during the image compression .finally , it should be pointed out that reconstruction of the compressed image requires storage spaces when the symmetric part of is dealt with . however , if the skew - symmetric part of is concerned , the diagonal elements of the can not be obtained from and , therefore , they must be reserved . reconstructing the compressed image , thus , requires storage spaces when the skew - symmetric part of is considered for the image compression method . in the next subsection ,another image compression technique is provided , for which reconstructing the compressed image demands fewer storage spaces .the previous subsection introduced an image compression scheme dealing with almost half of an image . by that technique ,more than half of the elements of the image remained unchanged .this may be of some usages and advantages .compressed images obtained by the presented method have a high quality .however , compressing of just half of the original image may cause the method to lose its reliability as compared to some other image compression schemes . to tackle this problem , a new method is presented here about both symmetric and skew - symmetric parts of the matrix in order to compress the image all over .this new image compression technique is found to be of a remarkably high reliability .as previously mentioned , every matrix equals the sum of its symmetric and skew - symmetric parts . to establish the new method , with the definition of , borne in mind, is used to represent the skew - symmetric part of .the matrix can be written as follows : with respect to the method described in the previous subsection , for an integer , set through ( [ sym ] ) and ( [ ssym ] ) , the compressed image will be as in the case of reserving the matrix , storage spaces are required for saving the matrix . 
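for the scheme that uses both parts, a hedged numpy sketch is given below. it assumes k is even so that the +/- eigenvalue pairs of the skew-symmetric part are kept together, and it is meant only to illustrate the idea, not to reproduce the storage layout discussed above.

    import numpy as np

    def compress_both_parts(x, k):
        s = 0.5 * (x + x.T)                       # symmetric part
        a = 0.5 * (x - x.T)                       # skew-symmetric part
        # symmetric part: keep the k dominant eigenvalues
        vals, vecs = np.linalg.eigh(s)
        idx = np.argsort(np.abs(vals))[::-1][:k]
        s_k = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T
        # skew-symmetric part: 1j*a is hermitian, so eigh applies again; the
        # reconstruction -1j*v diag(l) v^h is real when whole +/- pairs are kept
        vals_a, vecs_a = np.linalg.eigh(1j * a)
        idx_a = np.argsort(np.abs(vals_a))[::-1][:k]
        a_k = np.real(-1j * (vecs_a[:, idx_a] * vals_a[idx_a]) @ vecs_a[:, idx_a].conj().T)
        return s_k + a_k

    x = np.random.default_rng(3).uniform(0, 255, size=(64, 64))
    print(np.abs(x - compress_both_parts(x, k=8)).mean())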
as a result ,the total storage requirement for is .in this section , the validity and the influence of the proposed image compression method are examined .let us note that the ideas presented in this paper , can readily be used to establish a block - based compression scheme .this scheme concerns dividing of an image into non - overlapping blocks and compressing of each block .the peak signal to noise ratio ( psnr ) is calculated to measure the quality of the compressed image . in the case of gray scale images of size ,whose pixels are represented with 8 bits , psnr is computed as follows : where and refer to the elements of the original and the compressed images respectively . in the above relationship, mse stands for the mean square error between the original image and the compressed image pixels .in addition , compression ratio ( cr ) may be calculated as an important index to evaluate how much of an image is compressed .cr is the amount of bits in the original image divided by the amount of bits in the compressed image ; that is , for the sake of simplicity , method # 1 and method # 2 are branded as image compression techniques which use the symmetric and the skew - symmetric parts of the matrix representing the image , in the image compression schemes , respectively .in addition , let method # 3 denominate the image compression technique described in subsection [ sb2 ] and let method # 4 stand for the image compression method using svd .consequently , the following relationships emerge for cr of these methods : in the experiments conducted in this study , three gray scale images were considered , including ( a ) lena , ( b ) baboon and ( c ) gold hill presented in fig [ fig : images1 ] .the psnr results are shown in tables [ table : lenap ] , [ table : baboonp ] and [ table : goldhillp ] , for some integer values of , for images lena , baboon and gold hill , respectively . also , the cr results are given in table [ table : goldhillc ] for a image .the results obtained by these techniques are compared to those achieved by method # 4 in our tables .furthermore , figures [ fig : compressed50 ] and [ fig : compressed100 ] show the compressed image lena obtained by the proposed techniques as well as image compression method using svd , method # 4 , for and , respectively ..psnr results for lena [ cols="^,^,^,^,^ " , ] [ table : goldhillc ] . ] . ]in this paper , the eigenvalue decomposition of symmetric and skew - symmetric matrices , as two important kinds of normal matrices , and their properties were applied to obtain image compression schemes .the proposed method is straightforward and uncomplicated ones requiring clear and fewer computations as compared to some exiting methods .this image compression method is the first one ever introduced and concerns the symmetric part of an image , which can also be applicable in the case of the skew - symmetric part of the image .the method is capable of keeping half ( or some selected segments ) of an image unchanged , and this feature may be of some usages .however , as observed in this study , just half of the original image could be changed , which causes the compressed image to lose its reliability .this is what makes the technique different from some other image compression techniques .hence , the second image compression scheme using both symmetric and skew - symmetric parts of the original image was proposed .the experimental results show the high reliability of this method . 
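the two figures of merit used in the experiments can be computed as below. the random test images are placeholders, and the storage counts passed to the ratio are left to the caller, since they depend on which of the four methods is being evaluated.

    import numpy as np

    def psnr(original, compressed, peak=255.0):
        # peak signal-to-noise ratio in db; peak = 255 for 8-bit grey-scale pixels
        mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def compression_ratio(original_count, compressed_count):
        # amount of data in the original image divided by that in the compressed image
        return original_count / compressed_count

    rng = np.random.default_rng(4)
    a = rng.integers(0, 256, size=(64, 64)).astype(float)
    b = np.clip(a + rng.normal(0.0, 2.0, size=a.shape), 0, 255)
    print(round(psnr(a, b), 2), compression_ratio(64 * 64, 8 * (64 + 1)))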
finally , it is to be pointed out that the proposed method can be used to devise block - based compression techniques .j. cullum , r.a .willoughby , computing eigenvalues of very large symmetric matrices an implementation of a lanczos algorithm with no re - orthogonalization , journal of computational physics , 44 ( 1981 ) 329358 .jia , yin - jie , peng - fei xu , xu - ming pei ., an investigation of image compression using block singular value decomposition . ,communications and information processing .springer berlin heidelberg , ( 2012 ) .723 - 731 .tian , w. luo , l.z .liao , an investigation into using singular value decomposition as a method of image compression . , proceedings of the fourth international conference on machine learning and cybernetics , pp . 5200-5204 ( 2005 ) .jia , yin - jie , peng - fei xu , xu - ming pei ., an investigation of image compression using block singular value decomposition . ,communications and information processing .springer berlin heidelberg , ( 2012 ) .723 - 731 .
in this paper , we present methods for image compression on the basis of the eigenvalue decomposition of normal matrices . the proposed methods are convenient and self - explanatory , requiring fewer and easier computations as compared to some existing methods . through the proposed techniques , the image is transformed into the space of normal matrices . then , the properties of the spectral decomposition are used to obtain compressed images . experimental results are provided to illustrate the validity of the methods . _ keywords : _ image compression , transform , normal matrix , eigenvalue .
we can study the effect of electromagnetic fields on fluids only if we know the stress induced due to the fields in the fluids . despite its importance , this topicis glossed over in most works on the otherwise well - established subjects of fluid mechanics and classical electrodynamics . the resultant force and torque acting on the body as a wholeare calculated but not the density of body force which affects flow and deformation of materials .helmholtz and korteweg first calculated the body force density in a newtonian dielectric fluid in the presence of an electric field , in the late nineteenth century .however , their analysis was criticized by larmor , livens , einstein and laub , who favoured a different expression proposed by lord kelvin .it was later on shown that the two formulations are not contradictory when used to calculate the force on the body as whole and that they can be viewed as equivalent if we interpret the pressure terms appropriately .we refer to bobbio s treatise for a detailed account of the controversy , the experimental tests of the formulas and their eventual reconciliation .the few published works on the topic like the text books of landau and lifshitz , panofsky and phillips and even bobbio treat fluids and elastic solids separately .further , they restrict themselves to electrically and magnetically linear materials alone . in this paper , we develop an expression for stress due to external electromagnetic fields for materials with simultaneous fluid and elastic properties and which may have non - linear electric or magnetic properties .our analysis is thus able to cater to dielectric viscoelastic fluids and ferro - fluids as well .we also extend rosensweig s treatment , by allowing ferro - fluids to have elastic properties .let us first see why the problem of finding stress due to electric or magnetic fields inside materials is a subtle one while that of calculating forces on torques on the body as a whole is so straightforward .the standard approach in generalizing a collection of discrete charges to a continuous charge distribution is to replace the charges themselves with a suitable density function and sums by integrals .thus , the expression for force , ( is the electric field at the location of the charge . ) on a body on discrete charges in an electric field , is replaced with , when the body is treated as a continuum of charge , the integral being over the volume of the body .the integral can be written as where is the force density in the body due to an external electric field .it can be shown that that the same expression for force density is valid even inside the body . if instead , the body were made up of discrete dipoles instead of free charges , then the force on the body as a whole would be written as where is the dipole moment of the point dipole and is the electric field at its position .if the body is now approximated as a continuous distribution of dipoles with polarization , then the force on the whole body is written as while this is a correct expression for force on the body as a whole , it is not valid if applied to a volume element inside the material . in other words , is not a correct expression for density of force in a continuous distribution of dipoles although is the density of force in the analogous situation for monopoles .we shall now examine why it is so .consider two bodies and that are composed of charges and dipoles respectively .( the subscripts of quantities indicate their composition . 
)let and be volume elements of and respectively .the volume elements are small compared to dimensions of the body but big enough to have a large number of charges or dipoles in them .the forces and on and respectively due to the surrounding body are where is the number of charges or dipoles inside the volume element under consideration . in both these expressions , is the macroscopic electric field at the position of charge or dipole .it is the average value of the microscopic electric field at that location .that is , where denotes the spatial average of the enclosed quantity .the microscopic field can be written as where is the microscopic field due to the charges or dipole outside the volume element and is the field due to charges or dipoles inside the volume element other than the charge or dipole .for the volume element of point charges , where is the microscopic electric field at the position of charge due to charge inside .therefore , newton s third law makes the second sum on the right hand side of the above equation zero . is thus due to charges outside alone for which the standard approach of replacing sum by integral and discrete charge by charge density is valid .therefore , continues to be the volume force density inside the body .if the same analysis were to be done for the volume element of point dipoles , it can be shown that the contribution of dipoles inside is not zero .in fact , the contribution depends on the shape of .that is the reason why , also called kelvin s formula , is not a valid form for force density in a dielectric material .we would have got the same results for a continuous distribution of magnetic monopoles , if they had existed , and magnetic dipoles .that is is not the correct form of force density of a volume element in a material with magnetization in a magnetic field .the goal of this paper is to develop an expression for stress inside a material with both viscous and elastic properties in the presence of an external electric or magnetic field , allowing the materials to have non - linear electric and magnetic properties .we demonstrate that by making some fairly general assumptions about thermodynamic potentials , it is possible to develop a theory of stresses for materials with fluid and elastic properties .we check the correctness of our results by showing that they reduce to the expressions developed in earlier works when the material is a classical fluid or solid . 
to our knowledge, there is no theory of electromagnetic stresses in general continua with simultaneous fluid and elastic properties .since we are using techniques of equilibrium thermodynamics for our analysis , we will not be able to get results related to dissipative phenomena like viscosity .deriving an expression for viscosity for even a simple case of a gas requires full machinery of kinetic theory .developing a theory of electro and magneto viscous effects is a much harder problem and we shall not attempt to solve it in this paper .we begin our analysis in section ( [ sec : thermod ] ) by reviewing expressions for the thermodynamic free energy of continua in electric and magnetic fields .after pointing out the relation between stress and free energy in section ( [ sec : dielectric ] ) , we obtain a general relation for stress in a dielectric material in presence of an electric field .we check its correctness by showing that it reduces to known expressions for stress in newtonian fluids and elastic solids .the framework for deriving electric stress is useful for deriving magnetic stress in materials that are not permanently magnetized .section ( [ sec : magnetic ] ) mentions the expression for stress in a continuum in presence of a static magnetic field .we then point out the assumptions in derivations of ( [ sec : dielectric ] ) and ( [ sec : magnetic ] ) that render the expressions of stress unsuitable for ferro - fluids and propose the one that takes into account the permanent magnetization of ferro particles .we derive expressions for ponderomotive forces in section ( [ sec : ponder ] ) from the expressions for stress obtained in previous sections .most of our analysis rests on framework scattered in the classic works of landau and lifshitz on electrodynamics and elasticity generalizing it for continua of arbitrary nature .electromagnetic fields alter thermodynamics of materials only if they are able to penetrate in their bulk . conducting materials have plenty of free charges to shield their interiors from external static electric fields .therefore , the effect of external static electric fields are restricted to their surface alone , in the form of surface stresses .the situation in dielectrics is different - a paucity of free charges allows an external static field to penetrate throughout its interior polarizing its molecules . the external field has to do work to polarize a dielectric .this is akin to work done by an external agency in deforming a body .the same argument applies to a body exposed to a magnetic field . unlike static electric fields that are shielded in conductors ,magnetic fields always penetrate in bodies , magnetizing them .the nature of the response depends on whether a body is diamagnetic , paramagnetic or ferromagnetic . in all the cases , magnetic fields have to do work to magnetize them and therefore the thermodynamics of continua is always affected by a magnetic field .we shall develop thermodynamic relations for materials exposed to static electromagnetic fields in this section . at a molecular level ,electric and magnetic fields deform matter for which the fields have to do work .the material and the field together form a thermodynamic system .the work done on it is of the form where is an intensive quantity and a related extensive quantity denotes the possibly inexact differential of a quantity . ] . 
in the case of a dielectric material in a static electric field , the intensive quantity is the electric field and the extensive quantity is the total dipole moment , being the polarization and being the volume of the material . in the case of a material getting magnetized , the intensive quantity is the magnetizing field and the extensive quantity is the total magnetic moment , being the polarization and being the volume of the material .the corresponding work amounts are and respectively .we added a subscript because this is only one portion of the work .the other portion of the work is required to increment the fields themselves to achieve a change in polarization or magnetization .they are and respectively , where is the permittivity of free space and is the permeability of free space respectively and that of a magnetic field is . ] therefore , the total work needed to polarize and magnetize a material , at constant volume , are we have derived these relations for linear materials .we will now show that they are true for any material .imagine a dielectric immersed in an electric field .let the electric field be because of a charge density .let the electric field be increased slightly by changing the charge density by an amount .work done to accomplish this change is where is the electric potential . since , the charge density is localized then the volume of integration can be taken as large as we like .we do so and also convert the first integral on the right hand side to a surface integral .the first term then makes a vanishingly small contribution to the total and the work done in polarizing a material can be written as let a material be magnetized by immersing it in a magnetic field .the magnetic field can be assumed to be created because of a current density .let the magnetic field be increased slightly by changing the current density .we further assume that the rate of increase of current is so small that at all stages .the source of current has to do an additional work while increasing the amount of current density in order to overcome the opposition of the induced electromotive force .if is the induced emf , then the sources will have to do an additional work at the rate , where is the magnetic flux and the dot over head denotes total time derivative .the amount of work needed is . if is the cross sectional area of the current , then but , therefore , since we assumed the current to be increased at an infinitesimally slow rate , there are no displacement currents and . where we have used the vector identity .we once again assume that the current density is localized and therefore converting the second integral on the right hand side of equation ( [ thermod : e6 ] ) into a surface integral results in an infinitesimally small quantity .the work done in magnetizing a material is therefore , a change in the helmholtz free energy of a system is equal to the work done by the system in an isothermal process , which in turn is related to stresses in the continuum. we will show how stress is related to the helmholtz free energy .let us consider the example of an ideal gas .the change in its helmholtz free energy , is given by .using the first and the second laws of thermodynamics we have .therefore , , which under isothermal conditions means . 
in this simple system , is the isotropic portion of the stress and is related to the isotropic strain .thus , we can get is we know change in helmholtz free energy and volume .therefore , a first step toward getting an expression for stress is to find the helmholtz free energy . under isothermal conditions ,the first law of thermodynamics is where is the total free energy , is the absolute temperature , is the total entropy and is the work done on the system .first law of thermodynamics for polarizable and magnetizable media is where is the mechanical work done on the system .if , and are internal energy , mechanical work and entropy of the media _ per unit volume _, first law of thermodynamics for polarizable and magnetizable media is the mechanical work done on a material is where is the stress tensor and is the strain tensor in the medium .further , with this substitution , all quantities in equations ( [ fe:4 ] ) and ( [ fe:5 ] ) become exact differentials allowing us to replace with . if is the helmholtz free energy per unit volume , these relations give change in helmholtz free energy in terms of change in and .the field s source is free charges alone while the field s source is all currents . in an experiment, we can control the total charge and free currents .therefore , it is convenient to express free energy in terms of , whose source is all charges - free and bound , and , whose source is free currents .we therefore introduce associated helmholtz free energy function for polarizable media as and for magnetizable media as .equations ( [ fe:6 ] ) and ( [ fe:7 ] ) therefore become if is the deviatoric stress , , where is the hydrostatic pressure .therefore we have , the quantity is the dilatation of the material during deformation . therefore the thermodynamic potential of a polarizable ( magnetizable ) medium is thus , a function of , , and ( ) .equivalently , it can be considered a function of , , and ( ) , where is the mass density of the medium .we will now calculate the stress tensor in a polarizable medium .we consider a small portion of the material and find out the work done by the portion in a deformation in presence of an electric field .the portion is small enough to approximate the field to be uniform throughout its extent .we emphasize that through this assumption we are not ruling out non - uniform fields but only insisting that the portion be small enough to ignore variations in it .since a sufficiently small portion of a material can be considered to be plane , the volume element under consideration can be assumed to be in form of a rectangular slab of height .let it be subjected to a virtual displacement which need not be parallel to the normal to the surface .the virtual work done by the medium per unit area in this deformation is , where is the stress _ on _ the portion .if is the stress _ due to _ the portion _ on _ the medium , then . therefore , the virtual work done by the medium on the portion is .further , since both and are symmetric , the virtual work can also be written as . the change in helmholtzfree energy during the deformation is per unit surface area .if we assume the deformation to be isothermal , change in height of the slab is [ dielectric : f1 ] the geometry of the problem is described in figure 2 . 
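returning to the ideal-gas illustration used earlier in this section, the relation between pressure and the helmholtz free energy can be checked symbolically. the free-energy expression below keeps only the volume-dependent term and is an assumption made solely for this check.

    import sympy as sp

    n, R, T, V = sp.symbols("n R T V", positive=True)
    F = -n * R * T * sp.log(V)     # helmholtz free energy of an ideal gas, up to V-independent terms
    p = -sp.diff(F, V)             # pressure from p = -(dF/dV) at constant temperature
    print(sp.simplify(p))          # n*R*T/V, i.e. the ideal-gas law is recovered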
for an isothermal variation depart from the convention in thermodynamics , to indicate variables held constant as subscripts to partial derivatives , to make our equations appear neater .we shall also use the traditional notation for partial derivatives .we will now get expressions for each term on the right hand side of equation ( [ dielectric:3 ] ) . 1 .if is the helmholtz free energy in absence of electric field , , where is the permittivity tensor .permittivity is known to be a function of mass density of a material , the dependence being given by clausius - mossotti relation .electric field is _ usually _ independent of mass density of the material .however , that is not so if the material has a pronounced density stratification like a fluid heated from above .if and are two elements of such a fluid , at the top and bottom respectively , both having identical volume then will have less number of dipoles than .the electric field inside them , due to matter within their confines too will differ .we point out that although divergence of depends only on the density of free charges , itself is produced by all multipoles .therefore , the last term in equation ( [ dielectric:4 ] ) is absent if the material has a uniform temperature .it is not included in the prior works of bobbio and landau and lifshitz .if the ( or ) axis is assumed to be along the normal and the deformation is uniform , the displacement of a layer of the volume element can be described as where is the vertical distance from the lower surface .since is fixed , and since the strain tensor is always symmetric , the electric field does not depend on strain but permittivity does .this is because , deformation may change the anisotropy of the material , which determines its permittivity .likewise , permittivity and strain tensors do not depend on electric field ] .therefore they can be pulled out of the integral and 3 . from equation ( [ fe:8 ] ), therefore , the last term is just .using equations ( [ dielectric:4 ] ) , ( [ dielectric:9 ] ) and ( [ dielectric:10 ] ) in ( [ dielectric:3 ] ) , we get substituting ( [ dielectric:11 ] ) and ( [ dielectric:2 ] ) in ( [ dielectric:1 ] ) we get we have gathered terms independent of electric field in the first curly bracket of equation ( [ dielectric:12 ] ) , keeping the contribution of electric field to stress in the rest .we still have to find out the expressions for and .a change in density of a layer depends on the change it its height ( or thickness ) , therefore , or , we will now estimate change in electric field due to deformation . consider a volume element at a point .let it undergo a deformation by . as a result ,matter that used to be at now appears at . in a virtual homogeneous deformation ,every volume element carries its potential as the material deforms .therefore , the change in potential at is . since ( see equation ( [ dielectric:5 ] ) ) , since , we have used the assumption that the region is small enough to have almost uniform electric field and therefore it can be pulled out of the gradient operator .equation ( [ dielectric:12 ] ) therefore becomes stress in a polarized viscoelastic material at rest is therefore , we can simplify equation ( [ dielectric:16a ] ) by writing the terms in the first curly bracket as familiar thermodynamic quantities . 
if is the total helmholtz free energy of the substance in absence of electric field and is the helmholtz free energy per unit volume then , where is the volume of the substance , the mass and the density .maxwell relation for pressure in terms of total helmholtz free energy is similarly , the dependence of on strain tensor can be written as , where we have retained only the deviatoric of the strain tensor because the isotropic part is already accounted in hydrostatic pressure of equation ( [ dielectric:16a ] ) .the constant is the shear modulus of the substance .therefore , equation ( [ dielectric:16a ] ) can therefore be written as we will now look at some special cases of ( [ dielectric:16 ] ) , 1 . if there is no matter , terms with pressure , density and strain tensor will not be present .further and equation ( [ dielectric:16 ] ) becomes the maxwell stress tensor for electric field in vacuum . we emphasize that the general expression for stress in a material exposed to static electric field reduces to maxwell stress tensor only when we ignore all material properties .if there is no electric field , all terms in the second and third curly bracket of ( [ dielectric:16 ] ) vanish .further , if the medium is a fluid without elastic properties , , will not depend on and the stress will be thus the stress in a fluid without elastic properties is purely hydrostatic .we do not see viscous terms in ( [ dielectric:17 ] ) because viscosity is a dissipative effect while is obtained from helmholtz free energy which has information only about energy than can be extracted as work .if the material were a solid and if there are no electric fields as well , the stress is it is customary to write the first term of equation ( [ dielectric:18 ] ) in terms of , the bulk modulus as 4 . for a fluid dielectric with isotropic permittivity tensor , is independent of and .if the fluid has a uniform density , equation ( [ dielectric:16 ] ) then becomes this expression matches the one obtained in , after converting to gaussian units , and after accounting for the difference in the interpretation of stress tensor .landau and lifshitz s stress tensor is .5 . for a fluid dielectric with isotropic permittivity tensor andin which the electric field depends on density equation ( [ dielectric:16 ] ) then becomes 6. for a solid dielectric we can assume that , and are independent of . equation ( [ dielectric:16 ] ) now becomes if the solid is isotropic and remains to be so after application of electric field , and ( [ dielectric:20 ] ) simplifies to where is the part of stress tensor that exists even in absence of electric field .this expression matches with the one in if one converts it to gaussian units , assumes the constitutive relation and takes into account that their stress tensor is .7 . for a viscoelastic liquid that is also a linear dielectric with uniform density , order to calculate stress in a magnetic fluid , we continue to use the physical set up used in section ( [ sec : dielectric ] ) of a small slab of viscoelastic liquid subjected to magnetic field .if there are no conduction and displacement currents , ampere s law becomes , making the field conservative .it can then be treated like the electrostatic field of section ( [ sec : dielectric ] ) . 
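the first special case listed above, the vacuum maxwell stress tensor, can be evaluated numerically as a quick sanity check; the field value below is arbitrary.

    import numpy as np

    EPS0 = 8.8541878128e-12           # permittivity of free space, in si units

    def maxwell_stress_vacuum(e):
        # t_ij = eps0 * (e_i * e_j - 0.5 * delta_ij * |e|^2), the tensor that the
        # general expression reduces to when all material terms are ignored
        e = np.asarray(e, dtype=float)
        return EPS0 * (np.outer(e, e) - 0.5 * np.dot(e, e) * np.eye(3))

    t = maxwell_stress_vacuum([1.0e5, 0.0, 0.0])
    print(t)   # tension eps0*e^2/2 along the field and an equal pressure transverse to it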
in order to extend the analysis of section ( [ sec : dielectric ] ) to magnetic fluids ,we need an additional assumption of magnetic permeability being independent of .although the first assumption , of no conduction and displacement currents , is valid in the case of ferro - viscoelastic fluids , the second assumption of field - independent permeability is not .therefore , this analysis is valid only for the single - valued , linear section of the versus curve of ferro - viscoelastic liquid , giving where is the magnetic permeability tensor . we have omitted the term accounting for dependence of on mass density because we are not aware of a situation where it may happen .the expressions derived in sections ( [ sec : dielectric ] ) and ( [ sec : magnetic ] ) are valid only if permittivity and permeability are independent of electric and magnetic fields respectively .ferro - fluids are colloids of permanently magnetized particles .as the applied magnetic field increases from zero , an increasing number of sub - domain magnetic particles align themselves parallel to the field opposing the random thermal motion leading to a magnetization that increased in a non - linear manner . the magnetic susceptibility andtherefore permeability depend on the field .it can not be pulled out of the integral sign .equation ( [ magnetic:1 ] ) should be written as if the elastic effects are negligible , equation ( [ ferro:1 ] ) reduces to if , as is normally for ferro - fluids , since the applied magnetic field is independent of density , where to get the last equation we have used the relation and the fact that does not depend on .if is the specific volume , that is , equation ( [ ferro:5 ] ) can be written as further , and imply , using equations ( [ ferro:5 ] ) and ( [ ferro:6 ] ) in equation ( [ ferro:3 ] ) , we get \right\}\delta_{ij } - h_i b_j\ ] ] this is same as rosensweig s equation ( 4.28 ) except that he calculates , which is related to our stress tensor as .we do not know of electric analogues of ferro fluids ( electro - rheological fluids are analogues of magneto - rheological fluids , not ferro fluids ) . however , there are permanently polarized solids , called ferro - electrics . for such materials ,the stress is old term `` ponderable media '' means media that have weight .ponderomotive force is the one that cause motion or deformation in a ponderable medium . in contemporary terms , it is the density of body force in a material .it is related to the stress tensor as we mention a few familiar special cases of this equation for fluids of various kinds . 1 . for incompressible , newtonian fluids the stress tensoris given by ( [ dielectric:17 ] ) and the force density is note that the force density does not include the dissipative component due to viscosity .2 . for an incompressible ,newtonian , dielectric fluid in presence of static electric field , assuming that the electric field inside the fluid is independent of density , the stress tensor is given by equation ( [ dielectric:19 ] ) and the ponderomotive force is where is the density of free charges in the fluid and is its relative permittivity .in deriving equation ( [ ponder:3 ] ) we used gauss law and the fact that we are dealing with an electrostatic field ( ) , for which . 
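to make the field-dependent magnetization of ferro-fluids discussed above concrete, the sketch below evaluates the integral of mu_0*m over h numerically for a saturating, single-valued m(h) curve. the langevin form used for m(h) is a standard illustrative choice and is not taken from the text, and the parameter values are arbitrary.

    import numpy as np

    MU0 = 4e-7 * np.pi                 # permeability of free space

    def langevin_magnetization(h, m_sat, chi0):
        # a saturating m(h) curve: m = m_sat * (coth(x) - 1/x), with x scaled so
        # that the initial slope equals chi0; used only as an example of a
        # nonlinear magnetization that cannot be pulled out of the field integral
        x = 3.0 * chi0 * h / m_sat
        x = np.where(np.abs(x) < 1e-8, 1e-8, x)    # avoid division by zero at h = 0
        return m_sat * (1.0 / np.tanh(x) - 1.0 / x)

    # pressure-like term: integral of mu0 * m(h') dh' from 0 to h, by the trapezoidal rule
    h = np.linspace(0.0, 5.0e4, 2001)
    m = langevin_magnetization(h, m_sat=3.0e4, chi0=1.0)
    print(MU0 * np.sum(0.5 * (m[1:] + m[:-1]) * np.diff(h)))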
in an ideal , dielectric fluid and the relative permittivity is a function of temperature and the term is significant in a single - phase fluid only if there is a temperature gradient .the third term in equation ( [ ponder:4 ] ) is called the electro - striction term and it is present only when the electric field or or both are non - uniform .the derivative of the relative permittivity with respect to mass density is calculated using the clausius - mossotti relation , .3 . continuing with the same fluid as above but now having a situationin which the electric field is a function of mass density , we have an additional term in equation ( [ ponder:4 ] ) given by we come across such a situation when there is a strong temperature gradient in the fluid resulting in a gradient of dielectric constant .since the electric field depends on and depends on mass density through the clausius - mossotti relation , the electric field is a function of mass density and we have to consider this additional term .we hasten to add that it not necessary for there to be a temperature gradient to have such a situation , a gradient of electric permittivity suffices to give rise to such a situation .the derivation for force density in an incompressible , newtonian , diamagnetic or paramagnetic fluid in presence of a static magnetic field is similar except that we use the maxwell s equations and .we also assume the auxiliary magnetic field , , inside the fluid is independent of density .we get the term is the lorentz force term and it is zero if the fluid is not conducting . is the relative permeability of the fluid . the fourth term in equation ( [ ponder:5 ] )is called the magneto - striction force .it is present only if the magnetic field or or both are non - uniform .the derivative of relative permeability with respect to mass density is calculated using the magnetic analog of the clausius - mossotti relation .several forms of body force density , all equivalent to each other , can be derived for ferro - fluids from equations ( [ ferro:7 ] ) and ( [ ponder:1 ] ) .we refer to for more details .[ ponder : i5 ] if the material is dielectric and viscoelastic , we assume that the permittivity depends on the strain .even though the material was isotropic before applying electric field , it may turn anisotropic as its molecules get polarized and align with the field .the scalar permittivity is then replaced with a second order permittivity tensor . 
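the clausius-mossotti derivative that enters the electro-striction terms above can be verified symbolically. the proportionality constant c below is an assumption standing in for the molecular polarizability factor.

    import sympy as sp

    rho, C, e = sp.symbols("rho C e", positive=True)
    # clausius-mossotti: (eps_r - 1)/(eps_r + 2) = C*rho, solved for eps_r as a function of density
    eps_r = sp.solve(sp.Eq((e - 1) / (e + 2), C * rho), e)[0]
    lhs = rho * sp.diff(eps_r, rho)                 # the factor rho * d(eps_r)/d(rho)
    rhs = (eps_r - 1) * (eps_r + 2) / 3
    print(sp.simplify(lhs - rhs))                   # 0: rho*d(eps_r)/d(rho) = (eps_r - 1)(eps_r + 2)/3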
following landau and lifshitz s treatment of solid dielectrics , we assume that the permittivity tensor is a linear function of the strain tensor and write it as where and are constants indicating rate of change of permittivity with strain .we call them and to differentiate them from and used to describe behavior of solid dielectric .if we assume the material to be incompressible , and for an incompressible material , equation ( [ dielectric:16c ] ) becomes , since and shear modulus are constants , they do not survive in the expression for .the expression for ponderomotive force in a dielectric , viscoelastic fluid is same as that for a dielectric , newtonian fluid .[ ponder : i6 ] the same conclusion follows for a viscoelastic fluid subjected to a magnetic field if we assume that , and being constants , when a fluid is magnetized .time - varying electric fields can penetrate conductors up to a few skin depths that depends on the frequency of the fields and physical parameters of the material like its ohmic conductivity or magnetic permeability .the general problem of response of materials to time - varying fields is quite complicated .however , the results in this paper can be applied for slowly varying fields , that is , the ones that do not significantly radiate . for such fields, the time varying terms of maxwell equations can be ignored . whether a time varying field can be considered quasi - static or not depends on the linear dimension of the materials involved . if is the angular frequency of the fields , the wavelength of corresponding electromagnetic wave is , being the velocity of light in vacuum . if the linear dimension of the materials is much lesser than , for any element of the path of current , there is another within that carries same current in the opposite direction , effectively canceling the effect of current . for power line frequencies ,the value of is a few hundred miles and even for low frequency radio waves , with hz , is of the order of m. thus the _ slowly - varying fields _ or _ quasi - static _approximation is valid for frequencies up to that of radio waves and our results can be applied under those conditions .1 . proof of equation ( [ dielectric:8 ] ) . interchange the indices in the first term , since is a symmetric tensor , 99 bobbio , s. , electrodynamics of materials : forces stresses , and energies in solids and fluids , academic , new york , chapter 4 , ( 2002 ) .landau l. d. and lifshitz e. m. , electrodynamics of continuous media , pergamon , oxford , ( 1960 ). panofsky w. k. h. and phillips m. , classical electricity and magnetism , 2nd edition , dover publications inc ., new york , ( 1962 ) .rosensweig r. e. , ferrohydrodynamics , dover publications , mineola , new york , ( 1997 ) .reitz , j. r. , milford , f. j. and christy , r. w. , foundations of electromagnetic theory , 3rd edition , narosa publishing house , new delhi , ( 1990 ) .jackson , j. d. , classical electrodynamics , 3rd edition , john wiley & sons inc , new york , ( 1999 ) .chapman , s. and cowling , t. g. , mathematical theory of non - uniform gases , 3rd edition , cambridge university press , cambridge , ( 1970 ) .landau l. d. and lifshitz e. m. , theory of elasticity , pergamon , oxford , ( 1960 ) .joshi amey , radhakrishna m.c . andrudraiah n , rayleigh - taylor instability in dielectric fluids , physics of fluids , volume 22 , issue 6 , ( 2010 ) .
a clear understanding of body force densities due to external electromagnetic fields is necessary to study the flow and deformation of materials exposed to the fields . in this paper , we derive an expression for stress in continua with viscous and elastic properties in the presence of external , static electric or magnetic fields . our derivation follows from fundamental thermodynamic principles . we demonstrate the soundness of our results by showing that they reduce to known expressions for newtonian fluids and elastic solids . we point out the extra care to be taken while applying these techniques to permanently polarized or magnetized materials and derive an expression for stress in a ferro - fluid . lastly , we derive expressions for ponderomotive forces in several situations of interest to fluid dynamics and rheology .
while sports are often analogized to a wide array of other arenas of human activity particularly war well known story lines and elements of sports are conversely invoked to describe other spheres .each game generates a probablistic , rule - based story , and the stories of games provide a range of motifs which map onto narratives found across the human experience : dominant , one - sided performances ; back - and - forth struggles ; underdog upsets ; and improbable comebacks . as fans , people enjoy watching suspenseful sporting events unscripted stories and following the fortunes of their favorite players and teams . despite the inherent story - telling nature of sporting contests and notwithstanding the vast statistical analyses surrounding professional sports including the many observations of and departures from randomness the ecology of game stories remains a largely unexplored , data - rich area .we are interested in a number of basic questions such as whether the game stories of a sport form a spectrum or a set of relatively isolated clusters , how well models such as random walks fare in reproducing the specific shapes of real game stories , whether or not these stories are compelling to fans , and how different sports compare in the stories afforded by their various rule sets . here , we focus on australian rules football , a high skills game originating in the mid 1800s .we describe australian rules football in brief and then move on to extracting and evaluating the sport s possible game stories .early on , the game evolved into a winter sport quite distinct from other codes such as soccer or rugby while bearing some similarity to gaelic football. played as state - level competitions for most of the 1900s with the victorian football league ( vfl ) being most prominent , a national competition emerged in the 1980s with the australian football league ( afl ) becoming a formal entity in 1990 . the afl is currently constituted by 18 teams located in five of australia s states .games run over four quarters , each lasting around 30 minutes ( including stoppage time ) , and teams are each comprised of 18 on - field players . games ( or matches ) are played on large ovals typically used for cricket in the summer and of variable size ( generally 135 to 185 meters in length ) .the ball is oblong and may be kicked or handballed ( an action where the ball is punched off one hand with the closed fist of the other ) but not thrown . marking ( cleanly catching a kicked ball ) is a central feature of the game , and the afl is well known for producing many spectacular marks and kicks for goals .the object of the sport is to kick goals , with the customary standard of highest score wins ( ties are relatively rare but possible ) .scores may be 6 points or 1 point as follows , some minor details aside .each end of the ground has four tall posts . kicking the ball ( untouched ) through the central two posts results in a ` goal ' or 6 points .if the ball is touched or goes through either of the outer two sets of posts , then the score is a ` behind ' or 1 point .final scores are thus a combination of goals ( 6 ) and behinds ( 1 ) and on average tally around 100 per team .poor conditions or poor play may lead to scores below 50 , while scores above 200 are achievable in the case of a ` thrashing ' ( the record high and low scores are 239 and 1 ) .wins are worth 4 points , ties 2 points , and losses 0 . 
Of interest to us here is that the AFL provides an excellent test case for extracting and describing the game story space of a professional sport. We downloaded 1,310 AFL game scoring progressions from http://afltables.com (ranging from the 2008 season to midway through the 2014 season). We extracted the scoring dynamics of each game down to second-level resolution, with the possible events at each second being (1) a goal for either team, (2) a behind for either team, or (3) no score. Each game thus affords a 'worm' tracking the score differential between the two teams. We will call these worms 'game stories', and we provide an example in Fig. [fig:sog.example_worm]. The game story shows that Geelong pulled away from Hawthorn, their great rival over the preceding decade, towards the end of a close, back-and-forth game. Each game story provides a rich representation of a game's flow and, at a glance, quickly indicates key aspects such as largest lead, number of lead changes, momentum swings, and one-sidedness. Game stories also allow for a straightforward quantitative comparison between any pair of matches (we give a minimal construction sketch below). For the game story ecology we study here, an important aspect of the AFL is that rankings (referred to as the ladder) depend first on the number of wins (and ties), and then on the percentage of 'points for' versus 'points against'. Teams are therefore generally motivated to score as heavily as possible while still factoring in an increased potential for injury. We order the paper as follows. In Sec. [sec:sog.basics], we first present a series of basic observations about the statistics of AFL games. We include an analysis of conditional probabilities of winning as a function of lead size. We show through a general comparison to random walks that AFL games are collectively more diffusive than simple random walks, leading to a biased random walk null model based on the skill differential between teams. We then introduce an ensemble of 100 sets of 1,310 biased random walk game stories, which we use throughout the remainder of the paper. In Secs. [sec:sog.gameshapes] and [sec:sog.gamemotifs], we demonstrate that game stories form a spectrum rather than distinct clusters, and we apply coarse-graining to elucidate game story motifs at two levels of resolution. We then provide a detailed comparison between real game motifs and the smaller taxonomy of motifs generated by our biased random walk null model. We explore the possibility of predicting final game margins in Sec. [sec:sog.prediction]. We offer closing thoughts and propose further avenues of analysis in Sec. [sec:sog.conclusion]. While every AFL game officially comprises four 20-minute quarters of playing time, the inclusion of stoppage time means there is no set quarter or game length, resulting in some minor complications for our analysis. We see an approximately Gaussian distribution of game lengths, with the average game lasting a little over two hours at 122 minutes; 96% of games run for around 112 to 132 minutes. In comparing AFL games, we must therefore accommodate different game lengths. A range of possible approaches includes dilation, truncation, and extension (by holding the final score constant), and we will explain and argue for the latter in Sec. [sec:sog.gameshapes]. In post-game discussions, commentators will often focus on the natural chapters of a given sport.
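The game story construction just described is easy to reproduce from event-level data. The sketch below assumes a hypothetical list of (second, team, event) records; the 6-point goal and 1-point behind values follow the scoring rules given earlier, while the record format, function names, and the final-margin extension step are illustrative rather than taken from the paper's actual pipeline.

```python
# Minimal sketch: build a 'game story' (score-differential worm) from
# second-resolution scoring events. Records look like (t_seconds, team, kind)
# with team in {"home", "away"} and kind in {"goal", "behind"}; these names
# are assumptions, not the paper's data format.

POINTS = {"goal": 6, "behind": 1}

def game_story(events, game_length_seconds):
    """Return a list whose entry t is (home score - away score) at second t."""
    deltas = [0] * (game_length_seconds + 1)
    for t, team, kind in events:
        sign = 1 if team == "home" else -1
        deltas[t] += sign * POINTS[kind]
    story, running = [], 0
    for d in deltas:
        running += d
        story.append(running)
    return story

def extend(story, target_length_seconds):
    """Hold the final margin constant so games of different lengths align."""
    return story + [story[-1]] * (target_length_seconds + 1 - len(story))

# Example: a home goal at 95 s and an away behind at 300 s of a 600 s game.
worm = game_story([(95, "home", "goal"), (300, "away", "behind")], 600)
print(worm[94], worm[95], worm[300])   # 0 6 5
```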
For quarter-based games, matches will sometimes be described as 'a game of quarters' or 'a tale of two halves.' For the AFL, we find that scoring does not, on average, vary greatly as the game progresses from quarter to quarter (we will, however, observe interesting quarter-scale motifs later on). For our game database, we find there is slightly more scoring in the second half of the game (46.96 versus 44.91 points), with teams scoring one more point, on average, in the fourth quarter than in the first quarter (23.48 versus 22.22). This minor increase may be due to a heightened sense of the importance of each point as game time begins to run out, the fatiguing of defensive players, or a consequence of having 'learned an opponent'. [Fig. [fig:sog.conditional_scoring] caption: probability of scoring next as a function of current lead size, with leads binned in 6-point intervals (-72, -71 to -66, ..., -6 to -1, 1 to 6, 7 to 12, ..., 72); as for most sports, the probability of scoring next increases approximately linearly with the current lead.] In Fig. [fig:sog.conditional_scoring], we show that, as for a number of other sports, the probability of scoring next (either a goal or behind) at any point in a game increases linearly as a function of the current lead size (the National Basketball Association is a clear exception). This reflects a kind of momentum gain within games, and could be captured by a simple biased model with scoring probability linearly tied to the current lead. Other studies have proposed this linearity to be the result of a heterogeneous skill model, and, as we describe in the following section, we use a modification of such an approach. We next examine the conditional probability of winning given a lead of size $\ell$ at a time point $t$ in a game. We consider four example time points, the end of each of the first three quarters and the point with 10 minutes left in game time, and plot the results in Fig. [fig:sog.conditional_winning_quarters]. We fit a sigmoid curve (see caption) to each conditional probability. As expected, we immediately see an increase in winning probability for a fixed lead as the game progresses. These curves could be referenced to give a rough indication of an unfolding game's likely outcome and may be used to generate a range of statistics. As an example, we define likely victory as the win probability exceeding a fixed high threshold and find the approximate corresponding lead sizes at the four time points to be 32, 27, 20, and 11. Losing a game after holding any of these leads might be viewed as 'snatching defeat from the jaws of victory.' Similarly, if we define close games as those where the win probability stays below a more modest threshold, we find the corresponding approximate lead sizes to be 6, 5, 4, and 2. These leads could function in the same way as the save statistic in baseball, i.e., to acknowledge when a pitcher performs well enough in a close game to help ensure their team's victory. Expanding beyond the AFL, such probability thresholds for likely victory or uncertain outcome may be modified to apply to any sport, and could be greatly refined using detailed information such as recent performances, stage of a season, and weather conditions. [Fig. [fig:sog.conditional_winning_quarters] caption: conditional probability of winning given a lead, at the end of the first three quarters (a-c) and with 10 minutes to go in the game (d); bins aggregate every 6 points as in Fig. [fig:sog.conditional_scoring]; the dark blue curve in each panel is a sigmoid fit.]
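To make the conditional win-probability curves concrete, here is a rough sketch of how they can be estimated from a collection of game stories and fitted with a sigmoid. The one-parameter logistic form, the 6-point bin width, and the use of scipy's curve_fit are assumptions for illustration; the paper's exact fitting choices may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def win_probability_by_lead(stories, t, bin_width=6):
    """Empirical P(home wins | home lead at second t), with leads binned.

    `stories` is a list of score-differential worms (home minus away);
    the final entry of each worm decides the winner.
    """
    leads, wins = [], []
    for story in stories:
        if t < len(story):
            leads.append(story[t])
            wins.append(1.0 if story[-1] > 0 else 0.0)
    leads, wins = np.asarray(leads), np.asarray(wins)
    bins = np.round(leads / bin_width) * bin_width
    centers = np.unique(bins)
    probs = np.array([wins[bins == c].mean() for c in centers])
    return centers, probs

def sigmoid(lead, c):
    # Assumed one-parameter logistic form; the paper's fitted form may differ.
    return 1.0 / (1.0 + np.exp(-lead / c))

# Usage (hypothetical): curve near the end of the third quarter.
# centers, probs = win_probability_by_lead(all_stories, t=3 * 30 * 60)
# (c_fit,), _ = curve_fit(sigmoid, centers, probs, p0=[10.0])
```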
To predict the final margin of a game in progress, we use an analog-based method: at time $t$, we predict the final margin $\hat{m}(t)$, using $m$ minutes of memory and $N$ analog games, as $\hat{m}(t) = \frac{1}{N} \sum_{j \in J_N} g_j(T_j)$, where $J_N$ is the set of indices for the $N$ games closest to the current game over the time span $[t-m, t]$, $g_j$ is the game story of analog game $j$, and $T_j$ is the final second of game $j$. [Fig. [fig:sog.prediction_example] caption: we start with a game story (red curve) for which we know, for this example, the first 60 minutes; we find the closest game stories based on matching over the time period 45 to 60 minutes (the memory window) and show these as gray curves; the horizontal blue curve indicates the average final margin of these analog games.] [Fig. [fig:sog.prediction_performance] caption: prediction accuracy using analogs with two low-memory settings (blue and green curves), compared with the naive model of assuming that the current leader will ultimately win (red curve).] For an example demonstration, in Fig. [fig:sog.prediction_example], we attempt to predict the outcome of an example game story given knowledge of its first 60 minutes (red curve), by finding the average final margin of the 50 closest games over the interval 45 to 60 minutes (shaded gray region). Most broadly, we see that our predictor here would correctly call the winning team. At a more detailed level, the average final margin of the analog games slightly underestimates the final margin of the game of interest, and the range of outcomes for the 50 analog games is broad, with final margins spanning from around -40 to 90 points. Having defined our prediction method, we now systematically test its performance after 30, 60, and 90 minutes have elapsed in a game currently under way. In aiming to find the best combination of memory and analog number, $m$ and $N$, we use Eq. ([eq:sog.prediction]) to predict the eventual winner of all 1,310 AFL games in our data set at these time points. First, as should be expected, the further a game has progressed, the better our prediction. More interestingly, in Fig. [fig:sog.memory_vs_analogs] we see that for all three time points, increasing the number of analogs elevates the prediction accuracy, while increasing the memory has little and sometimes the opposite effect, especially when few analogs are used. The current score differential serves as a stronger indicator of the final outcome than the whole game story shape unfolded so far. The recent change in scores, the momentum, is also informative, but to a far lesser extent than the simple difference in scores at time $t$. Based on Fig. [fig:sog.memory_vs_analogs], we proceed with a fixed number of analogs and two examples of low memory. We compare with the naive model that, at any time, predicts the winner as being the current leader. We see in Fig. [fig:sog.prediction_performance] that there is essentially no difference in prediction performance between the two methods. Thus, memory does not appear to play a necessary role in prediction for AFL games. Of interest going forward will be the extent to which other sports show the same behavior. For predicting the final score, we also observe that simple linear extrapolation performs well on the entire set of AFL games (not shown). Nevertheless, while we have thus far found no compelling evidence for using game stories in prediction, nuanced analyses incorporating game stories for the AFL and other professional sports may yet yield substantive improvements over these simple predictive models. Overall, we find that the sport of Australian rules football presents a continuum of game types ranging from dominant blowouts to last minute, major comebacks.
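The analog-based predictor described above reduces to a nearest-neighbour average. The sketch below uses Euclidean distance over the memory window, which is an assumption; the analog count and memory length correspond to the $N$ and $m$ discussed in the text.

```python
import numpy as np

def predict_final_margin(current, t, library, n_analogs=50, memory_minutes=10):
    """Average the final margins of the n_analogs library games whose stories
    are closest to `current` over the window [t - memory, t].
    Euclidean distance over the window is an assumption."""
    window = slice(max(0, t - memory_minutes * 60), t + 1)
    segment = np.asarray(current[window])
    candidates = []
    for story in library:
        if len(story) <= t:
            continue  # skip library games shorter than the elapsed time
        d = np.linalg.norm(np.asarray(story[window]) - segment)
        candidates.append((d, story[-1]))  # (distance, final margin)
    candidates.sort(key=lambda pair: pair[0])
    return float(np.mean([final for _, final in candidates[:n_analogs]]))
```

For comparison, the naive baseline at time t simply takes the sign of current[t], i.e. predicts that the current leader will ultimately win.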
consequently , and rather than uncovering an optimal number of game motifs , we instead apply coarse - graining to find a varying number of motifs depending on the degree of resolution desired .we further find that ( 1 ) a biased random walk affords a reasonable null model for afl game stories ; ( 2 ) the scoring bias distribution may be numerically determined so that the null model produces a distribution of final margins which suitably matches that of real games ; ( 3 ) blowout and major comeback motifs are much more strongly represented in the real game whereas tighter games are generally ( but not entirely ) more favorably produced by a random model ; and ( 4 ) afl game motifs are overall more diverse than those of the random version . our analysis of an entire sport through its game story ecology could naturally be applied to other major sports such as american football , association football ( soccer ) , basketball , and baseball .a cross - sport comparison for any of the above analysis would likely be interesting and informative . and at a macro scale, we could also explore the shapes of win - loss progressions of franchises over years .it is important to reinforce that a priori , we were unclear as to whether there would be distinct clusters of games or a single spectrum , and one might imagine rough theoretical justifications for both .our finding of a spectrum conditions our expectations for other sports , and also provides a stringent , nuanced test for more advanced explanatory mechanisms beyond biased random walks , although we are wary of the potential difficulty involved in establishing a more sophisticated and still defensible mechanism . finally , a potentially valuable future project would be an investigation of the aesthetic quality of both individual games and motifs as rated by fans and neutral individuals .possible sources of data would be ( 1 ) social media posts tagged as being relevant to a specific game , and ( 2 ) information on game - related betting .would true fans rather see a boring blowout with their team on top than witness a close game ? is the final margin the main criterion for an interesting game ?to what extent do large momentum swings engage an audience ?such a study could assist in the implementation of new rules and policies within professional sports .d. p. kiley , a. j. reagan , l. mitchell , c. m. danforth , and p. s. dodds .the game story space of professional sports : australian rules fbootball .draft version of the present paper using pure random walk null model .available online at http://arxiv.org/abs/1507.03886v1 .accesssed january 17 , 2016 , 2015 .b. morris .skeptical football : patriots vs. cardinals and an interactive history of the nfl , 2014 .http://fivethirtyeight.com/features/skeptical-football-patriots-vs-cardinals-and-an-interactive-history-of-the-nfl/ , accessed on june 25 , 2015 .j. bryant and a. a. raney .sports on the screen . in d.zillmann and p. vorderer , editors , _ media entertainment : the psychology of its appeal _ , lea s communication series . , pages 153174 .lawrence erlbaum associates publishers , mahwah , nj , us , 2000 .
sports are spontaneous generators of stories . through skill and chance , the script of each game is dynamically written in real time by players acting out possible trajectories allowed by a sport s rules . by properly characterizing a given sport s ecology of ` game stories ' , we are able to capture the sport s capacity for unfolding interesting narratives , in part by contrasting them with random walks . here , we explore the game story space afforded by a data set of 1,310 australian football league ( afl ) score lines . we find that afl games exhibit a continuous spectrum of stories rather than distinct clusters . we show how coarse - graining reveals identifiable motifs ranging from last minute comeback wins to one - sided blowouts . through an extensive comparison with biased random walks , we show that real afl games deliver a broader array of motifs than null models , and we provide consequent insights into the narrative appeal of real games .
PlanetPack is a software tool that facilitates the detection and characterization of exoplanets from radial velocity (RV) data, as well as basic tasks of long-term dynamical simulations in exoplanetary systems. A detailed description of the numeric algorithms implemented in PlanetPack is given in the paper accompanying its initial 1.0 release. Since then, several updates of the package have been released, offering many bug fixes, minor improvements, and moderate expansions of the functionality. As of this writing, the current downloadable version of PlanetPack is 1.8.1. The current source code, as well as the technical manual, can be downloaded at `http://sourceforge.net/projects/planetpack`. Here we pre-announce the first major update of the package, PlanetPack 2.0, which should be released in the near future. In addition to numerous bug fixes, this update includes a reorganization of large parts of its architecture and several new major algorithms. We now briefly describe the main changes. The following new features of the PlanetPack 2.0 release deserve notice:

1. Multithreading and parallelized computing, increasing the performance of some computationally heavy algorithms. This was achieved by migrating to the new ANSI standard of the C++ language, C++11.

2. Several new models of the Doppler noise can be selected by the user, including, e.g., a regularized model. This regularized model often helps to suppress the non-linearity of the RV curve fit.

3. An optimized computation algorithm for the so-called Keplerian periodogram, equipped with an efficient analytic method of calculating its significance levels (Baluev 2014, in prep.).

4. Fitting of exoplanetary transit lightcurves is now implemented in PlanetPack. This algorithm can fit a single transit lightcurve, as well as a series of transits of the same star to generate transit timing variation (TTV) data. These TTV data can be further analysed, e.g. to reveal possible periodic variations indicating the presence of additional (non-transiting) planets in the system. The transit lightcurve model is based on a stellar limb darkening model from the literature. Also, the transit fitting can be performed taking into account the red (correlated) noise in the photometry data.

Some results of the PlanetPack TTV analysis of the photometric data from the Exoplanet Transit Database, `http://var2.astro.cz/etd/`, will soon be presented in a separate work. Concerning the evolution of the PlanetPack code, we plan to further develop the transit and TTV analysis module and to better integrate it with the Doppler analysis block. We expect that in the rather near future PlanetPack should be able to solve such complicated tasks as the simultaneous fitting of the RV, transit, and TTV data for the same star. This integration should also take into account subtle interplay between the Doppler and photometry measurements, such as the Rossiter-McLaughlin effect.
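PlanetPack's own interfaces are not shown in the text, so the following is only a generic, hypothetical illustration (not PlanetPack code) of the kind of quantity a TTV analysis produces: residuals of observed mid-transit times against a best-fit linear ephemeris, which can then be searched for the periodic variations mentioned above.

```python
import numpy as np

def ttv_residuals(epochs, mid_times):
    """O - C residuals of observed mid-transit times against a linear
    ephemeris T(E) = T0 + P * E fitted by least squares; a periodic signal
    in these residuals may hint at additional, non-transiting companions."""
    epochs = np.asarray(epochs, dtype=float)
    mid_times = np.asarray(mid_times, dtype=float)
    period, t0 = np.polyfit(epochs, mid_times, 1)   # slope, intercept
    return mid_times - (t0 + period * epochs)
```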
We briefly overview the new features of PlanetPack 2.0, the forthcoming update of PlanetPack, a software tool for exoplanet detection and characterization from Doppler radial velocity data. Among other things, this major update brings parallelized computing, new advanced models of the Doppler noise, handling of the so-called Keplerian periodogram, and routines for transit fitting and transit timing variation analysis.
microarray technology is an important tool to monitor gene - expression in bio - medical studies .a common experimental design is to compare two sets of samples with different phenotypes , e.g. diseased and normal tissue , with the goal of discovering differentially expressed genes .statistical testing procedures , such as such as the t - test and significance analysis of microarrays , have been extensively studied and widely used .subsequently , multiple testing corrections are usually applied .a comprehensive review of such approaches are presented in .differential expression analysis based on univariate statistical tests has several well - known limitations .first , due to the low sample size , high dimensionality and the noisy nature of microarray data , individual genes may not meet the threshold for statistical significance after a correction for multiple hypotheses testing .second , the lists of differentially expressed genes discovered from different studies on the same phenotype have little overlap .these limitations motivated the creation of gene set enrichment analysis gsea , which discovers collections of genes , for example , known biological pathways , that show moderate but coordinated differentiation . for example , subramanian and tamayo et al . report that the p53 hypoxial pathway contains many genes that show moderate differentiation between two lung cancer sample groups with different phenotypes . although the genes in the pathway are not individually significant after multiple hypothesis correction , the pathway is . for those familiar with gsea andits output , figure [ fig : pnas2005toy ] shows the gsea results for the p53 hypoxial pathway .gsea also has the advantages of better interpretability and better consistency between the results obtained by different studies on the same phenotype .ackermann and strimmer presented a comprehensive review of different gsea variations in .unfortunately , gsea and related techniques may be ineffective if many individual genes in a phenotype - related gene set have weak discriminative power .a potential solution to this problem is to search for combinations of genes that are highly differentiating even when individual genes are not . for this approach ,the targets are groups of genes that show much stronger discriminative power when combined together .for example , figure [ fig : gb2005toy1 ] illustrates one type of differentially expressed gene combination discovered in .the two genes have weak individual differentiation indicated by the overlapping class symbols on both the two axes .in contrast , these two genes are highly discriminative in a joint manner indicated by the different correlation structure in the two - dimensional plot , i.e. they are correlated along the blue and red dashed line respectively in the triangle and circle class .such a joint differentiation may indicate that the interaction of the two genes is associated with the phenotypes even though the two genes , individually , are not .figure [ fig : gb2005toy2 ] illustrates another type of phenotype - associated gene combination discovered in , usually named differential coexpression , in which the correlation of the two genes are high in one class but much lower in the other class . 
as discussed in , existing multivariate tests such as hotelling s , dempster s t1 are not suitable to detect such ` complementary ' gene combinations because they only screen for differences in the multivariate mean vectors , and thus will favor pairs that consist of genes with strong marginal effects by themselves but not the genes like the four in fig .[ fig : gb2005toy12 ] . for clarification, we use differential gene combination search ( denoted as dgcs ) to refer to the multivariate data analyses that are designed to detect the complementarity of different genes , rather than those designed to model the correlation structure of different genes ( such as hotelling s and dempster s t1 test ) .a variety of other dgcs measures for complementary gene combination search are proposed for gene pairs in addition to the two illustrated in figure [ fig : gb2005toy12 ] .several measures are designed for higher - order gene combination beyond pairs .these approaches can provide biological insights beyond univariate gene analysis as shown in .the limitations of gsea and the capabilities of dgcs motivate a gsea approach using gene combinations in which the score of a gene set is based on both the scores of individual genes in the set and the scores from the gene combinations in which these genes participate .unfortunately , gene combination techniques have not been used with the gsea approach in any significant way because of two key challenges . 1 . * finding a technique to reduce the vast number of gene combinations .* there are exponentially more gene combinations than individual genes , i.e. in addition to the univariate genes , there are gene - pairs , gene - triplets , etc. many variations of gsea are based on a ranked list of the individual genes as illustrated in figure [ fig : pnas2005toy ] . including combinations in the ranked listmight work for size-2 combinations , but would not be feasible for handling gene combinations of larger sizes .furthermore , this explosion in the number of gene combinations negatively impacts false discovery rates .thus , by adding so many gene combinations , we run the risk that neither groups of genes nor individual genes will show statistically significant differentiation .* combining results from the heterogeneous measures used to score different size gene combinations . * furthermore , because a gene can be associated with the phenotype either as an univariate variable or together with other genes as a combination , the importance of a gene set should be based on both the univariate gene scores and the gene combination based scores of its set members . however , different measures have a different nature , scale and significance , and thus are not directly comparable ( to be detailed in section [ sec : score2pv ] ) . indeed , differences exist even between gene combinations of the same measure but of different sizes .therefore , the challenge lies in how to design a framework to combine different measures ( a univariate measure plus one or more dgcs measures ) together within the gsea framework . 
to the best of our knowledge, no existing work has sufficiently addressed these two challenges , although recent work presented in have made initial efforts at adding gsea capabilities to gene combination search .more specifically , two approaches are proposed in to help the study of a specific type of size-2 differential combinations as illustrated in [ fig : gb2005toy2 ] .the experiments in these two studies provide some evidence about the benefits of the integration .however , a more general framework is needed that can also handle other types of size-2 differential combinations as illustrated in figure [ fig : gb2005toy1 ] , higher order differential combinations ( e.g. sdc and the n - statistic ) , and multiple types of differential combinations .* contributions : * in this paper , we propose a general framework to address the above challenges for the effective integration of dgcs and gsea .specific contributions are as follows : 1 . * a gene - combination - to - gene score summarization procedure ( procedure _ a _ ) that is designed to handle the exponentially increasing number of gene combinations . * first , for a given gene combination measure and a certain , the score of a size- combination is partitioned into equal parts which are assigned to each of the genes in the combination .because each gene can participate in up to size- combinations , each gene will be assigned with a score from each of these combinations .secondly , an aggregation statistic , e.g. , maximum absolute value is used to summarize the different scores for a gene .with such a procedure , scores for all the size- gene combinations are summarized to scores for genes .this procedure can effectively retain the length of the ranked list while handling gene combinations of size- ( ) .* a score - to - pvalue transformation and summarization procedure ( procedure _ b _ ) that is designed to integrate the scores contributed ( in procedure ) from different gene combination measures and from gene combinations of different sizes .* the transformation is based on p - values obtained from scores derived from phenotype permutations .such a transformation enables the comparison of scores from different measures ( either univariate or gene combination measures ) and scores from the gene combinations of different sizes .subsequently , among all the p - values of a gene , the best is used as an integrated score of statistical significance .* integration of the above two procedures with gsea * more specifically , after procedures _ a _ and _ b _ , each gene has a single integrated score . unlike traditional univariate scores, these integrated scores are based on both the univariate statistic and the gene combination measures . for the type of gsea variations that depend on phenotype permutation test , lists of integrative scores are computed , one for the real class labels and the other for the permutations . 
for the type of gsea variations that are based on gene - set permutation test , only the list of integrated scores for the real class labels are needed .an independent matlab implementation of the proposed framework is available for download , which allows most existing gsea frameworks to directly utilize the proposed framework to handle gene combinations with almost - zero modification .* experimental results that illustrate the effectiveness of the proposed framework .* we integrated three gene combination measures and the gsea approach presented in and produced experimental results from four gene expression datasets .these results demonstrate that the integrative framework can discover gene sets that would have been missed without the consideration of gene combinations .this includes statistically significant gene sets with moderate differential gene combinations whose individual genes have very weak discriminative power .thus , a gene combination assisted gsea approach can improve traditional gsea approaches by discovering additional disease - associated gene sets .indeed , the integrative approach also improve traditional dgcs since most gene combinations are not statistically significant by themselves .furthermore , we also show that the biologically relevant gene sets discovered by the integrative framework share several common biological processes and improve the consistency of the results among the three lung cancer data sets .* overview : * the rest of the paper is organized as follow . in section [ sec : measures ] , we describe three gene combination measures used in the following discussion and experiments . in section [ sec : methods ] , we present the technical details of the two procedures of the general integrative framework . experimental design and resultsare presented in section [ sec : exp ] , followed by conclusions and discussions in section [ sec : discssion ] .in this section , we describe three dgcs measures for use in the following discussion and experiments .let and be two phenotypic classes of samples of size and respectively . for each sample in and , we have the expression value of genes .first , we have the following two measures ( denoted as and ) defined for a pair of genes as presented in : where and are two genes , and represents the correlation of and over the samples in set .as discussed in , and can detect the joint differential expression of two genes as illustrated in figure [ fig : gb2005toy1 ] and figure [ fig : gb2005toy2 ] respectively . and used as two representative measures for gene pairs .other options for gene combination measures for gene pairs have been investigated in .we use the subspace differential coexpression measure ( denoted as ) proposed in as the representative for measures for size- gene combinations , where can be any integer ( ) . where is a set of genes such that and . and respectively represent the fraction of samples in and over which the genes in are coexpressed . is a generalization of for detecting the differential coexpression of genes ( ) , i.e. the genes are highly coexpressed over many samples in one class but over far fewer samples in the other .other options for size - k combinations include the _ n - statistic _ , _ supmaxpair _ , etc .signal - to - noise ratio ( denoted as ) is used as the representative of traditional univariate statistics as in . 
where and are the mean expression of in class and respectively , and and are the standard deviation of the expression of in class and respectively .many other univariate statistics can be found in . in this paper , these four measures are used as representatives of each category for the illustration of the proposed integrative framework .however , the framework is general enough to handle other measures from each of these categories .in section [ sec : intro ] , we motivated the integration of dgcs with gsea , discussed two challenges associated with this integration , and briefly described two main procedures in the proposed framework . in this section, we present the technical details of the two procedures and their integration with gsea .there are two steps in procedure . in step , for each dgcs measure and each size- gene combination , its score is divided into equal parts and assigned to each of the genes in the combination . in step , the scores assigned to a gene from all the size - k combinations in which the gene participates are summarized into a single score by an aggregation functions such as .note that , for most univariate statistic and dgcs measures which can be either positive or negative ( e.g. the four measures described in section [ sec : measures ] ) , the maximum is taken over the absolute values of the scores , and the sign of the score with the highest absolute value is recorded for later use .other simple statistics such as mean or median , or sophisticated ones such as weighted summation can also be used . sincethe focus of this paper is the overall integrative framework , we use for simplicity . we provide a conceptual example of procedure for a gene with a certain dgcs measure .this example considers gene combinations up to size for illustration purpose .the gene is associated with scores assigned from gene combinations of size 2 , 3 and 4 ( denoted as , and respectively ) in which participates . in step , the scores from , and are summarized by three maximum values , respectively. please refer to the appendix section for the illustration of this example .procedure serves as a general approach to summarize the scores of all the size- combinations into scores for the genes .if we want to integrate gsea with one dgcs measure and a specific size- , procedure by itself can enable most existing variations of gsea to search , with almost - zero modification , for statistically significant gene sets with moderate but coordinated gene combinations of size- .such a gsea approach can collectively consider the gene combinations affiliated with a gene set , and may provide better statistical power and better interpretability for dgcs , as will be shown in the experiments .the hypothesis tested when one dgcs measure , say , is integrated with gsea ( by procedure ) is that , whether a gene set includes significantly many genes with highly positive ( or highly negative ) combination - based scores measured by .an extended hypothesis can be whether a gene set includes significantly many genes with highly positive ( or highly negative ) scores , either univariate or combination - based scores measured by different dgcs measures .the biological motivation of this extended hypothesis is that , a gene can be associated with the phenotype either as an univariate variable or together with other genes as a combination . to test this extended hypothesis, we design a second procedure ( _ b _ ) that can integrate the scores of a gene from different measures . 
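To make the scoring and the combination-to-gene summarization (procedure A) concrete, here is a minimal sketch. The pair score is the between-class correlation difference described above, the univariate score is the signal-to-noise ratio, and the summarization splits each pair score equally and keeps, per gene, the assigned share of largest absolute value with its sign. Variable names, the brute-force enumeration of pairs, and the use of the sample standard deviation are assumptions for illustration.

```python
import numpy as np
from itertools import combinations

def snr(expr_a, expr_b, gene):
    """Signal-to-noise ratio for one gene between classes A and B.
    expr_a / expr_b are genes-by-samples arrays for the two classes."""
    a, b = expr_a[gene], expr_b[gene]
    return (a.mean() - b.mean()) / (a.std(ddof=1) + b.std(ddof=1))

def pair_score(expr_a, expr_b, gene_i, gene_j):
    """Correlation-difference score for a gene pair: correlation in class A
    minus correlation in class B (one of the pair measures described above)."""
    corr_a = np.corrcoef(expr_a[gene_i], expr_a[gene_j])[0, 1]
    corr_b = np.corrcoef(expr_b[gene_i], expr_b[gene_j])[0, 1]
    return corr_a - corr_b

def summarize_to_genes(expr_a, expr_b, k=2):
    """Procedure A sketch for size-2 combinations: split each pair score
    equally between its two genes, then keep, per gene, the assigned share
    of largest absolute value (sign preserved). Brute-force over all pairs."""
    n_genes = expr_a.shape[0]
    best = np.zeros(n_genes)
    for i, j in combinations(range(n_genes), 2):
        share = pair_score(expr_a, expr_b, i, j) / k
        for g in (i, j):
            if abs(share) > abs(best[g]):
                best[g] = share
    return best
```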
before describing the steps in this procedure .we first discuss in detail the challenges of integrating heterogeneous scores from different dgcs measures and combinations of different sizes . 1 . * the different nature of different measures * : different measures are designed to capture different aspects of the discriminative power of a gene or a gene combination between the two phenotypic classes .signal - to - noise ratio ( ) , a univariate gene - level statistic , measures the difference between the means of the expression of a gene in the two classes .in contrast , , a differential coexpression measure for a pair of genes describes the difference of the correlations of a gene - pair in the two classes .thus , for a gene , the score of itself measured by and the score assigned and summarized from the size-2 gene combinations measured by are not directly comparable .similarly , the scores of different dgcs measures can also have a different nature , e.g. and as illustrated in figure [ fig : gb2005toy12 ] .* the different scales of different measures * : different measures also have different ranges of values . for example , the range of , , , and are ] , ] respectively . thus , they are not directly comparable .* differences in significance between different measures * : even after we normalize the scores of different measures to a single range , say $ ] , they are still not comparable because the scores of different measures have different statistical significance . for example , a normalized score of may be less significant than a normalized score of 0.5 , if there are many genes with normalized score greater than in the permutation test , but very few genes with normalized score greater than in the permutation test .note that , such differences in statistical significance also exists between gene combinations of different sizes , even for the same measure .take the subspace differential coexpression measure as an example. a score of for a size-2 combination may not be as significant as a score of for a size-3 combination as discussed in .to handle the above heterogeneity , we propose a score - to - pvalue transformation and summarization procedure that can enable the comparison and integration of the scores of different measures and combinations of different sizes .there are three major steps in procedure . in procedure ( score - to - pvalue transformation ) for gene and measure .,scaledwidth=95.0% ] consider a concrete example . for a gene and a measure , procedure computes a single summarized score . 
in this step, the original phenotype class labels are permutated say times , and for each permutation , the same procedure is applied , and a corresponding score for and is computed .we denote the score of and summarized with the original label as , where is the gene index , and indicates the measure and means it is the score based on the original label .similarly , we denote the scores computed in each of the permutation as , where .these scores are organized in the table on the left in figure [ fig : tabmerge2measures ] .the scores computed in the permutations can be considered as the null - distribution for gene and measure , and a p - value can be estimated for .specifically , if is positive , the p - value is the ratio of the number of scores that are greater or equal to and the number of scores that are positive .similarly if is negative , the p - value is computed as the ratio of the number of scores which are less or equal to and the number of scores which are negative .note that , such a score - to - pvalue transformation is done for both and each of ( ) , if the gsea approach to be integrated is based on phenotype permutation test .otherwise , only needs to be transformed to p - value and will be used by the gsea approaches that are based on gene - set permutation . in this paper, we illustrate the proposed framework using the gsea approach presented by subramanian and tamayo et al . which is based on phenotype permutation test. essentially , step transforms the heterogeneous scores of a gene measured by different measures into their corresponding significance values , which are comparable to each other although their original values are not .suppose that there are different measures to be integrated , one of which is a univariate statistic , and the others are different dgcs measures for which we consider combinations of sizes up to .after step , each gene has a p - value for the univariate measure and up to p - values for each size of gene combination for each measure . in step ,the best transformed p - value ] p - value associated with a gene is selected as the integrated significance .essentially , procedure integrates the scores of different dgcs measures for a gene and the univariate statistic of the gene into a single p - value .such a statistical significance - based integration of heterogeneous scores enables the comparison and thus the ranking of all the genes .however , this ranked list does not maintain the original directionality of the integrated scores of each gene .in particular , most univariate statistics and dgcs measures ( e.g. all the four measures described in section [ sec : measures ] ) can be either positive or negative .such directionality information is lost in step and because the p - value is non - negative .next , we describe a third step to maintain the directionality in the integration .in the simple case , the measures to be integrated capture the same type of differentiation between the two phenotype classes , e.g. and .suppose there are two genes and , whose integrated p - values are transformed respectively from two scores measured by and in step .the signs of these two scores are comparable to each other , because both and capture the change of coexpression of a combination of genes .thus , we simply use the signs of these two scores as the signs associated with the integrated p - values of and . 
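A sketch of the score-to-p-value transformation and the best-p-value selection follows. The handling of zero scores, empty sign groups, and ties is not spelled out in the text, so those details are assumptions.

```python
import numpy as np

def score_to_pvalue(observed, permuted):
    """Transform one gene's summarized score into a signed p-value using the
    scores obtained under label permutations (procedure B, step 1). Positive
    scores are compared against the positive part of the null distribution,
    negative scores against the negative part, as described in the text."""
    permuted = np.asarray(permuted, dtype=float)
    if observed >= 0:
        null = permuted[permuted > 0]
        p = float((null >= observed).sum()) / max(len(null), 1)
        sign = 1
    else:
        null = permuted[permuted < 0]
        p = float((null <= observed).sum()) / max(len(null), 1)
        sign = -1
    return sign, p

def best_pvalue(signed_pvalues):
    """Procedure B, step 2 sketch: keep the most significant (smallest)
    p-value among all measures and combination sizes for a gene."""
    return min(signed_pvalues, key=lambda sp: sp[1])
```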
similarly , we associate a sign to all the integrated p - values .and these p - values with associated signs can be used to rank the genes based on their significance as well as their direction of differentiation ,i.e. p - values associated with positive signs are ranked with descending significance , and afterwards , p - values associated with negative signs are ranked with increasing significance . in the other case , if the measures to be integrated capture different types of differentiation between the two phenotype classes , the directionality can not be fully maintained .for example , suppose there are two genes and , whose integrated p - values are transformed respectively from two scores measured by and in step .the signs of these two scores are not comparable , because captures the change of mean expression , and captures the change of coexpression of a combination of genes .specifically , up - regulation of can be associated with either high or low coexpression of another gene - combination in which participates .thus , it is not reasonable to follow the same strategy to associate signs to the integrated p - values .if we know the correspondence of the signs of different genes in advance , e.g. the up - regulation of is associated with the low coexpression of and , then the signs can be maintained .however , because it is not realistic to assume such prior knowledge , we propose the following heuristic approach which has proved a workable solution for our initial experiments . specifically , since the focus of step is to integrate different dgcs measures in addition to the univariate statistic , we considered as the base measure . for the integrated p - values that are transformed from scores measured by in step ( say there are of them ) , we use the signs of these scores for the integrated p - values . for the signs of the other genes , we assign positive signs to all of them once and negative signs to all of them a second time .correspondingly , we have two ranked lists similar to the simple case described above .note that , if the directionality of differential measures can be preserved , the power of this approach will be enhanced . to deal with the situation wheresigns are not comparable , other approaches will be explored . from the above description of procedure and , we know that , if only one dgcs measure is used in the gsea framework , only procedure is needed .if one or multiple dgcs measures are integrated together with the univariate statistic in the gsea framework , procedure is needed in addition . in the first case , the integrative framework outputs a ranked list of scores with associated signs for the original class label , and lists corresponding to the permutation tests . in the second case, we have two sets of lists respectively for the two rounds of maintaining directionality in step in procedure . in either case , the ranked lists along with the appropriate parameter settings and specification of gene sets can be used to run gsea .the only modification to gsea is the elimination of the initial gsea step to generate the scores , simulated and actual , that measure the level of differentiation between genes across different phenotypes .the proposed integrative framework is implemented as a matlab function ( available at http://vk.cs.umn.edu/icg/ ) , independently from the gsea framework to be integrated in this paper . 
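Given one signed p-value per gene, the ranked list handed to GSEA can be assembled exactly as described: positively signed genes ordered from most to least significant, followed by negatively signed genes from least to most significant. A minimal sketch (not the authors' MATLAB implementation):

```python
def build_ranked_list(gene_signed_pvalues):
    """gene_signed_pvalues maps gene -> (sign, p_value). Returns gene names
    ordered as described: positive-signed genes by decreasing significance,
    then negative-signed genes by increasing significance."""
    positives = [(g, p) for g, (s, p) in gene_signed_pvalues.items() if s >= 0]
    negatives = [(g, p) for g, (s, p) in gene_signed_pvalues.items() if s < 0]
    positives.sort(key=lambda gp: gp[1])                 # most significant first
    negatives.sort(key=lambda gp: gp[1], reverse=True)   # least significant first
    return [g for g, _ in positives] + [g for g, _ in negatives]

# Hypothetical example:
# ranked = build_ranked_list({"GENE1": (1, 0.001), "GENE2": (-1, 0.02)})
```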
as summarized by ackermann and strimmer ,hundreds of variations of gsea are being used by different research groups .this independently implemented integrative framework can be easily applied to other variations of gsea . in our experiments , in order to have a fair comparison , we transform the ranked lists into the exact sample distribution as the original lists corresponding to .specifically , we only use the ranking information in the integrated ranked lists and map to them to the values in the original lists based on . essentially , the values in the ranked list passed to the gsea framework are exactly the same among , , , and , while the only difference is that the genes have different ranks in the lists .such a mapping ensures that the additionally discovered gene sets are because of the integration of gene - combinations in addition to univariate statistic , rather than simply the different value distributions in the lists .in this section , we present the experimental design and results for the evaluation of the proposed integrative framework .we first provide a brief description of the data sets and parameters used in the experiments .second , we describe and discuss the comparative experiments to study whether the integration of dgcs and gsea ( denoted as dgcs ) improves both dgcs and gsea .the two major evaluation criteria are the statistical power to discover ( additional ) significant results , and the consistency of the results across different datasets for the study of the same phenotype classes .the four datasets used in the experiments are described as follows : 1 .three lung cancer datasets respectively denoted as boston , michigan and standford : all the three data sets consist of gene - expression profiles in tumor samples from respectively 62 , 86 and 24 patients with lung adenocarcinomas and provide clinical outcomes ( classified as `` good '' or `` poor '' outcome ) .the two phenotypic classes in these three datasets are denoted as and as in .2 . a data set from the nci-60 collection of cancer cell lines for the study of p53 status ( denoted as data set ) : the mutational status of the p53 gene has been reported for 50 of the nci-60 cell lines , with 17 being classified as normal and 33 as carrying mutations in the gene .the two phenotypic classes in this dataset are denoted as and as in .all four datasets were downloaded from the gsea website , and were already preprocessed as described in the supplementary file of .for all four data sets , we use the gene sets from in msigdb as in , as well as the same parameters .we consider one univariate statistic ( ) , and three gene - combination measures ( , and ) in our experiments .these four measures are described in section [ sec : measures ] . and are defined only for size-2 combinations . for , we considered gene - combinations of size- and size- for the illustration of concept ..number of with fdr less than discovered from the four data sets by each combination measure [ cols="<,<,<,<,<",options="header " , ] in this section , we study whether the question ( q1 ) of whether integration of dgcs and gsea can improve traditional dgcs . for this comparison , we consider the integration of dgcs and gsea as a gsea - assisted dgcs approach .we first apply the traditional dgcs approaches on the four datasets to find statistically significant gene - combinations .we denote the three dgcs approaches respectively with the names of the three measures , i.e. 
, and .second , we apply the integrative framework , in which gsea is integrated respectively with the three dgcs measures , to find statistically significant gene sets with moderate but coordinated differential gene - combinations .we denote the three instances of the integrative approach respectively as , and .then , we compare the results of , and , respectively with the results of , and .table 1 lists the number of statistically significant gene combinations discovered respectively by the three measures on each of the four datasets , with an fdr threshold of .table 1 lists the number of statistically significant gene sets discovered by integrating gsea respectively with the three dgcs measures on each of the four datasets , also with the same fdr threshold of .three major observations can be made by comparing the two tables : table 1 shows that , in most cases , traditional dgcs discovers very few ( less than 3 ) statistically significant gene combinations ( although and have and gene - combinations on the boston data set , none of them have fdr lower than ) .in contrast , table 2 shows that the integration of gsea with the three combination measures discover multiple significant gene sets in most of the cases .this difference implies that the discovered statistically significant gene sets include many moderate but coordinated differential gene combinations , even though the combinations are not significant by themselves as shown in table 1 .this comparison demonstrates that traditional dgcs , similar to univariate gene analysis , has limited statistical power , and dgcs can increase that power .we further compare dgcs and dgcs by studying the consistency of their results on the first three data sets that are all from lung cancer studies , as done in .for dgcs , discovered genes on michigan but nothing from boston and stanford ; discovered combinations on boston but only 1 and 2 from michigan and stanford , respectively , and there are no common ones between the 645 , 1 , and 2 gene combinations ; discovered genes on boston but only gene on michigan and nothing from stanford , and the 10 and 1 combinations do not overlap .the inconsistent results make the follow - up biological interpretation very difficult . in contrast ,when the three dgcs measures are integrated with gsea , several consistent themes can be observed : ( i ) apoptosis related pathways ( marked by in table 2 ) : discovered four gene sets on boston , three of which are known to be closely related to cancer and specifically to apoptosis , i.e. _ nfkbpathway _ , _ st - gaq - pathway _ and _ tnf - pathway_. this apoptosis theme is shared by the gene sets discovered by -gsea from michigan and stanford , i.e. _ monocyte - ad - pathway _ , _ hivnefpathway _ , _ deathpathway _ and _ caspasepathway_. these apoptosis related pathways are enriched with the lung cancer samples with good outcome , which makes sense biologically and also corresponds to the proliferation theme supported by the gene sets enriched with the samples with poor outcome as reported in .several other examples of the result consistency , as indicated by other superscripts in table 2 , are in the technical report .this comparison demonstrates that traditional dgcs , like univariate gene analysis , has poor result consistency across the three lung cancer data sets , and dgcs can improve its consistency by integrating dgcs measures with gsea .the number of significant gene sets discovered by the three versions of gsea varies , i.e. 
and discovered a bit larger number of significant gene sets than .however , still discovered several gene sets that are not discovered by or , e.g. one gene set from the michigan data set and three from the stanford data set .this indicates that , and have complementary perspectives , i.e. different combination measures capture different aspects of the difference between the phenotype classes ( recall the two types of combinations in figure [ fig : gb2005toy12 ] ) .this also demonstrates the proposed framework is general enough to integrate any type of dgcs with gsea . in this section ,we want to answer the question ( q2 ) of whether the integration of dgcs and gsea can improve traditional gsea . for this comparison ,we consider the integration of dgcs and gsea as a dgcs - assisted gsea approach .we design three sets of comparisons .firstly , we compare the traditional univariate - statistic based gsea ( denoted as ) with the integrative framework where one gene - combinations measure is used instead of .specifically , we compare the gene sets discovered by with the gene sets discovered by , and .then , we compare with the integrative framework where one gene - combinations measure is used in addition to , i.e. , and . furthermore , we also study the integration of multiple gene - combinations measure in addition to , e.g. . figure [ fig : bigtable12312009half ] displays the statistically significant gene sets discovered with different ( combinations of ) measures respectively from the four datasets .an fdr threshold of is used as in for comparison purpose .the results presented in are exactly reproduced , i.e. the gene sets listed in the rows corresponding to . in each of these four figures ,we consider the traditional univariate - statistic based gsea ( ) as the baseline , and compare it with the rows corresponding to , , , , , and . from these comparisons , the following observations can be made .gsea ) is considered as the baseline .for the other rows , we only list a gene set if it is only discovered by the integrative approach ( with bolded name ) , or it has a non - trivially decreased fdr when it is discovered by the integrative approach ( with bolded fdr ) .please refer to the appendix section for the complete tables . ]first , we compare the rows corresponding to , , with the rows corresponding to .we bolded the additional gene sets that are only discovered by , , .for example , with , no statistically significant gene sets have been enriched with class a in the boston data set .in contrast , discovered gene sets , three out of which ( discussed in ) are related to apoptosis which is consistent with the results on michigan and stanford . 
on the michigan dataset, discovered a gene set _ beta - alanine - metabolism _ that is not discovered by .this gene set is related to the responses of hypoxia , which is consistent with the results on boston and stanford .it is worth noting that , although most studies did not report statistically significant gene sets on the stanford dataset due to the very small sample size , , , respectively discovered 4 , 4 and 3 significant gene sets .these additional gene sets were discovered because the three dgcs measures capture different types of the differentiation between the two phenotype classes , compared to the traditional univariate differential expression - based gsea .second , we compare the rows corresponding to , , with the rows corresponding to .we bolded the additional gene sets that are only discovered by the integrative approach .for example , on the boston data set , based gsea discovered 8 gene sets .in addition , discovered the _gene set , and discovered the _ p53-signaling _ gene set .both ubiquitin - proteasome pathway and p53-signaling pathway are well - known cancer - related pathways that are also specifically related to lung cancer .( additional examples are in the technical report . )the gene sets that are discovered by dgcs - assisted gsea but not by -gsea illustrate the benefits of using dgcs to assist gsea .next , we also observed that integrating multiple dgcs measures can further discover statistically significant gene sets . for illustration purpose , we compare the rows corresponding to , , with the rows corresponding to . discovers the _g2pathway _ gene set and the _gsk3pathway _ gene set , respectively from the boston and the michigan dataset .neither of these two pathways are discovered by , , and .the curated gene set _ g2pathway _ contains the genes related to the g2/m transition , which is shown to be regulated by p53 , a well - known cancer - related gene .the curated gene set _ gsk3pathway _ is the signaling pathway of gsk-3- , which has been shown to be related to different types of cancer .these two cancer - related pathways are discovered by but not by , , and .this indicates that different members of these two pathways are differential between the two phenotype groups in different manners , i.e. the differentiation of some genes is captured by , some by , some by and some by .these two pathways can be discovered to be statistically significant only when these measures are used together in the integrative framework .this demonstrates the benefits of the proposed framework for integrating multiple dgcs measures with a univariate measure .it is worth noting that , the gene sets discovered by the integrative framework with multiple measures are not necessarily a superset of those discovered by integrating each individual measure with gsea since , when different dgcs measures are integrated with gsea , the null - hypotheses tested in the gsea framework are correspondingly different .the highlight of the integrative framework is that , additional gene sets can be discovered when different dgcs measures are used to assist the traditional univariate statistic - based gsea . in practice , these different versions of gsea should be used collectively . 
: even when a gene set is discovered both before and after a dgcs measure is integrated into the framework , we can observe several interesting cases where the fdr of a gene set becomes much lower after the integration .we bolded the fdrs that significantly decreased when they are discovered by the integrative approach .for example , -gsea , in which , , and are integrated together , discovers _p53hypoxialpathway _ with an much lower fdr of , two - order lower than -gsea .this example indicates that several members of _p53hypoxialpathway _ have weak individual differentiation measured by , but have more significant differentiation when they are measured by .this and other similar examples demonstrates the benefits of the proposed framework for integrating multiple dgcs measures .as presented in , discovered and gene sets respectively from the boston and michigan data sets , and 5 of the 8 in boston and 6 of the 11 in michigan are common .the three unmatched gene sets that are discovered in boston but not in michigan are _ glut - down _ , _ leu - down _ and _ cellcyclecheckpoint_. interestingly , the latter two are discovered from both the boston and the michigan data sets by .such observations suggest that dgcs - assisted gsea also provides new insights to the consistency between different data sets . because different combinations of measures are used in the integrative framework , additional issues of multiple hypothesis testing arise , even though multiple hypothesis testing has been addressed for each measure via the phenotype permutation test procedure in the gsea framework proposed in .to investigate this , we designed experiments with 4 of the possibilities of integrations , i.e. , , and . even using a collective ( meta - level ) multiple hypothesis correction , many discovered gene sets would still be significant . for examples, discovers _p53hypoxialpathway _ from the boston data set with a low fdr of , and discovers _ deathpathway _ from the michigan data set with a lower fdr of .we also did additional permutation tests , in which we generate random gene sets with the same sizes as the sets in msigdb , and do the same set of experiments as shown in figure [ fig : bigtable12312009half ] .the fdr values of the random gene sets computed in the integrative framework are mostly insignificant ( higher than ) .in this paper we motivated the integration of differential gene - combination search and gene set enrichment analysis for bi - directional benefits on both them .we proposed a general integrative framework that can handle gene - combinations of different sizes ( ) and different gene - combination measures in addition to an univariate statistic used in traditional gsea .the experimental results demonstrated that , on one hand , gsea - assisted dgcs has better statistical power and result consistency than traditional dgcs . on the other hand , dgcs - assisted gsea can discover additional statistically significant gene sets that are ignored by traditional gsea and further improve the result consistency of the traditional gsea .the proposed framework can be extended in several ways .different variations of gsea will be considered . along these lines, we note that the proposed integrative framework is general enough to integrate most existing variations of gsea approaches summarized in with minimal amount of modification . also , it should be possible to integrate dgcs and gene - subnetwork discovery . 
both gsea and gene - subnetwork discovery can discover collections of genes , either known gene sets or subnetworks in a molecular network ( e.g. protein interaction network ) , that show moderate but coordinated differentiation . in this paper, we integrate dgcs and gsea as an illustration of the general framework for integrating scores from different gene - combination measures and gene - combinations of different sizes , in addition to the traditional univariate statistic , but the same framework also applies to the integration of dgcs and gene subnetwork discovery . another direction is the use of this framework for the analysis of ( gwas ) snp data , by following the methodology proposed in recently work on pathway / network based analysis of gwas datasets .finally , it may be possible to use constraints on gene - combinations to improve our framework . in procedure , for each gene - combination measure and an integer , the score of a gene is assigned from all the possible gene - combinations . a further extension of procedure is to only consider the gene combinations , in which the genes appear in a common gene set , e.g. a pathway .such gene - set - based constraints may better control false positive gene combinations and improve the statistical power of the whole integrative framework .due to the space limit , table 2 and figure [ fig : bigtable12312009half ] are both summarized from the four complete tables that are available at http://vk.cs.umn.edu / icg/. specifically , table 2 is a high - level summary of the number of gene sets discovered and the biological processes associated with each of the gene sets . in figure[ fig : bigtable12312009half ] , we listed the complete results for ( the baseline ) , while for the other rows , we only list a gene set if it is only discovered by the integrative approach ( with bolded name ) , or it has a non - trivially decreased fdr when it is discovered by the integrative approach ( with bolded fdr ) .g. fang , g. pandey , m. gupta , m. steinbach , and v. kumar . mining low - support discriminative patterns from dense and high - dimensional data .technical report 09 - 011 , department of computer science , university of minnesota , 2009 .k. wang , m. narayanan , h. zhong , m. tompa , e. e. schadt , and j. zhu .meta - analysis of inter - species liver co - expression networks elucidates traits associated with common human diseases . , 5(12):e1000616 , 2009 .
gene set enrichment analysis ( gsea ) and its variations aim to discover collections of genes that show moderate but coordinated differences in expression . however , such techniques may be ineffective if many individual genes in a phenotype - related gene set have weak discriminative power . a potential solution is to search for combinations of genes that are highly differentiating even when individual genes are not . although such techniques have been developed , these approaches have not been used with gsea to any significant degree because of the large number of potential gene combinations and the heterogeneity of measures that assess the differentiation provided by gene groups of different sizes . to integrate the search for differentiating gene combinations and gsea , we propose a general framework with two key components : ( a ) a procedure that reduces the number of scores to be handled by gsea to the number of genes by summarizing the scores of the gene combinations involving a particular gene in a single score , and ( b ) a procedure to integrate the heterogeneous scores from combinations of different sizes and from different gene combination measures by mapping the scores to p - values . experiments on four gene expression data sets demonstrate that the integration of gsea and gene combination search can enhance the power of traditional gsea by discovering gene sets that include genes with weak individual differentiation but strong joint discriminative power . also , gene sets discovered by the integrative framework share several common biological processes and improve the consistency of the results among three lung cancer data sets .
when one is dealing with classical field theories on a spacetime, the metric may appear as a given background field or it may be a genuine dynamic field satisfying the einstein equations. the latter theories are often generally covariant, with the spacetime diffeomorphism group as symmetry group, but the former are often considered to have only the isometry group of the metric as a symmetry group. however, (see also ) indicated how theories with a background metric can be parametrized, that is, considered as theories that are fully covariant, if one introduces the diffeomorphisms themselves as dynamic fields. the goal of this paper is to develop this idea in the context of multisymplectic classical field theory and to make connections with stress-energy-momentum (``sem'') tensors. as we shall see, the multimomenta conjugate to these new covariance fields form, to borrow a phrase from elasticity theory, the piola kirchhoff version of the sem tensor, and their euler lagrange equations are vacuously satisfied by virtue of the fact that the sem tensor is covariantly conserved. thus these fields have no physical content; they serve only to provide an efficient way of parametrizing a field theory. nonetheless, the resulting generally covariant field theory has several attractive features, chief among which is that it is fully dynamic: all fields satisfy euler lagrange equations. structurally, such theories are much simpler to analyze than ones with absolute objects or noncovariant elements. we emphasize that the results of this paper are for those field theories whose lagrangians are built from dynamic matter or other fields and a non-dynamic background metric. one of our motivations was to find a way to treat background fields and dynamic fields in a unified way in the context of the adjoint formalism. many of the ideas are applicable to a wider range of field theories, as already indicates, but in this paper we confine ourselves to this important class. the general case is presented in along with a more detailed discussion of parametrization theory and related topics. suppose that we have a metric field theory in which the metric is an absolute object in the sense of . for instance, one might consider a dynamic electromagnetic field propagating on a schwarzschild spacetime. such a theory is not generally covariant, because the spacetime is fixed, and not all fields are on an equal footing, as the electromagnetic field is dynamic while the gravitational field is not. a somewhat different example is provided by nordström's theory of gravity (see 17.6 of ), which is set against a minkowskian background. in this section we explain how to take such a system and construct from it an equivalent field theory that achieves the following goals: (i) :: the new field theory is generally covariant, and (ii) :: all fields in the new field theory are dynamic. this ``covariance construction'' is an extension and refinement of the parametrization procedure introduced by . [[setup.]] setup. + + + + + + as usual for a first order classical field theory, we start with a bundle whose sections, denoted , are the fields under consideration. the dimension of is taken to be , and we suppose that is oriented. let be a lagrangian density for this field theory, where is the first jet bundle of and is the space of top forms on .
loosely following the notation of or , we write coordinates for as .in addition , in coordinates , we shall write evaluated on the first jet prolongation of a section , the lagrangian becomes a function of ; we shall abbreviate this when convenient and simply write .we assume that the fields are dynamic .[ [ example . ] ] example .+ + + + + + + + we will intersperse the example of electromagnetism throughout the paper to illustrate our results .then is the cotangent bundle of 4-dimensional spacetime , sections of which are electromagnetic potentials .the corresponding lagrangian is written below . [ [ a - first - attempt - at - general - covariance . ] ] a first attempt at general covariance .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + suppose that the spacetime comes equipped with a fixed , background metric .the obvious first step in attaining general covariance is to allow to vary ; thus the metric will now be regarded as a genuine _ field _ on .( when the metric is regarded as variable , we denote it by , and when we want to revert to its fixed value we use . )so we are led to view the lagrangian density as a map where is the bundle whose sections are lorentz metrics on .we correspondingly write ; the semicolon is used to separate the dynamic from the nondynamic fields .( we emphasize that being variable does not mean that it is dynamic ; we discuss this point momentarily . )notice that we have tacitly assumed that the dependence of on the metric is pointwise that is , we have non - derivative coupling .( the more general case of derivative coupling will be considered in 5 . in any event ,we remark that derivatively - coupled theories are considered by many to be pathological . )[ [ example.-1 ] ] example .+ + + + + + + + the electromagnetic lagrangian density is where next , assume that the given lagrangian density has the following ( eminently reasonable ) covariance property for a diffeomorphism : where we assume that a way to lift the spacetime diffeomorphism to a bundle automorphism of has been chosen .[ [ example.-2 ] ] example .+ + + + + + + + for the electromagnetic 1-form potential , we take the lift to be push - forward on the fiber , which makes it obvious that holds in this case .when condition holds , we say that the theory is generally covariant , i.e. , the lagrangian density is -equivariant .thus we have accomplished objective ( i ) .however , the reader may well remark that this was ` too easy , ' and would be quite right .the problem is that it is not clear how , or even _ if _ , can now be made dynamic .certainly , can not be taken to be variational unless one adds a source term to the lagrangian density for , for otherwise as the metric non - derivatively couples to the other fields .but what should this source term be ? if is gravity , we could use the hilbert lagrangian , but otherwise this is unclear . [ [ the - covariance - field . ] ] the covariance field .+ + + + + + + + + + + + + + + + + + + + + the solution to our problem requires more subtlety .we will sidestep both the issues of making variable , and then making dynamic , in one fell swoop as follows .we introduce an entirely new field , the `` covariance field '' into the theory .it will ` soak up ' the arbitrariness in , and will be dynamic . in this way we are able to generate a new generally covariant field theory , physically equivalent to the original one , in which all fields are dynamic . 
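the construction below leans repeatedly on the pull-back of the fixed metric by a diffeomorphism. as a concrete companion, here is a small sympy sketch (a made-up two-dimensional toy example, not taken from the paper) of the coordinate formula for the pulled-back metric, obtained by sandwiching the target metric between jacobians of the map.

```python
import sympy as sp

# base coordinates (a 2d toy spacetime for brevity)
x0, x1 = sp.symbols('x0 x1', real=True)
X = sp.Matrix([x0, x1])

# a hypothetical diffeomorphism eta: base spacetime -> the copy carrying the fixed metric
eta = sp.Matrix([x0 + sp.Rational(1, 10) * sp.sin(x1), x1])

# fixed background metric on the target, written as a function of the target coordinates
def gbar(y):
    return sp.exp(y[0]) * sp.diag(-1, 1)     # a conformally flat toy metric

J = eta.jacobian(X)                          # jacobian eta^a_{,mu}
g = sp.simplify(J.T * gbar(eta) * J)         # (eta* gbar)_{mu nu} = gbar_{ab}(eta) eta^a_{,mu} eta^b_{,nu}
print(g)
```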
hereis the construction .the key idea is to introduce a copy of spacetime into the fiber of the configuration bundle .consider ( oriented ) diffeomorphisms , thought of as sections of the bundle .we regard the diffeomorphisms as new fields , and correspondingly replace the configuration bundle by .next , modify to get the new lagrangian defined on : thus , we obtain a modified field theory with the underlying bundle . the general set up is shown in the figure below .-40pt the general set up for the introduction of covariance fields .let coordinates on be denoted and the associated jet coordinates be denoted . then, writing and similarly for , in coordinates equation reads where from the definition of pull - back we obtain from one verifies that the euler lagrange equations for the fields remain unchanged .[ [ example.-3 ] ] example .+ + + + + + + + for the electromagnetic field , our construction produces where is the jacobian of and we pause to point out the salient features of our construction .first , the fixed metric on spacetime is no longer regarded as living on , but rather on the copy of in the fiber of the configuration bundle .so is no longer considered to be a field it has been demoted to a mere geometric object on the fiber .second , the variable metric on is identified with , and thus acquires its variability from that of .so as well is no longer a field per se , but simply an abbreviation for the quantity .finally , we gain a field which we allow to be dynamic ; in the next subsection we will see that this imposes no restrictions on the theory at all .the first key observation is that the modified theory is indeed generally covariant . to this end ,recall that , as was explained earlier , given , there is assumed to be a lift .for the trivial bundle , we define the lagrangian density is -equivariant , that is , this is an easy consequence of the definitions and , and the covariance assumption .indeed & = { \mathcal{l } } \left ( j^1{\mspace{-1.5mu } } ( \sigma_y(\phi))\ , ; ( \sigma^{-1 } ) ^*(\eta^ * g)\right ) \\[1.5ex ] & = \sigma_*\!\left(\mathcal{l}(j^1 { \mspace{-1.5mu}}\phi)\ , ; ( \eta ^ * g))\right)\\[1.5ex ] & = \sigma_*\!\left(\widetilde { \mathcal{l } } ( j^1 { \mspace{-1.5mu}}\phi , j^1 { \mspace{-1.5mu}}\eta ) \right).\end{aligned}\ ] ] -24pt because of this property , we call the covariance field . [ [ example.-4 ] ] example .+ + + + + + + + from it is clear that the modified electromagnetic theory is generally covariant .next we will show something remarkable : the euler lagrange equation for the covariance field is vacuous .this is the main reason that , in the present context , we can introduce as a dynamic field with impunity , namely , its euler lagrange equation does not add any new information to , or impose any restrictions upon , the system .since , as we mentioned earlier , the euler lagrange equations for the fields remain unaltered , we see that _ the parametrized system is physically equivalent to the original system ._ first we compute the multimomenta conjugate to the field for the parametrized field theory with lagrangian . 
recall that in multisymplectic field theory , the multimomenta conjugate to the multivelocities are defined by using the chain rule together with the relations and, we find that recall from that , as we have assumed that is the only nondynamic field , and does not derivatively couple to the other fields , the sem tensor density for the _ original _ system with lagrangian and metric is given by the hilbert formula : from we conclude that the multimomenta conjugate to to the covariance field are given by the piola - kirchhoff sem tensor density : this is a familiar object in elasticity theory .observe that is a two - point tensor density : it has one leg ( ) in the spacetime in the fiber analogous to the spatial representation in elasticity theory , and the other leg ( ) in the spacetime in the base analogous to the material representation .now we compute the euler lagrange equations for the .these are : for .expanding the derivatives via the chain rule and using the same type of calculation as in the derivation of to write the equations in terms of rather than , the preceding equation becomes replacing by ( half of ) , and differentiating using the product rule , we obtain for . multiplying by the inverse matrix one gets for . and now , we multiply by , the inverse matrix of the jacobian for . taking into account the symmetry , the preceding equation becomes & - \ 2 \left(\frac{\partial \mathfrak t^{\mu \rho}}{\partial x^\mu}+\mathfrak t^{\mu \nu}\eta ^b{}_{,\mu \nu } \kappa^\rho{}_b \right ) = 0.\end{aligned}\ ] ] recalling the expression of the christoffel symbols of the metric , namely , we obtain finally , recall how the christoffel symbols for and the symbols for are related : using this in gives for , which is exactly the vanishing of the covariant divergence of the tensor _ density _ .thus , we have proven the following basic result .the euler lagrange equations for the covariance field are that the covariant divergence of the sem tensor density is zero .it is known from proposition 5 in that the sem tensor is covariantly conserved when the metric is the _ only _ nondynamic field .thus , in our context , the equation is an identity , whence the euler lagrange equations for the covariance field are vacuously satisfied .consequently the covariance field has no physical import .we are free to suppose is dynamic , and so we have accomplished goal ( ii ) : we have constructed a new field theory in which _ all _ fields are dynamic .it is interesting to compare the sem tensors for the original and parametrized systems . in sem tensor density is defined in terms of fluxes of the multimomentum map associated to the action of the spacetime diffeomorphism group .we rapidly recount some of the basic ideas .consider the lift of an infinitesimal diffeomorphism to ; it can be expressed where we suppose that for some coefficients .the largest value of for which one of the top coefficients is nonzero is the differential index of the field theory .we assume henceforth that the index most common and important case ( e.g. 
, when the fields are all tensor fields ) .in this context , theorem 1 along with remark 4 of shows that the sem tensor density for a lagrangian density is uniquely determined by for all vector fields on with compact support and all hypersurfaces , where is the inclusion .the multimomentum map gives , roughly speaking , the flow of momentum and energy through spacetime ; according to the quoted theorem , the fluxes of this flow across hypersurfaces are realized via the sem tensor density .manipulation of ( see formula ( 3.12 ) of ) shows that is given by where the summation extends over _ all _ fields .we apply this to the newly parametrized theory .note that if the index of the original theory is , then that for the parametrized theory will be also . as well from we see that the lift of to is trivial : that is , there are no terms in the directions in .thus the corresponding coefficients all vanish .the sem tensor for therefore reduces to on the other hand , and so that we can write but the first four terms on the rhs of this equation comprise the sem tensor density of the original theory since the do not derivatively couple to the ( cf .( 4.4 ) in ) .thus the sem tensor densities of the original and parametrized systems are related according to : but then on shell by the hilbert formula .therefore , we explicitly see that the sem tensor density for the fully covariant , fully dynamic modified theory vanishes . one can also obtain this result directly by applying the generalized hilbert formula ( 3.13 ) in to the parametrized theory , since it is fully dynamic .[ [ example.-5 ] ] example .+ + + + + + + + in the case of electromagnetism , one may compute directly from that .one could also compute from that -32pt we briefly consider the situation , although perhaps exotic , when the metric derivatively couples to the other fields .for simplicity , however , we suppose the theory remains first order .so the lagrangian density is taken to be a map as before , modify to get the new lagrangian defined on : ( since depends upon the first derivatives of , will depend upon its second derivatives .thus , we obtain a modified _ second _ order field theory with the underlying bundle . )the discussion proceeds as in the above , with only obvious changes .in particular , if is diff-covariant , then so is . as a simple illustration of a derivatively coupled theory ,consider a vector meson with mass .then is the tangent bundle of spacetime , and its sections are klein gordon vector fields .the lagrangian density is the map defined by where the semicolon denotes the covariant derivative with respect to .our construction produces \nonumber \\[1.5ex ] & \times \big [ \phi^{{\mspace{1.5mu}}\rho}{}_{,\nu } + \big(\eta^h{}_{,\nu\xi } + \eta^p{}_{,\nu } { \mspace{1.5mu}}\eta^q{}_{,\xi } { \mspace{1.5mu}}\gamma ^h _ { pq } \big ) \kappa^\rho{}_h{\mspace{1.5mu}}\phi^\xi\big ] \nonumber \\[1.5ex ] & - m^2 \phi^\sigma \phi^{{\mspace{1.5mu}}\rho } \bigg)\sqrt{-g}\ , ( \det \eta _ * ) \ , d^{{\mspace{1.5mu}}4}{\mspace{-1.5mu}}x\end{aligned}\ ] ] where is the jacobian of and we have made use of .now we turn to the euler lagrange equations for the which , since is second order in the , are : for . the calculation of the lhs is similar to the previous one , but slightly more complicated . 
in any event , we find that satisfies the euler lagrange equations , where now by the hilbert formula .\ ] ] thus for ( first order ) derivative couplings the covariance field remains vacuously dynamic .it is likely this will remain true for derivative couplings of arbitrary order , but we have not verified this as yet .we dedicate this paper to darryl holm on his 60 birthday .we thank him for his interest in the ideas in this paper and for his many inspiring works over the years .mjg and jem thank the national science foundation for its occasional support of work of this sort .mcl was partially supported by dgsic ( spain ) under grant mtm2007 - 60017 .
we give an exposition of the parametrization method of in the context of the multisymplectic approach to field theory , as presented in . the purpose of the formalism developed herein is to make any classical field theory , containing a metric as a sole background field , generally covariant ( that is , _ parametrized _ , with the spacetime diffeomorphism group as a symmetry group ) as well as fully dynamic . this is accomplished by introducing certain `` covariance fields '' as genuine dynamic fields . as we shall see , the multimomenta conjugate to these new fields form the piola kirchhoff version of the stress - energy - momentum tensor field , and their euler lagrange equations are vacuously satisfied . thus , these fields have no additional physical content ; they serve only to provide an efficient means of parametrizing the theory . our results are illustrated with two examples , namely an electromagnetic field and a klein gordon vector field , both on a background spacetime .
consider a generic factor model with a binary configurational space , , , which is factorized so that the probability to find the system in the state and the partition function are where labels non - negative and finite factor - functions with and represents a subset of variables .relations between factor functions ( checks ) and elementary discrete variables ( bits ) , expressed as and , can be conveniently represented in terms of the system - specific factor ( tanner ) graph .if we say that the bit and the check are neighbors .any spin ( a - posteriori log - likelihood ) correlation function can be calculated using the partition function , , defined by eq .( [ p1 ] ) .general expression for the factor functions of an ldpc code is let us now reproduce the derivation of the belief propagation equation based on the bethe free energy variational principle , following closely the description of .( see also the appendix of . ) in this approach trial probability distributions , called beliefs , are introduced both for bits and checks and , respectively , where , .a belief is defined for given configuration of the binary variables over the code .thus , a belief at a bit actually consists of two probabilities , and , and we use a natural notation .there are beliefs defined at a check , being the number of bits connected to the check , and we introduce vector notation where and .beliefs satisfy the following inequality constraints the normalization constraints as well as the consistency ( between bits and checks ) constraints where stands for the set of with , .the bethe free energy is defined as a difference of the bethe self - energy and the bethe entropy , where , and . the entropy term for a bit enters eq .( [ bethe ] ) with the coefficient to account for the right counting of the number of configurations for a bit : all entries for a bit ( e.g. through the check term ) should give in total .optimal configurations of beliefs are the ones that minimize the bethe free energy ( [ bethe ] ) subject to the constraints ( [ ineq],[norm],[cons ] ) . introducing these constraints into the effective lagrangian through lagrange multiplier terms and looking for the extremum with respect to all possible beliefs leads to , \nonumber\\ & & \!\!\!\!\!\ ! \frac{\deltal}{\delta b_i(\sigma_i ) } = 0 \label{lbi } \\ & & \!\!\rightarrow\quad b_i(\sigma_i)=\exp\left[\frac{1}{q_i-1}\left(\gamma_i+ \sum\limits_{\alpha\ni i}\lambda_{i\alpha}(\sigma_i)\right)-1\right ] .\nonumber\end{aligned}\ ] ] substituting into eq.([lba],[lbi ] ) we arrive at where is used to indicate that we should use the normalization conditions ( [ norm ] ) to guarantee that the beliefs sum up to one .applying the consistency constraint ( [ cons ] ) to eqs .( [ ba ] ) , making summation over all spins but the given , and also using eq .( [ bi ] ) we derive the following bp equations the right hand side of eq .( [ ba0 ] ) rewritten for the ldpc case ( [ factor_ldpc ] ) becomes thus constructing for the ldpc case in two different ways ( correspondent to left and right relations in eq .( [ ba0 ] ) ) , equating the results and introducing the field one arrives at the following bp equations for the fields : iterative solution of this equation corresponding to eq .( [ iter ] ) with is just a standard iterative bp ( which can also be called sum - product ) used for the decoding of an ldpc code . 
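as a concrete companion to the iterative scheme just described, here is a minimal log-domain sum-product sketch for an ldpc code (plain numpy with dense message arrays; the messages are stored per (check, bit) pair rather than in the edge-field notation above, and the clipping constants are ad hoc numerical safeguards, not part of the algorithm).

```python
import numpy as np

def sum_product_decode(H, llr, max_iter=50):
    """Minimal log-domain sum-product sketch for decoding an LDPC code defined
    by the 0/1 parity-check matrix H (m x n), given channel log-likelihoods llr.
    Returns (hard decision, converged flag, number of iterations used)."""
    m, n = H.shape
    mask = (H == 1)
    v2c = np.where(mask, llr, 0.0)             # bit-to-check messages, initialised with llr
    for it in range(1, max_iter + 1):
        # check-to-bit update: 2 atanh of the product of tanh(v2c / 2) over the other bits
        t = np.where(mask, np.tanh(np.clip(v2c, -30, 30) / 2.0), 1.0)
        row_prod = t.prod(axis=1, keepdims=True)
        extr = np.clip(row_prod / np.where(t == 0, 1e-12, t), -0.999999, 0.999999)
        c2v = np.where(mask, 2.0 * np.arctanh(extr), 0.0)
        # posterior log-likelihoods and tentative hard decision
        post = llr + c2v.sum(axis=0)
        x_hat = (post < 0).astype(int)
        if not (H @ x_hat % 2).any():          # all parity checks satisfied: terminate
            return x_hat, True, it
        # bit-to-check update: leave-one-out sum of incoming check messages
        v2c = np.where(mask, post - c2v, 0.0)
    return x_hat, False, max_iter
```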
a simplified min-sum version of eq. ([iter]) is \min_{j\neq i}^{j\in\beta} \big|\eta^{(n)}_{j\beta}\big| + \frac{1}{\delta}\sum\limits_{\beta\ni i}\eta_{i\beta}^{(n)} . to illustrate the standard bp iterative decoding, given by eqs. ([iter],[min-sum]) with , we consider the example of the code at a few values of the snr. the resulting termination probability curves are shown in fig. [tc] for . the simulations show a shift of the probability curve maximum to the right (towards a larger number of iterations) as the damping parameter decreases; however, once the maximum is achieved, the decay of the curve at a finite is faster with the number of iterations than in the standard bp case. the decay rate actually increases as decreases. [figure [tc], panels (a), (b) and (c): termination probability curves.] we conclude that at the largest the performance of the modified iterative bp is strictly better. however, to optimize the modified iterative bp, thus aiming at better performance than given by the standard iterative bp, one needs to account for the trade-off: decreasing leads to a faster decay of the termination probability curve at the largest , but it comes at the price of an increase in the actual number of iterations necessary to achieve the asymptotic decay regime. the last point is illustrated by fig. [er], where the decoding error probability depends non-monotonically on . one can also see that the modification of bp could improve the decoding performance; e.g., at and maximally allowed (after which the decoding unconditionally stops), the decoding error probability is reduced by a factor of about 40 by choosing (see the bottom curve in fig. [er](b)).
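the termination statistics discussed here can be estimated with a simple monte carlo loop. the sketch below assumes an awgn channel with bpsk mapping of the all-zero codeword and a decoder with the signature of the sum-product sketch above; the snr normalization is one common convention and may differ from the one used for the figures.

```python
import numpy as np

def termination_curve(decode, H, snr_db, n_trials=2000, max_iter=200):
    """Monte Carlo sketch of the termination-time distribution: transmit the
    all-zero codeword over an AWGN channel (BPSK, 0 -> +1), decode, and record
    the iteration at which all parity checks are first satisfied."""
    n = H.shape[1]
    sigma = np.sqrt(0.5 * 10 ** (-snr_db / 10.0))     # Es/N0 convention, for illustration
    counts = np.zeros(max_iter + 1)
    for _ in range(n_trials):
        y = 1.0 + sigma * np.random.randn(n)
        llr = 2.0 * y / sigma ** 2
        _, ok, n_used = decode(H, llr, max_iter)
        counts[n_used if ok else max_iter] += 1        # failures lumped into the last bin
    return counts / n_trials
```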
[figure [er]: decoding error probability for (a) and (b); different curves correspond to different maximally allowed , starting from (top curve) and increasing by a factor of with each next lower curve; the points on the right correspond to the standard bp ( ).] we presented a simple extension of the iterative bp which allows one (with proper optimization in the plane) to guarantee not only an asymptotic convergence of bp to a local minimum of the bethe free energy but also a serious gain in decoding performance at finite . in addition to their own utility, these results should also be useful for systematic improvement of the bp approximation. indeed, as it was recently shown in , the solution of the bp equation can be used to express the full partition function (or a-posteriori log-likelihoods calculated within map) in terms of the so-called loop series, where each term is associated with a generalized loop on the factor graph. this loop calculus/series offers a remarkable opportunity for constructing a sequence of efficient approximate and systematically improvable algorithms. thus we anticipate that the improved iterative bp discussed in the present manuscript will become an important building block in this future approximate algorithm construction. we already mentioned in the introduction that our algorithm can be advantageous over other bp-based algorithms converging to a minimum of the bethe free energy mainly due to its simplicity and tunability. in particular, the concave-convex algorithms of , as well as related linear programming decoding algorithms, are formulated in terms of beliefs. by contrast, our modification of the iterative bp can be extensively simplified and stated in terms of a smaller number of fields, each associated with an edge of the factor graph, rather than with the much bigger family of local code-words. thus, in the case of a regular ldpc code with checks of connectivity degree , one finds that the number of variables taken at each step of the iterative procedure is and , respectively, in our iterative scheme and in the concave-convex scheme. having a tunable correlation parameter in the problem is also advantageous as it allows generalizations (e.g. by turning to an individual, bit-dependent relaxation rate). this flexibility is particularly desirable in the degenerate case with multiple minima of the bethe free energy, as it allows a painless implementation of annealing as well as other more sophisticated relaxation techniques speeding up and/or improving convergence. m. chertkov, v. chernyak, _loop calculus in statistical physics and information science_, phys. rev. e 73, 065102(r) (2006); cond-mat/0601487. m. chertkov, v. chernyak, _loop series for discrete statistical models on graphs_, j. stat. mech. (2006) p06009, cond-mat/0603189. j. feldman, m. wainwright, d. r. karger, _using linear programming to decode linear codes_, 2003 conference on information sciences and systems, the johns hopkins university, march 12-14, 2003.
the decoding of low - density parity - check codes by the belief propagation ( bp ) algorithm is revisited . we check the iterative algorithm for its convergence to a codeword ( termination ) , we run monte carlo simulations to find the probability distribution function of the termination time , . tested on an example $ ] code , this termination curve shows a maximum and an extended algebraic tail at the highest values of . aiming to reduce the tail of the termination curve we consider a family of iterative algorithms modifying the standard bp by means of a simple relaxation . the relaxation parameter controls the convergence of the modified bp algorithm to a minimum of the bethe free energy . the improvement is experimentally demonstrated for additive - white - gaussian - noise channel in some range of the signal - to - noise ratios . we also discuss the trade - off between the relaxation parameter of the improved iterative scheme and the number of iterations . low - density parity - check ( ldpc ) codes are the best linear block error - correction codes known today . in addition to being good codes , i.e. capable of decoding without errors in the thermodynamic limit of an infinitely long block length , these codes can also be decoded efficiently . the main idea of belief propagation ( bp ) decoding is in approximating the actual graphical model , formulated for solving statistical inference maximum likelihood ( ml ) or maximum - a - posteriori ( map ) problems , by a tree - like structure without loops . being efficient but suboptimal the bp algorithm fails on certain configurations of the channel noise when close to optimal ( but inefficient ) map decoding would be successful . bp decoding allows a certain duality in interpretation . first of all , and following the so - called bethe - free energy variational approach , bp can be understood as a set of equations for beliefs ( bp - equations ) solving a constrained minimization problem . on the other hand , a more traditional approach is to interpret bp in terms of an iterative procedure so - called bp iterative algorithm . being identical on a tree ( as then bp equations are solved explicitly by iterations from leaves to the tree center ) the two approaches are however distinct for a graphical problem with loops . in case of their convergence , bp algorithms find a minimum of the bethe free energy , however in a general case convergence of the standard iterative bp is not guaranteed . it is also understood that bp fails to converge primarily due to circling of messages in the process of iterations over the loopy graph . to enforce convergence of the iterative algorithm to a minimum of the bethe free energy some number of modifications of the standard iterative bp were discussed in recent years . the tree - based re - parametrization framework of suggests to limit communication on the loopy graph , cutting some edges in a dynamical fashion so that the undesirable effects of circles are suppressed . another , so - called concave - convex procedure , introduced in and generalized in , suggests to decompose the bethe free energy into concave and convex parts thus splitting the iterations into two sequential sub - steps . noticing that convergence of the standard bp fails mainly due to overshooting of iterations , we develop in this paper a tunable relaxation ( damping ) that cures the problem . compared with the aforementioned alternative methods , this approach can be practically more advantageous due to its simplicity and tunability . 
in its simplest form our modification of the bp iterative procedure is given by where latin and greek indexes stand for bits and checks and the bit / check relations , e.g. , express the ldpc code considered ; is the channel noise - dependent value of log - likelihoods ; and is the message associated at the -th iteration with the edge ( of the respective tanner graph ) connecting -th bit and -th check . is a tunable parameter . by choosing a sufficiently small one can guarantee convergence of the iterative procedure to a minimum of the bethe free energy . on the other hand corresponds exactly to the standard iterative bp . in the sequel we derive and explain the modified iterative procedure ( [ iter ] ) in detail . the manuscript is organized as follows . we introduce the bethe free energy , the bp equation and the standard iterative bp in section [ sec : bethe ] . performance of standard iterative bp , analyzed with a termination curve , is discussed in section [ sec : term ] . section [ sec : relax ] describes continuous and sequentially discrete ( iterative ) versions of our relaxation method . we discuss performance of the modified iterative scheme in section [ sec : perf ] , where bit - error - rate and the termination curve for an ldpc code performed over additive - white - gaussian - noise ( awgn ) channel are discussed for a range of interesting values of the signal - to - noise - ratios ( snr ) . we also discuss here the trade - off between convergence and number of iterations aiming to find an optimal strategy for selection of the model s parameters . the last section [ sec : con ] is reserved for conclusions and discussions .
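a generic way to realize the relaxation is to mix the previous edge fields with the fields produced by a plain bp sweep, as in the sketch below. this convex-mixture form is illustrative only; the exact parametrization through the tunable parameter of eq. ([iter]) is related but not necessarily identical.

```python
def relaxed_step(eta_old, eta_bp, gamma):
    """Generic damped update of the edge fields: gamma = 1 reproduces the plain
    iterative BP sweep, while smaller gamma suppresses the overshooting that
    makes plain iterations circle on loopy graphs.  (Illustrative parametrization,
    not necessarily the one used in eq. ([iter]).)"""
    return (1.0 - gamma) * eta_old + gamma * eta_bp
```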
factor analysis is one of the most useful tools for modeling common dependence among multivariate outputs. suppose that we observe data that can be decomposed as where are unobservable common factors; are corresponding factor loadings for variable , and denotes the idiosyncratic component that can not be explained by the static common component. here, and , respectively, denote the dimension and sample size of the data. model ([eq1.1]) has broad applications in the statistics literature. for instance, can be expression profiles or blood oxygenation level dependent (bold) measurements for the microarray, proteomic or fmri image, whereas represents a gene or protein or a voxel. see, for example, . the separation between the common factors and idiosyncratic components is carried out by the low-rank plus sparsity decomposition. see, for example, . the factor model ([eq1.1]) has also been extensively studied in the econometric literature, in which is the vector of economic outputs at time or excess returns for individual assets on day . the unknown factors and loadings are typically estimated by principal component analysis (pca), and the separation between the common factors and idiosyncratic components is characterized via static pervasiveness assumptions. see, for instance, among others. in this paper, we consider a static factor model, which differs from the dynamic factor model [, ()]. the dynamic model allows more general infinite-dimensional representations. for this type of model, the frequency-domain pca [] was applied to the spectral density. the so-called _dynamic pervasiveness_ condition also plays a crucial role in achieving consistent estimation of the spectral density. accurately estimating the loadings and unobserved factors is very important in statistical applications. in calculating the false-discovery proportion for large-scale hypothesis testing, one needs to accurately adjust for the common dependence by subtracting it from the data in ([eq1.1]) []. in financial applications, we would like to understand accurately how each individual stock depends on unobserved common factors in order to appreciate its relative performance and risks. in the aforementioned applications, dimensionality is much higher than the sample size. however, the existing asymptotic analysis shows that the consistent estimation of the parameters in model ([eq1.1]) requires a relatively large . in particular, the individual loadings can be estimated no faster than . but large sample sizes are not always available. even with the availability of ``big data,'' heterogeneity and other issues make direct applications of ([eq1.1]) with large infeasible. for instance, in financial applications, to preserve stationarity in model ([eq1.1]) with time-invariant loading coefficients, a relatively short time series is often used. to make the observed data less serially correlated, monthly returns are frequently used, yet monthly data over three consecutive years contain merely 36 observations. to overcome the aforementioned problems, and when relevant covariates are available, it may be helpful to incorporate them into the model. let be a vector of -dimensional covariates associated with the variables.
in the seminal papers by and , the authors studied the following semi - parametric factor model : where loading coefficients in ( [ eq1.1 ] ) are modeled as for some functions .for instance , in health studies , can be individual characteristics ( e.g. , age , weight , clinical and genetic information ) ; in financial applications can be a vector of firm - specific characteristics ( market capitalization , price - earning ratio , etc . ) . the semiparametric model ( [ eq1.2 ] ) , however , can be restrictive in many cases , as it requires that the loading matrix be fully explained by the covariates .a natural relaxation is the following semiparametric model : where is the component of loading coefficient that can not be explained by the covariates .let .we assume that have mean zero , and are independent of and .in other words , we impose the following factor structure : which reduces to model ( [ eq1.2 ] ) when and model ( [ eq1.1 ] ) when . when genuinely explains a part of loading coefficients , the variability of is smaller than that of .hence , the coefficient can be more accurately estimated by using regression model ( [ eq1.3 ] ) , as long as the functions can be accurately estimated .let be the matrix of , be the matrix of , be the matrix of , be the matrix of and be matrix of . then model ( [ eq1.4 ] )can be written in a more compact matrix form : we treat the loadings and as realizations of random matrices throughout the paper .this model is also closely related to the _ supervised singular value decomposition _ model , recently studied by .the authors showed that the model is useful in studying the gene expression and single - nucleotide polymorphism ( snp ) data , and proposed an em algorithm for parameter estimation .we propose a projected - pca estimator for both the loading functions and factors .our estimator is constructed by first projecting onto the sieve space spanned by , then applying pca to the projected data or fitted values .due to the approximate orthogonality condition of , and , the projection of is approximately , as the smoothing projection suppresses the noise terms and substantially .therefore , applying pca to the projected data allows us to work directly on the sample covariance of , which is under normalization conditions .this substantially improves the estimation accuracy , and also facilitates the theoretical analysis .in contrast , the traditional pca method for factor analysis [ e.g. , , ] is no longer suitable in the current context .moreover , the idea of projected - pca is also potentially applicable to dynamic factor models of , by first projecting the data onto the covariate space .the asymptotic properties of the proposed estimators are carefully studied .we demonstrate that as long as the projection is genuine , the consistency of the proposed estimator for latent factors and loading matrices requires only , and does not need to grow , which is attractive in the typical high - dimension - low - sample - size ( hdlss ) situations [ e.g. , ] .in addition , if both and grow simultaneously , then with sufficiently smooth , using the sieve approximation , the rate of convergence for the estimators is much faster than those of the existing results for model ( [ eq1.1 ] ) .typically , the loading functions can be estimated at a convergence rate , and the factor can be estimated at . 
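a toy data-generating sketch consistent with the semiparametric model above may help fix ideas; all sizes and the particular loading functions below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
p, T, K, d = 200, 30, 3, 2                     # hypothetical: p variables, T periods, K factors, d covariates

X = rng.uniform(-1, 1, size=(p, d))            # observed covariates
G = np.column_stack([np.sin(np.pi * X[:, 0]),  # smooth loading components g_k(X_i)
                     X[:, 1] ** 2 - 1.0 / 3.0,
                     X[:, 0] * X[:, 1]])
Gamma = 0.3 * rng.standard_normal((p, K))      # loading part not explained by the covariates
F = rng.standard_normal((T, K))                # latent factors
U = rng.standard_normal((p, T))                # idiosyncratic errors

Y = (G + Gamma) @ F.T + U                      # observed p x T data matrix
```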
throughout the paper , and assumed to be constant and do not grow .let be a matrix of .model ( [ eq1.3 ] ) implies a decomposition of the loading matrix : where and are orthogonal loading components in the sense that .we conduct two specification tests for the hypotheses : the first problem is about testing whether the observed covariates have explaining power on the loadings .if the null hypothesis is rejected , it gives us the theoretical basis to employ the projected - pca , as the projection is now genuine .our empirical study on the asset returns shows that firm market characteristics do have explanatory power on the factor loadings , which lends further support to our projected - pca method .the second tests whether covariates fully explain the loadings .our aforementioned empirical study also shows that model ( [ eq1.2 ] ) used in the financial econometrics literature is inadequate and more generalized model ( [ eq1.5 ] ) is necessary . as claimed earlier ,even if does not hold , as long as , the projected - pca can still consistently estimate the factors as , and may or may not grow .our simulated experiments confirm that the estimation accuracy is gained more significantly for small s .this shows one of the benefits of using our projected - pca method over the traditional methods in the literature .in addition , as a further illustration of the benefits of using projected data , we apply the projected - pca to consistently estimate the number of factors , which is similar to those in and .different from these authors , our method applies to the projected data , and we demonstrate numerically that this can significantly improve the estimation accuracy .we focus on the case when the observed covariates are time - invariant .when is small , these covariates are approximately locally constant , so this assumption is reasonable in practice . on the other hand, there may exist individual characteristics that are time - variant [ e.g. , see ] .we expect the conclusions in the current paper to still hold if some smoothness assumptions are added for the time varying components of the covariates . due to the space limit, we provide heuristic discussions on this case in the supplementary material of this paper [ ] .in addition , note that in the usual factor model , was assumed to be deterministic . in this paper , however , is mainly treated to be stochastic , and potentially depend on a set of covariates .but we would like to emphasize that the results presented in section [ 1541512515 ] under the framework of more general factor models hold regardless of whether is stochastic or deterministic . finally ,while some financial applications are presented in this paper , the projected - pca is expected to be useful in broad areas of statistical applications [ e.g. , see for applications in gene expression data analysis ] . 
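for reference, an eigenvalue-ratio style rule applied to the projected data, in the spirit of the number-of-factors estimator mentioned above, can be sketched as follows (the precise definition used in section [sec6] may differ).

```python
import numpy as np

def estimate_num_factors(Y, P, k_max=10):
    """Eigenvalue-ratio sketch on the projected data: pick the k maximizing the
    ratio of consecutive eigenvalues of the T x T matrix Y' P Y (requires T > k_max)."""
    w = np.sort(np.linalg.eigvalsh(Y.T @ P @ Y))[::-1]   # eigenvalues, descending
    ratios = w[:k_max] / w[1:k_max + 1]
    return int(np.argmax(ratios)) + 1
```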
throughout this paper , for a matrix ,let and , denote its frobenius , spectral and max- norms .let and denote the minimum and maximum eigenvalues of a square matrix .for a vector , let denote its euclidean norm .the rest of the paper is organized as follows .section [ sec2 ] introduces the new projected - pca method and defines the corresponding estimators for the loadings and factors .sections [ 1541512515 ] and [ s4 ] provide asymptotic analysis of the introduced estimators .section [ sec5 ] introduces new specification tests for the orthogonal decomposition of the semiparametric loadings .section [ sec6 ] concerns about estimating the number of factors .section [ sec7 ] presents numerical results . finally , section [ sec8 ] concludes .all the proofs are given in the and the supplementary material [ ] .in the high - dimensional factor model , let be the matrix of loadings . then the general model ( [ eq1.1 ] ) can be written as suppose we additionally observe a set of covariates .the basic idea of the projected - pca is to smooth the observations for each given day against its associated covariates .more specifically , let be the fitted value after regressing on for each given .this results in a smooth or projected observation matrix , which will also be denoted by .the projected - pca then estimates the factors and loadings by running the pca based on the projected data . here, we heuristically describe the idea of projected - pca ; rigorous analysis will be carried out afterward .let be a space spanned by , which is orthogonal to the error matrix .let denote the projection matrix onto [ whose formal definition will be given in ( [ eq2.5 ] ) below . at the population level, approximates the conditional expectation operator , which satisfies , then and .hence , analyzing the projected data is an approximately noiseless problem , and the sample covariance has the following approximation : we now argue that and can be recovered from the projected data under some suitable normalization condition .the normalization conditions we impose are under this normalization , using ( [ eq2.1a ] ) , .we conclude that the columns of are approximately times the first eigenvectors of the matrix .therefore , the projected - pca naturally defines a factor estimator using the first principal components of .the projected loading matrix can also be recovered from the projected data in two ( equivalent ) ways .given , from , we see .alternatively , consider the projected sample covariance : where is a remaining term depending on .right multiplying and ignoring , we obtain .hence , the ( normalized ) columns of approximate the first eigenvectors of , the sample covariance matrix based on the projected data .therefore , we can either estimate by given , or by the leading eigenvectors of .in fact , we shall see later that these two estimators are equivalent .if in addition , , that is , the loading matrix belongs to the space , then can also be recovered from the projected data .the above arguments are the fundament of the projected - pca , and provide the rationale of our estimators to be defined in section [ sec2.3 ] .we shall make the above arguments rigorous by showing that the projected error is asymptotically negligible and , therefore , the idiosyncratic error term can be completely removed by the projection step . 
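the claim that the projection suppresses the noise can be checked numerically: because the projection has rank equal to the (small) number of sieve terms while the noise spreads over all p directions, the projected noise is an order of magnitude smaller than the noise itself, as in this toy example (sizes are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
p, T, J = 500, 20, 10                          # J = number of sieve terms, J << p
Phi = rng.standard_normal((p, J))              # stand-in for the sieve basis Phi(X)
P = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)  # projection onto the column space of Phi
U = rng.standard_normal((p, T))                # noise independent of the covariates
print(np.linalg.norm(P @ U) / np.linalg.norm(U))   # roughly sqrt(J / p)
```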
as one of the useful examples of forming the space and the projection operator, this paper considers model ( [ eq1.4 ] ) , where s and s are the only observable data , and are unknown nonparametric functions .the specific case ( [ eq1.2 ] ) ( with ) was used extensively in the financial studies by , and , with s being the observed `` market characteristic variables . ''we assume to be known for now . in section [ sec6 ] , we will propose a projected - eigenvalue - ratio method to consistently estimate when it is unknown .we assume that does not depend on , which means the loadings represent the cross - sectional heterogeneity only .such a model specification is reasonable since in many applications using factor models , to pertain the stationarity of the time series , the analysis can be conducted within each fixed time window with either a fixed or slowly - growing . through localization in time , it is not stringent to require the loadings be time - invariant .this also shows one of the attractive features of our asymptotic results : under mild conditions , our factor estimates are consistent even if is finite . to nonparametrically estimate without the curse of dimensionality when is multivariate , we assume to be additive : for each , there are nonparametric functions such that each additive component of is estimated by the sieve method .define to be a set of basis functions ( e.g. , b - spline , fourier series , wavelets , polynomial series ) , which spans a dense linear space of the functional space for .then for each , here , are the sieve coefficients of the additive component of , corresponding to the factor loading ; is a `` remaining function '' representing the approximation error ; denotes the number of sieve terms which grows slowly as .the basic assumption for sieve approximation is that as .we take the same basis functions in ( [ eq2.4 ] ) purely for simplicity of notation .define , for each and for each , then we can write let be a matrix of sieve coefficients , be a matrix of basis functions , and be matrix with the element . then the matrix form of ( [ eq2.3 ] ) and ( [ eq2.4 ] ) is substituting this into ( [ eq1.5 ] ) , we write we see that the residual term consists of two parts : the sieve approximation error and the idiosyncratic .furthermore , the random effect assumption on the coefficients makes it also behave like noise , and hence negligible when the projection operator is applied . based on the idea described in section [ sec2.1 ] ,we propose a projected - pca method , where is the sieve space spanned by the basis functions of , and is chosen as the projection matrix onto , defined by the projection matrix the estimators of the model parameters in ( [ eq1.5 ] ) are defined as follows .the columns of are defined as the eigenvectors corresponding to the first largest eigenvalues of the matrix , and is the estimator of .the intuition can be readily seen from the discussions in section [ sec2.1 ] , which also provides an alternative formulation of as follows : let be a diagonal matrix consisting of the largest eigenvalues of the matrix .let be a matrix whose columns are the corresponding eigenvectors . according to the relation described in section [ sec2.1 ], we can also estimate or by we shall show in lemma [ la.1add ] that this is equivalent to ( [ eq2.6 ] ) .therefore , unlike the traditional pca method for usual factor models [ e.g. 
, , ] , the projected - pca takes the principal components of the projected data .the estimator is thus invariant to the rotation - transformations of the sieve bases .the estimation of the loading component that can not be explained by the covariates can be estimated as follows . with the estimated factors , the least - squares estimator of loading matrix is , by using ( [ eq2.1 ] ) and ( [ eq2.2 ] ) .therefore , by ( [ eq1.5 ] ) , a natural estimator of is consider a panel data model with time - varying coefficients as follows : where is a -dimensional vector of time - invariant regressors for individual ; denotes the unobservable random time effect ; is the regression error term .the regression coefficient is also assumed to be random and time - varying , but is common across the cross - sectional individuals .the semiparametric factor model admits ( [ eq2.8 ] ) as a special case . note that ( [ eq2.8 ] ) can be rewritten as with unobservable `` factors '' and `` loading '' .the model ( [ eq1.4 ] ) being considered , on the other hand , allows more general nonparametric loading functions .let us first consider the asymptotic performance of the projected - pca in the conventional factor model : in the usual statistical applications for factor analysis , the latent factors are assumed to be serially independent , while in financial applications , the factors are often treated to be weakly dependent time series satisfying strong mixing conditions .we now demonstrate by a simple example that latent factors can be estimated at a faster rate of convergence by projected - pca than the conventional pca and that they can be consistently estimated even when sample size is finite .[ ex3.1 ] to appreciate the intuition , let us consider a specific case in which so that model ( [ eq1.4 ] ) reduces to assume that is so smooth that it is in fact a constant ( otherwise , we can use a local constant approximation ) , where .then the model reduces to the projection in this case is averaging over , which yields where , and denote the averages of their corresponding quantities over . for the identification purpose , suppose , and . ignoring the last two terms, we obtain estimators these estimators are special cases of the projected - pca estimators . to see this , define , and let be a -dimensional column vector of ones .take a naive basis ; then the projected data matrix is in fact .consider the matrix , whose largest eigenvalue is .from we have the first eigenvector of equals .hence , the projected - pca estimator of factors is .in addition , the projected - pca estimator of the loading vector is hence , the projected - pca - estimator of equals .these estimators match with ( [ e3.2 ] ) .moreover , since the ignored two terms and are of order , and converge whether or not is large .note that this simple example satisfies all the assumptions to be stated below , and and achieve the same rate of convergence as that of theorem [ th4.1 ] .we shall present more details about this example in appendix g in the supplementary material [ ] .we now state the conditions and results formally in the more general factor model ( [ eq3.1 ] ) . 
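putting the pieces of this section together, a compact numerical sketch of the projected-pca estimators reads as follows (assuming the convention that the data matrix is p x T; the polynomial sieve below is only one admissible basis choice, and the normalizations follow the ones stated above).

```python
import numpy as np

def projected_pca(Y, X, K, degree=3):
    """Sketch of the projected-PCA estimators: returns (F_hat, G_hat, Lambda_hat, Gamma_hat)."""
    p, T = Y.shape
    # additive sieve basis Phi(X): an intercept plus powers of each covariate
    Phi = np.hstack([np.ones((p, 1))] +
                    [X[:, [j]] ** k for j in range(X.shape[1]) for k in range(1, degree + 1)])
    # projection onto the sieve space spanned by Phi(X)
    P = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)
    # factors: sqrt(T) times the top-K eigenvectors of the T x T matrix Y' P Y
    w, v = np.linalg.eigh(Y.T @ P @ Y)
    F_hat = np.sqrt(T) * v[:, np.argsort(w)[::-1][:K]]
    G_hat = P @ Y @ F_hat / T            # projected loading component, i.e. g evaluated at the X_i
    Lambda_hat = Y @ F_hat / T           # least-squares loadings given the estimated factors
    Gamma_hat = Lambda_hat - G_hat       # loading part not explained by the covariates
    return F_hat, G_hat, Lambda_hat, Gamma_hat
```

on data generated as in the earlier simulation sketch, the estimated factors recover the factor space well even when the number of periods is small relative to the number of variables, which is the finite-sample behaviour described in example [ex3.1] and in the theory below.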
recall that the projection matrix is defined as the following assumption is the key condition of the projected - pca .[ ass3.1 ] there are positive constants and such that , with probability approaching one ( as ) , since the dimensions of and are , respectively , and , assumption [ ass3.1 ] requires , which is reasonable since we assume , the number of factors , to be fixed throughout the paper .assumption [ ass3.1 ] is similar to the _ pervasive _ condition on the factor loadings [ ] . in our context, this condition requires the covariates have nonvanishing explaining power on the loading matrix , so that the projection matrix has spiked eigenvalues .note that it rules out the case when is completely unassociated with the loading matrix ( e.g. , when is pure noise ) .one of the typical examples that satisfies this assumption is the semiparametric factor model [ model ( [ eq1.4 ] ) ]. we shall study this specific type of factor model in section [ s4 ] , and prove assumption [ ass3.1 ] in the supplementary material [ ] .note that and are not separately identified , because for any nonsingular , .therefore , we assume the following. [ ass3.2 ] almost surely , and is a diagonal matrix with distinct entries .this condition corresponds to the pc1 condition of , which separately identifies the factors and loadings from their product .it is often used in factor analysis for identification , and means that the columns of factors and loadings can be orthogonalized [ also see ] .[ ass3.3 ] ( i ) there are and so that with probability approaching one ( as ) , \(ii ) . note that and is a vector of dimensionality .thus , condition ( i ) can follow from the strong law of large numbers .for instance , are weakly correlated and in the population level is well - conditioned . in addition , this condition can be satisfied through proper normalizations of commonly used basis functions such as b - splines , wavelets , fourier basis , etc . in the general setup of this paper , we allow s to be cross - sectionally dependent and nonstationary .regularity conditions about weak dependence and stationarity are imposed only on as follows .we impose the strong mixing condition .let and denote the -algebras generated by and , respectively .define the mixing coefficient [ ass3.4 ] ( i ) is strictly stationary .in addition , for all ; is independent of .strong mixing : there exist such that for all , weak dependence : there is so that exponential tail : there exist satisfying and , such that for any , and , assumption [ ass3.4 ] is standard , especially condition ( iii ) is commonly imposed for high - dimensional factor analysis [ e.g. , ] , which requires be weakly dependent both serially and cross - sectionally .it is often satisfied when the covariance matrix is sufficiently sparse under the strong mixing condition .we provide primitive conditions of condition ( iii ) in the supplementary material [ ] .formally , we have the following theorem : [ th3.1 ] consider the conventional factor model ( [ eq3.1 ] ) with assumptions [ ass3.1][ass3.4 ] .the projected - pca estimators and defined in section [ sec2.3 ] satisfy , as [ may either grow simultaneously with satisfying or stay constant with , to compare with the traditional pca method , the convergence rate for the estimated factors is improved for small . 
in particular , the projected - pca does not require , and also has a good rate of convergence for the loading matrix up to a projection transformation .hence , we have achieved a finite- consistency , which is particularly interesting in the `` high - dimensional - low - sample - size '' ( hdlss ) context , considered by .in contrast , the traditional pca method achieves a rate of convergence of for estimating factors , and for estimating loadings .see remarks [ re4.1 ] , [ re4.2 ] below for additional details .let be the covariance matrix of .convergence ( [ eq3.4add ] ) in theorem [ th3.1 ] also describes the relationship between the leading eigenvectors of and those of . to see this ,let be the eigenvectors of corresponding to the first eigenvalues . under the _ pervasiveness condition _, can be approximated by multiplied by a positive definite matrix of transformation [ ] . in the context of projected - pca , by definition , ; here we recall that is a diagonal matrix consisting of the largest eigenvalues of , and is a matrix whose columns are the corresponding eigenvectors . then ( [ eq3.4add ] ) immediately implies the following corollary , which complements the pca consistency in _ spiked covariance models _ [ e.g. , and ] .[ th3.2 ] under the conditions of theorem [ th3.1 ] , there is a positive definite matrix , whose eigenvalues are bounded away from both zero and infinity , so that as [ may either grow simultaneously with satisfying or stay constant with , the semiparametric factor model , it is assumed that , where is a nonparametric smooth function for the observed covariates , and is the unobserved random loading component that is independent of .hence , the model is written as in the matrix form , and does not vanish ( pervasive condition ; see assumption [ ass4.2 ] below ) . the estimators and are the projected - pca estimators as defined in section [ sec2.3 ] .we now define the estimator of the nonparametric function , . in the matrix form , the projected data has the following sieve approximated representation : where is `` small '' because and are orthogonal to the function space spanned by , and is the sieve approximation error .the sieve coefficient matrix can be estimated by least squares from the projected model ( [ eq4.1 ] ) : ignore , replace with , and solve ( [ eq4.1 ] ) to obtain ^{-1}\phi ( \bx ) ' \by{\widehat\bf}.\ ] ] we then estimate by where denotes the support of . when , can be understood as the projection of onto the sieve space spanned by .hence , the following assumption is a specific version of assumptions [ ass3.1 ] and [ ass3.2 ] in the current context .[ ass4.1 ] ( i ) almost surely , and is a diagonal matrix with distinct entries .\(ii ) there are two positive constants and so that with probability approaching one ( as ) , in this section , we do not need to assume to be i.i.d . for the estimation purpose. cross - sectional weak dependence as in assumption [ ass4.2](ii ) below would be sufficient .the i.i.d .assumption will be only needed when we consider specification tests in section [ sec5 ] .write , and [ ass4.2 ] ( i ) and is independent of .\(ii ) , and the following set of conditions is concerned about the accuracy of the sieve approximation .[ ass4.3 ] , \(i ) the loading component belongs to a hlder class defined by for some ; \(ii ) the sieve coefficients satisfy for , as , where is the support of the element of , and is the sieve dimension .\(iii ) . 
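for concreteness , the estimation of the nonparametric loading functions described above can be sketched as follows . the basis below is a simple additive polynomial expansion of each covariate ; a b - spline or fourier basis could be substituted without changing the rest . the function names are ours .

```python
import numpy as np

def additive_sieve_basis(X, J):
    """Additive sieve basis Phi(X): an intercept plus J polynomial terms per
    covariate. X is a (p, d) array of observed covariates (or (p,) if d = 1)."""
    if X.ndim == 1:
        X = X[:, None]
    cols = [np.ones(X.shape[0])]
    for l in range(X.shape[1]):
        for j in range(1, J + 1):
            cols.append(X[:, l] ** j)
    return np.column_stack(cols)

def estimate_loading_functions(Y, X, K, J):
    """Projected-PCA followed by sieve least squares for the loading functions."""
    T = Y.shape[1]
    Phi = additive_sieve_basis(X, J)
    F_hat, Lambda_hat, G_hat = projected_pca(Y, Phi, K)     # sketch given earlier
    # sieve coefficients: B_hat = (Phi'Phi)^{-1} Phi' Y F_hat / T
    B_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y @ F_hat) / T
    # the estimated loading function evaluated at new covariate values
    g_hat = lambda x_new: additive_sieve_basis(np.asarray(x_new, dtype=float), J) @ B_hat
    return g_hat, F_hat, G_hat
```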
condition ( ii ) is satisfied by common basis .for example , when is polynomial basis or b - splines , condition ( ii ) is implied by condition ( i ) [ see , e.g. , and ] .[ th4.1 ] suppose . under assumptions[ ass3.3 ] , [ ass3.4 ] , [ ass4.1][ass4.3 ] , as , can be either divergent or bounded , we have that in addition , if simultaneously with and , then the optimal simultaneously minimizes the convergence rates of the factors and nonparametric loading function . it also satisfies the constraint as . with , we have and satisfies some remarks about these rates of convergence compared with those of the conventional factor analysis are in order .[ re4.1]the rates of convergence for factors and nonparametric functions do not require .when , the rates still converge fast when is large , demonstrating the blessing of dimensionality .this is an attractive feature of the projected - pca in the hdlss context , as in many applications , the stationarity of a time series and the time - invariance assumption on the loadings hold only for a short period of time . in contrast , in the usual factor analysis , consistency is granted only when . for example , according to ( lemma c.1 ) , the regular pca method has the following convergence rate : which is inconsistent when is bounded .[ re4.2]when both and are large , the projected - pca estimates factors as well as the regular pca does , and achieves a faster rate of convergence for the estimated loadings when vanishes . in this case , , the loading matrix is estimated by , and in contrast , the regular pca method as in yields comparing these rates , we see that when s are sufficiently smooth ( larger ) , the rate of convergence for the estimated loadings is also improved .the loading matrix always has the following orthogonal decomposition : where is interpreted as the loading component that can not be explained by .we consider two types of specification tests : testing , and .the former tests whether the observed covariates have explaining powers on the loadings , while the latter tests whether the covariates fully explain the loadings . the former provides a diagnostic tool as to whether or not to employ the projected - pca ; the latter tests the adequacy of the semiparametric factor models in the literature .testing whether the observed covariates have explaining powers on the factor loadings can be formulated as the following null hypothesis : due to the approximate orthogonality of and , we have .hence , the null hypothesis is approximately equivalent to this motivates a statistic for a consistent loading estimator .normalizing the test statistic by its asymptotic variance leads to the test statistic where the matrix is the weight matrix .the null hypothesis is rejected when is large .the projected - pca estimator is inappropriate under the null hypothesis as the projection is not genuine .we therefore use the least squares estimator , leading to the test statistic here , we take as the traditional pca estimator : the columns of are the first eigenvectors of the data matrix .connor , hagmann and linton ( ) applied the semiparametric factor model to analyzing financial returns , who assumed that , that is , the loading matrix can be fully explained by the observed covariates .it is therefore natural to test the following null hypothesis of specification : recall that so that . 
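both null hypotheses can be read directly off the orthogonal decomposition above : testing whether the covariates have any explaining power asks whether the projected component of the loadings is negligible , and testing whether they fully explain the loadings asks whether the residual component is negligible . the sketch below computes the two components and a crude explained - share diagnostic ; it is a descriptive quantity only , not the formal test statistics whose exact normalizations are given next .

```python
import numpy as np

def loading_decomposition(Lambda_hat, Phi):
    """Split estimated loadings into the covariate-explained part G_hat = P Lambda_hat
    and the residual part Gamma_hat = (I - P) Lambda_hat."""
    P = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)
    G_hat = P @ Lambda_hat
    Gamma_hat = Lambda_hat - G_hat
    # share of the loadings' squared Frobenius norm explained by the covariates:
    # values near 0 are consistent with G(X) = 0, values near 1 with Gamma = 0
    explained_share = (np.linalg.norm(G_hat, 'fro') /
                       np.linalg.norm(Lambda_hat, 'fro')) ** 2
    return G_hat, Gamma_hat, explained_share
```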
therefore , essentially the specification testing problem is equivalent to testing that is , we are testing whether the loading matrix in the factor model belongs to the space spanned by the observed covariates .a natural test statistic is thus based on the weighted quadratic form for some positive definite weight matrix , where is the projected - pca estimator for factors and . to control the size of the test, we take , where is a diagonal covariance matrix of under , assuming that are uncorrelated .we replace with its consistent estimator : let .define then the operational test statistic is defined to be the null hypothesis is rejected for large values of . for the testing purpose , we assume to be i.i.d . , and let simultaneously .the following assumption regulates the relation between and .[ ass5.1 ] suppose ( i ) are independent and identically distributed ; , and ; and satisfy : , and . condition ( ii ) requires a balance of the dimensionality and the sample size . on one hand ,a relatively large sample size is desired [ so that the effect of estimating is negligible asymptotically . on the other hand , as is common in high - dimensional factor analysis , a lower bound of the dimensionality is also required [ condition ] to ensure that the factors are estimated accurately enough . such a required balance is common for high - dimensional factor analysis [ e.g. , , ] and in the recent literature for pca [ e.g. , , ] .the i.i.d .assumption of covariates in condition ( i ) can be relaxed with further distributional assumptions on ( e.g. , assuming to be gaussian ) .the conditions on in condition ( iii ) is consistent with those of the previous sections .we focus on the case when is gaussian , and show that under , and under whose conditional distributions ( given ) under the null are with degree of freedom , respectively , and .we can derive their standardized limiting distribution as .this is given in the following result .[ th5.1 ] suppose assumptions [ ass3.3 ] , [ ass3.4 ] , [ ass4.2 ] , [ ass5.1 ] hold .then under , where and .in addition , suppose assumptions [ ass4.1 ] and [ ass4.3 ] further hold , is i.i.d . with a diagonal covariance matrix whose elements are bounded away from zero and infinity .then under , in practice , when a relatively small sieve dimension is used , one can instead use the upper -quantile of the distribution for .we require be independent across , which ensures that the covariance matrix of the leading term to have a simple form .this assumption can be relaxed to allow for weakly dependent , but many autocovariance terms will be involved in the covariance matrix .one may regularize standard autocovariance matrix estimators such as and to account for the high dimensionality .moreover , we assume be diagonal to facilitate estimating , which can also be weakened to allow for a nondiagonal but sparse .regularization methods such as thresholding [ ] can then be employed , though they are expected to be more technically involved .we now address the problem of estimating when it is unknown .once a consistent estimator of is obtained , all the results achieved carry over to the unknown case using a conditioning argument . , then argue that the results still hold unconditionally as . 
] in principle , many consistent estimators of can be employed , for example , , , , .more recently , and proposed to select the largest ratio of the adjacent eigenvalues of , based on the fact that the largest eigenvalues of the sample covariance matrix grow as fast as increases , while the remaining eigenvalues either remain bounded or grow slowly .we extend ahn and horenstein s ( ) theory in two ways .first , when the loadings depend on the observable characteristics , it is more desirable to work on the projected data . due to the orthogonality condition of and , the projected data matrix is approximately equal to .the projected matrix thus allows us to study the eigenvalues of the principal matrix component , which directly connects with the strengths of those factors .since the nonvanishing eigenvalues of and are the same , we can work directly with the eigenvalues of the matrix .second , we allow . let denote the largest eigenvalue of the projected data matrix .we assume , which naturally holds if the sieve dimension slowly grows .the estimator is defined as the following assumption is similar to that of .recall that is a matrix of the idiosyncratic components , and denotes the covariance matrix of .[ ass6.1 ] the error matrix can be decomposed as where : the eigenvalues of are bounded away from zero and infinity , is a by positive semidefinite nonstochastic matrix , whose eigenvalues are bounded away from zero and infinity , is a stochastic matrix , where is independent in both and , and are i.i.d .isotropic sub - gaussian vectors , that is , there is , for all , there are , almost surely , this assumption allows the matrix to be both cross - sectionally and serially dependent . the matrix captures the serial dependence across . in the special case of no - serial - dependence ,the decomposition ( [ eq5.1 ] ) is satisfied by taking .in addition , we require to be sub - gaussian to apply random matrix theories of .for instance , when is , for any , , and thus condition ( iii ) is satisfied .finally , the _ almost surely _ condition of ( iv ) seems somewhat strong , but is still satisfied by bounded basis functions ( e.g. , fourier basis ) .we show in the supplementary material [ ] that when is diagonal ( is cross - sectionally independent ) , both the sub - gaussian assumption and condition ( iv ) can be relaxed .the following theorem is the main result of this section .[ th6.1 ] under assumptions of theorem [ th4.1 ] and assumption [ ass6.1 ] , as , if satisfies and ( may either grow or stay constant ) , we have section presents numerical results to demonstrate the performance of projected - pca method for estimating loadings and factors using both real data and simulated data .we collected stocks in s&p 500 index constituents from crsp which have complete daily closing prices from year 2005 through 2013 , and their corresponding market capitalization and book value from compustat .there are stocks in our data set , whose daily excess returns were calculated .we considered four characteristics as in for each stock : size , value , momentum and volatility , which were calculated using the data before a certain data analyzing window so that characteristics are treated known .see for detailed descriptions of these characteristics .all four characteristics are standardized to have mean zero and unit variance .note that the construction makes their values independent of the current data .we fix the time window to be the first quarter of the year 2006 , which contains observations . 
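a minimal implementation of the projected eigenvalue - ratio estimator of the number of factors described above is given here for reference ; K_max is a user - chosen upper bound and should stay below the sieve dimension so that the denominators remain away from zero .

```python
import numpy as np

def estimate_num_factors(Y, Phi, K_max):
    """Eigenvalue-ratio estimator applied to the projected data matrix P Y."""
    P = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)
    Y_proj = P @ Y
    # nonzero eigenvalues of (PY)(PY)' and (PY)'(PY) coincide; use the smaller matrix
    M = Y_proj.T @ Y_proj if Y.shape[1] <= Y.shape[0] else Y_proj @ Y_proj.T
    lam = np.sort(np.linalg.eigvalsh(M))[::-1]       # eigenvalues in decreasing order
    ratios = lam[:K_max] / lam[1:K_max + 1]
    return int(np.argmax(ratios)) + 1                # position of the largest adjacent ratio
```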
given the excess returns and characteristics as the input data and setting , we fit loading functions for using the projected - pca method .the four additive components are fitted using the cubic spline in the r package `` gam '' with sieve dimension .all the four loading functions for each factor are plotted in figure [ fig : gcurves ] .the contribution of each characteristic to each factor is quite nonlinear ., from financial returns of 337 stocks in s&p 500 index .they are taken as the true functions in the simulation studies . in each panel( fixed ) , the true and estimated curves for are plotted and compared . the solid , dashed and dotted red curves are the true curves corresponding to the first , second and third factors , respectively .the blue curves are their estimates from one simulation of the calibrated model with , . ]we now treat the estimated functions as the true loading functions , and calibrate a model for simulations .the `` true model '' is calibrated as follows : take the estimated from the real data as the true loading functions .for each , generate from where is diagonal and sparse .generate the diagonal elements of from gamma( ) with , ( calibrated from the real data ) , and generate the off - diagonal elements of from with , . then truncate by a threshold of correlation to produce a sparse matrix and make it positive definite by r package `` nearpd . ''generate from the i.i.d .gaussian distribution with mean and standard deviation , calibrated with real data .generate from a stationary var model where .the model parameters are calibrated with the market data and listed in table [ table : calibfactor ] .finally , generate . here is a correlation matrix estimated from the real data ..4d2.4c@ & + 0.9076 & 0.0049 & 0.0230 & -0.0371 & -0.1226 & + 0.0049 & 0.8737 & 0.0403 & -0.2339 & 0.1060 & + 0.0230 & 0.0403 & 0.9266 & 0.2803 & 0.0755 & + by projected - pca ( p - pca , red solid ) and traditional pca ( dashed blue ) and , by p - pca over 500 repetitions . left panel : , right panel : . ] and over 500 repetitions , by projected - pca ( p - pca , solid red ) and traditional pca ( dashed blue ) . ]we simulate the data from the calibrated model , and estimate the loadings and factors for and with varying from through .the `` true '' and estimated loading curves are plotted in figure [ fig : gcurves ] to demonstrate the performance of projected - pca .note that the `` true '' loading curves in the simulation are taken from the estimates calibrated using the real data .the estimates based on simulated data capture the shape of the true curve , though we also notice slight biases at boundaries .but in general , projected - pca fits the model well .we also compare our method with the traditional pca method [ e.g. , ] .the mean values of , , and are plotted in figures [ fig : calibg ] and [ fig : calibf ] where [ see section [ design2 ] for definitions of and .the breakdown error for and are also depicted in figure [ fig : calibg ] . in comparison ,projected - pca outperforms pca in estimating both factors and loadings including the nonparametric curves and random noise .the estimation errors for of projected - pca decrease as the dimension increases , which is consistent with our asymptotic theory . and over 500 repetitions .p - pca , pca and sls , respectively , represent projected - pca , regular pca and sieve least squares with known factors : design 2 . here , , so .upper two panels : grows with fixed ; bottom panels : grows with fixed . 
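a generic data - generating sketch of the same form as the calibrated design ( smooth loading functions plus a residual loading component , factors from a stationary var , idiosyncratic noise ) is given below . all numerical values and functional forms here are placeholders of ours , not the calibrated quantities reported above .

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_semiparametric_factors(p=300, T=150, K=3, d=4):
    X = rng.standard_normal((p, d))                      # observed characteristics
    # smooth additive loading functions (placeholders for the fitted curves)
    G = np.column_stack([np.tanh(X[:, 0]) + 0.5 * X[:, 1] ** 2,
                         np.sin(X[:, 2]),
                         0.3 * X[:, 3] ** 3])[:, :K]
    # residual loading component; the calibrated design uses a sparse,
    # positive-definite covariance here, which we simplify to i.i.d. draws
    Gamma = 0.5 * rng.standard_normal((p, K))
    # factors from a stationary VAR(1): f_t = A f_{t-1} + eps_t
    A = 0.5 * np.eye(K)
    F = np.zeros((T, K))
    for t in range(1, T):
        F[t] = A @ F[t - 1] + rng.standard_normal(K)
    U = rng.standard_normal((p, T))                      # idiosyncratic errors
    Y = (G + Gamma) @ F.T + U
    return Y, X, F, G, Gamma
```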
] and by projected - pca ( solid red ) and pca ( dashed blue ) : design 2 .upper two panels : grows with fixed ; bottom panels : grows with fixed . ] consider a different design with only one observed covariate and three factors .the three characteristic functions are with the characteristic being standard normal .generate from the stationary var(1 ) model , that is , where .we consider .we simulate the data for or and various ranging from to . to ensure that the true factor and loading satisfy the identifiability conditions , we calculate a transformation matrix such that , is diagonal .let the final true factors and loadings be , .for each , we run the simulation for times .we estimate the loadings and factors using both projected - pca and pc . for projected - pca , as in our theorem , we choose , with and . to estimate the loading matrix, we also compare with a third method : sieve - least - squares ( sls ) , assuming the factors are observable . in this case , the loading matrix is estimated by , where is the true factor matrix of simulated data . the estimation error measured in max and standardized frobenius norms for both loadings and factors are reported in figures [ fig : simpleg ] and [ fig : simplef ] .the plots demonstrate the good performance of projected - pca in estimating both loadings and factors .in particular , it works well when we encounter small but a large . in this design , , so the accuracy of estimating is significantly improved by using the projected - pca .figure [ fig : simplef ] shows that the factors are also better estimated by projected - pca than the traditional one , particularly when is small .it is also clearly seen that when is fixed , the improvement on estimating factors is not significant as grows .this matches with our convergence results for the factor estimator .it is also interesting to compare projected - pca with sls ( sieve least - squares with observed factors ) in estimating the loadings , which corresponds to the cases of unobserved and observed factors .as we see from figure [ fig : simpleg ] , when is small , the projected - pca is not as good as sls .but the two methods behave similarly as increases .this further confirms the theory and intuition that as the dimension becomes larger , the effects of estimating the unknown factors are negligible .we now demonstrate the effectiveness of estimating by the projected - pc s eigenvalue - ratio method .the data are simulated in the same way as in design 2 . or and we took the values of ranging from to .we compare our projected - pca based on the projected data matrix to the eigenvalue - ratio test ( ah ) of and , which works on the original data matrix . .p - pca and ah , respectively , represent the methods of projected - pca and .left panel : mean ; right panel : standard deviation . ] for each pair of , we repeat the simulation for times and report the mean and standard deviation of the estimated number of factors in figure [ fig : estimatek ] . the projected - pca outperforms ah after projection , which significantly reduces the impact of idiosyncratic errors .when , we can recover the number of factors almost all the time , especially for large dimensions ( ) . 
on the other hand , even when , projected - pca still obtains a closer estimated number of factors .we test the loading specifications on the real data .we used the same data set as in section [ sec7.1 ] , consisting of excess returns from 2005 through 2013 .the tests were conducted based on rolling windows , with the length of windows spanning from 10 days , a month , a quarter and half a year . for each fixed window - length ( ) , we computed the standardized test statistic of and , and plotted them along the rolling windows respectively in figure [ fig : testing ] . in almost all cases ,the number of factors is estimated to be one in various combinations of .figure [ fig : testing ] suggests that the semiparametric factor model is strongly supported by the data .judging from the upper panel [ testing , we have very strong evidence of the existence of nonvanishing covariate effect , which demonstrates the dependence of the market beta s on the covariates .in other words , the market beta s can be explained at least partially by the characteristics of assets .the results also provide the theoretical basis for using projected - pca to get more accurate estimation . from 2006/01/03 to 2012/11/30 .the dotted lines are . ] in the bottom panel of figure [ fig : testing ] ( testing ) , we see for a majority of periods , the null hypothesis is rejected . in other words ,the characteristics of assets can not fully explain the market beta as intuitively expected , and model ( [ eq1.2 ] ) in the literature is inadequate. however , fully nonparametric loadings could be possible in certain time range mostly before financial crisis . during 20082010 ,the market s behavior had much more complexities , which causes more rejections of the null hypothesis .the null hypothesis is accepted more often since 2012 .we also notice that larger tends to yield larger statistics in both tests , as the evidence against the null hypothesis is stronger with larger .after all , the semiparametric model being considered provides flexible ways of modeling equity markets and understanding the nonparametric loading curves .this paper proposes and studies a high - dimensional factor model with nonparametric loading functions that depend on a few observed covariate variables .this model is motivated by the fact that observed variables can explain partially the factor loadings .we propose a projected - pca to estimate the unknown factors , loadings , and number of factors . after projecting the response variable onto the sieve space spanned by the covariates , the projected - pca yields a significant improvement on the rates of convergence than the regular methods .in particular , consistency can be achieved without a diverging sample size , as long as the dimensionality grows .this demonstrates that the proposed method is useful in the typical hdlss situations .in addition , we propose new specification tests for the orthogonal decomposition of the loadings , which fill the gap of the testing literature for semiparametric factor models .our empirical findings show that firm characteristics can explain partially the factor loadings , which provide theoretical basis for employing projected - pca method . on the other hand, our empirical study also shows that the firm characteristics can not fully explain the factor loadings so that the proposed generalized factor model is more appropriate .throughout the proofs , and may either grow simultaneously with or stay constant . 
for two matrices with fixed dimensions , and a sequence , by writing , we mean . in the regular factor model ,let denote a diagonal matrix of the first eigenvalues of . then by definition , .let .then where still by the equality ( [ ea.1add ] ) , .hence , this step is achieved by bounding for .note that in this step , we shall not apply a simple inequality , which is too crude .instead , with the help of the result achieved in step 1 , sharper upper bounds for can be achieved .we do so in lemma b.2 in the supplementary material [ ] .consider the singular value decomposition : , where is a orthogonal matrix , whose columns are the eigenvectors of ; is a matrix whose columns are the eigenvectors of ; is a rectangular diagonal matrix , with diagonal entries as the square roots of the nonzero eigenvalues of .in addition , by definition , is a diagonal matrix consisting of the largest eigenvalues of ; is a matrix whose columns are the corresponding eigenvectors .the columns of are the eigenvectors of , corresponding to the first eigenvalues . by assumption [ ass3.3 ] , , hence , by lemma b.1 in the supplementary material [ ] , .similarly , using the inequality that for the eigenvalue , , we have , for .hence , it suffices to prove that the first eigenvalues of are bounded away from both zero and infinity , which are also the first eigenvalues of .this holds under the theorem s assumption ( assumption [ ass3.1 ] ) .thus , , which also implies .fan , j. , liao , y. and mincheva , m. ( 2013 ) .large covariance estimation by thresholding principal orthogonal complements ( with discussion ) . _ journal of the royal statistical society , series b _ * 75 * 603680 .
this paper introduces a projected principal component analysis ( projected - pca ) , which applies principal component analysis to the data matrix projected ( smoothed ) onto a given linear space spanned by covariates . when applied to high - dimensional factor analysis , the projection removes noise components . we show that the unobserved latent factors can be estimated more accurately than by the conventional pca if the projection is genuine , or more precisely , when the factor loading matrices are related to the projected linear space . when the dimensionality is large , the factors can be estimated accurately even when the sample size is finite . we propose a flexible semiparametric factor model , which decomposes the factor loading matrix into the component that can be explained by subject - specific covariates and the orthogonal residual component . the covariate effects on the factor loadings are further modeled by the additive model via sieve approximations . by using the newly proposed projected - pca , the rates of convergence of the smooth factor loading matrices are obtained , which are much faster than those of the conventional factor analysis . the convergence is achieved even when the sample size is finite and is particularly appealing in the high - dimension - low - sample - size situation . this leads us to develop nonparametric tests on whether observed covariates have explaining powers on the loadings and whether they fully explain the loadings . the proposed method is illustrated by both simulated data and the returns of the components of the s&p 500 index .
the recent proliferation of smartphones and tablets has been seen as a key enabler for anywhere , anytime wireless communications .the rise of online services , such as facebook and youtube , significantly increases the frequency of users online activities . due to this continuously increasing demand for wireless access, a tremendous amount of data is circulating over today s wireless networks .this increase in demand is straining current cellular systems , thus requiring novel approaches for network design . in order to cope with this wireless capacity crunch ,device - to - device ( d2d ) communication underlaid on cellular systems , has recently emerged as a promising technique that can significantly boost the performance of wireless networks . in d2d communication , user equipments ( ues ) transmit data signals to each other over a direct link instead of through the wireless infrastructure , i.e. , the cellular network s evolved node bs ( enbs ) .the key idea is to allow direct d2d communication over the licensed band and under the control of the cellular system s operator .recent studies have shown that the majority of traffic in cellular systems consists of downloading contents such as videos or mobile applications .usually , popular contents , such as certain youtube videos , are requested more frequently than others . as a result ,enbs often end up serving different mobile users with the same contents using multiple duplicate transmissions . in this case , following the enb s first transmission of the content , such content is now locally accessible to others in the same area , if ues resource blocks ( rbs ) can be shared with others . newly arriving users that are within the transmission distancecan receive the old " contents directly from those users through d2d communication .here , the enb only serves users that request new " content , which has never been downloaded . through this d2d communication, we can reduce considerable redundant requests to enb , so that the traffic burden of enb can be released .our main contribution is to propose a novel approach to d2d communication , which allows to exploit the social network characteristics so as to reduce the load on the cellular system . to achieve this goal ,first , we propose an approach to establish a d2d subnetwork to maintain the data transmission successfully . as a d2d subnetworkis composed by individual users , the connectivity among users can be intermittent .however , the social relations in real world tend to be stable over time .such social ties can be utilized to achieve efficient data transmission in the d2d subnetwork .we name this social relation assisted data transmission wireless network by offline social network ( offsn ) .second , we assess the amount of traffic that can be offloaded , i.e. , with which probability can the requested contents be served locally . to analyze this problem , we study the probability that a certain content is selected .this probability is affected by both external ( influence from media or friends ) and internal ( user s own interests ) factors .while users interests are difficult to predict , the external influence which is based on users selections can be easily estimated . to this end , we define an online social network ( onsn ) that encompasses users within the offsn , which reflect users online social ties and influence to each other . 
in this paper , we adopt practical metrics to establish offsn , and the indian buffet process to model the influence in onsn .then we can get solutions for the previous two problems .latter we will integrate offsn and onsn to carry out the traffic offloading algorithm .further more , in order to evaluate the performance of our algorithm , we will set up the chernoff bound of the number of old contents user selects . to make the analysis more accurate, we also derive the approximated probability mass function ( pmf ) and cumulative distribution function ( cdf ) of it . from the numerical results ,we show that under certain circumstances , our algorithm can reduce a considerable amount of of enb s traffic .our simulations based on the real traces also proved our analysis for the traffic offloading performance .consider a cellular network with one enb and multiple users . in this system ,two network layers exist over which information is disseminated .the first layer is the onsn , the platform over which users acquire the links of contents from other users .once a link is accessed , the data package of contents must be transmitted to the ue through the actual physical network .taking advantage of the social ties , the offsn represents the physical layer network in which the requested contents of links to transmit .an illustration of this proposed model is shown in fig.[fig : onsn and offsn ] .each active user in the onsn corresponds to the ue in the offsn . in the offsn , the first request of the contentis served by the enb .subsequent users can thus be served by previous users who hold the content , if they are within the d2d communication distance .information dissemination in both onsn and offsn.,scaledwidth=20.0% ] in the area covered by an enb , the density of users in public areas such as office buildings and commercial sites , is much higher than that in other locations such as sideways and open fields .indeed , the majority of the data transmissions occurs in those fixed places . in such high density locations , forming d2d networks asan offsn becomes a natural process .thus , we can distinguish two types of areas : highly dense areas such as office buildings , and white " areas such as open fields . in the former ,we assume that d2d networks are formed based on users social relations . while in the latter , due to the low density , the users are served directly by the enb .the offsn is a reflection of the local users social ties .proper metrics need to be adopted to depict the degree of the connections among users . in ,the authors identify that human mobility shows a very high degree of temporal and spatial regularity , and that each individual returns to a few highly frequented locations with a significant probability .thus , such social ties lead to higher probabilities to transmit data among users .the contact duration distribution between two users is assumed to be a continuous distribution , which has a positive value for all real values greater than zero .in addition , users encounter duration usually centers around a mean value .so we can adopt a distribution which is widely used in modeling the call durations to model the call duration between two users . to find the value for the two parameter and , we need the mean and variance of the contact duration . 
as shown in fig .[ fig : encounter history ] , given the contact duration and the number of encounters between ue and ue corresponding to user and user in the onsn , an estimate of the expected contact duration length and variance can be given by : contact history between ue i and ue j.,scaledwidth=11.6% ] given the mean and variance of the contact duration , we can derive the contact duration distribution : .thus , the probability density function ( pdf ) of contact duration will be given by : where .then , we can calculate the probability of the contact durations that are qualified for data transmission .if the contact duration is not sufficient to complete a data package transmission , the communication session can not be carried out successfully .we adopt a closeness metric , to represent the probability of establishing a successful communication period between ue and ue , which ranges from to .the qualified contact duration is the complementary of the disqualified communication duration probability , so can be given by : where is the minimal contact duration required to successfully transmit one content data package , is the lower incomplete gamma function . hence , we can use the closeness metric to describe the communication probability between two ues , which can also be seen as the weight of the link between ue and ue . then , a threshold can be defined to filter the boundary between different offsns and white " areas . to cluster users based on metrics such as closeness, we can adopt algorithms such as those used in social networks as in .then , with a properly chosen , each pair in the offsn will have a strong direct neighboring relationship .onsn is the platform for content links to disseminate .we define the number of users in the onsn as which , in turn , corresponds to ues in the offsn . the total number of available contents in the onsn is denoted by , . given the large volume of content available online , . represents the set of contents that have viewing histories and is the set of contents that do not have any . we adopt the indian buffet process ( ibp ) model which serves as a powerful tool for getting the content popularity distribution and predicting users selections .the ibp is a stochastic process in which each diner samples from an infinite selection of dishes on offer at a buffet .the first customer will select its preferred dishes according to a poisson distribution with parameter . since all dishes are new to this customer , no external information exists so as to influence the selection .however , once the first customer completes the selection , the following customers will have some prior information about those dishes based on the first customer s feedback .customers learn from the previous selections to update their beliefs on the dishes and the probabilities with which they will choose the dishes .the behavior of content selection behavior in onsn is analogous to the dish selection in an ibp .if we view our onsn as an indian buffet , the online contents as dishes , and the users as customers , we can interpret the contents spreading process online by an ibp .so the probability distribution can be implemented from the ibp directly .one realization of indian buffet process.,scaledwidth=19.0% ] in fig .[ fig : ibp ] , we show one realization of an ibp .customers are labeled by ascending numbers in a sequence .the shaded block represents the user selected dish . 
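in code , the closeness metric defined above reduces to a method - of - moments fit of the two - parameter duration distribution followed by an upper - tail probability . the sketch below assumes a gamma duration model , consistent with the incomplete - gamma expression above ; the minimum transfer time and the threshold are placeholders .

```python
import numpy as np
from scipy.stats import gamma

def closeness(durations, tau_min):
    """Probability that a contact between UE i and UE j lasts at least tau_min,
    under a gamma model fitted by moments to the observed contact durations."""
    mean, var = np.mean(durations), np.var(durations)
    k, theta = mean ** 2 / var, var / mean          # shape and scale by moments
    # w_ij = P(duration >= tau_min) = 1 - regularized lower incomplete gamma
    return gamma.sf(tau_min, a=k, scale=theta)

def offsn_links(contact_log, n_users, tau_min, omega):
    """Keep only links whose closeness exceeds the threshold omega.
    contact_log maps a pair (i, j) to the list of observed contact durations."""
    W = np.zeros((n_users, n_users))
    for (i, j), durations in contact_log.items():
        w = closeness(np.asarray(durations, dtype=float), tau_min)
        if w >= omega:
            W[i, j] = W[j, i] = w
    return W
```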
in ibp, the first customer selects each dish with equal probability of , and ends up with the number of dishes following distribution . for subsequent customers ,the probability of also having dish already belonging to previous customers is , where is the number of customers prior to with dish . repeating the same argument as the first customer, customer will also have new dishes not tasted by the previous customers following a distribution .the probabilities of selecting certain dishes act as the prior information . for old " dishes which have been tasted before , . for new" dishes which have not been sampled before , . after user completes its selection , will be updated to .this learning process is also illustrated in fig .[ fig : ibp ] . is the number of dishes that have not been sampled before user s selection session .in the previous section , we have introduced the basic model to formulate the subnetwork for d2d communication and predict users selection . in this section, we can integrate the two layer networks together and propose the traffic offloading algorithm based on the offsn and onsn models .as the inter - offsn interference of d2d communication can be restricted by methods such as power control , we can ignore the interference among different offsns .thus , we place an emphasis on the intra - offsn interference due to resource sharing between d2d and cellular communication . during the downlink period of d2d communication, ues will experience interference from other cellular and d2d communications as they share the same subchannels .thus , we can define the transmission rate of users served by the enb as , and by d2d communication with co - channel interference as , : where , and are the transmit power of enb , d2d transmitter and , respectively , is the channel response of the link between ue or enb and ue , is the additive white gaussian noise ( awgn ) at the receivers , and represents the presence of interference from d2d to cellular communication , satisfying , otherwise . here, , so represents the interference from the other d2d pairs that share spectrum resources with pair . the transmission rate of the users that are only served by the enb without underlying d2d , thus also without co - channel interference is given by : in the proposed model , even though enb can offload traffic by d2d communication , controlling the switching over cellular and d2d communication causes extra data transmission .thus , there exists a certain cost such as control signals transmission and information feedback during the access process .therefore , for the enb that is serving a certain user , we propose the following utility function : +m_n^0 r_c - m_n c_c,\ ] ] where is the overhead cost for controlling the resource allocation process .we propose a novel and robust algorithm that can offload the traffic of enb without any sacrifice on the quality of service .the algorithm consists of multiple stages . in the first stage, the enb collects the encounter history between users to compute the closeness .then , based on specific situation such as time and locations , enb chooses dynamically . by checking if , i.e. , if ue and ue satisfy the predefined closeness threshold , the enb can decide on whether to add this user into the offsn or not . 
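the content - selection dynamics described above can be generated directly from the indian buffet process . the sampler below draws , for arriving user i , each previously requested content k with probability m_k / i and a poisson( alpha / i ) number of never - requested contents ; it is a generic ibp sampler written for illustration .

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_ibp(n_users, alpha):
    """Indian buffet process: returns the selection set of each user and the
    popularity count m_k of every content, contents indexed by first request."""
    counts = []                                   # counts[k] = users who chose content k so far
    selections = []
    for i in range(1, n_users + 1):
        chosen = set()
        # 'old' contents: content k is selected with probability m_k / i
        for k, m_k in enumerate(counts):
            if rng.random() < m_k / i:
                chosen.add(k)
        # 'new' contents: a Poisson(alpha / i) number of never-requested items
        for _ in range(rng.poisson(alpha / i)):
            counts.append(0)
            chosen.add(len(counts) - 1)
        for k in chosen:
            counts[k] += 1
        selections.append(chosen)
    return selections, counts
```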
by choosing a proper and power control ,the interference among different offsns can be avoided .this process will continue until no more users can be further added to the enb s list .then , the users in the established offsn can construct a communication session with only intra - offsn interference . for websites that provide a portal to access content , such as facebook and youtube , the enb will assign a special tag .once a user visits such tagged websites , the enb will inspect whether the user is located in an offsn or a white " area .if the user is in a white " area , any requests of users will be served by the enb directly .if the user is located in an offsn , the enb will wait the user s future activities . by browsing online, current user can have the prior information of the content distribution in the onsn based on previous user s requests .as soon as the user requests data , the enb detects if there are any resources in the offsn , and then choose to set up d2d communication or not based on the feedback .for old content , the enb will send control signal to the ue that has the highest closeness with user .then , ue and ue establish a d2d communication link . even if the d2d communication is setup successfully , the enb still waits until the data transmitting process finishes .if the d2d communication fails , the enb will revert back to serve the user directly . for new contents, the enb serves the user directly . after the selection is complete , the prior updates to the posterior .the proposed d2d communication algorithm is summarized in algorithm . *offsn and onsn generation * enb collects encounter information in cellular network find the closeness between two ues forming onsn by the users of corresponding ues * 2 .user activity detection * * 3 . service based on onsn activities *to evaluate the traffic offloading performance of our algorithm , we derive a bound on the amount of traffic that can be offloaded . we show that this problem is equivalent to the amount of contents that have been downloaded . while those locally accessible contents is related to the number of total contents and new contents selected by the users .before we start to derive a closed - form expression on the number of old content , we first try to find its bound . here, we adopt the chernoff bound for analysis .let be a sequence of independent trials with , , and ] ^\mu,\ ] ] and a bound when ^\mu.\ ] ] for the case of our model , the total number of contents user selects is , the number of new contents .then , the number of old contents .hence , define the expected number of old contents , , with a chernoff bound of ^{\frac{n-1}{n}\alpha},\\ \end{split}\ ] ] when $ ] , and ^{\frac{n-1}{n}\alpha},\\ \end{split}\ ] ] when .as we can see , the number of old contents is the difference of two possion distribution which follows the skellam distribution .we can approximate pmf and cdf for the skellam distribution using the saddlepoint approximation .then , we have the approximated pmf and cdf for the number of old contents as follows : , \end{split}\ ] ] where and . with the cdf function we can get the approximate number of old contents that each user select .thus , we can estimate the traffic that can be offloaded .to evaluate the performance of our algorithm , we exploit a data set of sensor mote encounter records and corresponding social network data of a group of participants at university of st andrews by the crawdad team . 
in the first data set , they deployed t - mote invent devices over a period of days among users in the department of computer science building .this data set helps us to establish our physical layer offsn . in the second dataset , they collected the participants facebook friend lists to generate a social network topology . with those information we can generate the corresponding onsn .then , we adopt the ibp to generate users selections online under the assumption that the size of content library is unbounded .we assume that the content selection process has already been performed for a number of times .thus , the enb can obtain the prior information of the content distribution .0.3 0.3 0.3 as the d2d communication distance increases , the enb will have more possibilities for detecting available contents providers . as a result, the performance of traffic offloading will be better with a larger maximum distance .this assumption is shown in fig .[ fig : traffic and distance ] . in this figure, we can see that , increasing the maximum communication distance , yields a decrease in the enb s data rate and an increase in the amount of offloaded traffic . however , we note that , with the increase of the transmission distance , the associated ue costs ( e.g. , power consumption ) will also increase .thus , the increase of d2d communication distance will provide additional benefits to the enb , but not for users . in fig .[ fig : traffic and enb cost ] , we show the variation of the sum - rate at the enb as the cost for control signal varies . as the enb has to arrange the inter change process between cellular and d2d communication ,necessary control information is needed . moreover ,additional feedback signals are required for monitoring the d2d communication and checking its status .those costs will affect the traffic offloading performance of the system . in our simulation, we define the cost as the counteract to the gain in data rate from 5% to 50% .as we can see , increasing the cost on control signal , the offload traffic amount is decreased . in order to know the amount of traffic that can be offloaded , we specified a special case when , then plot the chernoff bound and saddlepoint approximation of the cdf of the user s number of old contents in fig .[ fig : bound ] .as we can see , the approximated cdf lies between the chernoff bound .we have mentioned in the previous section , the mean value of the number of old contents is .so there is a gap between the number of and .then we simulate the user s selection and plot the empirical cdf .the simulated empirical cdf also lies between the upper and lower chernoff bound as we expected .in addition , the approximated cdf line is quite close to the simulated empirical cdf line , which proves our analysis in the previous section .in this paper , we have proposed a novel approach for improving the performance of d2d communication underlaid over a cellular system , by exploiting the social ties and influence among individuals .we formed the offsn to divide the cellular network into several subnetworks for carrying out d2d communication with only intra - offsn interference .also we established the onsn to analyse the offsn users online activities . by modeling the influence among users on contents selection online using the indian buffet process, we have obtained the distribution of contents requests , and thus can get the probabilities of each contents to be requested . 
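the skellam approximation of the number of old contents can be checked against the ibp sampler given earlier . in the snippet below the total and new selection counts of user n are treated as independent poisson variables with means alpha and alpha / n , as in the bound above ; the agreement with the simulated ibp is only approximate , since the two counts are not independent in the process itself . the parameter values are placeholders .

```python
import numpy as np
from scipy.stats import skellam

alpha, n = 5.0, 10            # IBP mass parameter and index of the arriving user
k = np.arange(0, 25)
# old = (total selections) - (new selections), approximated as Poisson(alpha) - Poisson(alpha/n)
cdf_skellam = skellam.cdf(k, mu1=alpha, mu2=alpha / n)

# empirical CDF of user n's old-content count from the IBP sampler above
old_counts = []
for _ in range(2000):
    selections, _ = sample_ibp(n, alpha)
    seen_before = set().union(*selections[:-1])
    old_counts.append(len(selections[-1] & seen_before))
old_counts = np.asarray(old_counts)
cdf_empirical = np.array([(old_counts <= x).mean() for x in k])
print(np.round(np.abs(cdf_skellam - cdf_empirical).max(), 3))   # rough agreement
```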
using our proposed algorithm , the traffic of enb has been reduced .simulation results based on real traces have shown that different parameters for enb and users will lead to different traffic offloading performances . c. xu , l. song , z. han , d. li , and b. jiao , resource allocation using a reverse iterative combinatorial auction for device - to - device underlay cellular networks , " _ ieee globe communication conference ( globecom ) _ , anaheim , ca , dec .3 - 7 , 2012 .n. golrezaei , a. f. molisch , and a. g. dimakis , base - station assisted device - to - device communication for high - throughput wireless video networks , " _ ieee international conference on communications ( icc ) _ , pp. 7077 - 7081 , ottawa , canada , jun .10 - 15 , 2012 .f. li and j. wu , localcom : a community - based epidemic forwarding scheme in disruption - tolerant networks , " _ proc .ieee conference sensor , mesh and ad hoc communications and networks ( secon ) _ , pp .574 - 582 , rome , italy , jun .22 - 26 , 2009 .j. guo , f. liu , and z. zhu estimate the call duration distribution parameters in gsm system based on k - l divergence method , " _ ieee international conference on wireless communications , networking and mobile computing ( wicom ) _ , pp .2988 - 2991 , shanghai , china , sep .21 - 25 , 2007 .g. bigwood , d. rehunathan ma .bateman , t. henderson , and s. bhatti , exploiting self - reported social networks for routing in ubiquitous computing environments , " _ proceedings of ieee international conference on wireless and mobile computing , networking and communication ( wimob 08 ) _ , pp .484 - 489 , avignon , france , oct . 12 - 14 , 2008 .
device - to - device ( d2d ) communication has been seen as a major technology to overcome the imminent wireless capacity crunch and to enable new application services . in this paper , we propose a social - aware approach for optimizing d2d communication by exploiting two layers : the social network layer and the physical wireless layer . first , we formulate the physical layer d2d network according to users ' encounter histories . subsequently , we propose an approach , based on the so - called indian buffet process , to model the distribution of contents in users ' online social networks . given the social relations collected by the evolved node b ( enb ) , we jointly optimize the traffic offloading process in d2d communication . in addition , we give the chernoff bound and an approximated cumulative distribution function ( cdf ) of the offloaded traffic . simulations confirm the accuracy of the bound and of the cdf approximation . numerical results based on real traces show that the proposed approach successfully offloads traffic from the enb .
several diagnostic protocols are usually adopted by dermatologists for analyzing and classifying skin lesions , such as the so - called _ abcd - rule _ of dermoscopy . due to the subjective nature of the examination , the accuracy of diagnosis is highly dependent upon human vision and the dermatologist 's expertise . computerized dermoscopic image analysis systems , based on a consistent extraction and analysis of image features , do not have this limitation of subjectivity . these systems involve the use of a computer as a second , independent and objective diagnostic method , which can potentially be used for the pre - screening of patients performed by non - experienced operators . although computerized analysis techniques can not provide a definitive diagnosis , they can improve biopsy decision - making , which some observers feel is the most important use for dermoscopy . recently , numerous studies on this topic have proposed systems for the automated detection of malignant melanoma in skin lesions ( e.g. , ) . in our previous study on dermoscopic images , the segmentation of the skin area and the lesion area was achieved by a semi - automatic process based on the otsu algorithm , supervised by a human operator . here , we propose a fully automatic segmentation method consisting of three main steps : selection of the image roi , selection of the segmentation band , and segmentation . the paper is organized as follows . in section [ proposedapproach ] we describe the proposed algorithm , providing details of its main steps . in section [ expres ] we provide a thorough analysis of experimental results on the isic 2017 dataset . conclusions are drawn in section [ conclusioni ] . the block diagram of the segmentation algorithm proposed for dermoscopic images , named sdi algorithm , is shown in fig . [ fig : overall ] . the three main steps are described in the following . in order to achieve an easier and more accurate segmentation of the skin lesion , it is advisable to select the region of interest ( roi ) , i.e. , the subset of image pixels that belong to either the lesion or the skin . this region excludes image pixels belonging to ( usually dark ) areas of the image border and/or corners , as well as those belonging to hair , that will not be taken into account in the subsequent steps of the sdi algorithm . in the proposed approach , the value band of the image in the hsv color space is chosen in order to select dark image pixels ; these are excluded from the roi if they cover most of the border or the corner regions of the image . concerning hair , many highly accurate methods have been proposed in the literature . here , we adopted a bottom - hat filter in the red band of the rgb image . an example of the roi selection process is reported in fig . [ fig : roiselection ] for the isic 2017 test image no . 15544 .
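a minimal version of the roi - selection step can be sketched with standard image - processing primitives : thresholding the value band of the hsv image to find dark pixels , discarding them only when they fall in a frame around the image border and corners ( a simplified version of the criterion above ) , and suppressing hair with a bottom - hat filter on the red band . the snippet below uses opencv ; the threshold values and the structuring - element size are placeholders , not the ones used by the sdi algorithm .

```python
import cv2
import numpy as np

def select_roi(bgr, dark_thresh=40, hair_thresh=20, border_frac=0.1):
    """Return a boolean ROI mask excluding dark border/corner areas and hair."""
    h, w = bgr.shape[:2]
    roi = np.ones((h, w), dtype=bool)

    # 1. dark pixels in the value band of HSV
    value = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 2]
    dark = value < dark_thresh
    # keep only dark pixels lying in a frame around the image border/corners
    frame = np.zeros((h, w), dtype=bool)
    bh, bw = int(h * border_frac), int(w * border_frac)
    frame[:bh, :] = frame[-bh:, :] = frame[:, :bw] = frame[:, -bw:] = True
    roi &= ~(dark & frame)

    # 2. hair suppression: bottom-hat (black-hat) filter on the red band
    red = bgr[:, :, 2]
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    bottom_hat = cv2.morphologyEx(red, cv2.MORPH_BLACKHAT, kernel)
    roi &= ~(bottom_hat > hair_thresh)
    return roi
```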
here , we observe that the wide dark border on the left of the image , as well as the dark hair over the lesion , have properly been excluded from the roi mask .[ cols="^,^ " , ]we proposed the sdi algorithm for dermoscopic image segmentation , consisting of three main steps : selection of the image roi , selection of the segmentation band , and segmentation .the reported analysis of experimental results achieved by the sdi algorithm on the isic 2017 dataset allowed us to highlight its pro s and con s .this leads us to conclude that , although some accurate results can be achieved , there is room for improvements in different directions , that we will go through in future investigations .this research was supported by lab gtp project , funded by miur .w. stolz , a. riemann , a. b. cognetta , l. pillet , w. abmayr , d. holzel , p. bilek , f. nachbar , m. landthaler , and o. braun - falco , `` abcd rule of dermoscopy : a new practical method for early recognition of malignant melanoma , '' _european journal of dermatology _ ,vol . 4 , pp . 521527 , 1994 .m. burroni , r. corona , g. delleva , f. sera , r. bono , p. puddu , r. perotti , f. nobile , l. andreassi , and p. rubegni , `` melanoma computer aided diagnosis : reliability and feasibility study , '' _ clinical cancer research _ , vol .10 , pp . 18811886 , 2004 .m. e. celebi , h. a. kingravi , b. uddin , h. iyatomi , y. a. aslandogan , w. v. stoecker , and r. h. moss , `` a methodological approach to the classification of dermoscopy images . ''_ computerized medical imaging and graphics _ , vol .31 , no . 6 , pp .362373 , september 2007 .i. maglogiannis and c. n. doukas , `` overview of advanced computer vision systems for skin lesions characterization , '' _ ieee transactions on information technology in biomedicine _ , vol . 13 , no . 5 ,pp . 721733 , 2009 .v. cozza , m. r. guarracino , l. maddalena , and a. baroni , `` dynamic clustering detection through multi - valued descriptors of dermoscopic images , '' _ statistics in medicine _ ,30 , no .20 , pp . 25362550 , 2011 .[ online ] .available : http://dx.doi.org/10.1002/sim.4285 m. e. celebi , q. wen , h. iyatomi , k. shimizu , h. zhou , and g. schaefer , `` a state - of - the - art survey on lesion border detection in dermoscopy images , '' in _ dermoscopy image analysis _ , m. e. celebi , t. mendonca , and j. s. marques , eds.1em plus 0.5em minus 0.4emcrc press , 2015 , pp .
we propose an automatic algorithm , named sdi , for the segmentation of skin lesions in dermoscopic images , articulated into three main steps : selection of the image roi , selection of the segmentation band , and segmentation . we present extensive experimental results achieved by the sdi algorithm on the lesion segmentation dataset made available for the isic 2017 challenge on skin lesion analysis towards melanoma detection , highlighting its advantages and disadvantages .
evolutionary algorithms ( ea ) are a group of computational techniques , which employ the theory of natural selection to a population of individuals to generate better individuals .genetic programming ( gp ) is a paradigm of ea which uses hierarchical , tree structure , variable length representation to code solutions of a problem .gp can be used to intelligently search the solution space for finding the optimal solution of a problem .there are many gp tools ( lil - gp , genexprotools , gplab , ecj , open beagle ) developed by gp practitioners .however , none of these address the following demands of end user before applying them to solve symbolic regression problems : ( i ) ease of use and ( ii ) small learning curve .many of these tools are open source and available freely ( lil - gp , ecj , open beagle , gplab for matlab ) whereas the rest are available commercially ( genexprotools ) .many of these tools necessitate modification in source code in order to generate required experimental environment . determining the final solution , produced by these tools , demands translation of the output or digging the log files . due to these reasons , the interest of researchers and engineers in gp may get reduced .theses reasons motivated us to develop our own gp framework which uses the postfix notation for an individual representation .we have considered the following features which need to be supported by postfix - gp framework : ( i ) easy to extend , ( ii ) simple and quick procedure for the configuration and running of gp , ( iii ) a set of algorithm implementation for : ( a ) generating the initial population , ( b ) selection mechanisms , and ( c ) genetic operators , ( iv ) visualization of : ( a ) postfix - gp run analysis and ( b ) evolved solution with statistical measures , ( v ) one - step and multi - step prediction support , ( vi ) visualization of results for one - step and multi - step predictions , and ( vii ) storage and retrieval of evolved solutions to and from file .postfix - gp has been used in experimental work , , , .this paper presents the design and implementation of postfix - gp , an object oriented software framework for genetic programming .section [ sec : introduction to gp ] gives introduction to gp .section [ sec : postfix - gp ] presents the design of postfix - gp .moreover , the section also gives the implementation details and main features of postfix - gp .section [ sec : case study ] presents postfix - gp as a solution modeling tool by solving the benchmark symbolic regression problem . section [ sec : featurecomparison ] compares the features of postfix - gp with lil - gp , ecj , and jclec frameworks .this is followed by conclusions in section [ sec : conclusions ] .standard gp employs a variable length , tree structure scheme for an individual representation .the tree can be used to represent logical expressions ( if - then - else ) , boolean expressions ( and , or , not ) or algebraic expressions . the symbolic regression aims to find the functional relationship ( mathematical expression ) between given instances of inputs - outputs .gp can be used to perform symbolic regression ( sr ) . 
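postfix - gp itself is a .net implementation , but the core idea of the postfix representation is language - independent : an individual is a flat token string that can be evaluated in a single stack pass , with no pointer - based tree traversal . the sketch below illustrates this idea in python ; the operator set and the protected - division convention are illustrative assumptions , not the toolkit 's actual api .

```python
import math

BINARY = {'+': lambda a, b: a + b,
          '-': lambda a, b: a - b,
          '*': lambda a, b: a * b,
          '/': lambda a, b: a / b if abs(b) > 1e-9 else 1.0}   # protected division
UNARY = {'sin': math.sin, 'cos': math.cos}

def evaluate_postfix(tokens, x):
    """Evaluate a postfix individual, e.g. ['x', 'x', '*', '1.0', '+'] == x*x + 1."""
    stack = []
    for tok in tokens:
        if tok in BINARY:
            b, a = stack.pop(), stack.pop()
            stack.append(BINARY[tok](a, b))
        elif tok in UNARY:
            stack.append(UNARY[tok](stack.pop()))
        elif tok == 'x':
            stack.append(x)               # the terminal for the input variable
        else:
            stack.append(float(tok))      # numeric constant terminal
    return stack[-1]

def sum_abs_error(tokens, dataset):
    """Raw fitness over the training cases; an adjusted fitness could be 1/(1+error)."""
    return sum(abs(evaluate_postfix(tokens, x) - y) for x, y in dataset)
```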
when using gp for solving symbolic regression problems , the user need to specify the following items : ( i ) gp configuration parameters , ( ii ) terminal set and function set , ( iii ) fitness function .the main steps of standard gp are as follows : 1 .random generation of an initial population of candidate solutions in tree form using the elements of function set and terminal set , selected by the modeler ( user ) .2 . calculating fitness value of every individual of the population on the given training dataset ( fitness cases ) 3 . selecting parents for mating based on the calculated fitness values , determined in previous step 4 . applying sub - tree crossover and mutation ( genetic operators ) on selected parents for generating a new population of individuals .the process is repeated until the termination condition is fulfilled .the important features of the proposed postfix - gp are categorized into : ( i ) training dataset , function set , and terminal set related , ( ii ) gp parameters related , ( iii ) test dataset prediction related , ( iv ) gp run analysis related , and ( v ) serialization and de - serialization of gp experiments and results .training dataset , function set , and terminal set related features : * loading of training dataset * loading of binary and unary functions * loading of constants gp parameters related : * selection of method to generate initial population * configuration of gp parameters like population size , number of generations , crossover rate , mutation rate * selection of crossover and mutation type * selection of selection scheme test dataset prediction related : * loading out - of - sample test dataset for one - step predictions * visualization of results of one - step predictions with statistical measures * loading out - of - sample test dataset for multi - step predictions * visualization of results of multi - step predictions with statistical measures gp run analysis related : * plotting best adjusted fitness vs number of generations * plotting average adjusted fitness vs number of generations * plotting solution size vs number of generations serialization and de - serialization of gp experiments and results : * serialization of gp parameters , function set , terminal set , and obtained solutions to a file * de - serialization of gp parameters , function set , terminal set , and obtained solutions from a file this section presents the implementation details of postfix - gp .this will be useful to the reader in understanding and customizing the proposed postfix - gp framework . for details related to postfix - gp solution representation scheme , refer , .postfix - gp framework is developed using microsoft .net framework on windows xp operating system .the zedgraph class library is used for plotting the charts .zedgraph is an open source graph library for .net platform .+ the class diagram for postfix - gp is depicted in [ fig : classdiagram ] .the classes can be grouped into following categories : ( i ) representation ( , ) , ( ii ) population ( ) , ( iii ) crossover operator ( , , , ) , ( iv ) mutation operator ( , , ) , ( v ) selection schemes ( , , , ) , ( vi ) gp parameters ( ) , and ( vii ) statistical analysis of results ( ) . [ cols="^ " , ] * lil - gp * : lil - gp is a gp toolkit implemented in c programming language .the toolkit is efficient and fast , as it is implemented in c. 
However, it is difficult to extend the toolkit compared to other object-oriented implementations of GP systems. lil-gp uses only a tree structure for individual representation and provides limited fitness measures. Moreover, the toolkit does not provide a graphical user interface to read an input (training) data file; it uses a parameter file to load GP parameters. The toolkit produces six reporting files (.sys, .gen, .prg, .bst, .his and .stt) that provide statistical information about the GP run. Many patches have been developed by different researchers to fix bugs and improve the functionality of the basic lil-gp. *ECJ*: ECJ is a Java-based framework for evolutionary computation and genetic programming, designed using object-oriented concepts. The classes of the ECJ framework are divided into four layers: (i) utility layer, (ii) basic and custom evolutionary computation layer, (iii) basic and custom genetic programming layer, and (iv) problem layer. As the framework is implemented in Java, it is comparatively slow. Moreover, ECJ uses a tree (and not arrays) to represent an individual, which requires dynamic memory allocation; thus the framework consumes a large amount of memory. ECJ reads its GP parameters from a parameter file: the classes to be loaded, the type of problem to be solved, the technique used to solve it, and the way the statistical results of the run are reported are all determined from this file at run time. This makes ECJ easy to extend. ECJ stores the statistical information of a GP run in a text file. Moreover, it provides the flexibility to produce extra output files through class customization, but it does not provide a GUI to visualize this information.
*jclec * : jclec is a java based framework for evolutionary computation and genetic programming .jclec is designed using the object - oriented concepts .the classes of jclec framework are divided in three layers : ( i ) system core , ( ii ) experiments runner ( reads an ea script file , execute all indicated algorithms and produce report files ) , and ( iii ) genlab ( a gui on the top of experiments runner and system core layers , provides functionality to edit the experiment files and to view the gp run results ) .gp parameters can be set by the user either through genlab gui or through an xml ( configuration ) file .however , the structure of configuration files is not user friendly .the framework provides the gui to visualize statistical information of gp run .the jclec framework is easy to extend .this paper presented the design and implementation of postfix - gp framework .the implementation details of postfix - gp , including an individual representation , different crossover operators , mutations , and selection mechanism were also presented .postfix - gp provides user interactive gui for performing different activities .the user can load training dataset , function set , and constants .the user can set the gp parameters through gui .the evolved solutions with their statistical measures can be visualized through gui .moreover , the user can also perform one - step and multi - step predictions using gui .the evolved solutions can be stored in binary format and can be retrieved later on .the user can also analyze postfix - gp run through gui .postfix - gp as a solution modeling tool was presented by solving symbolic regression problem .postfix - gp addresses the requirements of ease of use and small learning curve before utilizing it to solve the problems .it was developed to minimize the user s time required to set up and run gp experiments . c. gagn and m. parizeau , `` open beagle : a new c++ evolutionary computation framework , '' in _gecco 2002 : proceedings of the genetic and evolutionary computation conference_.1em plus 0.5em minus 0.4em new york : morgan kaufmann publishers , 9 - 13 july 2002 , p. 888 .[ online ] .available : http://www.cs.ucl.ac.uk/staff/w.langdon/ftp/papers/gecco2002/gecco-2002-15.pdf v. dabhi and s. chaudhary , `` semantic sub - tree crossover operator for postfix genetic programming , '' in _ proceedings of seventh international conference on bio - inspired computing : theories and applications ( bic - ta 2012 ) _ , ser .advances in intelligent systems and computing , j. c. bansal , p. k. singh , k. deep , m. pant , and a. k. nagar , eds .201.1em plus 0.5em minus 0.4emspringer india , 2013 , pp . 391402 . , `` time series modeling and prediction using postfix genetic programming , '' in _ advanced computing communication technologies ( acct ) , 2014 fourth international conference on _ , feb 2014 , pp . 307314 .x. li , c. zhou , w. xiao , and p. c. nelson , `` prefix gene expression programming , '' in _ late breaking paper at genetic and evolutionary computation conference ( gecco2005 ) _ , washington , d.c . , usa , 25 - 29 jun .2005 , pp . 2531 .j. h. holland , _ adaptation in natural and artificial systems : an introductory analysis with applications to biology , control and artificial intelligence_.1em plus 0.5em minus 0.4emcambridge , usa : mit press , 1992 .
This paper describes the postfix-GP system, a postfix-notation-based genetic programming (GP) framework, for solving symbolic regression problems. It presents the object-oriented architecture of the postfix-GP framework and assists the user in understanding the implementation details of its various components. postfix-GP provides a graphical user interface which allows the user to configure an experiment, to visualize evolved solutions, to analyze a GP run, and to perform out-of-sample predictions. The use of postfix-GP is demonstrated by solving a benchmark symbolic regression problem. Finally, the features of the postfix-GP framework are compared with those of other GP systems. + *_Keywords-_* postfix genetic programming; postfix-GP framework; object-oriented design; GP software tool; symbolic regression
at the end of the last century , the astronomical observations of high redshift type ia supernovae ( snia ) indicated that our universe is not only expanding , but also accelerating , which conflicts with our deepest intuition of gravity . with some other observations , such as cosmic microwave background radiation ( cmbr ) , baryon acoustic oscillations ( bao ) and large - scale structure ( lss ) , physicists proposed a new standard cosmology model , , which introduces the cosmological constant back again .although this unknown energy component accounts for 73% of the energy density of the universe , the measured value is too small to be explained by any current fundamental theories.- if one tries to solve this trouble phenomenologically by setting the cosmological constant to a particular value , the so - called fine - tuning problem would be brought up , which is considered as a basic problem almost any cosmological model would encounter .a good model should restrict the fine - tuning as much as possible . in order to alleviate this problem ,various alternative theories have been proposed and developed these years , such as dynamical dark energy , modified gravity theories and even inhomogeneous universes .recently , a new attempt , called torsion cosmology , has attracted researchers attention , which introduces dynamical torsion to mimic the contribution of the cosmological constant .it seems more natural to use a pure geometric quantity to account for the cosmic acceleration than to introduce an exotic energy component .torsion cosmology could be traced back to the 1970s , and the early work mainly focused on issues of early universe , such as avoiding singularity and the origin of inflation . in some recent work , researchers attempted to extend the investigation to the current evolution and found it might account for the cosmic acceleration . among these models , poincar gauge theory ( pgt ) cosmology is the one that has been investigated most widely .this model is based on pgt , which is inspired by the einstein special relativity and the localization of global poincar symmetry .et al_. made a comprehensive survey of torsion cosmology and developed the equations for all the pgt cases. based on goenner s work , nester and his collaborators found that the dynamical scalar torsion could be a possible reason for the accelerating expansion .et al_. extended the investigation to the late time evolution , which shows us the fate of our universe .besides pgt cosmology , there is another torsion cosmology , de sitter gauge theory ( dsgt ) cosmology , which can also be a possible explanation to the accelerating expansion .this cosmological model is based on the de sitter gauge theory , in which gravity is introduced as a gauge field from de sitter invariant special relativity ( dssr ) , via the localization of de sitter symmetry. dssr is a special relativity theory of the de sitter space rather than the conventional minkowski spacetime , which is another maximally symmetric spacetime with an uniform scalar curvature . 
and the full symmetry group of this space is de sitter group , which unifies the lorentz group and the translation group , putting the spacetime symmetry in an alternatively interesting way .but in the limit of , the de sitter group could also degenerate to the poincar group .to localize such a global symmetry , de sitter symmetry , requires us to introduce certain gauge potentials which are found to represent the gravitational interaction .the gauge potential for de sitter gauge theory is the de sitter connecion , which combines lorentz connection and orthonormal tetrad , valued in (1,4 ) algebra .the gravitational action of dsgt takes the form of yang - mills gauge theory . via variation of the action with repect to the the lorentz connection and orthonormal tetrad, one could attain the einstein - like equations and gauge - like equations , respectively .these equations comprise a set of complicated non - linear equations , which are difficult to tackle .nevertheless , if we apply them to the homogeneous and isotropic universe , these equations would be much more simpler and tractable . based on these equations, one could construct an alternative cosmological model with torsion .analogous to pgt , dsgt has also been applied to the cosmology recently to explain the accelerating expansion. our main motivation of this paper is to investigate ( i)whether the cosmological model based on de sitter gauge theory could explain the cosmic acceleration ; ( ii)where we are going , i.e. , what is the fate of our universe ; ( iii ) the constraints of the parameters of model imposed by means of the comparison of observational data . by some analytical and numerical calculations, we found that , with a wide range of initial values , this model could account for the current status of the universe , an accelerating expanding , and the universe would enter an exponential expansion phase in the end . this paper is organized as follows : first , we summarize the de sitter gauge theory briefly in sec . [sec : de - sitter - gauge ] , and then show the cosmological model based on de sitter gauge theory in sec . [sec : cosm - evol - equat ] .second , we rewrite these dynamical equations as an autonomous system and do some dynamical analysis and numerical discussions on this system in the sec . [sec : autonomous - system ] and [ sec : numer - demonstr ] .next in the [ sec : supern - data - fitt]th section , we compare the cosmological solutions to the snia data and constrain the parameters .last of all , we discuss and summarize the implications of our findings in section [ sec : summary - conclusion ] .[ supernovae data fitting]in dsgt , the de sitter connection is introduced as the gauge potential , which takes the form as -r^{-1}e^b_\mu & 0 \end{array } \right ) \in \mathfrak{so}(1,4),\ ] ] where , and , which combines the lorentz connection and the orthonormal tetrad are 4d coordinate indices , whereas the capital latin indices and the lowercase latin indices , denote 5d and 4d orthonormal tetrad indices , respectively . ] .the associated field strength is the curvature of this connection , which is defined as -r^{-1}t^b_{~\mu\nu } & 0 \end{array } \right ) \in \mathfrak{so}(1,4),\end{aligned}\ ] ] where , is the de sitter radius , and and are the curvature and torsion of lorentz - connection , which also satisfy the respective bianchi identities. 
the gauge - like action of gravitational fields in dsgt takes the form , .\label{gym}\end{aligned}\ ] ] here , , is a dimensionless constant describing the self - interaction of the gauge field , is a dimensional coupling constant related to and , and is the scalar curvature of the cartan connection . in order to be consistent with einstein - cartan theory ,we take and , where . assuming that the matter is minimally coupled to gravitational fields , the total action of dsgt could be written as : where denotes the action of matter , namely the gravitational source .now we can obtain the field equations via variational principle with respect to , \label{feq2}% & & \nabla_{\nu}f_{ab}^{~~\mu\nu}-r^{-2}\left(y^\mu_{~\,\lambda\nu } e_{ab}^{~~\lambda\nu}+y ^\nu_{~\ , \lambda\nu } e_{ab}^{~~\mu\lambda } + 2t_{[a}^{~\mu\lambda } e_{b]\lambda}\right ) = 16\pi gr^{-2}s^{\quad \mu}_{{\rm m}ab},%\end{aligned}\ ] ] where represent the effective energy - momentum density and spin density of the source , respectively , and is the contorsion .it is worth noticing that the nabla operator in eq .( [ feq1 ] ) and ( [ feq2 ] ) is the covariant derivative compatible with christoffel symbols \{ } for coordinate indices , and lorentz connection for orthonormal tetrad indices .readers can be referred to ref. for more details on dsgt .since current observations favor a homogeneous , isotropic universe , we here work on a robertson - walker ( rw ) cosmological metric .\ ] ] for robertson - walker metric , the nonvanishing torsion tensor components are of the form , where denotes the vector piece of torsion , namely , in components , the trace of the torsion , and indicates the axial - vector piece of torsion , which corresponds in components to the totally antisymmetric part of torsion . and are both functions of time , and their subscripts , + and - , denote the even and odd parities , respectively . the nonvanishing torsion 2-forms in this case are where . according to the rw metric eq . and the torsion eq ., the field equations could be reduced to \label{el-11}% & & \frac{\ddot a^2 } { a^2 } + \left(\dot t_+ + 2\frac{\dot a } a t_+ - 2\frac{\ddot a } a + \frac{6}{r^2}\right)\dot t_+ -\frac 1 4 \left(\dot t_- + 2 \frac { \dot a } a t_-\right)\dot t_- - t_+^4 + \frac 3 2 t_+^2 t_-^2 - \frac1 { 16 } t_-^4 \nonumber\\ & & \quad+ \frac { \dot a } a(4 t_+^2 - 3 t_-^2)t_+ - \left(5\frac{\dot a^2 } { a^2 } + 2 \frac k { a^2 } + \frac3 { r^2}\right ) t_+^2 + \frac 12 \left(\frac 5 2\frac{\dot a^2 } { a^2 } + \frac k { a^2 } + \frac 3 { r^2}\right ) t_-^2- 2\frac{\dot a } a \left(\frac{\ddot a } { a}- 2\frac{\dot a^2 } { a^2 } \right.\nonumber \\[0.2 cm ] & & \quad \left . 
- 2 \frac k { a^2}- \frac6 { r^2}\right)t_+ - \frac 4 { r^2 } \frac{\ddot a } a -\frac{\dot a^2 } { a^2 } \left(\frac{\dot a^2}{a^2 } + 2\frac k { a^2}\right ) + \frac2 { r^2 } -\frac{k^2}{a^4 } - \frac2 { r^2}\frac k { a^2 } + \frac6 { r^4 } = -\frac{16\pi g p}{r^2 } , \\[0.3 cm ] \label{yang1 } % & & \ddot t_- + 3 \frac{\dot a } a \dot t_- + \left ( \frac 1 2 t_-^2 - 6 t_+^2 + 12 \frac { \dot a } a t_+ + \frac{\ddot a } a - 5\frac{\dot a^2}{a^2 } - 2\frac k { a^2}+ \frac 6 { r^2}\right ) t_-=0 , \\[0.3 cm ] \label{yang2}% & & \ddot t_+ + 3 \frac{\dot a } a \dot t_+ -\left ( 2 t_+^2 -\frac 3 2 t_-^2 - 6\frac{\dot a } a t_+ -\frac { \ddot a } a + 5 \frac { \dot a^2 } { a^2 } + 2 \frac k { a^2}- \frac 3 { r^2}\right ) t_+ - \frac 3 2 \frac{\dot a } a t_-^2-\frac{\dddot a } a - \frac{\dot a\ddot a}{a^2 } \nonumber\\ & & \quad+ 2\frac { \dot a^3 } { a^3 } + 2\frac{\dot a } a \frac k { a^2 } = 0 , % \end{aligned}\ ] ] where eqs . andare the and component of einstein - like equations , respectively ; and eqs . and are 2 independent yang - like equations , which is derived from the and components of lorentz connection .the spin density of present time is generally thought be very small which could be neglected .therefore , we here assumed the spin density is zero . the bianchi identities ensure that the energy momentum tensor is conserved , which leads to the continuity equation : equation can also be derived from eqs . - , which means only four of eqs .- are independent . with the equation of state ( eos ) of mattercontent , these four equations comprise a complete system of equations for five variables , . by some algebra and differential calculations, we could simplify these 5 equations as : +,\\ \ddot{t}_- & = & -3h\dot{t}_- -\left[-\frac{15}{2}t_{+}^{2}+\frac{33h t_{+}}{2}-6h^{2}-\frac{3k}{a^2}+\frac{8}{r^{2}}+\frac{5}{4}t^{2}_{-}+\frac{3}{2}\dot{t}_+ \right.\nonumber\\ & & \left .+ \frac{4\pi g}{3}(\rho+3p)\right]t_{-},\\ \dot{\rho}&=&-3h(\rho+p),\\ \label{eq : eos } w&=&\frac{p}{\rho } , \end{aligned}\ ] ] where is the hubble parameter .if we rescale the variables and parameters as where is the hubble radius in natural units , these variables and parameters would be dimensionless . under this transformation ,- remain unchanged expect for the terms including and , which change into and respectively .the contribution of radiation and spatial curvature in current universe are so small that it could be neglected , so we here just consider the dust universe with spatial flatness , whose eos is equal to zero . by some further calculations , these equations could be transformed to a set of six one - order ordinary derivative equations , which forms a six - dimensional autonomous system , as follows , + + 6h\rho , \\[0.2 cm ] \dot{t_{+}}&=&p,\\ \dot q&=&-3h q-\left(-\frac{15}{2}t_{+}^{2}+\frac{33h t_{+}}{2}-6h^{2}+\frac{8}{r^{2}}+\frac{5}{4}t^{2}_{-}+\frac{3}{2}p + \rho \right)t_{-},\\ \dot{t_{-}}&=&q,\\ \dot{\rho}&=&-3h\rho .\label{rho}\end{aligned}\ ] ] for such an autonomous system , we can use the dynamics analysis to investigate its qualitative properties .critical points are some exact constant solutions in the autonomous system , which indicate the asymptotic behavior of evolution . for example , some solutions , such as heteroclinic orbits , connect two different critical points , and some others , such as homoclinic orbits , are a closed loop starting from and coming back to the same critical point . in the dynamics analysis of cosmology , the heteroclinic orbit is more interesting. 
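As a hedged illustration of how such a dynamics analysis can be carried out numerically, the sketch below integrates a toy two-dimensional autonomous system and estimates the eigenvalues of the Jacobian at a critical point; the right-hand side used here is a placeholder assumption and does not reproduce the six-dimensional DSGT equations above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy autonomous system standing in for the six-dimensional DSGT system of the text;
# this right-hand side is an illustrative assumption, not the actual field equations.
def rhs(t, y):
    x1, x2 = y
    return [x2, -x1 - 0.5 * x2]

def jacobian(f, y0, eps=1e-6):
    """Numerically estimate the Jacobian of an autonomous system at a point y0."""
    y0 = np.asarray(y0, dtype=float)
    n = y0.size
    J = np.zeros((n, n))
    f0 = np.asarray(f(0.0, y0))
    for j in range(n):
        dy = np.zeros(n)
        dy[j] = eps
        J[:, j] = (np.asarray(f(0.0, y0 + dy)) - f0) / eps
    return J

# Stability of the critical point at the origin: eigenvalues with negative real
# parts indicate an attractor, positive real parts a repeller.
eigvals = np.linalg.eigvals(jacobian(rhs, [0.0, 0.0]))
print("eigenvalues at the critical point:", eigvals)

# An orbit in phase space starting from some chosen initial values.
sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0])
print("state at t = 50:", sol.y[:, -1])
```

The same two steps, locating the critical points and inspecting the eigenvalues of the linearization, are what the table of critical points below summarizes for the full system.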
Thus, critical points can be treated as the basic tool of dynamics analysis, from which one can read off the qualitative properties of the autonomous system. By some algebraic calculation, we find all 9 critical points of this system, as shown in Table 1. [Table [tab:critical-points]: the critical points and their corresponding eigenvalues; point 9 is not physically acceptable because of its negative energy density.] Based on current observations, the present density of torsion in our universe is very small, so it is reasonable to assume that the initial values of all torsions and their first-order derivatives are zero at the present time. Their second-order derivatives, however, do not vanish, and they have a significant impact on the history and future of the evolution of our universe. In this case the number of free parameters and initial values is reduced, and only two remain. It is easy to find the best fit of these 2 parameters, which is shown in Table [tab:best-fit], together with the minimal chi-square and the corresponding value for the standard cosmological-constant model. In Fig. [fig:chi2-distribution] we show the chi-square distribution with respect to the two parameters, compared to the standard cosmological-constant model, whose chi-square value corresponds to the plane in the figure. Furthermore, we plot the contours of some particular confidence levels, as shown in Fig. [fig:contour]. From these figures we find that the evolution of our universe is insensitive to the initial values, which alleviates the fine-tuning problem. [Figure [fig:chi2-distribution]: the chi-square distribution with respect to the two parameters, compared to the plane of the standard model; all torsions and their first-order derivatives are assumed to vanish at present time.] [Figure [fig:contour]: the 68.3%, 95.4% and 99.7% confidence contours of DSGT with respect to the two parameters, using the Union 2 dataset; the current-time values of all torsions and their first-order derivatives are set to zero, and the yellow point is the best-fit point.] The astronomical observations imply that our universe is accelerating towards a de Sitter spacetime. This gives us a strong motive to consider cosmic evolution based on the de Sitter gauge theory instead of other gravity theories. The localization of the de Sitter symmetry requires us to introduce curvature and torsion, so in de Sitter gauge theory the torsion is an indispensable quantity, by which people first tried to include the effect of spin density in gravity theory. Now this essential quantity might account for the acceleration of our universe, if we apply DSGT to cosmology. We found that the cosmological equations for the dust universe in DSGT can form an autonomous system by some transformations, where the evolution of the universe is described in terms of orbits in phase space. Therefore, by a dynamics analysis of the DSGT cosmology, one can study the qualitative properties of this phase space. We found all 9 critical points, as shown in Table [tab:critical-points]. We also analyzed the stabilities of these critical points and found that among them there is only one positive attractor, which is stable. The positive attractor alleviates the fine-tuning problem and implies that the universe will expand exponentially in the end, whereas all other physical quantities will turn out to vanish.
in this sense , dsgt cosmology looks more like the , than pgt cosmology .and we conducted some concrete numerical calculations of this the destiny of our model of the universe , which confirms conclusions from dynamics analysis .finally , in order to find the best - fit values and constraints of model parameters and initial conditions , we fitted them to the union 2 snia dataset .the maximum likelihood estimator here we used is the estimate . by minimizing the , we found the best - fit parameters and the corresponding , while the value for is , with .note that we here set all the initial values of torsions and their first - order derivatives to zero at , since the contribution of torsion to the current universe is almost negligible .we also plotted the confidence contour fig .[ fig : contour ] with respect to and , from which it is easy to see that the fine - tuning problem is alleviated and the evolution is not so sensitive to the initial values and model parameters .if we want to go deeper into cosmology based on de sitter gauge theory , there are a lot of work need to be done .we should fit this model to some other observations , like bao and lss etc , to constrain the parameters better .we also could study the perturbations in the early universe , and compare the results to cmbr data .these issues will considered in some upcoming papers .this work is supported by srfdp under grant no .200931271104 and shanghai natural science foundation , china , under grant no .10zr1422000 .t. padmanabhan , _ cosmological constant : the weight of the vacuum _ , _ phys .* 380 * ( 2003 ) 235 - 320 , [ hep - th/0212290 ] ; x .- z .li and j .- g .o(n ) phantom , a way to implement w -1 _ _ phys .* d69 * ( 2004 ) 107303 , [ hep - th/0303093 ] .hao and x .- z .li , _ phantom cosmic dynamics : tracking attractor and cosmic doomsday _ , _ phys ._ * d70 * ( 2004 ) 043529 , [ astro - th/0309746 ] ; + c .- j .feng and x .- z .li , _ cardassian universe constrained by latest observations _ _ phys . lett . _ * b692 * ( 2010 ) 152 - 156 , [ arxiv:0912.4793 ] . f. w. hehl , _ four lectures on poincar gauge field theory _ in _ proc . of the 6th course of the school of cosmology and gravitation on spin , torsion , rotation , and supergravity _ , eds . p. g. bergmann and v. de sabbata ( new york : plenum ) p5 + m. blagojevi , _ gravitation and gauge symmetries _ , iop publishing , bristol , 2002 .h. chen , f .- h . ho and j. m. nester , c .- h .wang , and h .- j .yo , _ cosmological dynamics with propagating lorentz connection modes of spin zero _ , _ jcap _ * 0910 * ( 2009 ) 027 , [ arxiv:0908.3323 ] ; + p. baekler , f. w. hehl , and j. m. nester , _ poincare gauge theory of gravity : friedman cosmology with even and odd parity modes . analytic part _rev . _ * d83 * ( 2011 ) 024001 , [ arxiv:1009.5112 ] ; + f .- h . ho and j. m. nester , _ poincar gauge theory with coupled even and odd parity dynamic spin-0 modes : dynamic equations for isotropic bianchi cosmologies _ , [ arxiv:1106.0711 ] ; + f .- h . ho and j. m. nester , _ poincar gauge theory with even and odd parity dynamic connection modes : isotropic bianchi cosmological models _ , [ arxiv:1105.5001 ] .li , c .- b .sun , and p. xi , _ torsion cosmolgical dynamics _ , _ phys ._ * d79 * ( 2009 ) 027301 , [ arxiv:0903.3088 ] ; + x .- z .li , c .- b .sun , and p. xi , _ statefinder diagnostic in a torsion cosmology _, _ jcap _ * 0904 * ( 2009 ) 015 , [ arxiv:0903.4724 ] ; + x .- c .ao , x .- z .li , and p. 
xi_ analytical approach of late - time evolution in a torsion cosmology _lett . _ * b694 * ( 2010 ) 186 - 190 , [ arxiv:101.4117 ] .guo , c .- g .huang , z. xu , and b. zhou , _ on beltrami model of de sitter spacetime __ * a19 * ( 2004 ) 1701 - 1710 , [ hep - th/0311156 ] ; + h .- y .guo , c .- g .huang , z. xu , and b. zhou , _ on special relativity with cosmological constant _ , _ phys .lett .. _ * a331 * ( 2004 ) 1 - 7,[hep - th/043171 ] ; + h .- y .guo , c .- g .huang , z. xu , and b. zhou , _ three kinds of special relativity via inverse wick rotation _* 22 * ( 2005 ) 2477 - 2480 , [ hep - th/0508094 ] ; + h .- y .guo , c .- g .huang , b. zhou , _ temperature at horizon in de sitter spacetime _, _ europhys .* 72 * ( 2005 ) 1045 - 1051 .li , y .- b .zhao , and c .- b .sun , _ heteroclinic orbit and tracking attractor in cosmological model with a double exponential potential _, _ class .* 22 * ( 2005 ) 3759 - 3766 , [ astro - ph/050819 ] ; + j .-hao and x .- z .li , _ an attractor solution of phantom field _ , _ phys ._ * d67 * ( 2003 ) 107303 , [ gr - qc/0302100 ] .
A new cosmological model based on the de Sitter gauge theory (DSGT) is studied in this paper. By some transformations, we find that, in the dust universe, the cosmological equations of DSGT can form an autonomous system. We conduct a dynamics analysis of this system and find 9 critical points, among which there exist one positive attractor and one negative attractor. The positive attractor shows that our universe will enter an exponential expansion phase in the end, similar to the conclusion of the standard cosmological-constant model. We also carry out some numerical calculations, which confirm the conclusions of the dynamics analysis. Finally, we fit the model parameters and initial values to the Union 2 SNIa dataset, present the confidence contours of the parameters, and obtain their best-fit values for DSGT.
neural networks ( nn s ) have become in the last years a very effective instrument for solving many difficult problems in the field of signal processing due to their properties like non - linear dynamics , adaptability , self - organization and high speed computational capability ( see for example and the papers therein quoted ) .aim of this paper is to show the feasibility of the use of nn s to solve difficult problems of signal processing regarding the so called virgo project .gravitational waves ( gw s ) are travelling perturbations of the space - time predicted by the theory of general relativity , emitted when massive systems are accelerated .up to now , there is only an indirect evidence of their existence , obtained by the observations of the binary pulsar system psr 1913 + 16 .moreover , the direct detection of gw s is not only a relevant test of general relativity , but the start of a new picture of the universe .in fact , gw s carry complementary information with respect to electromagnetic and optical waves , since the gw s are practically not absorbed by the matter .the aim of the virgo experiment is the direct detection of gravitational waves and , in joint operation with other similar detectors , to perform gravitational waves astronomical observations .in particular , the virgo project is designed for broadband detection from to .the principle of the detector is shown in figure 1 .a arm - length michelson interferometer with suspended mirrors ( test masses ) is used .the phase difference between the two arms is amplified using fabry - perot cavities of finesse in each arm .aiming for detection sensitivity of , virgo is a very delicate experimental challenge because of the competition between various sources of noise and the very small expected signal .in fact , the interferometer will be tuned on the dark fringe , and then the signal to noise ratio will be mainly limited , in the above defined range of sensitivity , by residual seismic noise , thermal noise of the suspensions photon counting noise ( shot noise ) . in figure 2 the overall sensitivity of the apparatus is shown . in this figureit is easy to see the contribution of the different noise sources to the global noise . in this contextwe use a multi - layer perceptron ( mlp ) nn with the back - propagation learning algorithm to model and identify the noise in the system , because we experimentally found that fir nn s and elman nn s did not work in a satisfying manner .both the fir and elman models proved to be very sensible to overfitting and were not stable .furthermore the elman network required a great number of hidden units , while the fir network required a great number of delay terms .instead , the mlp proved succesfull and easy to train because we used the bayesian learning paradigm .nn s are massively parallel , distributed processing systems .they are composed of a large number of processing elements ( called nodes , units or neurons ) which operate in parallel .scalars ( called weights ) are associated to the connections between units and determine the strength of the connections themselves .computational capability is due to the connections between the units and to their collective behaviour .furthermore , information representation is distributed , i.e. 
no single unit is charged with specific information .nn s are well - known for their universal approximation capability .system identification consists in finding the input - output mapping of a dynamic system .a discrete - time multi - input multi - output ( mimo ) system ( or a continuous - time sampled - data system ) can be described with a set of non - linear difference equations of the form ( _ input - state - output _ representation ) : where is the state vector of the system , is the input vector and is the output vector .since we can not always access the state vector of the system , therefore we can use an input - output representation of the form : \label{eq : io}\end{aligned}\ ] ] where and are the maximum lags of the input and output vectors , respectively , and is the pure time - delay ( or dead - time ) in the system .this is called an arx model ( autoregressive with exogenous inputs , ) and it can be shown that a wide class of discrete - time systems can be put in this form . to build a model of the form ( [ eq : io ] ) , we must therefore obtain an estimation of the functional ] .such a network has inputs and outputs ( see figure 3 ) .a difficulty in this approach arises from the fact that generally we do not have information about the model order ( i.e. the maximum lags to take into account ) unless we have some insight into the system physics .furthermore , the system is non - linear .recently a method has been proposed for determining the so - called _ embedding dimension _ of nonlinear dynamical systems , when the input - output pairs are affected by very low noise .furthermore , the lags can be determined by evaluating the _ average mutual information _ ( ami) . such methodologies , although not always successful , can be nevertheless used as a starting point in model design .in the virgo data analysis , the most difficult problem is the gravitational signal extraction from the noise due to the intrinsic weakness of the gravitational waves , to the very poor signal - to - noise ratio and to their not well known expected templates .furthermore , the virgo detector is not yet operational , and the noise sources analyzed are purely theoretical models ( often stationary noises ) , not based on experimental data .therefore , we expect a great difference between the theoretical noise models and the experimental ones . as a consequence, it is very important to study and to test algorithms for signal extraction that are not only very good in signal extraction from the theoretical noise , but also very adaptable to the real operational conditions of virgo .for this reason , we decided the following strategy for the study , the definition and the tests of algorithms for gravitational data analysis .the strategy consists of the following independent research lines .the first line starts from the definition of the expected theoretical noise models .then a signal is added to the virgo noise generated and the algorithm is used for the extraction of the signal of known and unknown shape from this noise at different levels of signal - to - noise ratio .this will allow us to make a number of data analysis controlled experiments to characterize the algorithms .the second line starts from the real measured environmental noise ( acoustic , electromagnetic , ... ) and tries to identify the noise added to a theoretical signal . in this way we can test the same algorithms in a real case when the noise is not under control . 
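Returning to the ARX/NARX formulation above, the sketch below shows one way of arranging sampled input-output sequences into regressor-target pairs for the network; the toy filtered-noise data and the chosen lags are illustrative assumptions.

```python
import numpy as np

def build_narx_patterns(u, y, n_y, n_u, d):
    """Arrange sampled input/output sequences into regressor-target pairs:
    the target y[k] is predicted from y[k-1..k-n_y] and u[k-d..k-d-n_u+1].
    The lags n_y, n_u and the dead time d are chosen beforehand, e.g. from
    the average-mutual-information analysis mentioned in the text."""
    start = max(n_y, d + n_u - 1)
    X, targets = [], []
    for k in range(start, len(y)):
        past_y = y[k - n_y:k][::-1]                   # y[k-1], ..., y[k-n_y]
        past_u = u[k - d - n_u + 1:k - d + 1][::-1]   # u[k-d], ..., u[k-d-n_u+1]
        X.append(np.concatenate([past_y, past_u]))
        targets.append(y[k])
    return np.array(X), np.array(targets)

# Toy example: white noise filtered by an (unknown) FIR system stands in for the data.
rng = np.random.default_rng(0)
u = rng.standard_normal(1000)
y = np.convolve(u, [0.5, 0.3, 0.2], mode="full")[:1000]
X, t = build_narx_patterns(u, y, n_y=5, n_u=5, d=1)
print(X.shape, t.shape)   # patterns ready to train an MLP as in the text
```

With pattern sets of this form, the identification experiments of the two research lines described above can be carried out on either simulated or measured noise.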
using this strategy , at the end ,when in a couple of years virgo will be ready for the first test of data analysis , the procedure will be moved to the real system , being sure to find small differences from theory and reality after having acquired a large experience in the field .as we have seen in the introduction , the virgo interferometer can be characterized by a sensitivity curve , which expresses the capability of the system to filter undesired influences from the environment , and which could spoil the detection of gravitational waves ( such a noise is generally called seismic noise ) .the sensitivity curve has the following expression : + s_{\nu } & \quad f\geq f_{\textrm{min } } \\s(f_{\textrm{min } } ) & \quad f < f_{\textrm{min } } \end{array } \right .\label{eq : exprsenscurve}\ ] ] where : * * is the shot noise cut - off frequency * is the pendulum mode * is the mirror mode * is the shot noise the contribution of violin resonances is given by : ^{2}+\phi _ { i}^{2}}+\frac{1}{i^{4}}\frac{f_{i}^{(f)}}{f}\frac{c_{f}\phi _ { i}^{2}}{\left [ \left ( \frac{f}{if_{i}^{(f)}}\right ) ^{2}-1\right ] ^{2}+\phi _ { i}^{2 } } \label{eq : violinres}\ ] ] where the different masses of close and far mirrors are taken into account : * * * * * note that we used a simplified curve for our simulations , in which we neglected the resonances ( see figure 4 ) .samples of the sensitivity curve can be obtained by evaluating the expression ( [ eq : exprsenscurve ] ) at a set of frequencies , $ ] .the samples of the sensitivity curve allow us to obtain the system transfer function ( in the frequency domain ) , , such that : > from this , by means of an inverse discrete fourier transform , samples of the system transfer function ( in the time domain ) can be obtained .our aim is to build a model of the system transfer function ( [ eq : tranfun ] ) . assuming that the interferometer input noise is a zero mean gaussian process , by feeding it to the system ( i.e. filtering it through the system transfer function ) we obtain a coloured noise .the so obtained white noise - colored noise pairs can then be used to train an mlp , as shown in figure 3 .the first step in building an arx model is the model order determination . to determine suitable lags which describe the system dynamics, we used the ami criterion .this can be seen as a generalization of the autocorrelation function , used to determine lags in linear systems .a strong property of the ami statistic is that it takes into account the non - linearities in the system .usually , the lag is chosen as the first minimum of the ami function .the result is reported in figure 5 , in which the first minimum is at 1 . to find how many samples are necessary to unfold the ( unknown ) state - space of the model ( the so called _ embedding dimension _ ) we used the method of , the lipschitz decomposition .the result of the search is reported in figure 6 . from the figurewe can see that , starting from lag three , the order index decreases very slowly , and so we can derive that the width of the input window is at least three . in order to testthe nn s capability in solving the problem , we chose a width of 5 , both for input and output ( i.e. ) . in this way , we obtained a nn with a simple structure .furthermore , some preliminary experiments showed that the system dead - time is ; this gives the best description of the system dynamics .another fundamental issue is the nn complexity , i.e. 
the number of units in the hidden layers of the nn .usually the determination of the network complexity is critical because of the risks of overfitting .since the nn was trained following a bayesian framework , then overfitting was of no concern ; so we directed our search for a model with the minimum possible complexity . in our case, we found a hidden layer with 6 units is optimal .the bayesian learning framework ( see and ) allows the use of a _ distribution _ of nn s , that is , the model is a realization of a random vector whose components are the nn weights .the so obtained nn is the _ most probable _ given the data used to train it .this approach avoids the bias of the cross - validatory techniques commonly used in practice to reduce model overfitting . to allow for a smooth mapping which does not produce overfitting , several regularization parameters ( also called_ hyperparameters _ ) embedded in the error criterion have been used : * one for each set of connections out of each input node , * one for the set of connections from hidden to output units , * one for each of the bias connections , * one for the data - contribution to the error criterion . usually , the hyperparameters of the first three kinds are called _ alphas _ , while the last is called a _beta_. the approach followed in the application of the bayesian framework is the `` exact integration '' scheme , where we sample from the analytical form of the distribution of the network weights .this can be done if we assume an analytic form for the prior hyperparameters distribution , in particular a distribution uniform on a logarithmic scale : this distribution is _ non - informative _ , i.e. it embodies our complete lack of knowledge about the values the hyperparameters should take .the chosen model was trained using a sequence of little less than a million of patterns ( we sampled the system at 4096hz for 240s ) normalized to zero mean and unity variance with a _ whitening _ process .note that the input - output pairs were processed through discrete integrators to obtain pattern - target pairs , as shown in figure 3 .the nn was then tested on a 120s long sequence .the nn was trained for epochs , with the hyperparameters being updated every epochs .a close look at the hyperparameters shows that all the inputs are relevant for the model ( they are of the same magnitude ; note that this further confirms the pre - processing analysis ) .the hyperparameter shows that the data contribution to the error is very small ( as we would expect , since the data are synthetic ) .the simulations were made using the matlab language , the netlab toolbox and other software designed by us . in figure 7 , the psds of the target and the predicted time series are shown ; in the lower part of the figure is reported the psds of the prediction residuals . in figure 8 ,the psd of a 100hz sine wine added to the noise is shown , with the signal extracted by the network .as can be seen , the network recognizes the frequency of the sine wave with the maximum precision allowed by the residuals .in this paper we have shown some preliminary tests on the use of nn s for signal processing in the virgo project .some observations can be elicited from the experimental results : * in evaluating the power spectral densities ( psds ) , we made the hypothesis that the system is sampled at .it is only a work hypothesis , but it shows how the network reproduces the system dynamics up to . 
Note that the PSDs remain nearly the same even close to the Nyquist frequency. * The PSD of the residuals shows a nearly white spectrum, which is an index of the goodness of the model (see ); a short numerical sketch of this whiteness check is given after the list below. Future work will address: * to increase the system model order and to test whether there are significant differences in prediction; * to test the models with a greater number of samples to obtain a better estimate of the system dynamics; * to model the noise inside the system model to improve the system performance and to allow multi-step-ahead prediction (i.e. an output-error model).
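The following is a minimal sketch of the residual-whiteness check referred to above; the synthetic target and prediction series stand in for the interferometer data, and the use of a Welch periodogram is an assumption of this illustration rather than the procedure of the original analysis.

```python
import numpy as np
from scipy.signal import welch

fs = 4096.0                                   # sampling rate used in the text
rng = np.random.default_rng(1)
target = rng.standard_normal(int(fs) * 10)    # stand-in for the target series
prediction = target + 0.05 * rng.standard_normal(target.size)  # stand-in for the MLP output

residuals = target - prediction
f, p_res = welch(residuals, fs=fs, nperseg=4096)

# A residual PSD that is flat (white) over the band indicates that the model has
# captured the coloured part of the dynamics; strong peaks would signal unmodelled structure.
print("residual PSD max/mean ratio: %.2f" % (p_res.max() / p_res.mean()))
```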
In this paper a neural-network-based approach is presented to identify the noise in the VIRGO context. VIRGO is an experiment to detect gravitational waves by means of a laser interferometer. Preliminary results appear to be very promising for the data analysis of realistic interferometer outputs.
the internet has become a near indispensable tool with both private individuals and organizations becoming increasingly dependent on internet - based software services , downloadable resources like books and movies , online shopping and banking , and even social networking sites .the issue of network security has become significant due to the prevalence of software with malicious or fraudulent intent .malware is the general term given to a broad range of software including viruses and worms designed to infiltrate a computer system without the owner s permission .cohen s conclusion in his 1987 paper that computer viruses are potentially a severe threat to computer systems is still valid in real networks today .current security systems do little to control the spread of malicious content throughout an entire network .most security systems are designed to protect a single computer unit .these properly protected units make up only a fraction of online computers .these highlight the necessity of examining the dynamics of the spread of malware in order to be able to develop proper control strategies .studies on the spread of malware in computer networks date back to the late 1980s and are generally based on the mathematical approach to the spread of diseases in biological populations .math models developed for spread of malware within a computer network such as the kephart - white model and other models adapted from it are based on the kermack - mckendrick model .these models have an implicit assumption that all nodes in the network are always available for `` contact '' . however , it is a basic limitation of malware that it can only be passed on to another computer if there is a path through which information can be passed , so the states of the nodes of the network whether they are online or offline have an effect on the dynamics of the spread . in this work , we model the spread of malware utilizing an ising system to represent an isolated computer network .the state of each node is a composite of its connection status and health .the spin state of a node defines its connection status to be either online or offline .connections are established with the premise that autonomous networks configure themselves .the health status describes whether a node has been infected or not , and infection can propagate only among online nodes .the ising model was originally intended for simulating the magnetic domains of ferromagnetic materials .its versatility has allowed it to be applied to other systems wherein the behavior of individuals are affected by their neighbors .it has been applied to networks and network - like systems such as neural networks , cooperation in social networks , and analysing trust in a peer - to - peer computer network .a computer network is modeled by an ising spin system . associated with each nodeis a spin corresponding to two possible states : for online and for offline .the local interaction energy is given by the interaction parameter , , determines the degree and type of dependence of on its neighbors .the nearest neighbors or local neighborhood are defined according to the network topology and are usually von neumann or moore neighborhoods . 
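A minimal sketch of the lattice update used in models of this kind is given below; the parameter values, the neighbour-to-neighbour transmission rule and the Metropolis-style acceptance step are illustrative assumptions and may differ in detail from the simulation described in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
spins = -np.ones((N, N), dtype=int)      # all nodes start offline, as in the text
infected = np.zeros((N, N), dtype=bool)
J, kT, p_inf = 1.0, 2.0, 0.1             # illustrative parameter values (assumptions)

def local_energy(s, i, j):
    # Nearest-neighbour (von Neumann) interaction energy with periodic boundaries.
    nb = s[(i + 1) % N, j] + s[(i - 1) % N, j] + s[i, (j + 1) % N] + s[i, (j - 1) % N]
    return -J * s[i, j] * nb

def sweep(s):
    # One left-to-right, top-to-bottom Metropolis-style sweep of the lattice.
    for i in range(N):
        for j in range(N):
            dE = -2.0 * local_energy(s, i, j)   # energy change if s[i, j] flips
            if dE < 0 or rng.random() < np.exp(-dE / kT):
                s[i, j] *= -1

def spread(s, inf):
    # Assumed rule: online susceptibles next to an online infective are infected
    # with probability p_inf; offline nodes neither transmit nor receive.
    online = s == 1
    src = inf & online
    exposed = np.zeros_like(inf)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        exposed |= np.roll(src, shift, axis=(0, 1))
    inf |= exposed & online & ~inf & (rng.random(inf.shape) < p_inf)

infected[rng.integers(N), rng.integers(N)] = True   # a single initial infective
for step in range(30):
    sweep(spins)
    spread(spins, infected)
print("fraction infected after 30 sweeps:", infected.mean())
```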
summing up all local energies gives the total energy , , of the system .global energy , , is associated with network efficiency and more efficient networks are characterized by lower energies .note that while interaction energies are explicitly dependent on the nearest neighbors , the state of each node is implicitly dependent on the state of the entire system .a node will change its configuration provided that the new energy of the system is lower than the previous .if the resulting energy is higher , the new configuration is accepted with probability in the standard ising procedure , is the change in energy , is temperature , and is the boltzmann constant . here , relates to network traffic . to model the spread of infection , each nodeis assigned a health status separate from its spin .the health status is either infected or susceptible .every online susceptible has a probability of becoming infected , where offline nodes do not transmit or receive data .hence , they do not participate in the infection part .[ [ program - specifics ] ] program specifics + + + + + + + + + + + + + + + + + the computer network is a lattice .nearest neighbors are defined to be the four adjacent nodes .the interaction parameters are all set to .eq.[generalising ] becomes for the interaction energy calculations , circular boundary conditions are imposed .parameters are scaled such that .initially , all nodes are offline ( ) .every time step , the entire system is swept in a left - to - right top - to - bottom fashion , evaluating each node for a possible change in state .the mean energy per node of each configuration is stored and averaged at the end of the run .the spread of infection begins with a single infective . at ,one node is selected at random and infected . as the infection spreads , the number of susceptibles , , and infectives , , for each time step are stored . because no means for removal of infection is provided , all nodes eventually become infected .it is at this time that the program is terminated .the model was tested for -values ranging from to .the infection curves of five trials were averaged for each .the average infection curve was normalized by dividing it by the total number of nodes to get the fraction of infectives . because it can no longer be assumed that nodes are always available for connection , a regular decay equation is used to model the fraction of infectives curve .a system with nodes has susceptibles and infectives at time . within the time - frame , the number of susceptibles being converted to infectives is .as time passes , decreases as the population of susceptibles is exhausted .thus , the probability of conversion , given by decreases with time . in equation form , this is where is the decay constant .the solution to eq.[decayeq ] is where , the initial number of susceptibles , is just the total number of units in the system . using these , the expression for the number of infectives , may be written as this may be normalized to note that the actual rate of spread varies with time , and provides a measure of the average rate of spread .the fits were made using the unweighted levenberg - marquardt algorithm of gnuplot ver.4.2 initialized with . for consistency , because some runs terminate very rapidly , we consider only the first 50 time - steps . during the first 50 iterations _ : the rate of spread of the infection increases with . 
[Figure caption, continued: the resulting decay constants for each traffic value.] From Fig. [allcompare] it appears that the spread of infection becomes faster as the traffic parameter increases. For the two lowest traffic values the rates of spread are very slow, neither reaching full infection at the last iteration; in particular, for the lowest value no new infectives were produced. These low-traffic systems are not dynamic, as nodes have a low probability of coming online from their initial offline state. The network is also very efficient, which may be interpreted as information exchange being limited to necessary transactions. For this reason there is little information exchange and hence a slow spread. For the highest traffic values the spread is rapid and nearly full infection is reached. This suggests that very high traffic means a large volume of information exchange, which leads to a faster spread of infection. The system is also inefficient at very high traffic. It is worth mentioning that the average infection curves for the two highest traffic values nearly coincide, indicating rates of spread that are very similar. [Figure [exponents]: traffic dependence of the rates; the increase in the rate of infection corresponds with the decrease in efficiency of the network. Note that the plotted energy (efficiency) values are negative.]
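As an illustration of how the decay constant can be extracted from an averaged infection curve, the following sketch fits the normalised decay model to synthetic data with a least-squares routine; scipy is used here in place of the gnuplot Levenberg-Marquardt fit quoted in the text, and the synthetic curve is an assumption for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def infective_fraction(t, lam):
    # Normalised infection curve i(t)/N = 1 - exp(-lambda * t) from the decay model.
    return 1.0 - np.exp(-lam * t)

# Synthetic stand-in for an averaged, normalised infection curve (first 50 steps).
t = np.arange(50)
rng = np.random.default_rng(2)
data = infective_fraction(t, 0.08) + rng.normal(0.0, 0.01, t.size)

popt, pcov = curve_fit(infective_fraction, t, data, p0=[0.05])
print("fitted decay constant lambda = %.3f +/- %.3f" % (popt[0], np.sqrt(pcov[0, 0])))
```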
we introduce an ising approach to study the spread of malware . the ising spins up and down are used to represent two states online and offline of the nodes in the network . malware is allowed to propagate amongst online nodes and the rate of propagation was found to increase with data traffic . for a more efficient network , the spread of infection is much slower ; while for a congested network , infection spreads quickly . computer networks ; computer viruses ; epidemiology
it is well known that quantum key distribution is one of the most interesting subjects in quantum information science , which was pioneered by c.bennett and g.brassard in 1984[1 ] . in the original paper of bennett ,single photon communication was employed as implementation of quantum key distribution . however , despite that it is not essential in great idea of bennett , many researchers employed single photon communication scheme to realize bb84 , b92[2 ] . because of the difficulties of single photon communication in practical sense , it was discussed whether one can realize a secure key distribution guaranteed by quantum nature based on light wave communication or not . in 1998 ,h.p.yuen and a.kim[3 ] proposed another scheme for key distribution based on communication theory(signal detection theory ) .this scheme corresponds to an implementation of secret key sharing which was information theoretically predicted by maurer[4 ] , et al .however , yuen s idea was found independently from maurer s discussion . in the first paper of yuen - kim[3 ] , they showed that if noises of eve(eavesdropper ) and bob(receiver ) are statistically independent , secure key distribution can be realized even if they are classical noises , in which they employed a modification of b92 protocol[2 ] . following yk s first paper ,a simple experimental demonstration of yk protocol based on classical noise was reported[5 ] , and recently yk scheme with 1 gbps and 10 km long fiber system based on quantum shot noise was demonstrated[6 ]. however , these schemes are not unconditional secure .that is , ability of signal detection of eve can be superior to that of bob . as a result, an interesting question arises `` is it possible to create a system with current technology that could provide a communication in which always bob s error probability is superior to that of eve ? '' in proceedings paper of qcm and c 2002 , yuen and his coworker reported that yk protocol can be unconditional secure , even if one uses conventional optical communication system[7 ] .this is interesting result for engineer , and will open a new trend of quantum cryptography . in this report, we simulate practical feature of yuen - kim protocol for quantum key distribution with unconditional secure , and propose a scheme to implement them using our former experimental setup[6 ] .a fundamental concept of yuen - kim protocol follows the next remark . + * remark * : _ if there are statistically independent noises between eve and bob , there exist a secure key distribution based on communication . _ + they emphasized that the essential point of security of the key distribution is detectability of signals .this is quite different with the principle of bb-84 , et al which are followed by no cloning theorem .that is , bb-84 and others employ a principle of disturbance of quantum states to give a guarantee of security , but yk protocol employs a principle of communication theory .it was clarified that this scheme can be realized as a modification of b-92 . however ,this scheme allows us use of classical noise , and it can not provide unconditional secure .then , yuen and his coworker showed that yk scheme is to be unconditional secure in which a fundamental theorem in quantum detection theory was used for his proof of security as follows . 
+ *theorem*: (helstrom-holevo-yuen) + _signals with non-commuting density operators cannot be distinguished without error._ + so if we assign non-commuting density operators to the bit signals 1 and 0, then one cannot distinguish them without error. when this error is 1/2, originating from quantum noise, there is no way to distinguish them at all. we would therefore like to create such a situation in the process between alice and eve. to do so, a new version of the yk scheme was given as follows: * the sender (alice) uses an explicit key (a short key: , expanded into a long key: by use of a stream cipher) to modulate the parameters of a multimode coherent state. * the state is prepared. the bit encoding can be represented as follows: where . * alice uses the running key to specify a basis from a set of m uniformly distributed two-mode coherent states. * the message is encoded as . this mapping of the stream of bits is the key to be shared by alice and bob. because of his knowledge of the key, bob can demodulate from to . here, let us recall the original discussion of the security. the ciphering angle can in general be a discrete or a continuous variable, determined by the distribution of the keys. a ciphered two-mode state may be ; the corresponding density operator over all possible choices of is , where or . the problem is to find the minimum error probability that eve can achieve in determining the bit. finding the optimum detection process for discriminating between and is a problem of quantum detection theory; the solution is given in [8]. as an example of an encoding that creates , the error probability of eve, yuen et al. suggested a certain modulation scheme. in that scheme, the closest values of a given can be associated with bits distinct from the bit at position , and two closest neighboring states represent distinct bits, which defines a set of basis states. in this scheme, they assumed that one chooses the sets of basis states (keying states for 1 and 0) for the bits without overlap. the error probability for the density operators and approaches 1/2 as the number of sets of basis states increases; the asymptotic behavior of the error probability depends on the amplitude of the coherent state [7][9]. the original yk scheme described above can be realized with practical devices. to apply it to a fiber communication system, we would like to realize it with an intensity modulation / direct detection scheme. if one does not insist on the perfect yk scheme, one can simplify the implementation of the yk protocol further. from a fundamental principle of quantum detection theory, we can construct non-commuting density operators from sets of non-orthogonal states when one does not allow overlap in the selection of the sets of basis states for 1 and 0. on the other hand, when we allow overlap in the selection of the sets of basis states, one can use orthogonal states to construct identical density operators for 1 and 0, that is, . however, in this case the only unknown factor for eve is the initial short key, and the stream of bits that eve observes is exactly the same as that of alice and bob, even though eve cannot estimate the bits at that time. this still gives an insecure situation. so, here, we employ a combination of non-orthogonality and overlap selection in order to reduce the number of basis sets. let us assume that the maximum amplitude is fixed as . we divide it into 2m levels, so we have m sets of basis states. the total set of basis states is shown in fig. 1. each set of basis states is used for , and , depending on the initial keys.
so the density operators for 1 and 0 seen by eve are, for the sets of , , , . let us assign 0 and 1 in the same way as in eqs. (4),(5). in this case, eve cannot get the key information, but she can try to learn which quantum states are used for the bit transmission. this is therefore the problem of discriminating 2m pure states, and the error probability is given by . although we have many results for the calculation of optimum detection problems [10][11][12], solving this problem is still difficult at present, because the set of states does not have a completely symmetric structure. so we here give a lower bound and a tight upper bound. the lower bound is given by the minimum error probability for the signal set consisting of the two neighboring states; it is the two-state minimum error probability (helstrom bound). the upper bound is obtained by applying the square root measurement to the 2m pure states. the numerical properties are shown in fig. 2-(a). thus, if m increases, then eve's error in learning the quantum states increases; pure guessing corresponds to an error of 1/2. the error probability of bob, however, is independent of the number of sets of basis states, and it is given as follows: . we emphasize that eve cannot get the key information at this stage, because the information for 1 and 0 is modulated in the way of eqs. (4),(5). furthermore, this scheme can send 2m bits with m sets of basis states. let us instead apply the original scheme, in which m bits are sent with m sets of basis states. in this case, eve will try to get the key information, so the density operators for eve become mixed states, consisting of the sets of states which send 1 and 0, respectively. the numerical properties are shown in fig. 2-(b). both schemes have almost the same security, but the latter can only send m bits with m sets of basis states; in other words, the number of sets is reduced to 1/2 in the former scheme. in implementing the yk protocol with a conventional fiber communication system, we use here our previously proposed system. figure 3 shows the experimental setup. the laser diode serves as a 1.3 µm light source. a pattern generator provides a signal pulse string to send the keys. a modulator which selects the basis states follows a driver of the laser diode. the selector gives the selection of the amplitude and the assignment of 1 and 0, and is controlled by the initial keys. the laser driver is driven by the output signals of the modulator. the optical divider corresponds to eve: case 1 is of the ``opaque'' type, and case 2 is of the ``translucent'' type. the channel consists of a 10 km fiber and an attenuator; we can change the distance equivalently from 10 km to 200 km by the attenuator. the speed of the pulse generator driving the laser diode is 311 mbps, 622 mbps, or 1.2 gbps. the detector of bob is an ingaas pin photodiode loaded by a 50 ohm resistor, and it is connected to an error probability counter which can operate up to 12 gbps. the dark current is 7 and the minimum received power of our system is about -30 dbm. in this system, the only issue for the degree of security is the power advantage of an eve placed near the transmitter (alice). when the eavesdropping is opaque, the error probability of bob increases drastically, and the error probability counter shows almost 1/2, which means that the error of eve is also 1/2. in this case, the communication distance is not so important: we can detect the existence of eve at any distance of the channel. when the eavesdropping is translucent, eve has to take only a small amount of power ( ) from the main stream of the bit sequence in order to avoid disturbing the power level.
in this case, the error of bob does not increase; as a result, alice and bob cannot detect the existence of eve. the secure communication distance then depends on the error probabilities of bob and eve. let be the transparency of the channel from alice to bob. the detectability for bob in this experimental setup depends on the signal distance (the amplitude difference between the two states of a basis set): for , and that of eve depends on the signal distance: . here we assume that , and that the total loss is 20 db, which corresponds to 100 km. since our receiver requires about -30 dbm, the transmitter power is -10 dbm. when m increases, the error of eve increases sufficiently. we examined a simulation of the yk protocol based on an intensity modulation / direct detection fiber communication system, and showed a design for the implementation of a secure system based on our experimental setup, which was used to demonstrate the first version of an implementation of the yk protocol. we will soon report a complete experimental demonstration with the above system.
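the error probabilities discussed above come down to discriminating nearly identical coherent states. as a self-contained illustration (not the authors' computation), the sketch below evaluates the standard helstrom bound for two pure coherent states separated by a small amplitude spacing; the amplitude and spacings are assumed values.

```python
import numpy as np

def helstrom_error(alpha, beta):
    """Minimum error probability for equiprobable pure coherent states |alpha>, |beta>.
    Uses |<alpha|beta>|^2 = exp(-|alpha - beta|^2) and
    P_e = (1 - sqrt(1 - |<alpha|beta>|^2)) / 2."""
    overlap_sq = np.exp(-abs(alpha - beta) ** 2)
    return 0.5 * (1.0 - np.sqrt(1.0 - overlap_sq))

# neighboring states separated by a small amplitude spacing delta (illustrative values);
# as delta -> 0 the error approaches 1/2, i.e. pure guessing
alpha0 = 5.0
for delta in (0.05, 0.1, 0.2, 0.5, 1.0):
    print(f"delta = {delta:4.2f}  ->  P_e(Helstrom) = {helstrom_error(alpha0, alpha0 + delta):.4f}")
```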
in this report, we simulate practical features of the yuen-kim protocol for quantum key distribution with unconditional security. in order to demonstrate them experimentally with an intensity modulation / direct detection (imdd) optical fiber communication system, we use a simplified encoding scheme to guarantee the security of the key information (1 or 0); that is, a pairwise m-ary intensity modulation scheme is employed. furthermore, we give an experimental implementation of the yk protocol based on imdd.
due to the importance of the primes, mathematicians have been investigating them for many centuries. in 1801, carl gauss, one of the greatest mathematicians, stated that the problem of distinguishing the primes from the non-primes is one of the outstanding problems of arithmetic. euclid's proof of the infinitude of the primes is one of the first and most brilliant achievements of number theory. the greeks knew prime numbers and were aware of their role as the building blocks of the other numbers. moreover, the most natural questions asked were these: what order do the prime numbers follow, and how can one find prime numbers? up to now there have been many attempts to find a formula producing the prime numbers, or a model for the appearance of the primes among the other numbers, and although these attempts have helped to develop number theory, the complicated structure of the prime numbers has not been decoded. in recent years, the prime numbers have attained an exceptional position in the field of cryptography. for example, the ``rsa'' system is one of the most widely applied systems in this field and relies on prime numbers; it is used in most computerized systems and serves as a main protocol for secure internet connections used by states, large companies and universities. in 2004, manindra agrawal and his students at the indian institute of technology kanpur developed an algorithm, called aks, for detecting prime numbers. in 2006, 2008, 2009 and, most recently, 2013, mathematics students in the project for detecting mersenne prime numbers by the computer network gimps succeeded in discovering the largest known prime number. all such cases indicate the importance of mersenne's theorem, or of any other approach for finding the largest prime numbers. by generalizing mersenne's theorem, this paper can accelerate the search for the largest prime numbers. in addition, new equations and an algorithm for attaining the largest primes are provided. assume that is a natural number greater than 1; related to n, natural numbers and are defined as below: if is a prime number, then is a prime number too. if is not a prime number, we can write as the multiplication of two natural numbers other than : therefore is not a prime number. so must be a prime number. this theorem is a generalization of mersenne's theorem, in which and are arbitrary natural numbers. if, in the theorem, c is chosen as a multiple of and , then will not be a prime number. suppose: therefore: the last equality shows that is not a prime number. suppose is a natural number greater than ; the function related to and the natural number are defined as below: if is a prime number, then is a prime number too. in this theorem, based on a constant , consider a sequence ; we prove that the sequence is strictly ascending, i.e. . to prove the last inequality, we write: case 1: if is a multiple of : case 2: if is not a multiple of : therefore, the inequality holds.
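the mersenne primes mentioned above are the numbers 2^p - 1 that happen to be prime, and gimps searches for them with the lucas-lehmer test. the sketch below shows that standard test (it is not the generalized criterion developed in this paper), only to make the mersenne setting concrete.

```python
def is_prime(n):
    """Trial division; sufficient for the small exponents used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_mersenne_prime(p):
    """Lucas-Lehmer test for prime exponent p: 2^p - 1 is prime iff s_{p-2} == 0 (mod 2^p - 1),
    with s_0 = 4 and s_{k+1} = s_k^2 - 2."""
    if p == 2:
        return True  # 2^2 - 1 = 3 is prime
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# exponents below 40 giving Mersenne primes: 2, 3, 5, 7, 13, 17, 19, 31
print([p for p in range(2, 40) if is_prime(p) and is_mersenne_prime(p)])
```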
in this theorem, each number is higher than the corresponding mersenne number, meaning: . let be a natural number, let be the primes smaller than or equal to , and let , be natural numbers for which the following restrictions are intended: . assume that is a function of , displayed as below: . if the and conditions are satisfied, can produce all the primes less than . knowing that is odd: because it is not prime, it is the product of two odd numbers other than , and because , it has at least one prime factor; therefore is divisible by at least one of the prime factors . it is clear that the above equalities contradict the assumption of the theorem. 1. if : 2. the interval : + it is clear that by putting the minimum in the definition, the minimum followed by the minimum is obtained as below: according to the last equation, it is obvious that, being a prime number among the prime numbers smaller than , r may not be divisible by the prime factors smaller than . on the other hand, it is not necessary to check whether the prime numbers smaller than divide in order to detect it as a prime number. indeed, for obtaining the prime numbers, we only require in to enter the condition of a prime factor. if is considered to be a prime number bigger than , we can use instead of in this theorem, because the prime numbers smaller than include the prime numbers smaller than . prime numbers smaller than 120: \{ , prime numbers smaller than : } + let be a natural number, let be the primes smaller than or equal to , and also consider that are the primes larger than . suppose that and are members of the natural numbers and that are members of the whole numbers; these variables are selected arbitrarily. a function related to the values , natural numbers and whole numbers is defined as below: if is a natural number less than , then is a prime number. if is not a prime number, it has a prime factor . on the other side, because , it has at least one prime factor. so, it is arbitrarily supposed that is divisible by . because is not a divisor of any , we have: . we have reached a contradiction to the assumption; thus, the theorem is verified. we obtain the prime numbers smaller than 119. + table 1: prime numbers smaller than 119 obtained with the theorem (table contents omitted). and continuing so. suppose that: (the order is considered in the primes) generally speaking, the theorem holds for because includes the same primes. is obviously not divisible by , and according to the primality of the number , we have: . to attain the prime numbers, we divide the intervals as below: with regard to the relationship , this is easier to write. in the example of the primes less than , the range can be divided into three sections , and ; then a distinct relation is asserted for each . prime numbers smaller than 48: and continuing so. by integrating the relations, particularly using relation 2 and notation 5.4, we can obtain an algorithm for finding the largest prime numbers; one of them is as below: first of all, assign as one so as to obtain the minimum of . then, in the following equation, give a counter to through the minimum, so long as k is a member of the natural numbers; meanwhile, should not be even. if is not a prime number, it has a prime factor . on the other side, because , it has at least one prime factor; therefore, without loss of generality, we can assume that is divisible by : we have reached a contradiction to the assumption; thus, the theorem is verified. p. ribenboim, the little book of bigger primes, 2nd ed., springer science & business media, 2004. w.
stein, ``elementary number theory: primes, congruences, and secrets'', pp. 1-172, 2009. r. crandall and c. b. pomerance, prime numbers: a computational perspective, springer science & business media, 2006. m. agrawal, n. kayal, and n. saxena, ``primes is in p'', ann. of math., pp. 781-793, 2004. ``list of known mersenne prime numbers - primenet''. [online]. available: http://www.mersenne.org/primes/ [accessed: 08-apr-2015].
today, prime numbers have attained an exceptional position in the areas of number theory and cryptography. as we know, the pursuit of ever larger prime numbers has relied on mersenne's theorem; although this has led to a vast development of the related numbers, it has slowed the discovery of new record primes to one every one to five years. this paper develops theorems that are more general than mersenne's theorem and that accelerate access to large prime numbers. the reason mersenne's theorem has been used so frequently is that, until now, no one could find a more efficient formula for reaching the largest prime numbers. this paper provides relations for the prime numbers from which one can define several formulas for obtaining primes in any interval; owing to the flexibility of these relations, a new branch in the search for large prime numbers can be opened, and an algorithm for finding the largest prime numbers is provided at the end of the paper.
the discontinuous galerkin method nowadays is a well-established method for solving partial differential equations, especially for time-dependent problems. it has been thoroughly investigated by cockburn and shu as well as hesthaven and warburton, who summarized many of their findings in and , respectively. concerning maxwell's equations in the time domain, the dgm has been studied in particular in . the former two apply tetrahedral meshes, which provide flexibility for the generation of meshes also for complicated structures; the latter two make use of hexahedral meshes, which allow for a computationally more efficient implementation. in the authors state that the method can easily deal with meshes with hanging nodes, since no inter-element continuity is required, which renders it particularly well suited for hp-adaptivity. indeed, many works are concerned with h-, p- or hp-adaptivity within the dg framework. the first published work of this kind is presumably , where the authors consider linear scalar hyperbolic conservation laws in two space dimensions; for a selection of other publications see and references therein. the latter three are concerned with the adaptive solution of maxwell's equations in the time-harmonic case. in this article, we are concerned with solving the maxwell equations for electromagnetic fields with arbitrary time dependence in a three-dimensional domain. they read [eq:maxwell]
$$\frac{\partial \mathbf{B}}{\partial t} + \nabla \times \mathbf{E} = 0, \qquad \frac{\partial \mathbf{D}}{\partial t} - \nabla \times \mathbf{H} = -\mathbf{J},$$
with the spatial variable and the temporal variable , subject to boundary conditions specified at the domain boundary and initial conditions specified at time . the vectors of the electric field and flux density are denoted by $\mathbf{E}$ and $\mathbf{D}$, and the vectors of the magnetic field and flux density by $\mathbf{H}$ and $\mathbf{B}$. the electric current density is denoted by $\mathbf{J}$. however, we assume the domain to be source free and free of conductive currents ($\mathbf{J}=0$). furthermore, we assume heterogeneous, linear, isotropic, non-dispersive and time-independent materials in the constitutive relations $\mathbf{D}=\varepsilon\mathbf{E}$ and $\mathbf{B}=\mu\mathbf{H}$; the material parameters $\mu$ and $\varepsilon$ are the magnetic permeability and dielectric permittivity. at the domain boundary, we apply either electric ($\mathbf{n}\times\mathbf{E}=0$) or radiation boundary conditions, where $c=1/\sqrt{\mu\varepsilon}$ denotes the local speed of light. we also introduce the electromagnetic energy contained in a volume, obtained by integrating the energy density $w=\tfrac{1}{2}(\mathbf{E}\cdot\mathbf{D}+\mathbf{H}\cdot\mathbf{B})$. this paper focuses on a general formulation of the dgm on non-regular hexahedral meshes as well as the projection of solutions during mesh adaptation. the issues of optimality of the projections and stability of the adaptive algorithm are addressed. special emphasis is put on discussing the computational efficiency.
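as a small aside, the electromagnetic energy just introduced can be evaluated for discretely sampled fields as in the sketch below; the grid, field values and material parameters are placeholders, not data from the paper.

```python
import numpy as np

def em_energy(E, H, eps, mu, dV):
    """Total electromagnetic energy: sum over cells of 1/2 (eps |E|^2 + mu |H|^2) * dV.
    E, H: arrays of shape (nx, ny, nz, 3); eps, mu: scalars or arrays broadcastable to (nx, ny, nz)."""
    w = 0.5 * (eps * np.sum(E**2, axis=-1) + mu * np.sum(H**2, axis=-1))
    return np.sum(w) * dV

# illustrative uniform grid with vacuum-like material values
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
E = np.zeros((16, 16, 16, 3)); E[..., 1] = 1.0      # 1 V/m in the y-direction
H = np.zeros((16, 16, 16, 3))
print(em_energy(E, H, eps0, mu0, dV=(1e-3)**3), "J")
```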
to the best of our knowledge, this is the first publication dealing with dynamical hp-meshes for the maxwell time-domain problem employing the dg method in three-dimensional space. as they are key aspects of adaptive and specifically hp-adaptive methods, we will also address the issues of local error and smoothness estimation. this includes comments on the computational efficiency of the estimates. as the estimators are not at the core of this article, the discussion is, however, rather short. we perform a tesselation of the domain of interest into hexahedra such that the tesselation is a polyhedral approximation of . the tesselation is not required to be regular; however, it is assumed to be derivable from a regular root tesselation by means of element bisections. the number of element bisections along each cartesian coordinate which is required to obtain an element of is referred to as its refinement levels. as we allow for anisotropic bisection, the refinement levels of one element may differ; in the case of isotropic refinement we simply use . the intersection of two neighboring elements is called their interface, which we denote as . as we consider non-regular grids, every face of a hexahedral element may be partitioned into several interfaces, depending on the number of neighbors, such that . the face orientation is described by the outward pointing unit normal . the union of all faces is denoted as , and the internal faces are denoted as . finally, the volume, area and length measures of elements, interfaces, faces and edges are referred to as , , and , where denotes any of the cartesian coordinates. every element of the tesselation is related to a master element $[-1,1]^3$, which reduces the number of addends to the minimum possible. for the merging of elements, the approximation within the parent element , , is considered to be given piece-wise within its child elements. the projection reads , where the simplifications ([eq:projsolutionrefine2]) and ([eq:projsolutionrefine3]) apply. for the case of p-enrichments, the local fes are amended with the order- basis functions , where any (non-zero) number of the local maximum approximation orders may be increased; also, an enrichment by more than one higher-order basis function is possible. formally, we perform the orthogonal projection ([eq:projectionoperator]); however, due to the orthogonality property of the basis functions, the coefficients remain unaltered under a projection from to . practically, we simply extend the local vectors of coefficients with the new coefficients, which are initialized to zero. conversely, for the case of a p-reduction, the local fes is reduced by discarding the highest-order basis functions . again, by virtue of the orthogonality, we find that the corresponding coefficients are deleted from the local vectors of coefficients while the remaining coefficients are unaltered. we denote by the projection of the global approximation from the current discretization to another one obtained by local h- and p-adaptations. an approximation with coefficients according to is optimal in the sense of ([eq:dgapproxerror]). the approximations within refined and merged elements with coefficients obtained through the orthogonal projections and are, hence, optimal in the same sense. if and hold true for all , the fes is a subspace of (cf. ([eq:legendrespaceunion])) and every function of is representable in , but not vice versa.
in this case, a given approximation is exactly represented within an element under h-refinement but not under h-reduction. see fig. [fig:adaplegendreremarks] for an example.
figure [fig:adaplegendreremarks] caption: (a) under refinement the approximations of the parent and the child elements agree point-wise; (b) the projection to a merged element can, in general, not be exact due to the discontinuity.
since the projections ([eq:projsolutionrefine]) for performing h-refinement are independent of the actual approximation, we also tabulated the projection operators and (expressed in master basis functions), yielding the matrix operators and , where the superscript denotes that the refinement level is increased. accordingly, we make use of the matrix operators and for evaluating the projections of ([eq:projsolutionreduce]) in the case of element merging. the matrix operators are related as . this allows for the computation of the approximations within adapted elements by means of efficient matrix-vector multiplications. as all projection matrices are triangular, the evaluation can be carried out as an in-place operation requiring no allocation of temporary memory. the global approximation associated with an adapted grid is computed as ; it can be considered as initial conditions applied on the new discretization obtained by performing the refinement operations. assuming stability of the time stepping scheme (cf. ), it is sufficient to show that the application of the projection operators at some time does not increase the electromagnetic energy associated with the approximate dg solution, i.e. . in this case it follows , and thus stability of the adaptive scheme. following ([eq:energydensity]), the energy associated with element is given as . as a consequence of ([eq:projsolutionrefine2]), it is sufficient to show that the energy ([eq:dgenergy]) is non-increasing during any adaptation involving one coordinate only.
h-refinement: for the following discussion of stability it is assumed that refinement is carried out along the -coordinate. also, we assume the maximum approximation orders and to be identical; it is clarified later that this does not pose a restriction to the general validity of the results. in the case of h-refinement, the operators and project from the space to the larger space . following the argument of paragraph [sec:optimality] on optimality, any function defined in the space is exactly represented in . the conservation of the discrete energy is a direct consequence, as the approximations in the parent and child elements are point-wise identical. we find the following relation for the 2-norms of the respective local vectors of coefficients: . the exemplary parent element approximation plotted in fig. [fig:adaplegendreremarks]a has a maximum order of with all coefficients equal to one. the coefficients of the child element approximations and the square values of their 2-norms are given in tab. [tab:adapdgcoeffsr].
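to make the tabulated h-refinement projections concrete, the following one-dimensional sketch (not the paper's three-dimensional implementation) builds the projection matrices from a parent legendre basis onto the two child half-intervals and applies them to a parent approximation whose coefficients are all one, as in the example above; the order and normalization are illustrative choices.

```python
import numpy as np
from numpy.polynomial import legendre as L

def phi(k, x):
    """L2-orthonormal Legendre basis on [-1, 1]."""
    c = np.zeros(k + 1); c[k] = 1.0
    return np.sqrt((2 * k + 1) / 2.0) * L.legval(x, c)

def child_projection(p_max, half):
    """Matrix P with child coefficients = P @ parent coefficients.
    half = -1 for the left child [-1, 0], +1 for the right child [0, 1]."""
    xi, w = np.polynomial.legendre.leggauss(2 * p_max + 2)   # quadrature on the child master element
    x_parent = 0.5 * (xi + half)                              # map child master coordinate to parent coordinate
    P = np.zeros((p_max + 1, p_max + 1))
    for j in range(p_max + 1):
        for k in range(p_max + 1):
            P[j, k] = np.sum(w * phi(j, xi) * phi(k, x_parent))
    return P

p_max = 3
c_parent = np.ones(p_max + 1)                 # parent approximation with all coefficients equal to one
PL, PR = child_projection(p_max, -1), child_projection(p_max, +1)
cL, cR = PL @ c_parent, PR @ c_parent

# the projection is exact: parent and left-child representations agree point-wise on [-1, 0]
x = np.linspace(-1, 0, 5)
u_parent = sum(c_parent[k] * phi(k, x) for k in range(p_max + 1))
u_left   = sum(cL[k] * phi(k, 2 * x + 1) for k in range(p_max + 1))
print(np.max(np.abs(u_parent - u_left)))      # ~ machine precision

# the physical L2 "energy" of the parent equals the sum over the children (Jacobian 1/2 per child)
print(np.dot(c_parent, c_parent), 0.5 * (np.dot(cL, cL) + np.dot(cR, cR)))
```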
if the vector is considered to be either the vector of coefficients of the electric field or of the magnetic field, the result agrees with ([eq:dgenergy1drefine]).
table [tab:adapdgcoeffsr] caption: parent and child element coefficients of the function plotted in fig. [fig:adaplegendreremarks]a (tabular data not reproduced here).
we chose a maximum h-refinement level of two and the local element order to vary between zero and four. the local energy density, introduced in ([eq:energydensity]), serves as the criterion for controlling the adaptation procedure. denoting by the average energy density of element and by the normalized energy density with , we assigned the local refinement levels according to and the polynomial orders as , with . the initial discretization consisted of elements. during the simulation the number of elements varied and grew strongly after scattering from the reflector took place, when it reached close to 800,000 elements, corresponding to slightly more than 55 million dof. for comparison, we note that employing the finest mesh resolution globally as well as fourth-order approximations uniformly would lead to approximately 7.5 billion ( ) dof; this corresponds to a factor of approximately 130 in terms of memory savings. we emphasize that the simulations were carried out on a single machine. the implementation takes full advantage of multi-core capabilities through openmp parallelization. the numerous run-time memory allocations and deallocations are handled through a specialized memory management library based on memory blocking, which we implemented for supporting the main code. figure [fig:eygrids] depicts cut-views of the y-component of the electric field and the respective hp-mesh at three instances in time. note that the scaling of the electric field differs for every time instance, which is necessary to allow for a visual inspection. the enlargement shows details of the computational grid. all elements are of hexahedral kind; however, we make use of the common tensor product visualization technique (cf.
) using embedded tetrahedra for displaying the three tensor product orders (out of which only and are visible in the depicted plane). as we employed isotropic h- as well as p-refinement, all tetrahedra associated with one element share the same color. figure [fig:signals] shows plots of the outgoing and reflected waveform recorded along the waveguide center. the blue dashed line was obtained with the commercial cst microwave studio software on a very fine mesh and serves as a cross-comparison result.
figure [fig:eygrids] caption: y-component of the electric field (top panel) and the computational grid (bottom panel) at three instances in time; the enlargement shows details of the grid; hexahedral elements are used for the computation, while embedded tetrahedra serve only to display the tensor product orders in the grid view; as isotropic refinement was employed in this example, all tetrahedra associated with one element share a common color; note that different scalings are used for the time instances in the top panel.
we presented a discontinuous galerkin formulation for non-regular hexahedral meshes and showed that hanging nodes of high level can easily be included into the framework. in fact, any non-regularity of the grid can be included in a single term reflecting the contribution of neighboring elements to the local interface flux. we demonstrated that the method can be implemented such that it maintains its computational efficiency also on non-regular and locally refined meshes, as long as the mesh is derived from a regular root tesselation by means of element bisections. this is achieved by extensive tabulations of flux and projection matrices, which are obtained through (analytical) precomputations of integral terms. we also presented local refinement techniques for h- and p-refinements, which are based on projections between finite element spaces. these projections were shown to guarantee minimal projection errors in the $l^2$-sense and to lead to an overall stable time-domain scheme. local error and smoothness estimates have been addressed; both of them relate to the size of the interface jumps of the dg solution. we considered the simulation of a smooth and a non-smooth waveform in a one-dimensional domain for validating the error and smoothness estimates. as an application example in three-dimensional space, the backscattering of a broadband waveform from a radar reflector was considered; in this example the total wave propagation distance corresponds to approximately sixty wavelengths, involving thousands of local mesh adaptations. as the implementation of the derived error and smoothness estimates for three-dimensional problems is the subject of ongoing work, we chose to drive the grid adaptation using the energy density as refinement indicator. crosschecking with a result obtained using a commercial software package showed good agreement.
l. fezoui, s. lanteri, s. lohrengel, s. piperno, convergence and stability of a discontinuous galerkin time-domain method for the 3d heterogeneous maxwell equations on unstructured meshes, esaim - math model num 39 (6) (2005) 1149-1176. d. wirasaet, s. tanaka, e. j. kubatko, j. j. westerink, c. dawson, a performance comparison of nodal discontinuous galerkin methods on triangles and quadrilaterals, int j numer meth fluids 64 (10-12) (2010) 1336-1362. l. krivodonova, j. xin, j. remacle, n. chevaugeon, j.
flaherty, shock detection and limiting with discontinuous galerkin methods for hyperbolic conservation laws, appl numer math 48 (3-4) (2004) 323-338. smove, a program for the adaptive simulation of electromagnetic fields and arbitrarily shaped charged particle bunches using moving meshes, technical documentation: _-school-ce.de/files2/schnepp/smove/_.
a framework for performing dynamic mesh adaptation with the discontinuous galerkin method (dgm) is presented. adaptations include modifications of the local mesh step size (h-adaptation) and the local degree of the approximating polynomials (p-adaptation) as well as their combination. the computation of the approximation within locally adapted elements is based on projections between finite element spaces (fes), which are shown to preserve an upper limit of the electromagnetic energy. the formulation supports high-level hanging nodes and applies precomputation of surface integrals for increasing the computational efficiency. error and smoothness estimates based on interface jumps are presented and applied to the fully hp-adaptive simulation of two examples in one-dimensional space. a full wave simulation of electromagnetic scattering from a radar reflector demonstrates the applicability to large scale problems in three-dimensional space.
keywords: discontinuous galerkin method, dynamic mesh adaptation, hp-adaptation, maxwell time-domain problem, large scale simulations. msc: 65m60, 78a25
in industrial inspection, there is an ever-growing demand for highly accurate, non-destructive measurements of three-dimensional object geometries. a variety of optical sensors have been developed to meet these demands. these sensors satisfy the requirements at least partially; numerous applications, however, still wait for a capable metrology. the limitations of those sensors emerge from physics and technology: the physical limits are determined by the wave equation and by coherent noise, while the technological limits are mainly due to the space-time-bandwidth product of electronic cameras. closer consideration reveals that the technological limits are basically of information-theoretical nature. the majority of the available optical 3d sensors need large amounts of raw data in order to obtain the shape; a lot of redundant information is acquired, and the expensive channel capacity of the sensors is used inefficiently. a major source of redundancy is the shape of the object itself: if the object surface is almost planar, there is similar height information at each pixel. in terms of information theory, the surface points of such objects are ``correlated''; their power spectral density decreases rapidly. in order to remove redundancy, one can apply spatial differentiation to whiten the power spectral density (see fig. [spectra]). fortunately, there are optical systems that perform such spatial differentiation. indeed, sensors that acquire just the local slope instead of absolute height values are much more efficient in terms of exploiting the available channel capacity. further, reconstructing the object height from slope data reduces the high-frequency noise, since integration acts as a low-pass filter. there are several sensor principles that acquire the local slope: for _rough surfaces_, it is mainly the principle of shape from shading; for _specular surfaces_, there are differentiating sensor principles like differential interference contrast microscopy or deflectometry. deflectometric scanning methods allow an extremely precise characterization of optical surfaces by measuring slope variations as small as . full-field deflectometric sensors acquire the two-dimensional local gradient of a (specular) surface. using ``phase-measuring deflectometry'' (pmd), for example, one can measure the local gradient of an object at one million sample points within a few seconds. the repeatability of the sensor described in is below , with an absolute error less than , on a measurement field of mm mm and a sampling distance of mm. in several cases it is sufficient to know the local gradient or the local curvature; however, most applications demand the height information as well. as an example we consider eyeglass lenses. in order to calculate the local surface power of an eyeglass lens by numerical differentiation, we only need the surface slope and the lateral sampling distance. but for quality assurance in an industrial setup, it is necessary to adjust the production machines according to the measured shape deviation; this requires height information of the surface. another application is the measurement of precision optics. for the optimization of these systems, sensors are used to measure the local gradient of wavefronts. to obtain the shape of these wavefronts, a numerical shape reconstruction method is needed.
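the redundancy argument above, that differentiation whitens the power spectral density, can be checked numerically; the profile in the sketch below is a generic smooth bump and is purely an illustrative assumption.

```python
import numpy as np

n, dx = 1024, 1e-3
x = np.arange(n) * dx
height = np.exp(-((x - x.mean()) / (0.1 * (x.max() - x.min()))) ** 2)   # smooth, nearly planar profile
slope = np.gradient(height, dx)                                         # what a slope-measuring sensor sees

def psd(signal):
    spec = np.abs(np.fft.rfft(signal)) ** 2
    return spec / spec.max()

p_h, p_s = psd(height), psd(slope)
k = max(1, len(p_h) // 50)                       # lowest 2% of the spectrum
print("height:", p_h[:k].sum() / p_h.sum())      # close to 1 -> highly redundant
print("slope :", p_s[:k].sum() / p_s.sum())      # noticeably smaller -> flatter spectrum
```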
in the previous section we stated that measuring the gradient instead of the object height is more efficient from an information-theoretical point of view, since redundant information is largely reduced. using numerical integration techniques, the shape of the object can be reconstructed locally with high accuracy. for example, a full-field deflectometric sensor allows the detection of local height variations as small as _a few nanometers_. however, if we want to reconstruct the global shape of the object, low-frequency information is essential. acquiring _solely the slope_ of the object reduces the low-frequency information substantially (see fig. [spectra]). in other words, we have a lot of _local_ information while lacking _global_ information, because we reduced the latter by optical differentiation. as a consequence, small measuring errors in the low-frequency range will have a strong effect on the overall reconstructed surface shape. this makes the reconstruction of the global shape a difficult task. furthermore, one-dimensional integration techniques cannot be easily extended to the two-dimensional case. in two dimensions, one has to choose a path of integration. unfortunately, noisy data leads to different integration results depending on the path. therefore, requiring the integration to be path independent becomes an important condition (``integrability condition'') for developing an optimal reconstruction algorithm (see sections [problem] and [optimal]). we consider an _object surface_ to be a twice continuously differentiable function on some compact, simply connected region . the integrability condition implies that the gradient field is _curl free_, i.e. every path integral between two points yields the same value. this is equivalent to the requirement that there exists a potential to the gradient field which is unique up to a constant. most object surfaces measurable by deflectometric sensors fulfill these requirements, or at least they can be decomposed into simple surface patches showing these properties. measuring the gradient at each sample point yields a discrete vector field . these measured gradient values usually are contaminated by noise, so the vector field is not necessarily curl free. hence, there might not exist a potential such that for all . in that case, we seek a least-squares approximation, i.e. a surface representation such that the following error functional is minimized:
$$\sum_{i} \left[z_x({\bf x}_i)-p({\bf x}_i)\right]^2 + \left[z_y({\bf x}_i)-q({\bf x}_i)\right]^2 .$$
in the case of one-dimensional data, integration is a rather straightforward procedure which has been investigated quite extensively. in the case of two-dimensional data, there exist mainly two different approaches to solve the stated problem. on the one hand, there are _local methods_: they integrate along predetermined paths. the advantage of these methods is that they are simple and fast, and that they reconstruct small local height variations quite well. however, they propagate both the measurement error and the discretization error along the path, and therefore introduce a global shape deviation. this effect is even worse if the given gradient field is not guaranteed to be curl free: in this case, the error also depends on the chosen path.
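before turning to the second class of methods, note that the path dependence just described is commonly quantified through the discrete curl of the measured gradient field; a minimal sketch of that check, on synthetic slope data with an assumed noise level, is:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 256, 1e-3
y, x = np.mgrid[0:n, 0:n] * d

z = np.sin(3 * x) * np.cos(2 * y)              # synthetic "true" surface
p = 3 * np.cos(3 * x) * np.cos(2 * y)          # dz/dx
q = -2 * np.sin(3 * x) * np.sin(2 * y)         # dz/dy
p += rng.normal(0, 1e-3, p.shape)              # measurement noise on the slopes
q += rng.normal(0, 1e-3, q.shape)

# discrete curl: dp/dy - dq/dx should vanish for an integrable (curl-free) field
curl = np.gradient(p, d, axis=0) - np.gradient(q, d, axis=1)
print("rms curl:", np.sqrt(np.mean(curl**2)))
```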
on the other hand, there are _global methods_: they try to minimize by solving its corresponding euler-lagrange equation, a poisson equation of the form $\Delta z = p_x + q_y$, where $p_x$ and $q_y$ denote the numerical $x$- and $y$-derivative of the measured data $p$ and $q$, respectively. the advantage of global methods is that there is no propagation of the error; instead, the error gets evenly distributed over all sample points. unfortunately, the implementation has certain difficulties. methods based on finite differences are usually inefficient in their convergence when applied to strongly curved surfaces; therefore, they are mainly used for nearly planar objects. another approach to solve eq. ([poisson]) is based on fourier transformation. integration in the fourier domain has the advantage of being optimal with respect to information-theoretical considerations. however, fourier methods assume a periodic extension on the boundary, which cannot be easily enforced with irregularly shaped boundaries in a two-dimensional domain. in general, it is crucial to note that the reconstruction method depends on the slope-measuring sensor and the properties of the acquired data. for example, slope data acquired by shape from shading is rather noisy, exhibits curl, and is usually located on a full grid of millions of points; here, a fast subspace approximation method like the one proposed by frankot and chellappa is appropriate. on the other hand, wavefront reconstruction deals with much smaller data sets, and the surface is known to be rather smooth and flat; in this case, a direct finite-difference solver can be applied. deflectometric sensors deliver a third type of data: very large data sets with rather small noise and curl, where the data may not be complete, depending on the local reflectance of the measured surface. furthermore, the measuring field may have an unknown, irregularly shaped boundary. these properties render most of the aforementioned methods unusable for deflectometric data. in the following sections, we will describe a surface reconstruction method which is especially able to deal with slope data acquired by sensors such as phase-measuring deflectometry. the desired surface reconstruction method should have the properties of both _local and global_ integration methods: it needs to _preserve local details_ without propagating the error along a certain path. it also needs to minimize the error functional of eq. ([approx_functional]), hence yielding a _globally optimal solution_ in a least-squares sense. further, the method should be able to deal with _irregularly shaped boundaries_ and _missing data points_, and it has to be able to reconstruct surfaces of a large variety of objects with steep slopes and _high curvature values_. it should also be able to _handle large data sets_, which may consist of some million sample points. we now show how to meet these challenges using an analytic interpolation approach. a low noise level allows _interpolation_ of the slope values instead of approximation. interpolation is a special case which has the great advantage that we can ensure that small height variations are preserved. in this paper we will only focus on the interpolation approach as the analytic reconstruction method.
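the fourier-domain integration mentioned above is usually attributed to frankot and chellappa; a compact sketch of that standard method (assuming a regular grid and periodic extension, which is exactly the limitation discussed in the text) reads:

```python
import numpy as np

def frankot_chellappa(p, q, dx=1.0, dy=1.0):
    """Least-squares integration of a gradient field (p = dz/dx, q = dz/dy)
    via the Fourier domain; assumes a regular grid and periodic extension."""
    ny, nx = p.shape
    fx = np.fft.fftfreq(nx, d=dx) * 2 * np.pi
    fy = np.fft.fftfreq(ny, d=dy) * 2 * np.pi
    wx, wy = np.meshgrid(fx, fy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = wx**2 + wy**2
    denom[0, 0] = 1.0                      # avoid division by zero at the DC term
    Z = (-1j * wx * P - 1j * wy * Q) / denom
    Z[0, 0] = 0.0                          # integration constant (mean height) set to zero
    return np.real(np.fft.ifft2(Z))

# quick self-test on a smooth periodic surface
ny, nx = 128, 128
y, x = np.mgrid[0:ny, 0:nx] * (2 * np.pi / nx)
z_true = np.sin(x) * np.cos(2 * y)
p = np.cos(x) * np.cos(2 * y)
q = -2 * np.sin(x) * np.sin(2 * y)
z_rec = frankot_chellappa(p, q, dx=2 * np.pi / nx, dy=2 * np.pi / ny)
print(np.max(np.abs((z_rec - z_rec.mean()) - (z_true - z_true.mean()))))   # small
```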
for other measurement principles like shape from shading, an approximation approach might be more appropriate. the basic idea of the integration method is as follows: we seek an analytic interpolation function such that its gradient interpolates the measured gradient data. once this interpolation is determined, it uniquely defines the surface reconstruction up to an integration constant. to obtain the analytic interpolant, we choose a generalized hermite interpolation approach employing _radial basis functions_ (rbfs). this method has the advantage that it can be applied to _scattered data_; it allows us to integrate data sets with holes, irregular sampling grids, or irregularly shaped boundaries. furthermore, this method allows for an optimal surface recovery in the sense of eq. ([approx_functional]) (see section [optimal] below). in more detail: assuming that the object surface fulfills the requirements described in section [problem], the data is given as pairs , where and are the measured slopes of the object at in the $x$- and $y$-direction, respectively, for . we define the interpolant to be , where and , for , are coefficients and is a radial basis function; and denote the analytic derivative of with respect to and , respectively. this interpolant is specifically tailored for gradient data. to obtain the coefficients in eq. ([interpolant]), we match the analytic derivatives of the interpolant with the measured derivatives. this leads to solving a system of linear equations of the form $a\,\alpha = d$, with the interpolation matrix $a \in m^{2n\times 2n}$, the coefficient vector $\alpha \in m^{2n\times 1}$, and the data vector $d \in m^{2n\times 1}$. using the resulting coefficients, we can then apply the interpolant in eq. ([interpolant]) to reconstruct the object surface. for higher noise levels an _approximation_ approach is recommended; in this case, we simply reduce the number of basis functions so that they no longer match the number of data points.
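a minimal sketch of such a gradient-matching (hermite) rbf interpolation is given below; it uses a gaussian kernel instead of the wendland functions employed in the paper, and a small synthetic data set, so it only illustrates the structure of eq. ([linear_system]) rather than the actual implementation.

```python
import numpy as np

def psi_derivs(dx, dy, s2):
    """Gaussian kernel derivatives needed for gradient interpolation (s2 = sigma^2)."""
    g = np.exp(-(dx**2 + dy**2) / (2 * s2))
    g_x  = -dx / s2 * g
    g_y  = -dy / s2 * g
    g_xx = (dx**2 / s2**2 - 1 / s2) * g
    g_yy = (dy**2 / s2**2 - 1 / s2) * g
    g_xy = dx * dy / s2**2 * g
    return g_x, g_y, g_xx, g_yy, g_xy

# slightly scattered synthetic slope data of z = sin(x) cos(y) (illustrative, not sensor data)
rng = np.random.default_rng(1)
pts = np.stack(np.meshgrid(np.linspace(0.2, 2.8, 8), np.linspace(0.2, 2.8, 8)), -1).reshape(-1, 2)
pts += rng.normal(0, 0.02, pts.shape)
p = np.cos(pts[:, 0]) * np.cos(pts[:, 1])          # dz/dx
q = -np.sin(pts[:, 0]) * np.sin(pts[:, 1])         # dz/dy

n, s2 = len(pts), 1.0
DX = pts[:, 0][:, None] - pts[:, 0][None, :]
DY = pts[:, 1][:, None] - pts[:, 1][None, :]
_, _, g_xx, g_yy, g_xy = psi_derivs(DX, DY, s2)

# interpolant s(x) = sum_i a_i psi_x(x - x_i) + b_i psi_y(x - x_i);
# matching ds/dx and ds/dy at all nodes gives the 2n x 2n system A alpha = d
A = np.block([[g_xx, g_xy], [g_xy, g_yy]])
d = np.concatenate([p, q])
alpha = np.linalg.solve(A, d)
a, b = alpha[:n], alpha[n:]

def surface(x, y):
    g_x, g_y, *_ = psi_derivs(x - pts[:, 0], y - pts[:, 1], s2)
    return np.dot(a, g_x) + np.dot(b, g_y)

# the reconstruction is defined up to a constant; compare after removing the mean
xe = np.linspace(0.3, 2.7, 15)
z_rec = np.array([[surface(x, y) for x in xe] for y in xe])
z_true = np.sin(xe)[None, :] * np.cos(xe)[:, None]
print("rms deviation:", np.std((z_rec - z_rec.mean()) - (z_true - z_true.mean())))
```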
the system in eq. ([linear_system]) then becomes overdetermined and can be solved in a least-squares sense. the interpolation approach employing radial basis functions has the advantage that it yields a _unique_ solution to the surface recovery problem: within this setup, the interpolation matrix in eq. ([linear_system]) is always symmetric and positive definite. further, the solution satisfies a _minimization principle_ in the sense that the resulting analytic surface function has minimal energy. we choose the radial basis function to be one of wendland's functions, , with . this choice has two reasons: first, wendland's functions allow one to choose their continuity according to the smoothness of the given data; the above wendland function leads to an interpolant which is three-times continuously differentiable, hence guaranteeing the integrability condition. second, the compact support of the function allows one to adjust the support size in such a way that the solution of eq. ([linear_system]) is stable in the presence of noise. it turns out that the support size has to be chosen rather large in order to guarantee a good surface reconstruction. the amount of data commonly acquired with a pmd sensor in a single measurement is rather large: it consists of about one million sample points. this amount of data, which results from a measurement with high lateral resolution, would require the inversion of a matrix with entries (eq. ([linear_system])). since we choose a large support size for our basis functions, the corresponding matrix is not sparse. it is obvious that this large amount of data cannot be handled directly by inexpensive computing equipment in reasonable time. to cope with such large data sets, we developed a method which first splits the data field into a set of overlapping rectangular patches and interpolates the data on each patch separately. if the given data were height information only, this approach would yield the complete surface reconstruction. for slope data, we interpolate the data and obtain a surface reconstruction _up to a constant of integration_ (see fig. [mirror_fit](a)) on each patch. in order to determine the missing information we apply the following fitting scheme: let us denote two adjacent patches as and and the resulting interpolants as and , respectively. since the constant of integration is still unknown, the two interpolants might be on different height levels. generally, we seek a correcting function by minimizing . this fitting scheme is then propagated from the center toward the borders of the data field to obtain the reconstructed surface on the entire field (see fig. [mirror_fit](b)). in the simplest case, the functions are chosen to be constant on each patch, representing the missing constant of integration. if the systematic error of the measured data is small, the constant fit method is appropriate, since it basically yields no error propagation. for very noisy data sets it might be better to use a planar fit, i.e.
, to avoid discontinuities at the patch boundaries. this modification, however, introduces a propagation of the error along the patches. the correction angle required on each patch to minimize eq. ([lsqfit]) depends on the noise of the data. numerical experiments have shown that in most cases the correction angle is at least ten times smaller than the noise level of the measured data. using this information we can estimate the global height error which, by error propagation, may sum up toward the borders of the measuring field: , where is the standard deviation of the correction angles, is the patch size, and is the number of patches. suppose we want to integrate over a field of mm (which corresponds to a typical eyeglass diameter), assuming a realistic noise level of and a patch size (not including its overlaps) of mm. with our setup, this results in patches, with a maximal tipping of per patch. according to eq. ([globaltilt]), the resulting global error caused by propagation of the correction angles is _only_ . we choose the size of the patches as big as possible, provided that a single patch can still be handled efficiently. for the patch size in the example, points (including overlap) correspond to a interpolation matrix that can be inverted quickly using standard numerical methods like the cholesky decomposition. a final remark concerning the runtime complexity of the method described above: the complexity can be further reduced in case the sampling grid is regular. since the patches all have the same size and the matrix entries in eq. ([linear_system]) only depend on the distances between sample points, the matrix can be inverted once for all patches and then applied to varying data on different patches, as long as the particular data subset is complete. note that eq. ([interpolant]) can be written as , where is the evaluation matrix. then, by applying eq. ([linear_system]) we obtain , where the matrix needs to be calculated only once for all complete patches. if samples are missing, however, the interpolation yields different coefficients and hence forces us to recompute for this particular patch. using these techniques, the reconstruction of surface values from their gradients takes about minutes on a current personal computer. first, we investigated the stability of our method with respect to noise. we simulated realistic gradient data of a sphere (with mm radius, mm mm field with sampling distance mm, see figure [noise_sphere](a)) and added uniformly distributed noise of different levels, ranging from to . we reconstructed the surface of the sphere using the interpolation method described in section [method]. hereby, we aligned the patches by only adding a constant to each patch. the reconstruction was performed for statistically independent slope data sets for each noise level. depicted in figure [noise_sphere](b) is a cross-section of the absolute error of the surface reconstruction from the ideal sphere, for a realistic noise level of . the absolute error is less than nm _on the entire measurement field_.
the local height error corresponding to this noise level is about nm. this demonstrates that the dynamic range of the global absolute error with respect to the height variance ( mm) of the considered sphere is about . the graph in figure [noise_sphere](c) depicts the mean value and the standard deviation (black error bars) of the absolute error of the reconstruction, for different data sets and for different noise levels. it demonstrates that for an increasing noise level the absolute error grows only linearly (linear fit depicted in gray), and even for a noise level being the _fifty-fold of the typical sensor noise_ the global absolute error remains in the _sub-micrometer regime_. this result implies that the reconstruction error is smaller than most technical applications require. another common task in quality assurance is the detection and quantification of surface defects like scratches or grooves. we therefore tested our method for its ability to reconstruct such local defects, which may be in a range of only a few nanometers. for this purpose, we considered data from a pmd sensor for small, specular objects. the sensor has a resolvable distance of m laterally and a local angular uncertainty of about . in order to quantify the deviation from the perfect shape, we again simulated a sphere (this time with mm radius and mm mm data field size). we added parallel, straight grooves of varying depths from to nm and of m width, and reconstructed the surface from the modified gradients. the perfect sphere was then subtracted from the reconstructed surface. the resulting reconstructed grooves are depicted in figure [groove_sphere](a). the grooves ranging from down to nm depth are clearly distinguishable from the plane. figure [groove_sphere](b) shows that all reconstructed depths agree fairly well with the actual depths. note that each groove is determined by only inner sample points. the simulation results demonstrate that our method is almost free of error propagation while preserving small, local details of only some nanometers height. so far, we used only simulated data to test the reconstruction; now, we want to demonstrate the application of our method to a real measurement. the measurement was performed with a phase-measuring deflectometry sensor for very small, specular objects. it can laterally resolve object points with a distance of nm, while having a local angular uncertainty of about . the object under test is a part of a wafer with about nm height range. the size of the measurement field was m m. depicted in figure [wafer] is the reconstructed object surface from roughly three million data values. both the global shape and local details could be reconstructed with high precision. we motivated why the employment of optical slope-measuring sensors can be advantageous, and gave a brief overview of existing sensor principles. the question that arose next was how to reconstruct the surface from its slope data. we presented a method based on radial basis functions which enables us to reconstruct the object surface from noisy gradient data. the method can handle large data sets consisting of some million entries. furthermore, the data does not need to be acquired on a regular grid: it can be arbitrarily scattered, and it can contain data holes. we demonstrated that, while accurately reconstructing the object's global shape, which may have a _height range of some millimeters_, the method preserves _local height variations on a nanometer scale_.
a remaining challenge is to improve the runtime complexity of the algorithm in order to be able to employ it for inline quality assurance in a production process. the authors thank the deutsche forschungsgemeinschaft for supporting this work in the framework of sfb 603.
i. weingärtner and m. schulz, ``novel scanning technique for ultra-precise measurement of slope and topography of flats, aspheres, and complex surfaces,'' in _optical fabrication and testing_, (1999), no. 3739 in proc. spie, pp. 27482. m. c. knauer, j. kaminski, and g. häusler, ``phase measuring deflectometry: a new approach to measure specular free-form surfaces,'' in _optical metrology in production engineering_, w. osten and m. takeda, eds. (2004), no. 5457 in proc. spie, pp. 366-376. j. kaminski, s. lowitzsch, m. c. knauer, and g. häusler, ``full-field shape measurement of specular surfaces,'' in _fringe 2005, the 5th international workshop on automatic processing of fringe patterns_, w. osten, ed. (springer, 2005), pp. 372-379. s. lowitzsch, j. kaminski, m. c. knauer, and g. häusler, ``vision and modeling of specular surfaces,'' in _vision, modeling, and visualization 2005_, g. greiner, j. hornegger, h. niemann, and s. m., eds. (akademische verlagsgesellschaft aka gmbh, 2005), pp. 479-486. j. pfund, n. lindlein, j. schwider, r. burow, t. blümel, and k. e. elssner, ``absolute sphericity measurement: a comparative study of the use of interferometry and a shack-hartmann sensor,'' opt. *23*, 742-744 (1998). a. agrawal, r. chellappa, and r. raskar, ``an algebraic approach to surface reconstruction from gradient fields,'' in ``ieee international conference on computer vision (iccv),'' vol. 1 (2005), pp. 174-181. c. elster and i. weingärtner, ``high-accuracy reconstruction of a function when only or is known at discrete measurement points,'' in ``x-ray mirrors, crystals, and multilayers ii,'' (2002), no. 4782 in proc. spie, pp. 12-20. s. ettl, j. kaminski, and g. häusler, ``generalized hermite interpolation with radial basis functions considering only gradient data,'' in _curve and surface fitting: avignon 2006_, a. cohen, j.-l. merrien, and l. l. schumaker, eds. (nashboro press, brentwood, 2007), pp. 141-149.
figure [spectra] caption: (a) a typical smooth object and (b) its derivative, together with (c) the power spectral density of the surface and (d) of its slope (all in arbitrary units); the power spectral density shows that differentiation reduces redundancy, contained in the low frequencies.
figure [groove_sphere] caption: grooves of depths ranging from nm down to nm; after the reconstruction, the sphere was subtracted to make the grooves visible; the reconstructed grooves are depicted in (a) full-field and in (b) cross-section.
we present a novel method for reconstructing the shape of an object from measured gradient data . a certain class of optical sensors does not measure the shape of an object , but its local slope . these sensors display several advantages , including high information efficiency , sensitivity , and robustness . for many applications , however , it is necessary to acquire the shape , which must be calculated from the slopes by numerical integration . existing integration techniques show drawbacks that render them unusable in many cases . our method is based on approximation employing radial basis functions . it can be applied to irregularly sampled , noisy , and incomplete data , and it reconstructs surfaces both locally and globally with high accuracy .
multiple - input , multiple - output(mimo ) wireless transmission systems have been intensively studied during the last decade .the alamouti code for two transmit antennas is a novel scheme for mimo transmission , which , due to its orthogonality properties , allows a low complexity maximum - likelihood ( ml ) decoder .this scheme led to the generalization of stbcs from orthogonal designs .such codes allow the transmitted symbols to be decoupled from one another and single - symbol ml decoding is achieved over _ quasi static _ rayleigh fading channels . even though these codes achieve the maximum diversity gain for a given number of transmit and receive antennas and for any arbitrary complex constellations , unfortunately , these codes are not , where , by a code , we mean a code that transmits at a rate of complex symbols per channel use for an transmit antenna , receive antenna system .the golden code is a full - rate , full - diversity code and has a decoding complexity of the order of for arbitrary constellations of size the codes in and the trace - orthogonal cyclotomic code in also match the golden code . with reduction in the decoding complexitybeing the prime objective , two new full - rate , full - diversity codes have recently been discovered : the first code was independently discovered by hottinen , tirkkonen and wichman and by paredes , gershman and alkhansari , which we call the htw - pga code and the second , which we call the sezginer - sari code , was reported in by sezginer and sari . both these codes enable simplified decoding , achieving a complexity of the order of .the first code is also shown to have the non - vanishing determinant property .however , these two codes have lesser coding gain compared to the golden code .a detailed discussion of these codes has been made in , wherein a comparison of the codeword error rate ( cer ) performance reveals that the golden code has the best performance . in this paper, we propose a new full - rate , full - diversity stbc for mimo transmission , which has low decoding complexity .the contributions of this paper may be summarized ( see table [ table1 ] also ) as follows : * the proposed code has the same coding gain as that of the golden code ( and hence of that in and the trace - orthonormal cyclotomic code ) for any qam constellation ( by a qam constellation we mean any finite subset of the integer lattice ) and larger coding gain than those of the htw - pga code and the sezginer - sari code . *compared with the golden code and the codes in and , the proposed code has lesser decoding complexity for all complex constellations except for square qam constellations in which case the complexity is the same . compared to the htw - pga code and the sezginer - sari codes , the proposed code has the same decoding complexity for all non - rectangular qam [ fig [ fig2 ] ] constellations . *the proposed code has the non - vanishing determinant property for qam constellations and hence is diversity - multiplexing gain ( dmg ) tradeoff optimal .the remaining content of the paper is organized as follows : in section [ sec2 ] , the system model and the code design criteria are reviewed along with some basic definitions .the proposed stbc is described in section [ sec3 ] and its non - vanishing determinant property is shown in section [ sec4 ] . in section [ sec5 ] the ml decoding complexity of the proposed code is discussed and the scheme to decode it using sphere decoding is discussed in section [ sec6 ] . 
in section [ sec7 ] ,simulation results are presented to show the performance of the proposed code as well as to compare with few other known codes . concluding remarks constitute section [ sec8 ] ._ notations : _ for a complex matrix the matrices , and ] respectively .the columnwise stacking operation on is denoted by the kronecker product is denoted by and denotes the identity matrix .given a complex vector ^t, ] is the information symbol vector the code design criteria are : ( i ) to achieve maximum diversity , the codeword difference matrix must be full rank for all possible pairs of codewords and the diversity gain is given by ( ii ) for a full ranked stbc , the minimum determinant , defined as \ ] ] should be maximized .the coding gain is given by , with being the number of transmit antennas .for the mimo system , the target is to design a code that is full - rate , i.e transmits 2 complex symbols per channel use , has full - diversity , maximum coding gain and allows low ml decoding complexity .in this section , we present our stbc for mimo system .the design is based on the class of codes called co - ordinate interleaved orthogonal designs ( ciods ) , which was studied in in connection with the general class of single - symbol decodable codes and , specifically for 2 transmit antennas , is as follows .the ciod for transmit antennas is + \ ] ] where are the information symbols and and are the in - phase ( real ) and quadrature - phase ( imaginary ) components of respectively . notice that in order to make the above stbc full rank , the signal constellation from which the symbols are chosen should be such that the real part ( imaginary part , resp . ) of any signal point in is not equal to the real part ( imaginary part , resp . ) of any other signal point in .so if qam constellations are chosen , they have to be rotated .the optimum angle of rotation has been found in to be degrees and this maximizes the diversity and coding gain .we denote this angle by the proposed stbc is given by where * the four symbols and , where is a degrees rotated version of a regular qam signal set , denoted by which is a finite subset of the integer lattice , and to be precise , * is a permutation matrix designed to make the stbc full rate and is given by . ] .the optimum value of was found out to be .explicitly , our code matrix is \\\ ] ] the minimum determinant for our code when the symbols are chosen from qam constellations is , the same as that of the golden code , which will be proved in the next section .the generator matrix for our stbc , corresponding to the symbols , is as follows : \ ] ] it is easy to see that this generator matrix is orthonormal . 
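since the explicit matrices are stripped from this extract , the following sketch uses the standard two - antenna ciod as we recall it from the literature ( the diagonal coordinate - interleaved form ) and an assumed optimum rotation angle of ( 1/2 ) arctan ( 2 ) , roughly 31.7 degrees . it is meant only to illustrate , by brute force , why the constellation rotation mentioned above is needed for full diversity ; it is not the full - rate code proposed in this paper .

# Brute-force full-rank / minimum-determinant check for the 2-antenna CIOD.
# The CIOD form and rotation angle below are assumptions recalled from the
# literature (the explicit matrices are stripped from this extract); this is
# *not* the full-rate code proposed in the paper.
import itertools
import numpy as np

def ciod_codeword(s1, s2):
    # coordinate-interleaved design: swap the quadrature components of s1, s2
    return np.array([[s1.real + 1j * s2.imag, 0.0],
                     [0.0, s2.real + 1j * s1.imag]])

def min_det(constellation):
    pairs = list(itertools.product(constellation, repeat=2))
    best = np.inf
    for a, b in itertools.product(pairs, repeat=2):
        if a == b:
            continue
        d = ciod_codeword(*a) - ciod_codeword(*b)
        best = min(best, abs(np.linalg.det(d)) ** 2)
    return best

qam4 = [x + 1j * y for x in (-1, 1) for y in (-1, 1)]
theta = 0.5 * np.arctan(2.0)          # assumed optimum rotation, about 31.7 degrees
rot = [np.exp(1j * theta) * s for s in qam4]

print("min |det|^2, unrotated 4-qam :", min_det(qam4))   # 0 -> not full rank
print("min |det|^2, rotated 4-qam   :", min_det(rot))    # strictly positive

the unrotated constellation produces codeword differences of rank one ( two signal points sharing a real or imaginary part ) , whereas the rotated constellation gives a strictly positive minimum determinant , which is exactly the role of the rotation described above .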
in , it was shown that a necessary and sufficient condition for an stbc to be _ information lossless _ is that its generator matrix should be unitary .hence , our stbc has the _ information losslessness _ property .in this section it is shown that the proposed code has the non - vanishing determinant ( nvd ) property , which in conjunction with full - rateness means that our code is dmg tradeoff optimal .the determinant of the codeword matrix can be written as .\ ] ] using and in the equation above , we get , \\ & = & \big((s_1+s_2)+(s_1-s_2)^*\big)\big((s_1+s_2)-(s_1-s_2)^*\big ) { } \nonumber\\ & & { } -j[\big((s_3+s_4)+(s_3-s_4)^*\big)\big((s_3+s_4)-(s_3-s_4)^*\big)].\end{aligned}\ ] ] since , with , , a subset of ] , we get \\ & = & e^{j2\theta_g}a^2-e^{-j2\theta_g}b^2 - j[e^{j2\theta_g}c^2-e^{-j2\theta_g}d^2].\end{aligned}\ ] ] since , we get for the determinant of to be 0 , we must have the above can be written as where and clearly ] .also , the minimum value of the modulus of r.h.s of can be seen to be .so , .in particular , when the constellation chosen is the standard qam constellation , the difference between any two signal points is a multiple of 2 .hence , for such constellations , , where and are distinct codewords . the minimum determinant is consequently 16/5 .this means that the proposed codes has the non - vanishing determinant ( nvd ) property . in , it was shown that full - rate codes which satisfy the non - vanishing determinant property achieve the optimal dmg tradeoff .so , our proposed stbc is dmg tradeoff optimal .in this section , it is shows that sphere decoding can be used to achieve the decoding complexity of .it can be shown that can be written as where is given by with being the generator matrix as in and ^t\ ] ] with drawn from , which is a rotation of the regular qam constellation .let ^t ] with being a rotation matrix and is defined as follows .\ ] ] so , can be written as where . 
using this equivalent model ,the ml decoding metric can be written as on obtaining the qr decomposition of , we get = , where is an orthonormal matrix and is an upper triangular matrix .the ml decoding metric now can be written as if ] and hence obtain the symbols and .having found these , and can be decoded independently .observe that the real and imaginary parts of symbol are entangled with one another because of constellation rotation but are independent of the real and imaginary parts of when and are conditionally given .having found the partial vector ^t$ ] , we proceed to find the rest of the symbols as follows .we do two parallel 2 dimensional real search to decode the symbols and .so , overall , the worst case decoding complexity of the proposed stbc is 2 .this is due to the fact that 1 .a 4 dimensional real sd requires metric computations in the worst possible case .two parallel 2 dimensional real sd require metric computations in the worst case .this decoding complexity is the same as that achieved by the htw - pga code and the sezginer - sari code .though it has not been mentioned anywhere to the best of our knowledge , the ml decoding complexity of the golden code , dayal - varanasi code and the trace - orthogonal cyclotomic code is also for square qam constellations .this follows from the structure of the matrices for these codes which are counterparts of the one in .the matrices of these codes are similar in structure and as shown below : \ ] ] table [ table1 ] presents the comparison of the known full - rate , full - diversity codes in terms of their ml decoding complexity and the coding gain .fig [ 4qam ] shows the codeword error performance plots for the golden code , the proposed stbc and the htw - pga code for the 4-qam constellation .the performance of the proposed code is the same as that of the golden code .the htw - pga code performs slightly worse due to its lower coding gain .fig [ 16qam ] , which is a plot of the cer performance for 16-qam , also highlights these aspects .table [ table1 ] gives a comparison between the well known full - rate , full - diversity codes for mimo .[ cols="^,^,^,^,^ " , ]in this paper , we have presented a full - rate stbc for mimo systems which matches the best known codes for such systems in terms of error performance , while at the same time , enjoys simplified - decoding complexity that the codes presented in and do .recently , a rate-1 stbc , based on scaled repetition and rotation of the alamouti code , was proposed .this code was shown to have a hard - decision performance which was only slightly worse than that of the golden code for a spectral efficiency of , but the complexity was significantly lower .this work was partly supported by the drdo - iisc program on advanced research in mathematical engineering .j. c. belfiore , g. rekaya and e. viterbo , `` the golden code : a full rate space - time code with non - vanishing determinants , '' _ ieee trans .inf . theory _ ,51 , no . 4 , pp .1432 - 1436 , april 2005 .j. paredes , a.b . gershman and m. gharavi - alkhansari , a space - time code with non - vanishing determinants and fast maximum likelihood decoding , " in proc _ ieee international conference on acoustics , speech and signal processing(icassp 2007 ) , _ vol . 2 , pp.877 - 880 , april 2007 .s. sezginer and h. sari , `` a full rate full - diversity space - time code for mobile wimax systems , '' in proc ._ ieee international conference on signal processing and communications _ , dubai , july 2007 .p. elia , k. r. 
kumar , s. a. pawar , p. v. kumar and h. lu , explicit construction of space - time block codes : achieving the diversity - multiplexing gain tradeoff , _ ieee trans .inf . theory _ ,52 , pp . 3869 - 3884 , sept .2006 .v.tarokh , n.seshadri and a.r calderbank,"space time codes for high date rate wireless communication : performance criterion and code construction , _ ieee trans .inf . theory _ ,744 - 765 , 1998 .h. yao and g. w. wornell , `` achieving the full mimo diversity - multiplexing frontier with rotation - based space - time codes , '' in _ proc .allerton conf . on comm .control and comput ., _ monticello , il , oct . 2003 .
this paper presents a low - ml - decoding - complexity , full - rate , full - diversity space - time block code ( stbc ) for a transmit antenna , receive antenna multiple - input multiple - output ( mimo ) system , with coding gain equal to that of the best and well known golden code for any qam constellation . recently , two codes have been proposed ( by paredes , gershman and alkhansari and by sezginer and sari ) , which enjoy a lower decoding complexity relative to the golden code , but have lesser coding gain . the stbc presented in this paper has lesser decoding complexity for non - square qam constellations , compared with that of the golden code , while having the same decoding complexity for square qam constellations . compared with the paredes - gershman - alkhansari and sezginer - sari codes , the proposed code has the same decoding complexity for non - rectangular qam constellations . simulation results , which compare the codeword error rate ( cer ) performance , are presented .
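as a small complement to the sphere - decoding discussion in the body of this paper , the following is a generic depth - first sphere decoder sketch for a real - valued system y = r x + n with r upper triangular ( the form obtained after the qr step ) . it uses no schnorr - euchner ordering or radius scheduling , and the toy channel and 4 - pam alphabet are illustrative assumptions rather than the decoder actually used for the proposed code .

# A minimal depth-first sphere decoder for a real-valued system y = R x + n
# with R upper triangular.  Generic textbook-style sketch, not the specific
# decoder of the proposed code; toy channel and alphabet are assumptions.
import numpy as np

def sphere_decode(R, y, alphabet, radius=np.inf):
    m = R.shape[0]
    best = {"x": None, "d2": radius}

    def search(level, partial, dist2):
        if dist2 >= best["d2"]:
            return                        # prune: already outside the sphere
        if level < 0:
            best["x"], best["d2"] = partial.copy(), dist2
            return
        # interference from the symbols already fixed at higher levels
        interf = sum(R[level, k] * partial[k] for k in range(level + 1, m))
        for s in alphabet:
            partial[level] = s
            inc = (y[level] - interf - R[level, level] * s) ** 2
            search(level - 1, partial, dist2 + inc)
        partial[level] = 0.0

    search(m - 1, np.zeros(m), 0.0)
    return best["x"], best["d2"]

# toy usage: random channel, QR decomposition, decode a 4-PAM vector
rng = np.random.default_rng(1)
m, alphabet = 4, np.array([-3.0, -1.0, 1.0, 3.0])
H = rng.normal(size=(m, m))
x_true = rng.choice(alphabet, size=m)
y = H @ x_true + 0.05 * rng.normal(size=m)
Q, R = np.linalg.qr(H)
x_hat, _ = sphere_decode(R, Q.T @ y, alphabet)
print("transmitted:", x_true, " decoded:", x_hat)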
many complex systems in various areas of science exhibit a spatio - temporal dynamics that is inhomogeneous and can be effectively described by a superposition of several statistics on different scales , in short a superstatistics .the superstatistics notion was introduced in , in the mean time many applications for a variety of complex systems have been pointed out .essential for this approach is the existence of sufficient time scale separation between two relevant dynamics within the complex system .there is an intensive parameter that fluctuates on a much larger time scale than the typical relaxation time of the local dynamics . in a thermodynamicsetting , can be interpreted as a local inverse temperature of the system , but much broader interpretations are possible .the stationary distributions of superstatistical systems , obtained by averaging over all , typically exhibit non - gaussian behavior with fat tails , which can be a power law , or a stretched exponential , or other functional forms as well .in general , the superstatistical parameter need not to be an inverse temperature but can be an effective parameter in a stochastic differential equation , a volatility in finance , or just a local variance parameter extracted from some experimental time series .many applications have been recently reported , for example in hydrodynamic turbulence , for defect turbulence , for cosmic rays and other scattering processes in high energy physics , solar flares , share price fluctuations , random matrix theory , random networks , multiplicative - noise stochastic processes , wind velocity fluctuations , hydro - climatic fluctuations , the statistics of train departure delays and models of the metastatic cascade in cancerous systems . on the theoretical side , there have been recent efforts to formulate maximum entropy principles for superstatistical systems . in this paperwe provide an overview over some recent developments in the area of superstatistics .three examples of recent applications are discussed in somewhat more detail : the statistics of lagrangian turbulence , the statistics of train departure delays , and the survival statistics of cancer patients . in all cases the superstatistical model predictions are in very good agreement with real data .we also comment on recent theoretical approaches to develop generalized maximum entropy principles for superstatistical systems .in generalized versions of statistical mechanics one starts from more general entropic measures than the boltzmann - gibbs shannon entropy .a well - known example is the -entropy but other forms are possible as well ( see , e.g. 
, for a recent review ) .the are the probabilities of the microstates , and is a real number , the entropic index .the ordinary shannon entropy is contained as the special case : extremizing subject to suitable constraints yields more general canonical ensembles , where the probability to observe a microstate with energy is given by one obtains a kind of power - law boltzmann factor , of the so - called -exponential form .the important question is what could be a physical ( non - equilibrium mechanism ) to obtain such distributions .the reason could indeed be a driven nonequilibrium situation with local fluctuations of the environment .this is the situation where the superstatistics concept enters .our starting point is the following well - known formula where is the ( or ) probability distribution .we see that averaged _ ordinary _ boltzmann factors with distributed yield a _generalized _ boltzmann factor of -exponential form .the physical interpretation is that tsallis type of statistical mechanics is relevant for _ nonequilibrium _ systems with temperature fluctuations .this approach was made popular by two prls in 2000/2001 , which used the -distribution for .general were then discussed in . in was suggested to construct a dynamical realization of -statistics in terms of e.g. a linear langevin equation with fluctuating parameters . here denotes gaussian white noise .the parameters are supposed to fluctuate on a much larger time scale than the velocity .one can think of a brownian particle that moves through spatial cells with different local in each cell ( a nonequilibrium situation ) .assume the probability distribution of in the various cells is a -distribution of degree : then the conditional probability given some fixed in a given cell is gaussian , , the joint probability is and the marginal probability is .integration yields i.e. we obtain power - law boltzmann factors with , , and . here is the average of .the idea of superstatistics is to generalize this example to much broader systems .for example , need not be an inverse temperature but can in principle be any intensive parameter .most importantly , one can generalize to _ general probability densities _ and _ general hamiltonians_. in all cases one obtains a superposition of two different statistics : that of and that of ordinary statistical mechanics. superstatistics hence describes complex nonequilibrium systems with spatio - temporal fluctuations of an intensive parameter on a large scale .the _ effective _ boltzmann factors for such systems are given by some recent theoretical developments of the superstatistics concept include the following : * can prove a superstatistical generalization of fluctuation theorems * can develop a variational principle for the large - energy asymptotics of general superstatistics ( depending on , one can get not only power laws for large but e.g. also stretched exponentials ) * can formally define generalized entropies for general superstatistics * can study microcanonical superstatistics ( related to a mixture of -values ) * can prove a superstatistical version of a central limit theorem leading to -statistics * can relate it to fractional reaction equations * can consider superstatistical random matrix theory * can apply superstatistical techniques to networks * can define superstatistical path integrals * can do superstatistical time series analysis ... 
and some more practical applications : * can apply superstatistical methods to analyze the statistics of 3d hydrodynamic turbulence * can apply it to atmospheric turbulence ( wind velocity fluctuations ) * can apply superstatistical methods to finance and economics * can apply it to blinking quantum dots * can apply it to cosmic ray statistics * can apply it to various scattering processes in particle physics * can apply it to hydroclimatic fluctuations * can apply it to train delay statistics * can consider medical applications in principle any is possible in the superstatistics approach , in practice one usually observes only a few relevant distributions .these are the , inverse and lognormal distribution . in other words , in typical complex systems with time scale separation one usually observes 3 physically relevant universality classes * \(a ) -superstatistics ( tsallis statistics ) * \(b ) inverse -superstatistics * \(c ) lognormal superstatistics what could be a plausible reason for this ?consider , e.g. , case ( a ) .assume there are many microscopic random variables , , contributing to in an additive way .for large , the sum will approach a gaussian random variable due to the ( ordinary ) central limit theorem .there can be gaussian random variables of the same variance due to various relevant degrees of freedom in the system . should be positive , hence the simplest way to get such a positive is to square the gaussian random variables and sum them up . as a result, is -distributed with degree , where is the average of .\(b ) the same considerations can be applied if the temperature rather than itself is the sum of several squared gaussian random variables arising out of many microscopic degrees of freedom .the resulting is the inverse -distribution : it generates superstatistical distributions that decay as for large .\(c ) may be generated by multiplicative random processes .consider a local cascade random variable , where is the number of cascade steps and the are positive microscopic random variables . by the ( ordinary ) central limit theorem , for large the random variable becomes gaussian for large .hence is log - normally distributed .in general there may be such product contributions to , i.e. , .then is a sum of gaussian random variables , hence it is gaussian as well .thus is log - normally distributed , i.e. , where and are suitable parameters .we will now discuss examples of the three different superstatistical universality classes .our first example is the departure delay statistics on the british rail network .clearly , at the various stations there are sometimes train departure delays of length . the 0th - order model for the waiting time would be a poisson process which predicts that the waiting time distribution until the train finally departs is , where is some parameter .but this does not agree with the actually observed data .a much better fit is given by a -exponential , see fig . 1 . ).,width=302 ] what may cause this power law that fits the data ?the idea is that there are fluctuations in the parameter as well .these fluctuations describe large - scale temporal or spatial variations of the british rail network environment .examples of causes of these -fluctuations : * begin of the holiday season with lots of passengers * problem with the track * bad weather conditions * extreme events such as derailments , industrial action , terror alerts , etc . 
as a result ,the long - term distribution of train delays is then a mixture of exponential distributions where the parameter fluctuates : for a -distributed with degrees of freedom one obtains where and .the model discussed in generates -exponential distributions of train delays by a simple mechanism , namely a -distributed parameter of the local poisson process .this is an example for superstatistics .our next example is an application in turbulence . consider a single tracer particle advected by a fully developed turbulent flow .for a while it will see regions of strong turbulent activity , then move on to calmer regions , just to continue in yet another region of strong activity , and so on .this is a superstatistical dynamics , and in fact superstatistical models of turbulence have been very successful in recent years .the typical shape of a trajectory of such a tracer particle is plotted in fig . 2 .this is lagrangian turbulence in contrast to eulerian turbulence , meaning that one is following a single particle in the flow . in particular , one is interested in velocity differences of the particle on a small time scale . for velocity difference becomes the local acceleration .a superstatistical lagrangian model for 3-dim velocity differences of the tracer particle has been developed in .one simply looks at a superstatistical langevin equation of the form here and are constants . note that the term proportional to introduces some rotational movement of the particle , mimicking the vortices in the flow .the noise strength and the unit vector evolve stochastically on a large time scale and , respectively , thus obtaining a superstatistical dynamics . is of the same order of magnitude as the integral time scale , whereas is of the same order of magnitude as the kolmogorov time scale .one can show that the reynolds number is basically given by the time scale ratio .the time scale describes the average life time of a region of given vorticity surrounding the test particle . in this superstatistical turbulence modelone defines the parameter to be , but it does _ not _ have the meaning of a physical inverse temperature in the flow .rather , one has , where is the kinematic viscosity and is the average energy dissipation , which is known to fluctuate in turbulent flows .in fact , kolmogorov s theory of 1961 suggests a lognormal distribution for , which automatically leads us to lognormal superstatistics : it is reasonable to assume that the probability density of the stochastic process is approximately a lognormal distribution for very small the 1d acceleration component of the particle is given by and one gets out of the model the 1-point distribution this prediction agrees very well with experimentally measured data of the acceleration statistics , which exhibits very pronounced ( non - gaussian ) tails , see fig . 3 for an example . 
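the mixture mechanism described above is easy to reproduce numerically : the following sketch draws the parameter beta from a chi - square ( gamma ) distribution , draws locally exponential waiting times , and compares the empirical tail with the resulting q - exponential . the degrees of freedom and the mean of beta are illustrative values , not parameters fitted to the british rail data .

# Numerical sketch of the superstatistical mechanism described above: waiting
# times that are locally exponential, with a rate parameter beta drawn from a
# chi-square (Gamma) distribution, have a q-exponential (power-law) marginal.
# Parameter values are illustrative, not fitted to the train-delay data.
import numpy as np

rng = np.random.default_rng(0)
n_dof, beta0, samples = 4, 1.0, 200_000

# beta ~ chi-square with n_dof degrees of freedom, scaled so that E[beta] = beta0
beta = rng.gamma(shape=n_dof / 2.0, scale=2.0 * beta0 / n_dof, size=samples)
t = rng.exponential(scale=1.0 / beta)        # locally exponential delays

# marginal survival function: P(T > t) = (1 + 2*beta0*t/n_dof)**(-n_dof/2),
# i.e. a q-exponential with q = (n_dof + 4)/(n_dof + 2)
grid = np.array([1.0, 3.0, 10.0, 30.0])
empirical = [(t > g).mean() for g in grid]
analytic = (1.0 + 2.0 * beta0 * grid / n_dof) ** (-n_dof / 2.0)
for g, e, a in zip(grid, empirical, analytic):
    print(f"t={g:5.1f}  empirical tail={e:.4f}  q-exponential tail={a:.4f}")

the same mixing construction with a locally gaussian velocity instead of an exponential waiting time produces the q - gaussian ( student - t like ) distributions mentioned earlier in the text .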
)( see for more details).,width=302 ] it is interesting to see that our 3-dimensional superstatistical model predicts the existence of correlations between the acceleration components .for example , the acceleration in direction is not statistically independent of the acceleration in -direction .we may study the ratio of the joint probability to the 1-point probabilities and .for independent acceleration components this ratio would always be given by .however , our 3-dimensional superstatistical model yields prediction this is a very general formula , valid for any superstatistics , for example also tsallis statistics , obtained when is the -distribution .the trivial result is obtained only for , i.e. no fluctuations in the parameter at all .4 shows as predicted by lognormal superstatistics : ) , being the lognormal distribution.,width=302 ] the shape of this is very similar to experimental measurements .our final example of application of superstatistics is for a completely different area : medicine .we will look at cell migration processes describing the metastatic cascade of cancerous cells in the body .there are various pathways in which cancerous cells can migrate : via the blood system , the lymphatic system , and so on .the diffusion constants for these various pathways are different . in this waysuperstatistics enters , describing different diffusion speeds for different pathways ( see fig .5 ) .but there is another important issue here : when looking at a large ensemble of patients then the spread of cancerous cells can be very different from patient to patient . for some patients the cancer spreads in a very aggressive way , whereas for others it is much slower and less aggressive .so superstatistics also arises from the fact that all patients are different . a superstatistical model of metastasis and cancer survival has been developed in .details are described in that paper .here we just mention the final result that comes out of the model : one obtains the following prediction for the probability density function of survival time of a patient that is diagnosed with cancer at : or , \label{eq7}\end{aligned}\ ] ] where is the modified bessel function .note that this is inverse superstatistics .the role of the parameter is now played by the parameter , which in a sense describes how aggressively the cancer propagates . the above formula based on inverse superstatistics is in good agreement with real data of the survival statistics of breast cancer patients in the us .the superstatistical formula fits the observed distribution very well , both in a linear and logarithmic plot ( see fig.6 ) .one remark is at order .when looking at the relevant time scales one should keep in mind that the data shown are survival distributions _ conditioned on the fact that death occurs due to cancer_. many patients , in particular if they are diagnosed at an early stage , will live a long happy life and die from something else than cancer .these cases are _ not _ included in the data . ,both in a linear and double logarithmic plot . 
only patients that die from cancer are included in the statistics .the solid line is the superstatistical model prediction ., title="fig:",width=302 ] , both in a linear and double logarithmic plot .only patients that die from cancer are included in the statistics .the solid line is the superstatistical model prediction ., title="fig:",width=302 ]we finish this article by briefly mentioning some other recent interesting theoretical developments .one major theoretical concern is that a priori the superstatistical distribution can be anything .but perhaps one should single out the really relevant distributions by a least biased guess , given some constraints on the complex system under consideration .this program has been developed in some recent papers .there are some ambiguities which constraints should be implemented , and how . a very general formalism is presented in , which contains previous work as special cases .the three important universality classes discussed above , namely superstatistics , inverse superstatistics and lognormal superstatistics are contained as special cases in the formalism of . in principle , once a suitable generalized maximum entropy principle has been formulated for superstatistical systems , one can proceed to a generalized thermodynamic description , get a generalized equation of state , and so on .there is still a lot of scope of future research to develop the most suitable formalism .but the general tendency seems to be to apply maximum entropy principles and least biased guesses to nonequilibrium situations as well .in fact , jaynes always thought this is possible .another interesting development is what one could call a superstatistical path integral .these are just ordinary path integrals but with an additional integration over a parameter that make the wiener process become something more complicated , due to large - scale fluctuations of its diffusion constant .jizba et al .investigate under which conditions one obtains a markov process again .it seems some distributions are distinguished as making the superstatistical process simpler than others , preserving markovian - like properties .these types of superstatistical path integral processes have applications in finance , and possibly also in quantum field theory and elementary particle physics . in high energy physics ,many of the power laws observed for differential cross sections and energy spectra in high energy scattering processes can also be explained using superstatistical models .the key point here is to extend the hagedorn theory to a superstatistical one which properly takes into account temperature fluctuations .superstatistical techniques have also been recently used to describe the space - time foam in string theory .superstatistics ( a statistics of a statistics ) provides a physical reason why more general types of boltzmann factors ( e.g. -exponentials or other functional forms ) are relevant for _ nonequilibrium _ systems with suitable fluctuations of an intensive parameter .let us summarize : * there is evidence for three major physically relevant universality classes : -superstatistics tsallis statistics , inverse -superstatistics , and lognormal superstatistics .these arise as universal limit statistics for many different systems . *superstatistical techniques can be successfully applied to a variety of complex systems with time scale separation . *the train delays on the british railway network are an example of superstatistics = tsallis statistics . 
* a superstatistical model of _ lagrangian turbulence _ is in excellent agreement with the experimental data for probability densities , correlations between components , decay of correlations , lagrangian scaling exponents , etc .this is an example of lognormal superstatistics .* cancer survival statistics is described by inverse superstatistics . *the long - term aim is to find a good thermodynamic description for general superstatistical systems .a generalized maximum entropy principle may help to achieve this goal .99 c. beck and e.g.d .cohen , physica a * 322 * , 267 ( 2003 ) c. beck , e.g.d .cohen , and h.l .swinney , phys .e * 72 * , 056133 ( 2005 ) c. beck and e.g.d .cohen , physica a * 344 * , 393 ( 2004 ) h. touchette and c. beck , phys .e * 71 * , 016131 ( 2005 ) c. tsallis and a.m.c .souza , phys .e * 67 * , 026106 ( 2003 ) p. jizba , h. kleinert , phys .e * 78 * , 031122 ( 2008 ) c. vignat , a. plastino and a.r .plastino , cond - mat/0505580 c. vignat , a. plastino , arxiv 0706.0151 p .- h .chavanis , physica a * 359 * , 177 ( 2006 ) g. wilk and z. wlodarczyk , phys .lett . * 84 * , 2770 ( 2000 ) c. beck , phys .* 87 * , 180601 ( 2001 ) k. e. daniels , c. beck , and e. bodenschatz , physica d * 193 * , 208 ( 2004 ) c. beck , physica a * 331 * , 173 ( 2004 ) m. baiesi , m. paczuski and a.l .stella , phys .lett . * 96 * , 051103 ( 2006 ) y. ohtaki and h.h .hasegawa , cond - mat/0312568 a.y .abul - magd , physica a * 361 * , 41 ( 2006 ) s. rizzo and a. rapisarda , aip conf . proc . * 742 * , 176 ( 2004 ) ( cond - mat/0406684 ) t. laubrich , f. ghasemi , j. peinke , h. kantz , arxiv:0811.3337 a. porporato , g. vico , and p.a .fay , geophys . res. lett . * 33 * , l15402 ( 2006 ) a. reynolds , phys .* 91 * , 084503 ( 2003 ) h. aoyama et al ., arxiv:0805.2792 e. van der straeten , c. beck , arxiv:0901.2271 a.y .abul - magd , g. akemann , p. vivo , arxiv 0811.1992 c. beck , europhys . lett . * 64 * , 151 ( 2003 ) c. beck , phys .* 98 * , 064502 ( 2007 ) g. wilk , z. wlodarczyk , arxiv:0810.2939 c. beck , arxiv:0902.2459 m. ausloos and k. ivanova , phys .e * 68 * , 046122 ( 2003 ) j .- p .bouchard and m. potters , _ theory of financial risk and derivative pricing _ , cambridge university press , cambridge ( 2003 ) a.y .abul - magd , b. dietz , t. friedrich , a. richer , phys .e * 77 * , 046202 ( 2008 ) s. abe and s. thurner , phys .e * 72 * , 036102 ( 2005 ) slvio m. duarte queirs , braz .* 38 * , 203 ( 2008 ) k. briggs , c. beck , physica a * 378 * , 498 ( 2007 ) l. leon chen , c. beck , physica a * 387 * , 3162 ( 2008 ) s. abe , c. beck and g. d. cohen , phys .e * 76 * , 031102 ( 2007 ) g. e. crooks , phys .e * 75 * , 041119 ( 2007 ) j. naudts , aip conference proceedings * 965 * , 84 ( 2007 ) e. van der straeten , c. beck , phys .e * 78 * , 051101 ( 2008 ) c. tsallis , j. stat .phys . * 52 * , 479 ( 1988 ) c. tsallis , r.s .mendes , a.r .plastino , physica a * 261 * , 534 ( 1998 ) c. beck , arxiv:0902:1235 , to appear in contemporary physics ( 2009 ) a.m. mathai and h.j .haubold , physica a * 375 * , 110 ( 2007 ) r. d. rosenkrantz , _ e.t .jaynes : papers on probability , statistics and statistical physics , _ kluwer ( 1989 ) r. hagedorn , suppl .nuovo cim .* 3 * , 147 ( 1965 ) c. beck , physica a * 286 * , 164 ( 2000 ) n.e .mavromatos and s. sarkar , arxiv:0812.3952
we provide an overview of superstatistical techniques applied to complex systems with time scale separation . three examples of recent applications are dealt with in somewhat more detail : the statistics of small - scale velocity differences in lagrangian turbulence experiments , train delay statistics on the british rail network , and survival statistics of patients once diagnosed with cancer . these examples correspond to three different universality classes : lognormal superstatistics , chi - square superstatistics and inverse chi - square superstatistics .
in a typical instance of a combinatorial optimization problem the underlying constraints model a static application frozen in one time step . in many applicationshowever , one needs to solve instances of the combinatorial optimization problem that changes over time . while this is naturally handled by re - solving the optimization problem in each time step separately , changing the solution one holds from one time step to the next often incurs a transition cost .consider , for example , the problem faced by a vendor who needs to get supply of an item from different producers to meet her demand . on any given day, she could get prices from each of the producers and pick the cheapest ones to buy from .as prices change , this set of the cheapest producers may change .however , there is a fixed cost to starting and/or ending a relationship with any new producer .the goal of the vendor is to minimize the sum total of these two costs : an `` acquisition cost '' to be incurred each time she starts a new business relationship with a producer , and a per period cost of buying in period from the each of the producers that she picks in this period , summed over time periods . in this workwe consider a generalization of this problem , where the constraint `` pick producers '' may be replaced by a more general combinatorial constraint .it is natural to ask whether simple combinatorial problems for which the one - shot problem is easy to solve , as the example above is , also admit good algorithms for the multistage version .the first problem we study is the _ multistage matroid maintenance _ problem ( ) , where the underlying combinatorial constraint is that of maintaining a base of a given matroid in each period . in the example above , the requirement the vendor buys from different producers could be expressed as optimizing over the matroid . in a more interesting caseone may want to maintain a spanning tree of a given graph at each step , where the edge costs change over time , and an acquisition cost of has to paid every time a new edge enters the spanning tree .( a formal definition of the problem appears in section [ sec : formal - defs ] . )while our emphasis is on the online problem , we will mention results for the offline version as well , where the whole input is given in advance . a first observation we makeis that if the matroid in question is allowed to be different in each time period , then the problem is hard to approximate to any non - trivial factor ( see section [ sec : time - varying ] ) even in the offline case .we therefore focus on the case where the same matroid is given at each time period .thus we restrict ourselves to the case when the matroid is the same for all time steps . 
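the following toy computation ( in python ) illustrates the trade - off just described : on a small graph with time - varying edge costs and a fixed acquisition cost , re - solving the minimum spanning tree separately in each period pays noticeably more than a plan that accounts for acquisition costs . the graph , costs and horizon are made - up values chosen only for illustration .

# Toy illustration of the multistage maintenance trade-off discussed above:
# re-solving each step in isolation ("myopic") versus planning with the edge
# acquisition cost in mind.  The graph, costs and horizon are made-up values.
import itertools

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
acq = 10.0                                # acquisition cost per newly added edge
T = 4

def hold(e, t):
    # holding cost of edge e at time t: one "rotating" edge is expensive
    return 3.0 if edges.index(e) == t % len(edges) else 1.0

def is_spanning_tree(tree):
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in tree:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return len(tree) == len(nodes) - 1

trees = [tr for tr in itertools.combinations(edges, len(nodes) - 1) if is_spanning_tree(tr)]

def cost(sequence):
    total, prev = 0.0, set()
    for t, tree in enumerate(sequence):
        total += acq * len(set(tree) - prev)         # pay for newly acquired edges
        total += sum(hold(e, t) for e in tree)       # pay per-period holding costs
        prev = set(tree)
    return total

myopic = [min(trees, key=lambda tr: sum(hold(e, t) for e in tr)) for t in range(T)]
optimum = min(itertools.product(trees, repeat=T), key=cost)
print("myopic total cost :", cost(myopic))
print("optimal total cost:", cost(optimum))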
to set the baseline , we first study the offline version of the problem ( in section [ sec : offline ] ) , where all the input parameters are known in advance .we show an lp - rounding algorithm which approximates the total cost up to a logarithmic factor .this approximation factor is no better than that using a simple greedy algorithm , but it will be useful to see the rounding algorithm , since we will use its extension in the online setting .we also show a matching hardness reduction , proving that the problem is hard to approximate to better than a logarithmic factor ; this hardness holds even for the special case of spanning trees in graphs .we then turn to the online version of the problem , where in each time period , we learn the costs of each element that is available at time , and we need to pick a base of the matroid for this period . we analyze the performance of our online algorithm in the competitive analysis framework : i.e. , we compare the cost of the online algorithm to that of the optimum solution to the offline instance thus generated . in section [ sec : online ] , we give an efficient randomized -competitive algorithm for this problem against any oblivious adversary ( here is the universe for the matroid and is the rank of the matroid ) , and show that no polynomial - time online algorithm can do better .we also show that the requirement that the algorithm be randomized is necessary : any deterministic algorithm must incur an overhead of , even for the simplest of matroids . our results above crucially relied on the properties of matriods , andit is natural to ask if we can handle more general set systems , e.g. , -systems . in section [ sec : matchings ] , we consider the case where the combinatorial object we need to find each time step is a perfect matching in a graph .somewhat surprisingly , the problem here is significantly harder than the matroid case , even in the offline case . in particular, we show that even when the number of periods is a constant , no polynomial time algorithm can achieve an approximation ratio better than for any constant .we first show that the problem , which is a packing - covering problem , can be reduced to the analogous problem of maintaining a spanning set of a matroid .we call the latter the _ multistage spanning set maintenance _ ( ) problem .while the reduction itself is fairly clean , it is surprisingly powerful and is what enables us to improve on previous works .the problem is a covering problem , so it admits better approximation ratios and allows for a much larger toolbox of techniques at our disposal .we note that this is the only place where we need the matroid to not change over time : our algorithms for work when the matroids change over time , and even when considering matroid intersections .the problem is then further reduced to the case where the holding cost of an element is in , this reduction simplifies the analysis . in the offline case , we present two algorithms . 
we first observe that a greedy algorithm easily gives an -approximation .we then present a simple randomized rounding algorithm for the linear program .this is analyzed using recent results on contention resolution schemes , and gives an approximation of , which can be improved to when the acquisition costs are uniform .this lp - rounding algorithm will be an important constituent of our algorithm for the online case .for the online case we again use that the problem can be written as a covering problem , even though the natural lp formulation has both covering and packing constraints . phrasing it as a covering problem ( with box constraints )enables us to use , as a black - box , results on online algorithms for the fractional problem . this formulation however has exponentially many constraints .we handle that by showing a method of adaptively picking violated constraints such that only a small number of constraints are ever picked .the crucial insight here is that if is such that is not feasible , then is at least away in distance from any feasible solution ; in fact there is a single constraint that is violated to an extent half .this insight allows us to make non - trivial progress ( using a natural potential function ) every time we bring in a constraint , and lets us bound the number of constraints we need to add until constraints are satisfied by .our work is related to several lines of research , and extends some of them .the paging problem is a special case of where the underlying matroid is a uniform one .our online algorithm generalizes the -competitive algorithm for weighted caching , using existing online lp solvers in a black - box fashion .going from uniform to general matroids loses a logarithmic factor ( after rounding ) , we show such a loss is unavoidable unless we use exponential time .the problem is also a special case of classical metrical task systems ; see for more recent work .the best approximations for metrical task systems are poly - logarithmic in the size of the metric space . in our casethe metric space is specified by the total number of bases of the matroid which is often exponential , so these algorithms only give a trivial approximation . in trying to unify online learning and competitive analysis , buchbinder et al . consider a problem on matroids very similar to ours .the salient differences are : ( a ) in their model all acquisition costs are the same , and ( b ) they work with fractional bases instead of integral ones .they give an -competitive algorithm to solve the fractional online lp with uniform acquisition costs ( among other unrelated results ) .our online lp solving generalizes their result to arbitrary acquisition costs .they leave open the question of getting integer solutions online ( seffi naor , private communication ) , which we present in this work . in a more recent work , buchbinder , chen and naor use a regularization approach to solving a broader set of fractional problems , but once again can do not get integer solutions in a setting such as ours .shachnai et al . consider `` reoptimization '' problems : given a starting solution and a new instance , they want to balance the transition cost and the cost on the new instance .this is a two - timestep version of our problem , and the short time horizon raises a very different set of issues ( since the output solution does not need to itself hedge against possible subsequent futures ) .they consider a number of optimization / scheduling problems in their framework .cohen et al . 
consider several problems in the framework of the stability - versus - fit tradeoff ; e.g. , that of finding `` stable '' solutions which given the previous solution , like in reoptimization , is the current solution that maximizes the quality minus the transition costs .they show maintaining stable solutions for matroids becomes a repeated two - stage reoptimization problem ; their problem is poly - time solvable , whereas matroid problems in our model become np - hard .the reason is that the solution for two time steps does not necessarily lead to a base from which it is easy to move in subsequent time steps , as our hardness reduction shows .they consider a multistage offline version of their problem ( again maximizing fit minus stability ) which is very similar in spirit and form to our ( minimization ) problem , though the minus sign in the objective function makes it difficult to approximate in cases which are not in poly - time . in dynamic steiner tree maintenance where the goal is to maintain an approximately optimal steiner tree for a varying instance ( where terminals are added ) while changing few edges at each time step . in dynamic load balancing one has to maintain a good scheduling solution while moving a small number of jobs around .the work on lazy experts in the online prediction community also deals with similar concerns .there is also work on `` leasing '' problems : these are optimization problems where elements can be obtained for an interval of any length , where the cost is concave in the lengths ; the instance changes at each timestep .the main differences are that the solution only needs to be feasible at each timestep ( i.e. , the holding costs are ) , and that any element can be leased for any length of time starting at any timestep for a cost that depends only on , which gives these problems a lot of uniformity . in turn , these leasing problems are related to `` buy - at - bulk '' problems .given reals for elements , we will use for to denote .we denote by ] and element , a _ holding cost _ cost .the goal is to find bases } ] ( once again with ) is the following lemma shows the equivalence of maintaining bases and spanning sets .this enables us to significantly simplify the problem and avoid the difficulties faced by previous works on this problem .[ lem : pack - cover ] for matroids , the optimal solutions to and have the same costs . clearly , any solution to is also a solution to , since a base is also a spanning set .conversely , consider a solution to .set to any base in .given , start with , and extend it to any base of .this is the only step where we use the matroid properties indeed , since the matroid is the same at each time , the set remains independent at time , and by the matroid property this independent set can be extended to a base .observe that this process just requires us to know the base and the set , and hence can be performed in an online fashion .we claim that the cost of is no more than that of .indeed , , because .moreover , let , we pay for these elements we just added . to charge this , consider any such element , let be the time it was most recently added to the cover i.e ., for all ] .there are no holding costs , but the element can be used in spanning sets only for timesteps . or one can equivalently think of holding costs being zero for and otherwise ._ an offline exact reduction . 
_the translation is the natural one : given instance of , create elements for each and , with acquisition cost , and interval ] , where is set to in case , else it is set to the _ largest _ time such that the total holding costs for this interval ] .the number of elements in the modified matroid whose intervals contain any time is now only , the same as the original matroid ; each element of the modified matroid is only available for a single interval .moreover , the reduction can be done online : given the past history and the holding cost for the current time step , we can ascertain whether is the beginning of a new interval ( in which case the previous interval ended at ) and if so , we know the cost of acquiring a copy of for the new interval is .it is easy to check that the optimal cost in this interval model is within a constant factor of the optimal cost in the original acquisition / holding costs model .given the reductions of the previous section , we can focus on the problem .being a covering problem , is conceptually easier to solve : e.g. , we could use algorithms for submodular set cover with the submodular function being the sum of ranks at each of the timesteps , to get an approximation . in section [ sec : greedy ] , we give a dual - fitting proof of the performance of the greedy algorithm . herewe give an lp - rounding algorithm which gives an approximation ; this can be improved to in the common case where all acquisition costs are unit .( while the approximation guarantee is no better than that from submodular set cover , this lp - rounding algorithm will prove useful in the online case in section [ sec : online ] ) .finally , the hardness results of section [ sec : hardness - offline ] show that we can not hope to do much better than these logarithmic approximations .we now consider an lp - rounding algorithm for the problem ; this will generalize to the online setting , whereas it is unclear how to extend the greedy algorithm to that case .for the lp rounding , we use the standard definition of the problem to write the following lp relaxation . remains to round the solution to get a feasible solution to ( i.e. , a spanning set for each time ) with expected cost at most times the lp value , since we can use lemma [ lem : pack - cover ] to convert this to a solution for at no extra cost .the following lemma is well - known ( see , e.g. ) .we give a proof for completeness .[ lem : alon ] for a fractional base , let be the set obtained by picking each element independently with probability . then \geq r(1 - 1/e) ] .( they call this a -balanced cr scheme . )now , we get & \geq { { \mathbf{e } } } [ { \textsf{rank}}(\pi_z(r(z ) ) ) ] = \sum_{e \in \text{supp}(z ) } \pr [ e \in \pi_z(r(z ) ) ] \\ & = \sum_{e \in \text{supp}(z ) } \pr [ e \in \pi_z(r(z ) ) \mid e \in r(z ) ] \cdot \pr [ e \in r(z ) ] \\ & \geq \sum_{e \in \text{supp}(z ) } ( 1 - 1/e ) \cdot z_e = r(1 - 1/e ) .\end{aligned}\ ] ] the first inequality used the fact that is a subset of , the following equality used that is independent with probability 1 , the second inequality used the property of the cr scheme , and the final equality used the fact that was a fractional base . [thm : lp - round ] any fractional solution can be randomly rounded to get solution to with cost times the fractional value , where is the rank of the matroid and the number of timesteps .set . 
for each element , choose a random threshold independently and uniformly from the interval ] , the cost .moreover , exactly when satisfies , which happens with probability at most hence the expected acquisition cost for the elements newly added to is at most .finally , we have to account for any elements added to extend to a full - rank set .[ lem : rand - round ] for any fixed ] , so it suffices to give a lower bound on the former expression . for this, we use lemma [ lem : alon ] : the sample has expected rank , and using reverse markov , it has rank at least with probability at least . now focusing on the matroid obtained by contracting elements in ( which, say , has rank ) , the same argument says the set has rank with probability at least , etc . proceeding in this way , the probability that the rank of is less than is at most the probability that we see fewer than heads in flips of a coin of bias . by a chernoff bound ,this is at most .now if the set does not have full rank , the elements we add have cost at most that of the min - cost base under the cost function , which is at most the optimum value for ( [ eq : lp2 ] ) .( we use the fact that the lp is exact for a single matroid , and the global lp has cost at least the single timestep cost . )this happens with probability at most , and hence the total expected cost of augmenting over all timesteps is at most times the lp value .this proves the main theorem .again , this algorithm for works with different matroids at each timestep , and also for intersections of matroids . to see this observe that the only requirements from the algorithm are that there is a separation oracle for the polytope and that the contention resolution scheme works .in the case of intersection , if we pay an extra penalty in the approximation ratio we have that the probability a rounded solution does not contain a base is so we can take a union bound over the multiple matroids .0 we can replace the dependence on by a term that depends only on the variance in the acquisition costs .let us divide the period into `` epochs '' , where an epoch is an interval for such that the total fractional acquisition cost .we can afford to build a brand new tree at the beginning of each epoch and incur an acquisition cost of at most the rank , which we can charge to the lp s fractional acquisition cost in the epoch . by theorem [ thm : lp - round ] , naively applying the rounding algorithm to each epoch independently gives a guarantee of , where is the maximum length of an epoch .now we should be able to use the argument from the online section that says that we can ignore steps where the total movement is smaller than half .thus can be assumed to be .more details to be added once we consistentize notation .in fact , if we define epoch to be a period of acquisition cost , then the at least half means movement cost at least .thus the epoch only has relevant steps in it , so we get log of that . for the special case where all the acquisition costs are all the same , this implies we get rid of the term in the lp rounding , and get an -approximation . 
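the sampling lemma used in the rounding above is easy to check empirically for a graphic matroid : the sketch below takes the uniform fractional base of a complete graph , keeps each edge independently with probability z_e , and verifies that the average rank of the sample exceeds r ( 1 - 1/e ) . the choice of graph and of the fractional point are illustrative .

# Empirical check of the sampling lemma: if z is a fractional base of a
# (graphic) matroid and each element is kept independently with probability
# z_e, the expected rank of the sample is at least r(1 - 1/e).  The complete
# graph and the uniform fractional base are illustrative choices.
import itertools
import numpy as np

n = 6
edges = list(itertools.combinations(range(n), 2))
z = {e: (n - 1) / len(edges) for e in edges}   # uniform point in the base polytope of K_n
r = n - 1                                       # rank of the graphic matroid

def rank(edge_subset):
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    comps = n
    for u, v in edge_subset:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return n - comps

rng = np.random.default_rng(0)
trials = 20_000
avg = np.mean([rank([e for e in edges if rng.random() < z[e]]) for _ in range(trials)])
print(f"average sampled rank : {avg:.3f}")
print(f"lower bound r(1-1/e) : {r * (1 - 1 / np.e):.3f}")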
when the ratio of the maximum to the minimum acquisition cost is small , we can improve the approximation factor above .more specifically , we show that essentially the same randomized rounding algorithm ( with a different choice of ) gives an approximation ratio of .we defer the argument to section [ sec : just - logr ] , as it needs some additional definitions and results that we present in the online section .[ [ an - improvement - avoiding - the - dependence - on - t.-1 ] ] an improvement : avoiding the dependence on .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + when the ratio of the maximum to the minimum acquisition cost is small , we can improve the approximation factor above . more specifically , we show that essentially the same randomized rounding algorithm ( with a different choice of ) gives an approximation ratio of .we defer the argument to section [ sec : just - logr ] , as it needs some additional definitions and results that we present in the online section .we defer the hardness proof to appendix [ app : offline ] , which shows that the and problems are np - hard to approximate better than even for graphical matroids .an integrality gap of appears in appendix [ sec : int - gap - matroids ] .[ thm : matr - hard ] the and problems are np - hard to approximate better than even for graphical matroids .we give a reduction from set cover to the problem for graphical matroids .given an instance of set cover , with sets and elements , we construct a graph as follows . there is a special vertex , and set vertices ( with vertices for each set ) .there are edges which all have inclusion weight and per - time cost for all .all other edges will be zero cost short - term edges as given below .in particular , there are timesteps . in timestep ] during which it is alive . the element has an acquisition cost , no holding costs .once an element has been acquired ( which can be done at any time during its interval ) , it can be used at all times in that interval , but not after that . in the online setting, at each time step we are told which intervals have ended ( and which have not ) ; also , which new elements are available starting at time , along with their acquisition costs . of course, we do not know when its interval will end ; this information is known only once the interval ends. we will work with the same lp as in section [ sec : lp - round ] , albeit now we have to solve it online .the variable is the indicator for whether we acquire element . 
\notag\end{aligned}\ ] ] note that this is not a packing or covering lp , which makes it more annoying to solve online .hence we consider a slight reformulation .let denote the _ spanning set polytope _ defined as the convex hull of the full - rank ( a.k.a .spanning ) sets .since each spanning set contains a base , we can write the constraints of ( [ eq:3 ] ) as : herewe define to be the vector derived from by zeroing out the values for .it is known that the polytope can be written as a ( rather large ) set of covering constraints .indeed , , where is the dual matroid for .since the rank function of is given by , it follows that ( [ eq:4 ] ) can be written as thus we get a covering lp with `` box '' constraints over .the constraints can be presented one at a time : in timestep , we present all the covering constraints corresponding to .we remark that the newer machinery of may be applicable to [ eq : coveringconstraints ] .we next show that a simpler approach suffices will be useful in improving the rounding algorithm . ] .the general results of buchbinder and naor ( and its extension to row - sparse covering problems by ) imply a deterministic algorithm for fractionally solving this linear program online , with a competitive ratio of . however , this is not yet a polynomial - time algorithm , the number of constraints for each timestep being exponential .we next give an adaptive algorithm to generate a small yet sufficient set of constraints .[ [ solving - the - lp - online - in - polynomial - time.-1 ] ] solving the lp online in polynomial time .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + given a vector ^e ] .we next describe the algorithm for generating covering constraints in timestep .recall that give us an online algorithm for solving a fractional covering lp with box constraints ; we use this as a black - box .( this lp solver only raises variables , a fact we will use . ) in timestep , we adaptively select a small subset of the covering constraints from ( [ eq : coveringconstraints ] ) , and present it to .moreover , given a fractional solution returned by , we will need to massage it at the end of timestep to get a solution satisfying all the constraints from ( [ eq : coveringconstraints ] ) corresponding to .let be the fractional solution to ( [ eq : coveringconstraints ] ) at the end of timestep . now given information about timestep , in particular the elements in and their acquisition costs, we do the following . given , we construct and check if , as one can separate for .if , then is feasible and we do not need to present any new constraints to , and we return . if not , our separation oracle presents an such that the constraint is violated .we present the constraint corresponding to to to get an updated , and repeat until is feasible for time .( since only raises variables and we have a covering lp , the solution remains feasible for past timesteps . )we next argue that we do not need to repeat this loop more than times .[ lem : farfromfeasible ] if for some and the corresponding , the constraint is violated .then let and let .let denote .thus since both and are integers , it follows that . 
on the other hand , for every , and thus .consequently finally , for any , , so the claim follows .the algorithm updates to satisfy the constraint given to it , and lemma [ lem : farfromfeasible ] implies that each constraint we give to it must increase by at least .the translation to the interval model ensures that the number of elements whose intervals contain is at most latexmath:[ ] the beginning of the algorithm , where and selecting element whenever exceeds : here we use the fact that the online algorithm only ever raises values , and this rounding algorithm is monotone .rerandomizing in case of failure gives us an expected cost of times the lp solution , and hence we get an -competitive algorithm .the dependence on the time horizon is unsatisfactory in some settings , but we can do better using lemma [ lem : farfromfeasible ] . recall that the -factor loss in the rounding follows from the naive union bound over the time steps .we can argue that when is small , we can afford for the rounding to fail occasionally , and charge it to the acquisition cost incurred by the linear program .the details appear in appendix [ sec : just - logr ] .the dependence on the time horizon is unsatisfactory in some settings , but we can do better using lemma [ lem : farfromfeasible ] . recall that the -factor loss in the rounding follows from the naive union bound over the time steps .we now argue that when is small , we can afford for the rounding to fail occasionally , and charge it to the acquisition cost incurred by the linear program .let us divide the period ] , so that we can afford to build a brand new tree once in each epoch and can charge it to the lp s fractional acquisition cost in the epoch . naively applying theorem [ thm : lp - round ] to each epochindependently gives us a guarantee of , where is the maximum length of an epoch .however , an epoch can be fairly long if the lp solution changes very slowly .we break up each epoch into phases , where each phase is a maximal subsequence such that the lp incurs acquisition cost at most ; clearly the epoch can be divided into at most disjoint phases .for a phase ] denote the solution defined as }(e ) = \min_{t\in [ t_1,t_2 ] } z_t(e) ] , the difference } - z_t\|_1 \leq \frac{1}{4} ] is in , where is defined as in ( [ eq:5 ] ) .suppose that in the randomized rounding algorithm , we pick the threshold ] be the event that the rounding algorithm applied to } ] is in for a phase ] occurs with probability .moreover , if } ] .since there are phases within an epoch , the expected number of times that the randomized rounding fails any time during an epoch is .suppose that we rerandomize all thresholds whenever the randomized rounding fails .each rerandomization will cost us at most in expected acquisition cost .since the expected number of times we do this is less than once per epoch , we can charge this additional cost to the acquisition cost incurred by the lp during the epoch .thus we get an -approximation .this argument also works for the online case ; hence for the common case where all the acquisition costs are the same , the loss due to randomized rounding is .we can show that any polynomial - time algorithm can not achieve better than an competitive ratio , via a reduction from online set cover .details appear in appendix [ app : sec : hardness - online ] . 
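the online rounding step sketched above (fixed per-element thresholds, acquire an element the first time its fractional value crosses its threshold, rerandomize on failure) can be written as a short routine; the rank-oracle interface and the greedy completion below are illustrative assumptions, not the paper's procedure.

```python
import random

class OnlineRounder:
    """acquire element e the first time its fractional value crosses a fixed
    per-element threshold; rerandomize thresholds if the integral set fails
    to reach full rank at some timestep."""

    def __init__(self, elements, acq_cost, alpha, seed=0):
        self.rng = random.Random(seed)
        self.elements = list(elements)
        self.acq_cost = dict(acq_cost)
        self.alpha = alpha
        self.thresholds = {e: self.rng.uniform(0.0, alpha) for e in self.elements}
        self.acquired = set()
        self.total_acq = 0.0

    def _buy(self, e):
        if e not in self.acquired:
            self.acquired.add(e)
            self.total_acq += self.acq_cost[e]

    def step(self, z, alive, rank_fn, full_rank):
        """z: current fractional values (only ever raised by the online LP),
        alive: set of usable elements at this time, rank_fn: rank oracle."""
        for e in alive:
            if z.get(e, 0.0) >= self.thresholds[e]:
                self._buy(e)
        if rank_fn(self.acquired & alive) < full_rank:
            # failure: redraw all thresholds and complete greedily
            self.thresholds = {e: self.rng.uniform(0.0, self.alpha)
                               for e in self.elements}
            for e in alive:
                if rank_fn(self.acquired & alive) >= full_rank:
                    break
                self._buy(e)
        return self.acquired & alive

# usage with a rank-2 uniform matroid over four elements
rank2 = lambda s: min(len(s), 2)
r = OnlineRounder("abcd", {e: 1.0 for e in "abcd"}, alpha=0.5)
print(r.step({"a": 0.6, "b": 0.4}, alive=set("abcd"), rank_fn=rank2, full_rank=2))
```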
in the online set cover problem ,one is given an instance of set cover , and in time step , the algorithm is presented an element , and is required to pick a set covering it .the competitive ratio of an algorithm on a sequence } ] .korman ( * ? ? ?* theorem 2.3.4 ) shows the following hardness for online set cover : there exists a constant such that if there is a ( possibly randomized ) polynomial time algorithm for online set cover with competitive ratio , then .recall that in the reduction in the proof of theorem [ thm : matr - hard ] , the set of long term edges depends only on .the short term edges alone depend on the elements to be covered .it can then we verified that the same approach gives a reduction from online set cover to online .it follows that the online problem does not admit an algorithm with competitive ratio better than unless .in fact this hardness holds even when the end time of each edge is known as soon as it appears , and the only non - zero costs are .we next consider the _ perfect matching maintenance _ ( ) problem where is the set of edges of a graph , and the at each step , we need to maintain a perfect matchings in .* integrality gap . *somewhat surprisingly , we show that the natural lp relaxation has an integrality gap , even for a constant number of timesteps .the lp and the ( very simple ) example appears in appendix [ sec : match - int - gap ] .the natural lp relaxation is : the polytope is now the perfect matching polytope for .[ lem : int - gap ] there is an integrality gap for the problem .consider the instance in the figure , and the following lp solution for 4 time steps . in , the edges of each of the two cycles has , and the cross - cycle edges have . in , we have and , and otherwise it is the same as . and are the same as . in , we have and , and otherwise it is the same as .for each time , the edges in the support of the solution have zero cost , and other edges have infinite cost .the only cost incurred by the lp is the movement cost , which is .consider the perfect matching found at time , which must consist of matchings on both the cycles .( moreover , the matching in time 3 must be the same , else we would change edges . )suppose this matching uses exactly one edge from and .then when we drop the edges and add in , we get a cycle on vertices , but to get a perfect matching on this in time we need to change edges . elsethe matching uses exactly one edge from and , in which case going from time to time requires changes . * hardness .* moreover , in appendix [ app : sec : match - hard ] we show that the perfect matching maintenance problem is very hard to approximate : for any it is np - hard to distinguish instances with cost from those with cost , where is the number of vertices in the graph .this holds even when the holding costs are in , acquisition costs are for all edges , and the number of time steps is a constant . in this sectionwe prove the following hardness result : for any it is np - hard to distinguish instances with cost from those with cost , where is the number of vertices in the graph .this holds even when the holding costs are in , acquisition costs are for all edges , and the number of time steps is a constant .the proof is via reduction from -coloring .we assume we are given an instance of -coloring where the maximum degree of is constant .it is known that the -coloring problem is still hard for graphs with bounded degree ( * ? ? 
?* theorem 2 ) .we construct the following gadget for each vertex .( a figure is given in figure [ fig : gadget ] . )there are two cycles of length , where is odd .the first cycle ( say ) has three distinguished vertices at distance from each other .the second ( called ) has similar distinguished vertices at distance from each other .there are three more `` interface '' vertices .vertex is connected to and , similarly for and .there is a special `` switch '' vertex , which is connected to all three of .call these edges the _ switch _ edges . due to the two odd cycles ,every perfect matching in has the structure that one of the interface vertices is matched to some vertex in , another to a vertex in and the third to the switch .we think of the subscript of the vertex matched to as the color assigned to the vertex . at every odd time step ,the only allowed edges are those within the gadgets : i.e. , all the holding costs for edges within the gadgets is zero , and all edges between gadgets have holding costs .this is called the `` steady state '' . at every even time step , for some matching of the graph, we move into a `` test state '' , which intuitively tests whether the edges of a matching have been properly colored .we do this as follows . for every edge ,the switch edges in become unavailable ( have infinite holding costs ) .moreover , now we allow some edges that go between and , namely the edge , and the edges for and .note that any perfect matching on the vertices of which only uses the available edges would have to match , and one interface vertex of must be matched to one interface vertex of .moreover , by the structure of the allowed edges , the colors of these vertices must differ .( the other two interface vertices in each gadget must still be matched to their odd cycles to get a perfect matching . ) since the graph has bounded degree , we can partition the edges of into a constant number of matchings for some ( using vizing s theorem ) . hence , at time step , we test the edges of the matching .the number of timesteps is , which is a constant . .the test - state edges are on the right . ]suppose the graph was indeed -colorable , say is the proper coloring .in the steady states , we choose a perfect matching within each gadget so that is matched . in the test state ,if some edge is in the matching , we match and .since the coloring was a proper coloring , these edges are present and this is a valid perfect matching using only the edges allowed in this test state .note that the only changes are that for every test edge , the matching edges and are replaced by and .hence the total acquisition cost incurred at time is , and the same acquisition cost is incurred at time to revert to the steady state .hence the total acquisition cost , summed over all the timesteps , is .suppose is not -colorable .we claim that there exists vertex such that the interface vertex not matched to the odd cycles is different in two different timesteps i.e . , there are times such that and ( for ) are the states. 
then the length of the augmenting path to get from the perfect matching at time to the perfect matching at is at least .now if we set , then we get a total acquisition cost of at least in this case .the size of the graph is , so the gap is between and .this proves the claim .in this paper we studied multistage optimization problems : an optimization problem ( think about finding a minimum - cost spanning tree in a graph ) needs to be solved repeatedly , each day a different set of element costs are presented , and there is a penalty for changing the elements picked as part of the solution .hence one has to hedge between sticking to a suboptimal solution and changing solutions too rapidly .we present online and offline algorithms when the optimization problem is maintaining a base in a matroid .we show that our results are optimal under standard complexity - theoretic assumptions .we also show that the problem of maintaining a perfect matching becomes impossibly hard .our work suggests several directions for future research .it is natural to study other combinatorial optimization problems , both polynomial time solvable ones such shortest path and min - cut , as well np - hard ones such as min - max load balancing and bin - packing in this multistage framework with acquisition costs .moreover , the approximability of the _ bipartite _ matching maintenance , as well as matroid intersection maintenance remains open .our hardness results for the matroid problem hold when edges have acquisition costs . the unweighted version where all acquisition costs are equal may be easier ; we currently know no hardness results , or sub - logarithmic approximations for this useful special case .an extension of /problems is to the case when the set of elements remain the same , but the matroids change over time . againthe goal in is to maintain a matroid base at each time .[ thm : diff - matrs - wpb ] the problem with different matroids is np - hard to approximate better than a factor of , even for partition matroids , as long as .the reduction is from 3d - matching ( 3 dm ) .an instance of 3 dm has three sets of equal size , and a set of hyperedges .the goal is to choose a set of disjoint edges such that . first , consider the instance of with three timesteps .the universe elements correspond to the edges . for , create a partition with parts , with edges sharing a vertex in falling in the same part .the matroid is now to choose a set of elements with at most one element in each part . for , the partition now corresponds to edges that share a vertex in , and for , edges that share a vertex in .set the movement weights for all edges .if there exists a feasible solution to 3 dm with edges , choosing the corresponding elements form a solution with total weight . if the largest matching is of size , then we must pay extra over these three timesteps .this gives a -vs- gap for three timesteps . to get a result for timesteps , we give the same matroids repeatedly , giving matroids at all times ] and each , add an arc , with cost .add a cost of per unit flow through vertex .( we could simulate this using edge - costs if needed . ) finally , add vertices and source . for each ,add arcs from to all vertices with costs .all these arcs have infinite capacity .now add unit capacity edges from to each , and infinite capacity edges from all nodes to . 
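for intuition, the one-element-per-part structure that this flow construction exploits can also be solved directly by a small per-part dynamic program; the instance format below (per-time holding costs and a fixed acquisition cost per element) is an illustrative rendering, not the flow network itself.

```python
def maintain_one_part(holding, acq):
    """min cost of holding exactly one element of a single part at every step.

    holding : list over time steps of dicts {element: per-time holding cost}
    acq     : dict {element: acquisition cost, paid whenever the element is acquired}
    returns (optimal cost, chosen element per time step)
    """
    elems = list(acq)
    INF = float("inf")
    cost = {e: acq[e] + holding[0].get(e, INF) for e in elems}   # end step 0 holding e
    path = {e: [e] for e in elems}
    for t in range(1, len(holding)):
        new_cost, new_path = {}, {}
        for e in elems:
            stay = (cost[e], e)
            switch = min(((cost[f] + acq[e], f) for f in elems if f != e),
                         default=(INF, e))
            best, prev = min(stay, switch)
            new_cost[e] = best + holding[t].get(e, INF)
            new_path[e] = path[prev] + [e]
        cost, path = new_cost, new_path
    best = min(cost, key=cost.get)
    return cost[best], path[best]

# usage: two elements in one part, three time steps; switching once is optimal
holding = [{"x": 0.0, "y": 5.0}, {"x": 4.0, "y": 0.0}, {"x": 4.0, "y": 0.0}]
print(maintain_one_part(holding, acq={"x": 1.0, "y": 1.0}))
```

summing the per-part optima over all parts gives the optimum for a partition matroid whose base takes one element per part, since the parts do not interact.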
since the flow polytope is integral for integral capacities , a flow of units will trace out paths from to , with the elements chosen at each time being independent in the partition matroid , and the cost being exactly the per - time costs and movement costs of the elements .observe that we could even have time - varying movement costs .whereas , for graphical matroids the problem is hard even when the movement costs for each element do not change over time , and even just lie in the set . moreover , the restriction in theorem [ thm : diff - matrs - wpb ] that is also necessary , as the following result shows .[ thm : two ] for the case of two rounds ( i.e. , ) the problem can be solved in polynomial time , even when the two matroids in the two rounds are different .the solution is simple , via matroid intersection .suppose the matroids in the two timesteps are and .create elements which corresponds to picking element and in the two time steps , with cost .lift the matroids and to these tuples in the natural way , and look for a common basis .we note that deterministic online algorithms can not get any non - trivial guarantee for the problem , even in the simple case of a -uniform matroid .this is related to the lower bound for deterministic algorithms for paging .formally , we have the 1-uniform matroid on elements , and .all acquisition costs are 1 . in the first period , all holding costs are zero and the online algorithm picks an element , say .since we are in the non - oblivious model , the algorithm knows and can in the second time step , set , while leaving the other ones at zero .now the algorithm is forced to move to another edge , say , allowing the adversary to set and so on . at the end of rounds ,the online algorithm is forced to incur a cost of 1 in each round , giving a total cost of .however , there is still an edge whose holding cost was zero throughout , so that the offline opt is 1 . thus against a non - oblivious adversary , any online algorithm must incur a overhead . in this section ,we show that if the aspect ratio of the movement costs is not bounded , the linear program has a gap , even when is exponentially larger than .we present an instance where and are about with , and the linear program has a gap of .this shows that the term in our rounding algorithm is unavoidable .the instance is a graphical matroid , on a graph on , and .the edges for ] have acquisition cost and have holding cost determined as follows : we find a bijection between the set ] , and set for .it is easy to check that is in the spanning tree polytope for all time steps . 
finally , the total acquisition cost is at most for the edges incident on and at most for the other edges , both of which are .the holding costs paid by this solution is zero .thus the lp has a solution of cost the claim follows .the greedy algorithm for is the natural one .we consider the interval view of the problem ( as in section [ sec : intervals ] ) where each element only has acquisition costs , and can be used only in some interval .given a current subset , define .the benefit of adding an element to is and the greedy algorithm repeatedly picks an element maximizing and adds to .this is done until for all $ ] .phrased this way , an bound on the approximation ration follows from wolsey .we next give an alternate dual fitting proof .we do not know of an instance with uniform acquisition costs where greedy does not give a constant factor approximation .the dual fitting approach may be useful in proving a better approximation bound for this special case .using lagrangian variables for each and , we write a lower bound for by which using the integrality of the matroid polytope can be rewritten as : here , denotes the cost of the minimum weight base at time according to the element weights , where the available elements at time is .the best lower bound is : it is useful to maintain , for each time , a _ minimum weight base _ of the subset according to weights .hence the current dual value equals .we start with and for all , which satisfies the above properties .suppose we now pick maximizing and get new set .we use akin to our definition of . call a timestep `` interesting '' if ; there are interesting timesteps .how do we update the duals ?for , we set .note the element itself satisfies the condition of being in for precisely the interesting timesteps , and hence .for each interesting , define the base ; for all other times set .it is easy to verify that is a base in .but is it a min - weight base ?inductively assume that was a min - weight base of ; if is not interesting there is nothing to prove , so consider an interesting .all the elements in have just been assigned weight , which by the monotonicity properties of the greedy algorithm is at least as large as the weight of any element in . since lies in andis assigned value , it can not be swapped with any other element in to improve the weight of the base , and hence is an min - weight base of .it remains to show that the dual constraints are approximately satisfied .consider any element , and let .the first step where we update for some is when is in the span of for some time .we claim that .indeed , at this time is a potential element to be added to the solution and it would cause a rank increase for time steps .the greedy rule ensures that we must have picked an element with weight - to - coverage ratio at most as high .similarly , the next for which is updated will have , etc .hence we get the sum since each element can only be alive for all timesteps , we get the claimed -approximation . note that the greedy algorithm would solve even if we had a different matroid at each time . however , the equivalence of and no longer holds in this setting , which is not surprising given the hardness of theorem [ thm : diff - matrs - wpb ] .
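the greedy rule just described can be sketched as follows for the interval view of the problem over a graphic matroid; the instance encoding (an edge with a living interval and an acquisition cost) and the per-timestep union-find rank bookkeeping are illustrative assumptions.

```python
class UF:
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def joins(self, x, y):            # True iff x and y are in different components
        return self.find(x) != self.find(y)
    def union(self, x, y):
        self.p[self.find(x)] = self.find(y)

def greedy_interval_mmm(n, elems, T):
    """elems: list of (u, v, start, end, cost); edge u-v is usable at times
    start..end inclusive and costs `cost` to acquire.  repeatedly picks the
    element maximizing (summed rank increase over its interval) / cost."""
    forests = [UF(n) for _ in range(T)]           # current forest at each time
    picked = []
    while True:
        best, best_ratio = None, 0.0
        for i, (u, v, s, e, c) in enumerate(elems):
            if i in picked:
                continue
            gain = sum(1 for t in range(s, e + 1) if forests[t].joins(u, v))
            if gain and gain / c > best_ratio:
                best, best_ratio = i, gain / c
        if best is None:
            break
        picked.append(best)
        u, v, s, e, c = elems[best]
        for t in range(s, e + 1):
            if forests[t].joins(u, v):
                forests[t].union(u, v)
    return picked

# usage: a triangle on 3 vertices over 2 time steps
elems = [(0, 1, 0, 1, 1.0), (1, 2, 0, 0, 1.0), (0, 2, 1, 1, 1.0), (0, 2, 0, 1, 3.0)]
print(greedy_interval_mmm(3, elems, T=2))
```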
this paper is motivated by the fact that many systems need to be maintained continually while the underlying costs change over time . the challenge then is to continually maintain near - optimal solutions to the underlying optimization problems , without creating too much churn in the solution itself . we model this as a multistage combinatorial optimization problem where the input is a sequence of cost functions ( one for each time step ) ; while we can change the solution from step to step , we incur an additional cost for every such change . we first study the multistage matroid maintenance problem , where we need to maintain a base of a matroid in each time step under the changing cost functions and acquisition costs for adding new elements . the online version of this problem generalizes online paging , and is a well - structured case of the metrical task systems . e.g. , given a graph , we need to maintain a spanning tree at each step : we pay for the cost of the tree at time , and also for the number of edges changed at this step . our main result is a polynomial time -approximation to the online multistage matroid maintenance problem , where is the number of elements / edges and is the rank of the matroid . this improves on results of buchbinder et al . who addressed the _ fractional _ version of this problem under uniform acquisition costs , and buchbinder , chen and naor who studied the fractional version of a more general problem . we also give an approximation for the offline version of the problem . these bounds hold when the acquisition costs are non - uniform , in which case both these results are the best possible unless p = np . we also study the perfect matching version of the problem , where we must maintain a perfect matching at each step under changing cost functions and costs for adding new elements . surprisingly , the hardness drastically increases : for any constant , there is no -approximation to the multistage matching maintenance problem , even in the offline case .
dependence logic is an extension of first - order logic which adds _ dependence atoms _ of the form to it , with the intended interpretation of `` the value of the term is a function of the values of the terms . ''the introduction of such atoms is roughly equivalent to the introduction of non - linear patterns of dependence and independence between variables of branching quantifier logic or independence friendly logic : for example , both the branching quantifier logic sentence and the independence friendly logic sentence correspond in dependence logic to in the sense that all of these expressions are equivalent to the skolem formula as this example illustrates , the main peculiarity of dependence logic compared to the others above - mentioned logics lies in the fact that , in dependence logic , the notion of _ dependence and independence between variables _ is explicitly separated from the notion of quantification .this makes it an eminently suitable formalism for the formal analysis of the properties of _ dependence itself _ in a first - order setting , and some recent papers ( ) explore the effects of replace dependence atoms with other similar primitives such as _ independence atoms _ , _ multivalued dependence atoms _ , or _ inclusion _ or _atoms .branching quantifier logic , independence friendly logic and dependence logic , as well as their variants , are called _ logics of imperfect information _ : indeed , the truth conditions of their sentences can be obtained by defining , for every model and sentence , an imperfect - information _ semantic game _ between a _ verifier _ ( also called eloise ) and a _ falsifier _ ( also called abelard ) , and then asserting that is true in if and only if the verifier has a winning strategy in . as an alternative of this ( non - compositional ) _ game - theoretic semantics _ , which is an imperfect - information variant of hintikka s game - theoretic semantics for first order logic , hodges introduced in _ team semantics _ ( also called _ trump semantics _ ) , a compositional semantics for logics of imperfect information which is equivalent to game - theoretic semantics over sentences and in which formulas are satisfied or not satisfied not by single assignments , but by _ sets _ of assignments ( called _ teams _ ) . in this work, we will be mostly concerned with team semantics and some of its variants .we refer the reader to the relevant literature ( for example to and ) for further information regarding these logics : in the rest of this section , we will content ourselves with recalling the definitions and results which will be useful for the rest of this work .let be a first order model and let be a finite set of variables .then an _ assignment _ over with _ domain _ is a function from to the set of all elements of .furthermore , for any assignment over with domain , any element and any variable ( not necessarily in ) , we write ] , where = \{s[f(s)/v ] : s \in x\}\ ] ] ts- : : : is of the form and } \psi ] and \subseteq y\} ] for some and . by induction hypothesis and downwards closure, this can be the case if and only if } \psi ] , that is , if and only if } \psi ] .hence , we have that ; and , by downwards closure , this implies that , and hence that as required .+ conversely , suppose that .then , and hence for some . then we have by definition that } v_p(j , x) ] . by induction hypothesis, the first condition holds if and only if . 
as for the second one, it holds if and only if = y_1 \cup y_2 ] , and hence .now let us consider the cases corresponding to transition terms : 1 .suppose that .if then , and hence by * non - creation * we have that , as required. + let us assume instead that .then , by hypothesis , there exists a such that * there exists a such that [f / y ] } r_t(i , x , y) ] .+ from the first condition it follows that for every there exists a such that : therefore , by the definition of , every such must be in .+ from the second condition it follows that whenever and , ; and , since , this implies that by the definition of .+ hence , by * monotonicity * and * downwards closure * , we have that and that , as required .+ conversely , suppose that for some .if then , and hence by proposition [ propo : emptyteam ] we have that , as required . otherwise , by * non - triviality * , now be any of its elements and let for all ] , as any assignment of this team sends to some element of and to .furthermore , let , and let be such that : then , and hence [t^{dl}/y ] } \lnot r_t(i , x , y ) \vee py ] can be split into : s(y ) \not \in q\}\ ] ] and : s(y ) \in q\}\ ] ] it is trivial to see that ; and furthermore , since and , by induction hypothesis we have that . thus } \forall y ( \lnot qy \vee ( \tau_2)^{dl}_y(p)) ] , and since and do not occur in or the above algorithm can succeed ( for some choice of and ) only if or . as another , slightly more complicated example , let us consider the following problem . given four variables , , and ,let be an _ exclusion atom _ holding in a team if and only if for all , that is , if and only if the sets of the values taken by and by in are disjoint .by theorem [ sigmatodl ] , we can tell at once that there exists some dependence logic formula such that for all suitable and , if and only if ; but what about the converse ? for example , can we find an expression , in the language of first order logic augmented with these exclusion atoms ( but with no dependence atoms ) , such that for all suitable and if and only if ?as discussed in in a more general setting , the answer is positive , and one such is , where is some variable other than and . in the second disjunct can be removed , but for simplicity we will keep it . ] why is this the case ?well , let us consider any team with domain containing and , and let us evaluate over it .as shown graphically in figure [ fig : f1 ] , the transitions between teams occurring during the evaluation of the formula correspond to the following algorithm : 1 .first , assign all possible values to the variable for all assignments in , thus obtaining = \{s[m / z ] : s \in x , m \in { { \texttt{dom}}}(m)\} ] all assignments for which , keeping only the ones for which ; 3 . then , verify that for any possible fixed value of , the possible values of and are disjoint .this algorithm succeeds only if is a function of .indeed , suppose that instead there are two assignments such that , and for three with .now we have that , s[c / z ] , s'[b / z ] , s'[c / z]\ } \subseteq x[m / z] ] and ] , and therefore it is not true that . 
and , conversely ,if in the team the value of is a function of the value of then by splitting ] and : s(y ) \not = s(z)\} ] ; tdl- : : : is of the form for some and \subseteq y ] for some , and by choosing ] , and by downwards closure we have that } \theta ] , and if we choose ] and , by downwards closure , } \theta ] ; dpl- : : : is of the form , and for all elements there exists an such that \rightarrow h } \psi ] for some ; ddl- : : : is of the form for some , and \subseteq y$ ] ; ddl- : : : is of the form and for two teams and such that and ; ddl- : : : is of the form , and ; ddl - concat : : : is of the form , and there exists a such that and . a formula is said to be _ satisfied _ by a team in a model if and only if there exists a such that ; and if this is the case , we will write .it is not difficult to see that dynamic dependence logic is equivalent to transition dependence logic ( and , therefore , to dependence logic ) .let be a dependence logic formula .then there exists a dynamic dependence logic formula which is equivalent to it , in the sense that for all suitable teams and models we build by structural induction : 1 . if is a literal or a dependence atom then ; 2 . if is then ; 3 . if is then ; 4 . if is then ; 5 . if is then .let be a dynamic dependence logic formula .then there exists a transition dependence logic transition term such that for all suitable , and , and such that hence build by structural induction : 1 . if is a literal or dependence atom then ; 2 . if is of the form or then ; 3 . if is of the form then ; 4 . if is of the form then ; 5 . if is of the form then .dynamic dependence logic is equivalent to transition dependence logic and to dependence logic follows from the two previous results and from the equivalence between dependence logic and transition dependence logic .in this work , we established a connection between a variant of dynamic game logic and dependence logic , and we used it as the basis for the development of variants of dependence logic in which it is possible to talk directly about transitions from teams to teams .this suggests a new perspective on dependence logic and team semantics , one which allow us to study them as a special kind of _ algebras of nondeterministic transitions between relations_. one of the main problems that is now open is whether it is possible to axiomatize these algebras , in the same sense in which , in , allen mann offers an axiomatization of the algebra of trumps corresponding to if logic ( or , equivalently , to dependence logic ) .furthermore , we might want to consider different choices of connectives , like for example ones related to the theory of database transactions .the investigation of the relationships between the resulting formalisms is a natural continuation of the currently ongoing work on the study of the relationship between various extensions of dependence logic , and promises of being of great utility for the further development of this fascinating line of research .the author wishes to thank johan van benthem and jouko vnnen for a number of useful suggestions and insights .furthermore , he wishes to thank the reviewers for a number of highly useful suggestions and comments .hintikka , j. and g. sandu : 1989 , ` informational independence as a semantic phenomenon ' . in : j. fenstad ,i. frolov , and r. hilpinen ( eds . ) : _ logic , methodology and philosophy of science_. elsevier , pp .571589 .kontinen , j. and v. nurmi : 2009 , ` team logic and second - order logic ' .in : h. ono , m. 
kanazawa , and r. de queiroz ( eds . ) : _ logic , language , information and computation _ , vol . 5514 of _ lecture notes in computer science _ , springer berlin / heidelberg , pp . 230 - 241 . parikh , r. : 1985 , ` the logic of games and its applications ' . in : _ selected papers of the international conference on `` foundations of computation theory '' on topics in the theory of computation _ , new york , ny , usa , pp . 111 - 139 . väänänen , j. : 2007b , ` team logic ' . in : j. van benthem , d. gabbay , and b. löwe ( eds . ) : _ interactive logic . selected papers from the 7th augustus de morgan workshop _ , amsterdam university press , pp .
we examine the relationship between dependence logic and game logics . a variant of dynamic game logic , called _ transition logic _ , is developed , and we show that its relationship with dependence logic is comparable to the one between first - order logic and dynamic game logic discussed by van benthem . this suggests a new perspective on the interpretation of dependence logic formulas , in terms of assertions about _ reachability _ in games of imperfect information against nature . we then capitalize on this intuition by developing expressively equivalent variants of dependence logic in which this interpretation is taken to the foreground .
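to make the team-semantics notions used above concrete, here is a small illustrative check of a dependence atom and an exclusion atom over a team given as a list of assignments; the dictionary encoding of assignments is an assumption made only for the example.

```python
def satisfies_dependence(team, xs, y):
    """=(xs, y): the value of y is a function of the values of xs in the team."""
    seen = {}
    for s in team:
        key = tuple(s[x] for x in xs)
        if key in seen and seen[key] != s[y]:
            return False
        seen[key] = s[y]
    return True

def satisfies_exclusion(team, y, z):
    """y | z: the sets of values taken by y and by z in the team are disjoint."""
    return not {s[y] for s in team} & {s[z] for s in team}

# a team is a set of assignments over the variables x and y
team = [{"x": 0, "y": 1}, {"x": 0, "y": 1}, {"x": 1, "y": 2}]
print(satisfies_dependence(team, ["x"], "y"))   # True: y is a function of x
team.append({"x": 0, "y": 3})
print(satisfies_dependence(team, ["x"], "y"))   # False: x=0 maps to both 1 and 3
print(satisfies_exclusion(team, "x", "y"))      # False: the value 1 occurs for both
```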
most biopolymers , such as rnas , proteins and genomic dna , are found in folded configurations .folding involves the formation of one or more intramolecular interactions , termed contacts .proper folding of these molecules is often necessary for their function .intensive efforts have been made to measure the geometric and topological properties of protein and rna folds , and to find generic relations between those properties and molecular function , dynamics and evolution .likewise , topological properties of synthetic molecules have been subject to intense research , and their significance for polymer chemistry and physics has been widely recognized .topology is a mathematical term , which is used to describe the properties of objects that remain unchanged under continuous deformation .different approaches have been discussed in the literature to describe the topology of branched or knotted polymers . however , many important biopolymers , such as proteins and nucleic acids , are unknotted linear chains .the circuit topology approach has recently been introduced to characterize the folded configuration of linear polymers .circuit topology of a linear chain elucidates generically the arrangement of intra - chain contacts of a folded - chain configuration ( see fig . [ fig1 ] ) .the arrangement of the contacts has been shown to be a determinant of the folding rates and unfolding pathways of biomolecules , and has important implications for bimolecular evolution and molecular engineering .topology characterization and sorting of polymers has been the subject of intense research in recent years ; bulk purification of theta - shaped and three - armed star polymers is performed using chromatography ; linear and circular dna are separated in nano - grooves embedded in a nano - slit ; and star - branched polymers with different number of arms are shown to travel with different speeds through a nano - channel . in the context of characterization , linear and circular dna molecules are probed by confining them in a nano - channel and using fluorescence microscopy . we knowlittle about how to sort folded linear polymers based on topology .this is in contrast to size sorting of folded linear polymers which has been studied extensively in the literature .nano - pore technology represents a versatile tool for single - molecule studies and biosensing .a typical setting involves a voltage difference across the nano - pore in an ionic solution containing the desired molecule .the ion current through the nano - pore decreases as the molecule enters the pore .the level of current reduction and its duration reveals information about the molecule . prior to the current project, different properties of nucleic acids and proteins have been studied using nano - pore technology , for example : dna sequencing , unzipping of nucleic acids , protein detection , unfolding of proteins , and interactions between nucleic acids and proteins . in our study , we used simple models of polymer chains and molecular dynamic simulations to determine how the circuit topology of a chain influences its passage through a nano - pore .we investigated whether nano - pores can be used for topology - based sorting and characterization of folded chains .two scenarios were considered : ( 1 ) passage through pores large enough to permit the chain to pass through without breaking its contacts , and ( 2 ) passage of chains through small nano - pores , during which contacts were ripped apart . 
in the first scenario ,nano - pore technology enabled purification of chains with certain topologies and allowed us to read the topology of a folded molecule as it passed through the pore . in the second scenario, we used the nano - pore to read the circuit topology of a single fold .we also asked if translocation time and chain topology are correlated .this technology has been subject to intense research for simple - structured polynucleotides ; however , the current study is the first to use nano - pores to systematically measure contact arrangements of folded molecules ( fig . [ fig1 ] ) .the polymer is modeled by beads connected by fene bonds ] at . is the energy scale of the simulations . is the monomer size and the length scale of the simulations .all simulations were performed by espresso as detailed below .initially , the first monomer is fixed inside the nano - pore .after the whole polymer is equilibrated , the first monomer is unfixed and force , , is applied to pull it through the nano - pore . for pore diameters smaller than two monomers , passage ofthe polymer inevitably leads to breakage of the contacts . in this case, the bond between the contact sites is replaced with a simple lennard - jones potential $ ] after equilibration .the depth of the attraction well , , is a measure of the strength of the bond between the contact sites .number of passed monomers and position of the first monomer versus time are studied in simulations .these quantities are averaged over different realizations . for longer passages ,the averages are again window - averaged over intervals equal to 10 time units .window - averaging is used to reduce the data points and the noise in the plots . to minimize the effect of determinants other than topology, we take equal spacing , , between the contact sites ( connected monomers ) and two tails on the sides equal to the spacings .the total length of the polymer is . if the monomers are numbered consecutively from one end , then, the position of the contact sites along the chain would be , , and .the spacing is taken equal to 12 monomers , unless otherwise stated .some chains become knotted when the bonds are formed in the chain or when the chain is pulled suddenly with a strong force .passage times of these knotted chains are extremely long .the data related to these unusual passages is removed before averaging .consider translocation of 2-contact chains through a nano - pore with an internal diameter equal to .first we assume contacts are permanent and unbreakable .two different strengths for the pulling force , and , are examined .50 realizations are performed for each of the three topologies ( fig . [ fig1](a ) ) and the two forces . the average number of monomers passed through the nano - pore versus time is shown in figs . [ fig2](a ) and [ fig2](b ) .shoulders in the curves correspond to pauses during the passage of the polymer when the contacts encounter the nano - pore .we first examine the passage dynamics under stronger force , ( fig . [ fig2](a ) ) .one shoulder is observed during the passages of the cross and the parallel topologies , while the series topology is markedly different with two clear shoulders during its passage .the average number of passed monomers at the shoulders coincides with the position of the contact sites ( shown with horizontal lines in the plot ) .this confirms interpretation of the shoulders as the pauses related to the passage of the contacts . 
the average number of monomers inside the nano - pore versus time is also significantly different for the series topology ( inset of fig . [ fig2](a ) ) .two distinct peaks are observed for the series topology , while only one peak is seen for the cross and the parallel topologies .the peaks in the inset plot occur simultaneously with the shoulders in the main plot. force has a dramatic effect on the passage dynamics of chains .[ fig2](b ) plots the average number of monomers passed through the nano - pore versus time under . here, the maximum passage time for the chain with parallel topology is larger relative to other topologies , while the one with the series topology had the largest maximum passage time under the stronger force . by reducing the force, the passage time gets much longer and the entropic effects become dominant . as a result , the shoulders in the number of passed monomers are not as clear as before .one shoulder is observed for all topologies at the position of the first contact site .two other shoulders are observed during the passage of the chain with parallel contact arrangemenet .the second shoulder corresponds to the time when the large loop of the chain is midway inside the nano - pore .furthermore , the third shoulder is due to the second contact and the pause caused by the entropy of the second small loop in the parallel topology ( shown schematically in the inset ) .additionally , two peaks are seen in the time profile of the average number of monomers inside the nano - pore .these peaks appear simultaneously with the described shoulders in the time profile of the number of passed monomers ( inset of fig . [ fig2](b ) ) .we note that the first and the third shoulders are also seen in the average position of the first monomer versus time ( fig .s2 ) . and ( b ) .the shoulders correspond to pauses in the passage process and can be used to read the chain topology .the series topology shows two shoulders under the strong force ( a ) and the parallel topology shows three shoulders under the weak force ( b ) .insets : average number of monomers inside the nano - pore versus time .the peaks in the inset plots occur at the same time with the shoulders in the main plots .schematics shows the parallel topology and the arrow points to the smaller loop .entropy of the smaller loop causes the last pause in the passage of the parallel topology under the weak force . ]the results show that it is possible to distinguish the series and parallel from other topologies using nano - pores with strong and weak forces , respectively .the number of passed monomers and the number of monomers inside the nano - pore can be readily measured in experiments .the former can be measured by pulling the end of the polymer using optical tweezers , while the latter is readable by measuring the ion current through the nano - pore .the ion current has been shown to take discrete values with the number of monomers in the nano - pore . finally , the maximum passage time in each pulling force is also different for the three topologies and can be used alternatively for identifying the topology of a 2-contact unbreakable chain . 
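for reference, the interaction terms of the bead-spring model described in the model section can be written down as simple functions; the parameter values and the site placement below (contact sites at s, 2s, 3s, 4s on a chain of roughly 5s monomers) are illustrative choices, not necessarily those of the reported runs.

```python
import numpy as np

def fene_energy(r, k=30.0, r_max=1.5):
    """FENE bond between consecutive beads: diverges as r -> r_max."""
    r = np.asarray(r, dtype=float)
    return -0.5 * k * r_max**2 * np.log(1.0 - (r / r_max) ** 2)

def lj_energy(r, epsilon=1.0, sigma=1.0, shift=True):
    """Lennard-Jones pair energy; with shift=True it is truncated and shifted
    at 2^(1/6) sigma so only the repulsive (excluded-volume) part remains."""
    r = np.asarray(r, dtype=float)
    u = 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    if shift:
        rc = 2.0 ** (1.0 / 6.0) * sigma
        u = np.where(r < rc, u + epsilon, 0.0)
    return u

def contact_sites(spacing=12):
    """monomer indices of the four connected sites for equal spacing and equal
    tails, as for the 2-contact chains above."""
    s = spacing
    return [s, 2 * s, 3 * s, 4 * s], 5 * s

sites, length = contact_sites()
print(sites, length)                       # [12, 24, 36, 48] 60
print(float(lj_energy(1.0)), float(fene_energy(1.0)))
```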
to generalize the obtained results to molecules of various sizes ( chain lengths ) ,we investigate the passage of a chain with two unbreakable contacts and double spacing between the contact sites under weak and strong pulling forces .position of the shoulders and the peaks are in agreement with the above descriptions ( fig .also , it is seen that the maximum passage time is longer for the series topology under the strong force and for the parallel topology under the weak force .( see esi section 1 ) next , we consider folded molecules with more than two intramolecular contacts .nano - pores of various sizes are needed to pass these complex unbreakable topologies under usual pulling forces .this gives the opportunity to use the nano - pore for purifying topologies or for enrichment of a certain topology from a mixture of different topologies .to test this idea , we examine the passage of 3-contact chains through a pore with an internal diameter equal to under the pulling force .there are 15 topologically different configurations for a 3-contact chain . among these ,three configurations have two parallel relations in their topologies , shown in fig 1(b ) .two of them ( among all 15 configurations ) do not pass the nano - pore in usual time intervals .this is in agreement with the expectation that chains with a higher number of parallel topologies tend to interlock more , and do not pass through smaller pores .the three chains shown in fig .[ fig1](b ) behave similarly when they enter the nano - pore from either end .this means that the chain direction is not important in purifying the topologies using a nano - pore . .for the smaller nano - pores , 150 random configurations are averaged . however , up to 700 random configurations are tested for the nano - pores with .all the chains can pass through the nano - pore when the nano - pore diameter is larger than or equal to . ]we then extended the simulations to 5-contact chains as a representation of real chains with increasing complexities .there are ( 2 * 5 - 1)!!=1500 topologically different configurations for a 5-contact chain , so we chose configurations at random and passed them through the nano - pore under the pulling force .the chains are examined to see whether they pass through the nano - pore in a reasonable time .a first - order measure of the circuit topology of a chain is the number of contact pairs that are in series , cross , or parallel arrangements .3 shows the average numbers of the three topological arrangements versus the internal diameter of the nano - pore , , for chains that pass and do not pass through the nano - pore .the average number of series topology is higher in the passed chains compared to the chains that do not pass .the average number of parallel topology is smaller in the passed chains , for pore diameters smaller than four times the monomer size .average number of cross topology is smaller for passed chains . 
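the first-order measure used here (the number of contact pairs in each arrangement) can be computed directly from the list of contacts; the sketch below classifies a pair of contacts from the order of their sites along the chain, assuming the four sites are distinct.

```python
from itertools import combinations
from collections import Counter

def relation(c1, c2):
    """circuit-topology relation between two contacts, each given as a pair of
    monomer indices along the chain."""
    (a, b), (c, d) = sorted(c1), sorted(c2)
    if b < c or d < a:                           # disjoint intervals
        return "series"
    if (a < c and d < b) or (c < a and b < d):   # one nested inside the other
        return "parallel"
    return "cross"                               # interleaved intervals

def topology_counts(contacts):
    return Counter(relation(p, q) for p, q in combinations(contacts, 2))

# usage: a 5-contact chain given by its contact pairs
contacts = [(1, 4), (6, 9), (2, 12), (15, 30), (16, 25)]
print(topology_counts(contacts))
```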
in a realistic setting , when a mixture of randomly connected chains are allowed to pass through a nano - pore ( smaller than ) , we predict that the flow through would be enriched in series topology .however , the fraction of the mixture that fails to pass through the pore would contain chains with high number of parallel and cross topologies .this can be justified by the fact that in parallel and cross topologies , the contact sites are relatively far from each other along the chain .thus , chains with a high number of contacts with parallel and cross arrangements are bulkier and have more interlocking configurations .in contrast , in series topology , the contacts are local and do not connect distant points along the chain . as a result , the chains with a high number of contacts with series arrangements are more extended in configuration and can pass through the nano - pore more easily .as it is evident from our study , the excluded volume interaction is the main deriving force behind separation in a narrow nano - pore ; this interaction has not been considered in previous theoretical works . finally , simulations with a four times stronger pulling force shows that no purification is possible under higher forces even with the smallest nano - pore , , reflecting the importance of tuning the applied force to its optimal values .there are two parameters that determine the time needed for the passage of a chain coupled to bond breakup ; the bond strength and the pulling force .we first studied pulling of 2-contact chains through a nano - pore with internal diameter equal to , under a force comparable to the bond strength ( see esi section 2 for a theoretical description ) .for very weak bonds , with a bond strength equal to , the contacts break before reaching the pore .this is due to the tension propagated along the chain from the pulled end . for medium to strong bonds between , it is possible to see shoulders in the time profile of the position of the first monomer , using suitable pulling forces ( figs .4(a ) and 4(b ) ) . for shouldersto become prominent , a large pulling force is required to dominate the entropic fluctuations . however, it should not be too strong to completely eliminate the effect of topology .as the leading end of the chain is stretched completely with large pulling forces , the shoulders can be used to find position of the contact sites along the chain ( horizontal lines in figs .4(a ) and 4(b ) ) . and .the pulling force should be chosen carefully : large enough to minimize the effect of entropy but not too large to eliminate the effect of topology .the shoulders are due to the pauses at the contact sites .position of the shoulders can give information about the position of the connected monomers along the chain ( horizontal lines ) .insets : position of the contact sites can not be tracked in smaller pulling forces . ] for large forces , there is no difference between the passage times of the three topologies .the difference increases by decreasing the pulling force . for very weak forces , however , the entropic effects become significant and hide the effect of topology on the translocation time . moreover , the simulation time becomes very large and simulations ( experiments ) are not cost - effective .the results indicate that moderately weak forces can be used to discriminate the three topologies . 
for this purpose ,we calculate the average passage times of the three topologies using a suitable pulling force ( considering the bond strength ) .then , the maximum and the minimum average passage time is found among the three topologies . figs . [ fig5](a ) and [ fig5](b ) show the topologies that have the minimum and the maximum of the average passage times , respectively .we note that changing the dataset used for averaging does not alter the order of the average passage times of the three topologies for all bond strengths and the corresponding suitable forces ( fig .s7 and table s1 ) .therefore , the average translocation time can be regarded as a tool for reading the chain topology .( see esi section 3 ) to generalize our findings , we investigate translocation of 5-contact chains to find a correlation between topology of the chain and its average passage time . in three sets of simulations , one of the cross , parallel , and series relationsis taken to be dominant in the topology of the chain , meaning that the majority of the contact pairs have the dominant arrangement .we call such states as pure states .more specifically , 8 out of 10 total binary relations are taken the same ; however , the numbers of the other two relations are not determined .the average passage time in each set is calculated over 150 randomly chosen chains that fulfill the mentioned conditions ; , or . for each bond energy and pulling force , we also calculate the average passage time for a fourth set which contains 150 completely random chains .again , we calculate the average passage times for the three pure states. then , the maximum and the minimum passage times are found between the three sets .pure states that have the maximum and the minimum of the average passage times are shown in figs . [ fig5](c ) and [ fig5](d ) .extremely large passage times , which occur due to chain knotting , are removed from the data prior to averaging .the order of the average passage time among pure states does not depend on the data set used , while the data set contains enough data points ( table s2 ) .this shows that the dominant topology in pure states can be recognized by using the passage time through a nano - pore .( see esi section 4 )in summary , we studied translocation of folded polymers through nano - pores using molecular dynamics simulations .we found settings that are required for a nano - pore setup to be able to read and sort molecules based on their molecular topology .we showed that nano - pores can be used to efficiently enrich certain topologies from mixtures of random 5-contact chains and that this purification is not sensitive to chain orientation in the nano - pore .we also showed that nano - pores can be used to determine the chain topology for 2-contact chains when the intact folded chains pass through the pore .when the chain unfolds upon passing through the nano - pore , we showed that the nano - pore enables determining the position of the contacts along a 2-contact chain in large pulling forces . in this condition , by using moderate forces, we could discriminate between pure states ( i.e. , states for which the majority of contacts were arranged identically ) by using the average passage time .the authors thank mahdieh mikani for technical help and fatemeh ramazani for careful reading of the paper .vakhrushev a. v. ; gorbunov a. a. ; tezuka y. ; tsuchitani a. ; oike h. 
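the random configurations and "pure" states used for these averages can be generated in a few lines; the site labels and the threshold of 8 out of 10 identical pairwise relations follow the description above, while the sampling routine itself is an illustrative assumption.

```python
import random
from itertools import combinations

def relation(c1, c2):
    (a, b), (c, d) = sorted(c1), sorted(c2)
    if b < c or d < a:
        return "series"
    if (a < c and d < b) or (c < a and b < d):
        return "parallel"
    return "cross"

def random_configuration(n_contacts=5, rng=random):
    """pair up 2*n_contacts sites along the chain into contacts at random."""
    sites = list(range(1, 2 * n_contacts + 1))
    rng.shuffle(sites)
    return [tuple(sorted(sites[2 * i:2 * i + 2])) for i in range(n_contacts)]

def is_pure(contacts, kind, needed=8):
    """a 'pure' state: at least `needed` of the 10 pairwise relations are `kind`."""
    rels = [relation(p, q) for p, q in combinations(contacts, 2)]
    return rels.count(kind) >= needed

rng = random.Random(1)
pure_series = []
while len(pure_series) < 3:
    c = random_configuration(rng=rng)
    if is_pure(c, "series"):
        pure_series.append(c)
print(pure_series)
```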
liquid chromatography of theta - shaped and three - armed star poly ( tetrahydrofuran)s : theory and experimental evidence of topological separation .chem . _ * 2008 * , 80 , 8153 - 8162 .mikkelsen m. b. ; reisner w. ; flyvbjerg h. ; kristensen a. pressure - driven dna in nanogroove arrays : complex dynamics leads to length - and topology - dependent separation ._ nano lett . _ * 2011 * , 11 , 1598 - 1602 .dorfman k. d. ; king s. b. ; olson d. w. ; thomas j. d. ; tree d. r. beyond gel electrophoresis : microfluidic separations , fluorescence burst analysis , and dna stretching .rev . _ * 2012 * , 113 , 2584 - 2667 . oukhaled g. ; mathe j. ; biance a. l. ; bacri l. ; betton j. m. ; lairez d. ; pelta j. ; auvray l. unfolding of proteins and long transient conformations detected by single nano - pore recording . _lett . _ * 2007 * , 98 , 158101 .langecker m. ; ivankin a. ; carson s. ; kinney s. r. ; simmel f. c. ; wanunu m. nano - pores suggest a negligible influence of cpg methylation on nucleosome packaging and stability ._ nano lett . _ * 2014 * , 15 , 783 - 790 .
here we report on the translocation of folded polymers through nano - pores using molecular dynamics simulations . two cases are studied : one in which a folded molecule unfolds upon passage and one in which the folding remains intact as the molecule passes through the nano - pore . the topology of a folded polymer chain is defined as the arrangement of the intramolecular contacts , known as circuit topology . in the case where intramolecular contacts remain intact , we show that the dynamics of passage through a nano - pore varies for molecules with differing topologies : a phenomenon that can be exploited to enrich certain topologies in mixtures . we find that the nano - pore allows reading of topology for short chains . moreover , when the passage is coupled with unfolding , the nano - pore enables discrimination between pure states , i.e. , states for which the majority of contacts are arranged identically . in this case , as we show here , it is also possible to read the positions of the contact sites along a chain . our results demonstrate the applicability of nano - pore technology to characterize and sort molecules based on their topology .
a central open question in classical fluid dynamics is whether the incompressible three - dimensional euler equations with smooth initial conditions develop a singularity after a finite time .a key result was established in the late eighties by beale , kato and majda ( bkm ) .the bkm theorem states that blowup ( if it takes place ) requires the time - integral of the supremum of the vorticity to become infinite ( see the review by bardos and titi ) . many studies have been performed using the bkm result to monitor the growth of the vorticity supremum in numerical simulations in order to conclude yes or no regarding the question of whether a finite - time singularity might develop .the answer is somewhat mixed , see _e.g. _ references and the recent review by gibbon .other conditional theoretical results , going beyond the bkm theorem , were obtained in a pioneering paper by constantin , fefferman and majda .they showed that the evolution of the direction of vorticity posed geometric constraints on potentially singular solutions for the 3d euler equation .this point of view was further developed by deng , hou and yu in references and .an alternative way to extract insights on the singularity problem from numerical simulations is the so - called analyticity strip method . in this method the time is considered as a real variable and the space - coordinatesare considered as complex variables .the so - called `` width of the analyticity strip '' is defined as the imaginary part of the complex - space singularity of the velocity field nearest to the real space .the idea is to monitor as a function of time .this method uses the rigorous result that a real - space singularity of the euler equations occurring at time must be preceded by a non - zero that vanishes at . using spectral methods , obtained directly from the high - wavenumber exponential fall off of the spatial fourier transform of the solution .this method effectively provides a `` distance to the singularity '' given by , which can not be obtained from the general bkm theorem .note that the bkm theorem is more robust than the analyticity - strip method in the sense that it applies to velocity fields that do not need to be analytic .however , in the present paper we will concentrate on initial conditions that are analytic . 
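For reference, the two criteria just described can be stated compactly; the forms below are the standard ones, and any notational differences from the definitions adopted later in the paper are incidental.

```latex
% BKM criterion: loss of regularity at time T_* requires
\[
  \int_0^{T_*} \|\boldsymbol{\omega}(\cdot,t)\|_{\infty}\,\mathrm{d}t \;=\; \infty .
\]
% Analyticity-strip method: for an analytic velocity field the spectrum decays as
\[
  E(k,t)\;\sim\;C(t)\,k^{-n(t)}\,e^{-2\delta(t)\,k}\qquad(k\to\infty),
\]
% and a real singularity at T_* requires \delta(t) -> 0 as t -> T_*.
```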
in this case, there is a well - known result that states : _ in three dimensions with periodic boundary conditions and analytic initial conditions , analyticity is preserved as long as the velocity is continuously differentiable _ ( ) _ in the real domain _ .the bkm theorem allows for a strengthening of this result : analyticity is actually preserved as long as the vorticity is finite .the analyticity - strip method has been applied to probe the euler singularity problem using a standard periodic ( and analytical ) initial data : the so - called taylor - green ( tg ) vortex .we now give a short review of what is already known about the tg dynamics .numerical simulations of the tg flow were performed with resolution increasing over the years , as more computing power became available .it was found that except for very short times and for as long as can be reliably measured , it displays almost perfect exponential decrease .simulations performed in on a grid of points obtained ( for up to ) .this behavior was confirmed in at resolution .more than years after the first study , simulations performed on a grid of points yielded ( for up to ) .if these results could be safely extrapolated to later times then the taylor - green vortex would never develop a real singularity .the present paper has two main goals .one is to report on and analyze new simulations of the tg vortex that are performed at resolution .these new simulations show , for the first time , a well - resolved change of regime , leading to a faster decay of happening at a time where preliminary visualizations show the collision of vortex sheets .that was reported in mhd for the so - called imtg initial data at resolution in reference . ]the second goal of this paper is to answer the following question , motivated by the new behavior of the tg vortex : how fast does the analyticity - strip width have to decrease to zero in order to sustain a finite - time singularity , consistent with the bkm theorem ?to the best of our knowledge , this question has not been formulated previously . to answer this questionwe introduce a new bound of the supremum norm of vorticity in terms of the energy spectrum .we then use this bound to combine the bkm theorem with the analyticity - strip method .this new bound is sharper than usual bounds .we show that a finite - time blowup exists only if the analyticity - strip width goes to zero sufficiently fast at the singularity time .if a power - law behavior is assumed for then its exponent must be greater than some critical value .in other words , we provide a powerful test that can potentially rule out the existence of a finite - time singularity in a given numerical solution of euler equations .we apply this test to the data from the latest taylor - green numerical simulation in order to see if the change of behavior in can be consistent with a singularity .the paper is organized as follows : section [ sec : theo ] is devoted to the basic definitions , symmetries and numerical method related to the inviscid taylor - green vortex . in sec .[ sec : numerics_classical ] , the new high - resolution taylor - green results are presented and are analyzed classically in terms of analyticity - strip method and bkm . in sec .[ sec : as_bkm ] , the analyticity - strip method and bkm theorem are bridged together .the section starts with heuristic arguments that are next formalized in a mathematical framework of definitions , hypotheses and theorems . 
in sec .[ sec : newanal ] , our new theoretical results are used to analyze the behavior of the decrement .section [ sec : conclusion ] is our conclusion .the generalization to non tg - symmetric periodic flows of the results presented in sec .[ sec : as_bkm ] are described in an appendix .let us consider the 3d incompressible euler equations for the velocity field defined for and in a time interval : the taylor - green ( tg ) flow is defined by the -periodic initial data , where the periodicity of allows us to define the ( standard ) fourier representation the kinetic energy spectrum is defined as the sum over spherical shells and the total energy is independent of time because satisfies the 3d euler equations ( [ eq : euler ] ) .a number of the symmetries of are compatible with the equation of motions .they are , first , rotational symmetries of angle around the axis and ; and of angle around the axis .a second set of symmetries corresponds to planes of mirror symmetry : , and . on the symmetry planes , the velocity andthe vorticity are ( respectively ) parallel and perpendicular to these planes that form the sides of the so - called impermeable box which confines the flow .it is demonstrated in reference that these symmetries imply that the fourier expansions coefficients of the velocity field in eq . unless are either all even or all odd integers .this fact can be used in a standard way to reduce memory storage and speed up computations .the euler equations are solved numerically using standard pseudo - spectral methods with resolution .time marching is done with a second - order runge - kutta finite - difference scheme .the solutions are dealiased by suppressing , at each time step , the modes for which at least one wave - vector component exceeds two - thirds of the maximum wave - number ( thus a run is truncated at ) .the simulations reported in this paper were performed using a special purpose symmetric parallel code developed from that described in .the workload for a timestep is ( roughly ) twice that of a general periodic code running at a quarter of the resolution .specifically , at a given computational cost , the ratio of the largest to the smallest scale available to a computation with enforced taylor - green symmetries is enhanced by a factor of in linear resolution .this leads to a factor of savings in total computational time and memory usage .the code is based on fftw and a hybrid mpi - openmp scheme derived from that described in .the runs were performed on the idris bluegene / p machine . at resolution we used mpi processes , each process spawning openmp threads .( see eq . ) at and b ) maximum of vorticity .results from runs performed at different resolutions are displayed together : ( brown triangles ) , ( blue squares ) , ( green diamonds ) and ( red circles).,height=377 ] runs were performed at resolutions , , and .the behavior of the energy spectra and the spatial maximum of the norm of the vorticity are presented in fig .[ fig : energy_spectra_maxvort ] . 
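Both diagnostics reported in that figure, the shell-summed spectrum E(K) and the vorticity supremum, can be extracted from a periodic velocity field in a few lines of NumPy. The sketch below uses the standard Taylor-Green initial datum on a small demonstration grid; the spectrum normalisation and the 2*pi box are conventions chosen here and make no claim about the production code described above.

```python
import numpy as np

def spectral_diagnostics(u, v, w):
    """Shell-summed energy spectrum E(K) and sup-norm of the vorticity for a
    velocity field on an N^3 periodic grid of side 2*pi (integer wavenumbers).
    The normalisation of E(K) is a convention chosen here."""
    n = u.shape[0]
    uh, vh, wh = np.fft.fftn(u), np.fft.fftn(v), np.fft.fftn(w)

    k1d = np.fft.fftfreq(n, d=1.0 / n)                  # 0, 1, ..., -1
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    shell = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)

    # Parseval-normalised energy per Fourier mode, summed over unit-width shells
    e_mode = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2 + np.abs(wh)**2) / n**6
    spectrum = np.bincount(shell.ravel(), weights=e_mode.ravel())

    # vorticity computed spectrally: omega_hat = i k x u_hat
    wx = np.fft.ifftn(1j * (ky * wh - kz * vh)).real
    wy = np.fft.ifftn(1j * (kz * uh - kx * wh)).real
    wz = np.fft.ifftn(1j * (kx * vh - ky * uh)).real
    omega_max = np.max(np.sqrt(wx**2 + wy**2 + wz**2))
    return spectrum, omega_max

# standard Taylor-Green initial datum on a small demonstration grid
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u = np.sin(X) * np.cos(Y) * np.cos(Z)
v = -np.cos(X) * np.sin(Y) * np.cos(Z)
w = np.zeros_like(u)
E, om = spectral_diagnostics(u, v, w)
print("total energy =", E.sum(), "  max |omega| =", om)
```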
visualization of tg vorticity at resolution : a ) full impermeable box , and at .zooms over the subbox marked near , are displayed in b ) at , in c ) at and in d ) at .,height=359 ] it is apparent in fig .[ fig : energy_spectra_maxvort](a ) that resolution - dependent even - odd oscillations are present , at certain times , on the tg energy spectrum .note that this behavior is produced when the tail of the spectrum rises above the round - off error .this phenomenon can be explained in terms of a _ resonance _ , along the lines developed in reference . in practicewe will deal with this problem by averaging the spectrum over shells of width .apart from this it can be seen that spectra computed using different resolutions are in good agreement for all times .in contrast , it is visible in fig .[ fig : energy_spectra_maxvort](b ) that the maximum of vorticity computed at different resolutions are in agreement only up to some resolution - dependent time ( see the inset ) . the fact that at a given time decreases if one truncates the higher wavenumbers of the velocity field ( see fig .[ fig : energy_spectra_maxvort](b ) ) strongly suggests that has significant contributions coming from high - wavenumbers modes .this forms the basis of the heuristic argument presented below in sec .[ subsec : heur ] .figure [ fig : vort_3d_viz ] shows visualizations ( using the vapor software ) of the high vorticity regions in the impermeable box , corresponding to the run at late times .a thin vortex sheet is apparent in fig.[fig : vort_3d_viz](a ) on the vertical faces , and , of the impermeable box . the emergence of this thin vortex sheet is well understood by simple dynamical arguments about the flow on the faces of the impermeable box that were first given in reference .we now briefly review these arguments .the initial vortex on the bottom face is first forced by centrifugal action to spiral outwards toward the edges and then up the side faces .a corresponding outflow on the top face and downflow from the top edges onto the side faces leads to a convergence of fluid near the horizontal centreline of each side face , from where it is forced back into the centre of the box and subsequently back to the top and bottom faces .the vorticity on the side faces is efficiently produced in the zone of convergence , and builds up rapidly into a vortex sheet ( see figs . 1 and 2 of reference and fig. 
8 of reference ) .while these considerations explain the presence of the thin vortex sheet in fig.[fig : vort_3d_viz](a ) , the dynamics presented in fig.[fig : vort_3d_viz](b - d ) also involves the collision of vortex sheets happening near the edge , close to .note that , as stated above in sec .[ subsec : symm ] , the vortex lines are perpendicular to the faces of the impermeable box .thus , because the collision takes place near an edge , the corresponding vortex lines must be highly curved , with strong variations of the direction of vorticity .the geometric constraints on potential singularities posed by the evolution of the direction of vorticity developed in references could be applied to the situation described in fig .[ fig : vort_3d_viz ] .however , such an analysis goes beyond the bkm theorem and involves extensive post - processing of very large datasets .this task is thus left for further work and we concentrate here on simple bkm diagnostics for the vorticity supremum and analyticity strip analysis of energy spectra .the analyticity - strip method is based on the fact that when the velocity field is analytic in space the energy spectrum satisfies in the asymptotic ` ultraviolet region ' with a proportionality factor that may contain an algebraic decay in a multiplicative function of time and , depending on the complexity of the physical flow , even an oscillatory ( in ) modulation .( red markers ) ; times and fit intervals are indicated in the legend . ]the basic idea is thus to assume that can be well approximated by a function of the form in some wave numbers interval between and ( the maximum wavenumber permitted by the numerical resolution ) . the common procedure to determine is to perform a least - square fit at each time on the logarithm of the energy spectrum , using the functional form the error on the fit interval , is minimized by solving the equations , and .note that these equations are linear in the parameters , and the transient oscillations of the energy spectrum observed at the highest wavenumbers ( see above fig .[ fig : energy_spectra_maxvort](a ) are eliminated by averaging the tg spectrum on shells of width before performing the fit .we present in fig .[ fig : fit_comp ] , examples of tg energy spectra fitted in such a way on the intervals , where denotes the beginning of round off noise .it is apparent that the fits are globally of a good quality .the time evolution of the fit parameters , and computed at different resolutions are displayed in fig .[ fig : fit_evolution ] .the measure of the fit parameters is reliable as long as remains larger than a few mesh sizes , a condition required for the smallest scales to be accurately resolved and spectral convergence ensured .thus the dimensionless quantity is a measure of spectral convergence .it is conventional to define a ` reliability time ' by the condition and to say that the numerical simulation is reliable for times .this reliability time can be extended only by increasing the spatial resolution available for the simulation , so the more computer power is available the larger is the reliability time .the resolution - dependent reliability condition is marked by the horizontal lines in fig .[ fig : fit_evolution](c ) . 
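Because the model is linear in (ln C, n, delta) once written for ln E(k), the fit reduces to an ordinary least-squares problem. A sketch follows; the synthetic spectrum, the fit interval and the reliability threshold (taken here, purely for illustration, as delta exceeding two mesh sizes) stand in for the paper's exact choices.

```python
import numpy as np

def analyticity_strip_fit(k, E):
    """Fit E(k) ~ C * k**(-n) * exp(-2*delta*k) over the supplied interval.
    The problem is linear in (ln C, n, delta) since
    ln E = ln C - n*ln k - 2*delta*k."""
    k = np.asarray(k, dtype=float)
    y = np.log(np.asarray(E, dtype=float))
    A = np.column_stack([np.ones_like(k), -np.log(k), -2.0 * k])
    (lnC, n_exp, delta), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.exp(lnC), n_exp, delta

# illustration on a synthetic spectrum with known parameters
k = np.arange(5, 300, dtype=float)
C, n_exp, delta = analyticity_strip_fit(k, 3.0 * k**-4.0 * np.exp(-2.0 * 0.02 * k))
print(f"C = {C:.2f}   n = {n_exp:.2f}   delta = {delta:.4f}")

# reliability: delta must stay above a few mesh sizes dx = 2*pi/N
N = 4096                                     # grid size, used here only as an example
reliable = delta > 2.0 * (2.0 * np.pi / N)   # threshold chosen for illustration
print("fit reliable:", reliable)
```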
the exponential law that was previously reported at resolution in reference is also indicated in fig .[ fig : fit_evolution](c ) by a dashed black line .it is thus apparent that our lower - resolution results well reproduce the previous computations that were discussed above in sec .[ sec : intro ] ( see text preceding references ) . in table [ tab : table_rel ] , the reliability time obtained from the fit parameter of fig .[ fig : fit_evolution ] is compared with the reliability time stemming from the exponential behavior ..reliability time deduced from the exponential behavior compared with the reliability time obtained from the fit parameter of fig .[ fig : fit_evolution ] . [ cols="^,^,^",options="header " , ] the results for exponent and predicted singular time of table [ tab : table_dels ] have to be read carefully . because of the local -point method used to derive them from the data in table [ tab : table_int ] , they use the values of at , the last one being marginally reliable ( see sec.[sec : numefits ] ) . in fact , they amount to linear -point extrapolation of the data in fig .[ fig : delta1 ] ( see the inset ) : is the intersection of the straight line extrapolation with the time axis and is the inverse of the slope .one can guess that there is room for a power - law type of behavior , with exponent if we consider the data at and if we include the data at .we now use corollary 11 ( see sec . [sec : as_bkm ] ) to test if these estimates of power - law are consistent with the hypothesis of finite - time singularity .there , the product must be greater than or equal to one if finite - time singularity is to be expected . with the conservative estimate obtained by inspection of fig .[ fig : fit_evolution](b ) ( or equivalently using the values of in table [ tab : table_int ] ) , we obtain that for the data at and , but for the data at .these results are insensitive to the fit interval , see table [ tab : table_dels ] .therefore , if the latest data is considered , corollary 11 can not be used to negate the validity of the hypothesis of finite - time singularity .however , there is no sign that the data values of and in table [ tab : table_dels ] are settling down into constants , corresponding to a simple power - law behavior .another piece of analysis consists of comparing the singular time predicted from the data for the decrement with the singular time predicted from the direct data for the vorticity supremum norm .they seem both to be close to ( compare table [ tab : table_dels ] to table [ tab : table_omegasup ] ) . 
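Whatever the precise recipe behind the tables, local estimates of the exponent and of the singular time follow from assuming delta(t) ~ A*(t_star - t)**gamma, for which d(ln delta)/dt = -gamma/(t_star - t). One simple three-point estimator, not necessarily the one used above, eliminates gamma between two finite-difference slopes:

```python
import numpy as np

def local_power_law(t, delta):
    """Local (t_star, gamma) from three consecutive samples of delta(t),
    assuming delta = A*(t_star - t)**gamma, i.e.
    d(ln delta)/dt = -gamma/(t_star - t) at the two midpoints."""
    t = np.asarray(t, dtype=float)
    s = np.diff(np.log(np.asarray(delta, dtype=float))) / np.diff(t)
    tm = 0.5 * (t[:-1] + t[1:])
    t_star = (s[0] * tm[0] - s[1] * tm[1]) / (s[0] - s[1])
    gamma = -s[0] * (t_star - tm[0])
    return t_star, gamma

# synthetic check: delta = 0.3*(1.4 - t)**1.2 sampled at three times
t = np.array([0.9, 1.0, 1.1])
print(local_power_law(t, 0.3 * (1.4 - t)**1.2))   # close to (1.4, 1.2)
```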
in this context, we should perhaps mention feynman s rule : `` never trust the data point furthest to the right '' , a comment attributed to richard feynman , saying basically that he would never trust the last points on an experimental graph , because if the people taking data could have gone beyond that , they would have .higher - resolution simulations are clearly needed to investigate whether the new regime is genuinely a power law and not simply a crossover to a faster exponential decay .our conclusion for this section is thus similar to that of sec .[ subsec : fit_methods_omegas ] : although our late - time reliable data for shows and is therefore not inconsistent with our corollary 11 , clear power - law behavior of is not achieved .in summary , we presented simulations of the taylor - green vortex with resolutions up to .we used the analyticity strip method to analyze the energy spectrum .we found that , around , a ( well - resolved up to ) change of regime is taking place , leading to a faster decay of the width of the analyticity strip . in the same time - interval, preliminary visualizations displayed a collision of vortex sheets . applying the bkm criterium to the growth of the maximum of the vorticity on the time - interval we found that the occurrence of a singularity around was not ruled out but that higher - resolution simulations were needed to confirm a clear power - law behavior for .we introduced a new sharp bound for the supremum norm of the vorticity in terms of the energy spectrum .this bound allowed us to combine the bkm theorem with the analyticity - strip method and to show that a finite - time blowup can exist only if vanishes sufficiently fast .applying this new test to our highest - resolution numerical simulation we found that the behavior of is not inconsistent with a singularity .however , due to the rather short time interval on which is both well - resolved and behaving as a power - law , higher - resolution studies are needed to investigate whether the new regime is genuinely a power law and not simply a crossover to a faster exponential decay .let us finally remark that our formal assumptions of section [ subsec : main_results ] are motivated and to some extent justified by the fact that , in systems that are known to lead to finite - time singularity , the analogous of the working hypothesis ( [ eq : fit_bound ] ) is verified .for the analogy to apply , a version of the bkm theorem must be available .this is the case of the -d inviscid burgers equation for a real scalar field defined on the torus : , \,\forall \ , t \in [ 0,t_*),\ ] ] which admits a bkm - type of theorem , with singularity time defined by . 
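The Burgers analogy invoked here also provides a convenient numerical sanity check of the whole fitting machinery, since the solution is available by characteristics up to the pre-shock time t_star = 1 for the trigonometric initial datum. The sketch below extracts delta(t) from the spectrum with the same three-parameter fit; the grid size, output times and fit interval are choices made for illustration, and the (1 - t)**1.5 decay quoted in the final comment is the classical expectation for this pre-shock rather than a number taken from the tables above.

```python
import numpy as np

def strip_width(u, floor=1e-25):
    """delta from the exponential tail of the 1-D spectrum, via the linear
    fit ln E = ln C - n*ln k - 2*delta*k restricted to modes above round-off."""
    n = u.size
    Ek = np.abs(np.fft.rfft(u) / n)**2
    k = np.arange(2, n // 3)
    k = k[Ek[k] > floor]
    A = np.column_stack([np.ones(k.size), -np.log(k), -2.0 * k])
    coef, *_ = np.linalg.lstsq(A, np.log(Ek[k]), rcond=None)
    return coef[2]

n = 4096
a = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)   # Lagrangian labels
x = a.copy()                                            # uniform Eulerian grid
for t in (0.5, 0.7, 0.8, 0.9, 0.95):
    xa = a + t * np.sin(a)              # characteristics: x = a + t*u0(a)
    u = np.sin(np.interp(x, xa, a))     # u(x,t) = u0(a(x)); single-valued for t < 1
    print(f"t = {t:4.2f}   delta = {strip_width(u):.5f}")
# delta(t) shrinks towards zero as t -> t_star = 1 (the pre-shock),
# approaching the classical (1 - t)**1.5 scaling close to t_star.
```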
in the 1-d case ,the analogous of our bound is using the simple trigonometric initial data , the energy spectrum can be expressed in terms of bessel functions that admit simple asymptotic expansions .it is straightforward to show ( see for details ) that , for , one has the large- asymptotic expansion with while , at , in fact , the power law appears already before ( see the remark following eq .( 3 - 10 ) of reference ) .it is easy to check that the analytical solution admits , for all and for all sufficiently close to , a working hypothesis ( [ eq : fit_bound ] ) of the form with analytically - obtainable functions and with .the analogous of corollary [ cor : beta ] gives the inequality which is saturated by the analytically - obtained exponents , .we acknowledge useful scientific discussions with annick pouquet , uriel frisch and giorgio krstulovic who also helped us with the visualizations of fig . [fig : vort_3d_viz ] .the computations were carried out at idris ( cnrs ) .support for this work was provided by ucd seed funding projects sf304 and sf564 , and ircset ulysses project `` singularities in three - dimensional euler equations : simulations and geometry '' .here we provide the generalization to non tg - symmetric periodic flows of the results presented in section [ subsec : main_results ] . definition [ defn : spectrum ] and the working hypothesis ( hypothesis [ hypo : working ] ) are modified slightly in the general case .accordingly , the new bounds leading to lemma [ lem : main ] and theorem [ thm : main ] need to be modified slightly to accommodate the general case .the crucial derived relations between and in lemma [ lem : strong ] and corollaries [ cor : finite - time - sing ] and [ cor : beta ] will apply directly to the general periodic case and will not be discussed .the main technical difference is that the new bounds presented in section [ subsec : main_results ] apply for a flow with tg symmetries ( see section [ subsec : symm ] ) which imply that only modes with even - even - even and odd - odd - odd wavenumber components are populated .the general periodic case does not follow this restriction , which slightly modifies the bounds .we will assume , to simplify matters , that the so - called zero - mode of the velocity field is identically zero : + notice that all remaining wave numbers are populated .this means that all sums involving the scalar in equations ( [ eq : ineq_1 ] ) and ( [ eq : ineq_2_tyg ] ) will start effectively from also , because modes with mixed even - odd wavenumber components are allowed , the definitions of in lemma 2 and constant in equation ( [ eq : ineq_2_tyg ] ) must be replaced by more appropriate quantities .therefore , the corresponding general periodic versions of lemma [ lem : main ] ( equation ( [ eq : ineq_1 ] ) ) and practical bound ( equation ( [ eq : ineq_2_tyg ] ) ) are : + * lemma [ lem : main] ( general periodic version of lemma [ lem : main ] ) . *_ let be a velocity field with energy spectrum defined by equation ( [ eq : spectrum ] ) and let be its vorticity , defined on the periodicity domain ^ 3.$ ] then the following inequality is verified for all times when the sum in the rhs is defined , and independently of any evolution equation that might satisfy : _ _ where is the number of lattice points in a spherical shell of width 1 and radius . _ + * practical bound , general case . 
* where .we can easily check that the bounds for taylor - green , equations ( [ eq : ineq_1 ] ) and ( [ eq : ineq_2_tyg ] ) , are sharper ( by a factor close to 2 ) to their respective general bounds , equations ( [ eq : ineq_1_gp ] ) and ( [ eq : ineq_2_gp ] ) . finally , theorem [ thm : main ] is replaced by + * theorem [ thm : main]. * _ let a solution of the 3d euler equations satisfy the working hypothesis ( [ eq : fit_bound ] ) with included .then the maximal regularity time of the solution must satisfy _
numerical simulations of the incompressible euler equations are performed using the taylor - green vortex initial conditions and resolutions up to . the results are analyzed in terms of the classical analyticity strip method and beale , kato and majda ( bkm ) theorem . a well - resolved acceleration of the time - decay of the width of the analyticity strip is observed at the highest resolution for while preliminary visualizations show the collision of vortex sheets . the bkm criterium on the power - law growth of supremum of the vorticity , applied on the same time - interval , is not inconsistent with the occurrence of a singularity around . these new findings lead us to investigate how fast the analyticity strip width needs to decrease to zero in order to sustain a finite - time singularity consistent with the bkm theorem . a new simple bound of the supremum norm of vorticity in terms of the energy spectrum is introduced and used to combine the bkm theorem with the analyticity - strip method . it is shown that a finite - time blowup can exist only if vanishes sufficiently fast at the singularity time . in particular , if a power law is assumed for then its exponent must be greater than some critical value , thus providing a new test that is applied to our taylor - green numerical simulation . our main conclusion is that the numerical results are not inconsistent with a singularity but that higher - resolution studies are needed to extend the time - interval on which a well - resolved power - law behavior of takes place , and check whether the new regime is genuine and not simply a crossover to a faster exponential decay .
our title clearly alludes to the story of columbus landing in what he called the west indies " , which later on turned out to be part of the new world " .i have substituted antarctica in place of the new world " , following a quip from frank paige after he realized that i was talking all the time about _ penguins_. at the end of the millennium , we are indeed on another discovery voyage .we are at the dawn of observing cp violation in the b system .the stage is the emerging penguins .well , had columbus seen penguins in _ his _ west indies " , he probably would have known he was onto something really new .the em penguin ( emp ) ( and later , ) was first observed by cleo in 1993 .alas , it looked and walked pretty much according to the standard model ( sm ) , and the agreement between theory and experiment on rates are quite good .perhaps the study of cp asymmetries ( ) could reveal whether sm holds fully .the strong penguins ( p ) burst on the scene in 1997 , and by now the cleo collaboration has observed of order 10 exclusive modes , as well as the surprisingly large inclusive mode .the , and modes are rather robust , but the and rates shifted when cleo ii data were recalibrated in 1998 and part of cleo ii.v data were included .the and modes are still being reanalyzed .the nonobservation , so far , of the , and modes are also rather stringent .the observation of the mode was announced in january this year , while the observation of the and modes were announced in march .cleo ii.v data taking ended in february . with 10 million or so each of charged and neutral b s , new results are expected by summer and certainly by winter .perhaps the first observation of direct cp violation could be reported soon .with belle and babar turning on in may , together with the cleo iii detector upgrade all with separation ( pid ) capability ! we have a three way race for detecting and eventually disentangling _ direct _ cp violation in charmless b decays .we expect that , during 19992002 , the number of observed modes may increase to a few dozen , while the events per mode may increase from 1070 to events for some modes , and sensitivity for direct cp asymmetries would go from the present level of order 30% down to 10% or so .it should be realized that _the modes that are already observed _ ( ) _ should be the most sensitive probes . _ our first theme is therefore : _ is large possible in processes ? _ and , _if so , whither new physics ?_ however , as an antidote against the rush into the brave new world , we point out that the three observed modes may indicate that the west indies " interpretation is still correct so far .our second subject would hence be _ whither ewp ? now ! ?_ that is , we will argue for the intriguing possibility that perhaps we already have some indication for the electroweak penguin ( ewp ) .it is clear that 1999 would be an exciting landmark year in b physics .so , work hard and come party at the end of the year / century / millennium celebration called third international conference on b physics and cp violation " , held december 3 - 7 in taipei .we shall motivate the physics and give some results that have not been presented before , but refer to more detailed discussions that can be found elsewhere .our interests were stirred by a _ rumor _ in 1997 that cleo had a very large in the mode .the question was : _ how to get large ? 
_ with short distance ( bander - silverman - soni ) rescattering phase from penguin , the cp asymmetry could reach its maximum of order 10% around the presently preferred .final state rescattering phases could bring this up to 30% or so , and would hence mask new physics . buta 50% asymmetry seems difficult .new physics asymmetries in the process and process are typically of order 10% , whereas asymmetries for penguin dominant transitions are expected to be no more than 1% .the answer to the above challenge is to _ hit sm at its weakest ! _* _ weak spot of penguin _ : dipole transition + -0.3 cm 0.8 cm 1.3 cm + note that these two terms are at same order in and expansion .the effective charge " is which vanishes when the or goes on - shell , hence , only the dipole enters and transitions .it is an sm quirk due to the gim mechanism that ( the former becoming coefficients in usual operator formalism for gluonic penguin ) .hence one usually does not pay attention to the subdominant which goes into the variously called , , or coefficients . in particular , rate in sm is only of order 0.2% .but if new physics is present , having is natural , hence the gluonic dipole could get greatly enhanced . while subject to constraint, this could have great impact on process . *_ blind spot of detector ! _ + because leads to _ jetty , high multiplicity _ transitions + -0.3 cm 0.8 cm 0.9 cm + at present , 510% could still easily be allowed .the semileptonic branching ratio and charm counting deficits , and the strength of rate provide circumstantial _ hints _ that could be more than a few percent . * _ unconstrained new cp phase _ via +if enhanced by new physics , is likely to carry a new phase + -0.27 cm 0.8cm 0.9 cm + however , one faces a severe constraint from .for example it rules out the possibility of as source of enhancement . butas alex kagan taught me at last dpf meeting in minnesota , the constraint can be evaded if one has sources for radiating but not . *uncharted territory of nonuniversal squark masses + susy provides a natural possibility via gluino loops : + -0.35 cm 0.9 cm 1.3 cm + the simplest being a mixing model .since the first generation down squark is not involved , one evades all low energy constraints .this is a new physics cp model tailor - made for transitions . with the aim of generating huge cp asymmetries, we can now take and study transitions at both inclusive and exclusive level . in bothwe have used operator language .one needs to consider the tree diagram , which carries the cp phase ; the standard penguin diagrams , which contain short distance rescattering phases ; the enhanced dipole ( susy loop induced ) diagram ; finally , diagrams containing loop insertions to the gluon self - energy which are needed to maintain unitarity and consistency to order in rate differences . at the inclusive level ,one finds a pole " at low which reflects the jetty process that is experimentally hard to identify .destructive interference is in general needed to allow the rate to be comparable to sm . but this precisely facilitates the generation of large !more details such as figures can be found in .dominant rate asymmetry comes from large of the virtual gluon . to illustrate this, table i gives inclusive br ( arbitrarily cutoff at gev ) and for sm and for various new cp phase valus , assuming rate of order 10% .one obtains sm - like branching ratios for , and also seem to peak . 
this becomes clearer in table ii where we give the results for , where ( perturbative ) rescattering is fully open .we see that 2030% asymmetries are achieveable .this provides support for findings in exclusive processes .exclusive two body modes are much more problematic .starting from the operator formalism as in inclusive , we set , take and try to _ fit observed brs _ with .we then find the preferred by present rate data .one finds that , analogous to the inclusive case , destructive interference is needed and in fact provides a mechanism to suppress the pure penguin mode to satisfy cleo bound . for the and modes which are p - dominated, one utilizes the fact that the matrix element could be enhanced by low values ( of order 100120 mev ) to raise , which at same time leads to near degeneracy of and rates .the upshot is that one finds rather large cp asymmetries , i.e. 35% , 45% and 55% for , and modes , respectively , and all of the same sign .such pattern can not be generated by sm , with or without rescattering .we expect such pattern to hold true for many modes ..inclusive br ( in )/ ( in % ) for sm and for . [ cols="^,^,^,^,^,^,^,^",options="header " , ] we have left out the prominent modes from our discussion largely because the anomaly contribution -0.35 cm 0.9 cm 1.3 cm to compute such diagrams , one needs to know the fock component of the meson !this may be at the root of the rather large size of mode .before we get carried away by the possibility of large cp asymmetries from new physics , there is one flaw ( or two ? ) that emerged after summer 1998 .because of p - dominance which is certainly true in case of enhanced , is only half of .the factor of 1/2 comes from , which is just an isospin clebsch factor that originates from the wave function .although this seemed quite reasonable from 1997 data where mode was not reported , a crisis emerged in summer 1998 when cleo updated their results for the three modes .they found instead !curiously , also , which can not change the situation . in any case the expectation that can not make a factor of 2 change by interference . miraculously , however , this could be the first indication of the last type of penguin , the ewp .the yet to be observed ewp ( electroweak penguin ) , namely , occurs by followed by .the strong penguin oftentimes obscure the case ( or so it is thought ) , and to cleanly identify the ewp one has to search for pure " ewp modes such as , which are clearly rather far away .one usually expects the mode to be the first ewp to be observed , which is still a year or two away , while clean and purely weak penguin is rather far away . with the hint from , however , andputting back on our sm hat , we wish to establish the possibility that ewp may be operating behind the scene already .it should be emphasized that , unlike the gluon , the coupling depends on isospin , and can in principle break the isospin factor of 1/2 mentioned earlier .3.2 truein 3.2 truein -.2cm we first show that simple rescattering can not change drastically the factor of two . from fig .1(a ) , where we have adopted from current best fit " to ckm matrix , one clearly sees the factor of 2 between and .we also not that rescattering , as parametrized by the phase difference between i = 1/2 and 3/2 amplitudes , is only between and .when we put in the ewp contribution , at first sight it seems that the effect is drastic . 
on closer inspection at ,it is clear that the ewp contribution to and modes are small , but is quite visible for and modes .this is because the and modes suffer from suppression in amplitude because of wave function .however , it is precisely these modes which pick up a sizable penguin contribution via the ( the strength of is roughly a quarter of and ) .as one dials , and rescattering redistributes this ewp impact and leads to the rather visible change in fig . 1(b ) .we notice the remarkable result that the ewp reduces rate slightly but raises the rate considerably , such that the two modes become rather close .we have to admit , however , to something that we have sneaked in .to enhance the relative importance of ewp , we had to suppress the strong penguin effect .we have therefore employed a much heavier mev as compared to 100120 mev employed previously in new physics case .otherwise we can not bring and rates close to each other .3.2 truein 3.2 truein-.2 cm having brought and modes closer , the problem now is that lies above them , and the situation becomes worse for large rescattering . to remedy this , we play with the phase angle which tunes the weak phase of the tree contribution t. setting now , again we start without ewp in fig .the factor of two between and is again apparent .dialing clearly changes t - p interference . for in first quadrant one has destructive interference , which becomes constructive in second quadrant .this allows the mode to become larger than the pure penguin mode , which is insensitive to . however ,nowhere do we find a solution where is approximately true .there is always one mode that is split away from the other two .putting in ewp , as shown in fig .2(b ) , the impact is again quite visible .as anticipated , the and modes come close to each other .since their dependence is quite similar , one finds that for , the three observed modes come together as close as one can get , and are basically consistent with errors allowed by data .note that is never larger than .we emphasize that a large rescattering phase would destroy this achieved approximate equality , as can be seen from fig .3 , where we illustrate dependence for .it seems that can not be larger than or so .3.2 truein -.2 cm 3.2 truein 3.2 truein -.2 cm as a further check of effect of the ewp , we show the results for in fig .4 . in absence of rescattering ,the change in rate ( enhancement ) for mode from adding ewp is reflected in a dilution of the asymmetry , which could serve as a further test .this , however , depends rather crucially on absence of rescattering .once rescattering is included , it would be hard to distinguish the impact of ewp from cp asymmetries .however , even with rescattering phase , the dependence of cp asymmetries can easily distinguish between the two solutions of and , as illustrated in fig .5 , where ewp effect is included . 
from our observation that a large phase would destroy the near equality of the three observed modes that we had obtained , we find that even with presence of rescattering phase .3.2 truein 3.2 truein -.2cm it should be emphasized that the value we find necessary to have is in a different quadrant than the present best fit " result of .in particular , the sign of is preferred to be negative rather than positive .an extended analysis to , and modes confirm this assertion .intriguingly , the size of and was anticipated via this value .perhaps hadronic rare b decays can provide information on , and present results seem to be at odds with ckm fits to , , mixing , and in particular the mixing bound , which rules out .e prepared for violation ! !we first illustrated the possibility of having from new physics in _ already observed modes _, such as , , and mode when seen .our existence proof " was the possibility of enhanced dipole transition , which from susy model considerations one could have a new cp phase carried by .note that this is just an illustration .we are quite sure that nature is smatter .we then made an about - face and went back to sm , and pointed out that the ewp may have already shone through the special slit " of , where we inferred that is preferred , which implies that , contrary to current ckm fit " preference .see talks by f. w " urthwein and y. gao , these proceedings , hep - ex/9904008 .please see the web page http://www.phys.ntu.edu.tw / english / bcp3/. g.w.s .hou , hep - ph/9902382 , expanded version of proceedings for 4th international workshop on particle physics phenomenology , kaohsiung , taiwan , r.o.c ., june 1998 ; and workshop on cp violation , adelaide , australia , july 1998 .hou and k.c .yang , phys .lett . * 81 * , 5738 ( 1998 ) .m. bander , d. silverman and a. soni , phys .lett . * 43 * , 242 ( 1979 ) .l. wolfenstein and y.l .wu , phys .. lett . * 73 * , 2809 ( 1994 ) .w.s . hou and b. tseng , phys .* 80 * , 434 ( 1998 ) .kagan , _ phys ._ d*51 * , 6196 ( 1995 ) .x.g . he and w.s .hou , phys .* b 445 * , 344 ( 1999 ) .chua , x.g .he and w.s .hou , phys . rev . d*60 * , 014003 ( 1999 ) . j .-grard and w.s .hou , phys .lett . * 62 * , 855 ( 1989 ) ; phys .rev . d*43 * , 2909 ( 1991 ) .m. artuso et al .( cleo collaboration ) , cleo conf 98 - 20 .deshpande et al . , phys .. lett . * 82 * , 2240 ( 1999 ) .f. parodi , p. roudeau and a. stocchi , hep - ph/9802289 .hou and k.c .yang , hep - ph/9902256 .
discovery voyage into the age of :
the multiobjective optimization problem ( moop ) is to optimize two or more objective functions simultaneously , subject to given constraints .the multiobjective optimization can be applied to problems where the final decision should be made considering two or more conflicting objectives .moop occurs in various fields such as industrial design , finance , management and many engineering areas .practical goals in these fields can be generalized in such a way that the cost of a process is minimized while the quality of its product is maximized .the primary goal is to find a set of solutions that any individual objective function can not be improved without deteriorating the other objective functions , and such a set is called a pareto set . for efficient decision making ,a set of generated solutions ( ) should meet two conditions : it should be as close to the pareto front as possible and the solutions should be distributed as widely as possible. evolutionary algorithm ( ea ) is one of the most popular and successful approaches to solve moops .a number of ea - based algorithms have been suggested including the vector evaluated genetic algorithm ( vega ) , the niched pareto genetic algorithm ( npga ) , the nondominated sorting genetic algorithm ( nsga2 ) , the strength pareto evolutionary algorithm ( spea2 ) , the mimetic pareto archived evolution strategy ( m - paes ) and micro genetic algorithm ( micro - ga ) . among them ,nsga2 and spea2 are arguably the most widely used methods .other approaches include simulated annealing ( sa ) , tabu search , particle swarm optimization ( pso ) , immune algorithm ( ia ) , ant system and cultural algorithm .conformational space annealing ( csa ) is a highly efficient single - objective global optimization algorithm which incorporates advantages of genetic algorithm and sa .it has been successfully applied to diverse single - objective optimization problems in physics and biology , such as protein structure modeling , finding the minimum energy solution of a lenard - jones cluster , multiple sequence alignment and the community detection problem on networks . in these studies, csa is shown to perform more efficient sampling using less computational resources than the conventional monte - carlo ( mc ) and sa methods . here , we introduce a new multiobjective optimization algorithm by using csa , mocsa .compared to existing eas , mocsa has the following distinct features : ( a ) the ranking system considers the dominance relationship and the distance between solutions in the objective space , ( b ) solutions are updated by using a dynamically varying distance cutoff measure to control the diversity of the sampling in the decision space , and ( c ) a gradient - based constrained minimizer is utilized for local search . the remainder of this paper is organized as follows . in section 2 ,the definition of moop and related terms are described . in section 3 , details of mocsais presented .numerical results and the comparison between mocsa and nsga2 on various test problems are presented in section 4 .the final section contains the conclusion .the mathematical definition of a moop can be defined as follows , where is the decision vector , the decision space , the objective vector and the objective space .due to the presence of multiple objective functions , a final solution of moop consists of a set of non - dominated solutions instead of a single point .the notion of _ dominance _ and related terms are defined below . 
a decision vector is said to dominate another solution ( denoted by ) , if and only if [ paretodominance]definition a solution is said to be non - dominated by any other solutions ( a pareto optimal solution ) if and only if [ paretodominance]definition for a given moop , a pareto optimal set in the decision space , , is defined as [ paretodominance]definition for a given moop , a pareto optimal set in the objective space , , is defined as since the size of pareto optimal front , is infinite in general , which is impossible to obtain in practice , practical algorithms for moop yield a set of non - dominated solutions of a finite size .it should be noted that is always a non - dominated set by definition while a non - dominated set of solutions generated by an algorithm , which is denoted as a , may not be a subset of .here , a new multiobjective optimization algorithm based on csa is described .the csa was initially developed to obtain the protein structure with the minimum potential energy , _i.e. _ , to solve a single objective optimization problem .csa has been successfully applied to various kinds of optimization problems with modification .the general framework of csa is shown in figure [ csa_flow_chart ] , and the description of mocsa is given in algorithm [ csa ] .initialize the bank , , with random individuals minimize( ) using a constrained local minimizer initialize seed flags of all individuals to zeros : get average distance , , between all pairs of individuals and set as : initialize generation counter to zero : initialize the reserve bank , , to an empty set generate random individuals , minimize( ) search space evaluate fitness of select seeds among individuals with and set to 1 generate trial solutions by crossover generate trial solutions by mutation solutions minimize( ) update( ) reduce [ csaendwhile ] csa is a global optimization method which combines essential ingredients of three methods : monte carlo with minimization ( mcm ) , genetic algorithm ( ga ) , and sa . as in mcm , we consider only the solution / conformational space of local minima ; in general , all solutions are minimized by a local minimizer . as in ga, we use a set of solutions ( called _ bank _ in csa , denoted as ) collectively , and we generate offsprings from the bank solutions by cross - over and mutation . finally , as in sa, we introduce a distance parameter , which plays the role of the temperature in sa . in csa , each solution is assumed to represent a hyper - sphere of radius in the decision space .diversity of sampling is directly controlled by introducing a distance measure between two solutions and comparing it with , to prevent two solutions from approaching too close to each other in the decision space .similar to the manipulation of temperature in sa , the value of is initially set to a large value and is slowly reduced to a smaller value in csa ; hence the name conformational space annealing .compared to the conventional ea for multiobjective problems , mocsa has three distinct features ; ( a ) a ranking algorithm which considers the dominance relationship as well as the distance between solutions in the objective space , ( b ) an update rule with a dynamically varying distance cutoff measure to control the size of search space and to keep the diversity of sampling in the decision space and ( c ) the usage of a gradient - based constrained minimizer , feasible sequential quadratic programming ( fsqp ) , for local search . 
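The dominance relation defined above, together with the extraction of the non-dominated subset from a finite set of generated solutions, is the primitive on which every routine below builds. A direct O(n^2) sketch for minimisation problems is given here; the vectorised style is an implementation choice made for illustration.

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa dominates fb (minimisation): fa is no worse
    in every objective and strictly better in at least one."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def nondominated(F):
    """Indices of the non-dominated members of an (n_points, n_obj) array."""
    F = np.asarray(F, dtype=float)
    return np.array([i for i, fi in enumerate(F)
                     if not any(dominates(fj, fi)
                                for j, fj in enumerate(F) if j != i)])

F = np.array([[1.0, 5.0], [2.0, 2.0], [3.0, 4.0], [4.0, 1.0]])
print(nondominated(F))       # [0 1 3]; the point [3, 4] is dominated by [2, 2]
```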
in csa , we first initialize the _ bank _ , , with random solutions which are subsequently minimized by fsqp constrained minimizer .the solutions in the bank are updated using subsequent solutions found during the course of optimization .the initial value of is set as , where is the average distance in the _ decision _ space between two solutions at the initial stage .a number of solutions ( 20 in this study ) in the bank are selected as _seeds_. for each seed , 30 trial solutions are generated by cross - over between the seed and randomly chosen solutions from the bank .additional 5 are generated by mutation of the seed .it should be noted that if a solution is used as a seed and not replaced by a offspring , it is excluded from the subsequent seed selection .the generated offsprings are locally minimized by fsqp which guarantees to improve a subset of objective functions without deteriorating the others and without violating given constraints . to limit the computational usage, the minimization is performed only once per every five generation steps ( see algorithm [ csa ] ) .offsprings are used to update the bank , and detailed description on the updating rule is provided in section [ sec : update ] .once all solutions in the bank are used as seeds without generating better solutions , implying that the procedure might have reached a deadlock , we reset all bank solutions to be eligible for seeds again and repeat another round of the search procedure . after this additional search reaches a deadlock again ,we expand our search space by adding additional 50 randomly generated and minimized solutions to the bank ( ) , and repeat the whole procedure until a termination criterion is satisfied .the maximum number of generation is set to .typically , with in this study , mocsa is terminated before a deadlock occurs with the final bank size of . for a given set of generated solutions , , the fitness of solution evaluated in terms of , and . is the number of solutions in which dominate . is the number of solutions in dominated by . is the sum of distances from to its nearest and second nearest neighbors in in the objective space .the relative fitness between two solutions , and , is determined by the comparing function shown in algorithm [ fitness ] . with a set of non - dominated solutionsall values of and become zeros and the solution with the least value of is considered as the worst . the solutions generated by crossover and mutationare locally minimized by fsqp constrained minimizer and we call them trial solutions . each trial solution ,is compared with the bank for update procedure as shown in algorithm [ update ] .first , , the closest solution in from in the _ decision _ space is identified .if there exist dominated solutions in , the closest conformation search is performed only among them . otherwise , is a set of non - dominated solutions , and all in are considered . once is found , the distance in the decision space between and is calculated . if , the current cutoff distance , which indicates that lies in a newly sampled region in the decision space , remote from the existing solutions in , the dominance relationship between and the worst solution in , , is compared . if , is compared with .the selection procedure , described in section [ sec : select ] , is performed to determine which solution should be kept in . at each iteration step , is reduced with a pre - determined ratio , .after reaches to its final value , , it is kept constant . 
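The bookkeeping described above amounts to three numbers per bank member plus a pairwise comparison, together with a geometric annealing of the distance cutoff. Because the symbols naming the counters did not survive extraction, the precedence used in the comparator below (more dominators is worse, then fewer dominated solutions, then a smaller neighbour distance) is an assumption; it does reproduce the stated property that, for a fully non-dominated bank, the member with the smallest neighbour distance is the worst. The annealing ratio and floor are likewise illustrative numbers.

```python
import numpy as np

def fitness(F):
    """Fitness triple for every member of an (n, n_obj) objective array:
    (number of solutions dominating it, number of solutions it dominates,
     summed distance to its two nearest neighbours in objective space)."""
    F = np.asarray(F, dtype=float)
    n = len(F)
    n_dominators = np.zeros(n, dtype=int)
    n_dominates = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                n_dominators[i] += 1     # F[j] dominates F[i]
                n_dominates[j] += 1      # F[j] dominates one more solution
    dist = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    d_near = np.sort(dist, axis=1)[:, :2].sum(axis=1)
    return n_dominators, n_dominates, d_near

def worse_of(i, j, n_dominators, n_dominates, d_near):
    """Index of the worse of two bank members.  The precedence assumed here is
    an illustrative choice; for a fully non-dominated bank it reduces to
    'smallest neighbour distance is worst'."""
    key_i = (n_dominators[i], -n_dominates[i], -d_near[i])
    key_j = (n_dominators[j], -n_dominates[j], -d_near[j])
    return i if key_i > key_j else j

def anneal_dcut(d_cut, ratio=0.98, d_final=0.1):
    """Geometric reduction of the distance cutoff, held fixed once the final
    value is reached (ratio and floor are illustrative numbers)."""
    return max(d_cut * ratio, d_final)

F = np.array([[1.0, 4.0], [2.0, 2.5], [2.1, 2.6], [4.0, 1.0]])
dom, doms, dnear = fitness(F)
print(worse_of(1, 2, dom, doms, dnear))   # 2: it is dominated by solution 1
```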
in algorithm[ update ] , for a given trial solution and a solution in , is updated as follows .if dominates , replaces . if is dominated by , is discarded and stays in . if there is no dominance relationship between and and if is better than by algorithm [ fitness ] , is discarded and stays in . finally , when is better than without dominance relationship between them , algorithm [ select ] is used . for the selection procedure ,we introduce an additional set of non - dominated solutions , the reserve bank , . due to the limited size of the bank, we may encounter a situation where a solution exists in which is not dominated by the current bank , but dominated by a solution eliminated from the bank in an earlier generation . to solve this problem ,non - dominated solutions eliminated from are stored up to 500 in , which is conceptually similar to an _ archive _ in other eas .the difference is that is used only when more than half of the solutions in are non - dominated solutions because csa focuses more on diverse sampling rather than optimization at the early stage of the optimization .note that keeps only non - dominated solutions . number of solutions in which dominate ( ) number of solutions in which are dominated by ( ) the sum of distances from to the nearest and second nearest neighbors in in the _ objective _ space is better is better is better is better is better is better nearest solution to in in the _ decision _ space nearest solution to among _ dominated _ solutions in in the _ decision _ space distance in the decision space worst solution in dominates replaces dominates stays in and is discarded stays in and is discarded select( ) , , is better than replaces in is non - dominated in and is used replaces in is dominated by nearest dominating solution to in in the _ objective _ space move from to move from to \\ & z_1 , \dotsc , z_k \in [ -5,5 ] \end{aligned}\ ] ] \\\end{aligned}\ ] ] \right ) \\g & = 1 + 9\sum_{i=1}^{k}z_i / k \\ \end{aligned}\ ] ] the benchmark test of mocsa , we have selected 12 widely used test problems in the field .they consist of zdt and dtlz .each test suite contains several functional forms and can feature various aspects of optimization algorithms .comprehensive analysis on the characteristics of the two test suites are well documented by huband _et al . _ . in both suites , the input vector , ,is divided into two sets and to construct test problems as follows , , where and are the dimensions of decision and objective spaces respectively and .the zdt problem suite consists of six test problems and is probably the most popular test suite to access multiobjective optimization algorithms .the explicit functional forms of five zdt problems are presented in table [ tab : zdt ] .the zdt test suite has two main advantages : ( a ) the pareto fronts of the problems are known in exact forms and ( b ) benchmark results of many existing studies are available . however , there are shortcomings : ( a ) the problems have only two objectives , ( b ) none of the problems contain flat regions and ( c ) none of the problems have degenerate pareto optimal front ..five real - valued zdt problems are described .the first objective depends only on the first decision variable as and the second objective is given as , where and , where and are the dimensions of the decision space and the objective space . unless the functional forms of and are separately given , they are identical to those of zdt1 . 
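Of the functions just listed, ZDT1 illustrates the common construction: the first objective is simply the first variable, the remaining variables enter only through g, and the Pareto-optimal front is f2 = 1 - sqrt(f1) at g = 1. The sketch below follows the usual ZDT1 conventions (thirty variables in [0, 1]); those bounds are assumed here because the table itself did not survive extraction.

```python
import numpy as np

def zdt1(x):
    """ZDT1 test problem (two objectives, minimisation); x lies in [0, 1]^m,
    with m = 30 in the usual setting."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * x[1:].mean()          # g = 1 + 9 * sum(z_i) / k
    return np.array([f1, g * (1.0 - np.sqrt(f1 / g))])

# on the Pareto-optimal front all tail variables vanish, so g = 1 and
# f2 = 1 - sqrt(f1)
x_opt = np.zeros(30)
x_opt[0] = 0.25
print(zdt1(x_opt))                        # [0.25, 0.5]
```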
[ cols="^,<,^",options="header " , ]in this paper , we have introduced a novel multiobjective optimization algorithm by using the conformational space annealing ( csa ) algorithm , mocsa .benchmark results on 12 test problems show that mocsa finds better solutions than nsga2 , in terms of four criteria tested .solutions by mocsa are closer to the pareto front , a higher fraction of them are on the pareto front , they cover a wider objective space , and they are more evenly distributed on average .we note that the efficiency of mocsa arises from the fact that it controls the diversity of solutions in the decision space as well as in the objective space .the authors acknowledge support from creative research initiatives ( center for in silico protein science , 20110000040 ) of mest / kosef .we thank korea institute for advanced study for providing computing resources ( kias center for advanced computation linux cluster ) for this work .we also would like to acknowledge the support from the kisti supercomputing center ( ksc-2012-c3 - 02 ) .coello coello , c. , corts , n. , 2002 .an approach to solve multiobjective optimization problems based on an artificial immune system . in : proceedings of first international conference on artificial immune systems 2002; canterbury , uk .. 212221 .coello coello , c. , lechuga , m. , 2002 .mopso : a proposal for multiple objective particle swarm optimization . in : evolutionary computation , 2002 .proceedings of the 2002 congress on .vol . 2 .ieee , pp .10511056 .deb , k. , agrawal , s. , pratap , a. , meyarivan , t. , 2000 . a fast elitist non - dominated sorting genetic algorithm for multi - objective optimization : nsga - ii .lecture notes in computer science 1917 , 849858 .deb , k. , thiele , l. , laumanns , m. , zitzler , e. , 2002 .scalable multi - objective optimization test problems . in : proceedings of the congress on evolutionary computation ( cec-2002),(honolulu , usa ) .proceedings of the congress on evolutionary computation ( cec-2002),(honolulu , usa ) , pp . 825830 .doerner , k. , gutjahr , w. , hartl , r. , strauss , c. , stummer , c. , 2004 .pareto ant colony optimization : a metaheuristic approach to multiobjective portfolio selection .annals of operations research 131 ( 1 ) , 7999 .hansen , m. , 1997 .tabu search for multiobjective optimization : mots . in : proceedings of the 13th international conference on multiple criteria decision making ( mcdm97 ) , cape town , south africa .citeseer , pp . 574586 .horn , j. , nafpliotis , n. , goldberg , d. , 1994 . a niched pareto genetic algorithm for multiobjective optimization . in : evolutionary computation , 1994 .ieee world congress on computational intelligence . , proceedings of the first ieee conference on .ieee , pp .8287 .joo , k. , lee , j. , seo , j. , lee , k. , kim , b. , lee , j. , 2009 .all - atom chain - building by optimizing modeller energy function using conformational space annealing .proteins : structure , function , and bioinformatics 75 ( 4 ) , 10101023 .lee , j. , liwo , a. , ripoll , d. , pillardy , j. , scheraga , h. , jan .calculation of protein conformation by global optimization of a potential energy function .proteins structure function and genetics 37 ( s 3 ) , 204208 .liwo , a. , lee , j. , ripoll , d. r. , pillardy , j. , scheraga , h. , may 1999 .protein structure prediction by global optimization of a potential energy function .proceedings of the national academy of sciences of the united states of america 96 ( 10 ) , 54825 .pillardy , j. , czaplewski , c. 
, liwo , a. , lee , j. , ripoll , d. r. , kamierkiewicz , r. , oldziej , s. , wedemeyer , w. j. , gibson , k. d. , arnautova , y. a. , saunders , j. , ye , y. j. , scheraga , h. , feb . 2001 .recent improvements in prediction of protein structure by global optimization of a potential energy function .proceedings of the national academy of sciences of the united states of america 98 ( 5 ) , 232933 .schaffer , j. , 1985 .multiple objective optimization with vector evaluated genetic algorithms . in : proceedings of the 1st international conference on genetic algorithms .l. erlbaum associates inc . , pp . 93100 .van veldhuizen , d. , lamont , g. , 2000 . on measuring multiobjective evolutionary algorithm performance . in : evolutionary computation , 2000 .proceedings of the 2000 congress on .vol . 1 .ieee , pp .
we introduce mocsa , a novel multiobjective optimization algorithm based on the conformational space annealing ( csa ) algorithm . it has three characteristic features : ( a ) the dominance relationship and the distance between solutions in the objective space are used as the fitness measure , ( b ) update rules are based on this fitness as well as on the distance between solutions in the decision space , and ( c ) it uses a constrained local minimizer . we have tested mocsa on 12 test problems drawn from the zdt and dtlz test suites . benchmark results show that solutions obtained by mocsa are closer to the pareto front and cover a wider range of the objective space than those obtained by the elitist non - dominated sorting genetic algorithm ( nsga2 ) . conformational space annealing , multiobjective optimization , genetic algorithm , evolutionary algorithm , pareto front
_ steganography _ is a scientific discipline within the field known as _ data hiding _ ,concerned with hiding information into a commonly used media , in such a way that no one apart from the sender and the intended recipient can detect the presence of embedded data .a comprehensive overview of the core principles and the mathematical methods that can be used for data hiding can be found in .an interesting steganographic method is known as _ matrix encoding _ , introduced by crandall and analyzed by bierbrauer et al .matrix encoding requires the sender and the recipient to agree in advance on a parity check matrix , and the secret message is then extracted by the recipient as the syndrome ( with respect to ) of the received cover object .this method was made popular by westfeld , who incorporated a specific implementation using hamming codes in his f5 algorithm , which can embed bits of message in cover symbols by changing , at most , one of them .there are two parameters which help to evaluate the performance of a steganographic method over a cover message of symbols : the _ average distortion _ , where is the expected number of changes over uniformly distributed messages ; and the _ embedding rate _ , which is the amount of bits that can be hidden in a cover message . in general , for the same embedding rate a method is better when the average distortion is smaller . following the terminology used by fridrich et al . , the pair will be called _-rate_. furthermore , as willems et al . in , we will also assume that a discrete source produces a sequence , where is the block length , each , and depends on whether the source is a grayscale digital image , or a cd audio , etc .the message we want to hide into a host sequence produces a composite sequence , where and each .the composite sequence is obtained from distorting , and the distortion will be assumed to be a squared - error distortion ( see ) . in these conditions , if information is only carried by the least significant bit ( lsb ) of each , the appropriate solution comes from using binary hamming codes , improved using product hamming codes . for larger magnitude of changes , but limited to , that is , , where , the situation is called -steganography " , and the information is carried by the two least significant bits .it is known that the embedding becomes statistically detectable rather quickly with the increasing amplitude of embedding changes .therefore , our interest goes to avoid changes of amplitude greater than one . with this assumption ,our steganographic scheme will be compared with the upper bound from for the embedding rate in -steganography " , given by , where is the binary entropy function and is the average distortion .a main purpose of steganography is designing schemes in order to approach this upper bound . in most of the previous papers , -steganography " has involved a ternary coding problem .willems et al . 
proposed a schemed based on ternary hamming and golay codes , which were proved to be optimal .fridrich and lisonk proposed a method based on rainbow colouring graphs which , for some values , outperformed the scheme obtained by direct sum of ternary hamming codes with the same average distortion .however , both methods from and show a problem when dealing with extreme grayscale values , since they suggest making a change of magnitude greater than one in order to avoid having to apply the change and to a host sequence of value and , respectively .note that the kind of change they propose would obviously introduce larger distortion and therefore make the embedding more statistically detectable . in this paperwe also consider the -steganography .our new method is based on perfect -linear codes which , although they are not linear , they have a representation using a parity check matrix that makes them as efficient as the hamming codes .as we will show , this new method not only performs better than the one obtained by direct sum of ternary hamming codes from , but it also deals better with the extreme grayscale values , because the magnitude of embedding changes is under no circumstances greater than one . to make this paper self - contained , we review in section [ sec : additivecodes ] a few elementary concepts on perfect -linear codes , relevant for our study .the new steganographic method is presented in section [ sec : stegoz2z4 ] , whereas an improvement to better deal with the extreme grayscale values problem is given in section [ sec : anomalies ] .finally , the paper is concluded in section [ sec : conclusions ] .in general , any non - empty subgroup of is a _ -additive code _ , where denotes the set of all binary vectors of length and is the set of all -tuples in .let , where is given by the map where , , , and is the usual gray map from onto .a -additive code is also isomorphic to an abelian structure like .therefore , has codewords , where of them are of order two .we call such code a_ -additive code of type _ and its binary image is a _-linear code of type . note that the lee distance of a -additive code coincides with the hamming distance of the -linear code , and that the binary code does not have to be linear .the _ -additive dual code _ of , denoted by , is defined as the set of vectors in that are orthogonal to every codeword in , being the definition of inner product in the following : where and computations are made considering the zeros and ones in the binary coordinates as quaternary zeros and ones , respectively .the binary code , of length , is called the _ -dual code _ of .a -additive code is said to be _ perfect _ if code is a perfect -linear code , that is all vectors in are within distance one from a codeword and the distance between two codewords is , at least , three . for any and each thereexists a perfect -linear code of binary length , such that its -dual code is of type , where , and ( note that the binary length can be computed as ) .the above result is due to and it allows us to write the parity check matrix of any -additive perfect code for a given value of .matrix can be represented taking all possible vectors in , up to sign changes , as columns . in this representation , there are columns which correspond to the binary part of vectors in , and columns of order four which correspond to the quaternary part .we agree on a representation of the binary coordinates as coordinates in .take a perfect -linear code and consider its -dual , which is of type . 
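Two standard ingredients used above are easy to state in code: matrix encoding with a binary Hamming code (the scheme popularized by F5, which flips at most one of 2^r - 1 cover bits to embed r message bits), and the usual Gray map 0↦00, 1↦01, 2↦11, 3↦10 that turns the mixed binary/quaternary (Z2Z4) codes into their binary images. This is a sketch with our own function names; it shows the binary prototype and the map the construction relies on, not the Z2Z4 embedding method of the next section.

```python
import numpy as np

# --- matrix encoding with a binary Hamming code (embed r bits in 2**r - 1) ---

def hamming_parity_check(r):
    """r x (2**r - 1) parity-check matrix; column j (1-based) is j in binary."""
    n = 2 ** r - 1
    return np.array([[(j >> i) & 1 for j in range(1, n + 1)] for i in range(r)])

def embed(cover_bits, message_bits, H):
    """Flip at most one cover bit so the syndrome of the result is the message."""
    x = np.array(cover_bits) % 2
    s = (H @ x + np.array(message_bits)) % 2        # syndrome mismatch over GF(2)
    if s.any():
        j = int(sum(int(b) << i for i, b in enumerate(s)))   # column equal to s
        x[j - 1] ^= 1                                # the single embedding change
    return x

def extract(stego_bits, H):
    """The recipient recovers the message as the syndrome of the stego bits."""
    return (H @ np.array(stego_bits)) % 2

H = hamming_parity_check(3)                          # 3 bits hidden in 7 cover bits
cover, msg = np.array([1, 0, 1, 1, 0, 0, 1]), np.array([0, 1, 1])
stego = embed(cover, msg, H)
assert (extract(stego, H) == msg).all() and (stego != cover).sum() <= 1

# --- the Gray map and its extension to mixed binary/quaternary vectors ---

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}          # phi : Z4 -> Z2^2

def extended_gray(binary_part, quaternary_part):
    """Phi : Z2^alpha x Z4^beta -> Z2^(alpha + 2 beta); binary coordinates are
    kept, each quaternary coordinate is expanded through the Gray map."""
    image = list(binary_part)
    for q in quaternary_part:
        image.extend(GRAY[q % 4])
    return image

def lee_weight(quaternary_part):
    """Lee weight in Z4 (0,1,2,3 -> 0,1,2,1); the Gray map is an isometry,
    so this equals the Hamming weight of the quaternary part's image."""
    return sum(min(q % 4, 4 - q % 4) for q in quaternary_part)

assert extended_gray([1, 0], [2, 3]) == [1, 0, 1, 1, 1, 0]
assert lee_weight([2, 3]) == sum(GRAY[2]) + sum(GRAY[3])
```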
as stated in the previous section , this gives us a parity check matrix which has rows of order two and rows of order four .for instance , for and according to , there are three different -additive perfect codes of binary length which correspond to the possible values of . for ,the corresponding -additive perfect code is the usual binary hamming code , while for the -additive perfect code has parameters , , , and the following parity check matrix : let , for , denote the -th column vector of . note that the all twos vector is always one of the columns in and , for the sake of simplicity , it will be written as the column .we group the remaining first columns in in such a way that , for any , the column vector is paired up with its complementary column vector , where . to use these perfect -additive codes in steganography take andlet be an -length source of grayscale symbols such that , where , for instance , for grayscale images .we assume that a grayscale symbol is represented as a binary vector such that where is the inverse of gray map .we will use the two least significant bits ( lsbs ) , , of every grayscale symbol in the source , for , as well as the least significant bit of symbol to embed the secret message . each symbol be associated with one or more column vectors in , depending on the grayscale symbol : 1 .grayscale symbol is associated with column vector by taking the least significant bit of .grayscale symbol , for , is associated with the two column vectors and , by taking , respectively , the two least significant bits , , of .grayscale symbol , for , is associated with column vector by taking its two least significant bits and interpreting them as an integer number in . in this way, the given -length packet of symbols is translated into a vector of binary coordinates and quaternary coordinates .the embedding process we are proposing is based on the matrix encoding method .the secret message can be any vector .vector indicates the changes needed to embed within ; that is , where is an integer whose value will be described bellow , is the syndrome vector of and is a column vector in .the following situations can occur , depending on which column needs to be modified : 1 .if , then the embedder is required to change the least significant bit of by adding or substracting one unit to / from , depending on which operation will flip its least significant bit , .if is among the first column vectors in and , then can only be . in this case , since was paired up with its complementary column vector , then this situation is equivalent to make or , where and are the least significant bits of the symbol which had been associated with those two column vectors . hence , after the inverse of gray map , by changing one or another least significant bit we are actually adding or subtracting one unit to / from .note that a problem may crop up at this point when we need to add to a symbol of value or , likewise , when has value and we need to subtract from it .3 . if is one of the last columns in we can see that this situation corresponds to add to . 
note that because we are using a -additive perfect code , will never be .hence , the embedder should add ( ) or subtract ( ) one unit to / from symbol .once again , a problem may arise with the extreme grayscale values .[ ex : noanomalies ] let be an -length source of grayscale symbols , where and , and let be the matrix in ( [ pcm ] ) .the source is then translated into the vector in the way specified above .let be the vector representing the secret message we want to embed in .we then compute and see , by the matrix encoding method , that and . according to the method just described ,we should apply the change . in this way, becomes , and then , which has the expected syndrome .as already mentioned at the beginning of this paper , the problematic cases related to the extreme grayscale values are also present in the methods from and , but their authors assume that the probability of gray value saturation is not too large . we argue that , though rare , this gray saturation can still occur . however , in order to compare our proposal with these others we will not consider these problems either until the next section .therefore , we proceed to compute the values of the average distortion and the embedding rate .our method is able to hide any secret vector into the given symbols .hence , the embedding rate is bits per symbols , .concerning the average distortion , we are using a perfect code of binary length , which corresponds to grayscale symbols .there are symbols , for , with a probability of being subjected to a change ; a symbol with a probability of being the one changed ; and , finally , there is a probability of that neither of the symbols will need to be changed to embed .hence , .the described method has a -rate , where and is any integer .we are able to generate a specific embedding scheme for any value of but not for any -rate . with the aim of improving this situation ,convex combinations of -rates of two codes related to their direct sum are extensively treated in .actually , it is possible to choose the coordinate and cover more -rates by taking convex combinations .therefore , if is a non - allowable parameter for the average distortion we can still take , where are two contiguous allowable parameters , and by means of the direct sum of the two codes with embedding rate and , respectively , we can obtain a new -rate , with and . from a graphic point of view , this is equivalent to draw a line between two contiguous points and , as it is shown in [ fig : graphicwoanom ] .in the following theorem we claim that the -rate of our method improves the one given by direct sum of ternary hamming codes from . 
for , the -rate given by the method based on -additive perfect codes improves the -rate obtained by direct sum of ternary hamming codes with the same average distortion .optimal embedding ( of course , in the allowable values of ) can be obtained by using ternary codes , as it is shown in .the -rate of these codes is for any integer .our method , based on -additive perfect codes , has -rate , for any integer and .take , for any , two contiguous values for such that and write , where .we want to prove that , for , we have , which is straightforward .however , since it is neither short nor contributes to the well understanding of the method , we do not include all computations here .the graphic bellow compares the -rate of the method based on ternary hamming codes with that one based on -additive perfect codes .as one may see in this graphic , for some values of the average distortion , the scheme based on -additive perfect codes has greater embedding rate than the one based on ternary hamming codes .* remark : * the same argumentation can be used and the same conclusion can be reached taking instead of and comparing our method with the method described in .-rate , for , of steganographic methods based on ternary hamming codes and on -additive perfect codes . ]in section [ sec : stegoz2z4 ] we described a problem which may raise when , according to our method , the embedder is required to add one unit to a source symbol containing the maximum allowed value ( ) , or to substract one unit from a symbol containing the minimum allowed value , . to face this problem, we will use the complementary column vector of columns in matrix , where and is among the last columns in . note that and can coincide .the first column vectors in will be paired up as before , and the association between each and each column vector in will be also the same as in section [ sec : stegoz2z4 ] . however , given an -length source of grayscale symbols , a secret message and the vector , such that , indicating the changes needed to embed within , we can now make some variations on the kinds of changes to be done for the specific problematic cases : * if is among the first columns in , for , and the embedder is required to add to a symbol , then the embedder should instead substract from as well as perform the appropiate operation ( or ) over to have flipped . likewise , if the embedder is required to substract from a symbol , then ( s)he should instead add to and also change to flip . *if is one of the last columns in , and the embedder has to add to a symbol , ( s)he should instead substract from the grayscale symbol associated to and also change to flip . if the method requires substracting from , then we should instead add to the symbol associated to and , again , change to flip .[ ex : anomalies ] let be an -length source of grayscale symbols , where and , and let be the matrix ( [ pcm ] ) . as in example[ ex : noanomalies ] , the packet is translated into vector , and .however , note that now we are not able to make because . instead of this , we will add one unit to , which is the symbol associated with , and substract one unit from so as to have its least significant bit flipped .therefore , we obtain and then , which has the desired syndrome . 
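The convex-combination construction described above amounts to linear interpolation between two achievable rate points, realised by a direct sum of the two corresponding codes. A minimal sketch is below; the sample points in the usage lines are hypothetical numbers for illustration, not the rates of the codes discussed in the text.

```python
def interpolated_rate(points, D):
    """Embedding rate reachable at average distortion D by the direct sum of
    the two codes whose (D, E)-rates are the contiguous allowable points
    bracketing D (a straight line drawn between the two points)."""
    points = sorted(points)
    for (d1, e1), (d2, e2) in zip(points, points[1:]):
        if d1 <= D <= d2:
            lam = (d2 - D) / (d2 - d1)      # weight of the first code
            return lam * e1 + (1.0 - lam) * e2
    raise ValueError("D is outside the range covered by the given points")

# hypothetical contiguous allowable rate points, for illustration only
points = [(0.20, 0.90), (0.32, 1.20)]
print(interpolated_rate(points, 0.25))
```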
the method above described has the same embedding rate as the one from section [ sec : stegoz2z4 ] but a slightly worse average distortion .we will take into account the squared - error distortion defined in for our reasoning .as before , among the total number of grayscale symbols , there are symbols , for , with a probability of being changed ; a symbol with a probability of being the one changed ; and , finally , there is a probability of that neither of the symbols will need to be changed .as one may have noted in this scheme , performing a certain change to a symbol , associated with a column in , has the same effect as performing the opposite change to the grayscale symbol associated with and also changing the least significant bit of .this means that with probability we will change a symbol , for , a magnitude of ; and with probability we will change two other symbols also a magnitude of .therefore , and the average distortion is thus .hence , the described method has -rate .as we have already mentioned , the problem of grayscale symbols with and values was previously detected in both and . with the aim of providing a possible solution to this problem, the authors suggested to perform a change of a magnitude greater than .the effects of doing this were are out of the scope of -steganography .in the remainder of this section we proceed to compare the -rate of our method with the -rate that those methods would have if their proposed solution was implemented .the scheme presented by willems et al . is based on ternary hamming codes , which are known to have length , where denotes the number of parity check equations .let us assume that whenever the embedder is required to perform a change ( or ) that would lead the corresponding symbol to a non - allowed value , then a change of magnitude ( or ) is made instead . while the embedding rate of this scheme would still be , the average distortion would no longer be .the actual expected number of changes is computed by noting that a symbol will be changed with probability , and will not with probability . among the cases in which a symbol would need to be changed , there is a probability of that a symbol will be changed a magnitude of , and a probability of that it will be changed a magnitude of . by the squared - error distortion , and therefore .fridrich and lisonk propose in their paper to pool the grayscale symbols source into cells of size , then rainbow colour these cells and apply a -ary hamming code , where is a prime power .they measure the distortion by counting the maximum number of embedding changes , thus just considering the covering radius of the -ary hamming codes .however , we will now consider the average number of embedding changes ( see ) .as willems et at . , the authors from also suggest to perform a change of magnitude to solve the extreme grayscale values problem .if this is done , the embedding rate would still be the same , , but the average distortion would now be .one can see in [ fig : graphicwanom ] how our steganographic method for -additive perfect codes deals with the extreme grayscale values problem , for some values of , better than those using ternary hamming codes ( ) from and .-rates , for , of steganographic methods based on ternary hamming codes and on -additive perfect codes , when they are dealing with the extreme grayscale values problem described in section [ sec : anomalies ] . 
in this paper , we have presented a new method for -steganography , based on perfect -linear codes . these codes are non - linear , but they still admit a parity check matrix representation that makes them efficient to work with . as we have shown in sections [ sec : stegoz2z4 ] and [ sec : anomalies ] , this new scheme outperforms the one obtained by direct sum of ternary hamming codes ( see ) as well as the one obtained after rainbow colouring graphs by using -ary hamming codes for . if we consider the special cases in which the technique might require subtracting one unit from a grayscale symbol containing the minimum allowed value , or adding one unit to a symbol containing the maximum allowed value , our method performs even better than the aforementioned schemes . this is because , unlike them , our method never applies any change of magnitude greater than , but two changes of magnitude instead , which is better in terms of distortion . therefore , our method makes the embedding less statistically detectable . as for further research , since the approach based on product hamming codes in improved the performance of basic lsb steganography and of the basic algorithm , we would also expect a considerable improvement of the -rate by using product -additive codes or subspaces of product -additive codes in -steganography . j. bierbrauer and j. fridrich , `` constructing good covering codes for applications in steganography '' , in transactions on data hiding and multimedia security iii , vol . 4920 of lecture notes in computer science , pp . 1 - 22 , springer , berlin , germany , 2008 .
steganography is an information hiding application which aims to hide secret data imperceptibly in commonly used media . unfortunately , the theoretical asymptotic hiding capacity of steganographic systems is not attained by the algorithms developed so far . in this paper , we describe a novel coding method based on -linear codes that conforms to -steganography , that is , secret data are embedded into a cover message by distorting each symbol by one unit at most . this method solves some problems encountered by the most efficient methods known today , which are based on ternary hamming codes . finally , the performance of this new technique is compared with that of the aforementioned methods and with the well - known theoretical upper bound .
the scope of the growing field of complexity science `` ( or complex systems '' ) includes a broad variety of problems belonging to different scientific areas .examples for complex systems `` can be found in physics , biology , computer science , ecology , economy , sociology and other fields .a recurring theme in most of what is classified as complex systems '' is that of _ emergence_. emergent properties are those which arise spontaneously from the collective dynamics of a large assemblage of interacting parts .a basic question one asks in this context is how to derive and predict the emergent properties from the behavior of the individual parts .in other words , the central issue is how to extract large - scale , global properties from the underlying or microscopic degrees of freedom . in the physical sciences ,there are many examples of emergent phenomena where it is indeed possible to relate the microscopic and macroscopic worlds .physical systems are typically described in terms of equations of motion of a huge number of microscopic degrees of freedom ( e.g. atoms ) .the microscopic dynamics is often erratic and complex , yet in many cases it gives rise to patterns with characteristic length and time scales much larger than the microscopic ones ( e.g. the pressure and temperature of a gas ) .these large scale patterns often posses the interesting , physically relevant properties of the system and one would like to model them or simulate their behavior . an important problem in physics is therefore to understand and predict the emergence of large scale behavior in a system , starting from its microscopic description .this problem is a fundamental one because most physical systems contain too many parts to be simulated directly and would become intractable without a large reduction in the number of degrees of freedom .a useful way to address this issue is to construct coarse - grained models , which treat the dynamics of the large scale patterns .the derivation of coarse - grained models from the microscopic dynamics is far from trivial . in most casesit is done in a phenomenological manner by introducing various ( often uncontrolled ) approximations .the problem of predicting emergent properties is most severe in systems which are modelled or described by _undecidable _ mathematical algorithms . for such systemsthere exists no computationally efficient way of predicting their long time evolution . in order to know the system s state after ( e.g. ) one million time steps one must evolve the system a million time steps or perform a computation of equivalent complexity .wolfram has termed such systems _ computationally irreducible _ and suggested that their existence in nature is at the root of our apparent inability to model and understand complex systems .it is tempting to conclude from this that the enterprise of physics itself is doomed from the outset ; rather than attempting to construct solvable mathematical models of physical processes , computational models should be built , explored and empirically analyzed .this argument , however , assumes that infinite precision is required for the prediction of future evolution .as we mentioned above , usually coarse - grained or even statistical information is sufficient . an interesting question that arises is therefore : is it possible to derive coarse - grained models of undecidable systems and can these coarse - grained models be decidable and predictable ? 
in this workwe address the emergence of large scale patterns in complex systems and the associated predictability problems by studying cellular - automata ( ca ) .ca are spatially and temporally discrete dynamical systems composed of a lattice of cells .they were originally introduced by von neumann and ulam in the 1940 s as a possible way of simulating self - reproduction in biological systems . since then , ca have attracted a great deal of interest in physics because they capture two basic ingredients of many physical systems : 1 ) they evolve according to a local uniform rule .2 ) ca can exhibit rich behavior even with very simple update rules .for similar and other reasons , ca have also attracted attention in computer science , biology , material science and many other fields . for a review on the literature on ca see refs . .the simple construction of ca makes them accessible to computational theoretic research methods . using these methods it is sometimes possible to quantify the complexity of ca rules according to the types of computations they are capable of performing .this together with the fact that ca are caricatures of physical systems has led many authors to use them as a conceptual vehicle for studying complexity and pattern formation . in this workwe adopt this approach and study the predictability of emergent patterns in complex systems by attempting to systematically coarse - grain ca .a brief preliminary report of our project can be found in ref . .there is no unique way to define coarse - graining , but here we will mean that our information about the ca is locally coarse - grained in the sense of being stroboscopic in time , but that nearby cells are grouped into a supercell according to some specified rule ( as is frequently done in statistical physics ) . below we shall frequently drop the qualifier `` local '' whenever there is no cause for confusion .a system which can be coarse - grained is _ compact - able _ since it is possible to calculate its future time evolution ( or some coarse aspects of it ) using a more compact algorithm than its native description .note that our use of the term compact - able refers to the phase space reduction associated with coarse - graining , and is agnostic as to whether or not the coarse - grained system is decidable or undecidable .accordingly , we define _ predictable _ to mean that a system is decidable or has a decidable coarse - graining . thus , it is possible to calculate the future time evolution of a predictable system ( or some coarse aspects of it ) using an algorithm which is more compact than both the native and coarse - grained descriptions .our work is organized as follows . in section [ ca_intro ]we give an introduction to ca and their use in the study of complexity . in section [ cg_procedure ]we present a procedure for coarse - graining ca .section [ cg_results ] shows and discusses the results of applying our procedure to one dimensional ca .most of the ca that we attempt to coarse - grain are wolfram s 256 elementary rules for nearest - neighbor ca .we will also consider a few other rules of special interest . 
in section [ kolmogorov_complexity ]we consider whether the coarse - grain - ability of many ca that we found in the elementary rule family is a common property of ca .using computational theoretic arguments we argue that the large scale behavior of local processes must be very simple .almost all ca can therefore be coarse - grained if we go to a large enough scale .our results are summarized and discussed in [ conclusions ] .cellular automata are a class of homogeneous , local and fully discrete dynamical systems. a cellular automaton is composed of a lattice of cells that can each assume a value from a finite alphabet .we denote individual lattice cells by where the indexing reflects the dimensionality and geometry of the lattice .cell values evolve in discrete time steps according to the pre - prescribed update rule .the update rule determines a cell s new state as a function of cell values in a finite neighborhood .for example , in the case of a one dimensional , nearest - neighbor ca the update rule is a function and ] of , makes the move \\ & = & p\left(f_{a^n}\left[a^n_{n-1}(t),a^n_n(t),a^n_{n+1}(t)\right]\right ) \nonumber \\ & = & p\left(a^n_n(t+1)\right)\ ; , \nonumber\end{aligned}\ ] ] and therefore satisfies eq .( [ coarse - graining_def ] ) . since a single time step of computes time steps of , is also a coarse - graining of with a coarse - grained time scale .analogies of these operators have been used in attempts to reduce the computational complexity of certain stochastic partial differential equations .similar ideas have been used to calculate critical exponents in probabilistic ca . to illustrate our methodlet us give a simple example .rule 128 is a class 1 elementary ca defined on the alphabet with the update function = } \nonumber \\ & & \left\ { \begin{array}{l } \square\;,\;x_{n-1},x_n , x_{n+1}\neq \blacksquare,\blacksquare,\blacksquare \\ \blacksquare\;,\;x_{n-1},x_n , x_{n+1}=\blacksquare,\blacksquare,\blacksquare\\ \end{array } \right . \;.\label{f128}\end{aligned}\ ] ] figure [ cgof146figure ] b ) shows a typical evolution of this simple rule where all black regions which are in contact with white cells decay at a constant rate . to coarse - grain rule 128we choose a supercell size and calculate the supercell update function = } \nonumber \\ & & \left\ { \begin{array}{l } \blacksquare\blacksquare\;,\;y_{n-1},y_n , y_{n+1}=\blacksquare\blacksquare,\blacksquare\blacksquare,\blacksquare\blacksquare \\\square\blacksquare\;,\;y_{n-1},y_n , y_{n+1}=\square\blacksquare,\blacksquare\blacksquare,\blacksquare\blacksquare \\\blacksquare\square\;,\;y_{n-1},y_n , y_{n+1}=\blacksquare\blacksquare,\blacksquare\blacksquare,\blacksquare\square \\ \square\square\;,\;\mbox{all other combinations } \end{array } \right.\;. 
\label{rule128supercellf}\end{aligned}\ ] ]next we project the supercell alphabet using namely , the value of the coarse - grained cell is black only when the supercell value corresponds to two black cells .applying this projection to the supercell update function eq.([rule128supercellf ] ) we find that \right)= } \nonumber \\ & & \left\ { \begin{array}{l } \blacksquare\;,\;p(y_{n-1}),p(y_n),p(y_{n+1})=\blacksquare,\blacksquare,\blacksquare \\\end{array } \right .\;,\end{aligned}\ ] ] which is identical to the original update function .rule 128 can therefore be coarse - grained to itself , an expected result due to the scale invariant behavior of this simple rule .it is interesting to notice that the above coarse - graining procedure can lose two very different types of dynamic information . to see this ,consider eq.([trans_func_projection ] ) .this equation can be satisfied in two ways .in the first case &=&f_{a^n}\left[y_1 , y_2,y_3\right],\nonumber \\ & & \forall \left(x , y|p(x_i)=p(y_i)\right)\ ; , \label{irrelevant_cond}\end{aligned}\ ] ] which necessarily leads to eq .( [ trans_func_projection ] ) . in this case is insensitive to the projection of its arguments .the distinction between two variables which are identical under projection is therefore _ irrelevant _ to the dynamics of , and by construction to the long time dynamics of . by eliminating irrelevant degrees of freedom ( dof ) , coarse - graining of this typeremoves information which is redundant on the microscopic scale .the coarse ca in this case accounts for all possible long time trajectories of the original ca and the complexity classification of the two ca is therefore the same . in the second case eq .( [ trans_func_projection ] ) is satisfied even though eq .( [ irrelevant_cond ] ) is violated . herethe distinction between two variables which are identical under projection is _ relevant _ to the dynamics of .replacing by in the initial condition may give rise to a difference in the dynamics of .moreover , the difference can be ( and in many occasions is ) unbounded in space and time .coarse - graining in this case is possible because the difference is constrained in the cell state space by the projection operator .namely , projection of all such different dynamics results in the same coarse - grained behavior .note that the coarse ca in this case can not account for all possible long time trajectories of the original one .it is therefore possible for the original and coarse ca to fall into different complexity classifications .coarse - graining by elimination of relevant dof removes information which is not redundant with respect to the original system .the information becomes redundant only when moving to the coarse scale .in fact , redundant " becomes a subjective qualifier here since it depends on our choice of coarse description .in other words , it depends on what aspects of the microscopic dynamics we want the coarse ca to capture .let us illustrate the difference between coarse - graining of relevant and irrelevant dof .consider a dynamical system whose initial condition is in the vicinity of two limit cycles .depending on the initial condition , the system will flow to one of the two cycles .coarse - graining of irrelevant dof can project all the initial conditions on to two possible long time behaviors . 
now consider a system which is chaotic with two strange attractors .coarse - graining irrelevant dof is inappropriate because the dynamics is sensitive to small changes in the initial conditions .coarse - graining of relevant dof is appropriate , however .the resulting coarse - grained system will distinguish between trajectories that circle the first or second attractor , but will be insensitive to the details of those trajectories . in a sense , this is analogous to the subtleties encountered in constructing renormalization group transformations for the critical behavior of antiferromagnets .the coarse - graining procedure we described above is not constructive , but instead is a self - consistency condition on a putative coarse - graining rule with a specific supercell size and projection operator . in many cases the single - valuedness condition eq.([trans_func_projection ] ) is not satisfied , the coarse - graining fails and one must try other choices of and .it is therefore natural to ask the following questions .can all ca be coarse - grained ? if not , which ca can be coarse - grained and which can not ? what types of coarse - graining transitions can we hope to find ? to answer these questions we tried systematically to coarse - grain one dimensional ca .we considered wolfram s 256 elementary rules and several non - binary ca of interest to us .our coarse - graining procedure was applied to each rule with different choices of and . in this way we were able to coarse - grain 240 out of the 256 elementary ca .these 240 coarse - grained - able rules include members of all four classes .the 16 elementary ca which we could not coarse - grain are rules 30 , 45 , 106 , 154 and their symmetries .rules 30 , 45 and 106 belong to class 3 while 154 is a class 2 rule .we do nt know if our inability to coarse - grain these 16 rules comes from limited computing power or from something deeper .we suspect ( and give arguments in section [ kolmogorov_complexity ] ) the former .the number of possible projection operators grows very fast with . even for small ,it is computationally impossible to scan all possible . in order to find valid projections , we therefore used two simple search strategies . in the first strategy, we looked for coarse - graining transitions within the elementary ca family by considering which project back on the binary alphabet . excluding the trivial projections and there are such projections .we were able to scan all of them for and found many coarse - graining transitions .figure [ mapfigure ] shows a map of the coarse - graining transitions that we found within the family of elementary rules .an arrow in the map indicates that each rule from the origin group can be coarse - grained to each rule from the target group .the supercell size and the projection are not shown and each arrow may correspond to several choices of and . 
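The self-consistency condition of eq. ([trans_func_projection]) can be checked mechanically: evolve every possible 3N-cell block N time steps to obtain the supercell rule, then verify that the projection induces a single-valued coarse update. A sketch is below; it assumes a two-state nearest-neighbour rule, coarse-grains time by the same factor N as space (so the coarse rule is again nearest-neighbour), and takes the projection as a dictionary from N-cell blocks to coarse symbols. Function names are ours.

```python
from itertools import product

def ca_step(cells, rule):
    """One step of an elementary CA on an open block: the result is two cells
    shorter, since the outermost cells lack a full neighbourhood."""
    return tuple((rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[i + 1])) & 1
                 for i in range(1, len(cells) - 1))

def supercell_rule(rule, N):
    """Map a triple of N-cell supercells to the centre supercell N steps later."""
    table = {}
    for block in product((0, 1), repeat=3 * N):
        cells = block
        for _ in range(N):
            cells = ca_step(cells, rule)     # determined region shrinks by 2 per step
        table[(block[:N], block[N:2 * N], block[2 * N:])] = cells
    return table

def coarse_grain(rule, N, P):
    """Return the induced coarse rule if the projection P is consistent
    (eq. (trans_func_projection)), otherwise None."""
    coarse = {}
    for (a, b, c), out in supercell_rule(rule, N).items():
        key = (P[a], P[b], P[c])
        if coarse.setdefault(key, P[out]) != P[out]:
            return None                      # single-valuedness violated
    return coarse

# rule 105 with N = 2 and the "both cells equal -> black" projection
P = {(0, 0): 1, (1, 1): 1, (0, 1): 0, (1, 0): 0}
print(coarse_grain(105, 2, P))               # single-valued: the XOR table of rule 150
```

For this particular pair the result can also be checked by hand: writing p_n for the XOR of the two cells in supercell n, two applications of rule 105 give p''_n = p_{n-1} ⊕ p_n ⊕ p_{n+1}; since the projection is the complement of p_n, the complementations cancel and the coarse variable obeys rule 150.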
as we explained above , only coarse - grainings with are shown due to limited computing power .other transitions within the elementary rule family may exist with larger values of .this map is in some sense an analogue of the familiar renormalization group flow diagrams from statistical mechanics .several features of fig .[ mapfigure ] are worthy of a short discussion .first , notice that the map manifests the left`` '' and 0`` '' symmetries of the elementary ca family .for example rules 252 , 136 and 238 are the 0`` '' , left`` '' and the 0`` '' and left`` '' symmetries of rule 192 respectively .second , coarse - graining transitions are obviously transitive , i.e. if goes to with and goes to with then goes to with .for some transitions , the map in fig . [ mapfigure ] fails to show this property because we did not attain large enough values of .another interesting feature of the transition map is that the apparent rule complexity never increases with a coarse - graining transition .namely , we never find a simple behaving rule which after being coarse - grained becomes a complex rule .the transition map , therefore , introduces a hierarchy of elementary rules and this hierarchy agrees well with the apparent rule complexity .the hierarchy is partial and we can not relate rules which are not connected by a coarse - graining transition . as opposed to the wolfram classification ,this coarse - graining hierarchy is well defined and is therefore a good candidate for a complexity measure .finally notice that the eight rules 0 , 60 , 90 , 102 , 150 , 170 , 204 , 240 , whose update function has the additive form &=&\alpha\cdot x_{n-1}\oplus\beta\cdot x_n \oplus \gamma\cdot x_{n+1}\;,\nonumber \\ & & \alpha,\beta,\gamma\in \{0,1\}\;,\end{aligned}\ ] ] where denotes the xor operation , are all fixed points of the map .this result is not limited to elementary rules .as showed by barbe et.al , additive ca in arbitrary dimension whose alphabet sizes are prime numbers coarse - grain themselves .we conjecture that there are situations where reducible fixed points exist for a wide range of systems , analogous to the emergence of amplitude equations in the vicinity of bifurcation points .when projecting back on the binary alphabet , one maximizes the amount of information lost in the coarse - graining transition . at first glance, this seems to be an unlikely strategy , because it is difficult for the coarse - grained ca to emulate the original one when so much information was lost . in terms of our coarse - graining proceduresuch a projection maximizes the number of instances of eq .( [ trans_func_projection ] ) . on second examination , however .this strategy is not that poor .the fact that there are only two states in the coarse - grained alphabet reduces the probability that an instance of eq.([trans_func_projection ] ) will be violated to 1/2 .the extreme case of this argument would be a projection on a coarse - grained alphabet with a single state .such a trivial projection will never violate eq .( [ trans_func_projection ] ) ( but will never show any patterns or dynamics either ) .a second search strategy for valid projection operators that we used is located on the other extreme of the above tradeoff .namely , we attempt to lose the smallest possible amount of information .we start by choosing two supercell states and and unite them using where the subscript in denotes that this is an initial trial projection to be refined later . 
the refinement process of the projection operator proceeds as follows .if ( starting with ) satisfies eq .( [ trans_func_projection ] ) then we are done .if on the other hand , eq . ( [ trans_func_projection ] ) is violated by some \right)&\neq & p_n\left(f_{a^n}\left[y_1,y_2,y_3\right]\right)\ ; , \nonumber \\ & & p_n(x_i)=p_n(y_i)\;,\end{aligned}\ ] ] the inequality is resolved by refining to \;\ ; , r_2=f_{a^n}\left[y_1,y_2,y_3\right]\;.\end{aligned}\ ] ] this process is repeated until eq .( [ trans_func_projection ] ) is satisfied .a non - trivial coarse - graining is found in cases where the resulting projection operator is non - constant ( more than a single state in the coarse - grained ca ) . by trying all possible initial pairs ,the above projection search method is guaranteed to find a valid projection if such a projection exist on the scale defined by the supercell size . using this methodwe were able to coarse - grain many ca .the resulting coarse - grained ca that are generated in this way are often multicolored and do not belong to the elementary ca family . for this reason it is difficult to graphically summarize all the transitions that we found in a map . instead of trying to give an overall view of those transitionswe will concentrate our attention on several interesting cases which we include in the examples section bellow .as our first example we choose a transition between two class 2 rules .the elementary rule 105 is defined on the alphabet with the transition function =\overline{x_{n-1}\oplus x_{n } \oplus x_{n+1}}\;,\ ] ] where the over - bar denotes the not operation , and , .we use a supercell size and calculate the transition function , defined on the alphabet .now we project this alphabet back on the alphabet with a pair of cells in rule 105 are coarse - grained to a single cell and the value of the coarse cell is black only when the pair share a same value . using the above projection operator we construct the transition function of the coarse ca .the result is found to be the transition function of the additive rule 150 : =x_{n-1}\oplus x_{n } \oplus x_{n+1}\;.\ ] ] figure [ 105to150figure ] shows the results of this coarse - graining transition . in fig .[ 105to150figure ] ( a ) we show the evolution of rule 105 with a specific initial condition while fig . [ 105to150figure ] ( b ) shows the evolution of rule 150 from the coarse - grained initial condition .the small scale details in rule 105 are lost in the transformation but extended white and black regions are coarse - grained to black regions in rule 150 .the time evolution of rule 150 captures the overall shape of these large structures but without the black - white decorations . as shown in fig .[ mapfigure ] , rule 150 is a fixed point of the transition map .rule 105 can therefore be further coarse - grained to arbitrary scales . as a second example of coarse - grained - able elementaryca we choose rule 146 .rule 146 is defined on the alphabet with the transition function = } \nonumber \\ & & \left\ { \begin{array}{l } \blacksquare\;,\;x_{n-1}x_nx_{n+1}=\square\square\blacksquare;\blacksquare\square\square;\blacksquare\blacksquare\blacksquare \\ \square\;,\;\mbox{all other combinations } \\ \end{array}\right . 
\;.\end{aligned}\ ] ] it produces a complex , seemingly random behavior which falls into the class 3 group .we choose a supercell size and calculate the transition function , defined on the alphabet .now we project this alphabet back on the alphabet with namely , a triplet of cells in rule 146 are coarse - grained to a single cell and the value of the coarse cell is black only when the triplet is all black . using the above projection operator we construct the transition function of the coarse ca .the result is found to be the transition function of rule 128 which was given in eq .( [ f128 ] ) .rule 146 can therefore be coarse - grained by rule 128 , a class 1 elementary ca . in figure [ cgof146figure ]we show the results of this coarse - graining .[ cgof146figure ] ( a ) shows the evolution of rule 146 with a specific initial condition while fig .[ cgof146figure ] ( b ) shows the evolution of rule 128 from the coarse - grained initial condition .our choice of coarse - graining has eliminated the small scale details of rule 146 .only structures of lateral size of three or more cells are accounted for .the decay of such structures in rule 146 is accurately described by rule 128 .note that a class 3 ca was coarse - grained to a class 1 ca in the above example .our gain was therefore two - fold .in addition to the phase space reduction associated with coarse - graining we have also achieved a reduction in complexity .our procedure was able to find predictable coarse - grained aspects of the dynamics even though the small scale behavior of rule 146 is complex , potentially irreducible .rule 146 can also be coarse - grained by non elementary ca .using a supercell size of we found that the difference between the combinations and is irrelevant to the long time behavior of rule 146 .it is therefore possible to project these two combinations into a single coarse grained state .the same is true for the combinations and which can be projected to another coarse - grained state .the end result of this coarse - graining ( fig .[ cgof146figure ] ( c ) ) is a 62 color ca which retains the information of all other 6 cell combinations .the amount of information lost in this transition is relatively small , 2/64 of the supercell states have been eliminated .more impressive alphabet reductions can be found by going to larger scales . for =7,8,9,10 and 11we found an alphabet reduction of 9/128 , 33/256 , 97/512 , 261/1024 and 652/2048 respectively .[ cgof146figure ] ( d ) shows the percentage of states that can be eliminated as a function of the supercell size .all of the information lost in those coarse - grainings corresponds to irrelevant dof .the two different coarse - graining transitions of rule 146 that we presented above are a good opportunity to show the difference between relevant and irrelevant dof . as we explained earlier , a transition like 146 where the rules has different complexities must involve the elimination of relevant dof .indeed if we modify an initial condition of rule 146 by replacing a segment with we will get a modified evolution . 
as we show in figure [ rule146rdoffigure ], the difference in the trajectories has a complex behavior and is unbounded in space and time .however , since and are both projected by eq .( [ rule146projection ] ) to , the projections of the original and modified trajectories will be identical .in contrast , the coarse graining of rule 146 to the 62 state ca of fig .[ cgof146figure ] ( c ) involves the elimination of irrelevant dof only .if we replace a in the initial condition with a we find that the difference between the modified and unmodified trajectories decays after a few time steps .the elementary ca rule 184 is a simplified one lane traffic flow model .its transition function is given by = } \nonumber \\ & & \left\ { \begin{array}{l } \square\;,\;x_{n-1}x_nx_{n+1}=\square\square\square;\square\square\blacksquare;\square\blacksquare\square;\blacksquare\blacksquare\square \\ \blacksquare\;,\;x_{n-1}x_nx_{n+1}=\square\blacksquare\blacksquare;\blacksquare\square\square;\blacksquare\square\blacksquare;\blacksquare\blacksquare\blacksquare \\\;.\end{aligned}\ ] ] identifying a black cell with a car moving to the right and a white cell with an empty road segment we can rewrite the update rule as follows .a car with an empty road segment to its right advances and occupies the empty segment .a car with another car to its right will avoid a collision and stay put .this is a deterministic and simplified version of the more realistic nagel schreckenberg model .rule 184 can be coarse - grained to a 3 color ca using a supercell size and the local density projection {0.4,0.4,0.4}\blacksquare\color{black}}\;,\;x=\square\blacksquare;\blacksquare\square \\ \blacksquare\;,\;x=\blacksquare\blacksquare \\ \end{array}\right.\;.\ ] ] the update function of the resulting ca is given by = } \\ & & \left\ { \begin{array}{l } \square\;,\;y_{n-1}y_ny_{n+1}=\square\square\square;\square\square{\color[rgb]{0.4,0.4,0.4}\blacksquare\color{black}};\square\square\blacksquare;\square{\color[rgb]{0.4,0.4,0.4}\blacksquare\color{black}}\square;\square{\color[rgb]{0.4,0.4,0.4}\blacksquare\color{black}}{\color[rgb]{0.4,0.4,0.4}\blacksquare\color{black}}\\ \blacksquare\;,\;y_{n-1}y_ny_{n+1}=\square\blacksquare\blacksquare;{\color[rgb]{0.4,0.4,0.4}\blacksquare\color{black}}{\color[rgb]{0.4,0.4,0.4}\blacksquare\color{black}}\blacksquare;{\color[rgb]{0.4,0.4,0.4}\blacksquare\color{black}}\blacksquare\blacksquare;\blacksquare{\color[rgb]{0.4,0.4,0.4}\blacksquare\color{black}}\blacksquare;\blacksquare\blacksquare\blacksquare \\ { \color[rgb]{0.4,0.4,0.4}\blacksquare\color{black}}\;,\;\mbox{all other combinations } \\\end{array}\right .. \nonumber\end{aligned}\ ] ] figure [ rule184figure ] shows the result of this coarse - graining .[ rule184figure ] ( a ) shows a trajectory of rule 184 while fig .[ rule184figure ] ( b ) shows the trajectory of the coarse ca . from this figureit is clear that the white zero density regions correspond to empty road and the black high density regions correspond to traffic jams .the density 1/2 grey regions correspond to free flowing traffic with an exception near traffic jams due to a boundary effect . by using larger supercell sizes it is possible to find other coarse - grained versions of rule 184 . 
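The traffic reading of rule 184 and its density coarse-graining are easy to reproduce numerically. The sketch below advances rule 184 on a ring and projects non-overlapping pairs of cells onto their local car density (0, 1/2 or 1), the three-colour alphabet used above; the function names, ring size and initial density are ours.

```python
import numpy as np

def rule184_step(road):
    """One step of rule 184 on a ring: a car (1) advances iff the cell to its
    right is empty; otherwise it stays put."""
    left, right = np.roll(road, 1), np.roll(road, -1)
    # occupied next step if a car moves in from the left, or the resident car
    # is blocked by a car immediately ahead
    return (left & (1 - road)) | (road & right)

def density_projection(road):
    """Coarse-grain non-overlapping pairs of cells to their car density."""
    return road.reshape(-1, 2).mean(axis=1)     # values in {0.0, 0.5, 1.0}

rng = np.random.default_rng(1)
road = (rng.random(40) < 0.5).astype(int)
for _ in range(20):
    road = rule184_step(road)
print(density_projection(road))    # 0.0 = empty road, 0.5 = free flow, 1.0 = jam
```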
as in the above example , the coarse - grained states group together local configurations of equal car densities .the projection operators however are not functions of the local density alone .they are a partition of such a function and there could be several coarse - grained states which correspond to the same local car density .we found ( empirically ) that for even supercell sizes the coarse - grained ca contain states and for odd supercell sizes they contain states .figure [ rule184figure ] ( c ) shows the amount of information lost in those transitions as a function of .most of the lost information corresponds to relevant dof but some of it is irrelevant .rule 110 is one of the most interesting rules in the elementary ca family .it belongs to class 4 and exhibits a complex behavior where several types of particles `` move and interact above a regular background .the behavior of these particles '' is rich enough to support universal computation . in this sense rule 110is maximally complex because it is capable of emulating all computations done by other computing devices in general and ca in particular . as a consequenceit is also undecidable .we found several ways to coarse - grain rule 110 . using , it is possible to project the 64 possible supercell states onto an alphabet of 63 symbols .figure [ rule110figure ] ( a ) and ( b ) shows a trajectory of rule 110 and the corresponding trajectory of the coarse - grained 63 states ca . a more impressive reduction in the alphabet size is obtained by going to larger values of . for we found an alphabet reduction of , , , , and respectively . only irrelevant dof are eliminated in those transitions .[ rule110figure ] ( c ) shows the percentage of reduced states as a function of the supercell size .we expect this behavior to persist for larger values of .another important coarse - graining of rule 110 that we found is the transition to rule 0 .rule 0 has the trivial dynamics where all initial states evolve to the null configuration in a single time step .the transition to rule 0 is possible because many cell sequences can not appear in the long time trajectories of rule 110 .for example the sequence is a so called garden of eden `` of rule 110 .it can not be generated by rule 110 and can only appear in the initial state .coarse - graining by rule 0 is achieved in this case using and projecting to and all other five cell combinations to .another example is the sequence .this sequence is a garden of eden '' of the supercell version of rule 110 .it can appear only in the first 12 time steps of rule 110 but no later .coarse - graining by rule 0 is achieved in this case using and projecting to and all other 13 cell combinations to .these examples are important because they show that even though rule 110 is undecidable it has decidable and predictable coarse - grained aspects ( however trivial ) . to our knowledge rule 110 is the only proven undecidable elementary ca and therefore this is the only ( proven ) example of undecidable to decidable transition that we found within the elementary ca family .it is interesting to note that the number of garden of eden `` states in supercell versions of rule 110 grows very rapidly with the supercell size . as we show in fig.[rule110figure ] ( d ) , the fraction of garden of eden '' states out of the possible sequences , grows almost linearly with . 
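Which blocks are excluded from the long-time behaviour of rule 110 can be found by brute force. The sketch below flags a length-n block as an orphan (a "garden of Eden" block at the one-step level) if no length-(n+2) block maps onto it in a single update; this is a weaker notion than the N-step supercell gardens of Eden counted in the figure, but it is computable in a few lines and already shows the excluded fraction growing with block length. Function names are ours.

```python
from itertools import product

def one_step_images(rule, n):
    """All length-n blocks that occur as the one-step image of some
    length-(n + 2) block under an elementary CA rule."""
    images = set()
    for block in product((0, 1), repeat=n + 2):
        img = tuple((rule >> (block[i - 1] * 4 + block[i] * 2 + block[i + 1])) & 1
                    for i in range(1, n + 1))
        images.add(img)
    return images

def orphan_fraction(rule, n):
    """Fraction of length-n blocks with no one-step predecessor."""
    return 1.0 - len(one_step_images(rule, n)) / 2 ** n

for n in range(4, 13):
    print(n, round(orphan_fraction(110, n), 4))   # grows with the block length
```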
in addition , at every scale there are new garden of eden `` sequences which do not contain any smaller gardens of eden '' as subsequences .these results are consistent with our understanding that even though the dynamics looks complex , more and more structure emerges as one goes to larger scales .we will have more to say about this in section [ kolmogorov_complexity ] .the garden of eden `` states of supercell versions of rule 110 represent pieces of information that can be used in reducing the computational effort in rule 110 .the reduction can be achieved by truncating the supercell update rule to be a function of only non garden of eden '' states .the size of the resulting rule table will be much smaller ( with ) than the size of the supercell rule table .efficient computations of rule 110 can then be carried out by running rule 110 for the first time steps .after time steps the system contains no garden of eden `` sequences and we can continue to propagate it by using the truncated supercell rule table without loosing any information .note that we have not reduced rule 110 to a decidable system . at every scale we achieved a constant reduction in the computational effort .wolfram has pointed out that many irreducible systems have pockets of reducibility and termed such a reduction as superficial reducibility '' ( see page 746 in ref . ) .it will be interesting to check how much superficial reducibility `` is contained in rule 110 at larger scales. it will be inappropriate to call it superficial '' if the curve in fig.[rule110figure ] ( d ) approaches 100% in the large limit .it might be argued that the coarse - graining of rule 110 by rule 0 is a trivial example of an undecidable to a decidable coarse - graining transition .the fact that certain configurations can not be arrived at in the long time behavior is not very surprising and is expected of any irreversible system . 
in order to search for more interesting examples we studied other one dimensional universal ca that we found in the literature .lindgren and nordahl constructed a 7 state nearest neighbor and a 4 state next - nearest neighbor ca that are capable of emulating a universal turing machine .the entries in the update tables of these ca are only partly determined by the emulated turing machine and can be completed at will .we found that for certain completion choices these two universal ca can be coarse - grained to a trivial ca which like rule 0 decay to a quiescent configuration in a single time step .another universal ca that can undergo such a transition is wolfram s 19 state , next - nearest neighbor universal ca .these results are essentially equivalent to the rule 110 rule 0 transition .a more interesting example is albert and culik s universal ca .it is a 14 state nearest - neighbor ca which is capable of emulating all other ca .the transition table of this ca is only partly determined by its construction and can be completed at will .we found that when the empty entries in the transition function are filled by the copy operation =x_n\ ; , \label{copy_operation}\ ] ] the resulting undecidable ca has many coarse - graining transitions to decidable ca .in all these transitions the coarse - grained ca performs the copy operation eq .( [ copy_operation ] ) for all .different transitions differ in the projection operator and the alphabet size of the coarse - grained ca .figure [ albertculik_figure ] shows a coarse - graining of albert and culik s universal ca to a 4 state copy ca .the coarse - grained ca captures three types of persistent structures that appear in the original system but is ignorant of more complicated details .the supercell size used here is .in the previous section we showed that a large majority of elementary ca can be coarse - grained in space and time .this is rather surprising since finding a valid projection operator is equivalent to solving eq.([cg_matrix_form ] ) which is greatly over constrained .solutions for this equation should be rare for random choices of the matrix . in this sectionwe show that solutions of eq.([cg_matrix_form ] ) are frequent because is not random but a highly structured object . as the supercell size increased , becomes less random and the probability of finding a valid projection approaches unity . to appreciate the high success rate in coarse - graining elementary ca consider the following statistics . by using supercells of size and considering all possible projection operators we were able to coarse - grain approximately one third of all 256 elementary ca rules .recall that the coarse - graining procedure that we use involves two stages . in the first stagewe generate the supercell version , a 4 color ca in the case . in the second stagewe look for valid projection operators .4 color ca that are supercell versions of elementary ca are a tiny fraction of all possible ( ) 4 color ca .if we pick a random 4 color ca and try to project it ; i.e. attempt to solve eq .( [ cg_matrix_form ] ) with replaced by an arbitrary 4 color ca , we find an average of one solvable instance out of every attempts .this large difference in the projection probability indicates that 4 color ca which are supercells versions of elementary rules are not random .the numbers become more convincing when we go to larger values of and attempt to find projections to random color ca . 
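both stages of the coarse - graining procedure start from the supercell version of a rule , which is straightforward to build explicitly . the sketch below ( python , our own naming ; it assumes , as in the examples above , that a supercell of n cells is advanced n time steps , so that three neighbouring supercells exactly determine the next state of the middle one ) constructs this table for an arbitrary elementary rule . searching for a valid projection operator then amounts to looking for a partition of the supercell alphabet that commutes with this table .

```python
from itertools import product

def elementary_rule(rule_number):
    def f(l, c, r):
        return (rule_number >> (4 * l + 2 * c + r)) & 1
    return f

def supercell_rule(rule_number, n):
    """stage one of the coarse-graining: the rule table of the supercell ca,
    obtained by grouping n cells into one supercell and running the elementary
    rule n time steps, so that three neighbouring supercells (3n cells)
    exactly determine the next state of the middle supercell."""
    f = elementary_rule(rule_number)
    table = {}
    for cells in product((0, 1), repeat=3 * n):
        row = list(cells)
        for _ in range(n):
            # each step shrinks the fully determined region by one cell per side
            row = [f(row[i - 1], row[i], row[i + 1]) for i in range(1, len(row) - 1)]
        table[(cells[:n], cells[n:2 * n], cells[2 * n:])] = tuple(row)
    return table

if __name__ == "__main__":
    t = supercell_rule(184, 2)   # the n = 2 supercell version of the traffic rule 184
    print(len(t))                # 64 neighbourhood entries over a 4-letter supercell alphabet
```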
to put our arguments on a more quantitative levelwe need to quantify the information content of supercell versions of ca .an accepted measure in algorithmic information theory for the randomness and information content of an individual object is its kolmogorov complexity ( algorithmic complexity ) .the kolmogorov complexity of a string of characters with respect to a universal computer is defined as where is the length of in bits and is the bit length of the minimal computer program that generates and halts on ( irrespective of the running time ) .this definition is sensitive to the choice of machine only up to an additive constant in which do not depend on . for long strings this dependency is negligible andthe subscript can be dropped .according to this definition , strings which are very structured require short generating programs and will therefore have small kolmogorov complexity .for example , a periodic with period can be generated by a long program and .in contrast , if has no structure it must be generated literally , i.e. the shortest program is print(x ) " . in such cases , and the information content of is maximal . by using simple counting arguments it is easy to show that simple objects are rare and that for most objects .kolmogorov complexity is a powerful and elegant concept which comes with an annoying limitation .it is uncomputable , i.e. it is impossible to find the length of the minimal program that generates a string .it is only possible to bound it .it is easy to see that supercell ca are highly structured objects by looking at their kolmogorov complexity .consider the ca and its supercell version ( for simplicity of notation we omit the subscript from the alphabet size ) .the transition function is a table that specifies a cell s new state for all possible local configurations ( assuming a is nearest neighbor and one dimensional ) . can therefore be described by a string of symbols from the alphabet .the bit length of such a description is if was a typical ca with colors we could expect that , the length of the minimal program that generates , will not differ significantly from .however , since is a super cell version of we have a much shorter description , i.e. to construct from .this construction involves running , time steps for all possible initial configurations of cells .it can be conveniently coded in a program as repeated applications of the transition function within several loops . up to an additive constant, the length of such a program will be equal to the bit length description of : note that we have used to indicate that this is an upper bound for the length of the minimal program that generates .this upper bound , however , should be tight for an update rule with little structure .the kolmogorov complexity of can consequently be bounded by this complexity approaches zero at large values of .our argument above shows that the large scale behavior of ca ( or any local process ) must be simple in some sense .we would like to continue this line of reasoning and conjecture that the small kolmogorov complexity of the large scale behavior is related to our ability to coarse - grain many ca . 
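kolmogorov complexity itself is uncomputable , but a crude and commonly used stand - in is the length of an object after standard compression . the sketch below ( python ; zlib is used purely as a convenient proxy , and the exact numbers depend on n and on the compressor ) compares the compressibility of a supercell rule table with that of a random table of the same size ; the structured table should typically compress noticeably further , in line with the bound above .

```python
import random
import zlib
from itertools import product

def elementary_rule(rule_number):
    def f(l, c, r):
        return (rule_number >> (4 * l + 2 * c + r)) & 1
    return f

def supercell_table_bytes(rule_number, n):
    """the outputs of the supercell rule table, flattened (one byte per cell,
    neighbourhoods in lexicographic order)."""
    f = elementary_rule(rule_number)
    out = bytearray()
    for cells in product((0, 1), repeat=3 * n):
        row = list(cells)
        for _ in range(n):
            row = [f(row[i - 1], row[i], row[i + 1]) for i in range(1, len(row) - 1)]
        out.extend(row)
    return bytes(out)

def compression_ratio(data):
    return len(zlib.compress(data, 9)) / len(data)

if __name__ == "__main__":
    n = 4
    structured = supercell_table_bytes(110, n)
    random_bits = bytes(random.getrandbits(1) for _ in range(len(structured)))
    # both strings use only the byte values 0 and 1, so both compress somewhat;
    # the structured table should compress further than the random one
    print("supercell table :", compression_ratio(structured))
    print("random table    :", compression_ratio(random_bits))
```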
at presentwe are unable to prove this conjecture analytically , and must therefore resort to numerical evidence which we present below .ideally , in order to show that such a connection exists one would attempt to coarse - grain ca with different alphabets and on different length scales ( supercell sizes ) , and verify that the success rate correlates with the kolmogorov complexity of the generated supercell ca .this , however , is computationally very challenging and going beyond ca with a binary alphabet and supercell sizes of more than is not realistic .a more modest experiment is the following .we start with a ca with an alphabet , and check whether its supercell version contains all possible states .namely , if there exist such that such a missing state of is sometimes referred to as a garden of eden `` configuration because it can only appear in the initial state of .note that by the construction of , a garden of eden '' state of can appear only in the first time steps of and is therefore a generalized garden of eden `` of . in cases where a state of is missing, can be trivially coarse - grained to the elementary ca rule 0 by projecting the missing state of to 1 '' and all other combinations to 0 `` .this type of trivial projection was discussed earlier in connection with the coarse - graining of rule 110 .finding a garden of eden '' state of is computationally relatively easy because there is no need to calculate the supercell transition function .it is enough to back - trace the evolution of and check if all cell combinations has a cell ancestor combination , time steps in the past .figure [ missing_colors ] ( a ) shows the statistics obtained from such an experiment. it exhibits the fraction of ca rules with different alphabet sizes , whose supercell version is missing at least one state .each data point in this figure was obtained by testing 10,000 ca rules . the fraction approaches unity at large values of , an expected behavior since most of the ca are irreversible .figure [ missing_colors ] ( b ) shows the same data as in ( a ) when plotted against the variable where is the alphabet size , is the upper bound for the kolmogorov complexity of the supercell ca from eq.([upper_bound_kc ] ) and is a constant .the excellent data collapse imply a strong correlation between the probability of finding a missing state and the kolmogorov complexity of a supercell ca .this figure also shows that the data points can be accurately fitted by with a constant and ( solid line in fig .[ missing_colors ] ( b ) ) . having the scaling form we can now study the behavior of with large alphabet sizes .assuming and to be continuous we define as the point where .for a fixed value of , the slope of at the transition region can be calculated by where putting together eqs .( [ rge_slope ] ) and ( [ nh ] ) we find that the slope of at the transition region grows as for large values of .an indication of this phenomena can be seen in fig .[ missing_colors ] ( a ) which shows sharper transitions at large values of . 
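the missing - state experiment is also simple to reproduce on small examples . the following python sketch ( our own implementation ; it uses brute - force enumeration rather than the back - tracing used in the text , so it is only feasible for small alphabets and supercell sizes ) estimates the fraction of random nearest - neighbour ca whose supercell version is missing at least one state .

```python
import random
from itertools import product

def random_ca(k, seed=None):
    """a random nearest-neighbour ca rule on the alphabet {0, ..., k - 1}."""
    rng = random.Random(seed)
    return {nbhd: rng.randrange(k) for nbhd in product(range(k), repeat=3)}

def has_missing_supercell_state(rule, k, n):
    """true if some length-n block has no ancestor n time steps in the past,
    i.e. the supercell version of the ca is missing at least one state."""
    seen = set()
    for cells in product(range(k), repeat=3 * n):
        row = list(cells)
        for _ in range(n):
            row = [rule[(row[i - 1], row[i], row[i + 1])] for i in range(1, len(row) - 1)]
        seen.add(tuple(row))
        if len(seen) == k ** n:   # every supercell state already has an ancestor
            return False
    return True

if __name__ == "__main__":
    k, n, trials = 3, 2, 200
    hits = sum(has_missing_supercell_state(random_ca(k, seed=i), k, n) for i in range(trials))
    print(hits / trials)   # fraction of rules with a missing state
```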
in the limit of large , becomes a step function with respect to .this fact introduces a critical value such that for the probability of finding a missing state is zero and for the probability is one .the value of this critical grows with the alphabet size as .note that is an emergent length scale , as it is not present in any of the ca rules , but according to the above analysis will emerge ( with probability one ) in their dynamics .a direct consequence of the emergence of is that a measure 1 of all ca can be coarse - grained to the elementary rule 0 " on the coarse - grained scale .generalized garden of eden " states are a specific form of emergent pattern that can be encountered in the large scale dynamics of ca .is the kolmogorov complexity of ca rules related to other types of coarse - grained behavior ? to explore this question we attempted to project ( solve eq .( [ cg_matrix_form ] ) ) random ca with bounded kolmogorov complexities . to generate a random ca with a bounded kolmogorov complexitywe view the update rule as a string of bits , denote the bit by and apply the following procedure : 1 ) randomly pick the first bits of .2 ) randomly pick a generating function .3 ) set the values of all the empty bits of by applying : \;,\ ] ] starting at and finishing at . up to an additive constant, the length of such a procedure is equal to , the number of random bits chosen .the kolmogorov complexity of the resulting rule table can therefore be bounded by for small values of this is a reasonable upper bound .however for large values of this upper bound is obviously not tight since the size of can be much larger than the length of . using the above procedure we studied the probability of projecting ca with different alphabets and different upper bound kolmogorov complexities .for given values of and we generated 10,000 ( 200 for the case ) ca and tried to find a valid projection on the alphabet .figure [ projection_probability ] ( a ) shows the fraction of solvable instances as a function of . the constant used for this data collapse is , very close to . as valid projection solutions we considered all possible projections . in doingso we may be redoing the missing states experiment because many low kolmogorov complexity rules has missing states and can thus be trivially projected . in order to exclude this option we repeated the same experiment while restricting the family of allowed projections to be equal partitions of , i.e. results are shown if fig .[ projection_probability ] ( b ) .it seems that in both cases there is a good correlation between the kolmogorov complexity ( or its upper bound ) of a ca rule and the probability of finding a valid projection . in particular, the fraction of solvable instances goes to one at the low limit . as shown by the solid lines in fig.[projection_probability ] , this fractioncan again be fitted by where is a constant and in this case .how many of the ca rules that we generate and project show a complex behavior ? does the fraction of projectable rules simply reflect the fraction of simple behaving rules ? 
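before turning to those questions , the generation procedure just described can be sketched as follows ( a minimal python reading of steps 1 - 3 ; the exact form of the generating function used by the authors is not specified here , so the choices below are assumptions ) .

```python
import random
from itertools import product

def bounded_complexity_bits(length, n, seed=None):
    """a bit string whose kolmogorov complexity is roughly bounded by the number
    of random choices made: n seed bits plus the 2 ** n entries of the generating
    function g (one plausible reading of steps 1 - 3 in the text)."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n)]        # step 1: random prefix
    g = [rng.randint(0, 1) for _ in range(2 ** n)]      # step 2: random generating function
    while len(bits) < length:                           # step 3: deterministic fill
        window = 0
        for b in bits[-n:]:
            window = (window << 1) | b
        bits.append(g[window])
    return bits[:length]

def bits_to_rule(bits, k):
    """read the bit string as the rule table of a k-colour nearest-neighbour ca
    (outputs listed in lexicographic neighbourhood order)."""
    width = max(1, (k - 1).bit_length())
    rule, pos = {}, 0
    for nbhd in product(range(k), repeat=3):
        value = 0
        for _ in range(width):
            value = (value << 1) | bits[pos]
            pos += 1
        rule[nbhd] = value % k     # fold out-of-range codes back into the alphabet
    return rule

if __name__ == "__main__":
    k = 3
    width = max(1, (k - 1).bit_length())
    low_k_rule = bits_to_rule(bounded_complexity_bits(k ** 3 * width, n=6, seed=0), k)
    print(len(low_k_rule), low_k_rule[(0, 0, 0)])
```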
to answer this question we studied the rules generated by our procedure .for each value of and we generated 100 rules and counted the number of rules exhibiting complex behavior .a rule was labelled complex " if it showed class 3 or 4 behavior and exhibited a complex sensitivity to perturbations in the initial conditions .[ projection_probability ] ( c ) shows the statistics we obtained with different alphabet sizes as a function of while the inset shows it as a function of .we first note that our statistics support dubacq et al . , who proposed that rule tables with low kolmogorov complexities lead to simple behavior and rule tables with large kolmogorov complexity lead to complex behavior .moreover , our results show that the fraction of complex rules does not depend on the alphabet size and is only a function of .rules with larger alphabets show complex behavior at a lower value of . as a consequence, a large fraction of projectable rules are complex and this fraction grows with the alphabet size . as we explained earlier , the kolmogorov complexity of supercell versions of ca approaches zero as the supercell size increased .our experiments therefore indicate that a measure one of all ca are coarse - grained - able if we use a coarse enough scale .moreover , the data collapse that we obtain and the sharp transition of the scaling function suggest that it may be possible to know in advance at what length scales to look for valid projections. this can be very useful when attempting to coarse - grain ca or other dynamical systems because it can narrow down the search domain .as in the case of garden of eden " states that we studied earlier , we interpret the transition point as an emergent scale which above it we are likely to find self organized patterns .note however that this scale is a little shifted in fig .[ projection_probability ] ( b ) when compared with fig .[ projection_probability ] ( a ) .the emergence scale is thus sensitive to the types of large scale patterns we are looking for .in this work we studied emergent phenomena in complex systems and the associated predictability problems by attempting to coarse - grain ca .we found that many elementary ca can be coarse - grained in space and time and that in some cases complex , undecidable ca can be coarse - grained to decidable and predictable ca .we conclude from this fact that undecidability and computational irreducibility are not good measures for physical complexity . physical complexity , as opposed to computational complexity should address the interesting , physically relevant , coarse - grained degrees of freedom .these coarse - grained degrees of freedom maybe simple and predictable even when the microscopic behavior is very complex .the above definition of physical complexity brings about the question of the objectivity of macroscopic descriptions .is our choice of a coarse - grained description ( and its consequent complexity ) subjective or is it dictated by the system ?our results are in accordance with shalizi and moore : it is both . in many cases we discovered that a particular ca can undergo different coarse - graining transitions using different projection operators . 
in these casesthe system dictates a set of valid projection operators and we are restricted to choose our coarse - grained description from this set .we do however have some freedom to manifest our subjective interest .the coarse - graining transitions that we found induce a hierarchy on the family of elementary ca ( see fig .[ mapfigure ] ) .moreover , it seems that rule complexity never increases with coarse - graining transitions .the coarse - graining hierarchy therefore provides a partial complexity order of ca where complex rules are found at the top of the hierarchy and simple rules are at the bottom .the order is partial because we can not relate rules which are not connected by coarse - graining transitions .this coarse - graining hierarchy can be used as a new classification scheme of ca . unlike wolfram s , classificationthis scheme is not a topological one since the basis of our suggested classification is not the ca trajectories . nor is this scheme parametric , such as langton s parameter scheme .our scheme reflects similarities in the algebraic properties of ca rules .it simply says that if some coarse - grained aspects of rule can be captured by the detailed dynamics of rule then rule is at least as complex as rule .rule maybe more complex because in some cases it can do more than its projection .note that our hierarchy may subdivide wolfram s classes .for example rule 128 is higher on the hierarchy than rule 0 .these two rules belong to class 1 but rule 128 can be coarse - grained to rule 0 and it is clear that an opposite transition can not exist. it will be interesting to find out if class 3 and 4 can also be subdivided . in the last part of this work we tried to understand why is it possible to find so many coarse - graining transitions between ca .at first blush , it seems that coarse - graining transitions should be rare because finding valid projection operators is an over constrained problem .this was our initial intuition when we first attempted to coarse - grain ca .to our surprise we found that many ca can undergo coarse - graining transitions .a more careful investigation of the above question suggests that finding valid projection operators is possible because of the structure of the rules which govern the large scale dynamics .these large scale rules are update functions for supercells , whose tables can be computed directly from the single cell update function .they thus contain the same amount of information as the single cell rule .their size however grows with the supercell size and therefore they have vanishing kolmogorov complexities .in other words , the large scale update functions are highly structured objects .they contain many regularities which can be used for finding valid projection operators .we did not give a formal proof for this statement but provided a strong experimental evidence .in our experiments we discovered that the probability to find a valid projection is a universal function of the kolmogorov complexity of the supercell update rule .this universal probability function varies from zero at large kolmogorov complexity ( small supercells ) to one at small kolmogorov complexity ( large supercells ) .it is therefore very likely that we find many coarse - graining transitions when we go to large enough scales .our interpretation of the above results is that of emergence .when we go to large enough scales we are likely to find dynamically identifiable large scale patterns .these patterns are emergent ( or self organized ) because they do 
not explicitly exist in the original single cell rules .the large scale patterns are forced upon the system by the lack of information .namely , the system ( the update rule , not the cell lattice ) does not contain enough information to be complex at large scales .finding a projection operator is one specific type of an over constrained problem .motivated by our results we looked into other types of over constrained problems .the satisfyability problem ( k - sat ) is a generalized ( np complete ) form of constraint satisfaction system . we generated random 3-sat instances with different number of variables deep in the un - sat region of parameter space .the generated instances however were not completely random and were generated by generating functions .the generating functions controlled the instance s kolmogorov complexity , in the same way that we used in section [ proj_prob_with_bounded_k ] .we found that the probability for these instances to be satisfiable obeys the same universal probability function of eq .( [ rproj_fit ] ). it will be interesting to understand the origin of this universality and its implications . in this work ,we have restricted ourselves to deal with ca because it is relatively easy to look for valid projection operators for them . a greater ( and more practical ) challenge will now be to try and coarse - grain more sophisticated dynamical systems such as probabilistic ca , coupled maps and partial differential equations .these types of systems are among the main work horses of scientific modelling , and being able to coarse - grain them will be very useful , and is a topic of current research , e.g. in material science. it will be interesting to see if one can derive an emergence length scale for those systems like the one we found for garden of eden " sequences in ca ( section [ gardensofeden ] ) .such an emergence length scale can assist in finding valid projection operators by narrowing the search to a particular scale .ng wishes to thank stephen wolfram for numerous useful discussions and his encouragement of this research project .ni wishes to thank david mukamel for his help and advice .this work was partially supported by the national science foundation through grant nsf - dmr-99 - 70690 ( ng ) and by the national aeronautics and space administration through grant nag8 - 1657 .
we study the predictability of emergent phenomena in complex systems . using nearest neighbor , one - dimensional cellular automata ( ca ) as an example , we show how to construct local coarse - grained descriptions of ca in all classes of wolfram s classification . the resulting coarse - grained ca that we construct are capable of emulating the large - scale behavior of the original systems without accounting for small - scale details . several ca that can be coarse - grained by this construction are known to be universal turing machines ; they can emulate any ca or other computing devices and are therefore undecidable . we thus show that because in practice one only seeks coarse - grained information , complex physical systems can be predictable and even decidable at some level of description . the renormalization group flows that we construct induce a hierarchy of ca rules . this hierarchy agrees well with apparent rule complexity and is therefore a good candidate for a complexity measure and a classification method . finally we argue that the large scale dynamics of ca can be very simple , at least when measured by the kolmogorov complexity of the large scale update rule , and moreover exhibits a novel scaling law . we show that because of this large - scale simplicity , the probability of finding a coarse - grained description of ca approaches unity as one goes to increasingly coarser scales . we interpret this large scale simplicity as a pattern formation mechanism in which large scale patterns are forced upon the system by the simplicity of the rules that govern the large scale dynamics .
because many natural systems are organized as networks , in which the nodes ( be they cells , individuals , populations or web servers ) interact in a time - dependent fashion the study of networks has been an important focus in recent research .one of the particular points of interest has been the question of how the hardwired _ structure _ of a network ( its underlying graph ) affects its _ function _ , for example in the context of optimal information storage or transmission between nodes along time .it has been hypothesized that there are two key conditions for optimal function in such networks : a well - balanced adjacency matrix ( the underlying graph should appropriately combine robust features and random edges ) and well - balanced connection strengths , driving optimal dynamics in the system .however , only recently has mathematics started to study rigorously ( through a combined graph theoretical and dynamic approach ) the effects of configuration patterns on the efficiency of network function , by applying graph theoretical measures of segregation ( clustering coefficient , motifs , modularity , rich clubs ) , integration ( path length , efficiency ) and influence ( node degree , centrality ) .various studies have been investigating the sensitivity of a system s temporal behavior to removing / adding nodes or edges at different places in the network structure , and have tried to relate these patterns to applications to natural networks .brain functioning is one of the most intensely studied contexts which requires our understanding of the tight inter - connections between system architecture and dynamics .the brain is organized as a `` dynamic network , '' self - interacting in a time - dependent fashion at multiple spacial and temporal scales , to deliver an optimal range for biological functioning .the way in which these modules are wired together in large networks that control complex cognition and behavior is one of the great scientific challenges of the 21st century , currently being addressed by large - scale research collaborations , such as the human connectome project .graph theoretical studies of empirical empirical data support certain generic topological properties of brain architecture , such as modularity , small - worldness , the existence of hubs and `` rich clubs '' . in order to explain how connectivity patterns may affect the system s dynamics ( e.g. , in the context of stability and synchronization in networks of coupled neural populations ) , and thus the observed behavior ,a lot of effort has been thus invested towards formal modeling approaches , using a combination of analytical and numerical methods from nonlinear dynamics and graph theory , in both biophysical models and simplified systems .these analyses revealed a rich range of potential dynamic regimes and transitions , shown to depend as much on the coupling parameters of the network as on the arrangement of the excitatory and inhibitory connections .the construction of a realistic , data - compatible computational model has been subsequently found to present many difficulties that transcend the existing methods from nonlinear dynamics , and may in fact require : ( 1 ) new analysis and book - keeping methods and ( 2 ) a new framework that would naturally encompass the rich phenomena intrinsic to these systems both of which aspects are central to our proposed work . 
in a paper with dr .verduzco - flores , one of the authors of this paper first explored the idea of having network connectivity as a bifurcation parameter for the ensemble dynamics in a continuous time system of coupled differential equations .we used configuration dependent phase spaces and our probabilistic extension of bifurcation diagrams in the parameter space to investigate the relationship between classes of system architectures and classes of their possible dynamics , and we observed the robustness of the coupled dynamics to certain changes in the network architecture and its vulnerability to others . as expected , when translating connectivity patterns to network dynamics , the main difficulties were raised by the combination of graph complexity and the system s intractable dynamic richness .in order to break down and better understand this dependence , we started to investigate it in simpler theoretical models , where one may more easily identify and pair specific structural patterns to their effects on dynamics .the logistic family is historically perhaps the most - studied family of maps in nonlinear dynamics , whose behavior is by now relatively well understood .therefore , we started by looking in particular at how dynamic behavior depends on connectivity in networks with simple logistic nodes .this paper focuses on definitions , concepts and observations in low - dimensional networks .future work will address large networks , and different classes of maps .dynamic networks with discrete nodes and the dependence of their behavior on connectivity parameters have been previously described in several contexts over the past two decades .for example , in an early paper , wang considered a simple neural network of only two excitatory / inhibitory neurons , and analyzed it as a parameterized family of two - dimensional maps , proving existence of period - doubling to chaos and strange attractors in the network .masolle , attay et al . have found that , in networks of delay - coupled logistic maps , synchronization regimes and formation of anti - phase clusters depend on coupling strength and on the edge topology ( characterized by the spectrum of the graph laplacian ) .yu has constructed and studied a network wherein the undirected edges symbolize the nodes relation of adjacency in an integer sequence obtained from the logistic mapping and the top integral function . in our present work ,we focus on investigating , in the context of networked maps , extensions of the julia and mandelbrot sets traditionally defined for single map iterations . 
for three different model networks ,we use a combination of analytical and numerical tools to illustrate how the system behavior ( measured via topological properties of the _ julia sets _ ) changes when perturbing the underlying adjacency graph .we differentiate between the effects on dynamics of different perturbations that directly modulate network connectivity : increasing / decreasing edge weights , and altering edge configuration by adding , deleting or moving edges .we discuss the implications of considering a rigorous extension of fatou - julia theory known to apply for iterations of single maps , to iterations of ensembles of maps coupled as nodes in a network .the logistic map is historically perhaps the best - known family of maps in nonlinear dynamics .iterations of one single quadratic function have been studied starting in the early 19th century , with the work of fatou and julia .the prisoner set of a map is defined as the set of all points in the complex dynamic plane , whose orbits are bounded .the escape set of a complex map is the set of all points whose orbits are unbounded .the julia set of is defined as their common boundary .the filled julia set is the union of prisoner points with their boundary . for polynomial maps, it has been shown that the connectivity of a map s julia set is tightly related to the structure of its critical orbits ( i.e. , the orbits of the map s critical points ) .due to extensive work spanning almost one century , from julia and fatou until recent developments , we now have the following : + * fatou - julia theorem . * _ for a polynomial with at least one critical orbit unbounded , the julia set is totally disconnected if and only if all the bounded critical orbits are aperiodic . _+ for a single iterated logistic map , the fatou - julia theorem implies that the julia set is either totally connected , for values of in the mandelbrot set ( i.e. , if the orbit of the critical point 0 is bounded ) , or totally disconnected , for values of outside of the mandelbrot set ( i.e. , if the orbit of the critical point 0 is unbounded ) . in previous work, the authors showed that this dichotomy breaks in the case of random iterations of two maps . in our current work ,we focus on extensions for networked logistic maps .although julia and mandelbrot sets have been studied somewhat in connection with coupled systems , none of the existing work seems to address the basic problems of how these sets can be defined for networks of maps , how different aspects of the network hardwiring affect the topology of these sets and whether there is any fatou - julia type result in this context .these are some of the questions addressed in this paper , which is organized as follows : in section [ logistic_maps ] , we introduce definitions of our network setup , as well as of the extensions of mandelbrot and julia sets that we will be studying . 
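for a single quadratic map , membership in the filled julia set ( and hence in the mandelbrot set , via the critical orbit ) is usually decided numerically with the standard escape - time test ; a minimal python sketch , with an iteration cap and escape radius chosen by us , is given below .

```python
def in_filled_julia(z0, c, max_iter=200, escape_radius=2.0):
    """escape-time test for the filled julia set of f_c(z) = z ** 2 + c;
    once |z| exceeds max(2, |c|) the orbit is guaranteed to escape, so a
    finite radius check is a safe (if truncated) stand-in for boundedness."""
    z = z0
    r = max(escape_radius, abs(c))
    for _ in range(max_iter):
        if abs(z) > r:
            return False
        z = z * z + c
    return True

def in_mandelbrot(c, max_iter=200):
    """c is in the mandelbrot set iff the critical orbit (started at 0) stays bounded."""
    return in_filled_julia(0j, c, max_iter)

if __name__ == "__main__":
    print(in_mandelbrot(-1.0))         # the period-2 parameter: bounded
    print(in_mandelbrot(0.4 + 0.4j))   # escapes
```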
in order to illustrate some basic ideas and concepts , we concentrate on three examples of 3-dimensional networks , which differ from each other in edge distribution , and whose connectivity strengths are allow to vary .in section [ complex_maps ] , we focus on the behavior of these 3-dimensional models when we consider the nodes as complex iterated variables .we analyze the similarities and differences between node - wise behavior in each case , and we investigate the topological perturbations in one - dimensional complex slices of the mandelbrot and julia sets , as the connectivity changes from one model to the next , through intermediate stages . in section [ real_maps ] , we address the same questions for real logistic nodes , with the advantage of being able to visualize the entire network mandelbrot and julia sets , as 3-dimensional real objects . in both sections ,we conjecture weaker versions of the fatou - julia theorem , connecting points in the mandelbrot set with connectivity properties of the corresponding julia sets .finally , in section [ discussion ] , we interpret our results both mathematically and in the larger context of network sciences .we also briefly preview future work on high - dimensional networks and on networks with adaptable nodes and edges .we consider a set of nodes coupled according to the edges of an oriented graph , with adjacency matrix ( on which one may impose additional structural conditions , related to edge density or distribution ) . in isolation , each node , , functions as a discrete nonlinear map , changing at each iteration as . when coupled as a network with adjacency , each node will also receive contributions through the incoming edges from the adjacent nodes . throughout this paper , we will consider an additive rule of combining these contributions , for a couple of reasons : first , summing weighted incoming inputs is one simple , yet mathematically nontrivial way to introduce the cross talk between nodes ; second , summing weighted inputs inside a nonlinear integrating function is reminiscent of certain mechanisms studied in the natural sciences ( such as the integrate and fire neural mechanism studied in our previous work in the context of coupled dynamics ) . the coupled system will then have the following general form : where are the weights along the adjacency edges .one may view this system simply as an iteration of an -dimensional map , with ( in the case of real - valued nodes ) , or respectively ( in the case of complex - valued nodes ) .the new and exciting aspect that we are proposing in our work is to study the dependence of the coupled dynamics on the parameters , in particular on the coupling scheme ( adjacency matrix ) viewed itself as a system parameter . to fix these ideas , we focused first on defining these questions and proposing hypotheses for the case of quadratic node - dynamics .the logistic family is one of the most studied family of maps in the context of both real and complex dynamics of a single variable .it was also the subject of our previous modeling work on random iterations . 
in this paper in particular, we will work with quadratic node - maps , with their traditional parametrization , with and for the complex case and and for the real case .the network variable will be called respectively in the case of complex nodes , and in the case of real nodes .we consider both the particular case of identical quadratic maps ( equal values ) , and the general case of different maps attached to the nodes throughout the network . in both cases, we aim to study the asymptotic behavior of iterated node - wise orbits , as well as of the -dimensional orbits ( which we will call multi - orbits ) . as in the classical theory of fatou and julia, we will investigate when orbits escape to infinity or remain bounded , and how much of this information is encoded in the critical multi - orbit of the system .for the following definitions , fix the network ( i.e. , fix the adjacency and the edge weights ) . to avoid redundancy , we give definitions for the complex case , but they can be formulated similarly for real maps : for a fixed parameter , we call the * filled multi - julia set * of the network , the locus of which produce a bounded multi - orbit in . we call the * filled uni - julia set * the locus of so that produces a bounded multi - orbit .the * multi - julia set ( or the multi - j set ) * of the network is defined as the boundary in of the filled multi - julia set .similarly , one defines the * uni - julia set ( or uni - j set ) * of the network as the boundary in of its filled counterparts .we define the * multi - mandelbrot set ( or the multi - m set ) * of the network the parameter locus of for which the multi - orbit of the critical point is bounded in .we call the * equi - mandelbrot set ( or the equi - m set ) * of the network , the locus of for which the critical multi - orbit is bounded for * equi - parameter * .we call the * node equi - m set * the locus such that the component of the multi - orbit of corresponding to the node remains bounded in .we study , using a combination of analytical and numerical methods , how the structure of the julia and mandelbrot sets varies under perturbations of the node - wise dynamics ( i.e. , under changes of the quadratic multi- parameter ) and under perturbations of the coupling scheme ( i.e. , of the adjacency matrix and of the coupling weights ) . in this paper, we start with investigating these questions in small ( 3-dimensional ) networks , with specific adjacency configurations . in a subsequent paper , we will move to investigate how similar phenomena may be quantified and studied analytically and numerically in high - dimensional networks . in both cases, we are interested in particular in observing differences in the effects on dynamics of three different aspects of the network architecture : ( 1 ) increasing / decreasing edge weights , ( 2 ) increasing / decreasing edge density , ( 3 ) altering edge configuration by adding , deleting or moving edges . while a desired objective would be to obtain general results for all network sizes ( since many natural networks are large ) , we start by studying simple , low dimensional systems . in this study, we focus on simple networks formed of three nodes , connected by different network geometries and edge weights . 
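the same escape - time idea extends directly to the networked system . the sketch below ( python / numpy ; the adjacency matrix , escape bound and iteration cap are illustrative choices , and we take the critical multi - orbit to start with every node at the critical point 0 ) tests whether an equi - parameter belongs to the equi - m set of a given network .

```python
import numpy as np

def network_step(z, A, c):
    """one iteration of the coupled system: node i applies its quadratic map to
    the i-th entry of A @ z, i.e. z_i -> ([A z]_i) ** 2 + c_i."""
    return (A @ z) ** 2 + c

def equi_m_member(A, c, max_iter=100, bound=1e6):
    """test whether the equi-parameter c lies in the equi-m set of the network:
    iterate the critical multi-orbit (every node started at the critical point 0)
    and declare escape once any node exceeds the bound."""
    z = np.zeros(A.shape[0], dtype=complex)
    cvec = np.full(A.shape[0], c, dtype=complex)
    for _ in range(max_iter):
        z = network_step(z, A, cvec)
        if np.max(np.abs(z)) > bound:
            return False
    return True

if __name__ == "__main__":
    # an arbitrary illustrative 3-node adjacency (rows receive, columns send)
    A = np.array([[1.0, 0.5, 0.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 1.0, 0.3]])
    print(equi_m_member(A, -0.8), equi_m_member(A, 0.6))
```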
to fix our ideas , we will follow and illustrate three structures in particular ( also see figure [ 3d_net ] ) : ( 1 ) two input nodes and are self driven by quadratic maps , and the single output node is driven symmetrically by the two input nodes ; additionally communicates with via an edge of variable weight , which can take both positive and negative values . we will call this the * _ simple dual model_*. ( 2 ) in addition to the simple dual scheme , the output node is also self - driven , i.e. there is a self - loop on of weight ( which can be positive or negative ) . we will call this the * _ self - drive model_*. ( 3 ) in addition to the self - driven model , there is also feedback from the output node into the node , via a new edge of variable weight . we will call this the * _ feedback model_*. unless specified , edges have positive unit weight . notice that the same effect as negative feed - forward edges from and into can be obtained by changing the sign of , etc . the three connectivity models we chose to study and compare are described by the equations below , for the * _ simple dual model _ * , the * _ self - drive model _ * and the * _ feedback model _ * respectively .
[ figure 3d_net : the three connectivity models ; for a fixed multi - parameter , all three systems are generated by a network map defined as f_a(z ) = ( f_{c_1}([az]_1 ) , f_{c_2}([az]_2 ) , f_{c_3}([az]_3 ) ) . ]
in this section , we will track the changes in the uni - julia set when the parameters of the system change . one of our goals is to test , first in the case of equi - parameters , then for general parameters in , if a fatou - julia type theorem applies in the case of our three networks . first , we try to establish a hypothesis for connectedness of uni - j sets , by addressing numerically and visually questions such as : `` is it true that if is in the equi - m set of a network , then the uni - julia set is connected ? '' `` is it true that , if is not in the equi - m set of the network , then the uni - julia set is totally disconnected ? '' clearly , this is not simply a version of the traditional fatou - julia theorem , but rather a slightly different result involving the projection of the julia set onto a uni - slice . notice that a connected uni - j set in may be obtained from a disconnected network julia set , and conversely , that a disconnected uni - j projection may come from a connected julia set in . we will further discuss versions of these objects in the context of iterations of real variables , where one can visualize the full mandelbrot and julia sets for the network as subsets of . here , we will first investigate uni - j sets for equi - parameters , with a particular focus on tracking the topological changes of the uni - j set as the system approaches the boundary of the equi - m set and leaves the equi - m set .
[ figure unijulia1 : uni - j sets for different values of the equi - parameter ( marked with colored dots on the equi - m template in the upper left panel ) ; all sets were based on iterations ; both equi - m and uni - j sets coincide in this case with the traditional mandelbrot and julia sets for single map iterations . ]
[ figure unijulia2 : uni - j sets for different values of the equi - parameter ( marked with colored dots on the equi - m template in the upper left panel ) ; for the first four panels the equi - parameter is in the equi - m set , for the last two it is outside of the equi - m set ; all sets were based on 100 iterations . ]
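a uni - julia slice can be rasterized with the same escape - time test , by iterating multi - orbits started on the diagonal ( z0 , z0 , z0 ) . since the explicit model equations did not survive extraction here , the adjacency used below is only a guess consistent with the verbal description of the self - drive model ( self - driven inputs , a cross - talk edge of weight t , and an output node driven by both inputs plus a self - loop of weight a ) ; weights , grid extent and resolution are illustrative .

```python
import numpy as np

def uni_j_member(z0, A, c, max_iter=100, bound=1e6):
    """z0 is in the filled uni-julia set if the multi-orbit started at
    (z0, z0, z0) stays bounded under z -> (A z) ** 2 + c."""
    z = np.full(A.shape[0], z0, dtype=complex)
    cvec = np.full(A.shape[0], c, dtype=complex)
    for _ in range(max_iter):
        z = (A @ z) ** 2 + cvec
        if np.max(np.abs(z)) > bound:
            return False
    return True

def uni_j_grid(A, c, extent=1.8, res=200, max_iter=100):
    """boolean raster of the filled uni-julia set on a square grid."""
    xs = np.linspace(-extent, extent, res)
    ys = np.linspace(-extent, extent, res)
    return np.array([[uni_j_member(x + 1j * y, A, c, max_iter) for x in xs] for y in ys])

if __name__ == "__main__":
    t, a = 0.2, 0.5                 # illustrative cross-talk and self-loop weights
    A = np.array([[1.0, t, 0.0],    # a guess at the self-drive wiring: inputs self-driven,
                  [0.0, 1.0, 0.0],  # node 1 also hears node 2 with weight t,
                  [1.0, 1.0, a]])   # output driven by both inputs plus a self-loop of weight a
    img = uni_j_grid(A, c=-0.6, res=150)
    print(img.mean())               # fraction of the grid inside the filled uni-j set
```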
first , we fix the network type and the connectivity profile ( i.e. , the parameters , and ) , and we observe how the uni - j sets evolve as the equi - parameter changes . in figures [ unijulia1 ] and [ unijulia2 ] we illustrate this for two examples of self - driven models : one with and , the other with and . as the parameter point approaches the boundary of the equi - m set , the topology of the uni - j set is affected , with its connectivity breaking down `` around '' the boundary . second , we look at the dependence of uni - julia sets on the coupling profile ( network type ) . as an example , we fixed the equi - parameter , and we first considered a simple dual network with negative feed - forward and small cross - talk . we then added self - drive to the output node , then additionally introduced a small negative feedback . the three resulting uni - julia sets are shown in figure [ unijulia_model_comparison ] . notice that a very small degree of feedback produces a more substantial difference than a significant change in the self - drive .
[ figure unijulia_model_comparison : uni - j sets as the network profile is changed , from ( a ) simple dual , to ( b ) self - drive with an additional self - loop , to ( c ) feedback with an additional feedback edge . ]
third , one can study the dependence of uni - julia sets on the strength of specific connections within the network . as a simple illustration of how complex this dependence may be , we show in figures [ c2_julia ] and [ c3_julia ] the effects on the uni - j sets of slight increases in the cross - talk parameter , for two different values of the equi - parameter .
[ figures c2_julia and c3_julia : uni - j sets for a fixed equi - parameter as the input cross - talk is increased , shown left to right for increasing values . ]
an immediate observation is that the dichotomy from traditional single - map iterations no longer stands : uni - j sets can be connected or totally disconnected , but they can also be disconnected into a ( finite or infinite ) number of connected components without being totally disconnected . based on our illustrations , we can further conjecture , in the context of our three models , a description of connectedness for uni - j sets , as follows : for any of the three models described , and for any equi - parameter , the uni - j set is connected only if is in the equi - m set of the network , and it is totally disconnected only if is not in the equi - m set of the network . the conjecture implies a looser dichotomy regarding connectivity of uni - j sets than that delivered by the traditional fatou - julia result for single maps : if is in the equi - m set of the network , then the uni - j set is either connected or disconnected , without being totally disconnected . if is not in the equi - m set of the network , then the uni - j set is disconnected ( allowing in particular the case of totally disconnected ) . finally , we want to remind the reader that uni - julia sets can be defined for general parameters , as shown in figure [ general_unijulia ] .
[ figure general_unijulia : uni - j sets for a self - drive network with a general ( non - equi ) parameter , as the cross - talk changes across the three panels . ]
the same definitions apply for iterations of real quadratic maps , with the real case presenting the advantage of easy visualization of the full julia and mandelbrot sets , rather than having to consider equi - slices , as we did in the complex case . in figures [ realmand ] and [ realjulia ] , we illustrate a few multi - m and multi - j sets respectively , for some of the same networks considered in our complex case .
[ figure realmand : multi - m sets for self - drive networks with different connectivity parameters ( left , center and right panels ) . ]
moving to illustrate the _ relationship _ between the multi - m and the multi - j set in this case , consider for example the self - drive real network with and , for different parameters . while more computationally intensive , higher - resolution figures would be needed to establish the geometry and fractality of these sets , one may already notice some basic properties . for example , figure [ realjulia ] shows that , if one were to consider complex equi - parameters , the multi - julia set may not only be connected ( figure [ realjulia]a ) or totally disconnected ( not shown ) , but may also be broken into a number of connected components ( figures [ realjulia]b and c ) .
[ figure realjulia : multi - j sets for three equi - parameters , panels ( a ) ( c ) . ]
this remains true when we return to our restriction of real parameters , once we allow arbitrary ( that is , not necessarily equi ) parameters . the panels of figure [ real_comp ] show the multi - j sets for two different , but close , parameters , both of which are not in the multi - m set . the figures suggest a disconnected ( although not totally disconnected ) multi - j set in the first case , and a connected multi - j set in the second case .
[ figure real_comp : multi - j sets for two multi - parameters that are not in the mandelbrot set for the network ; the top row shows the 3 - dimensional julia sets , the bottom panels show top views of the same sets . ]
this implies that in this case the fatou - julia dichotomy fails in its traditional form , and that the statement relating boundedness of the critical orbit with connectedness of the multi - j set does not hold for real networks . more precisely , we found parameters for which the multi - julia set appears to be connected , although the critical multi - orbit is unbounded . on the other hand , the counterpart of the theorem may still hold , in the following form : `` the multi - j set is connected if the parameter belongs to the multi - m set . '' part of our current work consists in optimizing the numerical algorithm for multi - m and multi - j sets in real networks , with high enough resolution to allow ( 1 ) observation of possible fractal properties of multi - j sets and of multi - m set boundaries and ( 2 ) computation of the genus of the filled multi - j sets , in an attempt to phrase a topological extension of the theorem that takes into account the number of handles and tunnels that open up in these sets as their connectivity breaks down when leaving the mandelbrot set .
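in the real case the multi - m set can be rasterized directly as a boolean voxel grid over ( c_1 , c_2 , c_3 ) . the sketch below ( python / numpy ; brute force and unoptimized , with box bounds , resolution , iteration cap and the example adjacency all chosen by us ) is one way to produce such a raster .

```python
import numpy as np

def multi_m_voxels(A, lo=-2.0, hi=0.5, res=30, max_iter=80, bound=1e6):
    """boolean voxel grid over (c1, c2, c3) marking the real parameters whose
    critical multi-orbit (all nodes started at 0) stays bounded, i.e. a raster
    of the real multi-m set of the network with adjacency A."""
    axis = np.linspace(lo, hi, res)
    grid = np.zeros((res, res, res), dtype=bool)
    for i, c1 in enumerate(axis):
        for j, c2 in enumerate(axis):
            for k, c3 in enumerate(axis):
                c = np.array([c1, c2, c3])
                x = np.zeros(3)
                bounded = True
                for _ in range(max_iter):
                    x = (A @ x) ** 2 + c
                    if np.max(np.abs(x)) > bound:
                        bounded = False
                        break
                grid[i, j, k] = bounded
    return grid

if __name__ == "__main__":
    A = np.array([[1.0, 0.2, 0.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 1.0, 0.5]])   # same illustrative wiring as before
    vox = multi_m_voxels(A, res=25)
    print(vox.mean())                 # volume fraction of the sampled box in the multi-m set
```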
in this paper , we used a combination of analytical and numerical approaches to propose possible extensions of fatou - julia theory to networked complex maps . we started by showing that , even in networks where all nodes are identical maps , their behavior may not be `` synchronized '' : the node - wise mandelbrot sets may be identical in some cases , while in others they may differ substantially , depending on the coupling pattern . we then investigated how specific changes in the network hard - wiring trigger different effects on the structure of the network mandelbrot and julia sets , focusing in particular on observing topological properties ( connectivity ) and fractal behavior ( hausdorff dimension ) . we found instances in which small perturbations in the strength of one single connection may lead to dramatic topological changes in the asymptotic sets , and instances in which these sets are robust to much more significant changes . more generally , our paper suggests a new direction of study , with potentially tractable , although complex , mathematical questions . while existing results do not apply to networks in their traditional form , it appears that connectivity of the newly defined uni - julia sets may still be determined by the behavior of the critical orbit . we conjectured a weaker extension of the fatou - julia theorem , which was based only on numerical inspection , and which remains subject to a rigorous study that would support or refute it . there are a few interesting aspects which we aim to address in our future work on iterated networks . for example , we are interested in studying the structure of equi - m and uni - j sets for larger networks , and in understanding the connection between the network architecture and its asymptotic dynamics . this direction can lead to ties and applications to understanding functional networks that appear in the natural sciences , which are typically large . the authors ' previous work has addressed some of these aspects in the context of continuous dynamics and coupled differential equations . however , when translating network architectural patterns into network dynamics , the great difficulty arises from a combination of the graph complexity and the system 's intractable dynamic richness . addressing the question at the level of low - dimensional networks can help us more easily identify and pair specific structural patterns to their effects on dynamics , and thus better understand this dependence . the next natural step is to return to the search for a similar classification in high - dimensional networks , where specific graph measures or patterns ( e.g. node - degree distribution , graph laplacian , presence of strong components , cycles or motifs ) may help us , independently or in combination , classify the network 's dynamic behavior . of high interest are methods that can identify robust versus vulnerable features of the graph from a dynamics standpoint .
[ figure : two example networks , described schematically on the left together with their adjacency matrices ; both systems have connectivity parameters , . ]
as figures [ 4dim_mand ] and [ 10dim_mand ] show , it is clear that a small perturbation of the graph ( e.g. , adding a single edge ) has the potential , even for higher dimensional networks , to produce dramatic changes in the asymptotic dynamics of the network , and to readily lead to substantially different m and j sets . however , this is not consistently true . we would like to understand whether a network may have a priori knowledge of which structural changes are likely to produce large dynamic effects . this is a real possibility in large natural learning networks , including the brain , where such knowledge probably affects decisions of synaptic restructuring and the temporal evolution of the connectivity profile .
[ figure : networks of nodes formed of two cliques and , with nodes in each ; the adjacency matrix is similar to those in figure [ 4dim_mand ] , with square blocks , and of size ; the densities ( the number of ones in each block , i.e. the number of -to- and -to- connecting edges ) were taken to differ across panels ( a ) ( d ) ; in all cases , the connectivity parameters ( i.e. , edge weights ) were and . ]
our future work includes understanding and interpreting the importance of this type of result in the context of networks from the natural sciences . one potential view , proposed by the authors in their previous joint work , is to interpret iterated orbits as describing the temporal evolution of an evolving system ( e.g. , copying and proofreading dna sequences , or learning in a neural network ) . along these lines , an initial condition which escapes to infinity under iterations may represent a feature of the system which becomes in time unsustainable , while an initial condition which is attracted to a simple periodic orbit may represent a feature which is too simple to be relevant or efficient for the system . the points on the boundary between these two behaviors ( i.e. , the julia set ) may then be viewed as the optimal features , allowing the system to perform its complex function .
we study how this `` optimal set of features '' changes when perturbing its architecture . once we gain enough knowledge of networked maps for fixed nodes and edges , and we formulate which applications this framework may be appropriate to address symbolically , we will allow the nodes ' dynamics , as well as the edge weights and distribution , to evolve in time together with the iterations . this process may account for phenomena such as learning or adaptation , a crucial aspect that needs to be understood about such systems . this represents a natural direction in which to extend existing work by the authors on random iterations in the one - dimensional case .
the work on this project was supported by the suny new paltz research scholarship and creative activities program . we additionally want to thank sergio verduzco - flores for his programming suggestions , and mark comerford for the useful mathematical discussions .
references :
rădulescu a , verduzco - flores s ( 2015 ) . nonlinear network dynamics under perturbations of the underlying graph . chaos : an interdisciplinary journal of nonlinear science 25 ( 1 ) : 013116 .
gray rt , robinson pa ( 2009 ) . stability and structural constraints of random brain networks with excitatory and inhibitory neural populations . journal of computational neuroscience 27 ( 1 ) : 81 - 101 .
siri b , quoy m , delord b , cessac b , berry h ( 2007 ) . effects of hebbian learning on the dynamics and structure of random networks with inhibitory and excitatory neurons . journal of physiology - paris 101 ( 1 ) : 136 - 148 .
brunel n ( 2000 ) . dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons . journal of computational neuroscience 8 ( 3 ) : 183 - 208 .
bullmore e , sporns o ( 2009 ) . complex brain networks : graph theoretical analysis of structural and functional systems . nature reviews neuroscience 10 ( 3 ) : 186 - 198 .
sporns o ( 2002 ) . graph theory methods for the analysis of neural connectivity patterns . in : neuroscience databases : a practical guide , 171 - 186 .
sporns o ( 2011 ) . the non - random brain : efficiency , economy , and complex dynamics . frontiers in computational neuroscience 5 : 5 .
julia g ( 1918 ) . mémoire sur l'itération des fonctions rationnelles . journal de mathématiques pures et appliquées , 47 - 246 .
fatou p ( 1919 ) . sur les équations fonctionnelles . bulletin de la société mathématique de france 47 : 161 - 271 .
branner b , hubbard jh ( 1992 ) . the iteration of cubic polynomials part ii : patterns and parapatterns . acta mathematica 169 ( 1 ) : 229 - 325 .
qiu wy , yin yc ( 2009 ) . proof of the branner - hubbard conjecture on cantor julia sets . science in china series a : mathematics 52 ( 1 ) : 45 - 65 .
carleson l , gamelin tw ( 1993 ) . complex dynamics , volume 69 . springer science & business media .
devaney rl , look dm ( 2006 ) . a criterion for sierpinski curve julia sets . topology proceedings 30 : 163 - 179 .
atay fm , jost j , wende a ( 2004 ) . delays , connection topology , and synchronization of coupled chaotic maps . 92 ( 14 ) : 144101 .
hauptmann c , touchette h , mackey mc ( 2003 ) . information capacity and pattern formation in a tent map network featuring statistical periodicity . 67 ( 2 ) : 026217 .
isaeva ob , kuznetsov sp , osbaldestin ah ( 2010 ) . phenomena of complex analytic dynamics in the systems of alternately excited coupled non - autonomous oscillators and self - sustained oscillators .
marcus cm , westervelt rm ( 1989 ) . dynamics of iterated - map neural networks . 40 ( 1 ) : 501 .
masoller c , atay fm ( 2011 ) . complex transitions to synchronization in delay - coupled networks of logistic maps . 62 ( 1 ) : 119 - 126 .
rădulescu a , pignatelli a ( 2016 ) . symbolic template iterations of complex quadratic maps . 1 - 18 .
yu x , jia z , jian x ( 2013 ) . logistic mapping - based complex network modeling . 4 ( 11 ) : 1558 .
wang x ( 1991 ) . period - doublings to chaos in a simple neural network : an analytical proof . 5 ( 4 ) : 425 - 444 .
[ figure : four uni - j sets , for and nodes ; the equi - parameters , adjacency matrices , and connectivity parameters of each network are given below , from left to right . ]
many natural systems are organized as networks , in which the nodes interact in a time - dependent fashion . the object of our study is to relate connectivity to the temporal behavior of a network in which the nodes are ( real or complex ) logistic maps , coupled according to a connectivity scheme that obeys certain constraints , but also incorporates random aspects . we investigate in particular the relationship between the system architecture and the possible dynamics . in the current paper we focus on establishing the framework , terminology and pertinent questions for low - dimensional networks . a subsequent paper will further address the relationship between hardwiring and dynamics in high - dimensional networks . + for networks of both complex and real node - maps , we define extensions of the julia and mandelbrot sets traditionally defined in the context of single map iterations . for three different model networks , we use a combination of analytical and numerical tools to illustrate how the system behavior ( measured via topological properties of the _ julia sets _ ) changes when perturbing the underlying adjacency graph . we differentiate between the effects on dynamics of different perturbations that directly modulate network connectivity : increasing / decreasing edge weights , and altering edge configuration by adding , deleting or moving edges . we discuss the implications of considering a rigorous extension of fatou - julia theory , known to apply for iterations of single maps , to iterations of ensembles of maps coupled as nodes in a network . * real and complex behavior for networks of coupled logistic maps * + anca rădulescu , ariel pignatelli + department of mathematics , suny new paltz , ny 12561 department of mechanical engineering , suny new paltz , ny 12561 +
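the escape - time construction behind the julia sets discussed in this article can be illustrated with a short numerical sketch . the sketch below is only a minimal illustration and not the authors ' code : it assumes , for concreteness , that every node iterates the complex quadratic map applied to a row - normalized weighted input , z_i <- ( sum_j w_ij z_j )^2 + c , with a common equi - parameter c ; the adjacency matrix , the value of c , the escape radius and the grid resolution are illustrative choices that may differ from the networks studied in the paper .

```python
import numpy as np

# illustrative 4-node adjacency matrix (two 2-cliques joined by one edge);
# the paper's actual test networks and edge weights may differ.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def escapes(z0, c, A, max_iter=100, radius=2.0):
    """Escape-time test for a networked quadratic map.

    Each node updates as z_i <- (row-normalized weighted input)^2 + c,
    an assumed coupling form used here purely for illustration.
    Returns True if some node's orbit leaves the disk of the given radius."""
    z = np.full(A.shape[0], z0, dtype=complex)   # equi-initial condition for all nodes
    W = A / A.sum(axis=1, keepdims=True)         # row-normalized edge weights
    for _ in range(max_iter):
        z = (W @ z) ** 2 + c
        if np.any(np.abs(z) > radius):
            return True
    return False

# crude raster of the filled uni-Julia set for one equi-parameter c
c = -0.62 + 0.43j
xs = np.linspace(-1.5, 1.5, 200)
ys = np.linspace(-1.5, 1.5, 200)
filled = np.array([[not escapes(x + 1j * y, c, A) for x in xs] for y in ys])
print("fraction of grid points that do not escape:", filled.mean())
```

perturbing a single entry of the adjacency matrix and re - running the raster gives a crude way to observe the kind of sensitivity to edge changes described above .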
frequently it is the case in the study of real - world complex networks that we observe essentially a sample from a larger network .there are many reasons why sampling in networks is often unavoidable and , in some cases , even desirable .sampling , for example , has long been a necessary part of studying internet topology .similarly , its role has been long - recognized in the context of biological networks , e.g. , protein - protein interaction , gene regulation and metabolic networks .finally , in recent years , there has been intense interest in the use of sampling for monitoring online social media networks .see , for example , for a representative list of articles in this latter domain . given a sample from a network , a fundamental statistical question is how the sampled network statistics be used to make inferences about the parameters of the underlying global network .parameters of interest in the literature include ( but are by no means limited to ) degree distribution , density , diameter , clustering coefficient , and number of connected components . for seminal work in this direction , see . in this paper , we propose potential solutions to an estimation problem that appears to have received significantly less attention in the literature to date the estimation of the degrees of individual sampled nodes .degree is one of the most fundamental of network metrics , and is a basic notion of node - centrality .deriving a good estimate of the node degree , in turn , can be helpful in estimating other global parameters , as many such parameters can be viewed as functions that include degree as an argument . while a number of methods are available to estimate the full degree distribution under network sampling ( e.g. , ) , little work appears to have been done on estimating the individual node degrees .our work addresses this gap .formally , our interest lies in estimation of the degree of a vertex , provided that vertex is selected in a sample of the underlying graph .there are many sampling designs for graphs .see ( * ? ? ?* ch 5 ) for a review of the classical literature , and for a recent survey .canonical examples include ego - centric sampling , snowball sampling , induced / incident subgraph sampling , link - tracing and random walk based methods . under certain sampling designs where one observes the true degree of the sampled node ( e.g. ego - centric and one - wave snowball sampling ) ,degree estimation is unnecessary . in this paper, we focus on _ induced subgraph sampling _ , which is structurally representative of a number of other sampling strategies .formally , in induced subgraph sampling , a set of nodes is selected according to independent bernoulli( ) trials at each node .then , the subgraph induced by the selected nodes , i.e. , the graph generated by selecting edges between selected nodes , is observed .this method of sampling shares stochastic properties with incident subgraph sampling ( wherein the role of nodes and edges is reversed ) and with certain types of random walk sampling .the problem of estimating degrees of sampled nodes has been given a formal statistical treatment in , for the specific case of traceroute sampling as a special case of the so - called _ species problem _ .to the best of our knowledge , a similarly formal treatment has not been applied more generally for other , more canonical sampling strategies . however, a similar problem would be estimating personal network size for a group of people in a survey . 
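to make the induced subgraph sampling design concrete , the following sketch simulates the design together with the naive scale - up ( method of moments ) degree estimator discussed below ; the parent graph , the sampling proportion and the use of networkx are illustrative assumptions and are not part of the original study .

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
G = nx.erdos_renyi_graph(n=2000, p=0.01, seed=1)   # illustrative parent network
p_sample = 0.3                                     # known sampling proportion

# induced subgraph sampling: keep each node by an independent bernoulli(p) trial,
# then observe only edges whose two endpoints were both kept
kept = [v for v in G.nodes if rng.random() < p_sample]
G_star = G.subgraph(kept)

true_deg = np.array([G.degree(v) for v in kept])       # d^0_i (unknown in practice)
obs_deg = np.array([G_star.degree(v) for v in kept])   # d^*_i (observed)

# method of moments (scale-up) estimator: E[d^*_i] = p * d^0_i  =>  d_hat_i = d^*_i / p
mme = obs_deg / p_sample
print("mean squared error of the scale-up estimator:",
      np.mean((mme - true_deg) ** 2))
```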
some prior works in this direction ( estimating personal network size from survey data ) consider estimators obtained by scaling up the observed degree in the sampled network , in the spirit of what we term a method of moments estimator below . but no specific graph sampling designs are discussed in these studies . we focus on formulating the problem using the induced subgraph sampling design and exploit network information beyond the sampled degree to propose estimators that are better than naive scale - up estimators . key to our formulation is a risk - theoretic framework used to derive our estimators of the node degrees , through minimizing frequentist or bayes risks . this contribution is accompanied by a comparative analysis of our proposed estimators and naive scale - up estimators , both theoretical and empirical , in several network regimes . we note that when sampling is coupled with false positive and false negative edges , e.g. , in certain biological networks , our methods are not immediately applicable . sampling designs that result in the selection of a fraction of edges from the underlying global network ( induced and incident subgraph sampling , random walks , etc . ) are our primary objects of study . we use induced subgraph sampling as a rudimentary but representative model for this class and aim to simultaneously estimate the true degrees of all the observed nodes with a precision better than that obtained by trivial scale - up estimators with no network information used . let us denote by a true underlying network , where . this network is assumed static and , without loss of generality , undirected . the true degree vector is . the sampled network is denoted by where , again without loss of generality , we assume that . write the sampled degree vector as . throughout the paper , we assume that we have an induced subgraph sample , with ( known ) sampling proportion . it is easy to see from the sampling scheme that . therefore , the method of moments estimator ( mme ) for is . thus , is a natural scale - up estimator of the degree sequence of the sampled nodes . in this section , we propose a class of estimators that minimize the unweighted -risk of the sampled degree vector and discuss their theoretical properties . we aim to demonstrate , under several conditions , that the risk minimizers are superior to the regular scale - up estimators , the former taking into account the inherent relationships inside the network . we note that although a maximum likelihood approach to estimation is perhaps intuitively appealing , a closed - form derivation of the mle in this setting is prohibitive . another option is to look at marginal likelihoods . but the mle based on univariate marginal likelihoods is essentially equivalent to the mme for this sampling scheme . we will frequently use the first and second moments of the sampled degree vector in our estimation methods . the following lemma will be useful . [ lem : meancovariancedegree ] under induced subgraph sampling , the mean and covariance matrix of the observed degree vector are where the diagonals of are and the -th off - diagonal is denoted by , which is the number of common neighbors of node and node in the network . adopting the standard definition of the ( unweighted ) frequentist risk of an estimator of a parameter , i.e.
, the frequentist risks are calculated for a general class of estimators . we also define , a _ restricted risk function _ , assuming the sampled graph is restricted to some class . our proposed candidates are the elements in the class of linear functions of the observed degree vector that minimize the risk or the restricted risk w.r.t . some class . it is expected that the optimal estimator will be a function of the parameter , and hence another ( naive ) estimator will need to be plugged in . our final estimate will then be a plug - in risk minimizer . here we estimate the node degrees individually , assuming that the estimate for the node is of the form , where is a scalar and is the observed degree in the sample . since , where is the true degree of the node , differentiating w.r.t . and equating to 0 , we get the optimal . plugging in the mme of , we get the plug - in univariate risk minimizer $$ \hat{d}_i = \frac{1}{p}\, d^*_i \left( 1 + \frac{1-p}{d^*_i} \right)^{-1} = \frac{(d^*_i)^2}{p\,(d^*_i + 1 - p)} \enskip . $$ taylor expanding the above formula ( during taylor expansions of functions of , we will assume that is concentrated around its mean , so that the taylor expanded approximation is close ) and taking expectation , we see that $$ \mathbb{E}\big[ \hat{d}_i \big] = \frac{1}{p} \mathbb{E}\left[ d^*_i\left(1 + \frac{1-p}{d^*_i}\right)^{-1} \right] \approx \frac{1}{p} \mathbb{E}\left[ d^*_i\left(1 - \frac{1-p}{d^*_i}\right) \right] = d^0_i - \frac{1-p}{p} \enskip , $$ so the plug - in estimator is approximately unbiased up to a constant downward offset of . in the human trafficking data used for illustration , about 250 m was spent to post more than 60 m advertisements over a two - year time frame . indexing and cross - referencing the ads with the same contact number , similar address or zip codes help identify and track the illegal trafficking activities . this leads to a massive background network structure where each node represents an advertisement and an edge between two nodes is created if they share certain features . it is not unreasonable to expect that , in surveillance of networks like this , sampling may well arise , either by choice or by circumstance . we mimic this situation by pretending that this underlying network generated by the _ memex _ program is unknown to us and sampling it using induced subgraph sampling . the nodes associated with trafficking activities are flagged in the data . there are 31,248 nodes , of which 12,387 are flagged , and there are 10,200,838 edges . our goal was to estimate the true degrees of the flagged nodes that we saw in our sample . we compared the distance ( from the true degrees ) achieved by the regular scale - up estimator and by our proposed univariate , multivariate and bayes estimators . for the bayes estimator , a number of polynomial priors were taken into consideration with varying degrees of decay , denoted by . the results are shown in table [ humantraffic ] . almost everything works better than the naive scale - up estimator in terms of total loss , although the relative improvement is more modest than in simulation . [ humantraffic ] in this paper , we addressed the problem of estimating the true degrees of sampled nodes from an unknown graph . we proposed a class of estimators from a risk - theory perspective where the goal was to minimize the overall risk of the degree estimates for the sampled nodes . we considered estimators that minimize both frequentist and bayes risk functions and compared the frequentist risks of our proposed estimator to the naive scale - up estimator . the basic objective of proposing these estimators was to exploit the additional network information inherent in the sampled graph , beyond the observed degrees . our theoretical analyses , simulation studies and real
data show clear evidence of superior performance of our estimators compared to mme , especially when the graph is sparse and the sampling ratio is low , mimicking the real - world examples .there are a number of ways our current work could be extended .firstly , a theoretical analysis of the bayes estimators under priors for random graph models beyond erds - rnyi is desirable , although likely more involved .secondly , although induced subgraph sampling serves as a representative structural model for a certain class of adaptive sampling designs , the specific details of the sufficiency conditions discussed in this paper can be expected to vary slightly with the other sampling designs ( e.g. , incident subgraph or random walk designs ) .finally , the success of the bayesian method appears to rely heavily upon appropriate choice of prior distribution , as observed in our theoretical analysis and computational experiments .it would be of interest to explore the performance of the empirical bayes estimate in conjunction with the nonparametric method of degree distribution estimation proposed in .more generally , the method in can in principle be extended to estimate individual vertex degrees .but the computational challenge of implementation and the corresponding risk analysis can be expected to be nontrivial .d. killworth , c. mccarty , h. r. bernard , g.a .shelley , and e.c .estimation of seroprevalence , rape , and homelessness in the united states using a social network approach ., 22(2):289308 , 1998 .j. leskovec and c. faloutsos . sampling from large graphs . in _ proceedings of the 12th acm sigkdd international conference on knowledge discovery and data mining _ , kdd 06 , pages 631636 , new york , ny , usa , 2006 .acm .b. ribeiro and d. towsley . estimating and sampling graphs with multidimensional random walks . in _ proceedings of the 10th acm sigcomm conference on internet measurement _ , imc 10 , pages 390403 , new york , ny , usa , 2010 .
the need to produce accurate estimates of vertex degree in a large network , based on observation of a subnetwork , arises in a number of practical settings . we study a formalized version of this problem , wherein the goal is , given a randomly sampled subnetwork from a large parent network , to estimate the actual degree of the sampled nodes . depending on the sampling scheme , trivial method of moments estimators ( mmes ) can be used . however , the mme is not expected , in general , to use all relevant network information . in this study , we propose a handful of novel estimators derived from a risk - theoretic perspective , which make more sophisticated use of the information in the sampled network . theoretical assessment of the new estimators characterizes under what conditions they can offer improvement over the mme , while numerical comparisons show that when such improvement obtains , it can be substantial . illustration is provided on a human trafficking network .
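as a quick numerical companion to the comparison described above , the sketch below contrasts the naive scale - up estimator with the plug - in univariate risk minimizer reconstructed earlier in this text , d_hat_i = ( d*_i )^2 / ( p ( d*_i + 1 - p ) ) ; the simulated graph , sampling proportion and random seed are illustrative , and the formula is used as reconstructed from the text rather than quoted from the original paper .

```python
import numpy as np
import networkx as nx

def sample_induced(G, p, rng):
    """Induced subgraph sample: keep nodes by independent bernoulli(p) trials."""
    kept = [v for v in G.nodes if rng.random() < p]
    return kept, G.subgraph(kept)

rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(n=3000, m=4, seed=2)   # illustrative sparse network
p = 0.2

kept, G_star = sample_induced(G, p, rng)
d_true = np.array([G.degree(v) for v in kept], dtype=float)
d_obs = np.array([G_star.degree(v) for v in kept], dtype=float)

# naive scale-up (method of moments) estimator
d_mme = d_obs / p

# plug-in univariate risk minimizer, as reconstructed from the text:
# d_hat_i = (d*_i)^2 / (p * (d*_i + 1 - p))
d_rm = d_obs ** 2 / (p * (d_obs + 1.0 - p))

for name, est in [("scale-up", d_mme), ("plug-in risk minimizer", d_rm)]:
    print(name, "total squared loss:", np.sum((est - d_true) ** 2))
```

on sparse graphs with a low sampling ratio the risk minimizer typically shows a smaller total squared loss , which is the regime the discussion above highlights .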
in recent years , renewable energy has gained much popularity and attention because of its potential economic and environmental advantages . some of the benefits include high sustainability , low carbon emission , reduction of environmental impact , saving of fuel cost and so on . other advantages include economic benefits to remote communities and support of microgrids during operation in islanded mode . although renewable energy sources , e.g. , wind and solar , offer huge benefits , their practical use is limited due to their intermittent nature , which makes it very challenging to ensure a steady power supply in the grid . because of the variable nature of renewable energy based power generation sources , transmission and distribution system operators need advanced monitoring and control . wind power generation relies on wind speed , which varies depending on location and time . for economic and stable operation of a wind power plant , accurate forecasting of wind power is critical . there are two main wind power forecasting approaches , the physical method and the statistical method . in the first approach , the physical system and power translation processes are modelled in detail . therefore , physical approaches need not only historical wind speed data but also other information , i.e. , meteorological output , the hub height of the turbine and a physical model of the power conversion process from wind speed . on the other hand , in a statistical approach , the wind power output is modelled as a time - series where the power output at any time instant depends on its previous observation values . the physical approach provides good accuracy for long term forecasting but not so good for short term forecasting , as it is computationally very demanding . on the contrary , statistical approaches are well suited for short term forecasting . for short term wind power forecasting , different approaches are well studied . in a conventional statistical approach , the wind power output behaviour is modelled as a time - series . the autoregressive ( ar ) model has been used for wind energy forecasting in and the autoregressive moving average ( arma ) model has been used in . the artificial neural network ( ann ) is also widely used . however , the ann based approaches have very slow convergence during the training phase . on the other hand , statistical regressive models are computationally very efficient and widely used for short term forecasting . in the statistical approaches , the forecasting accuracy is highly dependent on the estimated model of the wind power output behaviour . therefore , it is important to identify the estimated model parameters accurately . different methods are widely used to estimate the ar model parameters , such as the ordinary least squares ( ls ) approach , the forward backward ( fb ) approach , the geometric lattice ( gl ) approach and the yule - walker ( yw ) approach . as the wind power output has variable characteristics , the error function obtained from the estimated model may have many local minima . for short - term load forecasting , it has been shown that particle swarm optimization ( pso ) , one of the major paradigms of computational swarm intelligence , converges to the global optimal solution of a complex error surface and finds better solutions compared with gradient search based stochastic time - series techniques . previously , pso has been widely used in different applications of power systems .
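before turning to the pso - based estimation developed in this work , one of the classical baselines mentioned above , the yule - walker ( yw ) fit , can be sketched as follows ; the synthetic series standing in for wind power output and the chosen model order are illustrative assumptions .

```python
import numpy as np

def yule_walker(y, order):
    """Estimate AR(order) lag coefficients from sample autocovariances (Yule-Walker equations)."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(y)
    # biased sample autocovariances r_0 ... r_order
    r = np.array([y[: n - k] @ y[k:] / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])        # solve the Toeplitz system R a = r
    sigma2 = r[0] - a @ r[1:]            # innovation variance estimate
    return a, sigma2

# illustrative synthetic series standing in for a wind power output time-series
rng = np.random.default_rng(3)
y = np.zeros(1000)
for t in range(2, len(y)):
    y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + 0.1 * rng.standard_normal()

a, s2 = yule_walker(y, order=2)
print("yule-walker lag coefficients:", np.round(a, 3),
      "noise variance estimate:", round(float(s2), 5))
```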
in this work ,a modified variant of pso based on constriction factor ( cf ) is employed to identify the ar parameters more accurately .the proposed cf - pso based identified ar parameters have better error minimization profiles compared to the well - established ls , fb , gl and yw based approaches .the organization of this paper is as follows- the formulation of basic pso and cf - pso is discussed in section [ psos ] .autoregressive model order selection and parameter estimation methodology is described in section [ armodel ] .the proposed ar parameter estimation method based on cf - pso is illustrated in section [ psomodel ] . in section [ rnd ] , results obtained from this experiment are given and compared with four standard techniques . finally , the paper concludes with some brief remarks in section [ secend ] .pso is a multi - objective optimization technique which finds the global optimum solution by searching iteratively in a large space of candidate solutions .the description of basic pso and cf - pso formulation is discussed in the following subsections : this meta - heuristic is initialized by generating random population which is referred as a swarm .the dimension of the swarm depends on the problem size . in a swarm ,each individual possible solution is represented as a ` particle ' . at each iteration, positions and velocities of particles are updated depending on their individual and collective behavior .generally , objective functions are formulated for solving minimization problems ; however , the duality principle can be used to search the maximum value of the objective function . at the first step of the optimization process , an _n_-dimensional initial population ( swarm ) and control parameters are initialized .each particle of a swarm is associated with the position vector and the velocity vector , which can be written as + velocity vector , ] + where n represents the search space dimension . before going to the basic pso loop ,the position and velocity of each particle is initialized .generally , the initial position of the particle can be obtained from uniformly distributed random vector u ( ) , where and represents the lower and upper limits of the solution space respectively . during the optimization procedure ,position of each particle is updated using ( [ peq ] ) where and .+ at each iteration , new velocity for each particle is updated which drives the optimization process .the new velocity of any particle is calculated based on its previous velocity , the particle s best known position and the swarm s best known position .particle s best known position is it s location at which the best fitness value so far has been achieved by itself and swarm s best known position is the location at which the best fitness value so far has been achieved by any particle of the entire swarm .the velocity equation drives the optimization process which is updated using ( [ veq ] ) in this equation , _ w _ is the inertia weight . represents the ` self influence ' of each particle which quantifies the performance of each particle with it s previous performances .the component represents the ` social cognition ' among different particles within a swarm and quantify the performance relative to other neighboring particles .the learning co - efficients and represent the trade - off between the self influence part and the social cognition part of the particles .the values of and are adopted from previous research and is typically set to 2 . 
in eqn ( [ veq ] ) , is the particle 's best known position and is the swarm 's best known position . in the solution loop of pso , the algorithm continues to run iteratively until one of the following stopping conditions is satisfied . 1 . the number of iterations reaches the maximum limit , e.g. , 100 iterations . 2 . no improvement is observed over a number of iterations , e.g. , an error change of less than 0.001 . to achieve better stability and convergent behavior of pso , a constriction factor has been introduced by clerc and kennedy in . the superiority of cf - pso over inertia - weight pso is discussed in . basically , the search procedure of cf - pso is improved using eigenvalue analysis , and the system behavior can be controlled , which ensures a convergent and efficient search procedure . to formulate cf - pso , ( [ veq ] ) is replaced by ( [ veqcons])-([kcons2 ] ) , where and . here the value of must be greater than 4 to ensure stable and convergent behavior . usually , the value of is set to 4.1 ( ) ; therefore , the value of _ k _ becomes 0.7298 . a boundary condition , which helps to keep the particles within the allowable solution space , is also applied in this research as shown below : ar is a univariate time - series analysis model that is widely used for model estimation and forecasting . in an ar model , the output variable has a linear association with its own previous observations . for a sample period of , a -order ar model can be written following the expression below : where are the lag parameters of the model , is the constant term , and is white gaussian noise with zero mean . to select the minimal appropriate lag order , the akaike information criterion is used following ( [ aiceq ] ) , where is the number of parameters in the ar model , is the effective number of observations , and is the maximum likelihood estimate of the error covariance . the best fitted model has the minimum aic value . firstly , the structure of the ar model is selected based on ( [ areq ] ) for a predetermined order . after that , the model parameters are estimated using cf - pso . during the optimization loop , the algorithm determines the optimal parameters by minimizing the residual sum of squares ( rss ) of the estimated model as shown in ( [ rsseq ] ) , where is the actual data that need to be predicted and is the estimated data . the steps of cf - pso based ar parameter estimation are discussed below . 1 . initialize the particles ' positions and velocities in an _ n_-dimensional search space . here , the dimension _ n _ represents the order of the ar model and each particle position vector indicates a potential solution . 2 . calculate the rss for each initial particle position and determine the particle 's best known position and the swarm 's best known position . 3 . update each particle 's position and velocity following ( [ peq ] ) and ( [ veq ] ) if necessary . 4 . calculate the rss with the updated velocity and position . repeat ( 3 ) and ( 4 ) until the stopping criterion is satisfied , i.e. , there is no significant change in rss over a good number of iterations or the algorithm reaches its maximum iteration limit . here the maximum number of iterations is set to 100 . the proposed cf - pso based ar model parameter estimation algorithm is implemented on practical wind power output data of the _ capital wind farm _ , obtained from the australian energy market operator . the algorithm is implemented using matlab , and the standard ls , fb , gl and yw based approaches are evaluated using the ` system identification toolbox ' .
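the cf - pso loop described in the steps above can be sketched compactly as follows ; this is a simplified illustration rather than the authors ' matlab implementation : the synthetic series , swarm size , search bounds and random seeds are assumptions , while the learning coefficients are chosen so that phi = 4.1 and the constriction factor is about 0.7298 , as stated in the text .

```python
import numpy as np

def rss(params, y, order):
    """Residual sum of squares of an AR(order) model with params = [c, a_1, ..., a_p]."""
    c, a = params[0], params[1:]
    pred = c + np.array([a @ y[t - order:t][::-1] for t in range(order, len(y))])
    return float(np.sum((y[order:] - pred) ** 2))

def cf_pso_ar(y, order, n_particles=30, n_iter=100, bound=2.0, seed=0):
    """Constriction-factor PSO search for AR parameters that minimize the RSS."""
    rng = np.random.default_rng(seed)
    dim = order + 1                      # intercept plus `order` lag coefficients
    c1 = c2 = 2.05                       # phi = c1 + c2 = 4.1 > 4
    phi = c1 + c2
    k = 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4.0 * phi))   # ~0.7298

    x = rng.uniform(-bound, bound, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_cost = np.array([rss(p, y, order) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = k * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
        x = np.clip(x + v, -bound, bound)          # boundary condition on the solution space
        cost = np.array([rss(p, y, order) for p in x])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, float(pbest_cost.min())

# illustrative use on a synthetic AR(2) series
rng = np.random.default_rng(1)
y = np.zeros(600)
for t in range(2, len(y)):
    y[t] = 0.1 + 1.1 * y[t - 1] - 0.4 * y[t - 2] + 0.05 * rng.standard_normal()
params, best_rss = cf_pso_ar(y, order=2)
print("estimated [c, a1, a2]:", np.round(params, 3), "rss:", round(best_rss, 4))
```

the same rss objective can be handed to any of the classical estimators , which makes a side - by - side comparison of the error minimization profiles straightforward .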
to compare the performance of the proposed method and the four aforementioned well - established methods , the following evaluation indices are used : 1 . mean square error ( mse ) : where is the actual data and is the data from the estimated model . 2 . akaike 's final prediction error ( fpe ) : where is the number of parameters in the estimated model , is the number of values of the model and is the loss function . 3 . normalized mean square error ( nmse ) : $$ \mathrm{nmse} = 1 - \frac{\sum_{i=1}^{n}\left[ y_i - \hat{y}_i \right]^2}{n\,\delta^2} , \qquad \delta^2 = \frac{1}{n-1} \sum_{i=1}^{n}\left[ y_i - \mathrm{mean}(y_i) \right]^2 , $$ where $y_i$ is the actual data and $\hat{y}_i$ is the data from the estimated model . the value of nmse varies between ` -inf ' and ` 1 ' . while ` -inf ' indicates a bad fit , 1 represents a perfect fit of the data . firstly , the appropriate lag order is determined . in order to do that , the value of is varied from to , and for each value of , the aic is calculated following the information criterion in ( [ aiceq ] ) . once all aic values are known , is selected for the fitted ar( ) model which leads to the minimum aic value . considering , the minimum aic value in this analysis is observed when for all four standard methods , as shown in fig . [ aiclbl ] . now , the wind power output data ( first week of march 2012 with 5 minute interval ) of the ` capital wind farm ' is used to evaluate the performance of the five approaches including the cf - pso based ar model . for the proposed method , the results are documented considering the mean value of 30 individual runs . the obtained results are summarized in table [ case1results ] and the best results among all approaches are highlighted . the results presented in table [ case1results ] show that the proposed method outperforms the other approaches for this test case . considering the error value ( mse and fpe ) of the ls approach as a base scenario , the error minimization performance ( emp ) of the other approaches is evaluated following ( [ myindex ] ) , where is the mse or fpe of the ls method and is the corresponding mse or fpe of the other approaches . a positive value of emp indicates an improvement of error minimization performance over the ls approach , while a negative value represents that the performance is worse than that of the ls approach . in order to justify the performance , the proposed method is employed considering another data set , the second week 5s interval data of march 2012 for the capital wind farm . the results from this method are shown in table [ case1results2 ] . in this test case , the proposed cf - pso based ar model reduces the error indices the most . compared with ls , almost a 40% reduction of error is achieved for this test data . moreover , the nmse for the standard ls , fb , gl and yw based approaches is around 96.7% , while the proposed method achieves 98.9% , as shown in table [ case1results2 ] . as an nmse value close to unity indicates the best performance , the proposed method also shows its superiority for this test case . [ table [ case1results ] : performance of ar parameter estimation considering the first week data of march 2012 ] fig . [ actest ] shows the actual wind output data and the estimated model data using the proposed method . the convergence characteristics of the cf - pso based proposed algorithm are shown in fig . [ cfpsolbl ] . from the figure , the algorithm converges within 40 iterations .
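a minimal sketch of the aic - based lag - order selection and the nmse check described above is given below ; the particular aic variant used here ( n log ( rss / n ) + 2k ) , the candidate order range and the synthetic data are assumptions made for illustration and may differ in detail from the paper 's exact definitions .

```python
import numpy as np

def fit_ar_ls(y, order):
    """Ordinary least-squares AR(order) fit; returns coefficients and in-sample predictions."""
    X = np.asarray([np.r_[1.0, y[t - order:t][::-1]] for t in range(order, len(y))])
    coef, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
    return coef, X @ coef

def aic(y, y_hat, n_params):
    """A common AIC variant for least-squares fits: n*log(RSS/n) + 2k (assumed form)."""
    resid = y - y_hat
    n = len(y)
    return n * np.log(np.sum(resid ** 2) / n) + 2 * n_params

def nmse(y, y_hat):
    """Normalized mean square error in (-inf, 1]; 1 means a perfect fit."""
    delta2 = np.sum((y - np.mean(y)) ** 2) / (len(y) - 1)
    return 1.0 - np.sum((y - y_hat) ** 2) / (len(y) * delta2)

rng = np.random.default_rng(5)
y = np.zeros(800)
for t in range(3, len(y)):
    y[t] = 0.05 + 0.9 * y[t - 1] - 0.2 * y[t - 3] + 0.05 * rng.standard_normal()

scores = {}
for p in range(1, 11):                       # candidate lag orders 1..10
    coef, pred = fit_ar_ls(y, p)
    scores[p] = aic(y[p:], pred, n_params=p + 1)
best_p = min(scores, key=scores.get)
coef, pred = fit_ar_ls(y, best_p)
print("selected order:", best_p, "nmse of the fitted model:", round(nmse(y[best_p:], pred), 4))
```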
according to the results shown in table [ case1results ] and table [ case1results2 ] , the best improvement among these five approaches is observed for the cf - pso based ar modelling , for both of the test data sets . the cf - pso based ar parameter estimation method finds a better solution compared to the gradient based methods due to its global search capabilities . it is important to mention that these well - established gradient based methods may become trapped in local minima , as noted by huang et al . on the other hand , cf - pso finds a global optimal solution . in our analysis , we found that the performance of the cf - pso varied based on the wind data characteristics . if the wind data yield a global minimum that is very close to the local minima , the performance of the cf - pso is only slightly improved compared with the other algorithms ( as observed in table [ case1results ] ) . on the other hand , if the local minima are far from the global minimum , significant improvement is observed using the cf - pso algorithm ( as observed in table [ case1results2 ] ) . in this paper , constriction factor based pso is employed to enhance the performance of the time - series autoregressive model . the proposed algorithm is implemented to estimate the wind power output considering practical wind data . using the proposed ar model based on the global - search - capable cf - pso , the results obtained in this experiment show that the algorithm finds the solution very accurately and efficiently ( within 40 iterations ) . to justify the results obtained from the proposed method , four algorithms , including the widely used least - squares method and the yule - walker method , are employed for comparison . the experimental results show that the proposed method enhances the ar estimation model with better accuracy compared to the other four well - established methods . in this experiment , exogenous input variables are not considered during the model estimation ; they will be included in future work . since the proposed model enhances the performance of the autoregressive model by minimizing model estimation errors more effectively , the forecasting performance will also be explored in detail in future work . a. anwar and h. pota , `` optimum allocation and sizing of dg unit for efficiency enhancement of distribution system , '' in _ ieee international power engineering and optimization conference ( peoco ) _ , jun . p. eriksen , t. ackermann , h. abildgaard , p. smith , w. winter , and j. rodriguez garcia , `` system operation with high wind penetration , '' _ ieee power and energy magazine _ , vol . 3 , no . 6 , pp . 65 - 74 , 2005 . a. anwar and h. pota , `` loss reduction of power distribution network using optimum size and location of distributed generation , '' in _ 21st australasian universities power engineering conference ( aupec ) _ , sept . 2011 , pp . 1 - 6 . c. lei and l. ran , `` short - term wind speed forecasting model for wind farm based on wavelet decomposition , '' in _ third international conference on electric utility deregulation and restructuring and power technologies _ , 2008 . m. lei , l. shiyan , j. chuanwen , l. hongling , and z. yan , `` a review on the forecasting of wind speed and generated power , '' _ renewable and sustainable energy reviews _ , vol . 13 , no . 4 , pp . 915 - 920 , 2009 . d. hill , d. mcmillan , k. r. w. bell , and d. infield , `` application of auto - regressive models to u.k . wind speed data for power system impact studies , '' _ ieee transactions on sustainable energy _ , vol . 3 , no .
1 , pp .134141 , 2012 .p. poggi , m. muselli , g. notton , c. cristofari , and a. louche , `` forecasting and simulating wind speed in corsica by using an autoregressive model , '' _ energy conversion and management _ , vol .44 , no . 20 , pp .3177 3196 , 2003 .g. venayagamoorthy , k. rohrig , and i. erlich , `` one step ahead : short - term wind power forecasting and intelligent predictive control based on data analytics , '' _ ieee power and energy magazine _ , vol . 10 , no . 5 , pp .7078 , 2012 .k. methaprayoon , c. yingvivatanapong , w .- j .lee , and j. liao , `` an integration of ann wind power estimation into unit commitment considering the forecasting uncertainty , '' _ ieee transactions on industry applications _ , vol .43 , no . 6 , pp .14411448 , 2007 .huang , c .- j .huang , and m .- l .wang , `` a particle swarm optimization to identifying the armax model for short - term load forecasting , '' _ ieee transactions on power systems _ , vol .20 , no . 2 ,pp . 11261133 , 2005 .m. karen , v. ramos , and t. razafindralambo , `` using efficiently autoregressive estimation in wireless sensor networks , '' in _ international conference on computer , information , and telecommunication systems ( cits ) _ , 2013 . y. del valle , g. venayagamoorthy , s. mohagheghi , j .- c .hernandez , and r. harley , `` particle swarm optimization : basic concepts , variants and applications in power systems , '' _ ieee transactions on evolutionary computation _12 , no . 2 ,pp . 171195 , 2008 .r. eberhart and y. shi , `` comparing inertia weights and constriction factors in particle swarm optimization , '' in _ proceedings of the 2000 congress on evolutionary computation _ , vol . 1 , 2000 , pp . 8488. h. zeineldin , e. el - saadany , m. salama , a. alaboudy , and w. woon , `` optimal sizing of thyristor - controlled impedance for smart grids with multiple configurations , '' _ ieee transactions on smart grid _ , vol . 2 , no . 3 , pp . 528537 ,2011 .m. clerc and j. kennedy , `` the particle swarm - explosion , stability , and convergence in a multidimensional complex space , '' _ ieee transactions on evolutionary computation _ , vol .6 , no . 1 ,5873 , feb 2002 .j. vlachogiannis and k. lee , `` a comparative study on particle swarm optimization for optimal steady - state performance of power systems , '' _ ieee transactions on power systems _ , vol .21 , no . 4 ,17181728 , nov .r. a. krohling and l. dos santos coelho , `` coevolutionary particle swarm optimization using gaussian distribution for solving constrained optimization problems , '' _ ieee transactions on systems , man , and cybernetics , part b : cybernetics _ , vol . 36 , no . 6 , pp .14071416 , dec . 2006 .j. heo , k. lee , and r. garduno - ramirez , `` multiobjective control of power plants using particle swarm optimization techniques , '' _ ieee transactions on energy conversion _21 , no . 2 ,pp . 552561 , june 2006 .
accurate forecasting is important for cost - effective and efficient monitoring and control of the renewable energy based power generation . wind based power is one of the most difficult energy to predict accurately , due to the widely varying and unpredictable nature of wind energy . although autoregressive ( ar ) techniques have been widely used to create wind power models , they have shown limited accuracy in forecasting , as well as difficulty in determining the correct parameters for an optimized ar model . in this paper , constriction factor particle swarm optimization ( cf - pso ) is employed to optimally determine the parameters of an autoregressive ( ar ) model for accurate prediction of the wind power output behaviour . appropriate lag order of the proposed model is selected based on akaike information criterion . the performance of the proposed pso based ar model is compared with four well - established approaches ; forward - backward approach , geometric lattice approach , least - squares approach and yule - walker approach , that are widely used for error minimization of the ar model . to validate the proposed approach , real - life wind power data of _ capital wind farm _ was obtained from australian energy market operator . experimental evaluation based on a number of different datasets demonstrate that the performance of the ar model is significantly improved compared with benchmark methods . constriction factor particle swarm optimization ( cf - pso ) , ar model , wind power prediction
quantized control problems have been an active research topic in the past two decades .discrete - level actuators / sensors and digital communication channels are typical in practical control systems , and they yield quantized signals in feedback loops .quantization errors lead to poor system performance and even loss of stability .therefore , various control techniques to explicitly take quantization into account have been proposed , as surveyed in . on the other hand ,switched system models are widely used as a mathematical framework to represent both continuous and discrete dynamics . for example , such models are applied to dc - dc converters and to car engines .stability and stabilization of switched systems have also been extensively studied ; see , e.g. , the survey , the book , and many references therein . in view of the practical importance of both research areas and common technical tools to study them , the extension of quantized control to switched systemshas recently received increasing attention .there is by now a stream of papers on control with limited information for discrete - time markovian jump systems .moreover , our previous work has analyzed the stability of sampled - data switched systems with static quantizers . in this paper , we study the stabilization of continuous - time switched linear systems with quantized output feedback .our objective is to solve the following problem : given a switched system and a controller , design a quantizer to achieve asymptotic stability of the closed - loop system .we assume that the information of the currently active plant mode is available to the controller and the quantizer . extending the quantizer in for the non - switched case to the switched case, we propose a lyapunov - based update rule of the quantizer under a slow - switching assumption of average dwell - time type .the difficulty of quantized control for switched systems is that a mode switch changes the state trajectories and saturates the quantizer . in the non - switched case , in order to avoid quantizer saturation , the quantizer is updated so that the state trajectories always belong to certain invariant regions defined by level sets of a lyapunov function .however , for switched systems , these invariant regions are dependent on the modes .hence the state may not belong to such regions after a switch . to keep the state in the invariant regions, we here adjust the quantizer at every switching time , which prevent quantizer saturation .the same philosophy of emphasizing the importance of quantizer updates after switching has been proposed in for sampled - data switched systems with quantized state feedback .subsequently , related works were presented for the output feedback case and for the case with bounded disturbances .the crucial difference lies in the fact that these works use the quantizer based on and investigates propagation of reachable sets for capturing the measurement .this approach also aims to avoid quantizer saturation , but it is fundamentally disparate from our lyapunov - based approach . this paper is organized as follows . in sectionii , we present the main result , theorem [ thm : stability_theorem ] , after explaining the components of the closed - loop system .section iii gives the update rule of the quantizer and is devoted to the proof of the convergence of the state to the origin . in section iv , we discuss lyapunov stability. 
we present a numerical example in section v and finally conclude this paper in section vi .the present paper is based on the conference paper . herewe extend the conference version by addressing state jumps at switching times .we also made structural improvements in this version ._ notation : _ let and denote the smallest and the largest eigenvalue of .let denote the transpose of .the euclidean norm of is denoted by .the euclidean induced norm of is defined by . for a piecewise continuous function , its left - sided limit at denoted by .for a finite index set , let be a right - continuous and piecewise constant function .we call a _ switching signal _ and the discontinuities of _ switching times_. let us denote by the number of discontinuities of on the interval ] such that ._ let us denote the switching times by , and fix .suppose that for all ] .thus we have . from, we see that for , it follows from that therefore satisfies the following inequality : since was arbitrary , is equivalent to thus we have shown that if holds for all ] , then we have since , it follows from that clearly , this inequality holds in the case when no switches occur .since shows that and since the growth rate of is larger than that of , there exists such that in conjunction with , this implies that holds for every .let be an integer satisfying .then lemma [ lem : adt_upperbound ] guarantees the existence of ] , then for every . hence if satisfies then every trajectory of with an initial state satisfies _since the mode is fixed , this lemma is a trivial extension of lemma 5 in for single - modal systems .we therefore omit its proof ; see also the conference version . using lemma [ lem : fix_zoom_parameter ], we obtain an update rule of the `` zoom '' parameter to drive the state to the origin . [lem : convergence_to_origin ] _ consider the system under the same assumptions as in lemma [ lem : fix_zoom_parameter ] .assume that .for each with , the positive definite matrices and in the lyapunov equation satisfy for some .define and by fix so that is satisfied , and set the `` zoom '' parameter for all and ] , then otherwise , where are the switching times in the interval ] .we see from lemma [ lem : fix_zoom_parameter ] that for all and that .since , a routine calculation shows that .we now study the switched case .let be the switching times in the interval ] , gives and hence we have from that we repeat this process and use , then which contradicts . thus we obtain and hence holds . from and , we derive the desired result , because .finally , since , gives for every and . if , that is , if the average dwell time satisfies , then .since for all , we obtain .( a ) we can compute by linear matrix inequalities . 
moreover , if the jump matrix in is invertible , then lemma 13 of gives an explicit formula for .( b ) the proposed method is sensitive to the time - delay of the switching signal at the `` zooming - in '' stage .if the switching signal is delayed , a mode mismatch occurs between the plant and the controller .here we do not proceed along this line to avoid technical issues .see also for the stabilization of asynchronous switched systems with time - delays .( c ) we have updated the `` zoom '' parameter at each switching time in the `` zooming - in '' stage .if we would not , switching could lead to instability of the closed - loop system .in fact , since the state may not belong to the invariant region without adjusting , the quantizer may saturate .( d ) similarly , `` pre - emptively '' multiplying at time by does not work , either .this is because such an adjustment does not make invariant for the state trajectories .for example , consider the situation where the state belongs to at due to this pre - emptively adjustment .then does not converge to the origin .let be a switching time .since may not be a subset of , it follows that does not belong to the invariant region at in general .let us denote by the open ball with center at the origin and radius in . in what follows , we use the same letters as in the previous section and assume that the average dwell time satisfies .the proof consists of three steps : 1 .obtain an upper bound of the time at which the quantization process transitions from the `` zoom - out '' stage to the `` zoom - in '' stage .2 . show that there exists a time such that the state satisfies for all .3 . set so that if , then for all .we break the proof of lyapunov stability into the above three steps .\1 ) let satisfy and let be small enough to satisfy we see from the state bound that for ] .set so that since , by , , , and , assumption [ ass : near origin ] gives in the interval ] and .the vertical dashed - dotted line indicates the switching times . in this example ,the `` zooming - out '' stage finished at .we see the non - smoothness of and the increase of at the switching times because of switches and quantizer updates . not surprisingly , the adjustments of in and are conservative .we have proposed an update rule of dynamic quantizers to stabilize continuous - time switched systems with quantized output feedback .the average dwell - time property has been utilized for the state reconstruction in the `` zooming - out '' stage and for convergence to the origin in the `` zooming - in '' stage .the update rule not only periodically decreases the `` zoom '' parameter to drive the state to the origin , but also adjusts the parameter at each switching time to avoid quantizer saturation .future work involves designing the controller and the quantizer simultaneously , and addressing more general systems by incorporating disturbances and nonlinear dynamics .
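the two - stage zooming mechanism analyzed in this article , a `` zooming - out '' stage that enlarges the quantizer range until the state is captured and a `` zooming - in '' stage that shrinks the range periodically while re - adjusting it at every switching time , can be sketched abstractly as follows ; for brevity the sketch uses quantized state feedback on scalar modes instead of the observer - based output feedback treated in the paper , and the modes , gains , scaling factors and dwell time are illustrative stand - ins rather than the lyapunov - based bounds derived here .

```python
import numpy as np

def quantize(x, mu, levels=15):
    """Uniform quantizer with saturation: output lies on a grid inside [-mu, mu]."""
    step = 2.0 * mu / levels
    return float(np.clip(np.round(x / step) * step, -mu, mu))

# two unstable scalar modes with stabilizing gains (illustrative numbers only)
a_mode = {0: 0.3, 1: 0.5}
k_gain = {0: -1.0, 1: -1.2}
dt, t_total = 0.01, 12.0
dwell = 2.0                                   # time between switches (respects a dwell-time bound)
zoom_period, omega_in, omega_out, lam = 0.5, 0.8, 1.5, 1.3

x, mu, mode = 5.0, 1.0, 0
t_last_switch, t_last_zoom, captured = 0.0, 0.0, False
for step_idx in range(int(t_total / dt)):
    t = step_idx * dt
    if t - t_last_switch >= dwell:            # periodic switching signal
        mode, t_last_switch = 1 - mode, t
        if captured:
            mu *= lam                         # re-adjust the range at every switching time
    if not captured:
        if abs(x) <= 0.8 * mu:
            captured = True                   # state captured: the zooming-in stage starts
        elif t - t_last_zoom >= zoom_period:
            mu, t_last_zoom = mu * omega_out, t    # zooming-out: enlarge the range
    elif t - t_last_zoom >= zoom_period:
        mu, t_last_zoom = mu * omega_in, t         # zooming-in: shrink the range periodically
    u = k_gain[mode] * quantize(x, mu)
    x += dt * (a_mode[mode] * x + u)

print("final |x|:", round(abs(x), 4), "final zoom parameter:", round(mu, 4))
```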
in this paper , we study the problem of stabilizing continuous - time switched linear systems with quantized output feedback . we assume that the observer and the control gain are given for each mode . also , the plant mode is known to the controller and the quantizer . extending the result in the non - switched case , we develop an update rule of the quantizer to achieve asymptotic stability of the closed - loop system under the average dwell - time assumption . to avoid quantizer saturation , we adjust the quantizer at every switching time . switched systems , quantized control , output feedback stabilization .
quantum key distribution ( qkd ) is often said to be unconditionally secure .more precisely , qkd can be proven to be secure against any eavesdropping _ given _ that the users ( alice and bob ) devices satisfy some requirements , which often include mathematical characterization of users devices as well as the assumption that there is no side - channel .this means that no one can break mathematical model of qkd , however in practice , it is very difficult for practical devices to meet the requirements , leading to the breakage of the security of practical qkd systems . actually , some attacks on qkd have been proposed and demonstrated successfully against practical qkd systems . to combat the practical attacks , some counter - measures , including device independent security proof idea , have been proposed .the device independent security proof is very interesting from the theoretical viewpoint , however it can not apply to practical qkd systems where loopholes in testing bell s inequality can not be closed . as for the experimental counter - measures ,battle - testing of the practical detection unit has attracted many researchers attention since the most successful practical attack so far is to exploit the imperfections of the detectors .recently , a very simple and very promising idea , which is called a measurement device independent qkd ( mdiqkd ) has been proposed by lo , curty , and qi . in this scheme, neither alice nor bob performs any measurement , but they only send out quantum signals to a measurement unit ( mu ) .mu is a willing participant of the protocol , and mu can be a network administrator or a relay .however , mu can be untrusted and completely under the control of the eavesdropper ( eve ) .after alice and bob send out signals , they wait for mu s announcement of whether she has obtained the successful detection , and proceed to the standard post - processing of their sifted data , such as error rate estimation , error correction , and privacy amplification .the basic idea of mdiqkd is based on a reversed epr - based qkd protocol , which is equivalent to epr - based qkd in the sense of the security , and mdiqkd is remarkable because it removes _ all _ the potential loopholes of the detectors without sacrificing the performance of standard qkd since alice and bob do not detect any quantum signals from eve . 
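the information flow just described , quantum signals sent to an untrusted mu , a public announcement of success , and classical sifting by alice and bob , can be mocked up in a few lines ; the success probability , the random announcement type ( anticipating the type-0 / type-1 broadcast defined in the protocol steps below ) and the bit - flip rule are placeholder assumptions and do not model the actual interferometric measurement .

```python
import numpy as np

rng = np.random.default_rng(11)
n_pulses = 200000
p_success = 0.05            # illustrative probability that the mu announces a successful event

bases_a = rng.integers(0, 2, n_pulses)      # 0 = x basis, 1 = y basis
bases_b = rng.integers(0, 2, n_pulses)
bits_a = rng.integers(0, 2, n_pulses)
bits_b = rng.integers(0, 2, n_pulses)

success = rng.random(n_pulses) < p_success
event_type = rng.integers(0, 2, n_pulses)   # 0 or 1, as broadcast by the mu

def bob_flip(event, basis_b):
    """Scheme-dependent sifting rule: here bob flips his bit on a type-1 announcement.

    whether the flip also depends on the basis differs between encodings; this
    placeholder choice mirrors the phase encoding scheme described later."""
    return event == 1

sifted_a, sifted_b = [], []
for i in np.flatnonzero(success):
    if bases_a[i] != bases_b[i]:
        continue                             # keep data only when the bases match
    sifted_a.append(int(bits_a[i]))
    sifted_b.append(int(bits_b[i]) ^ int(bob_flip(event_type[i], bases_b[i])))

err = np.mean(np.array(sifted_a) != np.array(sifted_b))
print("sifted key length:", len(sifted_a), "raw mismatch rate:", round(err, 3))
```

with this placeholder mu the mismatch rate of the sifted strings is about 50% , which illustrates that a key can only be distilled when the real measurement unit establishes the bell - state correlation discussed below .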
moreover , it is shown in that mdiqkd with infinite number of decoy states and polarization encoding can cover about twice the distance of standard decoyed qkd , which is comparable to epr - based qkd .the only assumption needed in mdiqkd is that the preparation of the quantum signal sources by alice and bob is ( almost ) perfect and carefully characterized .we remark that the characterization of the signal source should be easier than that of the detection unit since the characterization of the detection unit involves the estimation of the response of the devices to unknown input signals sent from eve .with mdiqkd in our hand , we do not need to worry about imperfections of mu any more , and we should focus our attention more to the imperfections of signal sources .one of the important imperfections of the sources is the basis - dependent flaw that stems from the discrepancy of the density matrices corresponding to the two bases in bb84 states .the security of standard bb84 with basis - dependent flaw has been analyzed in which show that the basis - dependent flaw decreases the achievable distance .thus , in order to investigate the practicality of mdiqkd , we need to generalize the above works to investigate the security of mdiqkd under the imperfection .another problem in mdiqkd is that the first proposal is based on polarization encoding , however , in some situations where birefringence effect in optical fiber is highly time - dependent , we need to consider mdiqkd with phase encoding rather than polarization encoding . in this paper , we study the above issues simultaneously .we first propose two schemes of the phase encoding mdiqkd , one employs phase locking of two separate laser sources and the other one uses the conversion of phase encoding into polarization encoding .then , we prove the unconditional security of these schemes with basis - dependent flaw by generalizing the quantum coin idea . based on the security proof , we simulate the key generation rate with realistic parameters , especially we employ a simple model to evaluate the basis - dependent flaw due to the imperfection of the phase modulators .our simulation results imply that the first scheme covers shorter distances and may require less accuracy of the state preparation , while the second scheme can cover much longer distances when we can prepare the state very precisely .we note that in this paper we consider the most general type of attacks allowed by quantum mechanics and establish unconditional security for our protocols .this paper is organized as follows . in sec .[ sec : protocol ] , we give a generic description of mdiqkd protocol , and we propose our schemes in sec . [ sec : phase encoding scheme i ] and sec .[ sec : phase encoding scheme ii ] .then , we prove the unconditional security of our schemes in sec . [ sec : proof ] , and we present some simulation results of the key generation rate based on realistic parameters in sec . [ sec : simulation ] . finally , we summarize this paper in sec . [sec : summary ] .in this section , we introduce mdiqkd protocol whose description is generic for all the schemes that we will introduce in the following sections . the mdiqkd protocol runs as follows .step ( 1 ) : each of alice and bob prepares a signal pulse and a reference pulse , and each of alice and bob applies phase modulation to the signal pulse , which is randomly chosen from , , , and . 
here , ( ) defines ( )-basis .alice and bob send both pulses through quantum channels to eve who possesses mu .step ( 2 ) : mu performs some measurement , and announces whether the measurement outcome is successful or not .it also broadcasts whether the successful event is the detection of type-0 or type-1 ( the two types of the successful outcomes correspond to two specific bell states ) .step ( 3 ) : if the measurement outcome is successful , then alice and bob keep their data .otherwise , they discard the data .when the outcome is successful , alice and bob broadcast their bases and they keep the data only when the bases match , which we call sifted key . depending on the type of the successful event and the basis that they used , bob may or may not perform bit - flip on his sifted key .step ( 4 ) : alice and bob repeat ( 1)-(3 ) many times until they have large enough number of the sifted key .step ( 5 ) : they sacrifice a portion of the data as the test bits to estimate the bit error rate and the phase error rate on the remaining data ( code bits ) .step ( 6 ) : if the estimated bit error and phase error rates are too high , then they abort the protocol , otherwise they proceed .step ( 7 ) : alice and bob agree over a public channel on an error correcting code and on a hash function depending on the bit and phase error rate on the code bits . after performing error correction and privacy amplification , they share the key .the role of the mu in eve is to establish a quantum correlation , i.e. , a bell state , between alice and bob to generate the key .if it can establish the strong correlation , then alice and bob can generate the key , and if it can not , then it only results in a high bit error rate to be detected by alice and bob and they abort the protocol . as we will see later , since alice and bob can judge whether they can generate a key or not by only checking the experimental data as well as information on the fidelity between the density matrices in - basis and -basis , it does not matter who performs the measurement nor what kind of measurement is actually done as long as mu broadcasts whether the measurement outcome was successful together with the information of whether the successful outcome is type-0 or type-1 . in the security proof, we assume that mu is totally under the control of eve . in practice , however , we should choose an appropriate measurement that establishes the strong correlation under the normal operation , i.e. , the situation without eve who induces the channel losses and noises . in the following sections , we will propose two phase encoding mdiqkd schemes .in this section , we propose an experimental setup for mdiqkd with phase encoding scheme , which is depicted in fig .[ setup1 ] .this scheme will be proven to be unconditionally secure , i.e. , secure against the most general type of attacks allowed by quantum mechanics . in this setup , we assume that the intensity of alice s signal ( reference ) pulse matches with that of bob s signal ( reference ) pulse when they enter mu . in order to lock the relative phase , we use strong pulses as the reference pulses . in pl unit in the figure , the relative phase between the two strong pulses is measured in two polarization modes separately . the measurement result is denoted by ( here , the arrow represents two entries that correspond to the two relative phases ) . 
depending on this information , appropriate phase modulations for two polarization modesare applied to incoming signal pulse from alice .then , alice s and bob s signal pulses are input into the 50/50 beam splitter which is followed by two single - photon threshold detectors .the successful event of type-0 ( type-1 ) in step ( 2 ) is defined as the event where only d0 ( d1 ) clicks . in the case of type-1 successful detection event, bob applies bit flip to his sifted key ( we define the phase relationship of bs in such a way that d1 never clicks when the phases of the two input signal coherent pulses are the same ) . .then , the phase shift of for each polarization mode is applied to one of the signal pulses , and they will be detected by d0 and d1 after the interference at the 50:50 beam splitter bs .[ setup1 ] ] roughly speaking , our scheme performs double bb84 , i.e. , each of alice and bob is sending signals in the bb84 states , without phase randomization .differences between our scheme and the polarization encoding mdiqkd scheme include that alice and bob do not need to share the reference frame for the polarization mode , since mu performs the feed - forward control of the polarization , and our scheme intrinsically possesses the basis - dependent flaw .to see how this particular setup establishes the quantum correlation under the normal operation , it is convenient to consider an entanglement distribution scheme , which is mathematically equivalent to the actual protocol . for the simplicity of the discussion, we assume the perfect phase locking for the moment and we only consider the case where both of alice and bob use -basis .we skip the discussion for -basis , however it holds in a similar manner . in this case , the actual protocol is equivalently described as follows .first , alice prepares two systems in the following state , which is a purification of the -basis density matrix , and sends the second system to mu through the quantum channel .here , and represent coherent states that alice prepares in the actual protocol ( represents the mean photon number or inetensity ) , and are eigenstate of the computational basis ( basis ) , which is related with -basis eigenstate through and . for the later convenience ,we also define -basis states as and .moreover , the subscript of in represents that alice is to measure her qubit along -basis , the subscript of in refers to the party who prepares the system , and the superscript represents the relative phase of the superposition .similarly , bob also prepares two systems in a similar state , sends the second system to mu , and performs -basis measurement . note that -basis measurement by alice and bob can be delayed after eve s announcement of the successful event without losing any generalities in the security analysis , and we assume this delay in what follows .in order to see the joint state of the qubit pair after the announcement , note that the beam splitter converts the joint state into the following state here , for the simplicity of the discussion , we assume that there is no channel losses , we define , and represents the vacuum state .moreover , the subscripts and represent the output ports of the beam splitter . if detector d0( d1 ) detects photons and the other detector d1 ( d0 ) detects the vacuum state , i.e. 
, type-0 ( type-1 ) event , it is shown in the appendix a that the joint probability of having type-0 ( type-1 ) successful event and alice and bob share the maximally entangled state ( ) is .we note that since , alice and bob do not always share this state , and with a joint probability of , they have type-0 ( type-1 ) successful event and share the maximally entangled state with the phase error , i.e. , the bit error in -basis , as ( ) .note that the bit - flip operation in type-1 successful detection can be equivalently performed by rotation around -basis before bob performs basis measurement .in other words , rotation around -basis before -basis measurement does not change the statistics of the -basis measurement followed by the bit - flip .thanks to this property , we can conclude that alice and bob share with probability of and with probability of after the rotation .this means that even if alice and bob are given the successful detection event , they can not be sure whether they share or , however , if they choose a small enough , then the phase error rate ( the rate of the state in the qubit pairs remaining after the successful events or equivalently , the rate of -basis bit error among all the shared qubit pairs ) becomes small and they can generate a pure state by phase error correction , which is equivalently done by privacy amplification in the actual protocol .we note that the above discussion is valid only for the case without noises and losses , and we will prove the security against the most general attack in sec .[ sec : proof ] without relying on the argument given in this section .we remark that in the phase encoding scheme i , it is important that alice and bob know quite well about the four states that they prepare .this may be accomplished by using state tomography with homodyne measurement involving the use of the strong reference pulse .[ scheme ii ] in this section , we propose the second experimental setup for mdiqkd with phase encoding scheme .like scheme i , this scheme will also be proven to be unconditionally secure . in this scheme , the coherent pulses that alice and bobsend out are exactly the same as those in the standard phase encoding bb84 , i.e. , where subscripts and respectively denote the signal pulse and the reference pulse , is a completely random phase , is randomly chosen from to encode the information . after entering the mu ,each pulse pair is converted from a phase coding signal to a polarization coding signal by a phase - to - polarization converter ( see details below ) .we note that thanks to the phase randomization by , the joint state of the signal pulse and the reference pulse is a classical mixture of photon number states . in fig .[ setup2 ] , we show the schematics of the converter . this converter performs the phase - to - polarization conversion : to , where is a projector that projects the joint system of the signal and reference pulses to a two - dimensional single - photon subspace spanned by where and represent the photon number , and ( ) represents the horizontal ( vertical ) polarization state of a single - photon . 
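as noted above , the phase randomization makes each emitted pulse pair a poissonian mixture of photon number states , which is what the decoy - state estimation used later relies on . a short illustration ( the mean photon number below is a placeholder , not a value from the paper ) :

....
import numpy as np
from math import exp, factorial

def photon_number_distribution(mu, n_max=10):
    """p(n) = e^{-mu} mu^n / n! : photon - number statistics of a
    phase - randomized coherent pulse with mean photon number mu ."""
    return np.array([exp(-mu) * mu**k / factorial(k) for k in range(n_max + 1)])

mu = 0.1                                    # placeholder value only
p = photon_number_distribution(mu)
multi_photon_fraction = 1.0 - p[0] - p[1]   # probability of emitting two or more photons
....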
to see how it works ,let us follow the time evolution of the input state .at the polarization beam splitter ( pbs in fig .[ setup2 ] ) , the signal and reference pulses first split into two polarization modes , h and v , and we throw away the pulses being routed to v mode .then , in h mode , the signal pulse and the reference pulse are routed to different paths by using an optical switch , and we apply -rotation only to one of the paths to convert h to v. at this point , we essentially have , where the subscripts of `` '' and `` '' respectively denote the upper path and the lower path .finally , these spatial modes and are combined together by using a polarization beam splitter so that we have in the output port depicted as `` out '' . in practice , since the birefringence of the quantum channel can be highly time dependent and the polarization state of the input pulses to mu may randomly change with time , i.e. , the input polarization state is a completely mixed state , we can not deterministically distill a pure polarization state , and thus the conversion efficiency can never be perfect .in other words , one may consider the same conversion of the v mode just after the first polarization beam splitter , however it is impossible to combine the resulting polarization pulses from v mode and the one from h mode into a single mode .we assume that mu has two converters , one is for the conversion of alice s pulse and the other one is for bob s pulse , and the two output ports `` out '' are connected to exactly the same bell measurement unit in the polarization encoding mdiqkd scheme in fig .[ bell m ] .this bell measurement unit consists of a 50:50 beam splitter , two polarization beam splitters , and four single - photon detectors , which only distinguishes perfectly two out of the four bell states of and .the polarization beam splitters discriminate between and ( note that we choose and modes rather than h and v modes since our computational basis is and ) .suppose that a single - photon enters both from alice and bob . in this case, the click of d0 + and d0- or d1 + and d1- means the detection of , and the click of d0 + and d1- or d0- and d1 + means the detection of ( see fig .[ bell m ] ) . in this scheme , since the use of coherent light induces non - zero bit error rate in -basis ( -basis ) , we consider to generate the key from -basis and we use the data in -basis only to estimate the bit error rate in this basis conditioned on that both of alice and bob emit a single - photon , which determines the amount of privacy amplification . 
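the click pattern of the four detectors determines which of the two identified bell states is announced ; the sketch below encodes the pattern - to - outcome map described above . the labels 'bell_0' and 'bell_1' are ours , since the explicit bell - state symbols are not reproduced here .

....
def classify_clicks(d0_plus, d0_minus, d1_plus, d1_minus):
    """maps the click pattern of the four single - photon detectors to the announced
    outcome ; exactly two detectors must fire , and only the patterns listed in the
    text above count as successful ."""
    clicks = {'d0+': d0_plus, 'd0-': d0_minus, 'd1+': d1_plus, 'd1-': d1_minus}
    fired = {name for name, hit in clicks.items() if hit}
    if len(fired) != 2:
        return None                                   # inconclusive
    if fired in ({'d0+', 'd0-'}, {'d1+', 'd1-'}):
        return 'bell_0'                               # both clicks on the same output arm
    if fired in ({'d0+', 'd1-'}, {'d0-', 'd1+'}):
        return 'bell_1'                               # clicks on different arms
    return None                                       # e.g. d0+ and d1+ : not successful
....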
by considering a single - photon polarization input both from alice and bob , one can see that bob should not apply the bit flip only when alice and bob use -basis and is detected in mu , and bob should apply the bit flip in all the other successful events to share the same bit value .accordingly , the bit error in -basis is given by the successful detection event conditioned on that alice and bob s polarization are identical .as for -basis , the bit error is detection given the orthogonal polarizations or detection given the identical polarization .assuming completely random input polarization state , our converter successfully converts the single - photon pulse with a probability of .note in the normal experiment that the birefringence effect between alice and the converter and the one between bob and the converter are random and independent , however it only leads to fluctuating coincidence rate of alice s and bob s signals at the bell measurement , but does not affect the qber .moreover , the fluctuation increases the single - photon loss inserted into the bell measurement .especially , the events that the output of the converter for alice is the vacuum and the one for bob is a single - photon , and vice versa would increase compared to the case where we have no birefringence effect .however , this is not a problem since the bell measurement does not output the conclusive events in these cases unless the dark counting occurs .thus , the random and independent polarization fluctuation in the normal experiment is not a problem , and we will simply assume in our simulation in sec .[ simulationii ] that this fluctuation can be modeled just by loss .we emphasize that we do not rely on these assumptions at all when we prove the security , and our security proof applies to any channels and mus . for the better performance and also for the simplicity of analysis , we assume the use of infinite number of decoy states to estimate the fraction of the probability of successful event conditioned on that both of alice and bob emit a single - photon .one of the differences in our analysis from the work in is that we will take into account the imperfection of alice s and bob s source , i.e. , the decay of the fidelity between two density matrices in two bases .we also remark that since the h and v modes are defined locally in mu , alice and bob do not need to share the reference frame for the polarization mode , which is one of the qualitative differences from polarization encoding miqkd scheme .this section is devoted to the unconditional security proof , i.e. , the security proof against the most general attacks , of our schemes .since both of our schemes are based on bb84 and the basis - dependent flaw in both protocols can be treated in the same manner , we can prove the security in a unified manner .if the states sent by alice and bob were basis independent , i.e. , the density matrices of -basis and -basis were the same , then the security proof of the original bb84 could directly apply ( also see for a bit more detailed discussion of this proof ) , however they are basis dependent in our case .fortunately , security proof of standard bb84 with basis - dependent flaw has already been shown to be secure , and we generalize this idea to our case where we have basis - dependent flaw from both of alice and bob . in order to do so , we consider a virtual protocol that alice and bob get together and the basis choices by alice and bob are made via measurement processes on the so - called quantum coin . 
in this virtual protocol of the phase encoding scheme i ,alice and bob prepare joint systems in the state since just replacing the state , for instance where and in the ket respectively represents the single - photon and the vacuum , is enough to apply the following proof to the phase encoding scheme ii , we discuss only the security of the phase encoding scheme i in what follows . in eq .( [ coin - joint - state ] ) , the first system denoted by is given to eve just after the preparation , and it informs eve of whether the bases to be used by alice and bob match or not .the second system , denoted by , is a copy of the first system and this system is given to bob who measures this system with basis to know whether alice s and bob s bases match or not . if his measurement outcome is ( ) , then he uses the same ( the other ) basis to be used by alice ( note that no classical communication is needed in order for bob to know alice s basis since alice and bob get together ) .the third system , which is denoted by and we call `` quantum coin '' , is possessed and to be measured by alice along basis to determine her basis choice , and the measurement outcome will be sent to eve after eve broadcasts the measurement outcome at mu . moreover ,all the second systems of , , , and are sent to eve .note in this formalism that the information , including classical information and quantum information , available to eve is the same as those in the actual protocol , and the generated key is also the same as the one of the actual protocol since the statistics of alice s and bob s raw data is exactly the same as the one of the actual protocol .thus , we are allowed to work on this virtual protocol for the security proof . the first system given to eve in eq .( [ coin - joint - state ] ) allows her to know which coherent pulses contain data in the sifted key and she can post - select only the relevant pulses .thus , without the loss of any generalities of the security proof , we can concentrate only on the post - selected version of the state in eq .( [ coin - joint - state ] ) as the most important quantity in the proof is the phase error rate in the code bits . the definition of the phase error rate is the rate of bit errors along -basis in the sifted key if they had chosen -basis as the measurement basis when both of them have sent pulses in -basis . if alice and bob have a good estimation of this rate as well as the bit error rate in the sifted key ( the bit error rate in -basis given alice and bob have chosen -basis for the state preparation ) , they can perform hashing in -basis and -basis simultaneously to distill pairs of qubits in the state whose fidelity with respect to the product state of the maximally entangled state is close to . according to the discussion on the universal composability , the key distilled via -basis measurement on such a stateis composably secure and moreover exactly the same key can be generated only by classical means , i.e. , error correction and privacy amplification .thus , we are left only with the phase error estimation . for the simplicity of the discussion , we assume the large number of successful events so that we neglect all the statistical fluctuations and we are allowed to work on a probability rather than the relative frequency . 
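as a numerical illustration of what such an estimate looks like , the sketch below extracts a worst - case phase error rate from a lo - preskill - type constraint relating the observed bit error rate and the imbalance of the quantum coin . the specific inequality is our assumed stand - in for the bound discussed next , not the paper s exact expression .

....
import numpy as np

def worst_case_phase_error(delta_bit, Delta, grid_size=100001):
    """largest phase error rate dp compatible with the ( assumed ) constraint
    sqrt(delta_bit * dp) + sqrt((1 - delta_bit) * (1 - dp)) >= 1 - 2 * Delta ,
    where Delta quantifies the basis dependence seen by the quantum coin ."""
    dp = np.linspace(0.0, 1.0, grid_size)
    lhs = np.sqrt(delta_bit * dp) + np.sqrt((1.0 - delta_bit) * (1.0 - dp))
    feasible = dp[lhs >= 1.0 - 2.0 * Delta]
    return float(feasible.max()) if feasible.size else 1.0
....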
the quantity we have to estimate is the bit error along -basis , denoted by , given alice and bob send state , which is different from the experimentally available bit error rate along -basis given alice and bob send state .intuitively , if the basis - dependent flaw is very small , and should be very close since the states are almost indistinguishable . to make this intuition rigorous, we briefly review the idea by which applies bloch sphere bound to the quantum coin .suppose that we randomly choose -basis or -basis as the measurement basis for each quantum coin .let and be fraction that those quantum coins result in in -basis and -basis measurement , respectively . what bloch sphere bound , i.e. , eq .( 13 ) or eq .( 14 ) in or eq .( a1 ) in , tells us in our case is that no matter how the correlations among the quantum coins are and no matter what the state for the quantum coins is , thanks to the randomly chosen bases , the following inequality holds with probability exponentially close to in , by applying this bound separately to the quantum coins that are conditional on having phase errors and to those that are conditional on having no phase error , and furthermore by combining those inequalities using bayes s rule , we have here , is equivalent to the probability that the measurement outcome of the quantum coin along -basis is given the successful event in mu .note that this probability can be enhanced by eve who chooses carefully the pulses , and eve could attribute all the loss events to the quantum coins being in the state .thus , we have an upper bound of in the worst case scenario as and where is the frequency of the successful event .note that we have not used the explicit form of and , where , in the derivation of eqs .( [ phase error bound ] ) , ( [ fdelta ] ) , and ( [ delta ] ) , and the important point is that the state and are the purification of alice s and bob s density matrices for both bases .since there always exists purification states of and , which are respectively denoted by and , such that , can be rewritten by /2\ , , \label{fiddelta}\end{aligned}\ ] ] where represents alice s density matrix of basis and all the other density matrices are defined by the same manner .our expression of has the product of two fidelities , while the standard bb84 with basis - dependent flaw in has only one fidelity ( the fidelity between alice s density matrices in and bases ) .the two products may lead to poor performance of our schemes compared to that of standard qkd in terms of the achievable distances , however our schemes have the huge advantage over the standard qkd that there is no side - channel in the detectors .finally , the key generation rate , given -basis , in the asymptotic limit of large is given by where is the bit error rate in -basis , is the inefficiency of the error correcting code , and .we can trivially obtain the key generation rate for -basis just by interchanging -basis in all the discussions above to -basis .we remark in our security proof that we have assumed nothing about what kind of measurement mu conducts but that it announces whether it detects the successful event and the type of the event ( this announcement allows us to calculate and the error rates ) .thus , mu can be assumed to be totally under the control of eve .in the following subsections , we show some examples of the key generation rate of each of our schemes assuming typical experimental parameters taken from gobby - yuan - shields ( gys ) experiment unless otherwise stated .moreover , 
we assume that the imperfect phase modulation is the main source of the decay of the fidelity between the density matrices in two bases , and we evaluate the effect of this imperfection on the key generation rate . in the phase encoding scheme i , the important quantity for the security can be expressed as \ , .\label{delta11}\end{aligned}\ ] ] note that this quantity is dependent on the intensity of alice s and bob s sources . as we have mentioned in sec . iii , this quantity may be estimated relatively easily via tomography involving homodyne measurement . [ caption of fig . [ fig : key33 ] : key generation rate as a function of the distance between alice and bob ; dashed line : ( a ) mu is at bob s side ; solid line : ( b ) mu is just in the middle between alice and bob . ] [ caption of fig . [ fig : intensity33 ] : the optimal intensity that outputs fig . [ fig : key33 ] , as a function of the distance between alice and bob . ] to simulate the resulting key generation rate , we assume that the bit error stems from the dark counting as well as alignment errors due to imperfect phase locking or imperfect optical components . the alignment error is assumed to be proportional to the probability of having a correct click caused only by the optical detection , not by the dark counting . moreover , we assume that all the detectors have the same characteristics for the simplicity of the analysis , and that alice and bob choose the intensities of the signal lights in such a way that the intensities of the incoming pulses to mu are the same . finally , we assume the quantum inefficiency of the detectors to be part of the losses in the quantum channels . with all the assumptions , we may express the resulting experimental parameters as (1-p_{\rm dark})\nonumber\\ & + & ( 1-p_{\rm dark})e^{-2\alpha_{\rm in}}p_{\rm dark}\nonumber\\ \gamma_{\rm suc}&=&\gamma_{\rm suc}^{(x)}+\gamma_{\rm suc}^{(y)}\nonumber\\ \delta_x&=&\delta_y=\big[e_{\rm ali}(1-p_{\rm dark})^2(1-e^{-2\alpha_{\rm in}})\nonumber\\ & + & ( 1-p_{\rm dark})e^{-2\alpha_{\rm in}}p_{\rm dark}\big]/\gamma_{\rm suc}^{(x)}\nonumber\\ \alpha_{\rm in}&\equiv&\alpha_{a}\eta_{a}=\alpha_{b}\eta_{b}\nonumber\\ \eta_{a}&=&\eta_{{\rm det } , a}10^{-\xi_{a } l_{a}/10}\nonumber\\ \eta_{b}&=&\eta_{{\rm det } , b}10^{-\xi_{b } l_{b}/10}\ , . \label{ex data i}\end{aligned}\ ] ] here , is the dark count rate of the detector , is the alignment error rate , is alice s ( bob s ) overall transmission rate , ( ) is the quantum efficiency of alice s ( bob s ) detector , is alice s ( bob s ) channel transmission rate , and ( ) is the distance between alice ( bob ) and mu . the first term and the second term in or respectively represent the alignment error , which is assumed to be proportional to the probability of having a correct bit value due to the detection of the light , and errors due to dark counting ( one detector clicks due to the dark counting while the other one does not ) . we take the following parameters from the gys experiment : , , ( db / km ) , , and , and we simulate the key generation rate as a function of the distance between alice and bob in fig . [ fig : key33 ] . in the figure , we consider two settings : ( a ) mu is at bob s side , i.e.
, ( b ) mu is just in the middle between alice and bob .the reason why we consider these setting is that the basis - dependent flaw is dependent on intensities that alice and bob employ , and it is not trivial where we should place mu for the better performance .since mdiqkd polarization encoding scheme without basis - dependent flaw achieves almost twice the distance of bb84 , we may expect that the setting ( b ) could achieve almost twice the distance of bb84 without phase randomization that achieves about 13 ( km ) with the same experimental parameters .the simulation result , however , does not follow this intuition since we have the basis - dependent flaw not only from alice s side but also from bob s side .thus , the advantage that we obtain from putting mu between alice and bob is overwhelmed by the double basis - dependent flaw . in each setting , we have optimized the intensity of the coherent pulses for each distance ( see fig .[ fig : intensity33 ] ) . in order to explain why the optimal is so small, note that scheme i intrinsically suffers from the basis - dependent flaw due to eq .( [ delta11 ] ) .this means that if we use relatively large , then we can not generate the key due to the flaw .actually , when we set , which is a typical order of the amplitude for decoy bb84 , one can see that the upper bound of the phase error rate is even in the zero distance , i.e. , , and we have no chance to generate the key with this amplitude .thus , alice and bob have to reduce the intensities in order to suppress the basis - dependent flaw .also , as the distance gets larger and the losses get increased , alice and bob have to use weaker pulses since larger losses can be exploited by eve to enhance the basis - dependent flaw according to eq .( [ fdelta ] ) , and they can reduce the intensities until it reaches the cut - off value where the detection of the weak pulses is overwhelmed by the dark counts . in the above simulation , we have assumed that alice and bob can prepare states very accurately , however in reality , they can only prepare approximate states due to the imperfection of the sources .this imperfection gives more basis - dependent flaw , and in order to estimate the effect of this imperfection , we assume that the fidelity between the two actually prepared density matrices in two bases is approximated by the fidelity between the following density matrices ( see appendix b for the detail ) and where we assume an imperfect phase modulator whose degree of the phase modulation error is proportional to the target phase modulation value , and represents the imperfection of the phase modulation that is related with the extinction ratio as in this equation , we assume that the non - zero extinction ratio is only due to the imperfection of the phase modulators . since imperfect phase modulation results in the same effect as the alignment errors , i.e. , the pulses are routed to a wrong output port , we assume that the alignment error rate is increased with this imperfection . thus , in the simulation accommodating the imperfection of the phase modulation , we replace with . 
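to reproduce curves such as those of fig . [ fig : key33 ] , the success rate and the error rates above are combined in an asymptotic key formula . the sketch below assumes the standard shor - preskill / gllp form of that formula and the fiber - loss model of eq . ( ex data i ) ; the error - correction inefficiency and the numbers in the usage line are illustrative placeholders , not the gys values quoted in the text .

....
import numpy as np

def h2(p):
    """binary entropy function ."""
    p = np.clip(p, 1e-15, 1.0 - 1e-15)
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def channel_transmittance(eta_det, xi_db_per_km, distance_km):
    """eta = eta_det * 10^(-xi * l / 10) , as in eq . ( ex data i ) ."""
    return eta_det * 10.0 ** (-xi_db_per_km * distance_km / 10.0)

def asymptotic_key_rate(gamma_suc, bit_error, phase_error, f_ec=1.2):
    """assumed standard form r = gamma_suc * [ 1 - h(phase_error) - f_ec * h(bit_error) ] ;
    the paper s eq . ( key rate ) is the authoritative expression ."""
    return gamma_suc * (1.0 - h2(phase_error) - f_ec * h2(bit_error))

# illustrative usage only ( parameter values are not taken from the paper )
eta_b = channel_transmittance(eta_det=0.1, xi_db_per_km=0.2, distance_km=20.0)
....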
here , we have used a pessimistic assumption that the effect of the phase modulation becomes -times higher than before since each of alice and bob has one phase modulator and mu has two phase modulators for the phase shift of the two polarization modes ( note from eq . ( [ mimperfect phase modulator ] ) that is approximately proportional to , thus a 4 times degradation in terms of the accuracy of the phase modulation results in a -times degradation in terms of the extinction ratio ) . we also remark that in practice , it is more likely that the phase encoding errors are independent , in which case a factor of 4 will suffice and the key rate will actually be higher than what is presented in our paper . on the other hand , we have to use the following when we consider the security : /2\,.\end{aligned}\ ] ] in figs . [ fig : keyimpferfectpmi ] and [ fig : intensityimpferfectpmi ] , we plot the key generation rate and the corresponding optimal alice s mean photon numbers ( ) as a function of the distance between alice and bob . in the figures , we define that satisfies as , where is the typical order of in some experiments . we have confirmed that we can not generate the key when . however , we can see in the figures that if the accuracy of the phase modulation is increased three times or five times , i.e. , and , then we can generate the key . like the case in fig . [ fig : intensity33 ] , the small optimal mean photon number can be intuitively understood by the arguments that we have already made in this section . in order to investigate the feasibility of the phase encoding scheme i with the current technologies , we replace , , and with , , and . we see in fig . [ fig : keyimpferfectpminew ] that the key generation is possible over much longer distances with those parameters , assuming precise control of the intensities of the laser source . we also show the corresponding optimal mean photon number in fig . [ fig : intensityimpferfectpminew ] . we note that thanks to the higher quantum efficiency , the success probability becomes higher , so that alice and bob can use a larger mean photon number compared to those in figs . [ fig : intensityimpferfectpmi ] and [ fig : intensityimpferfectpminew ] . [ caption of fig . [ fig : keyimpferfectpmi ] : key generation rates with imperfect phase modulators ; represents the typical amount of the phase modulation error , and the key rate is plotted for the smaller imperfections ; dashed line : mu is at bob s side ; solid line : mu is just in the middle between alice and bob . ] [ caption of fig . [ fig : intensityimpferfectpmi ] : the optimal mean photon number that outputs fig . [ fig : keyimpferfectpmi ] as a function of the distance between alice and bob . ] [ caption of fig . [ fig : keyimpferfectpminew ] : key generation rate with the upgraded parameters ; dashed line : mu is at bob s side ; solid line : mu is just in the middle between alice and bob . ] [ caption of fig . [ fig : intensityimpferfectpminew ] : the optimal mean photon number that outputs fig . [ fig : keyimpferfectpminew ] as a function of the distance between alice and bob . ] in the phase encoding scheme ii , note that we can generate the key only from the successful detection events in mu given that both alice and bob send out a single photon , since if either or both of alice and bob emit more than one photon , then eve can employ the so - called photon number splitting attack . thus , the important quantities to estimate are , , , , which respectively represent the gain in -basis given both alice and bob emit a single photon , the phase error rate given alice and bob emit a single photon , the overall bit error rate in -basis , and the overall gain in -basis .
to estimate these quantities stemming from the simultaneous single - photon emission ,we assume the use of infinite number of decoy states for the simplicity of analysis .another important quantity in our study is the fidelity ( ) between alice s ( bob s ) -basis and -basis density matrices of only single - photon component , _ not _ whole optical modes .if this fidelity is given , then we have for the simplicity of the discussion , we consider the case of in our simulation .the estimation of the fidelity only in the single - photon part is very important , however to the best of our knowledge we do not know any experiment directly measuring this quantity .this measurement may require photon number resolving detectors and very accurate interferometers .thus , we again assume that the degradation of the fidelity is only due to the imperfect phase modulation given by eq .( [ mimperfect phase modulator ] ) , and we presume that the fidelity of the two density matrices between the two bases is approximated by the fidelity between the following density matrices ( see appendix b for the detail ) \nonumber\\\rho_{y}^{(1)}&=&\frac{1}{2}\big[{\hat p}\left(\frac{{\left| 0_z \right\rangle}+i e^{i|\delta|/2}{\left| 1_z \right\rangle}}{\sqrt{2}}\right)\nonumber\\ & + & { \hat p}\left(\frac{{\left| 0_z \right\rangle}-i e^{-i|\delta|/2}{\left| 1_z \right\rangle}}{\sqrt{2}}\right)\big]\,.\end{aligned}\ ] ] with these parameters , we can express the key generation rate given alice and bob use -basis as -f(\delta_{x})q_{x}h(\delta_{x})\,,\end{aligned}\ ] ] where is the version of in eq .( [ key rate ] ) . to simulate the resulting key generation rate ,the bit errors are assumed to stem from multi - photon component , the dark counting , and the misalignment that is assumed to be proportional to the probability of obtaining the correct bit values only due to the detection by optical pulses .like before , we also assume that all the detectors have the same characteristics , alice and bob choose the intensities of the signal lights in such a way that the intensities of the incoming pulses to mu are the same , and all the quantum inefficiencies of the detectors can be attributed to part of the losses in the quantum channel .finally , alice s and bob s coherent light sources are assumed to be phase randomized , and the imperfect phase modulation is represented by the increase of the alignment error rate . 
with these assumptions , we may have the following resulting experimental parameters \nonumber\\ & + & w^{(2,1)}+w^{(2,0)}\nonumber\\ \delta_{x}^{(1,1)}&=&\big\{4\alpha_{a}\alpha_{b}\eta_{a}\eta_{b}e^{-2(\alpha_{a}+\alpha_{b})}p_{\rm dark}(1-p_{\rm dark})^2/2\nonumber\\ & + & 2(e_{\rm ali}+4\eta_{\rm ex})\alpha_{a}\alpha_{b}\eta_{a}\eta_{b}e^{-2(\alpha_{a}+\alpha_{b})}(1-p_{\rm dark})^2\nonumber\\ & + & ( w^{(2,1)}+w^{(2,0)})/2\big\}/q^{(1,1)}_{x}\nonumber\\ q^{(1,1)}_{y}&=&q^{(1,1)}_{x}\nonumber\\ \delta^{(1,1)}_{y}&=&\delta^{(1,1)}_{x}\nonumber\\ w^{(2,1)}&\equiv&8\alpha_{a}\alpha_{b}e^{-2(\alpha_{a}+\alpha_{b})}\big[\eta_{a}(1-\eta_{b})+(1-\eta_{a})\eta_{b}\big]\nonumber\\ & \times&p_{\rm dark}(1-p_{\rm dark})^2\nonumber\\ w^{(2,0)}&\equiv&16\alpha_{a}\alpha_{b}(1-\eta_{a})(1-\eta_{b})e^{-2(\alpha_{a}+\alpha_{b})}\nonumber\\ & \times&p_{\rm dark}^2(1-p_{\rm dark})^2\nonumber\\ q_{x}&=&2\left[1-(1-p_{\rm dark})e^{-\alpha_{\rm in}}\right]^2(1-p_{\rm dark})^2e^{-2\alpha_{\rm in}}+v \nonumber\\ \delta_{x}&=&v+(e_{\rm ali}+4\eta_{\rm ex})2\left(1-e^{-\alpha_{\rm in}}\right)^2 \nonumber\\ & \times&(1-p_{\rm dark})^2e^{-2\alpha_{\rm in}}\nonumber\\ v&\equiv&\frac{p_{\rm dark}(1-p_{\rm dark})}{2\pi}\nonumber\\ & \times&\int_{0}^{2\pi}d\theta\big[1-(1-p_{\rm dark})e^{-\alpha_{\rm in}|1+e^{i\theta}|^2}\big ] \nonumber\\ & \times&\big[(1-p_{\rm dark})e^{-\alpha_{\rm in}|1-e^{i\theta}|^2}\big]\nonumber\\ & + & \frac{p_{\rm dark}(1-p_{\rm dark})}{2\pi}\nonumber\\ & \times&\int_{0}^{2\pi}d\theta\big[1-(1-p_{\rm dark})e^{-\alpha_{\rm in}|1-e^{i\theta}|^2}\big ] \nonumber\\ & \times&\big[(1-p_{\rm dark})e^{-\alpha_{\rm in}|1+e^{i\theta}|^2}\big]\nonumber\\ \alpha_{\rm in}&\equiv&\alpha_{a}\eta_{a}=\alpha_{b}\eta_{b}\nonumber\\ \eta_{a}&=&\eta_{{\rm det } , a}10^{-\xi_{a } l_{a}/10}/2\nonumber\\ \eta_{b}&=&\eta_{{\rm det } , b}10^{-\xi_{b } l_{b}/10}/2\ , \label{exp - data - ii}\end{aligned}\ ] ] note that ( ) represents each of the intensity of alice s ( bob s ) signal light and the reference light , _ not _ the total intensity of them , and and are divided by since the conversion efficiency of our converter is . in again comes from the pessimistic assumption that each of alice s and bob s phase modulator is imperfect , and ( ) represents the probability of the event where both of alice and bob emit a single - photon and only one ( zero ) photon is detected but the successful detection event is obtained due to the dark counting . on the other hand , the quantity that quantifies the basis - dependent flaw in the present case is upper bounded by \nonumber\\ q^{(1,1)}&\equiv&(q^{(1,1)}_{x}+q^{(1,1)}_{y})/2\end{aligned}\ ] ] where is the probability that mu receives a single - photon both from alice andbob simultaneously conditioned on that each of alice and bob sends out a single - photon .we remark that in this scheme is only dependent on the accuracy of the phase modulation .this is different from scheme i where the manipulation of the intensities of the pulses can affect the basis - dependent flaw . in the simulation, we again assume gys experimental parameters and we consider two settings : ( a ) mu is at bob s side and ( b ) mu is just in the middle between alice and bob .note that is independent of and in the phase encoding scheme ii case . 
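the dark - count contribution v in eq . ( exp - data - ii ) involves an average over the random phase theta ; since |1 + e^{i theta}|^2 = 2 + 2 cos theta and |1 - e^{i theta}|^2 = 2 - 2 cos theta , it can be evaluated numerically as in the sketch below ( a plain riemann sum over theta ; the parameter values in the last line are illustrative only ) .

....
import numpy as np

def dark_count_term_v(alpha_in, p_dark, samples=20000):
    """numerically evaluates the term v of eq . ( exp - data - ii ) : dark counts
    coinciding with the interference of the two phase - randomized pulses ."""
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    t_plus = np.exp(-alpha_in * (2.0 + 2.0 * np.cos(theta)))   # exp(-alpha_in |1 + e^{i theta}|^2)
    t_minus = np.exp(-alpha_in * (2.0 - 2.0 * np.cos(theta)))  # exp(-alpha_in |1 - e^{i theta}|^2)
    q = 1.0 - p_dark
    integrand = (1.0 - q * t_plus) * (q * t_minus) + (1.0 - q * t_minus) * (q * t_plus)
    # prefactor p_dark * (1 - p_dark) / (2 pi) times the integral over theta
    return p_dark * q / (2.0 * np.pi) * integrand.mean() * 2.0 * np.pi

v = dark_count_term_v(alpha_in=0.01, p_dark=1e-6)              # illustrative values only
....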
in fig . [ f1 ] , we plot the key generation rates of ( a ) and ( b ) for , , , ( recall from eq . ( [ mimperfect phase modulator ] ) that that corresponds to the typical extinction ratio of ) , which respectively correspond to , , , and , and the achievable distances of ( a ) and ( b ) increase with the improvement of the accuracy , i.e. , with the decrease of . we have confirmed that no key can be distilled in ( a ) and ( b ) when . the figure shows that the achievable distance drops significantly with the degradation of the accuracy of the phase modulator , and the main reason for this fast decay is that is approximated by and this denominator decreases exponentially with the increase of the distance . we also plot the corresponding optimal in fig . notice that the mean photon number increases in some regime in some cases of ( a ) , and recall that this increase does not change . if we increased the intensity in scheme i with the distance , then we would have more basis - dependent flaw , resulting in a shortening of the achievable distance . this may be an intuitive reason why we see no such increase in figs . [ fig : intensity33 ] , [ fig : intensityimpferfectpmi ] , and 9 . like in the phase encoding scheme i , we investigate the feasibility of the phase encoding scheme ii with the current technologies by replacing , , and with , , and . with this upgrade , we have confirmed the impossibility of the key generation ; however , if we double the quantum efficiency of the detector or , equivalently , if we assume the polarization encoding so that the factor of , which is introduced by the phase - to - polarization converter , is removed both from and in eq . ( [ exp - data - ii ] ) , then we can generate the key , which is shown in fig . [ f1new ] ( also see fig . ) . finally , we note that our simulation is essentially the same as the polarization coding since the fact that we use phase encoding is only reflected by the denominator of 2 in and in eq . ( [ exp - data - ii ] ) . thus , the behavior of the key generation rate against the degradation of the state preparation is the same also in polarization based mdiqkd . also note that even in the standard bb84 , decays exponentially with increasing distance . thus , we conclude that very precise state preparation is crucial for the security of not only mdiqkd but also standard qkd . we also note that our estimation of the fidelity might be too pessimistic since we have assumed that the degradation of the extinction ratio is only due to imperfect phase modulation . in reality , the imperfection of the mach - zehnder interferometer and other imperfections should contribute to the degradation , and the fidelity should be closer to than the one based on our model . [ caption of fig . [ f1 ] : key generation rates when , , , where is proportional to the amount of the phase modulation error ; dashed line : ( a ) mu is at bob s side ; solid line : ( b ) mu is just in the middle between alice and bob ; for each case of ( a ) and ( b ) the key generation rates increase monotonically with the decrease of , i.e. , with the improvement of the phase modulation ; the key rates of ( a ) and ( b ) when are almost superposed . ] [ caption of fig . [ f2 ] : the optimal mean photon number that outputs fig . [ f1 ] ; the bold lines correspond to ( a ) . ] [ caption of fig . [ f1new ] : key generation rates with , , and ; note that is doubled compared to the one of , or equivalently we effectively consider the polarization encoding ; dashed line : ( a ) mu is at bob s side ; solid line : ( b ) mu is just in the middle between alice and bob ; the key rates are almost superposed . ]
[ caption of fig . [ f2new ] : the optimal mean photon number that outputs fig . [ f1new ] ; the bold lines correspond to ( a ) . ] in summary , we have proposed two phase encoding mdiqkd schemes . the first scheme is based on the phase locking technique and the other one is based on the conversion of the pulses of the standard phase encoding bb84 into polarization modes . we proved the security of the first scheme , which intrinsically possesses the basis - dependent flaw , as well as the second scheme with the assumption of the basis - dependent flaw in the single - photon part of the pulses . based on the security proof , we also evaluated the effect of imperfect state preparation , and in particular we focused our attention on the imperfect phase modulation . while the first scheme can cover relatively short key generation distances , this scheme has the advantage that the basis - dependent flaw can be controlled by the intensities of the pulses . thanks to this property , we have confirmed based on a simple model that a 3- or 5-fold improvement in the accuracy of the phase modulation is enough to generate the key . moreover , we have confirmed that the key generation is possible even without these improvements if we implement this scheme using up - to - date technologies and the control of the intensities of the laser source is precise . on the other hand , it is not so clear to us how accurately one can lock the phase of two spatially separated laser sources , which is important for the performance of scheme i . our result still implies that scheme i can tolerate imperfect phase locking errors to some extent , which should be basically the same as the misalignment errors , but further analysis of the accuracy from the experimental viewpoint is necessary . we leave this problem for future studies . [ caption of fig . [ sf1 ] : key generation rate of standard bb84 with infinite decoy states in -basis when , , where is the amount of the phase modulation error . ] [ caption of fig . [ sf2 ] : the optimal mean photon number that outputs fig . [ sf1 ] . ] the second scheme can cover much longer distances when the fidelity of the _ single - photon components _ of the -basis and -basis density matrices is perfect or extremely close to perfect . when we consider slight degradations of the fidelity , however , we found that the achievable distances drop significantly . this suggests that we need a photon source with a very high fidelity , and very accurate estimation of the fidelity of the single - photon subspace is also indispensable . in our estimation of the imperfection of the phase modulation , we simply assume that the degradation of the extinction ratio is only due to imperfect phase modulation , which might be too pessimistic , as the imperfection of the mach - zehnder interferometer and other imperfections also contribute to the degradation . thus , the actual fidelity between the density matrices of the single - photon part in the two bases might be very close to 1 , which should be experimentally confirmed for secure communication . we note that the use of a passive device to prepare the state may be a promising way towards very accurate state preparation . we remark that the accurate preparation of the state is very important not only in mdiqkd but also in standard qkd , where eve can enhance the imbalance of the quantum coin exponentially with the increase of the distance .
to see this point , we respectively plot in fig .[ sf1 ] and fig .[ sf2 ] the key generation rate of standard bb84 with infinite decoy states in -basis and its optimal mean photon number assuming , , , , and .again , is the typical value of the phase modulation error , and we see in the figure that the degradation of the phase modulator in terms of the accuracy significantly decreases the achievable distance of secure key generation .one notices that standard decoy bb84 is more robust against the degradation since the probability that the measurement outcome of the quantum coin along -basis is given the successful detection of the signal by bob is written as rather than . on the other hand, one has to remember that we trust the operation of bob s detectors in this simulation , which may not hold in practice .finally , we neglect the effect of the fluctuation of the intensity and the center frequency of the laser light in our study , which we will analyze in the future works . in summary ,our work highlights the importance of very accurate preparation of the states to avoid basis - dependent flaws .we thank x. ma , m. curty , k. azuma , t. yamamoto , r. namiki , t. honjo , h. takesue , y. tokunaga , and especially g. kato for enlightening discussions .part of this research was conducted when k. t and c - h .f. f visited the university of toronto , and they express their sincere gratitude for all the supports and hospitalities that they received during their visit .this research is in part supported by the project `` secure photonic network technology '' as part of `` the project uqcc '' by the national institute of information and communications technology ( nict ) of japan , in part by the japan society for the promotion of science ( jsps ) through its funding program for world - leading innovative r on science and technology ( first program ) " , in part by rgc grant no .700709p of the hksar government , and also in part by nserc , canada research chair program , canadian institute for advanced research ( cifar ) and quantumworks .in this appendix , we give a detailed calculation about how scheme i works when there is no channel losses and noises . in order to calculate the joint probability that alice and bob obtain type-0 successful event , where only the detector d0 clicks , and they share the maximally entangled state , we introduce a projector that corresponds to type-0 successful event . here, represents the non - vacuum state .the state after alice and bob have the type-0 successful event ( see eq .( [ normal schi ] ) for the definition of ) can be expressed by here , is an identity operator on and , and are complex numbers , and and are orthonormal bases , which are related with each other through by a direct calculation , one can show that latexmath:[\ ] ] here , , , and , and is defined by where is a purification of , which is the state that alice actually prepares for the bit value in basis , and is alice s qubit system .one can choose any purification for , and in particular it should be chosen in such a way that it maximizes the inner product in eq .( c2 ) or ( c3 ) .one can similarly define , and is introduced via considering a joint state involving the quantum coin as due to this change , the figures for the key generation rate have to be revised . as the examples of revised figures , we show the revised version of figs . 8 , 9 , 12 , and 13 , which are the most important figures for our main conclusions to hold .notice that there are only minor changes in figs . 
8 and 9 andthe changes in figs .12 and 13 are relatively big. however , the big changes do not affect the validity of the main conclusions in our paper , which is the importance of the state preparation in mdiqkd and the fact that our schemes can generate the key with the practical channel mode that we have assumed . for the derivation of eq .( [ 0 ] ) , we invoke koashi s proof .to apply koashi s proof , it is important to ensure that i ) one of the two parties holds a virtual * qubit * ( rather than a higher dimensional system ) and ii ) the fictitious measurements performed on the virtual qubit have to form * conjugate * observables .therefore , it is not valid to consider fidelity alone ( which allows arbitrary purifications that may not satisfy the conjugate observables requirement ) .fortunately , it turns out to be easy to modify our equation to satisfy the above two requirements . since the difference between eq .( c2 ) and eq .( c3 ) comes from whether we consider alice s virtual qubit or bob s virtual qubit , we focus only on eq .( c2 ) and the same argument holds for eq .( c3 ) . in koashi sproof , the security is guaranteed via two alternative tasks , ( i ) agreement on x ( key distillation basis ) and ( ii ) alice s or bob s preparation of an eigenstate of y , the conjugate basis of x , with use of an extra communication channel . the problem with the original ( i.e. uncorrected ) version of eq .( 9 ) is the following .if we use the uncorrected version of eq .( 9 ) in our paper , then the use of the fidelity means that the real part in eq .( c2 ) is equivalent to with the maximization over _ all possible _ local unitary operators . in this case , if alice performs a measurement along x basis , then it violates the correspondence between her sending state and her qubit state in general , and thus , in the uncorrected version of eq .( 9 ) in our paper , the argument based on the fidelity does not guarantee the security of the protocol .in contrast , with the corrected version of eq .( 9 ) in our paper , since the maximization over and in eq .( c2 ) preserves the relationship between alice s sending state and her qubit state as well as the conjugate relationship between x and y , we can apply koashi s proof for the security argument of the protocol .b. qi , c .- h .f. fung , h .- k .lo and x. ma , quant .73 - 82 ( 2007 ) , y. zhao , c .- h .f. fung , b. qi , c. chen and h .- k .lo , phys .a 78 , 042333 ( 2008 ) , c .- h .f. fung , b. qi , k. tamaki and h .- k .lo , phys .a 75 , 032314 ( 2007 ) , f. xu , b. qi and h .- k .lo , new j. phys .12 , 113026 ( 2010 ) .l. lydersen , c. wiechers , c. wittmann , d. elser , j. skaar and v. makarov , nature photonics 4 , pp .686 - 689 ( 2010 ) , z. l. yuan , j. f. dynes and a. j. shields , nature photonics 4 , pp .800 - 801 ( 2010 ) , l. lydersen , c. wiechers , c. wittmann , d. elser , j. skaar and v. makarov , nature photonics 4 , 801 ( 2010 ) , i. gerhardt , q. liu , a. lamas - linares , j. skaar , c. kurtsiefer and v. makarov , nature comm . 2 , 349 ( 2011 ) , l. lydersen , m. k. akhlaghi , a. h. majedi , j. skaar and v. makarov , arxiv : 1106.2396 .f. fung , k. tamaki , b. qi , h .- k .lo and x. ma , quant .inf . comp .9 , 131 ( 2009 ) , l. lydersen , j. skaar , quant .inf . comp . * 10 * , 0060 ( 2010 ) , .mary , l. lydersen , j. skaar , phys .a * 82 * , 032337 ( 2010 ) .d. mayers and a. 
c .- c .yao , in proceedings of the 39th annual symposium on foundations of computer science ( focs98 ) , ( ieee computer society , washington , dc , 1998 ) , p. 503, a. acin , n. brunner , n. gisin , s. massar , s. pironio , and v. scarani , phys .lett . * 98 * , 230501 ( 2007 ) .the definition of the four bell state is as follows .=\frac{1}{\sqrt{2}}[{\left| 0_x \right\rangle}_{a1}{\left| 0_x \right\rangle}_{b1}-{\left| 1_x \right\rangle}_{a1}{\left| 1_x \right\rangle}_{b1}] ] , =\frac{1}{\sqrt{2}}[{\left| 0_x \right\rangle}_{a1}{\left| 0_x \right\rangle}_{b1}+{\left|1_x \right\rangle}_{a1}{\left| 1_x \right\rangle}_{b1}]$ ] , and .one of the most simplest proofs is shor - preskill s proof .the intuition of this proof is as follows .note that if alice and bob share some pairs of , ( i.e. , alice has one half of each pair and bob has the other half ) , then they can generate a secure key by performing -basis measurement .the reason of the security is that this state is a pure state , which means that this state has no correlations with the third system including eve s system .due to the intervention by eve , alice and bob do not share this pure state in general , but instead they share noisy pairs .the basic idea of the proof is to consider the distillation of from the noisy pairs .for the distillation , note that is only one qubit pair state that has no bit errors in -basis ( we call this error as the bit error ) and has no bit errors in -basis ( we call this error as the phase error ) .it is known that if alice and bob employ the so - called css code ( calderbank - shor - steane code ) , then the noisy pairs are projected to a classical mixture of the four bell states , i.e. , , ( with the phase error ) , ( with the bit error ) , and ( with both the phase and bit errors ) .moreover , if alice and bob choose a correct css code , which can be achieved by random sampling procedure , then css code can detect the position of the erroneous pair with high probability .thus , by performing bit and phase flip operation depending on the detected error positions , alice and bob can distill some qubit pairs that are very close in fidelity to the product state of .in general , implementation of the above scheme requires a quantum computer .fortunately , shor - preskill showed that the bit error detection and bit flip operation can be done classically , and the phase error detection and phase flip operation need not be done , but exactly the same key can be obtained by the privacy amplification , so that we do not need to possess a quantum computer for the key distillation .r. renner , and r. koenig , proc . of tcc 2005 , lncs , springer , vol .* 3378 * ( 2005 ) , m. ben - or , and dominic mayers , arxiv : quant - ph/0409062 , m. ben - or , michal horodecki , d. w. leung , d. mayers , j. oppenheim , theory of cryptography : second theory of cryptography conference , tcc 2005 , j.kilian ( ed . )springer verlag 2005 , vol . * 3378 * of lecture notes in computer science , pp .386 - 406 .
in this paper , we study the unconditional security of the so - called measurement device independent quantum key distribution ( mdiqkd ) with the basis - dependent flaw in the context of phase encoding schemes . we propose two schemes for the phase encoding : the first one employs a phase locking technique with the use of non - phase - randomized coherent pulses , and the second one uses conversion of standard bb84 phase encoding pulses into polarization modes . we prove the unconditional security of these schemes and we also simulate the key generation rate based on simple device models that accommodate imperfections . our simulation results show the feasibility of these schemes with current technologies and highlight the importance of the state preparation with good fidelity between the density matrices in the two bases . since the basis - dependent flaw is a problem not only for mdiqkd but also for standard qkd , our work highlights the importance of an accurate signal source in practical qkd systems . + * note : we include the erratum of this paper in appendix c. the correction does not affect the validity of the main conclusions reported in the paper , which are the importance of the state preparation in mdiqkd and the fact that our schemes can generate the key with the practical channel model that we have assumed . *
in the past 30 years , the development of linear algebra libraries has been tremendously successful , resulting in a variety of reliable and efficient computational kernels . unfortunately , these kernels are limited by a rigid interface that does not allow users to pass knowledge specific to the target problem . if available , such knowledge may lead to domain - specific algorithms that attain higher performance than any traditional library . the difficulty does not lie so much in creating flexible interfaces , but in developing algorithms capable of taking advantage of the extra information . in this paper , we present preliminary work on a linear algebra compiler , written in mathematica , that automatically exploits application - specific knowledge to generate high - performance algorithms . the compiler takes as input a target equation and information on the structure and properties of the operands , and returns as output algorithms that exploit the given information . in the same way that a traditional compiler breaks the program into assembly instructions directly supported by the processor , attempting different types of optimization , our linear algebra compiler breaks a target operation down into library - supported kernels , and generates not one but a family of viable algorithms . the decomposition process undergone by our compiler closely replicates the thinking process of a human expert . we show the potential of the compiler by means of a challenging operation arising in computational biology : the _ genome - wide association study _ ( gwas ) , a ubiquitous tool in the fields of genomics and medical genetics . as part of gwas , one has to solve the following equation where , , and are known quantities , and is sought after . the size and properties of the operands are as follows : , is full rank , is symmetric positive definite ( spd ) , , , and ; , , , and is either or of the order of . at the core of gwas lies a linear regression analysis with non - independent outcomes , carried out through the solution of a two - dimensional sequence of the generalized least - squares problem ( gls ) . while gls may be directly solved , for instance , by matlab , or may be reduced to a form accepted by lapack , none of these solutions can exploit the specific structure pertaining to gwas . the nature of the problem , a sequence of correlated glss , allows multiple ways to reuse computation . also , different sizes of the input operands demand different algorithms to attain high performance in all possible scenarios . the application of our compiler to gwas , eq . [ eq : probdef ] , results in the automatic generation of dozens of algorithms , many of which outperform the current state of the art by a factor of four or more . the paper is organized as follows . related work is briefly described in section [ sec : related ] . sections [ sec : principles ] and [ sec : system - overview ] uncover the principles and mechanisms upon which the compiler is built . in section [ sec : generation - algs ] we carefully detail the automatic generation of multiple algorithms , and outline the code generation process . in section [ sec : performance ] we report on the performance of the generated algorithms through numerical experiments . we draw conclusions in section [ sec : conclusions ] .
among them , the spiral project and the tensor contraction engine ( tce ) , focused on signal processing transforms and tensor contractions , respectively .as described throughout this paper , the main difference between our approach and spiral is the inference of properties .centered on general dense linear algebra operations , one of the goals of the flame project is the systematic generation of algorithms .the flame methodology , based on the partitioning of the operands and the automatic identification of loop - invariants , has been successfully applied to a number of operations , originating hundreds of high - performance algorithms .the approach described in this paper is orthogonal to flame .no partitioning of the operands takes place . instead , the main idea is the mapping of operations onto high - performance kernels from available libraries , such as blas and lapack .in this section we expose the human thinking process behind the generation of algorithms for a broad range of linear algebra equations . as an example, we derive an algorithm for the solution of the gls problem , eq .[ eq : fgls ] , as it would be done by an expert .together with the derivation , we describe the rationale for every step of the algorithm .the exposed rationale highlights the key ideas on top of which we founded the design of our compiler. given eq .[ eq : fgls ] , the * first concern is the inverse operator * applied to the expression .since is not square , the inverse can not be distributed over the product and the expression needs to be processed first .the attention falls then on .the inversion of a matrix is costly and not recommended for numerical reasons ; therefore , since is a general matrix , we * factor * it . given the structure of ( spd ) , we choose a cholesky factorization , resulting in where is square and lower triangular .as is square , the inverse may now be distributed over the product , yielding .next , we process ; we observe that the quantity * appears multiple times * , and may be computed and reused to * save computation * : at this point , since is not square and the inverse can not be distributed , there are two * alternatives * : 1 ) multiply out ; or 2 ) factor , for instance through a qr factorization . in this example , we choose the former : one can prove that is spd , suggesting yet another factorization .we choose a cholesky factorization and distribute the inverse over the product : now that all the remaining inverses are applied to triangular matrices , we are left with a series of products to compute the final result . since all operands are matrices except the vector , we compute eq . [eq : exalg1step4 ] from right to left to * minimize the number of flops*. the final algorithm is shown in alg .[ alg : exalg - chol ] , together with the names of the corresponding blas and lapack building blocks . .... ( ! \sc potrf ! ) ( ! \sc trsm ! ) ( ! \sc syrk ! ) ( ! \sc potrf ! ) ( ! \sc trsv ! ) ( ! \sc gemv ! ) ( ! \sc trsv ! ) ( ! \sc trsv ! ) .... three ideas stand out as the guiding principles for the thinking process : * the first concern is to deal , whenever it is not applied to diagonal or triangular matrices , with the inverse operator .two scenarios may arise : a ) it is applied to a single operand , . 
in this casethe operand is factored with a suitable factorization according to its structure ; b ) the inverse is applied to an expression .this case is handled by either computing the expression and reducing it to the first case , or factoring one of the matrices and analyzing the resulting scenario .* when decomposing the equation , we give priority to a ) common segments , i.e. , common subexpressions , and b ) segments that minimize the number of flops ; this way we reduce the amount of computation performed . *if multiple alternatives leading to viable algorithms arise , we explore all of them .our compiler follows the above guiding principles to closely replicate the thinking process of a human expert . to support the application of these principles , the compiler incorporates a number of modules ranging from basic matrix algebra support to analysis of dependencies , including the identification of building blocks offered by available libraries . in the following ,we describe the core modules .matrix algebra : : the compiler is written using mathematica from scratch .we implement our own operators : addition ( plus ) , negation ( minus ) , multiplication ( times ) , inversion ( inv ) , and transposition ( trans ) .together with the operators , we define their precedence and properties , as commutativity , to support matrices as well as vectors and scalars .we also define a set of rewrite rules according to matrix algebra properties to freely manipulate expressions and simplify them , allowing the compiler to work on multiple equivalent representations .inference of properties : : in this module we define the set of supported matrix properties . as of now : identity , diagonal , triangular , symmetric , symmetric positive definite , and orthogonal . on top of these properties, we build an inference engine that , given the properties of the operands , is able to infer properties of complex expressions .this module is extensible and facilitates incorporating additional properties . building blocks interface : : this module contains an extensive list of patterns associated with the desired building blocks onto which the algorithms will be mapped. it also contains the corresponding cost functions to be used to construct the cost analysis of the generated algorithms . as with the properties module ,if a new library is to be used , the list of accepted building blocks can be easily extended .analysis of dependencies : : when considering a sequence of problems , as in gwas , this module analyzes the dependencies among operations and between operations and the dimensions of the sequence . through this analysis , the compiler rearranges the operations in the algorithm , reducing redundant computations .code generation : : in addition to the automatic generation of algorithms , the compiler includes a module to translate such algorithms into code .so far , we support the generation of matlab code for one instance as well as sequences of problems . to complete the overview of our compiler, we provide a high - level description of the compiler s _ reasoning_. 
the main idea is to build a tree in which the root node contains the initial target equation ; each edge is labeled with a building block ; and each node contains intermediate equations yet to be mapped .the compiler progresses in a breadth - first fashion until all leaf nodes contain an expression directly mapped onto a building block .while processing a node s equation , the search space is constrained according to the following criteria : 1 .if the expression contains an inverse applied to a single ( non - diagonal , non - triangular ) matrix , for instance , then the compiler identifies a set of viable factorizations for based on its properties and structure ; 2 .if the expression contains an inverse applied to a sub - expression , for instance , then the compiler identifies both viable factorizations for the operands in the sub - expression ( e.g. , ) , and segments of the sub - expression that are directly mapped onto a building block ( e.g. , ) ; 3 . if the expression contains no inverse to process ( as in , with and triangular ), then the compiler identifies segments with a mapping onto a building block .when inspecting expressions for segments , the compiler gives priority to common segments and segments that minimize the number of flops .all three cases may yield multiple building blocks . for each building blockeither a factorization or a segment both a new edge and a new children node are created. the edge is labeled with the corresponding building block , and the node contains the new resulting expression .for instance , the analysis of eq .[ eq : exalg1step2 ] creates the following sub - tree : in addition , thanks to the _ inference of properties _ module , for each building block , properties of the output operands are inferred from those of the input operands .each path from the root node to a leaf represents one algorithm to solve the target equation . by assembling the building blocks attached to each edge in the path ,the compiler returns a collection of algorithms , one per leaf .our compiler has been successfully applied to equations such as pseudo - inverses , least - squares - like problems , and the automatic differentiation of blas and lapack operations .of special interest are the scenarios in which sequences of such problems arise ; for instance , the study case presented in this paper , genome - wide association studies , which consist of a two - dimensional sequence of correlated gls problems .the compiler is still in its early stages and the code is not yet available for a general release .however , we include along the paper details on the input and output of the system , as well as screenshots of the actual working prototype .we detail now the application to gwas of the process described above .box [ box : input ] includes the input to the compiler : the target equation along with domain - specific knowledge arising from gwas , e.g , operands shape and properties . as a result ,dozens of algorithms are automatically generated ; we report on three selected ones ..... 
equation = { equal[b , times [ inv[times [ trans[x ] , inv[plus [ times[h , phi ] , times[plus[1 , minus[h ] ] , i d ] ] ] , x ] ] , trans[x ] , inv[plus [ times[h , phi ] , times[plus[1 , minus[h ] ] , i d ] ] ] , y ] ] } ; operandproperties = { { x , { `` input '' , `` matrix '' , `` fullrank '' } } , { y , { `` input '' , `` vector '' } } , { phi , { `` input '' , `` matrix '' , `` symmetric '' } } , { h , { `` input '' , `` scalar '' } } , { b , { `` output '' , `` vector '' } } } ; expressionproperties = { inv[plus [ times[h , phi ] , times[plus[1 , minus[h ] ] , i d ] ] ] , `` spd '' } ; sizeassumptions = { rows[x ] > cols[x ] } ; .... to ease the reader , we describe the process towards the generation of an algorithm similar to alg .[ alg : exalg - chol ] .the starting point is eq .[ eq : probdef ] .since is not square , the inverse operator applied to can not be distributed over the product ; thus , the inner - most inverse is .the inverse is applied to an expression , which is inspected for viable factorizations and segments . among the identified alternativesare a ) the factorization of the operand according to its properties , and b ) the computation of the expression . herewe concentrate on the second case .the segment is matched as the scal - add building block ( scaling and addition of matrices ) ; the operation is made explicit and replaced : now , the inner - most inverse is applied to a single operand , , and the compiler decides to factor it using multiple alternatives : cholesky ( ) , qr ( ) , eigendecomposition ( ) , and svd ( ) .all the alternatives are explored ; we focus now on the cholesky factorization ( potrf routine from lapack ) : after is factored and replaced by , the inference engine propagates a number of properties to based on the properties of and the factorization applied .concretely , is square , triangular and full - rank .next , since is triangular , the inner - most inverse to be processed in eq .[ eq : alg1step1 ] is . in this casetwo routes are explored : either factor ( is triangular and does not need further factorization ) , or map a segment of the expression onto a building block .we consider this second alternative .the compiler identifies the solution of a triangular system ( trsm routine from blas ) as a common segment appearing three times in eq .[ eq : alg1step1 ] , makes it explicit , and replaces it : since is square and full - rank , and x is also full - rank , inherits the shape of and is labelled as full - rank .as is not square , the inverse can not be distributed over the product yet .therefore , the compiler faces again two alternatives : either factoring or multiplying .we proceed describing the latter scenario while the former is analyzed in sec .[ subsec : alg - two ] . is identified as a building block ( syrk routine of blas ) , and made explicit : the inference engine plays an important role deducing properties of . during the previous steps , the engine has inferred that is full - rank and rows[w ] > cols[w ] ; therefore the following rule states that is spd .is spd if is full rank and has more rows than columns . ] .... 
isspdq [ times [ trans [ a_?isfullrankq ] , a _ ] / ; rows[a ] > cols[a ] : = true ; ....this knowledge is now used to determine possible factorizations for .we concentrate on the cholesky factorization : in eq .[ eq : alg1step4 ] , all inverses are applied to triangular matrices ; therefore , no more treatment of inverses is needed .the compiler proceeds with the final decomposition of the remaining series of products .since at every step the inference engine keeps track of the properties of the operands in the original equation as well as the intermediate temporary quantities , it knows that every operand in eq .[ eq : alg1step4 ] are matrices except for the vector .this knowledge is used to give matrix - vector products priority over matrix - matrix products , and eq .[ eq : alg1step4 ] is decomposed accordingly . in casethe compiler can not find applicable heuristics to lead the decomposition , it explores the multiple viable mappings onto building blocks .the resulting algorithm , and the corresponding output from mathematica , are assembled in alg .[ alg : alg - chol ] , chol - gwas . .... ( ! \sc scal - add ! ) ( ! \sc potrf ! ) ( ! \sc trsm ! ) ( ! \sc syrk ! ) ( ! \sc potrf ! ) ( ! \sc trsv ! ) ( ! \sc gemv ! ) ( !\sc trsv ! ) ( ! \sc trsv ! ) .... in this subsectionwe display the capability of the compiler to analyze alternative paths , leading to multiple viable algorithms . at the same time, we expose more examples of algebraic manipulation carried out by the compiler .the presented algorithm results from the alternative path arising in eq .[ eq : alg1step3 ] , the factorization of .since is a full - rank column panel , the compiler analyzes the scenario where is factored using a qr factorization ( geqrf routine in lapack ) : at this point , the compiler exploits the capabilities of the _ matrix algebra _ module to perform a series of simplifications : first , it distributes the transpose operator over the product .then , it applies the rule .... times [ trans [ q_?isorthonormalq , q _ ] - > i d , .... included as part of the knowledge - base of the module .the rule states that the product , when is orthogonal with normalized columns , may be rewritten ( - > ) as the identity matrix .next , since is square , the inverse is distributed over the product .more mathematical knowledge allows the compiler to rewrite the product as the identity . in eq .[ eq : alg2step4 ] , the compiler does not need to process any more inverses ; hence , the last step is to decompose the remaining computation into a sequence of products .once more , is the only non - matrix operand .accordingly , the compiler decomposes the equation from right to left .the final algorithm is put together in alg .[ alg : alg - qr ] , qr - gwas . .... ( ! \sc scal - add ! ) = ( ! { \sc potrf } ! ) : = ( ! { \sc trsm } ! ) ( ! { \sc geqrf } ! ) : = ( ! { \sc trsv } ! ) : = ( ! { \sc gemv } ! ) : = ( ! { \sc trsv } ! ) .... this third algorithm exploits further knowledge from gwas , concretely the structure of , in a manner that may be overlooked even by human experts .again , the starting point is eq .[ eq : probdef ] .the inner - most inverse is . 
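as a concrete reference point, the chol-gwas sequence of building blocks just assembled can be written down in a few lines of numpy/scipy; this is a hand-coded illustration of the generated algorithm for a single gls instance, not compiler output, and the variable names are ours.

....
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def chol_gwas(x, y, phi, h):
    """solve b = (x^t m^-1 x)^-1 x^t m^-1 y with m = h*phi + (1-h)*i,
    following the sequence scal-add, potrf, trsm, syrk, potrf, trsv, gemv, trsv, trsv."""
    n = x.shape[0]
    m = h * phi + (1.0 - h) * np.eye(n)           # scal-add
    l = cholesky(m, lower=True)                   # potrf: m = l l^t
    w = solve_triangular(l, x, lower=True)        # trsm:  w = l^{-1} x
    s = w.T @ w                                   # syrk:  s = w^t w (spd)
    g = cholesky(s, lower=True)                   # potrf: s = g g^t
    z = solve_triangular(l, y, lower=True)        # trsv:  z = l^{-1} y
    u = w.T @ z                                   # gemv:  u = w^t z
    v = solve_triangular(g, u, lower=True)        # trsv
    return solve_triangular(g.T, v, lower=False)  # trsv: b

# tiny smoke test against a direct dense solve
rng = np.random.default_rng(0)
n, p, h = 50, 3, 0.4
x = rng.standard_normal((n, p)); y = rng.standard_normal(n)
a = rng.standard_normal((n, n)); phi = a @ a.T / n + np.eye(n)   # a synthetic spd phi
m = h * phi + (1 - h) * np.eye(n)
b_ref = np.linalg.solve(x.T @ np.linalg.solve(m, x), x.T @ np.linalg.solve(m, y))
assert np.allclose(chol_gwas(x, y, phi, h), b_ref)
....

returning to the third derivation: as in the previous two, the inner-most inverse is the one applied to the expression h * phi + (1 - h) * id.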
instead of multiplying out the expression within the inverse operator, we now describe the alternative path also explored by the compiler : factoring one of the matrices in the expression .we concentrate in the case where an eigendecomposition of ( syevd or syevr from lapack ) is chosen : where is a square , orthogonal matrix with normalized columns , and is a square , diagonal matrix . in this scenario ,the _ matrix algebra _module is essential ; it allows the compiler to work with alternative representations of eq .[ eq : alg3step1 ] .we already illustrated an example where the product , orthonormal , is replaced with the identity matrix .the freedom gained when defining its own operators , allows the compiler to perform also the opposite transformation : .... i d - > times [ q , trans [ q ] ] ; i d - > times [ trans [ q ] , q ] ; .... to apply these rules , the compiler inspects the expression for orthonormal matrices : is found to be orthonormal and used instead of in the right - hand side of the previous rules .the resulting expression is the algebraic manipulation capabilities of the compiler lead to the derivation of further multiple equivalent representations of eq .[ eq : alg3step2 ] .we recall that , although we focus on a concrete branch of the derivation , the compiler analyzes the many alternatives . in the branch under study ,the quantities and are grouped on the left- and right - hand sides of the inverse , respectively : then , since both and are square , the inverse is distributed : finally , by means of the rules : .... inv [ q_?isorthonormalq ] - > trans [ q ] ; inv [ trans [ q_?isorthonormalq ] ] - > q ; .... which state that the inverse of an orthonormal matrix is its transpose , the expression becomes : the resulting equation is the inner - most inverse in eq .[ eq : alg3step3 ] is applied to a diagonal object ( is diagonal and a scalar ) .no more factorizations are needed , is identified as a scal - add building block , and exposed : is a diagonal matrix ; hence only the inverse applied to remains to be processed . among the alternative steps, we consider the mapping of the common segment , that appears three times , onto the gemm building block ( matrix - matrix product ) : from this point on , the compiler proceeds as shown for the previous examples , and obtains , among others , alg .[ alg : alg - eigen ] , eig - gwas . .... = ( ! { \sc syevx } ! ) ( ! { \sc add - scal } ! ) ( ! { \sc gemm } ! ) ( ! { \sc scal } ! ) : = ( ! { \sc gemm } ! ) = ( ! { \sc geqrf } ! ) ( ! { \sc gemv } ! ) ( ! { \sc gemv } ! ) ( ! { \sc gemv } ! ) ( ! { \sc trsv } ! ) .... at first sight , alg .[ alg : alg - eigen ] might seem to be a suboptimal approach .however , as we show in sec .[ sec : performance ] , it is representative of a family of algorithms that play a crucial role when solving a certain sequence of gls problems within gwas . we have illustrated how our compiler , closely replicating the reasoning of a human expert , automatically generates algorithms for the solution of a single gls problem . as shown in eq .[ eq : probdef ] , in practice one has to solve one - dimensional ( ) or two - dimensional ( ) sequences of such problems . in this contextwe have developed a module that performs a loop dependence analysis to identify loop - independent operations and reduce redundant computations . 
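to illustrate what that dependence analysis buys in the gwas setting, the following numpy sketch applies the eig-gwas idea to a sequence of gls problems that share phi: the eigendecomposition is computed once, outside the loop, and only a diagonal scaling changes from problem to problem. for simplicity the sketch varies only the pairs (h_j, y_j) and keeps x fixed; the loop structure and names are ours, not the generated matlab code.

....
import numpy as np

def eig_gwas_sequence(x, ys, hs, phi):
    """solve b_j = (x^t m_j^-1 x)^-1 x^t m_j^-1 y_j for many (h_j, y_j) pairs,
    with m_j = h_j*phi + (1-h_j)*i; the eigendecomposition of phi (syevd/syevr)
    and the product z^t x are loop-invariant and computed only once."""
    lam, z = np.linalg.eigh(phi)        # phi = z diag(lam) z^t, done once
    xt = z.T @ x                        # gemm, done once
    bs = []
    for h, y in zip(hs, ys):
        d = h * lam + (1.0 - h)         # m_j^{-1} = z diag(1/d) z^t
        yt = z.T @ y                    # gemv
        xw = xt / d[:, None]            # diag(1/d) @ (z^t x), a row scaling
        s = xt.T @ xw                   # x^t m_j^{-1} x
        rhs = xw.T @ yt                 # x^t m_j^{-1} y_j
        bs.append(np.linalg.solve(s, rhs))
    return bs
....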
for space reasons, we do not further describe the module , and limit to the automatically generated cost analysis .the list of patterns for the identification of building blocks included in the _ building blocks interface _ module also incorporates the corresponding computational cost associated to the operations .given a generated algorithm , the compiler composes the cost of the algorithm by combining the number of floating point operations performed by the individual building blocks , taking into account the loops over the problem dimensions .table [ tab : cost ] includes the cost of the three presented algorithms , which attained the lowest complexities for one- and two - dimensional sequences . while qr - gwas and chol - gwas share the same cost for both types of sequences , suggesting a very similar behavior in practice , the cost of eig - gwas differs in both cases .for the one - dimensional sequence the cost of eig - gwas is not only greater in theory , the practical constants associated to its terms increase the gap . on the contrary , for the two - dimensional sequence , the cost of eig - gwas is lower than the cost of the other two .this analysis suggests that qr - gwas and chol - gwas are better suited for the one - dimensional case , while eig - gwas is better suited for the two - dimensional one . in sec .[ sec : performance ] we confirm these predictions through experimental results . [ cols="<,^,^,^",options="header " , ] the translation from algorithms to code is not a straightforward task ; in fact , when manually performed , it is tedious and error prone . to overcome this difficulty, we incorporate in our compiler a module for the automatic generation of code . as of now, we support matlab ; an extension to fortran , a much more challenging target language , is planned .we provide here a short overview of this module .given an algorithm as derived by the compiler , the code generator builds an _ abstract syntax tree _ ( ast ) mirroring the structure of the algorithm .then , for each node in the ast , the module generates the corresponding code statements .specifically , for the nodes corresponding to _ for _ loops , the module not only generates a for statement but also the specific statements to extract subparts of the operands according to their dimensionality ; as for the nodes representing the building blocks , the generator must map the operation to the specific matlab routine or matrix expression . as an example of automatically generated code , the matlab routine corresponding to the aforementioned eig - gwas algorithm for a two - dimensional sequenceis illustrated in fig .[ fig : eig - code ] .we turn now the attention to numerical results . in the experiments ,we compare the algorithms automatically generated by our compiler with lapack and genabel , a widely used package for gwas - like problems . 
for details on genabels algorithm for gwas , gwfgls , we refer the reader to .we present results for the two most representative scenarios in gwas : one - dimensional ( ) , and two - dimensional ( ) sequences of gls problems .the experiments were performed on an 12-core intel xeon x5675 processor running at 3.06 ghz , with 96 gb of memory .the algorithms were implemented in c , and linked to the multi - threaded gotoblas and the reference lapack libraries .the experiments were executed using 12 threads .we first study the scenario .we compare the performance of qr - gwas and chol - gwas , with genabel s gwfgls , and gels - gwas , based on lapack s gels routine .the results are displayed in fig .[ fig : oney ] .as expected , qr - gwas and chol - gwas attain the same performance and overlap .most interestingly , our algorithms clearly outperform gels - gwas and gwfgls , obtaining speedups of 4 and 8 , respectively . , , .the improvement in the performance of our algorithms is due to a careful exploitation of both the properties of the operands and the sequence of gls problems . ]next , we present an even more interesting result . the current approach of all state - of - the - art libraries to the case is to repeat the experiment times with the same algorithm used for . on the contrary, our compiler generates the algorithm eig - gwas , which particularly suits such scenario . as fig .[ fig : manyy ] illustrates , eig - gwas outperforms the best algorithm for the case , chol - gwas , by a factor of 4 , and therefore outperforms gels - gwas and gwfgls by a factor of 16 and 32 respectively . , , .chol - gwas is best suited for the scenario , while eig - gwas is best suited for the scenario . ]the results remark two significant facts : 1 ) the exploitation of domain - specific knowledge may lead to improvements in state - of - the - art algorithms ; and 2 ) the library user may benefit from the existence of multiple algorithms , each matching a given scenario better than the others . in the case of gwasour compiler achieves both , enabling computational biologists to target larger experiments while reducing the execution time .we presented a linear algebra compiler that automatically exploits domain - specific knowledge to generate high - performance algorithms .our linear algebra compiler mimics the reasoning of a human expert to , similar to a traditional compiler , decompose a target equation into a sequence of library - supported building blocks .the compiler builds on a number of modules to support the replication of human reasoning . among them ,the _ matrix algebra _module , which enables the compiler to freely manipulate and simplify algebraic expressions , and the _ properties inference _ module , which is able to infer properties of complex expressions from the properties of the operands .the potential of the compiler is shown by means of its application to the challenging _ genome - wide association study _ equation .several of the dozens of algorithms produced by our compiler , when compared to state - of - the - art ones , obtain n - fold speedups . as future workwe plan an extension to the _ code generation _ module to support fortran .also , the asymptotic operation count is only a preliminary approach to estimate the performance of the generated algorithms . 
there is a need for a more robust metric to suggest a `` best '' algorithm for a given scenario.
the authors gratefully acknowledge the support received from the deutsche forschungsgemeinschaft (german research association) through grant gsc 111.

bientinesi, p., eijkhout, v., kim, k., kurtz, j., van de geijn, r.: sparse direct factorizations through unassembled hyper-matrices. computer methods in applied mechanics and engineering 199 (2010) 430-438
anderson, e., bai, z., bischof, c., blackford, s., demmel, j., dongarra, j., du croz, j., greenbaum, a., hammarling, s., mckenney, a., sorensen, d.: lapack users' guide. third edn. society for industrial and applied mathematics, philadelphia, pa (1999)
püschel, m., moura, j.m.f., johnson, j., padua, d., veloso, m., singer, b., xiong, j., franchetti, f., gacic, a., voronenko, y., chen, k., johnson, r.w., rizzolo, n.: spiral: code generation for dsp transforms. proceedings of the ieee, special issue on `` program generation, optimization, and adaptation '' 93(2) (2005) 232-275
baumgartner, g., auer, a., bernholdt, d.e., bibireata, a., choppella, v., cociorva, d., gao, x., harrison, r.j., hirata, s., krishnamoorthy, s., krishnan, s., chung lam, c., lu, q., nooijen, m., pitzer, r.m., ramanujam, j., sadayappan, p., sibiryakov, a.: synthesis of high-performance parallel programs for a class of ab initio quantum chemistry models. proceedings of the ieee (2005)
fabregat-traver, d., bientinesi, p.: knowledge-based automatic generation of partitioned matrix expressions. in gerdt, v., koepf, w., mayr, e., vorozhtsov, e., eds.: computer algebra in scientific computing. volume 6885 of lecture notes in computer science, springer berlin/heidelberg (2011) 144-157
fabregat-traver, d., bientinesi, p.: automatic generation of loop-invariants for matrix operations. in: computational science and its applications, international conference, los alamitos, ca, usa, ieee computer society (2011) 82-92
fabregat-traver, d., aulchenko, y.s., bientinesi, p.: fast and scalable algorithms for genome studies. technical report, aachen institute for advanced study in computational engineering science (2012). available at http://www.aices.rwth-aachen.de:8080/aices/preprint/documents/aices-2012-05-01.pdf
we present a prototypical linear algebra compiler that automatically exploits domain-specific knowledge to generate high-performance algorithms. the input to the compiler is a target equation together with knowledge of both the structure of the problem and the properties of the operands. the output is a variety of high-performance algorithms, and the corresponding source code, to solve the target equation. our approach consists of decomposing the input equation into a sequence of library-supported kernels. since in general such a decomposition is not unique, our compiler returns not one but a number of algorithms. the potential of the compiler is shown by means of its application to a challenging equation arising within the _ genome-wide association study_. as a result, the compiler produces multiple `` best '' algorithms that outperform the best existing libraries.
recently , the generalizations of the logarithmic and exponential functions have attracted the attention of researchers .one - parameter logarithmic and exponential functions have been proposed in the context of non - extensive statistical mechanics , relativistic statistical mechanics and quantum group theory .two and three - parameter generalization of these functions have also been proposed .these generalizations are in current use in a wide range of disciplines since they permit the generalization of special functions : hyperbolic and trigonometric , gaussian / cauchy probability distribution function etc .also , they permit the description of several complex systems , for instance in generalizing the stretched exponential function .as mentioned above , the one - parameter generalizations of the logarithm and exponential functions are not univoquous .the -logarithm function is defined as the value of the area underneath the non - symmetric hyperbole , , in the interval ] , but a generalization of the natural logarithmic function definition , which is recovered for .the area is negative for , it vanishes for and it is positive for , independently of the values . given the area underneath the curve , for ] and it is given by : this is a non - negative function , with , for any . for ,one has that , for and , for .notice that letting one has generalized the euler s number : instead of using the standard entropic index in eqs .( [ eq : gen_log ] ) and ( [ eq : eqtilde ] ) , we have adopted the notation .the latter notation permits us to write simple relations as : or , bringing the inversion point around .these relations lead to simpler expressions in population dynamics problems and the generalized stretched exponential function contexts .also , they simplify the generalized sum and product operators , where a link to the aritmethical and geometrical averages of the generalized functions is established . this logarithm generalization , as shown in ref . , is the one of non - extensive statistical mechanics .it turns out to be precisely the form proposed by montroll and badger to unify the verhulst ( ) and gompertz ( ) one - species population dynamics model .the -logarithm leads exactly to the richards growth model : where , is the population size at time , is the carrying capacity and is the intrinsic growth rate .the solution of eq .( [ eq : richard_model ] ) is the _-generalized logistic _equation } = e_{-{\tilde q}}[-\ln_{\tilde q}(p_0^{-1})e^{-\kappa t } ] = e_{-{\tilde q}}[\ln_{-\tilde q}(p_0)e^{-\kappa t}] ] , where ^{1-\gamma / d_f}-1\right\}/[d_f(1-\gamma / d_f)] ] . calling , , and ,this equation is the richard s model [ eq .( [ eq : richard_model ] ) ] with an effort rate . in this contextthe parameter acquires a physical meaning related to the interaction range and fractal dimension of the cellular structure . if the interaction does not depend on the distance , , and it implies that .this physical interpretation of has only been possible due to richards model underlying microscopic description .introduced by nicholson in 1954 , scramble and contest are types of intraspecific competition models that differ between themselves in the way that limited resources are shared among individuals . in scramble competition , the resource is equally shared among the individuals of the population as long as it is available . in this case , there is a critical population size , above which , the amount of resource is not enough to assure population survival . 
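before continuing with the competition mechanisms, a brief computational aside on the generalized functions defined above may be useful. the explicit formulas did not survive this extraction, so the sketch below uses the standard one-parameter forms consistent with the relations described (ln_{\tilde q}(x) as the area under 1/t^{1-\tilde q} between 1 and x, and e_{\tilde q} as its non-negative inverse); the implementation choices are ours.

....
import numpy as np

def ln_q(x, qt):
    """generalized logarithm ln_{q~}(x) = (x**qt - 1)/qt, the area under 1/t**(1-qt)
    from 1 to x; reduces to log(x) as qt -> 0."""
    x = np.asarray(x, dtype=float)
    if abs(qt) < 1e-12:
        return np.log(x)
    return (x**qt - 1.0) / qt

def exp_q(x, qt):
    """generalized exponential, the inverse of ln_q, set to 0 where 1 + qt*x <= 0
    so that it stays non-negative; reduces to exp(x) as qt -> 0."""
    x = np.asarray(x, dtype=float)
    if abs(qt) < 1e-12:
        return np.exp(x)
    base = 1.0 + qt * x
    return np.where(base > 0.0, np.maximum(base, 0.0) ** (1.0 / qt), 0.0)

# sanity checks: inverse relation, ordinary limit, generalized euler number e_q(1) = (1+qt)**(1/qt)
assert np.allclose(exp_q(ln_q(2.5, 0.3), 0.3), 2.5)
assert np.allclose(ln_q(2.5, 1e-6), np.log(2.5))
print(exp_q(1.0, 0.5))   # 2.25
....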
in the contest competition ,stronger individuals get the amount of resources they need to survive .if there is enough resources to all individuals , population grows , otherwise , only the strongest individuals survive ( strong hierarchy ) , and the population maintains itself stable with size . from experimental data , it is known that other than the important parameter ( and sometimes ) , additional parameters in more complex models are needed to adjust the model to the given population .one of the most general discrete model is the -ricker model .this model describes well scramble competition models but it is unable to put into a unique formulation the contest competition models such as hassel model , beverton - holt model and maynard - smith - slatkin model . our main purpose is to show that eq .( [ eq : limite ] ) is suitable to unify most of the known discrete growth models into a simple formula .this is done in the following way . in sec .[ sec : loquistic ] , we show that the richards model [ eq .( [ eq : richard_model ] ) ] , which has an underlying microscopic model , has a physical interpretation to the parameter , and its discretization leads to a generalized logistic map .we briefly study the properties of this map and show that some features of it ( fixed points , cycles etc . )are given in terms of the -exponential function .curiously , the map attractor can be suitably written in terms of -exponentials , even in the logistic case . in sec .[ sec : generalized_theta_ricker ] , using the -exponential function , we generalize the -ricker model and analytically calculate the model fixed points , as well as their stability . in sec .[ sec : generalizedskellam ] , we consider the generalized skellam model .these generalizations allow us to recover most of the well - known scramble / contest competition models .final remarks are presented in sec [ sec : conclusion ] .to discretize eq . ([ eq : richard_model ] ) , call and ^{\tilde q} ] , one obtains the _ logistic map _ , , which is the classical example of a _ dynamic system _ obtained from the discretization of the verhulst model .although simple , this map presents a extremely rich behavior , universal period duplication , chaos etc . .let us digress considering the feigenbaum s map : , with , and .firstly , let us consider the particular case .if one writes , with being a constant , then : ] are the fixed point is stable for and is stable for , where notice the presence of the -exponentials in the description of the attractors , even for the logistic map .the generalized logistic map also presents the rich behavior of the logistic map as depicted by the bifurcation diagram of fig .the inset of fig .[ fig1 ] displays the lyapunov exponents as function of the central parameter . in fig .[ fig2 ] we have scaled the axis to / ( \rho_{max}\tilde{q}) ] , so that for , and . when , then . in fig .[ fig3 ] we show the histograms of the distribution of the variable .we see that as increases , the histograms have the same shape as the logistic histogram has , but it is crooked in the counter clock sense around .the _ -ricker _ model is given by : } , \label{eq : theta_ricker}\ ] ] where . notice that is the relevant variable , where . in this way eq .( [ eq : theta_ricker ] ) can be simply written as . for ,one finds the standard _model . 
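as a computational aside, the bifurcation diagrams and lyapunov exponents discussed above can be reproduced with a few lines of python. since the explicit form of the generalized logistic map is hard to recover from this extraction, the sketch below iterates the generalized theta-ricker family introduced just below, which contains the maps of interest as particular cases; the parameter names and the choice of carrying capacity scaled to 1 are ours.

....
import numpy as np

def step(x, kappa1, r, qt, theta):
    # generalized theta-ricker step (carrying capacity scaled to 1):
    # x -> kappa1 * x / (1 + qt * r * x**theta)**(1/qt)
    return kappa1 * x / (1.0 + qt * r * x**theta) ** (1.0 / qt)

def lyapunov(kappa1, r, qt, theta, x0=0.1, n_skip=500, n=2000, eps=1e-8):
    """numerical lyapunov exponent: mean of log|f'(x)| along the orbit,
    with the derivative estimated by central differences."""
    x = x0
    for _ in range(n_skip):                      # discard the transient
        x = step(x, kappa1, r, qt, theta)
    acc = 0.0
    for _ in range(n):
        d = (step(x + eps, kappa1, r, qt, theta) -
             step(x - eps, kappa1, r, qt, theta)) / (2.0 * eps)
        acc += np.log(abs(d))
        x = step(x, kappa1, r, qt, theta)
    return acc / n

def bifurcation(kappa1_values, r, qt, theta, x0=0.1, n_skip=500, n_keep=100):
    """asymptotic orbit samples per kappa1: the raw data behind a bifurcation diagram."""
    data = {}
    for k1 in kappa1_values:
        x = x0
        for _ in range(n_skip):
            x = step(x, k1, r, qt, theta)
        orbit = []
        for _ in range(n_keep):
            x = step(x, k1, r, qt, theta)
            orbit.append(x)
        data[k1] = orbit
    return data

# qt = theta = 1 gives a beverton-holt-like (contest) map with a stable fixed point,
# so the lyapunov exponent comes out negative:
print(lyapunov(kappa1=2.5, r=1.0, qt=1.0, theta=1.0))
....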
for arbitrary , expanding the exponential tothe first order one obtains the generalized logistic map [ eq .( [ eq : loquistic ] ) ] which becomes the logistic map , for .the -ricker , ricker and quadratic models are all scramble competion models . if one switches the exponential function for the -generalized exponential in eq .( [ eq : theta_ricker ] ) , one gets the _ generalized -ricker model _ : = \frac{\kappa_1 x_i}{\left[1 + \tilde{q } r \left ( \frac{x_i}{\kappa}\right)^{\theta } \right]^{1/\tilde{q } } } \ ; . \label{eq : generalized_theta_ricker_model}\ ] ] to obtain standard notation , write and , so that .the generalized model with , leads to the _ hassel _ model , which can be a scramble or contest competition model .one well - known contest competition model is the _ beverton - holt _ model , which is obtained taking . for ,one recovers the ricker model and for , one recovers the logistic model .it is interesting to mention that the beverton - holt model is one of the few models that have the time evolution explicitly written : $ ] . from this equation, one sees that , for and for . using arbitrary values of in eq .( [ eq : generalized_theta_ricker_model ] ) , for one recovers the -ricker model , and for , the _ maynard - smith - slatkin _model is recovered .the latter is a scramble / contest competition model . for ,one recovers the generalized logistic map .the trivial linear model is retrieved for . in terms of the relevant variable , eq .( [ eq : generalized_theta_ricker_model ] ) is rewritten as : where and we stress that the important parameters are , and .( [ eq : final ] ) is suitable for data analysis and the most usual known discrete growth models are recovered with the judicious choice of the and parameters as it shown in table [ tabela ] .some typical bifurcation diagrams of eq .( [ eq : final ] ) are displayed in fig .[ figbdtrm ] ..summary of the parameters to obtain discrete growth models from eq .( [ eq : final ] ) . in the competition type column ,_ s _ and _ c _ stand for scramble and contest models , respectively .the symbol stands for arbitrary values .[ cols="<,^,^ , < " , ]we have shown that the -generalization of the exponential function is suitable to describe discrete growth models .the parameter is related to the range of a repulsive potential and the dimensionality of the fractal underlying structure . from the discretization of the richard s model, we have obtained a generalization for the logistic map and briefly studied its properties .an interesting generalization is the one of -ricker model , which allows to have several scramble or contest competition discrete growth models as particular cases .equation ( [ eq : final ] ) allows the use of softwares to fit data to find the most suitable known model throughout the optimum choice of and .furthermore , one can also generalize the skellam contest model .only a few specific models mentioned in ref . are not retrieved from our generalization .actually , we propose a general procedure where we do not necessarily need to be tied to a specific model , since one can have arbitrary values of and .the authors thank c. a. s. terariol for fruitful discussions .asm acknowledges the brazilian agency cnpq ( 303990/2007 - 4 and 476862/2007 - 8 ) or support .rsg also acknowledges cnpq ( 140420/2007 - 0 ) for support .ale acknowledges cnpq for the fellowship and fapesp and mct / cnpq fundo setorial de infra - estrutura ( 2006/60333 - 0 ) .
here we show that a particular one-parameter generalization of the exponential function is suitable to unify most of the popular one-species discrete population dynamics models into a simple formula. a physical interpretation is given to this newly introduced parameter in the context of the continuous richards model, and this interpretation carries over to the discrete case. from the discretization of the continuous richards model (a generalization of the gompertz and verhulst models), one obtains a generalized logistic map, whose properties we briefly study. next, we generalize the (scramble competition) θ-ricker discrete model and analytically calculate its fixed points as well as their stability. in contrast to previous generalizations, from the generalized θ-ricker model one is able to retrieve either scramble or contest models.
keywords: complex systems, population dynamics (ecology), nonlinear dynamics
pacs: 89.75.-k, 87.23.-n, 87.23.cc, 05.45.-a
knowledge - base systems must typically deal with imperfection in knowledge , in particular , in the form of incompleteness , inconsistency , and uncertainty . with this motivation ,several frameworks for manipulating data and knowledge have been proposed in the form of extensions to classical logic programming and deductive databases to cope with imperfections in available knowledge .abiteboul , _ et al . _ , liu , and dong and lakshmanan dealt with deductive databases with incomplete information in the form of null values .kifer and lozinskii have developed a logic for reasoning with inconsistency .extensions to logic programming and deductive databases for handling uncertainty are numerous .they can broadly be categorized into non - probabilistic and probabilistic formalisms .we review previous work in these fields , with special emphasis on probabilistic logic programming , because of its relevance to this paper .* non - probabilistic formalisms * _ ( 1 ) fuzzy logic programming _ :this was essentially introduced by van emden in his seminal paper on quantitative deduction , and further developed by various researchers , including steger _ , schmidt __ ._ ( 2 ) annotated logic programming _ :this framework was introduced by subrahmanian , and later studied by blair and subrahmanian , and kifer and li . while blair and subrahmanian s focus was paraconsistency , kifer and li extended the framework of into providing a formal semantics for rule - based systems with uncertainty .finally , this framework was generalized by kifer and subrahmanian into the generalized annotated programming ( gap ) framework ) .all these frameworks are inherently based on a lattice - theoretic semantics .annotated logic programming has also been employed with the probabilistic approach , which we will discuss further below . _ ( 3 ) evidence theoretic logic programming _ : this has been mainly studied by baldwin and monk and baldwin ) .they use dempster s evidence theory as the basis for dealing with uncertainty in their logic programming framework .* probabilistic formalisms * indeed , there has been substantial amount of research into probabilistic logics ever since boole .carnap is a seminal work on probabilistic logic .fagin , halpern , and megiddo study the satisfiability of systems of probabilistic constraints from a model - theoretic perspective .gaifman extends probability theory by borrowing notions and techniques from logic .nilsson uses a possible worlds " approach to give model - theoretic semantics for probabilistic logic . notion of probabilistic entailment is similar to that of nilsson .some of the probabilistic logic programming works are based on probabilistic logic approaches , such as ng and subrahmanian s work on probabilistic logic programming and ng s recent work on empirical databases .we discuss these works further below .we will not elaborate on probabilistic logics any more and refer the reader to halpern for additional information .works on probabilistic logic programming and deductive databases can be categorized into two main approaches , annotation - based , and implication based .* annotation based approach * : ng and subrahmanian were the first to propose a probabilistic basis for logic programming .their syntax borrows from that of annotated logic programming , although the semantics are quite different .the idea is that uncertainty is always associated with individual atoms ( or their conjunctions and disjunctions ) , while the rules or clauses are always kept classical . 
in ,uncertainty in an atom is modeled by associating a probabilistic truth value with it , and by asserting that it lies in an interval .the main interest is in characterizing how precisely we can bound " the probabilities associated with various atoms . in terms of the terminology of belief and doubt , we can say , following kifer and li , that the combination of belief and doubt about a piece of information might lead to an interval of probabilities , as opposed a precise probabilities .but , as pointed out in , even if one starts with precise point probabilities for atomic events , probabilities associated with compound events can only be calculated to within some exact upper and lower bounds , thus naturally necessitating intervals .but then , the same argument can be made for an agent s belief as well as doubt about a fact , they both could well be intervals . in this sense, we can say that the model of captures only the belief .a second important characteristic of this model is that it makes a conservative assumption that nothing is known about the interdependence of events ( captured by the atoms in an input database ) , and thus has the advantage of not having to make the often unrealistic independence assumption .however , by being conservative , it makes it impossible to take advantage of the ( partial ) knowledge a user may have about the interdependence among some of the events . from a technical perspective, only annotation constants are allowed in .intuitively , this means only constant probability ranges may be associated with atoms .this was generalized in a subsequent paper by ng and subrahmanian to allow annotation variables and functions .they have developed fixpoint and model - theoretic semantics , and provided a sound and weakly complete proof procedure ._ have proposed a sound ( propositional ) probabilistic calculus based on conditional probabilities , for reasoning in the presence of incomplete information . although they make use of a datalog - based interface to implement this calculus , their framework is actually propositional . in related works ,ng and subrahmanian have extended their basic probabilistic logic programming framework to capture stable negation in , and developed a basis for dempster - shafer theory in . * implication based approach * : while many of the quantitative deduction frameworks ( van emden , fitting , debray and ramakrishnan , etc . 
)are implication based , the first implication based framework for probabilistic deductive databases was proposed in .the idea behind implication based approach is to associate uncertainty with the facts as well as rules in a deductive database .sadri in a number of papers developed a hybrid method called information source tracking ( ist ) for modeling uncertainty in ( relational ) databases which combines symbolic and numeric approaches to modeling uncertainty .lakshmanan and sadri pursue the deductive extension of this model using the implication based approach .lakshmanan generalizes the idea behind ist to model uncertainty by characterizing the set of ( complex ) scenarios under which certain ( derived ) events might be believed or doubted given a knowledge of the applicable belief and doubt scenarios for basic events .he also establishes a connection between this framework and modal logic .while both are implication based approaches , strictly speaking , they do not require any commitment to a particular formalism ( such a probability theory ) for uncertainty manipulation .any formalism that allows for a consistent calculation of numeric certainties associated with boolean combination of basic events , based on given certainties for basic events , can be used for computing the numeric certainties associated with derived atoms .recently , lakshmanan and shiri unified and generalized all known implication based frameworks for deductive databases with uncertainty ( including those that use formalisms other than probability theory ) into a more abstract framework called the parametric framework .the notions of conjunctions , disjunctions , and certainty propagations ( via rules ) are parameterized and can be chosen based on the applications .even the domain of certainty measures can be chosen as a parameter . under such broadly generic conditions, they proposed a declarative semantics and an equivalent fixpoint semantics .they also proposed a sound and complete proof procedure .finally , they characterized conjunctive query containment in this framework and provided necessary and sufficient conditions for containment for several large classes of query programs .their results can be applied to individual implication based frameworks as the latter can be seen as instances of the parametric framework .conjunctive query containment is one of the central problems in query optimization in databases .while the framework of this paper can also be realized as an instance of the parametric framework , the concerns and results there are substantially different from ours . in particular , to our knowledge ,this is the first paper to address data complexity in the presence of ( probabilistic ) uncertainty .* other related work * fitting has developed an elegant framework for quantitative logic programming based on bilattices , an algebraic structure proposed by ginsburg in the context of many - valued logic programming .this was the first to capture both belief and doubt in one uniform logic programming framework . 
in recent work ,lakshmanan _ et al ._ have proposed a model and algebra for probabilistic relational databases .this framework allows the user to choose notions of conjunctions and disjunctions based on a family of strategies .in addition to developing complexity results , they also address the problem of efficient maintenance of materialized views based on their probabilistic relational algebra .one of the strengths of their model is not requiring any restrictive independence assumptions among the facts in a database , unlike previous work on probabilistic relational databases . in a more recent work , dekhtyar and subrahmanian developed an annotation based framework where the user can have a parameterized notion of conjunction and disjunction . in not requiring independence assumptions , and being able to allow the user to express her knowledge about event interdependence by means of a parametrized family of conjunctions and disjunctions , both have some similarities to this paper .however , chronologically , the preliminary version of this paper was the first to incorporate such an idea in a probabilistic framework . besides , the frameworks of are substantially different from ours . in a recent work ng studies _ empirical _ databases , where a deductive database is enhanced by empirical clauses representing statistical information .he develops a model - theoretic semantics , and studies the issues of consistency and query processing in such databases .his treatment is probabilistic , where probabilities are obtained from statistical data , rather than being subjective probabilities .( see halpern for a comprehensive discussion on statistical and subjective probabilities in logics of probability . )ng s query processing algorithm attempts to resolve a query using the ( regular ) deductive component of the database . if it is not successful , then it reverts to the empirical component , using the notion of _ most specific reference class _ usually used in statistical inferences .our framework is quite different in that every rule / fact is associated with a confidence level ( a pair of probabilistic intervals representing belief and doubt ) , which may be subjective , or may have been obtained from underlying statistical data .the emphasis of our work is on ( _ i _ ) the characterization of different modes for combining confidences , ( _ ii _ ) semantics , and , in particular , ( _ iii _ ) termination and complexity issues .the contributions of this paper are as follows .* we associate a _ confidence level _ with facts and rules ( of a deductive database ) .a confidence level comes with both a _ belief _ and a _ doubt _ ( in what is being asserted ) [ see section [ motiv ] for a motivation ] .belief and doubt are subintervals of ] the set of all closed subintervals over ] .a _ confidence level _ is an element of .we denote a confidence level as ,~ [ \gamma,\delta]\rangle ] . in our approach confidence levelsare associated with facts and rules .the intended meaning of a fact ( or rule ) having a confidence ,~ [ \gamma,\delta]\rangle ] is that and are the lower and upper bounds of the expert s _ belief _ in , and and are the lower and upper bounds of the expert s _ doubt _ in .these notions will be formalized in section [ prob - calc ] .the following example illustrates such a scenario .( the figures in all our examples are fictitious . )consider the results of gallup polls conducted before the recent canadian federal elections .\1 . 
of the peoplesurveyed , between 50% and 53% of the people in the age group 19 to 30 favor the liberals .between 30% and 33% of the people in the above age group favor the reformists .+ 3 . between 5% and8% of the above age group favor the tories .the reason we have ranges for each category is that usually some tolerance is associated with the results coming from such polls .also , we do not make the proportion of undecided people explicit as our interest is in determining the support for the different parties .suppose we assimilate the information above in a probabilistic framework .for each party , we compute the probability that a _randomly _ chosen person from the sample population of the given age group will ( not ) vote for that party .we transfer this probability as the _ subjective _ probability that _ any _ person from that age group ( in the actual population ) will ( not ) vote for the party .the conclusions are given below , where says will vote for party , - says belongs to the age group specified above . , , and are constants , with the obvious meaning . , [ 0.35 , 0.41]\rangle}}{{\mbox{ } } } ] - . : , [ 0.55 , 0.61]\rangle}}{{\mbox{ } } } ] - . : , [ 0.8 , 0.86]\rangle}}{{\mbox{ } } } ] - . as usual ,each rule is implicitly universally quantified outside the entire rule .each rule is expressed in the form ,~ [ \gamma,\delta]\rangle<\hspace*{-9pt } \rule[2pt]{80pt}{0.5pt} } } body\ ] ] where ] ( ] , obtained by summing the endpoints of the belief ranges for reform and tories . notice that in this case ( or ) is not necessarily .this shows we can not regard the expert s doubt as the complement ( with respect to 1 ) of his belief .thus , if we have to model what _ necessarily _ follows according to the expert s knowledge , then we must carry both the belief and the doubt explicitly . note that this example suggests just one possible means by which confidence levels could be obtained from statistical data . as discussed before, gaps in an expert s knowledge could often directly result in both belief and doubt . in general, there could be many ways in which both belief and doubt could be obtained and associated with the basic facts . given this, we believe that an independent treatment of both belief and doubt is both necessary and interesting for the purpose of obtaining the confidence levels associated with derived facts . 
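to make the arithmetic behind these confidence levels explicit, the following small sketch derives each party's annotation from the poll figures, with the doubt obtained by summing the belief ranges of the other two parties; the encoding of a confidence level as a pair of (lower, upper) tuples is our own choice and is reused in the sketches that follow.

....
# belief ranges per party (lower, upper), taken from the survey figures above
belief = {"liberal": (0.50, 0.53), "reform": (0.30, 0.33), "tories": (0.05, 0.08)}

def confidence_level(party):
    """belief is the party's own range; doubt sums the belief ranges of the other
    parties, since a vote for any other party is a vote against this one."""
    lo, hi = belief[party]
    others = [belief[p] for p in belief if p != party]
    doubt = (sum(l for l, _ in others), sum(h for _, h in others))
    return ((lo, hi), doubt)

for party in belief:
    b, d = confidence_level(party)
    print(party, [round(v, 2) for v in b], [round(v, 2) for v in d])
# liberal [0.5, 0.53] [0.35, 0.41]   -- matching the annotations of the rules above
# reform  [0.3, 0.33] [0.55, 0.61]
# tories  [0.05, 0.08] [0.8, 0.86]
....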
our approach to independently capture belief and doubt makes it possible to cope with incomplete knowledge regarding the situations in which an event is true , false , or unknown in a general setting .kifer and li and baldwin have argued that incorporating both belief and doubt ( called disbelief there ) is useful in dealing with incomplete knowledge , where different evidences may contradict each other .however , in their frameworks , doubt need not be maintained explicitly .for suppose we have a belief and a disbelief associated with a phenomenon .then they can both be absorbed into one range ] the set of all closed subintervals over ] .we denote the elements of as ,~ [ \gamma,\delta]\rangle ] .define the following orders on this set .let ,~ [ \gamma_{1},\delta_{1}]\rangle ] , ,~ [ \gamma_{2},\delta_{2}]\rangle ] be any two elements of .,~ [ \gamma_{1},\delta_{1}]\rangle\langle[\alpha_{2},\beta_{2}],~ [ \gamma_{2},\delta_{2}]\rangle ] iff , and , + ,~ [ \gamma_{1},\delta_{1}]\rangle\langle[\alpha_{2},\beta_{2}],~ [ \gamma_{2},\delta_{2}]\rangle ] iff , and , + ,~ [ \gamma_{1},\delta_{1}]\rangle\langle[\alpha_{2},\beta_{2}],~ [ \gamma_{2},\delta_{2}]\rangle ] iff , and , some explanation is in order .the order can be considered the _ truth _ ordering : truth " relative to the expert s knowledge increases as belief goes up and doubt comes down .the order is the _ knowledge _ ( or information ) ordering : knowledge " ( the extent to which the expert commits his opinion on an assertion ) increases as both belief and doubt increase .the order is the _ precision _ ordering : precision " of information supplied increases as the probability intervals become narrower .the first two orders are analogues of similar orders in bilattices .the third one , however , has no counterpart there .it is straightforward to see that each of the orders , , and is a partial order . has a least and a greatest element with respect to each of these orders . in the following ,we give the definition of meet and join with respect to the order .operators with respect to the other orders have a similar definition .[ meet - join ] let be as defined in definition [ conf - lattice ] .then the meet and join corresponding to the truth , knowledge ( information ) , and precision orders are defined as follows .the symbols and denote meet and join , and the subscripts , , and represent truth , knowledge , and precision , respectively .,~ [ \gamma_{1},\delta_{1}]\rangle \otimes_t\langle[\alpha_{2},\beta_{2}],~ [ \gamma_{2},\delta_{2}]\rangle ] , ] .\2 . ,~ [ \gamma_{1},\delta_{1}]\rangle \oplus_t\langle[\alpha_{2},\beta_{2}],~ [ \gamma_{2},\delta_{2}]\rangle ] ] . \3 . ,~ [ \gamma_{1},\delta_{1}]\rangle \otimes_k\langle[\alpha_{2},\beta_{2}],~ [ \gamma_{2},\delta_{2}]\rangle ] ] .\4 . ,~ [ \gamma_{1},\delta_{1}]\rangle \oplus_k\langle[\alpha_{2},\beta_{2}],~ [ \gamma_{2},\delta_{2}]\rangle ] ] . \5 . ,~ [ \gamma_{1},\delta_{1}]\rangle \otimes_p\langle[\alpha_{2},\beta_{2}],~ [ \gamma_{2},\delta_{2}]\rangle ] ] .\6 . ,~ [ \gamma_{1},\delta_{1}]\rangle \oplus_p\langle[\alpha_{2},\beta_{2}],~ [ \gamma_{2},\delta_{2}]\rangle ] ] . the top and bottom elements with respect to the various orders are as follows .the subscripts indicate the associated orders , as usual . , [ 0 , 0]\rangle ] , + , [ 1 , 1]\rangle ] , + , [ 1 , 0]\rangle ] . corresponds to total belief and no doubt ; is the opposite . 
represents maximal information ( total belief and doubt ) , to the point of being probabilistically inconsistent : belief and doubt probabilities sum to more than 1 ; gives the least information : no basis for belief or doubt ; is maximally precise , to the point of making the intervals empty ( and hence inconsistent , in a non - probabilistic sense ) ; is the least precise , as it imposes only trivial bounds on belief and doubt probabilities .fitting defines a bilattice to be _ interlaced _ whenever the meet and join with respect to any order of the bilattice are monotone with respect to the other order .he shows that it is the interlaced property of bilattices that makes them most useful and attractive .we say that a trilattice is _ interlaced _ provided the meet and join with respect to any order are monotone with respect to any other order .we have [ lem : interlaced ] the trilattice defined above is interlaced .* proof . * follows directly from the fact that and are monotone functions .we show the proof for just one case .let ,~ [ \gamma_{1},\delta_{1}]\rangle\langle[\alpha_{3},\beta_{3}],~ [ \gamma_{3},\delta_{3}]\rangle ] and ,~ [ \gamma_{2},\delta_{2}]\rangle\langle[\alpha_{4},\beta_{4}],~ [ \gamma_{4},\delta_{4}]\rangle ] . then ,~ [ \gamma_{1},\delta_{1}]\rangle \otimes\langle[\alpha_{2},\beta_{2}],~ [ \gamma_{2},\delta_{2}]\rangle ] , [ max\{\gamma_1 , \gamma_2\ } , max\{\delta_1 , \delta_2\}]\rangle } } { \mbox { }}_t { \mbox{ } } = ] since , we have , and . similarly , and .this implies ,~ [ \gamma_{1},\delta_{1}]\rangle \otimes_t\langle[\alpha_{2},\beta_{2}],~ [ \gamma_{2},\delta_{2}]\rangle\langle[\alpha_{3},\beta_{3}],~ [ \gamma_{3},\delta_{3}]\rangle \otimes_t\langle[\alpha_{4},\beta_{4}],~ [ \gamma_{4},\delta_{4}]\rangle } } \leq_f { \mbox{ } } \mbox { iff } \alpha_1 \leq \alpha_2 , \beta_2 \leq \beta_1 \mbox { and } \gamma_2 \leq \gamma_1 , \delta_1 \leq \delta_2\ ] ] in our opinion , the fourth order , while technically elegant , does not have the same intuitive appeal as the three orders truth , knowledge , and precision mentioned above . hence, we do not consider it further in this paper .the algebraic properties of confidence levels and their underlying lattices are interesting in their own right , and might be used for developing alternative bases for quantitative logic programming .this issue is orthogonal to the concerns of this paper .given the confidence levels for ( basic ) events , how are we to derive the confidence levels for compound events which are based on them ? since we are working with probabilities , our combination rules must respect probability theory .we need a model of our knowledge about the interaction between events . a simplistic model studied in the literature ( see barbara _ et al . ) assumes _ independence _ between all pairs of events .this is highly restrictive and is of limited applicability .a general model , studied by ng and subrahmanian is that of _ ignorance _ : assume no knowledge about event interaction .although this is the most general possible situation , it can be overly conservative when _ some _ knowledge is available , concerning some of the events .we argue that for real - life " applications , no single model of event interaction would suffice . indeed , we need the ability to parameterize " the model used for event interaction , depending on what _ is _ known about the events themselves . 
in this section ,we develop a probabilistic calculus which allows the user to select an appropriate mode " of event interaction , out of several choices , to suit his needs .let * l * be an arbitrary , but fixed , first - order language with finitely many constants , predicate symbols , infinitely many variables , and no function symbols .we use ( ground ) atoms of * l * to represent basic events .we blur the distinction between an event and the formula representing it .our objective is to characterize confidence levels of boolean combinations of events involving the connectives , in terms of the confidence levels of the underlying basic events under various modes ( see below ) .we gave an informal discussion of the meaning of confidence levels in section [ motiv ] .we use the concept of _ possible worlds _ to formalize the semantics of confidence levels .[ defn : semantics ] ( _ semantics of confidence levels _ ) according to the expert s knowledge , an event can be true , false , or unknown .this gives rise to 3 possible worlds .let respectively denote _ true _ , _ false _ , and _unknown_. let denote the world where the truth - value of is , , and let denote the probability of the world then the assertion that the confidence level of is ,~ [ \gamma,\delta]\rangle ] , written ,~ [ \gamma,\delta]\rangle ] , corresponds to the following constraints : where and are the lower and upper bounds of the _ belief _ in , and and are the lower and upper bounds of the _ doubt _ in . equation ( [ eq : possibleworlds ] ) imposes certain restrictions on confidence levels .[ defn : consistentcf ] ( _ consistent confidence levels _ ) we say a confidence level ,~ [ \gamma,\delta]\rangle ] is _ consistent _ if equation ( [ eq : possibleworlds ] ) has an answer .it is easily seen that : [ prop : consistentcf ] confidence level ,~ [ \gamma,\delta]\rangle ] is _ consistent _ provided _ ( i ) _ and , and _( ii ) _ .the consistency condition guarantees at least one solution to equation ( [ eq : possibleworlds ] ) .however , given a confidence level ,~ [ \gamma,\delta]\rangle ] , there may be values in the ] interval to form an answer to equation ( [ eq : possibleworlds ] ) , and vice versa .we can `` trim '' the upperbounds of ,~ [ \gamma,\delta]\rangle ] as follows to guarantee that for each value in the ] interval which form an answer to equation ( [ eq : possibleworlds ] ) .[ defn : reducedcf ] ( _ reduced confidence level _ ) we say a confidence level ,~ [ \gamma,\delta]\rangle ] is _ reduced _ if for all ] there exist , such that , , is a solution to equation ( [ eq : possibleworlds ] ) .it is obvious that a reduced confidence level is consistent .[ prop : reducedcf ] confidence level ,~ [ \gamma,\delta]\rangle ] is _ reduced _ provided _ ( i ) _ and , and _( ii ) _ , and . [ prop : reduction ] let ,~ [ \gamma,\delta]\rangle ] be a consistent confidence level .let and .then , the confidence level , [ \gamma , min(\delta,\delta')]}} ] .thus , negation simply swaps belief and doubt .* follows from the observation that ,~ [ \gamma,\delta]\rangle ] implies that and , where ( ) denotes the probability of the possible world where event is _ true _ ( _ false _ ) . 
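the consistency and reduction conditions, and the observation about negation, are easy to state in code; the sketch below encodes a confidence level as a pair ((alpha, beta), (gamma, delta)) and spells the conditions out as we reconstruct them from the propositions above (consistency: non-empty intervals with alpha + gamma <= 1; reduction: additionally beta + gamma <= 1 and alpha + delta <= 1, obtained by trimming beta to 1 - gamma and delta to 1 - alpha).

....
def is_consistent(cf):
    # [alpha, beta] and [gamma, delta] are non-empty, and the lower bounds are compatible
    (a, b), (g, d) = cf
    return a <= b and g <= d and a + g <= 1.0

def is_reduced(cf):
    # every belief value admits a compatible doubt value and vice versa
    (a, b), (g, d) = cf
    return is_consistent(cf) and b + g <= 1.0 and a + d <= 1.0

def reduce_cf(cf):
    # trim the upper bounds of a consistent confidence level
    (a, b), (g, d) = cf
    return ((a, min(b, 1.0 - g)), (g, min(d, 1.0 - a)))

def negate(cf):
    # negation simply swaps belief and doubt
    b, d = cf
    return (d, b)

cf = ((0.5, 0.875), (0.25, 0.75))
assert is_consistent(cf) and not is_reduced(cf)
assert reduce_cf(cf) == ((0.5, 0.75), (0.25, 0.5))
assert negate(cf) == ((0.25, 0.75), (0.5, 0.875))
....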
the following theorem establishes the confidence levels of compound formulas as a function of those of the constituent formulas , under various modes .[ thm : combinations ] let and be any events and let ,~ [ \gamma_{1},\delta_{1}]\rangle ] and ,~ [ \gamma_{2},\delta_{2}]\rangle ] .then the confidence levels of the compound events and are given as follows .( in each case the subscript denotes the mode . ) , ] . , [ max\{0 , \gamma_1 + \gamma_2 -1\} ] . , [ 1 - ( 1 - \gamma_1 ) \times ( 1 - \gamma_2 ) , 1 - ( 1 - \delta_1 ) \times ( 1 - \delta_2)]\rangle ] . , [ max\{\gamma_1 , \gamma_2\ } , max\{\delta_1 , \delta_2\}]\rangle ] . , [ min\{1 , \gamma_1 + \gamma_2\ } , min\{1 , \delta_1 + \delta_2\}]\rangle ] . , [ min\{1 , \gamma_1+\gamma_2\ } , min\{1 , \delta_1+\delta_2\}]\rangle ] .* proof . *each mode is characterized by a system of constraints , and the confidence level of the formulas are obtained by extremizing certain objective functions subject to these constraints .the scope of the possible interaction between and can be characterized as follows ( also see ) . according to the expert s knowledge ,each of can be true , false , or unknown .this gives rise to 9 possible worlds .let respectively denote _ true _ , _ false _ , and _unknown_. let denote the world where the truth - value of is and that of is , ., is the world where is true and is false , while is the world where is false and is unknown .suppose denotes the probability associated with world .then the possible scope of interaction between and can be characterized by the following constraints . the above system of constraints must be satisfied for all modes .specific constraints for various modes are obtained by adding more constraints to those in equation ( [ ign - eq ] ) . in all cases ,the confidence levels for and are obtained as follows . ,\\ & & [ min(\sigma_{{\mbox { }}\not \models f\circ g } { \mbox { } } ) , max(\sigma_{{\mbox { }}\not \models f\circ g } { \mbox { }})]\rangle\end{aligned}\ ] ] where is or . : _ignorance_. the constraints for ignorance are exactly those in equation ( [ ign - eq ] ) .the solution to the above linear program can be shown to be , ] , , [ max\{0 , \gamma_1 + \gamma_2 -1\} ] .the proof is very similar to the proof of a similar result in the context of belief intervals ( no doubt ) by ng and subrahmanian . : _independence_. independence of events and can be characterized by the equation , where is the conditional probability of the event given event .more specifically , since in our model an event can be _ true _ , _ false _ , or _ unknown _ , ( in other words , we model belief and doubt independently ) we have : then the constraints characterizing independence is obtained by adding the following equations to the system of constraints ( [ ign - eq ] ) . the belief in , and doubt in can be easily verified from the system of constraints [ ign - eq ] and [ ind - eq ] to obtain the doubt in we need to compute the minimum and maximum of .it is easy to verify that : the belief in is obtained similarly ( in the dual manner . )thus , we have verified that , [ 1 - ( 1 - \gamma_1 ) \times ( 1 - \gamma_2 ) , 1 - ( 1 - \delta_1 ) \times ( 1 - \delta_2)]\rangle ] . : _ positive correlation _ : two events and are positively correlated if they overlap as much as possible .this happens when either ( _ i _ ) occurrence of implies occurrence of , or ( _ ii _ ) occurrence of implies occurrence of . 
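before turning to the detailed characterization of the correlated modes, the combination formulas of the theorem can be collected into a small combinator over the Confidence values sketched earlier. the bounds below are our reading of the theorem (the belief bounds for the ignorance and correlated modes are the usual frechet-style bounds), and the disjunction is obtained from the conjunction by de morgan duality together with the belief/doubt swap of negation; both points are assumptions of this sketch rather than restatements of the proof.

```python
def conj(x: Confidence, y: Confidence, mode: str) -> Confidence:
    """Confidence of (F and G) under the given mode of event interaction."""
    if mode == "ignorance":
        return Confidence(max(0.0, x.alpha + y.alpha - 1.0), min(x.beta, y.beta),
                          max(x.gamma, y.gamma), min(1.0, x.delta + y.delta))
    if mode == "independence":
        return Confidence(x.alpha * y.alpha, x.beta * y.beta,
                          1.0 - (1.0 - x.gamma) * (1.0 - y.gamma),
                          1.0 - (1.0 - x.delta) * (1.0 - y.delta))
    if mode == "positive_correlation":
        return Confidence(min(x.alpha, y.alpha), min(x.beta, y.beta),
                          max(x.gamma, y.gamma), max(x.delta, y.delta))
    if mode == "negative_correlation":
        return Confidence(max(0.0, x.alpha + y.alpha - 1.0), max(0.0, x.beta + y.beta - 1.0),
                          min(1.0, x.gamma + y.gamma), min(1.0, x.delta + y.delta))
    raise ValueError(f"unknown mode: {mode}")

def disj(x: Confidence, y: Confidence, mode: str) -> Confidence:
    """Confidence of (F or G), derived by duality: not(not F and not G)."""
    return negate(conj(negate(x), negate(y), mode))
```

note that under positive correlation, conj and disj coincide with the truth-order meet and join of the two confidence levels.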
in our frameworkwe model belief and doubt independently , and positive correlation is characterized by 4 possibilities : \(a ) occurrence of implies occurrence of , and non - occurrence of implies non - occurrence of .+ ( b ) occurrence of implies occurrence of , and non - occurrence of implies non - occurrence of .+ ( c ) occurrence of implies occurrence of , and non - occurrence of implies non - occurrence of .+ ( d ) occurrence of implies occurrence of , and non - occurrence of implies non - occurrence of .each of these four condition sets generates its own equations . for example , ( a ) can be captured by adding the following equations to the system of constraints [ ign - eq ] . hence , for condition ( a ) , the system of constraints [ ign - eq ] becomes the analysis is further complicated by the fact that the confidence levels of and determine which of these cases apply , and it may be different for the lowerbound and upperbound probabilities .for example , if ( ) , then the lowerbound ( upperbound ) for belief in is obtained when occurrence of implies occurrence of .otherwise , these bounds are obtained when occurrence of implies occurrence of .the solution to these linear programs can be shown to be , [ max\{\gamma_1 , \gamma_2\ } , max\{\delta_1 , \delta_2\}]\rangle ] .a more intuitive approach to the derivation of confidence levels for conjunction and disjunction of positively correlated events is to rely on the observation that these events overlap to the maximum extent possible . in our frameworkit means the worlds where is _ true _ and those where is _ true _ overlap maximally , and hence , one is included in the other .similarly , since we model belief and doubt independently , the worlds where is _ false _ and those where is _ false _ also overlap maximally .the combination formulas can be derived directly using these observations . : _ negative correlation _ :negative correlation is an appropriate mode to use whenever we know that events and overlap as little as possible .this is to be contrasted with positive correlation , where the extent of overlap is the greatest possible .mutual exclusion , is a special case of negative correlation where the sum of the probabilities of the two events does not exceed 1 . in this casethe two events do not overlap at all . in the classical framework , mutual exclusion of two events and characterized by the statement : ( _ i _ ) occurrence of implies non - occurrence of , and vice versa . on the other hand ,if the two events and are negatively correlated but not mutually exclusive , we have : ( _ ii _ ) non - occurrence of implies occurrence of , and vice versa . in case ( _ i _ ) the sum of the probabilities of the two events is at most 1 , while in case ( _ ii _ )this sum exceeds 1 and hence the two events can not be mutually exclusive . in our frameworkwe model belief and doubt independently , and each of the above conditions translates to two conditions as follows .note that in our framework , `` not _ true _ '' means `` _ false _ or _ unknown _ '' , and `` not _ false _ '' means `` _ true _ or _ unknown _ '' .* event is _ true _ implies is not _ true _ , and vice versa .this condition generates the equation . *the dual of condition ( a ) , when the non - occurrence of the two events do nt overlap .event is _ false _ implies is not _false _ , and vice versa .this condition generates the equation .* event is not _ true _ implies is _ true _ , and vice versa .this condition generates the equations , , , and . 
*the dual of ( _ c _ ) , is not __ implies is _ false _ , and vice versa , which generates the equations , , , and .similar to the case for positive correlation , the confidence levels of and determine which of these cases apply .for example , if , then case ( c ) should be used to determine the lowerbound for belief in .alternatively , and more intuitively , we can characterize negative correlation by observing that the worlds where is _ true _ and those where is _ true _ overlap minimally , and the worlds where is _ false _ and those where is _ false _ also overlap minimally .the confidences of and can be obtained using the equations , or directly from the alternative characterization : , [ min\{1 , \gamma_1 + \gamma_2\ } , min\{1 , \delta_1 + \delta_2\}]\rangle ] : _ mutual exclusion _ : mutual exclusion is a special case of negative correlation .the main difference is that it requires the sum of the two probabilities to be at most 1 , which is not necessarily the case for negative correlation ( see the previous case ) . in the classical framework ,if two events are mutually exclusive , their negation are not necessarily mutually exclusive .rather , they are negatively correlated . in our framework , however , one or both conditions ( _ a _ ) and ( _ b _ ) , discussed in the previous case , can hold .the appropriate condition is determined by the confidence levels of the two mutually exclusive events , and the corresponding combination formula can be obtained from the combination formulas of negative correlation .the following formulas , for example , are for mutually exclusive events and where ( but no other restriction ) . , [ min\{1 , \gamma_1+\gamma_2\ } , min\{1 , \delta_1+\delta_2\}]\rangle ] . next , we show that the combination formulas for various modes preserve consistent as well as reduced confidence levels . the case for reduced confidence levelsis more involved and will be presented first .the other case is similar , for which we only state the theorem .[ thm : reduced ] suppose and are any formulas and assume their confidence levels are reduced ( definition [ defn : reducedcf ] ) .then the confidence levels of the formulas and , obtained under the various modes above are all reduced .* let ,~ [ \gamma_{1},\delta_{1}]\rangle ] and ,~ [ \gamma_{2},\delta_{2}]\rangle ] .since the confidence levels of and are reduced , we have : + + + + the consistency of the confidence levels of the combination events and in different modes as derived in theorem [ thm : combinations ] follow from the above constraints . for examplelet us consider , [ max\{\gamma_1 , \gamma_2\ } , min\{1 , \delta_1 + \delta_2\}]\rangle\ ] ] we need to show \(1 ) + ( 2 ) + ( 3 ) + ( 4 ) to prove ( 1 ) : if then ( 1 ) holds. otherwise , assume , without loss of generality , that .we can write + and hence and ( 1 ) follows .inequality ( 2 ) follows easily from . to prove ( 3 ) : if then ( 3 ) holds .otherwise , we can write + and hence and ( 3 ) follows . note that if then follows from the above constraint . to prove ( 4 ) let and where . 
then .proving the consistency of the confidence levels of other combinations and other modes are similar and will not be elaborated here .[ thm : consistent ] suppose and are any formulas and assume their confidence levels are consistent ( definition [ defn : consistentcf ] ) .then the confidence levels of the formulas and , obtained under the various modes above are all consistent .* proof is similar to the previous theorem and is omitted .in this section , we develop a framework for probabilistic deductive databases using a language of probabilistic programs ( p - programs ) .we make use of the probabilistic calculus developed in section [ prob - calc ] and develop the syntax and declarative semantics for programming with confidence levels .we also provide the fixpoint semantics of programs in this framework and establish its equivalence to the declarative semantics .we will use the first - order language * l * of section [ prob - calc ] as the underlying logical language in this section .* syntax of p - programs : * a _ rule _ is an expression of the form , , where are atoms and ,~ [ \gamma,\delta]\rangle ] is the confidence level associated with the rule ) . ] .when , we call this a _ fact_. all variables in the rule are assumed to be universally quantified outside the whole rule , as usual . we restrict attention to range restricted rules , as is customary .a _ probabilistic rule _( p - rule ) is a triple , where is a rule , is a mode indicating how to conjoin the confidence levels of the subgoals in the body of ( and with that of itself ) , and is a mode indicating how the confidence levels of different derivations of an atom involving the head predicate of are to be disjoined .we say ( is the mode associated with the body ( head ) of , and call it the _ conjunctive _ ( _ disjunctive _ ) mode .we refer to as the underlying rule of this p - rule .when is a fact , we omit for obvious reasons .a _ probabilistic program _( p - program ) is a finite collection of p - rules such that whenever there are p - rules whose underlying rules define the same predicate , the mode associated with their head is identical .this last condition ensures different rules defining the same predicate agree on the manner in which confidences of identical -atoms generated by these rules are to be combined .the notions of herbrand universe and herbrand base associated with a p - program are defined as usual .a p - rule is ground exactly when every atom in it is ground .the herbrand instantiation of a p - program is defined in the obvious manner .the following example illustrates our framework .[ medical - ex ] people are assessed to be at high risk for various diseases , depending on factors such as age group , family history ( with respect to the disease ) , etc .accordingly , high risk patients are administered appropriate medications , which are prescribed by doctors among several alternative ones .medications cause side effects , sometimes harmful ones , leading to other symptoms and diseases . here , the extent of risk , administration of medications , side effects ( caused by medications ) , and prognosis are all uncertain phenomena , and we associate confidence levels with them . the following program is a sample of the uncertain knowledge related to these phenomena .- , [ 0.1,0.1]\rangle}}{{\mbox{ } } } ] , - . , [ 0,0]\rangle}}{{\mbox{ } } } ] - , . , [ 0.12,0.12]\rangle}}{{\mbox{ } } } ] - . 
, [ 0.70,0.70]\rangle}}{{\mbox{ } } } ] , - .we can assume an appropriate set of facts ( the edb ) in conjunction with the above program .for rule 1 , it is easy to see that each ground atom involving the predicate - has at most one derivation . thus , a disjunctive mode for this rule will be clearly redundant , and we have suppressed it for convenience .a similar remark holds for rule 2 .rule 1 says that if a person is midaged and the disease has struck his ancestors , then the confidence level in the person being at high risk for is given by propagating the confidence levels of the body subgoals and combining them with the rule confidence in the sense of .this could be based on an expert s belief that the factors and - contributing to high risk for the disease are independent .each of the other rules has a similar explanation .for the last rule , we note that the potential of a medication to cause side effects is an intrinsic property independent of whether one takes the medication .thus the conjunctive mode used there is independence .finally , note that rules 3 and 4 , defining , use positive correlation as a conservative way of combining confidences obtained from different derivations . for simplicity , we show each interval in the above rules as a point probability .still , note that the confidences for atoms derived from the program will be genuine intervals .* a valuation based semantics : * we develop the declarative semantics of p - programs based on the notion of valuations .let be a p - program .probabilistic valuation _ is a function which associates a confidence level with each ground atom in .we define the satisfaction of p - programs under valuations , with respect to the truth order of the trilattice ( see section [ prob - calc ] ) .we say a valuation _ satisfies _ a ground p - rule , denoted , provided .the intended meaning is that in order to satisfy this p - rule , must assign a confidence level to that is no less true ( in the sense of ) than the result of the conjunction of the confidences assigned to s by and the rule confidence , in the sense of the mode .even when a valuation satisfies ( all ground instances of ) each rule in a p - program , it may not satisfy the p - program as a whole .the reason is that confidences coming from different derivations of atoms are combined strengthening the overall confidence .thus , we need to impose the following additional requirement .let be a ground p - rule , and a valuation .then we denote by - the confidence level propagated to the head of this rule under the valuation and the rule mode , given by the expression .let be the partition of such that ( i ) each contains all ( ground ) p - rules which define the same atom , say , and ( ii ) and are distinct , whenever .suppose is the mode associated with the head of the p - rules in .we denote by - the confidence level determined for the atom under the valuation using the program .this is given by the expression - .we now define satisfaction of p - programs .[ def : satisfaction ] let be a p - program and a valuation .then satisfies , denoted exactly when satisfies each ( ground ) p - rule in , and for all atoms , - .the additional requirement ensures the valuation assigns a strong enough confidence to each atom so it will support the combination of confidences coming from a number of rules ( pertaining to this atom ) .a p - program logically implies a p - fact , denoted , provided every valuation satisfying also satisfies .we next have let be a valuation and a p - program 
.suppose the mode associated with the head of each p - rule in is positive correlation .then iff satisfies each rule in .* proof*. we shall show that if - for all rules defining a ground atom , then - , where the disjunctive mode for is positive correlation .this follows from the formula for , obtained in theorem [ thm : combinations ] .it is easy to see that .but then , - implies that - and hence - .the above proposition shows that when positive correlation is the only disjunctive mode used , satisfaction is very similar to the classical case . for the declarative semantics of p - programs ,we need something like the least " valuation satisfying the program .it is straightforward to show that the class of all valuations from to itself forms a trilattice , complete with all the 3 orders and the associated meets and joins .they are obtained by a pointwise extension of the corresponding order / operation on the trilattice .we give one example . for valuations , iff , ; , .one could investigate least " with respect to each of the 3 orders of the trilattice . in this paper, we confine attention to the order .the least ( greatest ) valuation is then the valuation * false * ( * true * ) which assigns the confidence level ( ) to every ground atom .we now have [ lem : lvaluation ] let be any p - program and be any valuations satisfying .then is also a valuation satisfying .in particular , is the least valuation satisfying . ** we prove this in two steps .first , we show that for any ground p - rule + + whenever valuations and satisfy , so does .secondly , we shall show that for a p - program , whenever _ atom - conf_ and _ atom - conf_ , then we also have _ atom - conf_ .the lemma will follow from this .\(1 ) suppose and .we prove the case where the conjunctive mode associated with this rule is ignorance .the other cases are similar .it is straightforward to verify the following .\(i ) .+ ( ii ) . from ( i ) and( ii ) , we have , showing .\(2 ) suppose are any two valuations satisfying a p - program .let be the set of all ground p - rules in whose heads are .let _ rule - conf_ and _ rule - conf_ .since and , we have that and , where is the disjunctive mode associated with .again , we give the proof for the case is ignorance as the other cases are similar .let _ rule - conf_ .clearly , and .thus , and .it then follows that , which was to be shown .we take the least valuation satisfying a p - program as characterizing its declarative semantics .consider the following p - program .,~[0.3,0.45]\rangle}}{{\mbox{ } } } ] .,~[0.1,0.2]\rangle}}{{\mbox{ } } } ] . + 3 .,~[0,0.1]\rangle}}{{\mbox{ } } } ] . 4 .,~[0.1,0.2]\rangle}}{{\mbox{ } } } ] . 
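as a preview of the fixpoint semantics developed below, the following sketch evaluates a ground p-program bottom-up, conjoining the rule confidence with the body confidences under each rule's conjunctive mode and disjoining the contributions of different rules for the same atom under the head's disjunctive mode. it reuses the Confidence type and the conj/disj combinators sketched earlier; the rule representation, the convergence test and the iteration cap are illustrative choices of this sketch, not part of the formal definition.

```python
from typing import Dict, List, Tuple

FALSE = Confidence(0.0, 0.0, 1.0, 1.0)  # "false": identity for disjunction, annihilator for conjunction
TRUE = Confidence(1.0, 1.0, 0.0, 0.0)   # "true": identity for conjunction, annihilator for disjunction

# A ground p-rule: (head atom, list of body atoms, rule confidence, conjunctive mode).
Rule = Tuple[str, List[str], Confidence, str]

def tp_step(rules: List[Rule], disj_mode: Dict[str, str],
            v: Dict[str, Confidence]) -> Dict[str, Confidence]:
    """One application of the immediate-consequence operator to valuation v."""
    new_v: Dict[str, Confidence] = {}
    for head, body, rule_conf, conj_mode in rules:
        contribution = rule_conf
        for atom in body:
            contribution = conj(contribution, v.get(atom, FALSE), conj_mode)
        previous = new_v.get(head, FALSE)
        new_v[head] = disj(previous, contribution, disj_mode.get(head, "positive_correlation"))
    return new_v

def naive_fixpoint(rules: List[Rule], disj_mode: Dict[str, str],
                   max_iters: int = 100, tol: float = 1e-9) -> Dict[str, Confidence]:
    """Iterate from the everywhere-false valuation until (numerically) nothing changes."""
    v: Dict[str, Confidence] = {}
    for _ in range(max_iters):
        nv = tp_step(rules, disj_mode, v)
        stable = all(
            max(abs(nv[a].alpha - v.get(a, FALSE).alpha), abs(nv[a].beta - v.get(a, FALSE).beta),
                abs(nv[a].gamma - v.get(a, FALSE).gamma), abs(nv[a].delta - v.get(a, FALSE).delta)) < tol
            for a in nv)
        v = nv
        if stable:
            break
    return v
```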
in the followingwe show three valuations , of which and satisfy , while does not .in fact , is the least valuation satisfying .,~[0,0]\rangle & \langle[0.8,0.9],~[0.05,0.1]\rangle & \langle[0.5,0.9],~[0,0]\rangle \\ v_2 & \langle[0.9,1],~[0,0]\rangle & \langle[0.9,1],~[0,0]\rangle & \langle[0.5,0.7],~[0.1,0.4]\rangle \\v_3 & \langle[0.9,0.95],~[0,0.1]\rangle & \langle[0.7,0.8],~[0.1,0.2]\rangle & \langle[0.45,0.8],~[0.1,0.4]\rangle \end{array} ] ,~[0.1,0.2]\rangle \wedge_{ign } \langle[0.8,0.9],~[0.05,0.1]\rangle = ] ,~[0,0.1]\rangle \le_t \langle[0.9,1],~[0,0]\rangle ] further , the confidence level of computed by the combination of rules 1 and 2 is also satisfied by , namely , ,~[0.3,0.45]\rangle \wedge_{ind } \langle[0.9,1],~[0,0]\rangle ) \vee_{pc } ( \langle[0.6,0.8],~[0.1,0.2]\rangle ] * fixpoint semantics : * we associate an immediate consequence " operator with a p - program , defined as follows .[ tp - defn ] let be a p - program and its herbrand instantiation .then is a function , defined as follows .for any probabilistic valuation , and any ground atom , there exists a p - rule such that . call a valuation _ consistent _ provided for every atom , is consistent , as defined in section [ lattice ] .[ thm : tpmonotone ] ( 1 ) always maps consistent valuations to consistent valuations .( 2 ) is monotone and continuous .* proof . *( 1 ) this fact follows theorem [ thm : consistent ] , where we have shown that the combination functions for all modes map consistent confidence levels to consistent confidence levels . ( 2 ) this follows from the fact that the combination functions for all modes are themselves monotone and continuous .we define bottom - up iterations based on in the usual manner .+ ( which assigns the truth - value to every ground atom ) .+ , for a successor ordinal .+ , for a limit ordinal .we have the following results . [ prop : fixpoint ] let be any valuation and be a p - program. then satisfies iff .* proof . * _ ( only if ) ._ if satisfies , then by definition [ def : satisfaction ] , for all atoms , - and hence . __ if , then by the definition of ( definition [ tp - defn ] ) for all atoms , - and hence satisfies .the following theorem is the analogue of the van emden - kowalski theorem for classical logic programming .[ thm : lfplvaluation ] let be a p - program . then the following claims hold. + ( i ) the -least valuation satisfying .+ ( ii ) for a ground atom , iff . ** follows lemma [ lem : lvaluation ] , theorem [ thm : tpmonotone ] and proposition [ prop : fixpoint ] .proof is similar to the analogous theorem of logic programming and details are omitted .since confidences coming from different derivations of a fact are combined , we need a notion of disjunctive proof - trees .we note that the notions of substitution , unification , etc .are analogous to the classical ones .a variable appearing in a rule is _ local _ if it only appears in its body .[ dpt ] let be a(n atomic ) goal and a p - program .then a _ disjunctive proof - tree _ ( dpt ) for with respect to is a tree defined as follows . has two kinds of nodes : _ rule _ nodes and _ goal _ nodes .each rule node is labeled by a rule in and a substitution .each goal node is labeled by an atomic goal .the root is a goal node labeled .let be a goal node labeled by an atom . 
thenevery child ( if any ) of is a rule node labeled , where is a rule in whose head is unifiable with using the mgu .we assume that each time a rule appears in the tree , its variables are renamed to new variables that do not appear anywhere else in the tree .hence in the label actually represents a renamed instance of the rule .if is a rule node labeled , then whenever an atom occurs in the body of , has a goal child labeled .+ 4 . for any two substitutions occurring in , , for every variable . in other words ,all substitutions occurring in are compatible . a node without childrenis called a leaf .a _ proper _ dpt is a finite dpt such that whenever has a goal leaf labeled , there is no rule in whose head is unifiable with .we only consider proper dpts unless otherwise specified .a rule leaf is a _node ( represents a database fact ) while a goal leaf is a _ failure _ node .* remarks : * \(1 ) the definition of disjunctive proof tree captures the idea that when working with uncertain information in the form of probabilistic rules and facts , we need to consider the disjunction of all proofs in order to determine the best possible confidence in the goal being proved .\(2 ) however , notice that the definition does _ not _ insist that a goal node should have rule children corresponding to all possible unifiable rules and mgu s .\(3 ) we assume without loss of generality that all rules in the p - program are standardized apart by variable renaming so they share no common variables .\(4 ) a goal node can have several rule children corresponding to the same rule .that is , a goal node can have children labeled , where is ( a renamed version of ) a rule in the program .but we require that , , be distinct .\(5 ) we require all substitutions in the tree to be compatible .the convention explained above ensures there will be no conflict among them on account of common variables across rules ( or different invocations of the same rule ) .\(6 ) note that a dpt can be finite or infinite .\(7 ) in a proper dpt , goal leaves are necessarily failure nodes ; this is not true in non - proper dpts .\(8 ) a proper dpt with no failure nodes has only rule leaves , hence , it has an odd height .confidences are associated with ( finite ) dpts as follows .[ dpt - conf ] let be a p - program , a goal , and any finite dpt for with respect to .we associate confidences with the nodes of as follows .each failure node gets the confidence ,~[1,1]\rangle ] , the * false * confidence level with respect to truth ordering , ( see section [ lattice ] ) .each success node labeled , where is a rule in , and is the confidence of rule , gets the confidence .suppose is a rule node labeled , such that the confidence of is , its ( conjunctive ) mode is , and the confidences of the children of are .then gets the confidence .suppose is a goal node labeled , with a ( disjunctive ) mode such that the confidences of its children are .then gets the confidence .we recall the notions of identity and annihilator from algebra ( see ullman ) .let be any element of the confidence lattice and be any operation of the form or of the form , being any of the modes discussed in section [ prob - calc ] .then is an _ identity _ with respect to , if .it is an _ annihilator _ with respect to , if .the proof of the following proposition is straightforward .[ identity - annihilator ] the truth - value ,~[1,1]\rangle ] is an identity with respect to disjunction and an annihilator with respect to conjunction .the truth - value ,~[0,0]\rangle ] is an identity 
with respect to conjunction and an annihilator with respect to disjunction .these claims hold for all modes discussed in section [ prob - calc ] . in view of this proposition, we can consider only dpts without failure nodes without losing any generality .we now proceed to prove the soundness and completeness theorems .first , we need some definitions .a _ branch _ of a dpt is a set of nodes of , defined as follows .the root of belongs to .whenever a goal node is in , exactly one of its rule children ( if any ) belongs to .finally , whenever a rule node belongs to , all its goal children belong to .we extend this definition to the subtrees of a dpt in the obvious way .a _ subbranch _ of rooted at a goal node is the branch of the subtree of rooted at .we can associate a substitution with a ( sub)branch as follows .( 1 ) the substitution associated with a success node labeled is just .( 2 ) the substitution associated with an internal goal node is simply the substitution associated with its unique rule child in .( 3 ) the substitution associated with an internal rule node in which is labeled is the composition of and the substitutions associated with the goal children of . the substitution associated with a branch is that associated with its root .we say a dpt is _ well - formed _ if it satisfies the following conditions : ( i ) is proper , ( ii ) for every goal node in , for any two ( sub)branches of rooted at , the substitutions associated with and are distinct .the second condition ensures no two branches correspond to the same classical proof " of the goal or a sub - goal . without this condition ,since the probabilistic conjunctions and disjunctions are not idempotent , the confidence of the same proof could be wrongly combined giving an incorrect confidence for the ( root of the ) dpt .henceforth , we will only consider well - formed dpts , namely , dpts that are proper , have no failure nodes , and have distinct substitutions for all ( sub ) branches corresponding to a goal node , for all goal nodes .[ soundness ] [ soundness ] let be a p - program and a ( ground ) goal . if there is a finite well - formed dpt for with respect to with an associated confidence at its root , then .* proof . *first , we make the following observations regarding the combination functions of theorem [ thm : combinations ] : \(1 ) conjunctive combination functions ( all modes ) are monotone .+ ( 2 ) disjunctive combination functions ( all modes ) are monotone .+ ( 3 ) if and are confidence levels , then and for all conjunctive and disjunctive combination functions ( all modes ) .we prove the soundness theorem by induction on the height of the dpt .assume the well - formed dpt of height is for the goal .note that has an odd height , for some , since it is a proper dpt with no failure nodes ( see remark 7 at the beginning of this section ) . : . in this casethe dpt consists of the goal root labeled and one child labeled , where is a rule in whose head is unifiable with .note that this child node is a success leaf .it represents a fact . obviously , in the first iteration of , , where is the confidence level of .it follows from the monotonicity of , that . : . assume the inductive hypothesis holds for every dpt of height , where .consider the dpt for .the root has rule children labeled .each is either a fact , or has goal children .consider the subtrees of rooted at these goal grand children of . 
by the inductive hypothesis , the confidence levels associated with the goal grand children by the dpt are less than or equal to their confidence levels calculated by , , .hence , by properties ( 1)-(3 ) above , the confidence level associated to by is less than or equal to the confidence level of obtained by another application of , .hence .note that must be well - formed otherwise this argument is not valid .[ completeness ] [ completeness ] let be a p - program and a goal such that for some number , .then there is a finite dpt for with respect to with an associated confidence at its root , such that .* let be the smallest number such that .we shall show by induction on that there is a dpt for with respect to such that the confidence computed by it is at least . :this implies ,~[1,1]\rangle ] .this case is trivial .the dpt consists of a failure node labeled . :suppose the result holds for a certain number .we show that it also holds for .suppose is a ground atom such that .now , there exists a rule such that is the mode associated with its body , and is the mode associated with its head , and there exists a ground substitution such that .consider the dpt for obtained as follows .let the root be labeled .the root has a rule child corresponding to each rule instance used in the above computation of .let be a rule child created at this step , and suppose is the rule instance corresponding to it and let be the substitution used to unify the head of the original rule with the atom .then has goal children with labels respectively . finally , by induction hypothesis, we can assume that ( i ) a dpt for is rooted at the node labeled , and ( ii ) the confidence computed by this latter tree is at least , .in this case , it readily follows from the definition of the confidence computed by a proof - tree that the confidence computed by is at least is a rule defining , is the confidence associated with it , is the mode associated with its head , and is the mode associated with its body .but this confidence is exactly .the induction is complete and the theorem follows .theorems [ soundness ] and [ completeness ] together show that the confidence of an arbitrary ground atom computed according to the fixpoint semantics and using an appropriate disjunctive proof tree is the same .this in turn is the same as the confidence associated according to the ( valuation based ) declarative semantics .in particular , as we will discuss in section [ termination ] , when the disjunctive mode associated with all recursive predicates is positive correlation , the theorems guarantee that the exact confidence associated with the goal can be effectively computed by constructing an appropriate finite dpt ( according to theorem [ completeness ] ) for it .even when these modes _ are _ used indiscriminately , we can still obtain the confidence associated with the goal with an arbitrarily high degree of accuracy , by constructing dpts of appropriate height .in this section , we first compare our work with that of ng and subrahmanian ( see section [ intro ] for a general comparison with non - probabilistic frameworks ) . first , let us examine the ( only ) mode " for disjunction used by them .they combine the confidences of an atom coming from different derivations by taking their intersection . indeed , the bottom of their lattice is a valuation ( called formula function " there ) that assigns the interval \leftarrow] \leftarrow ]. + : ] .+ : }} ] . 
as an example , let the conjunctive mode used in be independence and let the disjunctive mode used be positive correlation ( or , in this case , even ignorance ! ) . then would assign the confidence ,~[0,0]\rangle\leftarrow] \leftarrow ]. + : ] . in this case , the least fixpoint of is only attained at and it assigns the range \leftarrow ] to . )notice that as long as one uses any arithmetic annotation function such that the probability of the head is less than the probability of the subgoals of ( which is a reasonable annotation function ) , this problem will arise .the problem ( for the unintuitive behavior ) lies with the mode for disjunction .again , we emphasize that different combination rules ( modes ) are appropriate for different situations . now , consider the p - program corresponding to the annotated program , obtained in the same way as was done in example [ problem1 ] .let the conjunctive mode used in be independence and let the disjunctive mode be positive correlation or ignorance. then would assign the confidence level ~[0,0]\rangle ] for instead .then under positive correlation ( for disjunction ) ,~[0,0]\rangle ] .the former makes more intuitive sense , although the latter ( more conservative under ) is obviously not wrong .also , in the latter case , the is reached only at .now , we discuss termination and complexity issues of p - programs .let the _ closure ordinal _ of be the smallest ordinal such that .we have the following [ thm : ordinal ] let be any p - program .then the closure ordinal of can be as high as but no more .* proof . *the last p - program discussed in example [ problem2 ] has a closure ordinal of . since is continuous ( theorem [ thm : tpmonotone ] ) its closure ordinal is at most .[ defn : datacomplexity ] ( _ data complexity _ ) we define the _ data complexity_ of a p - program as the time complexity of computing the least fixpoint of as a function of the size of the database , the number of constants occurring in .it is well known that the data complexity for datalog programs is polynomial .an important question concerning any extension of ddbs to handle uncertainty is whether the data complexity is increased compared to datalog .we can show that under suitable restrictions ( see below ) the data complexity of p - programs is polynomial time .however , the proof can not be obtained by ( straightforward extensions of ) the classical argument for the data complexity for datalog . in the classical case , once a ground atom is derived during bottom - up evaluation , future derivations of it can be ignored . in programming with uncertainty , complications arise because we _ can not _ ignore alternate derivations of the same atom : the confidences obtained from them need to be combined , reinforcing the overall confidence of the atom .this calls for a new proof technique .our technique makes use of the following additional notions .define a _ disjunctive derivation tree _ ( ddt ) to be a well - formed dpt ( see section [ proof - theory ] for a definition ) such that every goal and every substitution labeling any node in the tree is ground .note that the height of a ddt with no failure nodes is an odd number ( see remark 7 at the beginning of section [ proof - theory ] ) .we have the following results . [ ddt - bu - eval ]let be a p - program and any ground atom in .suppose the confidence determined for in iteration of bottom - up evaluation is .then there exists a ddt of height for such that the confidence associated with by is exactly . * proof . 
*the proof is by induction on . : . in iteration 1, bottom - up evaluation essentially collects together all edb facts ( involving ground atoms ) and determines their confidences from the program . without loss of generality , we may suppose there is at most one edb fact in corresponding to each ground atom ( involving an edb predicate ) .let be any ground atom whose confidence is determined to be in iteration 1 . then there is an edb fact in .the associated ddt for corresponding to this iteration is the tree with root labeled and a rule child labeled .clearly , the confidence associated with the root of this tree is , and the height of this tree is ( , for . :assume the result for all ground atoms whose confidences are determined ( possibly revised ) in iteration .suppose is a ground atom whose confidence is determined to be in iteration .this implies there exist ground instances of rules , ; such that ( i ) the confidence of computed at the end of iteration is ( ) , and ( ii ) , where is the disjunctive mode for the predicate . by induction hypothesis , there are ddts , , each of height or less , for the atoms , which exactly compute the confidences , respectively , corresponding to iteration .consider the tree for by ( i ) making rule children of the root and ( ii ) making the , ( ) subtrees of ( ) .it is trivial to see that is a ddt for and its height is .further the confidence computes for the root is exactly .this completes the induction and the proof .proposition [ ddt - bu - eval ] shows each iteration of bottom - up evaluation corresponds in an essential manner to the construction of a set of ddts one for each distinct ground atom whose confidence is determined ( or revised ) in that iteration .our next objective is to establish a termination bound on bottom - up evaluation .ddt branches are defined similar to those of dpt .let be a ddt .then a branch of is a subtree of , defined as follows .\(i ) the root belongs to every branch .+ ( ii ) whenever a goal node belongs to a branch , exactly one of its rule children , belongs to the branch .+ ( iii ) whenever a rule node belongs to a branch , all its goal children belong to the branch .let be a ground atom and any ddt ( not necessarily for ) .then is -non - simple provided it has a branch containing two goal nodes and such that is an ancestor of and both are labeled by atom .a ddt is -simple if it is not -non - simple .finally , a ddt is simple if it is -simple for every atom .let be a ddt and be a branch of in which an atom appears .then we define the _ number of violations of simplicity _ of with respect to to be one less than the total number of times the atom occurs in .the number of violations of the simplicity of the ddt with respect to is the sum of the number of violations of the branches of in which occurs .clearly , is -simple exactly when the number of violations with respect to is 0 .our first major result of this section follows .[ term - bound ] let be a p - program such that only positive correlation is used as the disjunctive mode for recursive predicates .let , and is any simple ddt for = , .then at most iterations of naive bottom - up evaluation are needed to compute the least fixpoint of .essentially , for p - programs satisfying the conditions mentioned above , the theorem ( i ) shows that naive bottom - up evaluation of is guaranteed to terminate , and ( ii ) establishes an upper bound on the number of iterations of bottom - up evaluation for computing the least fixpoint , in terms of the maximum height of any 
simple tree for any ground atom .this is the first step in showing that p - programs of this type have a polynomial time data complexity .we will use the next three lemmas ( lemmas [ lem : key][lem : boundfora ] ) in proving this theorem .[ lem : key ] let be any ground atom , and let be a ddt for corresponding to , for some .suppose is -non - simple , for some atom .then there is a ddt for with the following properties : \(i ) the certainty of computed by equals that computed by .+ ( ii ) the number of violations of simplicity of with respect to is less than that of .* proof*. let be the ddt described in the hypothesis of the claim .let be the label of the root of , and assume without loss of generality that is identical to .( the case when is distinct from is similar . )let be the last goal node from the root down ( e.g. in the level - order ) , which is distinct from the root and is labeled by . since corresponds to applications of the operator , we have the following . (* ) every branch of must be isomorphic to some branch of which does not contain the node .this can be seen as follows .let be the iteration such that the subtree of rooted at , say , corresponds to .then clearly , every rule applicable in iteration is also applicable in iteration .this means every branch of constructed from a sequence of rule applications is also constructible in iteration and hence there must be a branch of that is isomorphic to such a branch .it follows from the isomorphism that the isomorphic branch of can not contain the node .associate a logical formula with each node of as follows .\(i ) the formula associated with each ( rule ) leaf is * true*. + ( ii ) the formula associated with a goal node with rule children and associated formulas , is .+ ( iii ) the formula associated with a rule parent with goal children and associated formulas is . let the formula associated with the node be . to simplify the exposition , but at no loss of generality , let us assume that in , every goal node has exactly two rule children . then the formula associated with the root can be expressed as . by ( * ) above , we can see that logically implies , . by the structure of a ddt, we can then express as , for some formula . construct a ddt from by deleting the parent of the node , as well as the subtree rooted at .we claim that ( * * ) the formula associated with the root of is equivalent to that associated with the root of . to see this , notice that the formula associated with the root of can now be expressed as . by simple application of propositional identities, it can be seen that this formula is equivalent to .but this is exactly the formula associated with the root of t , which proves ( * * ) .finally , we shall show that , together with any conjunctive mode , satisfy the following absorption laws : .+ .the first of these laws follows from the fact that for all modes we consider in this paper , , where is the lattice ordering .the second is the dual of the first . in view of the absorption laws, it can be seen that the certainty for computed by above is identical to that computed by .this proves the lemma , since has at least one fewer violations of simplicity with respect to .[ lem : simple ] let be a ddt for an atom .then there is a simple ddt for such that the certainty of computed by it is identical to that computed by .* proof*. follows by an inductive argument using lemma [ lem : key ] .[ lem : boundfora ] let be an atom and be the maximum height of any simple ddt for . 
then certainty of in is identical to that in , for all .* proof*. let be the ddt for corresponding to .note that height( .let represent the certainty computed by for , which is . by lemma [ lem : simple ], there is a simple ddt , say , for , which computes the same certainty for as .clearly , height( .let represent the certainty computed by for , . by the soundness theorem , and monotonicity of , we can write .it follows that .now we can complete the proof of theorem [ term - bound ] .* proof of theorem [ term - bound]*. let be the maximum height of any simple ddt for any atom .it follows from the above lemmas that the certainty of any atom in is identical to that in , for all , from which the theorem follows .it can be shown that the height of simple ddts is polynomially bounded by the database size .this makes the above result significant .this allows us to prove the following theorem regarding the data complexity of the above class of p - programs .[ complexity ] let be a p - program such that only positive correlation is used as the disjunctive mode for recursive predicates . then its least fixpoint can be computed in time polynomial in the database size . in particular ,bottom - up naive evaluation terminates in time polynomial in the size of the database , yielding the least fixpoint .* by theorem [ term - bound ] we know that the least fixpoint model of can be computed in at most iterations where is the maximum height of any simple ddt for any ground atom with respect to ( iterations to arrive at the fixpoint , and one extra iteration to verify that a fixpoint has been reached . )notice that each goal node in a ddt corresponds to a database predicate .let be the maximum arity of any predicate in , and be the number of constants occurring in the program .notice that under the data complexity measure ( definition [ defn : datacomplexity ] ) is a constant .the maximum number of distinct goal nodes that can occur in any branch of a simple ddt is .this implies the height above is clearly a polynomial in the database size .we have thus shown that bottom - up evaluation of the least fixpoint terminates in a polynomial number of iterations .the fact that the amount of work done in each iteration is polynomial in is easy to see .the theorem follows .we remark that our proof of theorem [ complexity ] implies a similar result for van emden s framework . to our knowledge , this is the first polynomial time result for rule - based programming with ( probabilistic ) uncertainty .we should point out that the polynomial time complexity is preserved whenever modes other than positive correlation are associated with non - recursive predicates ( for disjunction ) . 
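the effect of the disjunctive mode on termination can be seen on a one-atom toy program built from the sketches above: a fact for a and a recursive rule a <- a. the confidence values below are made up for illustration; with positive correlation as the disjunctive mode the repeated derivations of a are absorbed and naive evaluation stops after a couple of iterations, whereas with independence every iteration strictly strengthens the belief in a and, with exact arithmetic, the least fixpoint is only reached in the limit.

```python
fact = ("a", [], Confidence(0.5, 0.5, 0.3, 0.3), "independence")
loop = ("a", ["a"], Confidence(0.8, 0.8, 0.1, 0.1), "independence")
program = [fact, loop]

# Positive correlation as the disjunctive mode: converges after a couple of iterations.
print(naive_fixpoint(program, {"a": "positive_correlation"}))

# Independence as the disjunctive mode: the belief in `a` keeps growing towards its
# limit; finite prefixes of the iteration only approximate the least fixpoint.
for n in (1, 5, 50):
    v = naive_fixpoint(program, {"a": "independence"}, max_iters=n)
    print(n, v["a"])
```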
more generally , suppose is the set of all recursive predicates and is the set of non - recursive predicates in the kb , which are possibly defined in terms of those in .then any modes can be freely used with the predicates in while keeping the data complexity polynomial .finally , if we know that the data does not contain cycles , we can use any mode even with a recursive predicate and still have a polynomial time data complexity .we also note that the framework of annotation functions used in enables an infinite family of modes to be used in propagating confidences from rule bodies to heads .the major differences with our work are ( i ) in a fixed mode " for disjunction is imposed unlike our framework , and ( ii ) they do not study the complexity of query answering , whereas we establish the conditions under which the important advantage of polynomial time data complexity of classical datalog can be retained .more importantly , our work has generated useful insights into how modes ( for disjunction ) affect the data complexity .finally , a note about the use of positive correlation as the disjunctive mode for recursive predicates ( when data might contain cycles ) .the rationale is that different derivations of such recursive atoms could involve some amount of overlap ( the degree of overlap depends on the data ) .now , positive correlation ( for disjunction ) tries to be conservative ( and hence sound ) by assuming the extent of overlap is maximal , so the combined confidence of the different derivations is the least possible ( under ) .thus , it _ does _ make sense even from a practical point of view .we motivated the need for modeling both belief and doubt in a framework for manipulating uncertain facts and rules .we have developed a framework for probabilistic deductive databases , capable of manipulating both belief and doubt , expressed as probability intervals .belief doubt pairs , called confidence levels , give rise to a rich algebraic structure called a trilattice .we developed a probabilistic calculus permitting different modes for combining confidence levels of events .we then developed the framework of p - programs for realizing probabilistic deductive databases .p - programs inherit the ability to parameterize " the modes used for combining confidence levels , from our probabilistic calculus .we have developed a declarative semantics , a fixpoint semantics , and proved their equivalence .we have also provided a sound and complete proof procedure for p - programs .we have shown that under disciplined use of modes , we can retain the important advantage of polynomial time data complexity of classical datalog , in this extended framework .we have also compared our framework with related work with respect to the aspects of termination and intuitive behavior ( of the semantics ) .the parametric nature of modes in p - programs is shown to be a significant advantage with respect to these aspects .also , the analysis of trilattices shows insightful relationships between previous work ( ng and subrahmanian ) and ours . interesting open issues which merit further research include ( 1 ) semantics of p - programs under various trilattice orders and various modes , including new ones , ( 2 ) query optimization , ( 3 ) handling inconsistency in a framework handling uncertainty , such as the one studied here .the authors would like to thank the anonymous referees for their careful reading and their comments , many of which have resulted in significant improvements to the paper .kifer , m. 
, & li, a. (1988). on the semantics of rule-based expert systems with uncertainty. in gyssens, m., paradaens, j., & van gucht, d. (eds), _conf. on database theory_. bruges, belgium: springer-verlag lncs-326.
lakshmanan, l. v. s., & shiri, n. (1997). a parametric approach to deductive databases with uncertainty. (a preliminary version appeared in proc. workshop on logic in databases (lid96), springer-verlag, lncs-1154, san miniato, italy.)
ng, r. t., & subrahmanian, v. s. (1991). report umiacs-tr-91-49, cs-tr-2647. institute for advanced computer studies and department of computer science, university of maryland, college park, md 20742.
we propose a framework for modeling uncertainty where both belief and doubt can be given independent, first-class status. we adopt probability theory as the mathematical formalism for manipulating uncertainty. an agent can express the uncertainty in her knowledge about a piece of information in the form of a _confidence level_, consisting of a pair of intervals of probability, one for each of her belief and doubt. the space of confidence levels naturally leads to the notion of a _trilattice_, similar in spirit to fitting's bilattices. intuitively, the points in such a trilattice can be ordered according to truth, information, or precision. we develop a framework for _probabilistic deductive databases_ by associating confidence levels with the facts and rules of a classical deductive database. while the trilattice structure offers a variety of choices for defining the semantics of probabilistic deductive databases, our choice of semantics is based on the truth-ordering, which we find to be closest to the classical framework for deductive databases. in addition to proposing a declarative semantics based on valuations and an equivalent semantics based on fixpoint theory, we also propose a proof procedure and prove it sound and complete. we show that while classical datalog query programs have a polynomial time data complexity, certain query programs in the probabilistic deductive database framework do not even terminate on some input databases. we identify a large natural class of query programs of practical interest in our framework, and show that programs in this class possess polynomial time data complexity: not only do they terminate on every input database, they are guaranteed to do so in a number of steps polynomial in the input database size.
on a daily basis , people undergo numerous interactions with objects that barely register on a conscious level . for instance , imagine a person shopping at a grocery store as shown in figure [ fig : main ] .suppose she picks up a can of juice to load it in her shopping cart .the distance of the can is maintained fixed due to the constant length of her arm .when she checks the expiration date on the can , the distance and orientation towards the can is adjusted with respect to her eyes so that she can read the label easily . in the next aisle, she may look at a lcd screen at a certain distance to check the discount list in the store .thus , this example shows that spatial arrangement between objects and humans is subconsciously established in 3d . in other words , even though people do not consciously plan to maintain a particular distance and orientation when interacting with various objects , these interactions usually have some consistent pattern .this suggests the existence of an egocentric object prior in the person s field of view , which implies that a 3d salient object should appear at a predictable location , orientation , depth , size and shape when mapped to an egocentric rgbd image .our main conjecture stems from the recent work on human visual perception , which shows that _ humans possess a fixed size prior for salient objects_. this finding suggests that a salient object in 3d undergoes a transformation such that people s visual system perceives it with an approximately fixed size . even though , each person s interactions with the objects are biased by a variety of factors such as hand dominance or visual acuity , common trends for interacting with objects certainly exist . in this work, we investigate whether one can discover such consistent patterns by exploiting egocentric object prior from the first - person view in rgbd frames .our problem can be viewed as an inverse object affordance task . while the goal of a traditional object affordance task is to predict human behavior based on the object locations , we are interested in predicting potential salient object locations based on the human behavior captured by an egocentric rgbd camera .the core challenge here is designing a representation that would encode generic characteristics of visual saliency without explicitly relying on object class templates or hand skin detection .specifically , we want to design a representation that captures how a salient object in the 3d world , maps to an egocentric rgbd image . assuming the existence of an egocentric object prior in the first - person view , we hypothesize that a 3d salient object would map to an egocentric rgbd image with a predictable shape , location , size and depth pattern .thus , we propose an egoobject representation that represents each region of interest in an egocentric rgbd video frame by its _ shape _ , _ location _ , _ size _ , and _depth_. note that using egocentric camera in this context is important because it approximates the person s gaze direction and allows us to see objects from a first - person view , which is an important cue for saliency detection . additionally , depth information is also beneficial because it provides an accurate measure of object s distance to a person .we often interact with objects using our hands ( which have a fixed length ) , which suggests that depth defines an important cue for saliency detection as well . 
thus, assuming the existence of an egocentric object prior, our egoobject representation should allow us to accurately predict pixelwise saliency maps in egocentric rgbd frames. to achieve our goals, we create a new egocentric rgbd saliency dataset. our dataset captures people's interactions with objects during various activities such as shopping, cooking, and dining. additionally, due to the use of egocentric-stereo cameras, we can accurately capture depth information of each scene. finally, we note that our dataset is annotated for the following three tasks: saliency detection, future saliency prediction, and interaction classification. we show that we can successfully apply our proposed egocentric representation on this dataset and achieve solid results for these three tasks. these results demonstrate that by using our egoobject representation, we can accurately characterize an egocentric object prior in first-person view rgbd images, which implies that salient objects from the 3d world map to an egocentric rgbd image with predictable characteristics of shape, location, size and depth. we demonstrate that we can learn this egocentric object prior from our dataset and then exploit it for 3d saliency detection in egocentric rgbd images.

[figure: pipeline overview. region proposals are generated for the input frame; for each region we then generate a feature vector that captures shape, location, size and depth cues and use these features to predict the 3d saliency of the region.]

* saliency detection in images. * in the past, there has been much research on the task of saliency detection in 2d images. some of the earlier work employs bottom-up cues, such as color, brightness, and contrast, to predict saliency in images. additionally, several methods demonstrate the importance of shape cues for the saliency detection task. finally, some of the more recent work employs object-proposal methods to aid this task. unlike the methods listed above, which try to predict saliency based on contrast, brightness or color cues, we are more interested in expressing an egocentric object prior based on shape, location, size and depth cues in an egocentric rgbd image. our goal is then to use such a prior for 3d saliency detection in egocentric rgbd images.

* egocentric visual data analysis. * in recent work, several methods employed egocentric (first-person view) cameras for tasks such as video summarization, video stabilization, object recognition, and action and activity recognition. in comparison to prior egocentric approaches, we propose a novel problem, which can be formulated as an inverse object affordance problem: our goal is to detect 3d saliency in egocentric rgbd images based on human behavior that is captured by egocentric-stereo cameras. additionally, unlike prior approaches, we use *egocentric-stereo* cameras to capture egocentric rgbd data. in the context of saliency detection, the depth information is important because it allows us to accurately capture an object's distance to a person. since people often use hands (which have a fixed length) to interact with objects, depth information defines an important cue for saliency detection in an egocentric rgbd environment.
unlike other methods , which rely on object detectors , or hand and skin segmentation , we propose egoobject representation that is based solely on shape , location , size and depth cues in an egocentric rgbd images .we demonstrate that we can use our representation successfully to predict 3d saliency in egocentric rgbd images .based on our earlier hypothesis , we conjecture that objects from the 3d world map to an egocentric rgbd image with some predictable _ shape _ , _ location _ , _ size _ and _ depth_. we encode such characteristics in a region of interest using an egoobject map , \in \mathds{r}^{n_s\times n_l \times n_b \times n_d \times n_c} ] captures a geometric properties such as area , perimeter , edges , and orientation of .* : perimeter divided by the squared root of the area , the area of a region divided by the area of the bounding box , major and minor axes lengths .* : we employ boundary cues , which include , sum and average contour strength of boundaries in region and also minimum and maximum ultrametric - contour values that lead to appearance and disappearance of the smaller regions inside .* : eccentricity and orientation of and also the diameter of a circle with the same area as region . a location feature ^\mathsf{t} ]encodes the size of the bounding box and area of the region . * : area and perimeter of region . * : area and aspect ratio of the bounding box corresponding to the region .^\mathsf{t} ] computes the relationship with neighboring regions , : * : * : * : * : where and are the feature vector constructued by the min - pooling and max - pooling of neighboring regions for each dimension . takes average of neighboring features and is the feature of the top nearest neighbor .* summary . * for every region of interest in an egocentric rgbd frame, we produce a dimensional feature vector denoted by .we note that some of these features have been successfully used previously in tasks other than 3d saliency detection . additionally , observe that we do not use any object - level feature or hand or skin detectors as is done .this is because , in this work , we are primarily interested in studying the idea that salient objects from the 3d world are mapped to an egocentric rgbd frame with a consistent shape , location , size and depth patterns .we encode these cues with our egoobject representation and show its effectiveness on egocentric rgbd data in the later sections of the paper .given an rgbd frame as an input to our problem , we first feed rgb channels to an mcg method , which generates proposal regions .then , for each of these regions , we generate our proposed features and use it as an input to the random forest classifier ( rf ) . using a rf, we aim to learn the function that takes the feature vector corresponding to a particular region as an input , and produces an output for one our proposed tasks for region ( i.e. saliency value or interaction classification ) .we can formally write this function as .we apply the following pipeline for the following three tasks : 3d saliency detection , future saliency prediction , and interaction classification . however ,for each of these tasks we define a different output objective and train rf classifier according to that objective separately for each task .below we describe this procedure for each task in more detail . * 3d saliency detection . *we train a random forest _regressor _ to predict region s intersection over union ( iou ) with a ground truth salient object . 
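as a concrete illustration of this regression step ( which the next paragraph describes in more detail ) , the sketch below fits a random forest regressor from a toy per - region shape / location / size / depth descriptor to iou targets . the feature set , image sizes and synthetic labels are placeholders rather than the paper s actual feature vector or dataset , and scikit - learn s randomforestregressor merely stands in for whatever rf implementation the authors used .

```python
# Illustrative sketch only: map toy per-region shape/location/size/depth features to
# IoU targets with a random-forest regressor.  The descriptor below is NOT the paper's
# feature vector, and the synthetic masks/labels stand in for region proposals and
# ground-truth annotations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def region_features(mask, depth):
    """Toy shape/location/size/depth descriptor for one region proposal."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    area = float(len(xs))
    bbox_h = ys.max() - ys.min() + 1
    bbox_w = xs.max() - xs.min() + 1
    extent = area / (bbox_h * bbox_w)            # region area / bounding-box area
    aspect = bbox_w / bbox_h                     # bounding-box aspect ratio
    cx, cy = xs.mean() / w, ys.mean() / h        # normalized centroid (location cue)
    d = depth[mask]                              # depth cues inside the region
    return np.array([area / (h * w), extent, aspect, cx, cy, d.mean(), d.min(), d.max()])

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(500):                             # synthetic stand-ins for region proposals
    mask = np.zeros((120, 160), dtype=bool)
    y0, x0 = rng.integers(0, 80), rng.integers(0, 100)
    mask[y0:y0 + rng.integers(10, 40), x0:x0 + rng.integers(10, 60)] = True
    depth = rng.uniform(0.3, 3.0, size=mask.shape)
    X.append(region_features(mask, depth))
    y.append(rng.uniform(0.0, 1.0))              # ground-truth IoU would go here
X, y = np.vstack(X), np.array(y)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print("predicted IoU for the first region:", rf.predict(X[:1])[0])
```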
to train the rf regressor we sample regions from our dataset , and extract our features from each of these regions .we then assign a corresponding ground truth iou value to each of them and train a rf regressor using trees .our rf learns the mapping ] , and ] and concatenate these new distances to the original features .such gaze normalization scheme ensures robustness to our system in the case of gaze fluctuations .seconds , where the same object is salient .our goal here is to predict an object that will be salient after seconds .] seconds , where the same object is salient .our goal here is to predict an object that will be salient after seconds .] seconds , where the same object is salient .our goal here is to predict an object that will be salient after seconds .] seconds , where the same object is salient .our goal here is to predict an object that will be salient after seconds .] seconds , where the same object is salient .our goal here is to predict an object that will be salient after seconds .] seconds , where the same object is salient .our goal here is to predict an object that will be salient after seconds . ] * interaction classification .* most of the current computer vision systems classify objects by specific object class templates ( cup , phone , etc ) .however , these templates are not very informative and can not be used effectively beyond the tasks of object detection . adding object s function , andthe type of interaction related to that object would allow researchers to tackle a wider array of problems overlapping vision and psychology . to predict an interaction type at a given frame , for each frame we select top highest ranked regions according to their predicted saliency score .we then classify each of these regions either as sight or touch . finally , we take the majority label from these classification predictions , and use it to classify an entire frame as sight or touch .| c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c ? c & mf & ap & mf & ap & mf & ap & mf & ap & mf & ap & mf & ap & mf & ap & mf & ap & mf & ap + fts & 7.3 & 0.6 & 8.9 & 1.3 & 10.6 & 1.2 & 5.8 & 0.5 & 4.5 & 1.1 & 17.2 & 2.0 & 20.0 & 2.5 & 5.0 & 0.7 & 9.9 & 1.2 + mcg & 10.4 & 4.7 & 13.8 & 7.0 & 21.1 & 12.7 & 7.1 & 2.9 & 12.5 & 5.7 & 23.6 & 12.2 & 31.2 & 14.9 & 11.1 & 5.1 & 16.4 & 8.1 + gbmr & 8.0 & 3.0 & 15.6 & 6.8 & 14.7 & 7.0 & 6.8 & 3.0 & 4.3 & 1.3 & 32.7 & 18.3 & 46.0 & 30.2 & 12.9 & 5.7 & 17.6 & 9.4 + salobj & 7.2 & 2.7 & 19.9 & 7.4 & 21.3 & 10.0 & 15.4 & 5.1 & 5.8 & 2.2 & 24.1 & 9.2 & 49.3 & 28.3 & 9.0 & 3.4 & 19.0 & 8.5 + gbvs & 7.2 & 3.0 & 21.3 & 11.4 & 20.0 & 10.6 & 16.1 & 8.8 & 4.3 & 1.4 & 23.1 & 13.8 & 50.9 & * 50.2 & 11.6 & 5.7 & 19.3 & 13.1 + objectness & 11.5 & 5.6 & * 35.1 & * 24.3 & 39.2 & 29.4 & 11.7 & 6.9 & 4.7 & 1.9 & 27.1 & 17.1 & 47.4 & 42.2 & 13.0 & 6.4 & 23.7 & 16.7 + * ours ( rgb ) & 25.7 & 16.2 & 34.9 & 21.8 & 37.0 & 23.0 & 23.3 & 14.4 & * 28.9 & * 18.5 & 32.0 & 18.7 & * 56.0 & 39.6 & 30.3 & 21.8 & 33.5 & 21.7 + * ours ( rgb - d ) & * 36.9 & * 26.6 & 30.6 & 18.2 & * 55.3 & * 45.4 & * 26.8 & * 19.3 & 18.8 & 10.5 & * 37.9 & * 25.4 & 50.6 & 38.4 & * 40.2 & * 28.5 & * 37.1 & * 26.5 + * * * * * * * * * * * * * * * * * * * *we now present our egocentric rgbd saliency dataset . 
our dataset records people s interactions with their environment from a first - person view in a variety of settings such as shopping , cooking , dining , etc .we use egocentric - stereo cameras to capture the depth of a scene as well .we note that in the context of our problem , the depth information is particularly useful because it provides an accurate distance from an object to a person .since we hypothesize that a salient object from the 3d world maps to an egocentric rgbd frame with a predictable depth characteristic , we can use depth information as an informative cue for 3d saliency detection task .our dataset has annotations for three different tasks : saliency detection , future saliency prediction , and interaction classification .these annotations enables us to train our models in a supervised fashion and quantitatively evaluate our results .we now describe particular characteristics of our dataset in more detail . * data collection . *we use two stereo gopro hero 3 ( black edition ) cameras with baseline to capture our dataset .all videos are recorded at with .the stereo cameras are calibrated prior to the data collection and synchronized manually with a synchronization token at the beginning of each sequence. * depth computation . *we compute disparity between the stereo pair after stereo rectification .a cost space of stereo matching is generated for each scan line and match each pixel by exploiting dynamic programming in a coarse - to- fine manner .* sequences . *we record video sequences that capture people s interactions with object in a variety of different environments .these sequences include : cooking , supermarket , eating , hotel , hotel , dishwashing , foodmart , and kitchen sequences .* saliency annotations .* we use grabcut software to annotate salient regions in our dataset .we generate annotated frames for kitchen , cooking , and eating sequences , and annotated frames for supermarket and foodmart sequences respectively , and annotated frames for hotel and hotel sequences respectively , and annotated frames for dishwashing sequence ( for a total of frames with per - pixel salient object annotations ) . in fig .[ fig : data_po ] , we illustrate a few images from our dataset and the depth channels corresponding to these images . to illustrate ground truth labels , we overlay these images with saliency annotations . additionally , in fig . [fig : data_stats ] , we provide statistics that capture different properties of our dataset such as the location , depth , and size of annotated salient regions from all sequences . each video sequence from our dataset is marked by a different color in this figure .we observe that these statistics suggest that different video sequences in our dataset exhibit different characteristics , and captures a variety of diverse interactions between people and objects .* annotations for future saliency prediction . 
*in addition , we also label our dataset to predict future saliency in egocentric rgbd image after frames .specifically , we first find the frame pairs that are frames apart , such that the same object is present in both of the frames .we then check that this object is non - salient in the earlier frame and that it is salient in the later frame .finally , we generate per - pixel annotations for these objects in both frames .we do this for the cases where the pair of frames are , and seconds apart .we produce annotated frames for kitchen , for cooking , for eating , for supermarket , for hotel , for hotel , for foodmart , and frames dishwashing sequences .we present some examples of these annotations in fig .[ fig : data_fut ] . * annotations for interaction classification . *to better understand the nature of people s interactions with their environment we also annotate each interaction either as _ sight _ or as _ touch_.in this section , we present the results on our egocentric rgbd saliency dataset for three different tasks , which include 3d saliency detection , future saliency prediction and interaction classification .we show that using our egoobject feature representation , we achieve solid quantitative and qualitative results for each of these tasks .to evaluate our results , we use the following procedure for all three tasks .we first train random forest ( rf ) using the training data from sequences .we then use this trained rf to test it on the sequence that was not used in the training data .such a setup ensures that our classifier is learning a meaningful pattern in the data , and thus , can generalize well on new data instances .we perform this procedure for each of the sequences separately and then use the resulting rf model to test on its corresponding sequence . for the saliency detection and future saliency prediction tasks , our method predicts pixelwise saliency for each frame in the sequence . to evaluate our results we use two different measures : a maximum f - score ( mf ) along the precision - recall curve , and average precision ( ap ) . for the task of interaction classification, we simply classify each interaction either as sight or as touch .thus , to evaluate our performance we use the fraction of correctly classified predictions .we now present the results for each of these tasks in more detail . detecting 3d saliency in an egocentric rgbd settingis a novel and relatively unexplored problem .thus , we compare our method with the most successful saliency detection systems for 2d images . sequences .these visualizations demonstrate that in each of these sequences , our method captures an egocentric object prior that has a distinct shape , location , and size pattern . ]these visualizations demonstrate that in each of these sequences , our method captures an egocentric object prior that has a distinct shape , location , and size pattern . ]these visualizations demonstrate that in each of these sequences , our method captures an egocentric object prior that has a distinct shape , location , and size pattern . ]these visualizations demonstrate that in each of these sequences , our method captures an egocentric object prior that has a distinct shape , location , and size pattern . ]these visualizations demonstrate that in each of these sequences , our method captures an egocentric object prior that has a distinct shape , location , and size pattern . 
in table [ po_table ] , we present quantitative results for the saliency detection task on our dataset . we observe that our approach outperforms all the other methods by and in mf and ap evaluation metrics respectively . these results indicate that saliency detection methods designed for _ non - egocentric _ images do not generalize well to _ egocentric _ images . this can be explained by the fact that in most _ non - egocentric _ saliency detection datasets , images are displayed at a fairly standard scale , with few occlusions , and close to the center of an image . however , in the egocentric environment , salient objects are often occluded , and they appear at a small scale and among many other objects , which makes this task more challenging . furthermore , we note that none of these baseline methods use depth information . based on the results in table [ po_table ] , we observe that adding depth features to our framework provides accuracy gains of and according to the mf and ap metrics respectively . finally , we observe that the results of different methods vary quite a bit from sequence to sequence . this confirms that our egocentric rgbd saliency dataset captures various aspects of people s interactions with their environment , which makes it challenging to design a method that performs equally well in each of these sequences . based on the results , we see that our method achieves the best results in and sequences ( out of ) according to the mf and ap evaluation metrics respectively , which suggests that exploiting the egocentric object prior via shape , location , size , and depth features allows us to predict visual saliency robustly across all sequences . additionally , we present our qualitative results in fig . [ fig : po_preds ] . our saliency heatmaps in this figure suggest that we can accurately capture different types of salient interactions with objects . furthermore , to provide a more interesting visualization of our learned egocentric object priors , we average our predicted saliency heatmaps for each of the selected sequences and visualize them in fig . [ fig : avg_preds ] . we note that these averaged heatmaps have distinct shape , location , and size characteristics , which suggests the existence of an egocentric object prior in egocentric rgbd images . in fig . [ fig : feats ] , we also analyze which features contribute the most to the saliency detection task . the feature importance is quantified by the mean squared error reduction when splitting a node by that feature in a random forest . in this case , we manually assign each of our features to one of groups . these groups include shape , location , size , depth , shape context , location context , size context and depth context features ( as shown in fig . [ fig : feats ] ) . for each group , we average the importance values of all the features belonging to that group and present them in fig . [ fig : feats ] . based on this figure , we observe that shape features contribute the most to saliency detection . additionally , since location features capture an approximate gaze of the person , they are deemed informative as well . furthermore , we observe that size and depth features also provide informative cues for capturing saliency in an egocentric rgbd image . as expected , the context features are least important .
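the grouping of importances described above can be reproduced with a short , purely illustrative computation : scikit - learn s feature_importances_ attribute is the mean impurity ( mse ) reduction per feature , which matches the importance measure used in the text , so averaging it over hand - assigned groups mirrors the procedure . the toy data and the group assignment below are assumptions for illustration only .

```python
# Illustrative sketch of grouped feature-importance analysis with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 8))
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=400)
rf = RandomForestRegressor(n_estimators=200, random_state=1).fit(X, y)

# Hypothetical mapping from feature indices to groups (shape, location, size, depth).
groups = {"shape": [0, 1], "location": [2, 3], "size": [4, 5], "depth": [6, 7]}
for name, idx in groups.items():
    print(f"{name:8s} mean importance: {rf.feature_importances_[idx].mean():.3f}")
```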
in this section ,we present our results for the task of future saliency prediction .we test our trained rf model under three scenarios : predicting a salient object that will be used after , and seconds respectively .as one would expect , predicting the event further away from the present frame is more challenging , which is reflected by the results in table [ fut_table ] .for this task , we aim to use our egoobject representation to learn the cues captured by egocentric - stereo cameras that are indicative of person s future behavior .we compare our future saliency detector ( fsd ) to the saliency detector ( sd ) from the previous section and show that we can achieve superior results , which implies the existence and consistency of the cues that are indicative of person s future behavior .such cues may include person s gaze direction ( captured by an egocentric camera ) , or person s distance to an object ( captured by the depth channel ) , which are both pretty indicative of what the person may do next . in fig .[ fig : fut_preds ] , we visualize some of our future saliency predictions .based on these results , we observe , that even in a difficult environment such as supermarket , our method can make meaningful predictions ..future saliency results according to max f - score ( mf ) and average precision ( ap ) evaluation metrics .given a frame at time , our future saliency detector ( fsd ) predicts saliency for times , and ( denoted by seconds ) . as our baselinewe use a saliency detector ( sd ) from section [ tech_approach ] of this paper .we show that in every case we outperform this baseline according to both metrics .this suggests that using our representation , we can consistently learn some of the egocentric cues such as gaze , or person s distance to an object that are indicative of people s future behavior . [cols="^,^,^,^,^",options="header " , ] in this section , we report our results on the task of interaction classification . in this case , we only have two labels ( sight and touch ) and so we evaluate the performance as a fraction of correctly classified predictions .we compare our approach with a depth - based baseline , for which we learn an optimal depth threshold for each sequence , then for a given input frame , if a predicted salient region is further than this threshold , our baseline classifies that interaction as _ sight _ , otherwise the baseline classifies it as _touch_. due to lack of space , we do not present the full results .however , we note that our approach outperforms depth - based baseline in out of categories and achieves higher accuracy on average in comparison to this baseline .we also illustrate some of the qualitative results in fig .[ fig : po_preds ] .these results indicate that we can use our representation to successfully classify people s interactions with objects by sight or touch .in this paper , we introduced a new psychologically inspired approach to a novel 3d saliency detection problem in egocentric rgbd images . we demonstrated that using our psychologically inspired egoobject representation we can achieve good results for the three following tasks : 3d saliency detection , future saliency prediction , and interaction classification .these results suggest that an egocentric object prior exists and that using our representation , we can capture and exploit it for accurate 3d saliency detection on our egocentric rgbd saliency dataset .
on a minute - to - minute basis people undergo numerous fluid interactions with objects that barely register on a conscious level . recent neuroscientific research demonstrates that humans have a fixed size prior for salient objects . this suggests that a salient object in 3d undergoes a consistent transformation such that people s visual system perceives it with an approximately fixed size . this finding indicates that there exists a consistent egocentric object prior that can be characterized by shape , size , depth , and location in the first person view . in this paper , we develop an egoobject representation , which encodes these characteristics by incorporating shape , location , size and depth features from an egocentric rgbd image . we empirically show that this representation can accurately characterize the egocentric object prior by testing it on an egocentric rgbd dataset for three tasks : the 3d saliency detection , future saliency prediction , and interaction classification . this representation is evaluated on our new egocentric rgbd saliency dataset that includes various activities such as cooking , dining , and shopping . by using our egoobject representation , we outperform previously proposed models for saliency detection ( relative improvement for 3d saliency detection task ) on our dataset . additionally , we demonstrate that this representation allows us to predict future salient objects based on the gaze cue and classify people s interactions with objects .
recently , the both schools of takemura and takayama have developed a quite interesting minimization method called holonomic gradient descent method(hgd ) .it utilizes grbner basis in the ring of differential operator with rational coefficients .grbner basis in the differential operators plays a central role in deriving some differential equations called a pfaffian system for optimization .hgd works by a mixed use of pfaffian system and an iterative optimization method .it has been successfully applied to several maximum likelihood estimation ( mle ) problems , which have been intractable in the past .for example , hgd solve numerically the mle problems for the von mises - fisher distribution and the fisher - bingham distribution on the sphere ( see , sei et al.(2013 ) and nakayama et al.(2011 ) ) .furthermore , the method has also been applied to the evaluation of the exact distribution function of the largest root of a wishart matrix , and it is still rapidly expanding the area of applications(see , hashiguchi et al.(2013 ) ) . on the other hand , in statistical models ,it is not rare that parameters are constrained and therefore the mle problem with constraints has been surely one of fundamental topics in statistics . in this paper, we develop hgd for mle problems with constraints , which we call the constrained holonomic gradient descent(chgd ) .the key of chgd is to separate the process into ( a ) updating of new parameter values by newton - raphson method with penalty function and ( b ) solving a pfaffian system .we consider the following the constrained optimization problem . where , and are all assumed to be continuously differentiable function . is an equality constraint function and is an inequality constraint function . in this paper ,the objective function is assumed to be holonomic .we call the interior region defined by the constraint functions _ the feasible region_. a penalty function method replaces a constrained optimization problem by a series of unconstrained problems. it is performed by adding a term to the objective function that consists of a penalty parameter and a measure of violation of the constraints . in our simulation, we use _ the exact penalty function method_. the definition of the exact penalty function is given by ( see yabe ( 2006 ) ) . that we seek the minimum of a holonomic function and the point which gives the minimum .in hgd , we use the iterative method together with a pfaffian system . in this paper, we use the the newton - raphson iterative minimization method in which the renewal rule of the search point is given by where and is the hessian of at .hgd is based on the theory of the grbner basis . in the following ,we refer to the relation of a numerical method and the grbner basis .let be the differential ring written as \langle \partial_1, ..,\partial_n \rangle \nonumber\end{aligned}\ ] ] where ] is a field and \langle \partial_1, .. ,\partial_n \rangle \in i ] , with be a standard basis in the quotient vector space which is a finite dimensional vector spaces .let be the grbner basis of .the rank of arbitrary differential operations can be reduced by normalization by .assume that holds .for a solution of put .then , it holds that ( see , e.g.,nakayama et al.(2011 ) ) where is a matrix with as a element {j } , \ \i=1, ... ,n,\ \ j=1 ... ,t\end{aligned}\ ] ] this proves the assertion . 
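as a concrete numerical illustration of how a pfaffian system of this kind is used , the sketch below propagates the modified bessel function of order zero , which is the holonomic factor of the von mises normalizing constant , by integrating its rank - two system instead of evaluating the function directly . this is only a toy stand - in for the larger systems arising in the cited applications ; scipy supplies the initial values and an independent check .

```python
# Toy illustration of using a Pfaffian system numerically: I_0(kappa) satisfies a
# rank-two system for F = (I_0, I_0')^T, namely dF/dkappa = (F[1], F[0] - F[1]/kappa).
# We propagate F by solving this ODE instead of evaluating I_0 directly.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import iv

def pfaffian(kappa, F):
    return [F[1], F[0] - F[1] / kappa]

kappa0, kappa1 = 1.0, 5.0
F0 = [iv(0, kappa0), iv(1, kappa0)]          # I_0(1) and I_0'(1) = I_1(1)
sol = solve_ivp(pfaffian, (kappa0, kappa1), F0, rtol=1e-10, atol=1e-12)
print("I_0(5) via the Pfaffian ODE:", sol.y[0, -1])
print("I_0(5) via scipy           :", iv(0, kappa1))
```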
the above differential equations are called _ pfaffian differential equations _ or _ pfaffian system _ of .so we can calculate the gradient of by using pfaffian differential equations .then , and are also given by pfaffian differential equations .( see hibi et al.(2012 ) ) let be the normal form of by and be the normal form of by . then we have , where denotes the first entry of a vector . for hgd , we first give an ideal for holonomic function and calculate the grbner basis of and then the standard basis are given by .the coefficient matrix for pfaffian system is led by this standard basis , and and are calculated from by starting from a initial point through the pfaffian equations .after these , we can compute automatically the optimum solution by a mixed use of then newton - raphson method .the algorithm is given by below .* set and take an initial point and evaluate . *evaluate and from and calculate the newton direction , * update a search point by . *evaluate by solving pfaffian equations numerically .* set and calculate and goes to step.2 and repeat until convergence .the key step of the above algorithm is step 4 .we can not evaluate by inputting in the function since the hgd treats the case that is difficult to calculate numerically .instead , we only need calculate and numerically for a given initial value .now , we propose the method in which we add constraint conditions to hgd and call it the constrained holonomic gradient descent method(chgd ) . for treating constraints we use the penalty function and add it to objective function and make a new objective function and can treat it as the unconstrained optimization problem .we use hgd for evaluation of gradients and hessian and use the exact penalty function method for constraints .the value of updating a search point can be obtained as the product of directional vector and step size .the step size is chosen so that the following armijo condition is satisfied .in fact we chose such that where and is the approximation of given by . the initial value of is set and then is made smaller iteratively until satisfies equation ( [ eq_s ] ) , or . in our algorithm ,holonomic gradient descent plays a role to calculate the gradient vectors and then the penalty function plays a role to control the step size iteratively .we apply chgd for mle for von mises distribution(vm ) .the process of applying for hgd is shown in nakayama et al.(2011 ) .the density function of vm is given by .the parameters of vm , and , show concentration and mean of angle data respectively .we set the parameters for mle and .now we solve the constrained optimization problem given by . let be sample data .let be sample size .then , and . in our simulation, we set the vm s parameter of which the true value and the initial value . we tried the 2 patterns of constraints .both of the case worked under the same condition except constraints . in figure 1, the constraint is . in figure 2 ,the constraint is .figures 1,2 are the drawing of the trace of the search point . the result of simulation , the convergence point of hgd is . in figure 1 ,the convergence point of chgd is . in figure 2 ,the convergence point of chgd is . in the chgd ,the search direction is almost same as the hgd , because the direction is decided by the hgd s algorithm . 
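a toy sketch of this behaviour is given below : the search direction is the newton direction of the unconstrained objective , while an exact penalty term with armijo backtracking controls the step size so that the iterate respects an inequality constraint . the objective , constraint , penalty weight and starting point are illustrative assumptions , and in chgd the gradient and hessian would be supplied by the pfaffian system rather than by closed forms .

```python
# Toy sketch of a CHGD-style run: Newton direction of the unconstrained objective,
# step size chosen by Armijo backtracking on an exact penalty function
# P(x) = f(x) + rho * sum(max(0, g_i(x))) for inequality constraints g_i(x) <= 0.
import numpy as np

def f(x):    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2      # unconstrained optimum (2, 1)
def grad(x): return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
def hess(x): return 2.0 * np.eye(2)
def g(x):    return np.array([x[0] + x[1] - 2.0])              # feasible region: x0 + x1 <= 2

def penalty(x, rho=10.0):                                      # exact (l1-type) penalty
    return f(x) + rho * np.sum(np.maximum(0.0, g(x)))

x = np.array([0.0, -1.0])
for _ in range(50):
    d = -np.linalg.solve(hess(x), grad(x))                     # Newton direction
    s, beta, c = 1.0, 0.5, 1e-4
    while penalty(x + s * d) > penalty(x) + c * s * (grad(x) @ d) and s > 1e-8:
        s *= beta                                              # Armijo backtracking on the penalty
    x = x + s * d
print("constrained minimizer estimate:", x)                    # approximately (1.5, 0.5)
```

with this starting point the iterates stop at a boundary point of the feasible region , which here coincides with the constrained minimizer ; in general the penalty only ensures that accepted steps stay essentially feasible , exactly as described above .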
the constraints , meanwhile , play the role of judging whether the search point is within the feasible region or not and of deciding the step size . chgd is an effective method for optimization with constraints . however , chgd always incurs a higher runtime cost than hgd , regardless of whether the solution is in the feasible region or not . the following table shows the runtimes when the optimal solution is within the feasible region . in table [ tb1 ] , all numbers are means over 500 trials . the optimization problem is equation ( [ optvm ] ) . sample data is drawn from the vm with . the third column of table [ tb1 ] shows the result obtained with the newton - raphson method alone , which optimizes directly and does not use the pfaffian system . thus , we see that hgd and chgd are faster than the newton - raphson method . we also see that the runtime of chgd is generally longer than that of hgd , while both methods reach almost the same solution when it lies inside the feasible region . sometimes the process finishes early because of the constraints , when the solution is outside the feasible region . nevertheless , the additional computational cost of chgd needs to be taken into account . 
hashiguchi , h. , numata , y. , takayama , n. , takemura , a. ( 2013 ) . _ `` the holonomic gradient method for the distribution function of the largest root of a wishart matrix''_. journal of multivariate analysis , 117 , 296 - 312 . 
nakayama , h. , nishiyama , k. , noro , m. , ohara , k. , sei , t. , takayama , n. , takemura , a. ( 2011 ) . _ `` holonomic gradient descent and its application to the fisher bingham integral''_. advances in applied mathematics , 47(3 ) , 639 - 658 . 
yabe , h. ( 2006 ) . _ `` introduction and application of optimization problem ( japanese)''_. surikougakusha publisher . 
cox , d. a. , little , j. , oshea , d. ( 2007 ) . _ `` ideals , varieties , and algorithms : an introduction to computational algebraic geometry and commutative algebra ( vol . 10)''_. springer verlag .
recently , the schools of takemura and takayama have developed a quite interesting minimization method called _ holonomic gradient descent method _ ( hgd ) . it works by a mixed use of a pfaffian differential equation satisfied by an objective holonomic function and an iterative optimization method . they successfully applied the method to several maximum likelihood estimation ( mle ) problems , which had been intractable in the past . on the other hand , in statistical models it is not rare that parameters are constrained , and therefore mle with constraints has surely been one of the fundamental topics in statistics . in this paper we develop hgd with constraints for mle . * holonomic descent minimization method for restricted maximum likelihood estimation * + rieko sakurai , and toshio sakata + _ graduate school of medicine , kurume university 67 asahimachi , kurume 830 - 0011 , japan _ + _ faculty of design human science , kyushu university , 4 - 9 - 1 shiobaru minami - ku , fukuoka 815 - 8540 , japan _ + email : a213gm009s.kurume-u.ac.jp _ key words : holonomic gradient descent method , newton - raphson method with penalty function , von mises - fisher distribution _
recent progress in quantum communication technology has confirmed that the biggest challenge in using quantum methods of communication is to provide scalable methods for building large - scale quantum networks .the problems arising in this area are related to physical realizations of such networks , as well as to designing new protocols that exploit new possibilities offered by the principles of quantum mechanics in long - distance communication .one of the interesting problems arising in the area of quantum internetworking protocols is the development of methods which can be used to detect errors that occur in large - scale quantum networks . a natural approach for developing such methods is to construct them on the basis of the methods developed for classical networks .the main contribution of this paper is the development of a method for exploring quantum networks by mobile agents which operate on the basis of information stored in quantum registers .we construct a model based on a quantum walk on cycle which can be applied to analyse the scenario of exploring quantum networks with a faulty sense of direction .one should note that the presented model allows studying the situations where all nodes in the network are connected .the reason for this is that a move can result in the shift of the token from the current position to any other position in the network .thus we do not restrict ourselves to a cycle topology .this paper is organized as follows . in the remaining part of this section we provide a motivation for the considered scenario and recall a classical scenario described by magnus - derek game . in section [ sec : quantum - magnus - derek ]we introduce a quantum the scenario of quantum network exploration with a distracted sense of direction . in section [ sec : application - quantum ] we analyse the behaviour of quantum mobile agents operating with various classes of strategies and describe non - adaptive and adaptive quantum strategies which can be employed by the players . finally , in section [ sec : final ] we summarize the presented work and provide some concluding remarks . as quantum networksconsist of a large number of independent parties it is crucial to understand how the errors , that occur during the computation on nodes , influence their behaviour . such errors may arise , in the first place , due to the erroneous work of particular nodes .therefore it is important to develop the methods that allow the exploration of quantum networks and the detection of malfunctioning nodes .one of the methods used to tackle this problem in classical networks is the application of mobile agents , _i.e. _ autonomous computer programs which move between hosts in a network .this method has been studied extensively in the context of intrusion detection , but it is also used as a convincing programming paradigm in other areas of software engineering . on the other hand , recent results concerning the exploration of quantum graphs suggest that by using the rules of quantum mechanics it is possible to solve search problems or rapidly detect errors in graphs . in this paperwe aim to combine both methods mentioned above .we focus on a model of mobile agents used to explore a quantum network . 
for the purpose of modelling such agents we introduce and study the quantum version of the magnus - derek game .this combinatorial game , introduced in , provides a model for describing a mobile agent acting in a communication network .the magnus - derek game was introduced in and analysed further in and .the game is played by two players : derek ( from _ direction _ or _ distraction _ ) and magnus ( from _ magnitude _ or _ maximization _ ) , who operate by moving a token on a round table ( cycle ) with nodes .initially the token is placed in the position . in each round ( step ) magnus decides about the number of positions for the token to move and derek decides about the direction : clockwise ( or ) or counter - clockwise ( or ) .magnus aims to maximize the number of nodes visited during the game , while derek aims to minimize this value .derek represents a distraction in the sense of direction . for example , a sequence of moves allowing magnus to visit three nodes , can be changed to due to the influence of derek represented by the and signs .the possibility of providing biased information about the direction prevents magnus permanently from visiting some nodes . in the classical scenarioone can introduce a function which , for a given number of nodes , gives the cardinality of the set of positions visited by the token when both players play optimally .it can be shown that this function is well defined and with being the smallest odd prime factor of .by we denote the number of moves required to visit the optimal number of nodes . in the case , the number of moves is optimal and equals .et al . _proved that if is a positive integer not equal to a power of , then there exists a strategy allowing magnus to visit at least nodes using at most moves .we distinguish two main types of regimes adaptive and non - adaptive . in the adaptive regime ,both players are able to choose their moves during the execution of the game . in the non - adaptive regime, magnus announces the sequence of moves he aims to perform .in particular , if the game is executed in the non - adaptive regime , derek can calculate his sequence of moves before the game . in the classical casethe problem of finding the optimal strategy for derek is -hard and is equivalent to the partition problem .let us now assume that the players operate by encoding their positions on a cycle in an -dimensional pure quantum states .thus the position of the token is encoded in a state . at the -th step ofthe game magnus decides to move and derek decides to move in direction .one can easily express the classical game by applying the notation of quantum states .the evolution of the system during the move described above is given by a unitary matrix of the form where .clearly , as the above permutation operators express only the classical subset of the possible moves , by using it one can not expect to gain with respect to the classical scenario . in particular , the operators as introduced above do not allow the preparation of a move by using the information encoded in a superposition . in order to exploit the possibilities offered by quantum mechanics in the magnus - derek scheme, we can use a quantum walk controlled by two registers . to achieve thiswe need to offer the players a larger state space .we introduce a quantum scheme by defining the following quantum version of the magnus - derek game . 1 .the state of the system is described by a vector of the form 2 .the initial state of the system reads .3 . 
at each stepthe players can choose their strategy , possibly using unitary gates . 1 .magnus operates on his register with any unitary gate resulting in a operation of the form performed on the full system .2 . derek operates on his register with any unitary gate .if his actions are position - independent the operation performed on the full system takes the form .however , in section [ sec : position_control ] we also allow position - controlled actions , resulting in the operator of the form .[ game:3b ] 4 .the change of the token position , resulting from the players moves , is described by the shift operator where the addition and the subtraction is in the appropriate ring .the single move in the game defined according to the above description is given by the position - independent operator taking this into account the state of the system after the execution of moves reads where each matrix depends on the move of each party .the distribution of the position on the cycle after moves is described by a reduced density matrix which represents the state of the token register after tracing - out the subsystems used to process the strategies . here represents the operation of tracing - out the subsystems used by magnus and derek to encode their strategies .the key part of this procedure is how the players choose their strategies .the selection of the method influences the efficiency of the exploration .below we study the possible methods and show how they influence the behaviour of the quantum version of the magnus - derek game . clearly , by using the unitary gates magnus and derek are able to prepare the superpositions of base states .for this reason , one needs to provide the notion of node visiting suitable for analysing quantum superpositions of states .therefore , we introduce the notion of _ visiting _ and _ attaining _ a position .we say that the position is visited in steps , if for some step the probability of measuring the position register in the state is 1 , _i.e._ in order to introduce the notion of attaining we use the concepts of measured quantum walk and concurrent hitting time .a -measured quantum walk from a discrete - time quantum walk starting in a state is a process defined by iteratively first measuring with the two projectors and . if is measured the process is stopped , otherwise a step operator is applied and the iteration is continued .a quantum random walk has a concurrent hitting - time if the -measured walk from this walk and initial state has a probability of stopping at a time . we say that the position is attained in steps , if -measured exploration walk has a concurrent hitting time , i.e. the exploration walk with initial state has a probability of stopping at a time equal to . with the help of these definitions , one can introduce the concepts of _ visiting strategy _ and _ attaining strategy_. [ def : visiting - strategy ] if for the given sequence of moves performed by magnus , there exists such that each position on the cycle is visited in steps , then we call such sequence of moves a _ visiting strategy_. [ def : attaining - strategy ] if for the given sequence of moves performed by magnus , each position on the cycle is attained , then we call such sequence of moves an _ attaining strategy_.the quantum scheme introduced in the previous section extends the space of strategies which can be used by both players .as there is a significant difference in situations where and , we will consider these cases separately .we start by considering the case . 
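before analysing the individual cases , a minimal numerical sketch of a single move of the scheme defined above may be helpful . the register ordering , the value of the cycle length and the choice of gates are illustrative assumptions ; the point is only to show how a magnitude call , a coin operation on the direction register and the shift operator combine .

```python
# Minimal sketch of one move of the quantum Magnus-Derek scheme (illustrative: n = 8,
# register order |position> x |magnitude> x |direction>, a cyclic permutation for
# Magnus and a Hadamard coin for Derek).
import numpy as np

n = 8
psi = np.zeros((n, n, 2), dtype=complex)     # amplitudes psi[position, magnitude, direction]
psi[0, 0, 0] = 1.0                           # token at position 0, ancillary registers cleared

def magnus_call(psi, m):
    """Magnus' magnitude call m, realised as a cyclic permutation of his register."""
    return np.roll(psi, m, axis=1)

def derek_hadamard(psi):
    """Derek places the direction register in an equal superposition."""
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    return np.einsum('pmd,ed->pme', psi, H)

def shift(psi):
    """Move the token by +m (direction 0) or -m (direction 1) modulo n."""
    out = np.zeros_like(psi)
    for m in range(n):
        out[:, m, 0] = np.roll(psi[:, m, 0], m)
        out[:, m, 1] = np.roll(psi[:, m, 1], -m)
    return out

psi = shift(derek_hadamard(magnus_call(psi, 3)))
position_probs = np.sum(np.abs(psi) ** 2, axis=(1, 2))
print(np.round(position_probs, 3))           # probability 1/2 at positions 3 and n-3 = 5
```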
in this situationwe have two possible alternatives . inthe first one magnus uses the quantum version of the optimal classical strategy and derek while derek performs any possible quantum moves . in the second scenarioboth players are able to explore all possible quantum moves .let us first consider the quantum scheme executed by magnus with the use of the classical optimal strategy . as in the classical casederek is not able to prevent magnus from visiting all the nodes , it is natural to ask if he can achieve any advantage using unitary moves .if the number of nodes is equal to , for some integer , the optimal strategy for magnus can be computed at the beginning of the game . this strategy _i.e. _ a sequence of magnitudes is given as ( see lemma 2 in ) where denotes the repetition of the moves starting from the beginning of the sequence until the move preceding the and excluding it .the first few sequences resulting from eq .( [ eqn : classical - optimal-2k ] ) are presented in table [ tab : magnus2-moves - examples ] ..optimal moves to be performed by magnus when the number of nodes is equal to .magnus is able to visit all positions in moves by using this strategy . [cols="<,<",options="header " , ] by using this strategy in the classical case , magnus is able to visit all nodes using moves and derek is not able to prevent him from doing this .moreover , the bound for the number of moves required to visit all the nodes in the classical case is tight .let us now assume that magnus is using quantum moves constructed for the classical optimal strategy , but derek can use arbitrary quantum moves . for example , if magnus optimal strategy is realized by the following sequence of unitary gates first of all , as the moves performed by magnus allow him the sampling of the space of positions using steps , it can be easily seen that derek is not able to prevent magnus from attaining all nodes using moves . on the other hand ,derek is able to prevent magnus from visiting all nodes .he can achieve this using the strategy given as follows .[ str:2k - strategy - h - id ] for steps perform the following gate where denotes the hadamard gate .the probabilities of finding a token at each position for the scheme with magnus using the optimal strategy and derek using strategy [ str:2k - strategy - h - id ] is presented in fig .[ fig : qpos-2k - strategy - h - id ] .clearly , magnus is able to attain all the nodes is in steps .however , derek can prevent him from visiting all nodes in steps .this is expressed in the following .[ tw : vs - optimal ] let us take .then , there exists a strategy for derek preventing magnus from visiting all nodes in steps .moreover , there is no strategy for derek that enables him to prevent magnus from attaining all nodes in steps . _proof._the first part follows from the construction of the strategy [ str:2k - strategy - h - id ] .in fact , any strategy of this form , not necessarily using hadamard gate , will prevent magnus from visiting all nodes .the second part follows from the construction of the magnus strategy .let s assume that there is a strategy that allows derek to prevent magnus from attaining a position i.e. this position is not attained .then a -measured walk has no concurrent hitting time .thus , there is a non - zero probability that the process will not stop in steps .this means that at each step there is a non - zero amplitude for some state with _ i.e. 
_ the state will not get measured by effect .the sequence of directions resulting from the above used by derek in a classical version of the game would give him a strategy forbidding a visit in position .it is a contradiction of the properties of magnus classical strategy . the above proposition can be easily extended as , by using a quantum strategy with only one hadamard gate , derek can prevent magnus from visiting more than two nodes .this result shows that by using quantum moves against the classical strategy , derek is not able to exclude additional positions .however , he gains in comparison to the classical case as he is able to introduce more distraction in terms of the reliability of the exploration .in the situation , the quantum strategies used by derek to distract the sense of directions can depend on the type of information which is available to him . without the possibility to perform position - controlled operations he can only use classical information about history of choices of magnus unitaries that gives him an estimate of the current state . on the other hand , if he is able to decide about his move using the current position , the resulting strategy is more robust . in the classical case the adaptive strategy allows derek to use the knowledge about magnus move to choose a step according to the position of a token in the moment of the decision . in the quantum case , when a superposition of positions is possible and no measurement is allowed , derek s decision can not depend on the position of the token .instead , derek can maintain only information about the history and the current state of the walk in order to choose the optimal move . in this sectionwe provide such quantum adaptive strategy for derek under the principles of the game introduced in section [ sec : quantum - magnus - derek ] , _ i.e. _ without using controlled operations , which can be used by derek to execute his move .using the presented strategy derek can reduce the number of visited positions to 2 ( or even one in the case of odd ) at the cost of increasing the number of attained positions .the main result of this section can be stated as follows .[ tw : pm1 ] in the case when contains in its decomposition two distinct odd prime numbers and there exists a strategy for derek that allows him to assert that : 1 . only the starting position ( and the symmetric one in the case of even ) will be visited , 2 .the total number of attained positions during the walk will be at most = n-(n / q- n / pq)$ ] , assuming that magnus uses only permutation operators .one should note that for magnus can not apply the provided strategy .moreover , the strategy could be applied recursively by excluding subsequent pairs of least odd prime divisors in order to slightly improve this result not all multiplications of need to be attained and the number of attained positions would be at most , for . in order to prove this , we provide a method for constructing a strategy for derek , which allows him to obtain the desired result .we show that the provided strategy guarantees that the amplitudes of a state , at every step corresponding to a fixed set of positions , will be equal to zero and , as a consequence , there is zero probability of measuring any such positions during the walk _i.e. _ none of them is attained .the first requirement for derek is the choice of the set of _ restricted positions _ , _i.e. 
_ positions which will be protected from being visited or attained by magnus .restricted positions have to be distributed on the cycle in a regular way .more precisely , we have the following .[ fact : structure ] a set of restricted positions which can be chosen by derek in order to construct a strategy in proposition [ tw : pm1 ] is a subset of where is a divisor of . __ to show that the set of restricted position has to be of this form it is sufficient to prove that the intervals between subsequent restricted positions have to be equal .let us assume that this is not the case and consider three subsequent restricted positions .if the distance between two of them is even , magnus , after visiting the position in the middle , would be able to visit one of the restricted positions .if both distances are odd , but different , then the sum of them is even and by repeating the reasoning track we obtain that magnus is able to visit at least one of the restricted positions . after choosing the set and for a given position on the cycle , derek can choose his move independently from the magnus choice .when we assign the positions with possible directions according to particular magnitudes it turns out that some of the positions are not distinguishable from derek s point of view .let us call two positions _ symmetric _ if their distance to the nearest restricted position in the direction indicated by the coin register is identical .this allows us to state the following .[ fact : symmetric ] considering two symmetric positions the sets of directions that can be chosen by derek in order to avoid visiting restricted positions are identical for every magnus call .one can note that the relation of being symmetric is invariant under the action of the step operator .two facts stated above allow derek to restrict the choice of moves in such manner that he is able prevent magnus from visiting the set of restricted positions . however , the most important part of derek s strategy is steering the state of the system into a superposition of symmetric states .such a state guarantees the possibility to perform a strategy in which none of the states will be visited ( only attained ) .when such a superposition is achievable from the beginning , derek achieves the result similar to the classical case ( equal number of restricted positions ) assuming that none of the states is visited . on the other hand ,when he needs to adopt to the standard situation when the starting state is a base state with one particular position , then the number of states that are attained is greater than the number of the positions visited in the classical scenario .[ fact : distance ] if a superposition of two symmetric states has been created from a base state , the beginning position must be equally distant from two closest positions from every set of the restricted positions ._ if this were not the case , the resulting states in a superposition would not be equally distant from restricted positions and , as the result , not symmetric . 
the above stated facts allow the formulation of the proof of proposition [ tw : pm1 ] ._ proof of proposition [ tw : pm1 ] ._ as a consequence of fact [ fact : distance ] , the starting point has to belong to every restricted set .using fact [ fact : structure ] , magnus can design a strategy that allows him to visit all positions from an arbitrarily fixed set ( by calling appropriate multiplications of ) even when restricted to the permutation operators .thus derek has to choose two prime divisors of and decide which will be used as the restricted positions set , according to the magnus first move .the optimal choice is to use two smallest factors . in this casethe optimal strategy for magnus would be to call and visit all positions that are multiplications of , and then switch to .for this reason derek uses the restricted set which is identical as in the case of restricting excluding all the positions numbered with common multiplications of and .from fact [ fact : symmetric ] it follows that each strategy excluding a given set of positions allows the exclusion of the same set while operating on a superposition .taking into account the above considerations , we define the strategy for derek , which fulfills the requirements of proposition [ tw : pm1 ] .[ str : adaptive - no - control ] for any classical strategy used by magnus , derek has to perform the following steps : * step :* apply the hadamard gate . if magnus chooses a magnitude equal to , for some , apply and repeat this step .+ if magnus chooses other magnitude go to * step 3*. if magnus chooses a multiplication of ( respectively ) , set the restricted positions to be ( respectively ) .apply the classical strategy . having strategy [ str : adaptive - no - control ] , while magnus applies the magnitude equal to and derek performs step 2 , no positions restricted in terms of prop .[ tw : pm1 ] . will be attained .starting from the moment that magnus chooses some other magnitude derek applies unitaries that correspond to the classical strategy .this ensures that none of the restricted positions will be attained ( otherwise magnus would be able to visit more than positions in the classical case , see proof of prop .[ tw : vs - optimal ] ) . an example of state evolution in the game executed using strategy [ str : adaptive - no - control ] is presented in fig .[ fig : nocontrol ] .the starting position is the only one that is visited .[ fig : nocontrol ] as it was shown in the previous section a strategy allowed for derek in the scenario introduced at the beginning of this paper is not sufficient to maintain the number of restricted positions characteristic for classical strategy and limiting magnus only to attain positions .however , the notion of adaptive strategies for derek can be transferred into quantum scenario . 
in order to let derek use position information in his strategy we have to modify the model introduced in section [ sec : quantum - magnus - derek ] .we do this by replacing the local operators available to derek with the position - controlled operators of the form .having such operators at his disposal , derek is able to apply a different strategy to each part of the state separately .let us consider magnus - derek game on , , positions with being the least prime divisor of .when the set of operators available for derek includes the operators of the form where is an arbitrary position and is an arbitrary local unitary operation then the maximum number of attained positions for magnus is equal to ( as in the classical case ) and the total number of visited positions is at most 2 ( respectively 1 if is an odd number ) ._ proof._in the simplest case derek leads to a superposition of two states . in this casehe needs only to ensure that the superposition will not vanish .an example of such strategy is presented in fig .[ fig : controlled - one - hadamard ] .[ str : adaptive - controled ] for any magnus strategy based on permutation operators , when and is a prime number , derek has to perform the following steps : * step :* apply the hadamard gate .if is even do nothing as long as magnus move is equal to .if is odd go to * step 3*. find a set of equally distant positions that is disjoint with already visited positions .apply classical strategy to both parts of the state using position controlled operators .the strategy is based on the classical one that is proven to be optimal .if there would be a strategy for magnus that allows him to attain additional position , there would be also an analogous classical strategy ( see proof of prop .[ tw : vs - optimal ] ) . one can also consider a modification of the above strategy with the additional ability of operating on the superposition of more than two base states . the main restriction on the strategy executed by derek is , in this case , the equality of amplitudes for every position and current magnus call . when the condition is satisfied derek is able to set an arbitrary direction in every position of the cycle using the and operators .the example is shown in the fig .[ fig : pos_controled ] .after the second step the state of the token is a superposition of at least three states . as the consequence , the probabilities are more distributed over the cycle .the presented game provides a model for studying the exploration of quantum networks .the model presented in this paper is based on a quantum walk on a cycle . despite its simplicity, the presented model can be used to describe complex networks and study the behaviour of mobile agents acting in such network .one should note that in the case of the magnus - derek game the main objective is to optimize the number of nodes visited during the game .the actual goal of visiting depends on the computation which is required to take place at the nodes . we have shown that by extending the space of possible moves , both players can significantly change the parameters of the exploration . 
in particular ,if magnus uses the sequence of moves optimal for the classical case , derek is able to prevent him from visiting all nodes .we have assumed that in the quantum scenario not only the number of attained positions is at stake but also the number of positions that are visited by magnus .we have considered a modification of a classical strategy that enables both players to preform their tasks efficiently .this analysis provides an interesting insight into the difficulty of achieving quantum - oriented goals .we have also shown that without a proper model of adaptiveness , it is not possible for derek to obtain the results analogous to the classical case ( the number of restricted positions is lower or the no - visiting condition is validated ) . performing a strategy optimized in order to reduce the number of visited slotsrequires a trade - off with the total number of attained positions . with additional control resourcesthe total number of attained positions is maintained if the number of visited positions is strictly limited .the authors acknowledge the support by the polish national science centre ( ncn ) under the grant number dec-2011/03/d / st6/00413 .jam would like to acknowledge interesting discussions with m. mc gettrick and c. rver .bernardes and e. dos santos moreira .implementation of an intrusion detection system based on mobile agents . in _ proceedings of the international symposium on software engineering for parallel and distributed systems , 2000_ , pages 158164 , 2000 .t. e. chapuran , p. toliver , n. a. peters , j. jackel , m. s. goodman , r. j. runser , s. r. mcnown , n. dallmann , r. j. hughes , k. p. mccabe , j. e. nordholt , c. g. peterson , k. t. tyagi , l. mercer , and h. dardy .optical networking for quantum key distribution and quantum communications ., 11:105001 , 2009 .
we develop a model which can be used to analyse the scenario of exploring a quantum network with a distracted sense of direction. using this model we analyse the behaviour of quantum mobile agents operating with non-adaptive and adaptive strategies which can be employed in this scenario. we introduce a notion of node visiting suitable for analysing quantum superpositions of states by distinguishing between visiting and attaining a position. we show that without a proper model of adaptiveness, it is not possible for the party representing the distraction in the sense of direction to obtain results analogous to the classical case. moreover, with additional control resources the total number of attained positions is maintained if the number of visited positions is strictly limited. + keywords: quantum mobile agents; quantum networks; two-person quantum games
the study of processor shared queues has received much attention over the past 45 or so years .the processor sharing ( ps ) discipline has the advantage over , say , first - in first - out ( fifo ) , in that shorter jobs tend to get through the system more rapidly .ps models were introduced during the 1960 s by kleinrock ( see , ) . in recent yearsthere has been renewed attention paid to such models , due to their applicability to the flow - level performance of bandwidth - sharing protocols in packet - switched communication networks ( see - ) .perhaps the simplest example of such a model is the -ps queue . herecustomers arrive according to a poisson process with rate parameter , the server works at rate , there is no queue , and if there are customers in the system each gets an equal fraction of the server .ps and fifo models differ significantly if we consider the `` sojourn time '' .this is defined as the time it takes for a given customer , called a `` tagged customer '' , to get through the system ( after having obtained the required amount of service ) .the sojourn time is a random variable that we denote by . for the simplest model ,the distribution of depends on the total service time that the customer requests and also on the number of other customers present when the tagged customer enters the system .one natural variant of the -ps model is the finite population model , which puts an upper bound on the number of customers that can be served by the processor .the model assumes that there are a total of customers , and each customer will enter service in the next time units with probability . at any time there are customers being served and the remaining customers are in the general population .hence the total arrival rate is ] .we give results for the three cases : , and . within each case of eigenvectors have different behaviors in different ranges of .for the eigenvalues are given by where and we observe that the leading term in ( [ s31_nu ] ) ( ) is independent of the eigenvalue index and corresponds to the relaxation rate in the standard queue .for the standard -ps model ( with an customer population ) , it is well known ( see , ) that the tail of the sojourn time density is ^{-5/6},\ ; t\to\infty\ ] ] where is a constant .this problem corresponds to solving an infinite system of odes , which may be obtained by letting in the matrix ( with a fixed ) .the spectrum of the resulting infinite matrix is purely continuous , which leads to the sub - exponential and algebraic factors in ( [ s31_ptsim ] ) .the result in ( [ s31_nu ] ) shows that the ( necessarily discrete ) spectrum of has , for and , all of the eigenvalues approaching , with the deviation from this limit appearing only in the third ( ) and fourth ( ) terms in the expansion in ( [ s31_nu ] ) .note that in ( [ s31_nu ] ) is independent of . comparing ( [ s31_ptsim ] ) to ( [ s2_fs_ptsim ] ) with ( [ s31_nu ] ) and , we see that the factors ^{-5/6} ] .note that ( [ s31_ptsim ] ) corresponds to letting and then in the finite population model , while ( [ s2_fs_ptsim ] ) has with a finite large .the expansion in ( [ s31_nu ] ) breaks down when becomes very large , and we note that when , the three terms , and become comparable in magnitude .the expansion in ( [ s31_nu ] ) suggests that can be allowed to be slightly large with , but certainly not as large as , which would be needed to calculate all of the eigenvalues of . 
as we stated before , here we do not attempt to get the eigenvalues of large index .now consider the eigenvectors , for and .these have the expansion \ ] ] where and are related by and \frac{d}{dz}\big[e^{-z^2/4}\mathrm{he}_j(z)\big]+\big(\frac{\alpha}{2}-\frac{2}{3}\beta\big)ze^{-z^2/4}\mathrm{he}_j(z),\ ] ] with here is the hermite polynomial , so that and .the constant is a normalization constant which depends upon , and , but not .apart from the factor in ( [ s31_phi ] ) we see that the eigenvectors are concentrated in the range and this corresponds to .the zeros of the hermite polynomials correspond to sign changes ( with ) of the eigenvectors , and these are thus spaced apart .we next give expansions of on other spatial scales , such as , and , where ( [ s31_phi ] ) ceases to be valid .these results will involve normalization constants that we denote by , but each of these will be related to , so that our results are unique up to a multiplicative constant .note that the eigenvalues of are all simple , which can be shown by a standard sturm - liouville type argument . for ( hence ) we find that where ^{2j+1}}\exp\big[(2j+1)(1-\sqrt{\rho})^{1/4}\sqrt{x}\big].\ ] ] the normalization constants and are related by ,\ ] ] by asymptotic matching between ( [ s31_phi ] ) and ( [ s31_phisim ] ) .next we consider and scale .the leading term becomes \ ] ] with \nonumber\\ & & + \frac{\rho-4\sqrt{\rho}+2}{2\rho}\log\big[\frac{\rho\xi+2 - 2\sqrt{\rho}+\sqrt{\rho^2\xi^2 + 4\rho\xi(1-\sqrt{\rho})}}{2(1-\sqrt{\rho})}\big]\nonumber\\ & & + \frac{1}{2}\log\bigg[\frac{(\rho-2\sqrt{\rho}+2)\xi+2(1-\sqrt{\rho})-(\sqrt{\rho}-2)\sqrt{\rho\xi^2 + 4\xi(1-\sqrt{\rho})}}{2(1-\sqrt{\rho})}\bigg],\nonumber\end{aligned}\ ] ] ,\ ] ] \ ] ] and can be written as the integral \ ] ] where \bigg\}\ ] ] and and are , respectively , the derivatives of ( [ s31_f ] ) and ( [ s31_f1 ] ) . note that and are independent of the eigenvalue index , while and do depend on it .the constants and are related by , by asymptotic matching between ( [ s31_phisim ] ) and ( [ s31_phileading ] ) .finally we consider the scale .the expansion in ( [ s31_phisim ] ) with ( [ s31_fg ] ) develops a singularity as and ceases to be valid for small . for obtain where the contour integral is a small loop about , and by asymptotic matching of ( [ s31_phisim ] ) as with ( [ s31_phisim3 ] ) as , we next consider with .the eigenvalues are now small , of the order , with note that now we do not see the coalescence of eigenvalues , as was the case when , and the eigenvalue index appears in the leading term in ( [ s31_nuj ] ) . the form in ( [ s31_nuj ] )also suggests that the tail behavior in ( [ s2_fs_sim ] ) is achieved when .now the zeros of the eigenvectors will be concentrated in the range where , and introducing the new spatial variable , with we find that where is again the hermite polynomial , and is again a normalization constant , possibly different from ( [ s31_phi ] ) . on the -scale with obtain and and are related by by asymptotic matching between ( [ s31_phijx ] ) and ( [ s31_phijk1 ] ) .note that and for the first two eigenvectors ( ) , ( [ s31_phijx ] ) is a special case of ( [ s31_phijk1 ] ) .the expression in ( [ s31_phijk1 ] ) holds for and for , but not for or . for the latter we must use ( [ s31_phijx ] ) and for we shall show that where is a closed loop that encircles the branch cut , where and ] . 
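The Hermite-polynomial structure of the leading-order eigenvectors is easy to inspect numerically. The sketch below is illustrative only, since the precise scaling between the index n and the variable z is given by the formulas above; it evaluates the universal profile exp(-z^2/4) He_j(z) and lists its sign-change locations, which are the j real zeros of the probabilists' Hermite polynomial He_j (implemented in numpy's hermite_e module).

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def leading_profile(z, j):
    """Leading-order eigenvector shape exp(-z^2/4) * He_j(z), where He_j is the
    probabilists' Hermite polynomial (He_0 = 1, He_1 = z, He_2 = z^2 - 1, ...)."""
    c = np.zeros(j + 1)
    c[j] = 1.0
    return np.exp(-z ** 2 / 4.0) * He.hermeval(z, c)

for j in range(4):
    c = np.zeros(j + 1); c[j] = 1.0
    zeros = He.hermeroots(c)            # the j real zeros of He_j
    print(f"j = {j}: sign changes of the profile at z =", np.round(np.real(zeros), 3))
```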
to summarize , we have shown that the eigenvalues have very different behaviors for ( cf .( [ s31_nu ] ) ) , ( cf .( [ s31_nuj ] ) ) and ( cf .( [ s31_nu2term ] ) ) . in the first case , as , the eigenvalues all coalesce about which is the relaxation rate of the standard model .however , higher order terms in the expansion show the splitting of the eigenvalues ( cf . and in ( [ s31_nu ] ) ) , which occurs at the term in the expansion of the . for the eigenvalues are small , of order , but even the leading term depends upon ( cf .( [ s31_nuj ] ) ) . when there is again a coalescence of the eigenvalues , now about where is given by ( [ s31_ab ] ) .the splitting now occurs at the first correction term , which is of order .we also note that the leading order dependence of the on occurs always in a simple linear fashion .our analysis will also indicate how to compute higher order terms in the expansions of the and , for all three cases of and all ranges of .ultimately , obtaining the leading terms for the reduces in all 3 cases of to the classic eigenvalue problem for the quantum harmonic oscillator , which can be solved in terms of hermite polynomials .we proceed to compute the eigenvalues and eigenvectors of the matrix above ( [ s2_fs_sum ] ) , treating respectively the cases , and , in subsections 4.1 - 4.3 .we always begin by considering the scaling of , with , where the oscillations or sign changes of the eigenvectors occur , and this scale also determines the asymptotic eigenvalues .then , other spatial ranges of will be treated , which correspond to the `` tails '' of the eigenvectors .we recall that so that means that the service rate exceeds the maximum total arrival rate .when and the distribution of in ( [ s2_fs_probn ] ) behaves as \sim ( 1-\rho)\rho^n ] , at the next two orders ( and ) we then obtain and note that the coefficients in ( [ s41_expandcy ] ) will depend upon and also the eigenvalue index .we also require the eigenfunctions to decay as .changing variables from to with we see that , where (z) ] , in view of ( [ s41_c2 ] ) .we determine by a solvability condition for ( [ s41_lphi1 ] ) .we multiply ( [ s41_lphi1 ] ) by and integrate from to , and use the properties of hermite polynomials ( see ) .then we conclude that for all .thus there is no term in the expansion in ( [ s31_nu ] ) . to solve for we write the right - hand side of ( [ s41_lphi1 ] ) as where and are as in ( [ s31_ab ] ) .then we can construct a particular solution to ( [ s41_lphi1 ] ) ( with ) in the form where , , and are determined from solving ( [ s41_abc ] ) leads to as in ( [ s31_phi1 ] ) .to compute , the term in ( [ s31_nu ] ) , we use the solvability condition for the equation ( [ s41_order2 ] ) for .we omit the detailed derivation .we note that ( [ s31_c4 ] ) is singular in the limit , while in ( [ s31_c1c2 ] ) vanishes in this limit . also , is quadratic in while is linear in , so that ( [ s31_nu ] ) becomes invalid both as and as the eigenvalue index becomes large .we next consider the on the spatial scales , and . 
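Before doing so, note that the solvability conditions used above rest on the orthogonality of the probabilists' Hermite polynomials under the weight exp(-z^2/2). The following quick numerical check (a sketch added for illustration, using Gauss quadrature for that weight) confirms these relations.

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial import hermite_e as He

# Nodes/weights for integrals of the form  int f(z) exp(-z^2/2) dz
z, w = He.hermegauss(60)

def he(z, j):
    c = np.zeros(j + 1)
    c[j] = 1.0
    return He.hermeval(z, c)

# int He_j(z) He_k(z) exp(-z^2/2) dz = sqrt(2*pi) * j!  if j == k, else 0
for j in range(4):
    for k in range(4):
        val = np.sum(w * he(z, j) * he(z, k))
        exact = sqrt(2.0 * pi) * factorial(j) if j == k else 0.0
        assert abs(val - exact) < 1e-8, (j, k, val, exact)
print("Hermite orthogonality relations verified numerically")
```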
for ( ) ,the expansion in ( [ s31_phi ] ) ceases to be valid .we expand the leading term in ( [ s31_phi ] ) as to obtain ,\ ] ] which suggests that we expand the eigenvector in the form ( [ s31_phisim ] ) on the -scale , noticing also that .we set in ( [ s2_fs_rec ] ) with given by ( [ s31_nu ] ) and having the form in ( [ s31_phisim ] ) .for we obtain the following odes for and : ^ 2+(\sqrt{\rho}-1)x-\frac{1}{x}+\frac{c_1}{\sqrt{\rho}}=0\ ] ] and (x)=0.\ ] ] using and in ( [ s31_c1c2 ] ) , ( [ s41_xf(x)ode ] ) and ( [ s41_xg(x)ode ] ) can be easily solved and the results are in ( [ s31_fg ] ) .we note that ( [ s41_xf(x)ode ] ) can be rewritten as ^ 2=\big(\sqrt{1-\sqrt{\rho}}\sqrt{x}-1/\sqrt{x}\big)^2. ] , where and .our asymptotic analysis predicts that will undergo a single sign change , and on the scale , ( [ s31_phijx ] ) shows that should be approximately linear in , with a zero at which corresponds to ( ) , in view of ( [ s31_x ] ) .the exact eigenvector undergoes a sign change when changes from 74 to 75 , in excellent agreement with the asymptotics .while the eigenvector is approximately linear near , figure [ f1 ] shows that it is not globally linear , and achieves a minimum value at .but our analysis shows that on the scale , we must use ( [ s31_phijk1 ] ) , and when and , ( [ s31_phijk1 ] ) achieves a minimum at , which corresponds to .this demonstrates the necessity of treating both the and scales . in figure[ f2 ] we retain and , but now plot the second eigenvector .we see that typically , and its graph is approximately tangent to the -axis near . in figure [ f3 ]we `` blow up ''the region near , plotting for ] figure [ f3 ] shows roughly a parabolic profile , as predicted by ( [ s31_phijx ] ) , but the larger picture in figure [ f2 ] again demonstrates that the -scale result in ( [ s31_phijk1 ] ) must be used when is further away from , as ( [ s31_phijk1 ] ) will , for example , predict the minimum value seen in figure [ f2 ] .next we consider , maintaining . nowthe asymptotic result in the main range is in ( [ s31_u ] ) and ( [ s31_phieig ] ) , and when we have . in figure [ f5 ]we plot ( ) in the range ] ; there are two sign changes , between and 22 , and and 35 .now and the approximation in ( [ s31_phieig ] ) has zeros where , which corresponds to , leading to the numerical values of and .next we consider . nowthe main range is the -scale result in ( [ s31_phi])-([s31_ab ] ) .for we now plot the `` symmetrized '' eigenvector(s ) . in figures [ f8 ] and [ f9 ] we always have and .figure [ f8 ] has for ] to the corresponding approximation from ( [ s2_fs_ptsim ] ) , and we note that both must approach as .our asymptotic analysis predicts that for and , the term in ( [ s2_fs_sum ] ) should dominate for times , while if or , the term dominates for times . in table[ table3 ] we take and let and . for , the largest eigenvalue is , which we list in the last row of the table , and the second largest eigenvalue is . for , we have and . in table[ table4 ] we take . now and when , and and when . 
in table[ table5 ] we take and , when , and , when .tables [ table3]-[table5 ] show that the approximation resulting from ( [ s2_fs_ptsim ] ) is quite accurate , though it may take fairly large times before the exact and approximate results ultimately reach the limit .the relative errors improve as we go from to to , which is again consistent with our asymptotic analysis , as when there is the most coalescence ( for ) of the eigenvalues , making it hard to distinguish from the others .+ 5 & 0.5398 & 0.5415 & 0.32% & 0.3383 & 0.3387 & 0.14% + 10 & 0.3362 & 0.3363 & 0.03% & 0.2024 & 0.2024 & 0.01% + 15 & 0.2679 & 0.2680 & 2.4e-05 & 0.1570 & 0.1570 & 1.5e-05 + 20 & 0.2338 & 0.2338 & 2.2e-06 & 0.1343 & 0.1343 & 1.6e-06 + 30 & 0.1996 & 0.1996 & 1.9e-08 & 0.1207 & 0.1207 & 1.7e-07 + 50 & 0.1722 & 0.1722 & 1.6e-12 & 0.1002 & 0.1002 & 2.1e-10 + 100 & 0.1517 & 0.1517 & .0e-12 & 0.0934 & 0.0934 & 2.6e-12 + & 0.1312 & 0.1312 & & 0.0662 & 0.0662 & + 2 l. kleinrock , _ analysis of a time - shared processor _, naval research logistics quarterly 11 ( 1964 ) , 59 - 73 .+ l. kleinrock , _ time - shared systems : a theoretical treatment _ , j. acm 14 ( 1967 ) , 242 - 261 .+ d. p. heyman , t. v. lakshman , and a. l. neidhardt , _ a new method for analysing feedback - based protocols with applications to engineering web traffic over the internet _acm sigmetrics ( 1997 ) , 24 - 38 .+ l. massouli and j. w. roberts , _ bandwidth sharing : objectives and algorithms _ , proc .ieee infocom , new york , ny , usa ( 1999 ) , 1395 - 1403 .+ m. nabe , m. murata , and h. miyahara , _ analysis and modeling of world wide web traffic for capacity dimensioning of internet access lines _ ,evaluation , 34 ( 1998 ) , 249 - 271 .+ d. mitra and j. a. morrison , _ asymptotic expansions of moments of the waiting time in closed and open processor - sharing systems with multiple job classes _ , adv .in appl . probab .15 ( 1983 ) , 813 - 839 .+ j. a. morrison and d. mitra , _ heavy - usage asymptotic expansions for the waiting time in closed processor - sharing systems with multiple classes _ , adv . in appl .17 ( 1985 ) , 163 - 185 .+ j. a. morrison , _ asymptotic analysis of the waiting - time distribution for a large closed processor - sharing system _ , siam j. appl .46 ( 1986 ) , 140 - 170 .+ j. a. morrison , _ moments of the conditioned waiting time in a large closed processor - sharing system _ , stochastic models 2 ( 1986 ) , 293 - 321 .+ j. a. morrison , _ conditioned response - time distribution for a large closed processor - sharing system in very heavy usage _ , siam j. appl .47 ( 1987 ) , 1117 - 1129 .+ j. a. morrison , _ conditioned response - time distribution for a large closed processor - sharing system with multiple classes in very heavy usage _ , siam j. appl .48 ( 1988 ) , 1493 - 1509 .+ k. c. sevcik and i. mitrani , _ the distribution of queueing network states at input and output instants _ , j. acm 28 ( 1981 ) , 358 - 371 .+ f. pollaczek , _ la loi dattente des appels tlphoniques _ , c. r. acad .paris 222 ( 1946 ) , 353 - 355 .+ j. w. cohen , _ on processor sharing and random service ( letter to the editor ) _ , j. appl .prob . 21 ( 1984 ) , 937. + w. magnus , f. oberhettinger , and r. p. soni , formulas and theorems for the special functions of mathematical physics , springer - verlag , new york , 1966 .
we consider sojourn or response times in processor-shared queues that have a finite population of potential users. computing the response time of a tagged customer involves solving a finite system of linear odes. writing the system in matrix form, we study the eigenvectors and eigenvalues in the limit as the size of the matrix becomes large. this corresponds to finite population models where the total population is large. using asymptotic methods we reduce the eigenvalue problem to that of a standard differential equation, such as the hermite equation. the dominant eigenvalue leads to the tail of a customer's sojourn time distribution. + * keywords: * finite population, processor sharing, eigenvalue, eigenvector, asymptotics.
proteins are essential building blocks of living organisms .they function as catalyst , structural elements , chemical signals , receptors , etc . the molecular mechanism of protein functions are closely related to their structures .the study of structure - function relationship is the holy grail of biophysics and has attracted enormous effort in the past few decades .the understanding of such a relationship enables us to predict protein functions from structure or amino acid sequence or both , which remains major challenge in molecular biology .intensive experimental investigation has been carried out to explore the interactions among proteins or proteins with other biomolecules , e.g. , dnas and/or rnas . in particular , the understanding of protein - drug interactions is of premier importance to human health .a wide variety of theoretical and computational approaches has been proposed to understand the protein structure - function relationship .one class of approaches is biophysical . from the point of view of biophysics , protein structure , function , dynamics and transport are , in general , dictated by protein interactions .quantum mechanics ( qm ) is based on the fundamental principle , and offers the most accurate description of interactions among electrons , photons , atoms and even molecules .although qm methods have unveiled many underlying mechanisms of reaction kinetics and enzymatic activities , they typically are computationally too expensive to do for large biomolecules . based on classic physical laws , molecular mechanics ( mm ) can , in combination with fitted parameters , simulate the physical movement of atoms or molecules for relatively large biomolecular systems like proteins quite precisely .however , it can be computationally intractable for macromoelcular systems involving realistic biological time scales . many time - independent methods like normal mode analysis ( nma ) , elastic network model ( enm ) , graph theory and flexibility - rigidity index ( fri ) are proposed to capture features of large biomolecules .variational multiscale methods are another class of approaches that combine atomistic description with continuum approximations .there are well developed servers for predicting protein functions based on three - dimensional ( 3d ) structures or models from the homology modeling ( here homology is in biological sense ) of amino acid sequence if 3d structure is not yet available .another class of important approaches , bioinformatical methods , plays a unique role for the understanding of the structure - function relationship .these data - driven predictions are based on similarity analysis .the essential idea is that proteins with similar sequences or structures may share similar functions .also , based on sequential or structural similarity , proteins can be classified into many different groups .once the sequence or structure of a novel protein is identified , its function can be predicted by assigning it to the group of proteins that share similarities to a good extent .however , the degree of similarity depends on the criteria used to measure similarity or difference .many measurements are used to describe similarity between two protein samples .typical approaches use either sequence or physical information , or both . 
among them , sequence alignment can describe how closely the two proteins are related .protein blast , clustalw2 , and other software packages can preform global or local sequence alignments .based on sequence alignments , various scoring methods can provide the description of protein similarity .additionally , sequence features such as sequence length and occurrence percentage of a specific amino acid can also be employed to compare proteins .many sequence based features can be derived from the position - specific scoring matrix ( pssm ) . moreover, structural information provides an efficient description of protein similarity as well .structure alignment methods include rigid , flexible and other methods .the combination of different structure alignment methods and different measurements such as root - mean - square deviation ( rmsd ) and z - score gives rise to various ways to quantify the similarity among proteins .as per structure information , different physical properties such as surface area , volume , free energy , flexible - rigidity index ( fri ) , curvature , electrostatics etc . can be calculated .a continuum model , poisson boltzmann ( pb ) equation delivers quite accurate estimation for electrostatics of biomolecules .there are many efficient and accurate pb solvers including pbeq , mibpb , etc .together with physical properties , one can also extract geometrical properties from structure information .these properties include coordinates of atoms , connections between atoms such as covalent bonds and hydrogen bonds , molecular surfaces and curvatures .these various approaches reveal information of different scales from local atom arrangement to global architecture .physical and geometrical properties described above add different perspective to analyze protein similarities .due to the advance in bioscience and biotechnology , biomolecular structure date sets are growing at an unprecedented rate .for example , the http://www.rcsb.org/pdb/home/home.do[protein data bank ( pdb ) ] has accumulated more than a hundred thousand biomolecular structures .the prediction of the protein structure - function relationship from such huge amount of data can be extremely challenging .additionally , an eve - growing number of physical or sequence features are evaluated for each data set or amino - acid residue , which adds to the complexity of the data - driven prediction . to automatically analyze excessively large data sets in molecular biology , many machine learning methodshave been developed .these methods are mainly utilized for the classification , regression , comparison and clustering of biomolecular data .clustering is an unsupervised learning method which divides a set of inputs into groups without knowing the groups beforehand .this method can unveil hidden patterns in the data set .classification is a supervised learning method , in which , a classifier is trained on a given training set and used to do prediction for new observations .it assigns observation to one of several pre - determined categories based on knowledge from training data set in which the label of observations is known .popular methods for classification include support vector machine ( svm ) , artificial neural network ( ann ) , deep learning , etc . in classification ,each observation in training the set has a feature vector that describes the observation from various perspectives and a label that indicates to which group the observation belongs . 
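As a toy illustration of such feature-vector/label pairs, the snippet below builds a very simple sequence-derived feature vector (length plus amino-acid occurrence percentages, two of the sequence features mentioned above) for each protein and pairs it with a class label; physical, geometric or PSSM-derived features would be appended in the same way. The sequences and labels here are placeholders.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"          # the 20 standard residues

def sequence_features(seq):
    """Length plus occurrence percentage of each amino acid: a 21-dim feature vector."""
    seq = seq.upper()
    counts = Counter(seq)
    n = max(len(seq), 1)
    return [float(len(seq))] + [100.0 * counts.get(aa, 0) / n for aa in AMINO_ACIDS]

# Placeholder training set: (sequence, class label) pairs
samples = [("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", 0),
           ("GSHMSLFDFFKNKGSAAATPAAPAAPAPAAG", 1)]

X = [sequence_features(seq) for seq, _ in samples]   # feature vectors
y = [label for _, label in samples]                  # labels
print(len(X[0]), "features per observation;", len(y), "observations")
```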
a model trained on the training set indicates to which group a new observation belongs with feature vector and unknown label . to improve the speed of classification and reduce effect from irrelevant features, many feature selection procedures have been proposed .machine learning approach are successfully used for protein hot spot prediction .the data - driven analysis of the protein structure - function relationship is compromised by the fact that same protein may have different conformations which possess different properties or delivers different functions . for instance , hemoglobins have taut form with low affinity to oxygen and relaxed form with high affinity to oxygen ; and ion channels often have open and close states .different conformations of a given protein may only have minor differences in their local geometric configurations .these conformations share the same sequence and may have very similar physical properties .however , their minor structural differences might lead to dramatically different functions .therefore , apart from the conventional physical and sequence information , geometric and topological information can also play an important role in understanding the protein structure - function relationship .indeed , geometric information has been extensively used in the protein exploration . in contrast , topological information has been hardly employed in studying the protein structure - function relationship . in general , geometric approaches are frequently inundated with too much geometric detail and are often prohibitively expensive for most realistic biomolecular systems , while traditional topological methods often incur in too much reduction of the original geometric and physical information .persistent homology , a new branch of applied topology , is able to bridge traditional geometry and topology .it creates a variety of topologies of a given object by varying a filtration parameter , such as a radius or a level set function . in the past decade, persistent homology has been developed as a new multiscale representation of topological features .the 0-th dimensional version was originally introduced for computer vision applications under the name size function " and the idea was also studied by robins .the persistent homology theory was formulated , together with an algorithm given , by edelsbrunner et al . , and a more general theory was developed by zomorodian and carlsson . there has since been significant theoretical development , as well as various computational algorithms .often , persistent homology can be visualized through barcodes , in which various horizontal line segments or bars are the homology generators which survive over filtration scales .persistence diagrams are another equivalent representation .computational homology and persistent homology have been applied to a variety of domains , including image analysis , chaotic dynamics verification , sensor network , complex network , data analysis , shape recognition and computational biology .compared with traditional computational topology and/or computational homology , persistent homology inherently has an additional dimension , the filtration parameter , which can be utilized to embed some crucial geometric or quantitative information into the topological invariants . the importance of retaining geometric information in topological analysis has been recognized , and topology has been advocated as a new approach for tackling big data sets . 
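To indicate how such persistence barcodes are computed in practice, here is a minimal sketch using the open-source ripser package (an implementation choice made for illustration; it is not necessarily the software used in the works cited above). For proteins, the input points would be atomic coordinates read from a PDB file; a noisy circle is used here so that a single long-lived Betti-1 bar is expected.

```python
import numpy as np
from ripser import ripser        # assumes the 'ripser' package is installed

# Toy point cloud: a noisy circle
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 60)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(60, 2))

diagrams = ripser(points, maxdim=2)["dgms"]   # Vietoris-Rips persistence, dims 0..2
for dim, dgm in enumerate(diagrams):
    lengths = dgm[:, 1] - dgm[:, 0]           # bar length = death - birth
    finite = lengths[np.isfinite(lengths)]
    longest = finite.max() if finite.size else float("nan")
    print(f"Betti-{dim}: {len(dgm)} bars, longest finite bar = {longest:.3f}")
```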
recently , we have introduced persistent homology for mathematical modeling and prediction of nano particles , proteins and other biomolecules .we have proposed molecular topological fingerprint ( mtf ) to reveal topology - function relationships in protein folding and protein flexibility .we have employed persistent homology to predict the curvature energies of fullerene isomers , and analyze the stability of protein folding .more recently , we have introduced resolution based persistent topology .most recently , we have developed new multidimensional persistence , a topic that has attracted much attention in the past few years , to better bridge geometry and traditional topology and achieve better characterization of biomolecular data .we have also introduced the use of topological fingerprint for resolving ill - posed inverse problems in cryo - em structure determination .the objective of the present work is to explore the utility of mtfs for protein classification and analysis .we construct feature vectors based on mtfs to describe unique topological properties of protein in different scales , states and/or conformations .these topological feature vectors are further used in conjugation with the svm algorithm for the classification of proteins .we validate the proposed mtf - svm strategy by distinguishing different protein conformations , proteins with different local secondary structures , and proteins from different superfamilies or families .the performance of proposed topological method is demonstrated by a number of realistic applications , including protein binding analysis , ion channel study , etc .the rest of the paper is organized as following .section [ sec : methods ] is devoted to the mathematical foundations for persistent homology and machine learning methods .we present a brief description of simplex and simplicial complex followed by basic concept of homology , filtration , and persistence in section [ persistenthomology ] .three different methods to get simplicial complex , vietoris - rips complex , alpha complex , and ech complex are discussed .we use a sequence of graphs of channel proteins to illustrate the growth of a vietoris - rips complex and corresponding barcode representation of topological persistence . in section [ svm+roc ] , fundamental concept of support vector machineis discussed .an introduction of transformation of the original optimization problem is given .a measurement for the performance of classification model known as receiver operating characteristic is described .section [ feature+preprocessing ] is devoted to the description of features used in the classification and pre - processing of topological feature vectors . in section [ sec : numerical ], four test cases are shown .case 1 and case 2 examine the performance of the topological fingerprint based classification methods in distinguishing different conformations of same proteins . in case 1 , we use the structure of the m2 channel of influenza a virus with and without an inhibitor . in case 2 , we employ the structure of hemoglobin in taut form and relaxed form .case 3 validates the proposed topological methods in capturing the difference between local secondary structures . in this study, proteins are divided into three groups , all alpha protein , all beta protein , and alpha+beta protein . 
in case 4 ,the ability of the present method for distinguishing different protein families is examined .this paper ends with some concluding remarks .this section presents a brief review of persistent homology theory and illustrates its use in proteins . a brief description of machine learning methods is also given .the topological feature selection and construction from biomolecular data are described in details .* simplex * a -simplex denoted by is a convex hull of vertices which is represented by a set of points where is a set of affinely independent points .geometrically , a - is a line segment , a -simplex is a triangle , a -simplex is a tetrahedron , and a -simplex is a -cell ( a four dimensional object bounded by five tetrahedrons ) .a of the -simplex is defined as a convex hull formed from a subset consisting vertices .* simplicial complex * a simplicial complex is a finite collection of simplices satisfying two conditions .first , faces of a simplex in are also in ; second , intersection of any two simplices in is a face of both the simplices .the highest dimension of simplices in determines dimension of .* homology * for a simplicial complex , a -chain is a formal sum of the form ] is oriented -simplex from . for simplicity, we choose .all these -chains on form an abelian group , called chain group and denoted as .a boundary operator over a -simplex is defined as , ,\ ] ] where \partial_n\partial_{n-1}\partial_1\partial_0 ] .points above the diagonal line are considered as good predictors and those below the line are considered as poor predictors . if a point is below the diagonal line , the predictor can be inverted to be a good predictor . for points that are close to the diagonal line, they are considered to act similarly to random guess which implies a relatively useless predictor .roc curve is obtained by plotting fpr and tpr as continuous functions of threshold value .the area between roc curve and axis represents probability that the classifier assigns higher score to a randomly chosen positive sample than to a randomly chosen negative sample if positive is set to have higher score than negative .the area under the curve ( auc ) of roc is a measure of classifier quality .intuitively , a higher auc implies a better classifier . in this work ,algebraic topology is employed to discriminate proteins . specifically , we compute mtfs through the filtration process of protein structural data .mtfs bear the persistence of topological invariants during the filtration and are ideally suited for protein classification .to implement our topological approach in the svm algorithm , we construct protein feature vectors by using mtfs .we select distinguishing protein features from mtfs .these features can be both long lasting and short lasting betti 0 , betti 1 , and betti 2 intervals . table [ tab : features ] lists topological features used for classification. detailed explanation of these features is discussed .the length and location value of bars are in the unit of angstrom ( ) for protein data .
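A minimal sketch of the classification step described here, using scikit-learn with synthetic placeholder data: each row of X stands for a topological feature vector of the kind listed in the table, y is the class label, and the SVM is evaluated through a cross-validated ROC/AUC as discussed above.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score, roc_curve

# Placeholder data: in the intended use, each row of X holds topological features
# (lengths, counts and locations of Betti-0/1/2 bars) for one protein, y its class.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_predict(clf, X, y, cv=5, method="decision_function")

fpr, tpr, _ = roc_curve(y, scores)            # ROC curve from decision scores
print(f"cross-validated AUC = {roc_auc_score(y, scores):.3f}")
```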
protein function and dynamics are closely related to its sequence and structure . however prediction of protein function and dynamics from its sequence and structure is still a fundamental challenge in molecular biology . protein classification , which is typically done through measuring the similarity between proteins based on protein sequence or physical information , serves as a crucial step toward the understanding of protein function and dynamics . persistent homology is a new branch of algebraic topology that has found its success in the topological data analysis in a variety of disciplines , including molecular biology . the present work explores the potential of using persistent homology as an independent tool for protein classification . to this end , we propose a molecular topological fingerprint based support vector machine ( mtf - svm ) classifier . specifically , we construct machine learning feature vectors solely from protein topological fingerprints , which are topological invariants generated during the filtration process . to validate the present mtf - svm approach , we consider four types of problems . first , we study protein - drug binding by using the m2 channel protein of influenza a virus . we achieve 96% accuracy in discriminating drug bound and unbound m2 channels . additionally , we examine the use of mtf - svm for the classification of hemoglobin molecules in their relaxed and taut forms and obtain about 80% accuracy . the identification of all alpha , all beta , and alpha - beta protein domains is carried out in our next study using 900 proteins . we have found a 85% success in this identification . finally , we apply the present technique to 55 classification tasks of protein superfamilies over 1357 samples . an average accuracy of 82% is attained . the present study establishes computational topology as an independent and effective alternative for protein classification . key words : persistent homology , machine learning , protein classification , topological fingerprint . * running title : topological protein classification *
when considering regression with a large number of predictors , variable selection becomes important .numerous methods have been proposed in the literature for the purpose of variable selection , ranging from the classical information criteria such as aic and bic to regularization based modern techniques such as the nonnegative garrote [ breiman ( ) ] , the lasso [ tibshirani ( ) ] and the scad [ fan and li ( ) ] , among many others .although these methods enjoy excellent performance in many applications , they do not take the hierarchical or structural relationship among predictors into account and therefore can lead to models that are hard to interpret .consider , for example , multiple linear regression with both main effects and two - way interactions where a dependent variable and explanatory variables are related through where .commonly used general purpose variable selection techniques , including those mentioned above , do not distinguish interactions from main effects and can select a model with an interaction but neither of its main effects , that is , and .it is therefore useful to invoke the so - called effect heredity principle [ hamada and wu ( ) ] in this situation .there are two popular versions of the heredity principle [ chipman ( ) ] . under _strong heredity _, for a two - factor interaction effect to be active both its parent effects , and , should be active ; whereas under _ weak heredity _ only one of its parent effects needs to be active .likewise , one may also require that can be active only if is also active .the strong heredity principle is closely related to the notion of marginality [ nelder ( ) , mccullagh and nelder ( ) , nelder ( ) ] which ensures that the response surface is invariant under scaling and translation of the explanatory variables in the model .interested readers are also referred to mccullagh ( ) for a rigorous discussion about what criteria a sensible statistical model should obey .li , sudarsanam and frey ( ) recently conducted a meta - analysis of 113 data sets from published factorial experiments and concluded that an overwhelming majority of these real studies conform with the heredity principles .this clearly shows the importance of using these principles in practice .these two heredity concepts can be extended to describe more general hierarchical structure among predictors . with slight abuse of notation ,write a general multiple linear regression as where and . throughout this paper, we center each variable so that the observed mean is zero and , therefore , the regression equation has no intercept . in its most general form , the hierarchical relationship among predictors can be represented by sets , where contains the parent effects of the predictor .for example , the dependence set of is in the quadratic model ( [ 2way ] ) . in order that the variable can be considered for inclusion , all elements of be included under the strong heredity principle , and at least one element of should be included under the weak heredity principle .other types of heredity principles , such as the partial heredity principle [ nelder ( ) ] , can also be incorporated in this framework .the readers are referred to yuan , joseph and lin ( ) for further details . as pointed out by turlach ( ), it could be very challenging to conform with the hierarchical structure in the popular variable selection methods . 
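For reference, the dependence-set structure described above can be written down mechanically for the full quadratic model; a small sketch (with an arbitrary term ordering) is given below. Each quadratic term has its main effect as parent, and each two-way interaction has both of its main effects as parents.

```python
from itertools import combinations

def quadratic_terms_and_parents(p):
    """Predictors of a full quadratic model in p variables and the dependence
    (parent-effect) set of each term, as used by the heredity principles."""
    terms = [(i,) for i in range(p)]                    # main effects
    parents = {(i,): set() for i in range(p)}
    for i in range(p):
        terms.append((i, i))                            # quadratic term x_i^2
        parents[(i, i)] = {(i,)}
    for i, j in combinations(range(p), 2):
        terms.append((i, j))                            # interaction x_i * x_j
        parents[(i, j)] = {(i,), (j,)}
    return terms, parents

terms, parents = quadratic_terms_and_parents(3)
print(len(terms), "predictors")                         # 9 for p = 3
print("parents of x1*x2:", parents[(0, 1)])
```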
in this paperwe specifically address this issue and consider how to effectively impose such hierarchical structures among the predictors in variable selection and coefficient estimation , which we refer to as _ structured variable selection and estimation_. despite its great practical importance , structured variable selection and estimation has received only scant attention in the literature .earlier interests in structured variable selection come from the analysis of designed experiments where heredity principles have proven to be powerful tools in resolving complex aliasing patterns .hamada and wu ( ) introduced a modified stepwise variable selection procedure that can enforce effect heredity principles .later , chipman ( ) and chipman , hamada and wu ( ) discussed how the effect heredity can be accommodated in the stochastic search variable selection method developed by george and mcculloch ( ) .see also joseph and delaney ( ) for another bayesian approach . despite its elegance, the bayesian approach can be computationally demanding for large scale problems .recently , yuan , joseph and lin ( ) proposed generalized lars algorithms [ osborne , presnell and turlach ( ) , efron et al .( ) ] to incorporate heredity principles into model selection .efron et al .( ) and turlach ( ) also considered alternative strategies to enforce the strong heredity principle in the lars algorithm .compared with earlier proposals , the generalized lars procedures enjoy tremendous computational advantages , which make them particularly suitable for problems of moderate or large dimensions .however , yuan and lin ( ) recently showed that lars may not be consistent in variable selection .moreover , the generalized lars approach is not flexible enough to incorporate many of the hierarchical structures among predictors .more recently , zhao , rocha and yu ( ) and choi , li and zhu ( ) proposed penalization methods to enforce the strong heredity principle in fitting a linear regression model . however , it is not clear how to generalize them to handle more general heredity structures and their theoretical properties remain unknown . 
in this paperwe propose a new framework for structured variable selection and estimation that complements and improves over the existing approaches .we introduce a family of shrinkage estimator that is similar in spirit to the nonnegative garrote , which yuan and lin ( ) recently showed to enjoy great computational advantages , nice asymptotic properties and excellent finite sample performance .we propose to incorporate structural relationships among predictors as linear inequality constraints on the corresponding shrinkage factors .the resulting estimates can be obtained as the solution of a quadratic program and very efficiently solved using the standard quadratic programming techniques .we show that , unlike lars , it is consistent in both variable selection and estimation provided that the true model has such structures .moreover , the linear inequality constraints can be easily modified to adapt to any situation arising in practical problems and therefore is much more flexible than the existing approaches .we also extend the original nonnegative garrote as well as the proposed structured variable selection and estimation methods to deal with the generalized linear models .the proposed approach is much more flexible than the generalized lars approach in yuan , joseph and lin ( ) .for example , suppose a group of variables is expected to follow strong heredity and another group weak heredity , then in the proposed approach we only need to use the corresponding constraints for strong and weak heredity in solving the quadratic program , whereas the approach of yuan , joseph and lin ( ) is algorithmic and therefore requires a considerable amount of expertise with the generalized lars code to implement these special needs .however , there is a price to be paid for this added flexibility : it is not as fast as the generalized lars .the rest of the paper is organized as follows .we introduce the methodology and study its asymptotic properties in the next section . in section [ sec3 ]we extend the methodology to generalized linear models .section [ sec4 ] discusses the computational issues involved in the estimation .simulations and real data examples are presented in sections [ sec5 ] and [ sec6 ] to illustrate the proposed methods .we conclude with some discussions in section [ sec7 ] .the original nonnegative garrote estimator introduced by breiman ( ) is a scaled version of the least square estimate .given independent copies of where is a -dimensional covariate and is a response variable , the shrinkage factor is given as the minimizer to where , with slight abuse of notation , , , and is a dimensional vector whose element is and is the least square estimate based on ( [ 1.1 ] ) . here is a tuning parameter .the nonnegative garrote estimate of the regression coefficient is subsequently defined as , . with an appropriately chosen tuning parameter ,some of the scaling factors could be estimated by exact zero and , therefore , the corresponding predictors are eliminated from the selected model .following yuan , joseph and lin ( ) , let contain the parent effects of the predictor . under the strong heredity principle , we need to impose the constraint that for any if . a naive approach to incorporating effectheredity is therefore to minimize ( [ shrink ] ) under this additional constraint .however , in doing so , we lose the convexity of the optimization problem and generally will end up with problems such as multiple local optima and potentially np hardness . 
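For comparison, the unconstrained (heredity-free) nonnegative garrote itself is a convex quadratic program and can be solved directly; a minimal sketch using cvxpy follows (an implementation choice made for illustration, not the authors' code). The reformulated heredity constraints introduced next enter this program simply as additional linear inequalities on the shrinkage factors.

```python
import numpy as np
import cvxpy as cp        # assumes cvxpy is available

def nonnegative_garrote(X, y, M):
    """Breiman's nonnegative garrote: least-squares fit rescaled by shrinkage
    factors theta >= 0 with sum(theta) <= M.  Returns (theta_hat, beta_hat)."""
    beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
    Z = X * beta_ls                              # column j of Z is x_j * beta_ls[j]
    theta = cp.Variable(X.shape[1], nonneg=True)
    objective = cp.Minimize(0.5 * cp.sum_squares(y - Z @ theta))
    cp.Problem(objective, [cp.sum(theta) <= M]).solve()
    theta_hat = np.asarray(theta.value).ravel()
    return theta_hat, theta_hat * beta_ls

# Toy usage with centered data
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5)); X -= X.mean(axis=0)
y = X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.5, size=50); y -= y.mean()
theta_hat, beta_hat = nonnegative_garrote(X, y, M=2.0)
print(np.round(theta_hat, 3))
```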
recall that the nonnegative garrote estimate of is . since with probability one , will be selected if and only if scaling factor , in which case behaves more or less like an indicator of the inclusion of in the selected model .therefore , the strong heredity principles can be enforced by requiring note that if , ( [ strong ] ) will force the scaling factor for all its parents to be positive and consequently active .since these constraints are linear in terms of the scaling factor , minimizing ( [ shrink ] ) under ( [ strong ] ) remains a quadratic program .figure [ fig : strong ] illustrates the feasible region of the nonnegative garrote with such constraints in contrast with the original nonnegative garrote where no heredity rules are enforced .we consider two effects and their interaction with the corresponding shrinking factors denoted by , and , respectively . in both situationsthe feasible region is a convex polyhedron in the three dimensional space .( left ) and the relaxed constraint ( right ) . ] similarly , when considering weak heredity principles , we can require that however , the feasible region under such constraints is no longer convex as demonstrated in the left panel of figure [ fig : weak ] .subsequently , minimizing ( [ shrink ] ) subject to ( [ weaknon ] ) is not feasible . to overcome this problem , we suggest using the convex envelop of these constraints for the weak heredity principles : again, these constraints are linear in terms of the scaling factor and minimizing ( [ shrink ] ) under ( [ weak ] ) remains a quadratic program .note that implies that and , therefore , ( [ weak ] ) will require at least one of its parents to be included in the model .in other words , constraint ( [ weak ] ) can be employed in place of ( [ weaknon ] ) to enforce the weak heredity principle .the small difference between the feasible regions of ( [ weaknon ] ) and ( [ weak ] ) also suggests that the selected model may only differ slightly between the two constraints .we opt for ( [ weak ] ) because of the great computational advantage it brings about .to gain further insight to the proposed structured variable selection and estimation methods , we study their asymptotic properties .we show here that the proposed methods estimate the zero coefficients by zero with probability tending to one , and at the same time give root- consistent estimate to the nonzero coefficients provided that the true data generating mechanism satisfies such heredity principles .denote by the indices of the predictors in the true model , that is , .write as the estimate obtained from the proposed structured variable selection procedure. under strong heredity , the shrinkage factors can be equivalently written in the lagrange form [ boyd and vandenberghe ( ) ] subject to for some lagrange parameter . for the weak heredity principle, we replace the constraints with .[ thm1 ] assume that and is positive definite . 
if the true model satisfies the strong / weak heredity principles , and in a fashion such that as goes to , then the structured estimate with the corresponding heredity principle satisfies for any , and if .all the proofs can be accessed as the supplement materials .note that when , there is no penalty and the proposed estimates reduce to the least squares estimate which is consistent in estimation .the theorems suggest that if instead the tuning parameter escapes to infinity at a rate slower than , the resulting estimates not only achieve root- consistency in terms of estimation but also are consistent in variable selection , whereas the ordinary least squares estimator does not possess such model selection ability .the nonnegative garrote was originally introduced for variable selection in multiple linear regression .but the idea can be extended to more general regression settings where depends on through a scalar parameter where is a -dimensional unknown coefficient vector .it is worth pointing out that such extensions have not been proposed in literature so far . a common approach to estimating is by means of the maximum likelihood .let be a negative log conditional likelihood function of .the maximum likelihood estimate is given as the minimizer of for example , in logistic regression , more generally , can be replaced with any loss functions such that its expectation with respect to the joint distribution of is minimized at . to perform variable selection , we propose the following extension of the original nonnegative garrote .we use the maximum likelihood estimate as a preliminary estimate of .similar to the original nonnegative garrote , define .next we estimate the shrinkage factors by subject to and for any . in the case of normal linear regression , becomes the least squares and it is not hard to see that the solution of ( [ gnng ] ) always satisfies because all variables are centered .therefore , without loss of generality , we could assume that there is no intercept in the normal linear regression .the same , however , is not true for more general and , therefore , is included in ( [ gnng ] ) .our final estimate of is then given as for . to impose the strong or weak heredity principle, we add additional constraints or , respectively .theorem [ thm1 ] can also be extended to more general regression settings .similar to before , under strong heredity , subject to for some . under weak heredity principles, we use the constraints instead of .we shall assume that the following regularity conditions hold : is a strictly convex function of the second argument ; the maximum likelihood estimate is root- consistent ; the observed information matrix converges to a positive definite matrix , that is , where is a positive definite matrix .[ thm2 ] under regularity conditions , if in a fashion such that as goes to , then for any , and if provided that the true model satisfies the same heredity principles .similar to the original nonnegative garrote , the proposed structured variable selection and estimation procedure proceeds in two steps .first the solution path indexed by the tuning parameter is constructed .the second step , oftentimes referred to as tuning , selects the final estimate on the solution path .we begin with linear regression . 
for both types of heredity principles ,the shrinkage factors for a given can be obtained from solving a quadratic program of the following form : where is a matrix determined by the type of heredity principles , is a vector of zeros , and means `` greater than or equal to '' in an element - wise manner .equation ( [ qpsvs ] ) can be solved efficiently using standard quadratic programming techniques , and the solution path of the proposed structured variable selection and estimation procedure can be approximated by solving ( [ qpsvs ] ) for a fine grid of s .recently , yuan and lin ( ) showed that the solution path of the original nonnegative garrote is piecewise linear , and used this to construct an efficient algorithm for building its whole solution path .the original nonnegative garrote corresponds to the situation where the matrix of ( [ qpsvs ] ) is a identity matrix .similar results can be expected for more general scenarios including the proposed procedures , but the algorithm will become considerably more complicated and running quadratic programming for a grid of tuning parameter tends to be a computationally more efficient alternative .. the objective function of ( [ qpsvs ] ) can be expressed as because does not depend on , ( [ qpsvs ] ) is equivalent to \\[-8pt ] & & \qquad \mbox{subject to } \sum _ { j=1}^p \theta_j\le m \mbox { and } h\theta\succeq{\mathbf0},\nonumber\end{aligned}\ ] ] which depends on the sample size only through and the gram matrix .both quantities are already computed in evaluating the least squares .therefore , when compared with the ordinary least squares estimator , the additional computational cost of the proposed estimating procedures is free of sample size .once the solution path is constructed , our final estimate is chosen on the path according to certain criterion .such criterion often reflects the prediction accuracy , which depends on the unknown parameters and needs to be estimated .a commonly used criterion is the multifold cross validation ( cv ) .multifold cv can be used to estimate the prediction error of an estimator .the data are first equally split into subsets . using the proposed method , and data ,construct estimate .the cv estimate of the prediction error is we select the tuning parameter by minimizing .it is often suggested to use in practice [ breiman ( ) ] .it is not hard to see that estimates since the first term is the inherent prediction error due to the noise , one often measures the goodness of an estimator using only the second term , referred to as the model error : clearly , we can estimate the model error as , where is the noise variance estimate obtained from the ordinary least squares estimate using all predictors .similarly for more general regression settings , we solve for some matrix .this can be done in an iterative fashion provided that the loss function is strictly convex in its second argument .at each iteration , denote }_0,\theta^{[0]}) ] and update the estimate by minimizing }+\theta ^{[0]}_0\bigr ) \bigl[\mathbf{z}_i\bigl(\theta-\theta^{[0]}\bigr)+\bigl(\theta_0- \theta^{[0]}_0\bigr ) \bigr ] \\ & & \qquad{}+{\frac{1}{2 } } \ell''\bigl(y_i,\mathbf{z}_i\theta^{[0]}+\theta ^{[0]}_0\bigr ) \bigl[\mathbf{z}_i\bigl(\theta-\theta^{[0]}\bigr)+\bigl(\theta_0- \theta^{[0]}_0\bigr ) \bigr]^2 \biggr ) , \end{aligned}\ ] ] subject to and , where the derivatives are taken with respect to the second argument of .now it becomes a quadratic program .we repeat this until a certain convergence criterion is met . 
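For the linear-regression case, the pieces of this section can be assembled into a short script: build the heredity constraint matrix H for a quadratic model, solve the constrained quadratic program over a grid of values of the tuning parameter, and select the tuning parameter by ten-fold cross-validation. The sketch below uses cvxpy and scikit-learn, which is an implementation choice for illustration rather than the authors' code, and the toy data-generating values are placeholders.

```python
import numpy as np
import cvxpy as cp
from sklearn.model_selection import KFold

def quadratic_design(X3):
    """Expand 3 centered variables into the 9-term quadratic design."""
    a, b, c = X3[:, 0], X3[:, 1], X3[:, 2]
    Z = np.column_stack([a, b, c, a * a, b * b, c * c, a * b, a * c, b * c])
    return Z - Z.mean(axis=0)

def heredity_matrix(kind="strong"):
    """Rows h with h @ theta >= 0.  Term order matches quadratic_design:
    0,1,2 main effects; 3,4,5 quadratic terms; 6,7,8 interactions (0,1),(0,2),(1,2).
    strong: theta_parent >= theta_child for each parent;
    weak (convex relaxation): sum of parent thetas >= theta_child."""
    children = {3: [0], 4: [1], 5: [2], 6: [0, 1], 7: [0, 2], 8: [1, 2]}
    rows = []
    for child, pars in children.items():
        if kind == "strong":
            for p in pars:
                h = np.zeros(9); h[p] = 1.0; h[child] = -1.0; rows.append(h)
        else:
            h = np.zeros(9); h[child] = -1.0
            for p in pars:
                h[p] = 1.0
            rows.append(h)
    return np.vstack(rows)

def garrote_fit(Z, y, M, H=None):
    """Shrinkage-factor fit for budget M, optionally with heredity constraints H."""
    beta_ls, *_ = np.linalg.lstsq(Z, y, rcond=None)
    W = Z * beta_ls
    theta = cp.Variable(Z.shape[1], nonneg=True)
    cons = [cp.sum(theta) <= M]
    if H is not None:
        cons.append(H @ theta >= 0)
    cp.Problem(cp.Minimize(cp.sum_squares(y - W @ theta)), cons).solve()
    return np.asarray(theta.value).ravel() * beta_ls

def cv_select(Z, y, grid, H=None, folds=10, seed=0):
    """Ten-fold cross-validation of the prediction error over a grid of M values."""
    kf = KFold(n_splits=folds, shuffle=True, random_state=seed)
    errors = []
    for M in grid:
        err = sum(np.sum((y[te] - Z[te] @ garrote_fit(Z[tr], y[tr], M, H)) ** 2)
                  for tr, te in kf.split(Z))
        errors.append(err)
    return grid[int(np.argmin(errors))], errors

# Toy run (coefficients and correlations are placeholders, not those of the paper)
rng = np.random.default_rng(0)
Z = quadratic_design(rng.normal(size=(50, 3)))
beta_true = np.array([3.0, 2.0, 0, 0, 0, 0, 1.5, 0, 0])   # obeys strong heredity
y = Z @ beta_true + rng.normal(size=50); y -= y.mean()
M_best, _ = cv_select(Z, y, grid=np.linspace(0.5, 9.0, 18), H=heredity_matrix("strong"))
print("M selected by 10-fold CV:", round(float(M_best), 2))
```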
in choosing the optimal tuning parameter for general regression, we again use the multifold cross - validation .it proceeds in the same fashion as before except that we use a loss - dependent cross - validation score : this section we investigate the finite sample properties of the proposed estimators .to fix ideas , we focus our attention on the usual normal linear regression .we first consider a couple of models that represent different scenarios that may affect the performance of the proposed methods . in each of the following models , we consider three explanatory variables that follow a multivariate normal distribution with with three different values for : and . a quadratic model with nine terms is considered .therefore , we have a total of nine predictors , including three main effects , three quadratic terms and three two - way interactions . to demonstrate the importance of accounting for potential hierarchical structure among the predictor variables , we apply the nonnegative garrote estimator that recognizes strong heredity , weak heredity and without heredity constraints .in particular , we enforce the strong heredity principle by imposing the following constraints : to enforce the weak heredity , we require that we consider two data - generating models , one follows the strong heredity principles and the other follows the weak heredity principles : the first model follows the strong heredity principle : the second model is similar to model i except that the true data generating mechanism now follows the weak heredity principle : for both models , the regression noise . for each model , 50 independent observations of are collected , and a quadratic model with nine terms is analyzed .we choose the tuning parameter by ten - fold cross - validation as described in the last section .following breiman ( ) , we use the model error ( [ me ] ) as the gold standard in comparing different methods .we repeat the experiment for 1000 times for each model and the results are summarized in table [ tab : ex1 ] .the numbers in the parentheses are the standard errors .we can see that the model errors are smaller for both weak and strong heredity models compared to the model that does not incorporate any of the heredity principles .paired -tests confirmed that most of the observed reductions in model error are significant at the 5% level .= 9.2 cm 9.2cm@ & * no heredity * & * weak heredity * & * strong heredity * + + & 1.79 & 1.70 & 1.59 + & ( 0.05 ) & ( 0.05 ) & ( 0.04 ) + & 1.57 & 1.56 & 1.43 + & ( 0.04 ) & ( 0.04 ) & ( 0.04 ) + & 1.78 & 1.69 & 1.54 + & ( 0.05 ) & ( 0.04 ) & ( 0.04 ) + + & 1.77 & 1.61 & 1.72 + & ( 0.05 ) & ( 0.05 ) & ( 0.04 ) + & 1.79 & 1.53 & 1.70 + & ( 0.05 ) & ( 0.04 ) & ( 0.04 ) + & 1.79 & 1.68 & 1.76 + & ( 0.04 ) & ( 0.04 ) & ( 0.04 ) + for model i , the nonnegative garrote that respects the strong heredity principles enjoys the best performance , followed by the one with weak heredity principles .this example demonstrates the benefit of recognizing the effect heredity .note that the model considered here also follows the weak heredity principle , which explains why the nonnegative garrote estimator with weak heredity outperforms the one that does not enforce any heredity constraints . 
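The ten-fold cross-validation used above to pick the tuning parameter can be sketched as follows. `garrote_path_point` is the hypothetical helper from the earlier sketch, the random fold assignment and the grid of budgets are illustrative, and the returned score is the average squared prediction error over the held-out folds; the minimiser of that score is the selected budget.

```python
# Sketch of ten-fold cross-validation over a grid of budgets M.
import numpy as np

def cv_select_M(X, y, M_grid, parents=None, heredity="strong", V=10, seed=0):
    n = X.shape[0]
    folds = np.random.default_rng(seed).integers(0, V, size=n)   # assign each row to a fold
    scores = np.zeros(len(M_grid))
    for v in range(V):
        tr, te = folds != v, folds == v
        for m_idx, M in enumerate(M_grid):
            beta = garrote_path_point(X[tr], y[tr], M, parents, heredity)
            scores[m_idx] += np.sum((y[te] - X[te] @ beta) ** 2)
    scores /= n                                                   # CV estimate of prediction error
    return M_grid[int(np.argmin(scores))], scores
```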
for modelii , the nonnegative garrote with weak heredity performs the best .interestingly , the nonnegative garrote with strong heredity performs better than the original nonnegative garrote .one possible explanation is that the reduced feasible region with strong heredity , although introducing bias , at the same time makes tuning easier . to gain further insight, we look into the model selection ability of the structured variable selection .to separate the strength of a method and effect of tuning , for each of the simulated data , we check whether or not there is any tuning parameter such that the corresponding estimate conforms with the true model . the frequency for each method to select the right modelis given in table [ tab : ex1f ] , which clearly shows that the proposed structured variable selection methods pick the right models more often than the original nonnegative garrote .note that the strong heredity version of the method can never pick model ii correctly as it violates the strong heredity principle .we also want to point out that such comparison , although useful , needs to be understood with caution . in practice ,no model is perfect and selecting an additional main effect so that model ii can satisfy strong heredity may be a much more preferable alternative to many .@ & * no heredity * & * weak heredity * & * strong heredity * + + & 65.5% & 71.5% & 82.0% + & 85.0% & 86.5% & 90.5% + & 66.5% & 73.5% & 81.5% + + & 65.5% & 75.5% & 0.00% + & 83.0% & 90.0% & 0.00% + & 56.5% & 72.5% & 0.00% + we also checked how effective the ten - fold cross - validation is in picking the right model when it does not follow any of the heredity principles .we generated the data from the model where the set up for simulation remains the same as before .note that this model does not follow any of the heredity principles . for each run, we ran the nonnegative garrote with weak heredity , strong heredity and no heredity .we chose the best among these three estimators using ten - fold cross - validation .note that the three estimators may take different values of the tuning parameter . among 1000 runs , 64.1% of the time, nonnegative garrote with no heredity principle was elected .in contrast , for either model i or model ii with a similar setup , less than 10% of the time nonnegative garrote with no heredity principle was elected .this is quite a satisfactory performance .the next example is designed to illustrate the effect of the magnitude of the interaction on the proposed methods .we use a similar setup as before but now with four main effects , four quadratic terms and six two - way interactions .the true data generating mechanism is given by where and with chosen so that the signal - to - noise ratio is always .similar to before , the sample size .figure [ fig : revsim1 ] shows the mean model error estimated over 1000 runs .we can see that the strong and weak heredity models perform better than the no heredity model and the improvement becomes more significant as the strength of the interaction effect increases .to fix the idea , we have focused on using the least squares estimator as our initial estimator .the least squares estimators are known to perform poorly when the number of predictors is large when compared with the sample size . in particular , it is not applicable when the number of predictors exceeds the sample size .however , as shown in yuan and lin ( ) , other initial estimators can also be used . 
in particular , they suggested ridge regression as one of the alternatives to the least squares estimator . to demonstrate such an extension, we consider again the regression model ( [ eq : effectsize ] ) but with ten main effects and ten quadratic terms , as well as 45 interactions .the total number of effects ( ) exceeds the number of observations ( ) and , therefore , the ridge regression tuned with gcv was used as the initial estimator .figure [ fig : aoasrevsim ] shows the solution path of the nonnegative garrote with strong heredity , weak heredity and without any heredity for a typical simulated data with . :solution for different versions of the nonnegative garrote . ]it is interesting to notice from figure [ fig : aoasrevsim ] that the appropriate heredity principle , in this case strong heredity , is extremely valuable in distinguishing the true effect from other spurious effects .this further confirms the importance of heredity principles .in this section we apply the methods from section [ sec2 ] to several real data examples .the first is the prostate data , previously used in tibshirani ( ) .the data consist of the medical records of male patients who were about to receive a radical prostatectomy .the response is the level of prostate specific antigen , and there are 8 explanatory variables .the explanatory variables are eight clinical measures : log(cancer volume ) ( lcavol ) , log(prostate weight ) ( lweight ) , age , log(benign prostatic hyperplasia amount ) ( lbph ) , seminal vesicle invasion ( svi ) , log(capsular penetration ) ( lcp ) , gleason score ( gleason ) and percentage gleason scores 4 or 5 ( pgg45 ) .we consider model ( [ 2way ] ) with main effects , quadratic terms and two way interactions , which gives us a total of 44 predictors .figure [ fig : prostate ] gives the solution path of the nonnegative garrote with strong heredity , weak heredity and without any heredity constraints .the vertical grey lines represent the models that are selected by the ten - fold cross - validation . to determine which type of heredity principle to use for the analysis, we calculated the ten - fold cross - validation scores for each method .the cross - validation scores as functions of the tuning parameter are given in the right panel of figure [ fig : prostate - cv ] .cross - validation suggests the validity of heredity principles .the strong heredity is particularly favored with the smallest score . using ten - fold cross - validation ,the original nonnegative garrote that neglects the effect heredity chooses a six variable model : _ lcavol , lweight , lbph , gleason , lbph : svi _ and _ svi : pgg45_. note that this model does not satisfy heredity principle , because _ gleason _ and _ svi : pgg45 _ are included without any of its parent factors .in contrast , the nonnegative garrote with strong heredity selects a model with seven variables : _ lcavol , lweight , lbph , svi , gleason , gleason _ and _ lbph : svi_. the model selected by the weak heredity , although comparable in terms of cross validation score , is considerably bigger with 16 variables . 
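The quadratic model used for the prostate analysis, with eight main effects, their squares and all two-way interactions ( 44 columns in total ), can be assembled as in the sketch below. The column-naming scheme and the `parents` map, which feeds the heredity constraints of the earlier sketches, are illustrative rather than taken from the original code.

```python
# Sketch of the quadratic expansion: main effects, squares and two-way interactions.
import numpy as np
from itertools import combinations

def quadratic_expansion(X, names):
    n, p = X.shape
    cols, labels, parents = [X[:, j] for j in range(p)], list(names), {}
    for j in range(p):                                   # quadratic terms, parent = main effect j
        parents[len(cols)] = (j, j)
        cols.append(X[:, j] ** 2)
        labels.append(f"{names[j]}^2")
    for i, j in combinations(range(p), 2):               # two-way interactions, parents = (i, j)
        parents[len(cols)] = (i, j)
        cols.append(X[:, i] * X[:, j])
        labels.append(f"{names[i]}:{names[j]}")
    Z = np.column_stack(cols)
    Z = Z - Z.mean(axis=0)                               # centre every column
    return Z, labels, parents
```

For eight main effects this yields 8 + 8 + 28 = 44 centred predictors, matching the count quoted above.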
the estimated model errors for the strong heredity , weak heredity and no heredity nonnegative garrote are , and , respectively , which clearly favors the methods that account for the effect heredity .to further assess the variability of the ten - fold cross - validation , we also ran the leave - one - out cross - validation on the data .the leave - one - out scores are given in the right panel of figure [ fig : prostate - cv ] .it shows a similar pattern as the ten - fold cross - validation . in what follows, we shall continue to use the ten - fold cross - validation because of the tremendous computational advantage it brings about . to illustrate the strategy in more general regression settings , we consider a logistic regression for the south african heart disease data previously used in hastie , tibshirani and friedman ( ) .the data consist of 9 different measures of 462 subjects and the responses indicating the presence of heart disease .we again consider a quadratic model .there is one binary predictor which leaves a total of 53 terms .nonnegative garrote with strong heredity , weak heredity and without heredity were applied to the data set .the solution paths are given in figure [ fig : heart - path ] .the cross - validation scores for the three different methods are given in figure [ fig : heart - cv ] .as we can see from the figure , nonnegative garrote with strong heredity principles achieves the lowest cross - validation score , followed by the one without heredity principles . to gain further insight on the merits of the proposed structured variable selection and estimation techniques ,we apply them to seven benchmark data sets , including the previous two examples . the ozone data , originally used in breiman and friedman ( ) , consist of the daily maximum one - hour - average ozone reading and eight meteorological variables in the los angeles basin for 330 days in 1976 . the goal is to predict the daily maximum one - hour - average ozone reading using the other eight variables .the boston housing data include statistics for 506 census tracts of boston from the 1970 census [ harrison and rubinfeld ( ) ] .the problem is to predict the median value of owner - occupied homes based on 13 demographic and geological measures .the diabetes data , previously analyzed by efron et al .( ) , consist of eleven clinical measurements for a total of 442 diabetes patients .the goal is to predict a quantitative measure of disease progression one year after the baseline using the other ten measures that were collected at the baseline .along with the prostate data , these data sets are used to demonstrate our methods in the usual normal linear regression setting . to illustrate the performance of the structured variable selection and estimation in more general regression settings , we include two other logistic regression examples along with the south african heart data .the pima indians diabetes data have 392 observations on nine variables .the purpose is to predict whether or not a particular subject has diabetes using eight remaining variables .the bupa liver disorder data include eight variables and the goal is to relate a binary response with seven clinical measurements .both data sets are available from the uci repository of machine learning databases [ newman et al .( ) ] .we consider methods that do not incorporate heredity principles or respect weak or strong heredity principles . 
for each method, we estimate the prediction error using ten - fold cross - validation , that is , the mean squared error in the case of the four linear regression examples , and the misclassification rate in the case of the three classification examples .table [ tab : real ] documents the findings .similar to the heart data we discussed earlier , the total number of effects ( ) can be different for the same number of main effects ( ) due to the existence of binary variables . as the results from table [ tab : real ] suggest , incorporating the heredity principles leads to improved prediction for all seven data sets .note that for the four regression data sets , the prediction error depends on the scale of the response and therefore should not be compared across data sets .for example , the response variable of the diabetes data ranges from 25 to 346 with a variance of 5943.331 .in contrast , the response variable of the prostate data ranges from to 5.48 with a variance of 1.46 ..0d2.0d3.0d4.3cc@ * data * & & & & & & + boston & 506 & 13 & 103 & 12.609 & & 12.661 + diabetes & 442 & 10 & 64 & 3077.471 & & 3116.989 + ozone & 203 & 9 & 54 & 16.558 & & 15.397 + prostate & 97 & 8 & 44 & 0.624 & 0.632 & * 0.584 * + bupa & 345 & 6 & 27 & 0.287 & 0.279 & * 0.267 * + heart & 462 & 9 & 53 & 0.286 & 0.275 & * 0.262 * + pima & 392 & 8 & 44 & 0.199 & 0.214 & * 0.196 * +when a large number of variables are entertained , variable selection becomes important . with a number of competing models that are virtually indistinguishable in fitting the data , it is often advocated to select a model with the smaller number of variables .but this principle alone may lead to models that are not interpretable . in this paper we proposed structured variable selection and estimation methods that can effectively incorporate the hierarchical structure among the predictors in variable selection and regression coefficient estimation .the proposed methods select models that satisfy the heredity principle and are much more interpretable in practice .the proposed methods adopt the idea of the nonnegative garrote and inherit its advantages .they are easy to compute and enjoy good theoretical properties .similar to the original nonnegative garrote , the proposed method involves the choice of a tuning parameter which also amounts to the selection of a final model . throughout the paper ,we have focused on using the cross - validation for such a purpose .other tuning methods could also be used . in particular, it is known that prediction - based tuning may result in unnecessarily large models .several heuristic methods are often adopted in practice to alleviate such problems .one of the most popular choices is the so - called one standard error rule [ breiman et al .( ) ] , where instead of choosing the model that minimizes the cross - validation score , one chooses the simplest model with a cross - validation score within one standard error from the smallest .our experience also suggests that a visual examination of the solution path and the cross - validation scores often leads to further insights .the proposed method can also be used in other statistical problems whenever the structures among predictors should be respected in model building . in some applications , certain predictor variablesmay be known apriori to be more important than the others .this may happen , for example , in time series prediction where more recent observations generally should be more predictive of future observations .boyd , s. and vandenberghe , l. 
( 2004 ) ._ convex optimization_. cambridge univ .press , cambridge .breiman , l. ( 1995 ) .better subset regression using the nonnegative garrote ._ technometrics _ * 37 * 373384 .breiman , l. and friedman , j. ( 1985 ) .estimating optimal transformations for multiple regression and correlation ._ j. amer .assoc . _ * 80 * 580598 .breiman , l. , friedman , j. , stone , c. and olshen , r. ( 1984 ) . _classifcation and regression trees_. chapman & hall / crc , new york .chipman , h. ( 1996 ) .bayesian variable selection with related predictors .j. statist . _* 24 * 1736 .chipman , h. , hamada , m. and wu , c. f. j. ( 1997 ) . a bayesian variable selection approach for analyzing designed experiments with complex aliasing . _ technometrics _ * 39 * 372381 .choi , n. , li , w. and zhu , j. ( 2006 ) .variable selection with the strong heredity constraint and its oracle property .technical report .efron , b. , johnstone , i. , hastie , t. and tibshirani , r. ( 2004 ) .least angle regression ( with discussion ) .statist . _* 32 * 407499 .fan , j. and li , r. ( 2001 ) .variable selection via nonconcave penalized likelihood and its oracle properties ._ j. amer .assoc . _ * 96 * 13481360 .george , e. i. and mcculloch , r. e. ( 1993 ) .variable selection via gibbs sampling . _ j. amer . statist .assoc . _ * 88 * 881889 .hamada , m. and wu , c. f. j. ( 1992 ) .analysis of designed experiments with complex aliasing ._ journal of quality technology _ * 24 * 130137 .harrison , d. and rubinfeld , d. ( 1978 ) .hedonic prices and the demand for clean air ._ journal of environmental economics and management _ * 5 * 81102 .hastie , t. , tibshirani , r. and friedman , j. ( 2003 ) . _ the elements of statistical learning : data mining , inference , and prediction_. springer , new york .joseph , v. r. and delaney , j. d. ( 2007 ) . functionally induced priors for the analysis of experiments ._ technometrics _ * 49 * 111 .li , x. , sundarsanam , n. and frey , d. ( 2006 ) .regularities in data from factorial experiments ._ complexity _ * 11 * 3245 .mccullagh , p. ( 2002 ) .what is a statistical model ( with discussion ) ._ * 30 * 12251310 .mccullagh , p. and nelder , j. ( 1989 ) ._ generalized linear models _ , 2nd edchapman & hall , london .nelder , j. ( 1977 ) . a reformulation of linear models .a _ * 140 * 4877 .nelder , j. ( 1994 ) .the statistics of linear models ._ statist .comput . _ * 4 * 221234 .nelder , j. ( 1998 ) .the selection of terms in response - surface models how strong is the weak - heredity principle ? _statist . _ * 52 * 315318 .newman , d. , hettich , s. , blake , c. and merz , c. ( 1998 ) .uci repository of machine learning databases .information and computer science , univ .california , irvine , ca .available at http://www.ics.uci.edu/\textasciitilde mlearn / mlrepository.html[http://www.ics.uci.edu/\textasciitilde mlearn / mlrepository.html ] .osborne , m. , presnell , b. and turlach , b. ( 2000 ) . a new approach to variable selection in least squares problems ._ i m a j. numer .* 20 * 389403 .tibshirani , r. ( 1996 ) .regression shrinkage and selection via the lasso .b _ * 58 * 267288 .turlach , b. ( 2004 ) .discussion of `` least angle regression . '' _ ann .statist . _* 32 * 481490 .yuan , m. , joseph , v. r. and lin , y. ( 2007 ) .an efficient variable selection approach for analyzing designed experiments ._ technometrics _ * 49 * 430439. yuan , m. and lin , y. ( 2006 ) .model selection and estimation in regression with grouped variables .b _ * 68 * 4967 .yuan , m. and lin , y. 
( 2007 ) . on the nonnegative garrote estimator . b _ * 69 * 143161 .
( 2007 ) . on the nonnegative garrote estimator . b _ * 69 * 143161 .
in linear regression problems with related predictors , it is desirable to do variable selection and estimation by maintaining the hierarchical or structural relationships among predictors . in this paper we propose non - negative garrote methods that can naturally incorporate such relationships defined through effect heredity principles or marginality principles . we show that the methods are very easy to compute and enjoy nice theoretical properties . we also show that the methods can be easily extended to deal with more general regression problems such as generalized linear models . simulations and real examples are used to illustrate the merits of the proposed methods . , .
in this paper , we consider numerical methods for the computation of the -gradient flow of a planar curve : where is a time - dependent planar curve .the gradient flow is energy dissipative , since = - \int |\operatorname{grad}e({\mathbf{u}})|^2 ds \le 0.\ ] ] here , is an energy functional , and is the frchet derivative with respect to the -structure with line integral .thus , the curvature flow ( the curve shortening flow ) and the elastic flow ( the willmore flow ) have energy functionals = \int ds , \quad \text{and } \quad e[{\mathbf{u } } ] = \varepsilon^2 \int |{\bm{\upkappa}}|^2 ds + \int ds,\ ] ] where is the curvature vector , and is the tangential derivative ( example [ ex : gf ] ) .note that the elastic flow is a fourth - order nonlinear evolution equation .we consider a dissipative numerical scheme for , that is , a scheme which has the discrete energy dissipative property \le e[{\mathbf{u}}_h^n] ] , then equation is the curvature flow where is the curvature vector , and is the arc - length parameter .2 . ( elastic flow ) if = \varepsilon^2 \int |{\bm{\upkappa}}|^2 ds + \int ds ] is a _ b - spline curve of degree _ if is represented by ,\ ] ] for some knot vector and .the coefficient is called a _control point_. in fact , if the knot vector is disjoint ( i.e. , ) , then it is known that is a -function . for more details on the properties of b - spline functions ,we refer the reader to . in the present paper ,we only consider the periodic b - spline functions and curves .let \subset { \mathbb{r}} ] . then , if , we can see that for and .therefore , the function , \\n^\xi_{p , i+n}(\zeta ) , & \zeta \in [ \xi_{i+n},b ] , \\ 0 , & \text{otherwise } \end{cases } \quad i=1,2,\dots , p \label{eq : periodic - b - spline}\ ] ] is a periodic -function in ] for is also -periodic on ] be an interval , , with , and .then , we define a _ periodic b - spline basis function of degree p _ by for and by } ] expressed by ,\ ] ] for some .it is clear that a closed b - spline curve is a -curve .in this section , we derive a numerical scheme for geometric gradient flows for the energy functional given by .we first consider the time discretization , and recall the idea of the dpdm . using a similar approach as for the dpdm , we derive a discretization of the chain rule .the definition of the partial derivatives is a discrete analogue of the chain rule formula for a smooth function . in our case, the corresponding chain rule can be expressed as = \int \operatorname{grad}e({\mathbf{u } } ) \cdot { \mathbf{u}}_t ds({\mathbf{u } } ) .\label{eq : chain}\ ] ] here , we denote the line element of by to emphasize the dependence on .now , we discretize the chain rule .we first change the time derivatives to time differences by expressing ] and , respectively .moreover , the line element should be changed appropriately . in the original formula ,there is one function only .however , in the discretization , there are two functions and as in .therefore , we have some choices to discretize the term , for example , , , and . 
here, we use .then , we define a discrete gradient , , with a function that satisfies the following formula : - e[{\mathbf{v } } ] = \int \operatorname{grad}_{\mathrm{d}}e({\mathbf{u } } , { \mathbf{v } } ) ds\left ( \frac{{\mathbf{u}}+{\mathbf{v}}}{2 } \right ) , \quad \forall { \mathbf{u}},{\mathbf{v}}\in { \mathbf{h}}^m_\pi .\label{eq : disc - chain}\ ] ] thus , according to , the strong form of the time - discrete problem is written as follows : the discrete chain rule can then be expressed as - e[{\mathbf{v } } ] = \int_0 ^ 1 \left| \frac{{\mathbf{u}}_\zeta+{\mathbf{v}}_\zeta}{2 } \right| \operatorname{grad}_{\mathrm{d}}e({\mathbf{u } } , { \mathbf{v } } ) \cdot ( { \mathbf{u}}- { \mathbf{v } } ) d\zeta , \quad \forall { \mathbf{u}},{\mathbf{v}}\in { \mathbf{h}}^m_\pi , \label{eq : disc - chain2}\ ] ] and comparing with , we can derive the relationship between and the discrete first derivative as follows . here is a vector - valued function . letting ,the energy is expressed by = \int_0 ^ 1 g({\mathbf{u } } , { \mathbf{u}}_\zeta , \dots , \partial_\zeta^m { \mathbf{u } } ) d\zeta,\ ] ] and thus the discrete first derivative is given by here , we define the ( vector - valued ) partial derivatives , as functions that satisfy the relation for all .note that , as in the previous case ( subsection [ subsec : dpdm ] ) , the partial derivative may not be unique .now , instead of solving , we may solve the equation and the weak form of gives our semi - discrete scheme for the gradient flow .note that the time increment can differ at each step .[ scheme : semi - disc ] find that satisfies for given .we now consider the full discretization of the gradient flow .let be the space of closed b - spline curves of degree as defined in definition [ def : b - spline ]. then , by the sobolev embedding theorem , if .thus , we can derive a fully discretized problem by the galerkin method .[ scheme : full - disc ] let , , and .assume is given .find that satisfies for all .then , we can establish the discrete energy dissipation property .[ lem : dissipation ] let and satisfy the relation .then , we have - e[{\mathbf{u}}_h^n]}{\delta t_n } = - \int_0 ^ 1 \left| \frac{{\mathbf{u}}^{n+1}_{h,\zeta } + { \mathbf{u}}^n_{h,\zeta}}{2 } \right| \left| \frac{{\mathbf{u}}_h^{n+1 } - { \mathbf{u}}_h^n}{\delta t_n } \right|^2 d\zeta \le 0.\ ] ] substituting into the scheme , we have - e[{\mathbf{u}}_h^n]}{\delta t_n},\end{aligned}\ ] ] by the definition of the partial derivatives .hence we can establish the desired assertion .in this section , we show some numerical examples of the elastic flow computed by our scheme . here , the functional is the elastic energy = \varepsilon^2 \int |{\bm{\upkappa}}|^2 ds + \int ds = \int_0 ^ 1 \left ( \varepsilon^2 \frac{\det({\mathbf{u}}_\zeta,{\mathbf{u}}_{\zeta\zeta})^2}{|{\mathbf{u}}_\zeta|^5 } + |{\mathbf{u}}_\zeta| \right ) d\zeta , \label{eq : elastic - energy}\ ] ] where it is known that equation has a unique time - global solution ( see , e.g. , ( * ? ? ? * theorem 3.2 ) ) .therefore , the turning number is invariant . to calculate the discrete partial derivatives for ,let for .then , the energy density function for is . we can compute the partial derivatives of since which implies we can derive the partial derivatives of in several ways . 
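As a sanity check on the ingredients entering the scheme, the following sketch evaluates a closed uniform B-spline curve ( degree k, periodic wrap of the control points, following the construction of the previous section ) and approximates the elastic energy ( [ eq : elastic - energy ] ) by a simple quadrature rule. Note, as our own side computation, that for a circle of radius r the energy is 2*pi*eps^2/r + 2*pi*r, which is minimised at r = eps with value 4*pi*eps, consistent with the steady-state energy quoted above; the example uses this as a check. The helper names, the uniform knots and the rectangle-rule quadrature are our own choices, not the author's implementation.

```python
# Sketch: closed uniform B-spline curve and quadrature approximation of the elastic energy.
import numpy as np
from scipy.interpolate import BSpline

def closed_bspline(ctrl, k=3):
    """Closed uniform B-spline curve of degree k from periodic control points (n x 2)."""
    ctrl = np.asarray(ctrl, dtype=float)
    n = len(ctrl)
    t = np.arange(-k, n + k + 1) / n               # uniform knots, curve parametrised on [0, 1]
    c = np.concatenate([ctrl, ctrl[:k]], axis=0)   # wrap the first k control points
    return BSpline(t, c, k)

def elastic_energy(curve, eps, m=4000):
    """Rectangle rule for int_0^1 ( eps^2 det(u', u'')^2 / |u'|^5 + |u'| ) dzeta."""
    z = np.linspace(0.0, 1.0, m, endpoint=False)
    d1 = curve.derivative(1)(z)                    # u_zeta
    d2 = curve.derivative(2)(z)                    # u_zetazeta
    speed = np.linalg.norm(d1, axis=1)
    det = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]
    return np.mean(eps**2 * det**2 / speed**5 + speed)

# Check: control points on a circle of radius eps give an energy close to 4*pi*eps.
eps = 0.1
phi = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
circle = closed_bspline(eps * np.stack([np.cos(phi), np.sin(phi)], axis=1))
print(elastic_energy(circle, eps), 4 * np.pi * eps)
```

The same quadrature grid can be reused to assemble the discrete quantities that enter scheme [ scheme : full - disc ].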
in the following examples ,these derivatives are computed by dividing as follows : the first and the second terms of the right - hand side are calculated as \\ & \times \big[v_{2,\zeta\zeta}(u_{1,\zeta}-v_{1,\zeta } ) - v_{1,\zeta\zeta}(u_{2,\zeta}-v_{2,\zeta } ) \\ & - u_{2,\zeta}(u_{1,\zeta\zeta}-v_{1,\zeta\zeta } ) + u_{1,\zeta}(u_{2,\zeta\zeta}-v_{2,\zeta\zeta } ) \big],\end{aligned}\ ] ] and respectively .although we have omitted them , we can derive partial derivatives of with these equations . before showing numerical examples, we recall the steady - state solutions for the elastic flow .it is known that steady closed curves of the elastic energy are circles of radius , the figure - eight - shaped curve with scale , and their multiple versions ( see figure [ fig : steady ] ) .their energies are = 4 \pi \varepsilon , \quad e[\text{eight - shaped } ] \approx \varepsilon \cdot 21.2075,\ ] ] respectively .the exact value of the latter energy is expressed by the elliptic integrals ( cf . ) . in our numerical examples , we set the time increment as and - e[{\mathbf{u}}_h^{n-1}]}{\delta t_{n-1 } } \right)^{-2 } \right\ } , \quad n \ge 1.\ ] ] we found these values empirically .we solved equation at each step with the newton method .moreover , in each step , when two adjacent control points are too close ( more precisely , when the distance is less than ) , one of them was removed .note that the energy may increase when control points are eliminated ; however , the shape of the curve will be less affected ( see figure [ fig : elim ] ) . with respect to the control points as shown in the upper figure .the right two figures show the b - spline curves of the same degree with respect to the control points except for the point . ]we show six examples here . in all examples, we use the b - spline curves of degree .therefore , every curve below is of class . videos of the following examples are available on youtube .[ ex : circle_0 ] the first example is shown in figure [ fig : circle_0 ] .the initial curve is a circle.the parameters are in this example , the elimination of control points was not necessary .figure [ fig : circle_0-curve ] shows the evolution of the curve at .the energy at is .note that the exact value of the energy at the steady state is .figure [ fig : circle_0-energy ] shows the evolution of the energy .the discrete energy dissipation property is clearly visible . in figure [ fig :circle_0-curve ] , one can observe that the curve shrinks as the curvature flow until , and it stops shrinking when the radius approaches ..45 .,title="fig : " ] .45 .,title="fig : " ] [ ex : eight_0 ] the second example is shown in figure [ fig : eight_0 ] .the initial curve is figure - eight - shaped .the parameters are as in the previous case , the elimination of control points is not necessary .figure [ fig : eight_0-curve ] shows the evolution of the curve at and figure [ fig : eight_0-energy ] shows the evolution of the energy .the energy at is .note that the exact value of the energy at the steady state is approximately . 
in figure [ fig : eight_0-curve ] , first the small loop ( the right loop ) shrinks faster than the larger one . when the scale of the right loop becomes , shrinking stops , and the left one begins to shrink . finally , the left one also stops shrinking , and the curve approaches the steady state .

[ figures fig : eight_0-curve and fig : eight_0-energy : evolution of the curve and of the energy for example [ ex : eight_0 ] . ]

[ ex : double_0 ] the third example is shown in figure [ fig : double_0-curve ] . the initial shape of the curve is a cardioid - like curve as shown in figure [ fig : double_0 - 1 ] . the initial parameters of the curve are as in the previous cases , the elimination of control points is not necessary . figure [ fig : double_0-curve ] shows the evolution of the curve at and figure [ fig : double_0-energy ] shows the evolution of the energy . in this case , the steady - state is a double - looped circle with radius . therefore , the energy of the solution at ( ) is approximately twice the value of that of example [ ex : circle_0 ] . the behavior of the curve is similar to example [ ex : eight_0 ] . that is , first the smaller loop shrinks until the scale is approximately . then , the larger one shrinks and the curve approaches the steady state .

[ figures fig : double_0 - 1 , fig : double_0-curve and fig : double_0-energy : initial curve , evolution of the curve and evolution of the energy for example [ ex : double_0 ] . ]

[ ex : circle_1 ] this example shows a topology - changing solution . the initial curve is the one shown in figure [ fig : circle_1 - 1 ] , and figure [ fig : circle_1-curve ] shows its evolution . figures [ fig : circle_1-energy ] and show the evolution of the energy and the number of control points , respectively . the parameters are one can observe that the topology of the curve changes at around ( figures [ fig : circle_1 - 5 ] and ) . at the same time , the energy decreases drastically ( figure [ fig : circle_1-energy ] ) , and some control points become concentrated ( figure [ fig : circle_1-elim ] ) . as mentioned earlier , we implement an algorithm that deletes a control point if it is too close to the adjacent point . therefore , the elimination of control points occurs when the topology changes , and the number of control points finally converges to .

[ figures fig : circle_1-curve , fig : circle_1-energy and fig : circle_1-elim : evolution of the curve , the energy and the control points at the times described as in the figures ; the small circles represent control points , is the number of control points , and the gray colored curve is the solution . ]

[ ex : eight_1 ] the following two examples investigate problems with more complicated solutions . the initial shape of the curve is shown in figure [ fig : eight_1 - 1 ] , and figure [ fig : eight_1-curve ] shows its evolution . figures [ fig : eight_1-energy ] and show the evolution of the energy and the number of control points , respectively . the parameters are in this example , the topology of the curve changes frequently . for example , the loop in the upper left of the curve disappears at around . when the topology changes , the energy decreases rapidly as in example [ ex : circle_1 ] , and the number of control points also decreases at the same time . the final value of is .

[ figures fig : eight_1-curve and fig : eight_1-energy : evolution of the curve , the energy and the number of control points for example [ ex : eight_1 ] . ]

[ ex : circle_3 ] the initial curve for the final example is shown in figure [ fig : circle_3 - 1 ] , and figure [ fig : circle_3-curve ] shows its evolution . figures [ fig : circle_3-energy ] and show the evolution of the energy and the number of control points , respectively . the parameters are the solution displays complicated behavior as in example [ ex : eight_1 ] , and the topology changes frequently . however , since the turning number of the initial curve is one , the steady state is a circle with radius . the energy and the number of control points decrease drastically when the topology changes , and the final number of control points is .

[ figures fig : circle_3-curve and fig : circle_3-energy : evolution of the curve , the energy and the number of control points for example [ ex : circle_3 ] . ]

i would like to thank prof . yoshihiro tonegawa and dr . takahito kashiwabara for bringing this topic to my attention and encouraging me through valuable discussions . this work was supported by the program for leading graduate schools , mext , japan , and by jsps kakenhi ( no . 15j07471 ) .
in this paper , we develop an energy dissipative numerical scheme for gradient flows of planar curves , such as the curvature flow and the elastic flow . our study presents a general framework for solving such equations . to discretize time , we use a similar approach to the discrete partial derivative method , which is a structure - preserving method for the gradient flows of graphs . for the approximation of curves , we use b - spline curves . owing to the smoothness of b - spline functions , we can directly address higher order derivatives . in the last part of the paper , we consider some numerical examples of the elastic flow , which exhibit topology - changing solutions and more complicated evolution . videos illustrating our method are available on youtube .
coined quantum walks ( qws ) on graphs were firstly defined in ref . and have been extensively analyzed in the literature .many experimental proposals for the qws were given previously , with some actual experimental implementations performed in refs .the key feature of the coined qw model is to use an internal state that determines possible directions that the particle can take under the action of the shift operator ( actual displacement through the graph ) .another important feature is the alternated action of two unitary operators , namely , the coin and shift operators .although all discrete - time qw models have the `` alternation between unitaries '' feature , the coin is not always necessary because the evolution operator can be defined in terms of the graph vertices only , without using an internal space as , for instance , in szegedy s model or in the ones described in refs . .more recently , the staggered quantum walk ( sqw ) model was defined in refs . , where a recipe to generate unitary and hermitian local operators based on the graph structure was given .the evolution operator in the sqw model is a product of local operators .moves the particle to the neighborhood of , but not further away . ]the sqw model contains a subset of the coined qw class of models , as shown in ref . , and the entire szegedy model class .although covering a more general class of quantum walks , there is a restriction on the local evolution operations in the sqw demanding hermiticity besides unitarity .this severely compromises the possibilities for actual implementations of sqws on physical systems because the unitary evolution operators , given in terms of time - independent hamiltonians having the form , are non - hermitian in general . to have a model , that besides being powerful as the sqw , to be also fitted for practical physical implementations , it would be necessary to relax on the hermiticity requirement for the local unitary operators . in this work, we propose an extension of the sqw model employing non - hermitian local operators .the concatenated evolution operator has the form where and are unitary and hermitian , and are general angles representing specific systems energies and time intervals ( divided by the planck constant ) .the standard sqw model is recovered when and . with this modification , sqw with hamiltoniansencompasses the standard sqw model and includes new coined models . besides , with the new model, it is easier to devise new experimental proposals such as the one described in ref . .[ sqwwh_graph1 ] depicts the relation among the discrete - time qw models .szegedy s model is included in the standard sqw model class , which itself is a subclass of the sqw model with hamiltonians .flip - flop coined qws that are in szegedy s model are also in the sqw model .flip - flop coined qws using hadamard and grover coins , as represented in fig .[ sqwwh_graph1 ] , are examples .there are coined qws , which are in the sqw model with hamiltonians general class , but not in the standard sqw model , as for example , the one - dimensional qws with coin , where is the pauli matrix , with angle not a multiple of .those do not encompass all the possible coined qw models , as there are flip - flop coined qws , which although being built with non - hermitian unitary evolution , can not be put in the sqw model with hamiltonians .for instance , when the fourier coin is employed , where and , being the hilbert space dimension .the structure of this paper is as follows . 
in sec .[ sec2 ] , we describe how to obtain the evolution operator of the sqw with hamiltonians on a generic simple undirected graph . in sec .[ sec3 ] , we calculate the wave function using the fourier analysis for the one - dimensional lattice and the standard deviation of the probability distribution . in sec .[ sec4 ] , we characterize which coined qws are included in the class of sqws with hamiltonians . finally , in sec .[ sec5 ] we draw our conclusions .let be a simple undirected graph with vertex set and edge set .a tessellation of is a partition of so that each element of the partition is a clique .a clique is a subgraph of that is complete .an element of the partition is called a polygon .the tessellation covers all vertices but not necessarily all edges .let be the hilbert space spanned by the computational basis , that is , each vertex is associated with a vector of the canonical basis .each polygon spans a subspace of the , whose basis comprises the vectors of the computational basis associated with the vertices in the polygon .let be the number of polygons and let be a polygon for some .a unit vector _ induces _polygon if the following two conditions are fulfilled : first , the vertices of is a clique in .second , the vector has the form so that for and otherwise .the simplest choice is the uniform superposition given by for .there is a recipe to build a unitary and hermitian operator associated with the tessellation , when we use the following structure : is unitary because the polygons are non - overlapping , that is , for . is hermitian because it is a sum of hermitian operators .then , .an operator of this kind is called an _orthogonal reflection _ of graph .each induces a polygon and we say that induces the tessellation .the idea of the staggered model is to define a second operator that must be independent of .define a second tessellation by making another partition of with polygons for , where is the number of polygons .for each polygon , define unit vectors so that for and otherwise .the simplest choice is the uniform superposition given by for .likewise , define is an orthogonal reflection . to obtain the evolution operator we demand that the union of tessellations and should cover the edges of , where tessellation is the union of polygons for and tessellation is the union of polygons for .this demand is necessary because edges that do not belong to the tessellation union can be removed from the graph without changing the dynamics .the standard sqw dynamics is given by the evolution operator where the unitary and hermitian operators and are constructed as described in eqs .( [ h_0 ] ) and ( [ h_1 ] ) .however , such graph - based construction of the operators does not correspond , in general , to the evolution of the real physical systems which are unitary but non - hermitian instead .actually , the unitary and non - hermitian operators do not have a nice representation as in eqs .( [ h_0 ] ) and ( [ h_1 ] ) . in the following ,we introduce and analyze a method for constructing `` physical evolutions '' using the graph - based unitary and hermitian operators .we define the staggered qw model with hamiltonians by the evolution operator where and are angles . can be written as the standard sqw model is obtained when and . 
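A minimal sketch of the recipe above: given a tessellation as a list of polygons ( each a list of vertex indices forming a clique ), build the orthogonal reflection H = 2 sum_k |u_k><u_k| - I with |u_k> the uniform superposition over polygon k, and combine two such reflections into the evolution operator of the SQW with Hamiltonians. The function names are ours, and the dense matrix exponential from scipy is used purely for illustration.

```python
# Sketch: orthogonal reflections from tessellations and the SQW-with-Hamiltonians operator.
import numpy as np
from scipy.linalg import expm

def orthogonal_reflection(n_vertices, tessellation):
    """H = 2*sum_k |u_k><u_k| - I for non-overlapping polygons covering all vertices."""
    H = -np.eye(n_vertices, dtype=complex)
    for polygon in tessellation:
        u = np.zeros(n_vertices, dtype=complex)
        u[polygon] = 1.0 / np.sqrt(len(polygon))    # uniform superposition on the clique
        H += 2.0 * np.outer(u, u.conj())
    return H                                        # unitary and Hermitian, H @ H = I

def evolution_operator(H0, H1, theta0, theta1):
    """U = exp(i*theta1*H1) exp(i*theta0*H0)."""
    return expm(1j * theta1 * H1) @ expm(1j * theta0 * H0)
```

Since each reflection squares to the identity, exp(i theta H) = cos(theta) I + i sin(theta) H, so the choice theta0 = theta1 = pi/2 reproduces the standard staggered evolution -H1 H0 up to a global phase.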
the staggered qw model with hamiltoniansis characterized by two tessellations and the angles and .the evolution operator is the product of two _ local _ unitary operators .local in the sense discussed before , that is , if a particle is on vertex , it will move to the neighborhood of only .some graphs are not 2-tessellable as discussed in ref . . in this case, we have to use more than two tessellations until covering all edges and eq .( [ u ] ) must be extended accordingly .one of the simplest example of a sqw model with hamiltonians is the one - dimensional lattice ( or chain ) as in fig .[ 1dlattice ] .if we wish to use the minimum number of tessellations that cover all vertices and edges , the only choice are the two tesselations represented in the figure and correspond to two alternate interactions between first neighbors .therefore the evolution operator in the one - dimensional case with is given by where and for the sake of simplicity , we choose and to be independent from . is defined on hilbert space , whose computational basis is . while the diagonal forms of the hamiltonians ( [ h0 ] ) and ( [ h1 ] ) with -eigenvectors ( [ eq : genual_0 ] ) and ( [ eq : genual_1 ] ) , respectively , are more appropriate to the qw related computations, one can not immediately see the connections to interactions energies that they usually represent . for actual implementations , it is more convenient to write down it in terms of bosonic operators as in that form the first term represents the occupations of each site and the second one represents hopping hamiltonians .note that since the qw models considered here are single particles quantum walks , the corresponding picture in terms of hamiltonians ( [ h00 ] ) and ( [ h11 ] ) implementation is to consider a single excitation in the encoding physical system .the joint hamiltonian describes a large number of physical systems , from cold atoms trapped in optical lattices to a linear array of electromechanical resonators .however the alternated action of the two local unitary operators in ( [ unit ] ) requires that the hamiltonians and be applied independently .this requires a more involved process of alternating interactions in the system , which demands an external control particular to each physical system .a proposal on how to implement it in a one dimensional array of coupled superconducting transmission line resonators is discussed elsewhere . to start our analysis , in fig .[ fig : sqwwh_graph2 ] we show the probability distribution for the 1d sqw with hamiltonians ( [ h0 ] ) and ( [ h1 ] ) after 60 steps with parameters , , and .the initial condition assumed was .a quantum walk with those parameters was analyzed by ref .note the typical profile , which is similar to the coined qw , but certainly not to the continuous - time qw ., , , and initial condition . 
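A sketch of the kind of simulation behind fig. [ fig : sqwwh_graph2 ]: the walk on a cycle of N sites with the two tessellations given by the polygons { 2x, 2x+1 } and { 2x+1, 2x+2 }, reusing the helper of the earlier sketch. The angles, number of steps and localised initial state below are illustrative choices, not the parameter values of the figure; because each reflection squares to the identity, the exponential is expanded directly instead of calling expm.

```python
# Sketch: 1D staggered quantum walk with Hamiltonians on a cycle of N sites.
import numpy as np

N, steps = 512, 60
theta0 = theta1 = np.pi / 3                        # illustrative angles

tess0 = [[2 * x, 2 * x + 1] for x in range(N // 2)]
tess1 = [[(2 * x + 1) % N, (2 * x + 2) % N] for x in range(N // 2)]
H0 = orthogonal_reflection(N, tess0)               # helper from the earlier sketch
H1 = orthogonal_reflection(N, tess1)
I = np.eye(N, dtype=complex)
U = (np.cos(theta1) * I + 1j * np.sin(theta1) * H1) @ \
    (np.cos(theta0) * I + 1j * np.sin(theta0) * H0)   # exp(i*t*H) = cos(t) I + i sin(t) H

psi = np.zeros(N, dtype=complex)
psi[N // 2] = 1.0                                  # walker localised at a single vertex
pos = np.arange(N) - N // 2
for _ in range(steps):
    psi = U @ psi
prob = np.abs(psi) ** 2                            # double-peaked profile as in the figure
mean = np.sum(pos * prob)
sigma = np.sqrt(np.sum((pos - mean) ** 2 * prob))
print(f"standard deviation after {steps} steps: {sigma:.2f}")
```

Repeating the measurement of sigma for increasing step numbers exhibits the linear ( ballistic ) growth of the spread discussed next.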
]in order to find the spectral decomposition of the evolution operator , we perform a basis change that takes advantage of the system symmetries .let us define the fourier basis by the vectors where $ ] .for a fixed , those vectors define a plane that is invariant under the action of the evolution operator , which is confirmed by the following results : where the analysis of the dynamics can be reduced to a two - dimensional subspace of by defining a reduced evolution operator .\ ] ] is unitary since a vector in this subspace is mapped to hilbert space after multiplying its first entry by and its second entry by .the eigenvalues of ( the same of ) are , where note that in ( [ eq : a ] ) depends on , as well as others parameters .the non - trivial eigenvectors of are where the eigenvectors of the evolution operator associated with eigenvalues are and we can write , , , and initial condition . ]if we take as the initial condition , the quantum walk state at time is given by where and the probability distribution is obtained after calculating and .the probability distribution would not be symmetric in this case ( localized initial condition ) , as can be seen in fig .[ fig : sqwwh_graph2a ] . those resultsextend the corresponding ones obtained in ref . .the results of ref . can be extended in order to include parameter of the sqw with hamiltonians .the asymptotic expression for the odd moments with initial condition is ^{2n}dk+o(t^{2n-2}),\ ] ] and for the even moments is the square of the standard deviation is for and , it simplifies asymptotically to as function of and .the value of at the center of the plot is zero . ][ fig : sqwwh_graph5 ] shows the plot of as function of and .the maximum value of is , which is achieved for the points on a circle with center at and radius , for instance , when and . when and , is the direct sum of pauli matrices , \ ] ] likewise , with a diagonal shift of one entry .any flip - flop coined qw on a graph with a coin operator of the form , where is an orthogonal reflection of , is equivalent to a sqw with hamiltonians on a larger graph .the procedure to obtain is described in ref .we briefly review it in the next paragraph . and a degree-3 vertex .edge has label .the other edges have labels to . ], one has to replace a degree- vertex by a -clique .the vertex labels of the enlarged graph have the form `` '' , where is the label of the vertex in the original graph and is the edge incident on . ] let be the vertex labels and let be the edge labels of graph .the action of the flip - flop shift operator on vectors of the computational basis associated with is where and are adjacent and is the label of the edge as shown in fig .[ fig : sqwwh_graph6a ] . as for all edges .the -eigenvectors are and there is a -eigenvector for each edge .we are assuming that .then induces the red polygons of fig .[ fig : sqwwh_graph6b ] . after replacing each degree- vertex of by a -clique, we obtain graph of fig .[ fig : sqwwh_graph6b ] on which the equivalent sqw is defined .the degree-5 vertex is converted into a 5-clique and the degree-3 vertex is converted into a 3-clique .the vertex labels of have the form `` '' , where is the label of the vertex in the original graph and is the edge incident on . 
with this notation , it is straightforward to check that the unitary and hermitian operator that induces the red tessellation is given by eq .( [ s ] ) , when we use vectors in uniform superposition .now , we can cast the evolution operator in the form demanded by the staggered model with hamiltonians . since , the shift operator can be put in the form e with and modulo a global phase .if the coin is and is an orthogonal reflection then any flip - flop coined qw on is equivalent to a sqw on with evolution operator operator induces the blue tessellation depicted in fig . [ fig : sqwwh_graph6b ] .it is known that grover s algorithm can be described as a coined qw on the complete graph using a flip - flop shift operator and the grover coin .therefore , grover s algorithm can also be reproduced by the sqw model .extensions of grover s algorithm analyzed by long _ et al . _ and hyer use operator where is the unit uniform superposition of the computational basis and is an angle , in place of the usual grover operator .this kind of extension can be reproduced by sqw model with hamiltonians because when is given by eq .( [ h_0 ] ) can be written as modulo a global phase .we can choose values for and that reproduce eq .( [ new_form ] ) .we have introduced an extension of the standard staggered qw model by using orthogonal reflections as hamiltonians .orthogonal reflections are local unitary operators in the sense that they respect the connections represented by the edges of a graph . besides , orthogonal reflections are hermitian by definition .this means that if is an orthogonal reflection of a graph , then is a local unitary operator associated with . in order to define a nontrivial evolution operator ,we need to employ a second orthogonal reflection of .the generic form of the evolution operator of the sqw with hamiltonians for 2-tessellable graphs is , where and are angles .this form is fitted for physical implementations in many physical systems , such as , cold atoms trapped in optical lattices and arrays of electromechanical resonators .we have obtained the wave function of sqws with hamiltonians on the line and analyzed the standard deviation of the probability distribution . for a localized initial condition at the origin ,the maximum spread of the probability distribution for an evolution operator of the form is obtained when .we have also characterized the class of coined qws that are included in the sqw model with hamiltonians and we have described how to convert those coined qws on a graph into their equivalent formulation in terms of sqws on an extended graph obtained from by replacing degree- vertices into -cliques . as a last remark , we call attention that recently it was shown numerically that searching one marked vertex using the original sqw on the two - dimensional square lattice has no speedup compared to classical search using random walks . on the other hand ,the sqw with hamiltonians with is able to find the marked vertex after steps at least as fast as the equivalent algorithm using coined quantum walks .rp acknowledges financial support from faperj ( grant n. e-26/102.350/2013 ) and cnpq ( grants n. 
303406/2015 - 1 , 474143/2013 - 9 ) and also acknowledges useful discussions with pascal philipp and stefan boettcher .jkm acknowledges financial support from cnpq grant pdj 165941/2014 - 6 .mco acknowledges support by fapesp through the research center in optics and photonics ( cepof ) and by cnpq .dorit aharonov , andris ambainis , julia kempe , and umesh vazirani .quantum walks on graphs . in _ proceedings of the thirty - third annual acm symposium on theory of computing _ , stoc 01 , pages 5059 , new york , ny , usa , 2001 .acm .j. lozada - vera , a. carrillo , o. p. des neto , j. khatibi moqadam , m. d. lahaye , and m. c. de oliveira .quantum simulation of the anderson hamiltonian with an array of coupled nanoresonators : delocalization and thermalization effects ., 3(9):116 , 2016 .
quantum walks are recognizably useful for the development of new quantum algorithms , as well as for the investigation of several physical phenomena in quantum systems . actual implementations of quantum walks face technological difficulties similar to the ones for quantum computers , though . therefore , there is a strong motivation to develop new quantum - walk models which might be easier to implement . in this work , we present an extension of the staggered quantum walk model that is fitted for physical implementations in terms of time - independent hamiltonians . we demonstrate that this class of quantum walk includes the entire class of staggered quantum walk model , szegedy s model , and an important subset of the coined model .
millisecond pulsars ( msps ) have for some time been known to exhibit exceptional rotational stability , with decade long observations providing timing measurements with accuracies similar to atomic clocks ( e.g. ) .such stability lends itself well to the pursuit of a wide range of scientific goals , e.g. observations of the pulsar psr b1913 + 16 showed a loss of energy at a rate consistent with that predicted for gravitational waves , whilst the double pulsar system psr j0737 - 3039a / b has provided precise measurements of several ` post keplerian ' parameters allowing for additional stringent tests of general relativity . for a detailed review of pulsar timingrefer to e.g. . in brief , the arrival times of pulses ( toas ) for a particular pulsar will be recorded by an observatory in a series of discrete observations over a period of time .these arrival times must all be transformed into a common frame of reference , the solar system barycenter , in order to correct for the motion of the earth . a model for the pulsarcan then be fitted to the toas ; this characterises the properties of the pulsar s orbital motion , as well as its timing properties such as its orbital frequency and spin down .this is most commonly carried out using the tempo2 pulsar - timing packages , or more recently , the bayesian pulsar timing package temponest . when performing this fitting process , both tempo2 and temponest assume purely gaussian statistics in the properties of the uncorrelated noise . in realistic datasets , however , this assumption is not necessarily correct .if the underlying probability density function ( pdf ) for the noise is not gaussian , for example , if there is an excess of outliers relative to a gaussian distribution , modifiers to the toa error bars that scale their size are used to find the best approximation to a gaussian distribution .this can be performed using a single modifier for a given receiving system determined across an entire dataset , or as in the ` fixdata ' plugin for tempo2 , where the modifier is determined separately for a series of short time lags .while the latter of these two approaches can better account for a non - gaussian distribution in the uncorrelated noise , it does so at the expense of a potentially large number of additional free parameters , and ultimately does not address the core issue , that the underlying distribution is not gaussian .both approaches then have the direct consequence of decreasing the precision with which one can estimate the timing parameters , and any other signals of interest , such as intrinsic red spin noise due to rotational irregularities in the neutron star or correlated noise due to a stochastic gravitational wave background ( gwb ) generated by , for example , coalescing black holes ( e.g. ) .indeed , currently all published limits on the signals induced by a gwb have been obtained under the assumption that the statistics of the toa errors are gaussian ( see e.g. ) . in this paperwe introduce a method of performing a robust bayesian analysis of non - gaussianity present in pulsar timing data , simultaneously with the pulsar timing model , and additional stochastic parameters such as those describing the red noise , and dispersion measure variations .the parameters used to define the presence of non - gaussianity are zero for gaussian processes , giving a simple method of defining the strength of non - gaussian behaviour . 
in section [ section : bayes ]we will describe the basic principles of our bayesian approach to data analysis , giving a brief overview of how it may be used to perform model selection , and introduce multinest . in sections [ section :nongausslike ] and [ section : toy ] we introduce the non - gaussian likelihood we will use in our pulsar timing analysis , and apply it to a simple toy problem . in section [ section : pulsarnongaussian ] we then extend this likelihood to the subject of pulsar timing , and apply it to both simulated and real data in sections [ section : pulsarsims ] and [ section : realdata ] respectively , before finally offering some concluding remarks in section [ section : conclusion ] .this research is the result of the common effort to directly detect gravitational waves using pulsar timing , known as the european pulsar timing array ( epta ) .given a set of data , bayesian inference provides a consistent approach to the estimation of a set of parameters in a model or hypothesis . in particular ,bayes theorem states that : where is the posterior probability distribution of the parameters , is the likelihood , is the prior probability distribution , and is the bayesian evidence .since the evidence is independent of the parameters it is typically ignored when one is only interested in performing parameter estimation . in this case inferencesare obtained by taking samples from the ( unnormalised ) posterior using , for example , standard markov chain monte carlo ( mcmc ) sampling methods . for model selection ,however , the evidence is key , and is defined simply as the factor required to normalise the posterior over : where is the dimensionality of the parameter space . as the evidence is just the average of the likelihood over the prior, it will be larger for a simpler model with a compact parameter space if more of that parameter space is likely .more complex models where large areas of parameter space have low likelihood values will have a smaller evidence even if the likelihood function is very highly peaked , unless they are significantly better at explaining the data .thus , the evidence automatically implements occam s razor .the question of model selection between two models and can be answered via the model selection ratio , commonly referred to as the ` bayes factor ' : where is the a priori probability ratio for the two models , which in this work we will set to unity but occasionally requires further consideration .the bayes factor then allows us to obtain the probability of one model compared the other simply as : in practice when performing bayesian analysis we do not work with the likelihood , but the log likelihood . 
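to make the model - selection step above concrete , the following minimal sketch converts a difference in log evidence into a bayes factor and a model probability , taking the a priori probability ratio to be unity as in the text ; the numerical example is purely illustrative .

```python
import numpy as np

def model_probability(log_evidence_1, log_evidence_2, prior_ratio=1.0):
    """probability of model 1 versus model 2 obtained from the bayes
    factor R = (Z_1 / Z_2) * prior_ratio, i.e. P(H_1 | D) = R / (1 + R)."""
    log_R = log_evidence_1 - log_evidence_2 + np.log(prior_ratio)
    R = np.exp(log_R)
    return R / (1.0 + R)

# a log-evidence difference of 3 corresponds to a bayes factor of
# exp(3) ~ 20 and hence a model probability of roughly 0.95
print(model_probability(3.0, 0.0))
```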
in this casethe quantity of interest is the log bayes factor , which is simply the difference in the log evidence for the two models .for example , a difference in the log evidence of 3 for two competing models gives a bayes factor of , which in turn gives a probability of .we use the difference in the log evidence in sections [ section : pulsarsims ] and [ section : realdata ] to perform model selection between our gaussian and non - gaussian models .while many techniques exist for calculating the evidence , such as thermodynamic integration , it remains a challenging task both numerically and computationally , with evidence evaluation at least an order - of - magnitude more costly than parameter estimation .nested sampling is an approach designed to make the calculation of the evidence more efficient , and also produces posterior inferences as a by - product .the multinest algorithm ( ) builds upon this nested sampling framework , and provides an efficient means of sampling from posteriors that may contain multiple modes and/or large ( curving ) degeneracies , and also calculates the evidence .since its release multinest has been used successfully in a wide range of astrophysical problems , including inferring the properties of a potential stochastic gravitational wave background in pulsar timing array data , and is also used in the bayesian pulsar timing package temponest .this technique has greatly reduced the computational cost of bayesian parameter estimation and model selection , and is employed in this paper .in this section we will outline the method adopted for including non - gaussian behaviour in our analysis .we use the approach developed in , which is based on the energy eigenmode wavefunctions of a simple harmonic oscillator .we will describe this in brief below in order to aid future discussion .we begin by considering our data , the vector of length , as the sum of some signal and noise such that : we can then construct the likelihood that the residuals after subtracting our model signal from the data follows an uncorrelated gaussian distribution of width as : ,\ ] ] with the diagonal noise covariance matrix for the residuals , such that , and the determinant of .we now extend this to the general case in order to allow for non - gaussian distributions by modelling our pdf as the sum of a set of gaussians , modified by hermite polynomials ( see e.g. for previous uses of hermite polynomials in describing departures from gaussianity ) , defined as : therefore , for a general random variable the pdf for fluctuations in can be written : \left|\sum_{n=0}^{\infty}\alpha_nc_nh_n\left(\frac{x}{\sqrt{2}\sigma}\right)\right|^2\ ] ] with free parameters that describe the relative contributions of each term to the sum , and is a normalization factor. equation [ eq : prx ] forms a complete set of pdfs , normalised such that : \left(\frac{x}{\sqrt{2}\sigma}\right)c_mh_m\left(\frac{x}{\sqrt{2}\sigma}\right ) = \delta_{mn},\ ] ] with the kronecker delta , where the ground state , , reproduces a standard gaussian pdf , and any non - gaussianity in the distribution of will be reflected in non - zero values for the coefficients associated with higher order states .the only constraint we must place on the values of the amplitudes is : with the maximum number of coefficients to be included in the model for the pdf .this is performed most simply by setting : we can therefore rewrite eq . 
[eq : gausslike ] in this more general form as : \nonumber \\ & \times&\prod_{i=1}^{n_d}\left|\sum_{n=0}^{n_{\mathrm{max}}}\alpha_nc_nh_n\left(\frac{d_i - s_i}{\sqrt{2}\sigma}\right)\right|^2.\end{aligned}\ ] ] the advantage of this method is that one may use a finite set of non - zero to model the non - gaussianity , without mathematical inconsistency .any truncation of the series still yields a proper distribution , in contrast to the more commonly used edgeworth expansion ( e.g. ) .before applying the formalism described in section [ section : nongausslike ] to the practice of pulsar timing , we first demonstrate its use in a toy problem .here our data vector contains 10000 points drawn from a non - gaussian distribution obtained using eq .[ eq : prx ] , with parameters listed in table [ table : toyparams ] ..parameters used to generate non - gaussian noise in a simple toy problem . [ cols="^,^,^ " , ] [ table : logevidence ] table [ table : logevidence ] lists the log evidence values for different sets of non - gaussian coefficients , normalised such that the log evidence for no additional coefficients ( i.e. assuming gaussian statistics ) is 0 .we see that there is a significant increase in the log evidence ( 39 ) when including even just two coefficients , indicating definitive support for their inclusion in the model . as the number increases the rise in evidence increases , reaching a maximum with 4 included coefficients .given the timing model , red noise and dispersion measure variation solutions that were subtracted from the data were obtained from a gaussian analysis , we will however still include coefficients up to and including in the full analysis .given the large dimensionality of the problem this analysis can not be carried out using multinest . as such we make use of the guided hamiltonian sampler used previously in pulsar timing analysis in .this sampler makes use of both gradient information in the likelihood , and also the hessian in order to efficiently sample from large parameter spaces .table [ table:0437 ] lists the timing model parameter estimates and their nominal standard deviations for both the gaussian and non - gaussian analysis . in all caseswe find the parameter estimates and their uncertainties to be consistent between both methods . in fig .[ figure:0437noise ] ( top ) we show the one and two - dimensional marginalised posterior distributions for the red noise and dispersion measure variation power law amplitudes and spectral indicies for the non - gaussian ( left ) and gaussian ( right ) analysis .both are also extremely consistent with one another , however when overlaying the two sets of 1-dimensional posterior distributions for each of the 4 parameters separately ( bottom 4 panels ) some differences become apparent between the non - gaussian ( blue dashed lines ) and gaussian ( red solid lines ) analysis . in particularthe dispersion measure variation power law parameter estimates show a slight shift towards higher amplitudes and shallower spectral indices in the non - gaussian case . despite these similarities in the timing and stochastic parameter estimates between the gaussian and non - gaussian analysis , fig .[ figure:0437pdf ] indicates a definitive detection of non - gaussianity in the dataset , in agreement with the difference in the log evidence for the noise only analysis . 
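the non - gaussian probability density of eq . [ eq : prx ] can be evaluated directly from the hermite polynomials ; the sketch below assumes the harmonic - oscillator normalisation for the coefficients c_n ( the exact constants are garbled in the text above ) , so that alpha_0 = 1 with all higher coefficients zero recovers a gaussian of width sigma , and the alpha_n are expected to satisfy the unit - sum - of - squares constraint .

```python
import numpy as np
from scipy.special import eval_hermite, factorial

def non_gaussian_pdf(x, sigma, alpha):
    """pdf of eq. (prx): a gaussian envelope times the squared sum of
    hermite polynomials weighted by alpha_n.  c_n is taken to be the
    harmonic-oscillator normalisation (an assumption); the alpha_n
    should satisfy sum(alpha_n**2) = 1."""
    x = np.asarray(x, dtype=float)
    y = x / (np.sqrt(2.0) * sigma)
    total = np.zeros_like(y)
    for n, a_n in enumerate(alpha):
        c_n = 1.0 / np.sqrt(2.0 ** n * factorial(n) * np.sqrt(2.0 * np.pi) * sigma)
        total += a_n * c_n * eval_hermite(n, y)
    return np.exp(-x ** 2 / (2.0 * sigma ** 2)) * total ** 2

# the ground state alone reproduces the standard gaussian pdf
x = np.linspace(-5.0, 5.0, 11)
print(np.allclose(non_gaussian_pdf(x, 1.0, [1.0]),
                  np.exp(-x ** 2 / 2.0) / np.sqrt(2.0 * np.pi)))
```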
in the top plot we show the one- and two - dimensional marginalised posterior distributions for the 5 non - gaussian coefficients fit in the analysis of j0437 . vertical lines are included at 0 where visible in the plots ; however , except for , all the coefficients are inconsistent with this value . in the bottom plot we then show the set of equally weighted pdfs obtained from the non - gaussian analysis ( black lines ) setting . in addition we overplot the mean of the distribution ( red line ) and a unit gaussian ( blue line ) , all of which have been normalised to have a sum of 1 . the difference between the gaussian and non - gaussian pdfs is clear , with a larger probability for both small ( ) and larger ( ) deviations than given by the gaussian pdf . that such a significant detection of non - gaussianity does not lead to larger changes in the parameter estimates can potentially be attributed to a frequency dependence on the significance of the parameters . in , the 10 cm j0437 data was found to be describable through gaussian statistics alone . this would suggest that the non - gaussianity we detect exists primarily at low frequencies . in figure [ figure:0437normres ] we show the normalised residuals from fig . [ figure:0437noise ] separated into their 10 cm , 20 cm and 50 cm components , along with histograms for each wavelength . here the increase in non - gaussian behaviour can clearly be seen as the wavelength increases . given that the lowest frequencies show the greatest degree of non - gaussianity , it is less surprising that there is little impact on the timing or red spin noise parameters , as the low frequency data contributes the least to these parts of the model . the low frequencies do , however , contribute greatly to the constraints on dispersion measure variations , and it is here that we see the greatest difference between the gaussian and non - gaussian models .

table [ table:0437 ] : timing model parameter estimates and their nominal standard deviations for the non - gaussian and gaussian analyses of j0437 .

pulsar name : j0437 | mjd range : 50191.0 -- 55619.2 | data span ( yr ) : 14.86 | number of toas : 5052

model parameter | non - gaussian | gaussian
right ascension ( rad ) | 1.20979650940(10) | 1.20979650943(11)
declination ( rad ) | .82471224153(8) | .82471224154(8)
pulse frequency ( s^-1 ) | 173.6879458121850(3) | 173.6879458121849(4)
first derivative of pulse frequency ( s^-2 ) | .728365(4) | .728365(4)
dispersion measure , dm ( ) | 2.64462(11) | 2.64461(11)
first derivative of dispersion measure ( ) | (6) | (7)
dm2 ( pc yr ) | (2) | (2)
proper motion in right ascension ( mas yr^-1 ) | 121.439(3) | 121.441(3)
proper motion in declination ( mas yr^-1 ) | .474(3) | .474(3)
parallax ( mas ) | 6.4(2) | 6.3(2)
orbital period ( d ) | 5.7410462(3) | 5.7410461(3)
epoch of periastron ( mjd ) | 54530.1722(3) | 54530.1721(3)
projected semi - major axis of orbit ( lt - s ) | 3.36671463(8) | 3.36671464(8)
longitude of periastron ( deg ) | 1.35(2) | 1.36(2)
orbital eccentricity | 1.91800(14) | 1.91796(15)
first derivative of orbital period | 3.724(6) | 3.724(6)
first derivative of ( ) | 1(2) | 1(2)
periastron advance ( deg / yr ) | 0.0150(12) | 0.0150(13)
companion mass ( ) | 0.223(14) | 0.223(15)
longitude of ascending node ( degrees ) | 208.0(12) | 208.3(13)
orbital inclination angle ( degrees ) | 137.1(8) | 137.3(8)
[ figures : one- and two - dimensional marginalised posterior distributions of the red noise and dispersion measure variation amplitudes and spectral indices for the non - gaussian and gaussian analyses ( fig . [ figure:0437noise ] ) , and the normalised residuals with histograms for the 10 cm , 20 cm and 50 cm components ( fig . [ figure:0437normres ] ) ]

in this paper we have introduced a method of performing a robust bayesian analysis of non - gaussianity present in the residuals in pulsar timing analysis , simultaneously with the pulsar timing model and additional stochastic parameters such as those describing the red noise and dispersion measure variations present in the data . deviations from gaussianity are described using a set of parameters that act to modify the probability density of the noise , such that describes gaussian noise , and any non - zero values provide support for non - gaussian behaviour . the advantage of this method is that one may use a finite set of non - zero to model the non - gaussianity without mathematical inconsistency . any truncation of the series still yields a proper distribution , in contrast to the more commonly used edgeworth expansion ( e.g. ) . we applied this method to two simulated datasets . in simulation one the noise was drawn from a non - gaussian distribution , and in simulation two it was purely gaussian . in simulation one , the effect of the non - gaussianity was to introduce a higher proportion of outliers relative to a gaussian distribution . this resulted in an overestimation of the toa uncertainties when assuming a gaussian likelihood , and decreased the precision with which the timing model parameters could be extracted compared to an analysis that correctly incorporated the non - gaussian behaviour of the noise . in the second case we showed that the parameter estimates of the timing model parameters of interest were consistent whether or not the parameters were included , as is to be expected when the noise is gaussian . we then applied this method to the publicly available parkes pulsar timing array ( ppta ) data release 1 dataset for the binary pulsar j0437 . we detect a significant non - gaussian component in the non - thermal part of the uncorrelated noise ; however , as the non - gaussianity is most dominant in the lowest frequency data , the impact on the timing precision for this pulsar is minimal , with only the parameter estimates of the power law dispersion measure variations being visibly changed between the gaussian and non - gaussian analyses .

janssen g. h. , stappers b. w. , kramer m. , purver m. , jessner a. , cognard i. , 2008 , in bassa c. , wang z. , cumming a. , kaspi v. m. , eds , aip conf . proc . 983 , 40 years of pulsars : millisecond pulsars , magnetars and more . am . inst . phys . , new york , p. 633

skilling j. , 2004 , in fischer r. , preuss r. , von toussaint u. , eds , aip conf . proc . 735 , bayesian inference and maximum entropy methods in science and engineering . am . inst . phys . , new york , p. 395
we introduce a method for performing a robust bayesian analysis of non - gaussianity present in pulsar timing data , simultaneously with the pulsar timing model , and additional stochastic parameters such as those describing red spin noise and dispersion measure variations . the parameters used to define the presence of non - gaussianity are zero for gaussian processes , giving a simple method of defining the strength of non - gaussian behaviour . we use simulations to show that assuming gaussian statistics when the noise in the data is drawn from a non - gaussian distribution can significantly increase the uncertainties associated with the pulsar timing model parameters . we then apply the method to the publicly available 15 year parkes pulsar timing array data release 1 dataset for the binary pulsar j0437 . in this analysis we present a significant detection of non - gaussianity in the uncorrelated non - thermal noise , but we find that it does not yet impact the timing model or stochastic parameter estimates significantly compared to analysis performed assuming gaussian statistics . the methods presented are , however , shown to be of immediate practical use for current european pulsar timing array ( epta ) and international pulsar timing array ( ipta ) datasets . [ firstpage ] methods : data analysis , pulsars : general , pulsars : individual
a frequently used method for edge - preserving image denoising is the variational approach which minimizes the rudin - osher - fatemi ( rof ) functional . in a discrete ( penalized )form the rof functional can be written as where is the given corrupted image and denotes the discrete gradient operator which contains usually first order forward differences in vertical and horizontal directions .the regularizing term can be considered as discrete version of the total variation ( tv ) functional .since the gradient does not penalize constant areas the minimizer of the rof functional tends to have such regions , an effect known as staircasing .an approach to avoid this effect consists in the employment of higher order differences / derivatives .since the pioneering work which couples the tv term with higher order terms by infimal convolution various techniques with higher order differences / derivatives were proposed in the literature , among them . in various applications in image processing and computer vision the functions of interesttake values on the circle or another manifold .processing manifold - valued data has gained a lot of interest in recent years .examples are wavelet - type multiscale transforms for manifold data and manifold - valued partial differential equations .finally we like to mention statistical issues on riemannian manifolds and in particular the statistics of circular data .the tv notation for functions with values on a manifold has been studied in using the theory of cartesian currents .these papers were an extension of the previous work were the authors focus on -valued functions and show in particular the existence of minimizers of certain energies in the space of functions with bounded total cyclic variation .the first work which applies a cyclic tv approach among other models for imaging tasks was recently published by cremers and strekalovskiy in .the authors unwrapped the function values to the real axis and proposed an algorithmic solution to account for the periodicity .an algorithm which solves tv regularized minimization problems on riemannian manifolds was proposed by lellmann et al . inthey reformulate the problem as a multilabel optimization problem with an infinite number of labels and approximate the resulting hard optimization problem using convex relaxation techniques .the algorithm was applied for chromaticity - brightness denoising , denoising of rotation data and processing of normal fields for visualization .another approach to tv minimization for manifold - valued data via cyclic and parallel proximal point algorithms was proposed by one of the authors and his colleagues in .it does not require any labeling or relaxation techniques .the authors apply their algorithm in particular for diffusion tensor imaging and interferometric sar imaging . for cartan - hadamard manifolds convergence of the algorithmwas shown based on a recent result of bak .unfortunately , one of the simplest manifolds that is not of cartan - hadamard type is the circle . 
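for reference , the discrete ( penalized ) rof functional quoted at the beginning of this section can be written down in a few lines ; the sketch below assumes the usual quadratic data term with weight one half and a regularization weight lam , uses anisotropic first order forward differences in the vertical and horizontal directions , and only evaluates the objective — it is not a minimisation routine .

```python
import numpy as np

def rof_objective(u, f, lam):
    """discrete rof objective 0.5*||u - f||_2^2 + lam*||grad u||_1 with
    first order forward differences (no difference is taken across the
    image boundary); u and f are 2d arrays."""
    dx = np.diff(u, axis=1)   # horizontal forward differences
    dy = np.diff(u, axis=0)   # vertical forward differences
    data_term = 0.5 * np.sum((u - f) ** 2)
    tv_term = np.abs(dx).sum() + np.abs(dy).sum()
    return data_term + lam * tv_term
```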
in this paperwe deal with the incorporation of higher order differences into the energy functionals to improve denoising results for -valued data .note that the ( second - order ) total generalized variation was generalized for tensor fields in .however , to the best of our knowledge this is the first paper which defines second order differences of cyclic data and uses them in regularization terms of energy functionals for image restoration .we focus on a discrete setting .first we provide a meaningful definition of higher order differences for cyclic data which we call _ absolute cyclic differences_. in particular our absolute cyclic first order differences resemble the geodesic distance ( arc length distance ) on the circle .as the geodesics the absolute cyclic second order differences take only values in ] denotes the component - by - component application of if , and {2\pi } = \pm \pi ] . in this casewe get this proves the first assertion .for we can again assume that .exploiting that we have to consider the following three cases : if , then is the identity matrix , and if , then and . bywe have if , then and . herewe obtain this finishes the proof .for a proper , closed , convex function ] has components for , then we get using , that by lemma [ lem : proxy_r ] the minimizers over of are given by where by corollary [ diff_data ] the minimum of is determined by .note that for and for .we distinguish two cases . 1 .if , then attains its smallest value exactly for and by we obtain with as in .corollary [ diff_data ] implies that ^d } e_{k,\sigma } ( x)\qquad \forall \sigma \in { \mathbb{z}}\backslash \{r - \langle k , w \rangle \}.\ ] ] finally , there exists exactly one such that and by we conclude that is the unique minimizer of over .2 . if , , then attains its smallest value exactly for and by corollary [ diff_data ] the minimum of the corresponding functions is smaller than those of the other functions in .we obtain and as in part 1 of the proof we conclude that are the minimizers of over .this finishes the proof .next we focus on .[ thm : p2 ] let in , and , where is adapted to the respective length of . 1 .if , then the unique minimizer of is given by 2 .if , then has the two minimizers the proof follows the lines of the proof of theorem [ lem : proxy_b1 ] using lemma [ lem : e_quad ] . finally , we need the proximal mapping for given .the proximal mapping of the ( squared ) cyclic distance function was also computed ( for more general manifolds ) in . herewe give an explicit expression for spherical data .[ theo : prox_quad ] for let then the minimizer(s ) of are given by where is defined by and the minimum is obviously , the minimization of can be done component wise so that we can restrict our attention to . 1. first we look at the minimization problem over which reads and has the following minimizer and minimum : 2 . for the original problem consider the related energy functionals on , namely by part 1 of the proof these functions have the minimizers and we distinguish three cases : 1 .if , then the minimum in occurs exactly for and it holds for we see that and 2 . if , then has its minimum exactly for and which is in for or and 3 . in the case the minimum in is attained for so that we have both solutions from i ) and ii ) .this completes the proof .the proximal point algorithm ( ppa ) on the euclidean space goes back to . 
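the absolute cyclic differences introduced above can be computed by wrapping ordinary differences back to the representation system ( -pi , pi ] ; the sketch below follows that construction ( the choice of wrapping map is our reading of the garbled formulas and should be taken as an assumption ) , with the first order difference equal to the geodesic distance and the second order difference taking values in [ 0 , pi ] .

```python
import numpy as np

def wrap(v):
    """map angles to the representation system (-pi, pi]."""
    return -np.mod(-np.asarray(v, dtype=float) + np.pi, 2.0 * np.pi) + np.pi

def abs_cyclic_d1(x):
    """absolute cyclic first order differences of an s^1-valued signal:
    the geodesic (arc length) distance between neighbouring samples."""
    return np.abs(wrap(np.diff(x)))

def abs_cyclic_d2(x):
    """absolute cyclic second order differences, obtained by wrapping the
    ordinary second difference; the values lie in [0, pi]."""
    return np.abs(wrap(x[:-2] - 2.0 * x[1:-1] + x[2:]))

# a jump across the +-pi cut is not seen by the cyclic differences
x = np.array([3.0, -3.1, -2.9])
print(abs_cyclic_d1(x), abs_cyclic_d2(x))
```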
recentlythis algorithm was extended to riemannian manifolds of non - positive sectional curvature and also to hadamard spaces .a cyclic version of the proximal point algorithm ( cppa ) on the euclidean space was given in , see also the survey .a cppa for hadamard spaces can be found in . in the cppathe original function is split into a sum and , iteratively , the proximal mappings of the functions are applied in a cyclic way .the great advantage of this method is that often the proximal mappings of the summands are much easier to compute or can even be given in a closed form . in the followingwe develop a cppa for functionals of -valued signals and images containing absolute cyclic first and second order differences .first we have a look at the one - dimensional case , i.e. , at signals . for given -valued signals represented by , , and regularization parameters , , we are interested in where to apply a cppa we set , split into an even and an odd part and into three sums then the objective function decomposes as we compute in the -th cycle of the cppa the signal the different proximal values can be obtained as follows : 1 . by proposition [ theo : prox_quad ] with playing the role of we get 2 . for , we obtain the vectors by applying theorem [ lem : proxy_b1 ] with independently for the pairs , .3 . for , we compute by applying theorem [ lem : proxy_b1 ] with independently for the vectors , .the parameter sequence of the algorithm should fulfill this property is also essential for proving the convergence of the cppa for real - valued data and data on a hadamard manifold , see . in our numerical experimentswe choose with some initial parameter which clearly fulfill .the whole procedure is summarized in algorithm [ alg : cppa ] .* input * fulfilling and , or , , data or + initialize , initialize the cycle length as ( 1d ) or ( 2d ) a convergence criterion are reached next we consider two - dimensional data , i.e. , images of the form , . our functional includes __ h__orizontal and _ _ v__ertical cyclic first and second order differences and and _ _ d__iagonal ( mixed ) differences .for non - negative regularization parameters , and not all equal to zero we are looking for where here the objective function splits as with the following summands : again we set and compute the proximal value of by proposition [ theo : prox_quad ] .each of the sums in and can be split analogously as in the one - dimensional case , where we have to consider row and column vectors now .this results in functions whose proximal values can be computed by theorem [ lem : proxy_b1 ] .finally , we split . into the four sums anddenote the inner sums by .clearly , the proximal values of the functions , can be computed separately for the vectors by theorem [ lem : proxy_b1 ] with . in summary, the computation can be done by algorithm [ alg : cppa ] .note that the presented approach immediately generalizes to arbitrary dimensions .since is not a hadamard space , the convergence analysis of the cppa in can not be applied .we show the convergence of the cppa for the 2d -valued function under certain conditions .the 1d setting in can then be considered as a special case . in the following ,let .our first condition is that the data is dense enough , this means that the distance between neighboring pixels is sufficiently small .similar conditions also appear in the convergence analysis of nonlinear subdivision schemes for manifold - valued data in . 
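as an illustration of how the cyclic splitting above translates into code , the following sketch runs the cppa for the first order ( cyclic tv ) part of the one - dimensional model only ; the closed forms used for the proximal mappings are the ones quoted in the text ( data term : move towards the data along the shorter arc ; pairwise difference term : move both endpoints towards each other by at most half their distance ) , and the second order terms are omitted for brevity , so this is a simplified sketch rather than the full algorithm .

```python
import numpy as np

def wrap(v):
    return -np.mod(-np.asarray(v, dtype=float) + np.pi, 2.0 * np.pi) + np.pi

def prox_data(x, f, lam):
    """proximal map of the quadratic data term on s^1: each x_i moves a
    fraction lam/(1+lam) of the way towards f_i along the shorter arc."""
    return wrap(x + lam / (1.0 + lam) * wrap(f - x))

def prox_pair(a, b, t):
    """proximal map of t*d(a, b) for one pair of neighbours: both points
    move towards each other by min(t, d(a, b)/2)."""
    d = wrap(b - a)
    s = np.sign(d) * np.minimum(t, np.abs(d) / 2.0)
    return wrap(a + s), wrap(b - s)

def cppa_tv1(f, alpha, n_iter=300, lam0=1.0):
    """one possible cyclic proximal point sweep for the first order model."""
    x = np.asarray(f, dtype=float).copy()
    for k in range(n_iter):
        lam = lam0 / (k + 1.0)          # sum lam_k diverges, sum lam_k^2 converges
        x = prox_data(x, f, lam)
        for start in (0, 1):            # even / odd splitting of the differences
            a, b = prox_pair(x[start:-1:2], x[start + 1::2], lam * alpha)
            x[start:-1:2], x[start + 1::2] = a, b
    return x
```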
in the context of nonlinear subdivision schemes , even more severe restrictions such as` almost equally spaced data ' are frequently required .this imposes additional conditions on the second order differences to make the data almost lie on a ` line ' .our analysis requires only bounds on the first , but not on the second order differences .our next requirement is that the regularization parameters in are sufficiently small . for large parameters any solution tends to become almost constant . in this case , if the data is for example equidistantly distributed on the circle , e.g. , in 1d , any shift is again a solution . in this situationthe model loses its interpretation which is an inherent problem due to the cyclic structure of the data . finally , the parameter sequence of the cppa has to fulfill with a small norm .the later can be achieved by rescaling .our convergence analysis is based on a convergence result in and an unwrapping procedure .we start by reformulating the convergence result for the cppa of real - valued data , which is a special case of and can also be derived from .[ thm : bacak ] let , where , , are proper , closed , convex functionals on .let have a global minimizer .assume that there exists such that the iterates of the cppa ( see algorithm [ alg : cppa ] ) satisfy for all .then the sequence converges to a minimizer of .moreover the iterates fulfill + 2 \lambda_k^2 l^2 c(c+1 ) \quad \mbox{for all } \ ; x \in \mathbb r^{n \times m}.\label{noch_1 } \end{aligned}\ ] ] the next lemma states a discrete analogue of a well - known result on unwrapping or lifting from algebraic topology .we supply a short proof since we did not found it in the literature .[ lem : discretecovering ] let with .for not antipodal to fix an such that . then there exists a unique such that for all the following relations are fulfilled : 1 . 2 . .we call the lifted or unwrapped image of ( w.r.t . a fixed ) .for , , it holds by assumption on that .hence we have , where with an abuse of notation stands for an arbitrary representative in of . then obviously , are the unique values satisfying i ) and ii ) . for consider by assumption on we see that so that .thus is the unique value with properties i ) and ii ) .proceeding this scheme successively , we obtain the whole unique image fulfilling i ) and ii ) . for define where to measure how ` near ' the images and are to each other .[ lem : convlem2 ] let with and be not antipodal to .fix with and let be the corresponding lifting of .let ] we have \in { \mathcal s}(f,\delta) ] denotes the point reached after time t on the unit speed geodesic starting at in direction of .recall that a function is convex on if for all and all ] holds true .let with and ] and its lifting w.r.t . , we have i.e. , the canonical projection commutes with the proximal mappings .the function is based on the distance to the data .since , we have for all .the components of the proximal mapping are given by proposition [ theo : prox_quad ] from which we conclude for the proximal mappings of , , are given via proximal mappings of the first and second order cyclic differences .we consider the first order difference . by the triangle inequality, we have as well as . by the explicit form of the proximal mapping in theorem [ lem : proxy_b1 ]we obtain for , .next we consider the horizontal and vertical second order differences and .we have that , as well as .hence all contributing values of lie on a quarter of the circle . 
applying the proximal mapping in theorem [ lem : proxy_b1 ] the resulting data lie on one half of the circle .an analogous statement holds true for the horizontal part .hence the proximal mappings of the ordinary second differences agree with the cyclic version ( under identification via ) .this implies for , .finally , we consider the mixed second order differences .as above , we have for neighboring data items that the distance is smaller than . for all four contributing values of have that the pairwise distance is smaller by .thus again they lie on a quarter of the circle .hence , the proximal mapping for the ordinary mixed second differences agree with the cyclic version ( under identification via ) .this implies for , .we note that lemma [ lem : convlem5 ] does not guarantee that remains in .therefore it does not allow for an iterated application . in the following main theoremwe combine the preceding lemmas to establish this property .[ thm : convergence ] let with .let fulfill property and for some , where and .further , assume that the parameters of the functional in and satisfy .then the sequence generated by the cppa in algorithm [ alg : cppa ] converges to a global minimizer of .let be the lifting of of with respect to a base point not antipodal to and fixed with .further , let denote the real analog of . by lemma [ lem : convlem2 ] we have and for such that is also fulfilled forthe real - valued setting. then we can apply remark [ lem : convlem1 ] and conclude that the minimizer of fulfills . bywe obtain by lemma [ lem : convlem4 ] the iterates of the real - valued cppa fulfill hence which means that all iterates stay within .next , we consider the sequence of the cppa for the -valued data .we use induction to verify . by definitionwe have .assume that .by bijectivity of the lifting , cf .lemma [ lem : convlem2 ] , and since , we conclude . by lemma [ lem : convlem5 ]we obtain by the same argument as above we have again .finally , we know by theorem [ thm : bacak ] that and by lemma [ lem : convlem3 ] that is a global minimizer of .this completes the proof .for the numerical computations of the following examples , the algorithms presented in section [ sec : cpp ] were implemented in matlab .the computations were performed on a macbook pro with an intel core i5 , 2.6ghz and 8 gb of ram using matlab 2013 , version 2013a ( 8.1.0.604 ) on mac os 10.9.2 ..48 ( dashed red ) and disturbed signal by wrapped gaussian noise ( solid black ) . reconstructed signals using only the regularizer ( ) , only the regularizer ( ) , and both of them ( , ) . while suffers from the staircasing effect , shows weak results at constant areas .the combination of both regularizers in yields the best image.,title="fig : " ] .48 ( dashed red ) and disturbed signal by wrapped gaussian noise ( solid black ) . reconstructed signals using only the regularizer ( ) , only the regularizer ( ) , and both of them ( , ) .while suffers from the staircasing effect , shows weak results at constant areas . the combination of both regularizers in yields the best image.,title="fig : " ] .48 ( dashed red ) and disturbed signal by wrapped gaussian noise ( solid black ) . reconstructed signals using only the regularizer ( ) , only the regularizer ( ) , and both of them ( , ) . while suffers from the staircasing effect , shows weak results at constant areas . the combination of both regularizers in yields the best image.,title="fig : " ].48 ( dashed red ) and disturbed signal by wrapped gaussian noise ( solid black ) . 
reconstructed signals using only the regularizer ( ) , only the regularizer ( ) , and both of them ( , ) .while suffers from the staircasing effect , shows weak results at constant areas . the combination of both regularizers in yields the best image.,title="fig : " ] the first example of an artificial one - dimensional signal demonstrates the effect of different models containing absolute cyclic first order differences , second order differences or both combined .the function \to [ -\pi,\pi) ] of is continuous and the change from to at is just due to the chosen representation system .similarly the two constant parts with the values and differ only by a jump size of .for the noise around these two areas , we have the same situation .we apply algorithm [ alg : cppa ] with different model parameters and to which yields the restored signals .the restoration error is measured by the ` cyclic ' mean squared error ( cmse ) with respect to the arc length distance we use and iterations as stopping criterion . for any choice of parameters the computation time is about seconds .the result in figure [ subfig : tv ] is obtained using only the regularization ( ) .the restoration of constant areas is favored by this regularization term , but linear , quadratic and exponential parts suffer from the well - known ` staircasing ' effect . utilizing only the regularization ( ) , cf .figure [ subfig : tv2 ] , the restored function becomes worse in flat areas , but shows a better quality in the linear parts . by combining the regularization terms ( , )as illustrated in figure [ subfig : tv12 ] both the linear and the constant parts are reconstructed quite well and the cmse is smaller than for the other choices of parameters .note that and were chosen in with respect to an optimal cmse .the complex - valued synthetic aperture radar ( sar ) data is obtained emitting specific radar signals at equidistant points and measuring the amplitude and phase of their reflections by the earth s surface .the amplitude provides information about the reflectivity of the surface .the phase encodes both the change of the elevation of the surface s elements within the measured area and their reflection properties and is therefore rather arbitrary . when taking two sar images of the target area at the same time but from different angles or locations .the phase difference of these images encodes the elevation , but it is restricted to one wavelength and also includes noise .the result is the so called interferometric synthetic aperture radar ( insar ) data and consists of the ` wrapped phase ' or the ` principal phase ' , a value in representing the surface elevation . for more detailssee , e.g. , . after a suitable preregistrationthe same approach can be applied to two images from the same area taken at different points in time to measure surface displacements , e.g. , before and after an earthquake or the movement of glaciers .the main challenge in order to unwrap the phase is the presence of noise .ideally , if the surface would be smooth enough and no noise would be present , unwrapping is uniquely determined , i.e. , differences between two pixels larger than are regarded as a wrapping result and hence become unwrapped .there are several algorithms to unwrap , even combining the denoising and the unwrapping , see for example . for denoising , deledalle et al . 
use both sar images and apply a non - local means algorithm jointly to their reflection , the interferometric phase and the coherence ..48 using only the regularizer ( ) , only the regularizer ( , ) , and both of them ( , , ) . ,title="fig : " ] .48 using only the regularizer ( ) , only the regularizer ( , ) , and both of them ( , , ) . , title="fig : " ] .48 using only the regularizer ( ) , only the regularizer ( , ) , and both of them ( , , ) ., title="fig : " ] .48 using only the regularizer ( ) , only the regularizer ( , ) , and both of them ( , , ) . , title="fig : " ] .48 using only the regularizer ( ) , only the regularizer ( , ) , and both of them ( , , ) . , title="fig : " ] .48 using only the regularizer ( ) , only the regularizer ( , ) , and both of them ( , , ) ., title="fig : " ] [ [ application - to - synthetic - data . ] ] application to synthetic data .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in order to get a better understanding in the two - dimensional case , let us first take a look at a synthetic surface given on ^ 2 $ ] with the profile shown in figure [ subfig:2dorig ] .this surface consists of two plates of height divided at the diagonal , a set of stairs in the upper left corner in direction , a linear increasing area connecting both plateaus having the shape of an ellipse with major axis at the angle , and a half ellipsoid forming a dent in the lower right of the image with circular diameter of size and depth .the initial data is given by sampling the described surface at sampling points .the usual insar measurement would ideally result in data as given in figure [ subfig:2dwrapped ] , i.e. , the data is wrapped with respect to . in the figurethe resulting ideal phase is represented using the hue component of the hsv color space .again , the data is perturbed by wrapped gaussian noise , standard deviation , see figure [ subfig:2dnoisy ] . for an application of algorithm [ alg : cppa ] to the minimization problem , we have to fix five parameters which were chosen on such that they minimize the cmse .using only the cyclic first order differences with , see figure [ subfig:2dtv1 ] , the reconstructed image reproduces the piecewise constant parts of the stairs in the upper left part and the background , but introduces a staircasing in both linear increasing areas inside the ellipse and in the half ellipsoid .this is highlighted in the three magnifications in figure [ subfig:2dtv1 ] .applying only cyclic second order differences with manages to reconstruct the linear increasing part and the circular structure of the ellipsoid , but compared to the first case it even increases the cmse due to the approximation of the stairs and the background , see especially the magnification of the stairs in figure [ subfig:2dtv2 ] . combining first and second order cyclic differences by setting and , ,these disadvantages can be reduced , cf .figure [ subfig:2dtv12 ] .note especially the three magnified regions and the cmse . .48 , and ).,title="fig : " ] .48 , and ).,title="fig : " ] [ [ application - to - real - world - data . ] ] application to real - world data .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + next we examine a real - world example .the data from is a set of insar data recorded in 1991 by the ers-1 satellite capturing topographical information from the mount vesuvius .the data is available online and a part of it was also used as an example in for tv based denoising of manifold - valued data . 
in figure[ fig : vesuvius ] the phase is represented by the hue component of the hsv color space .we apply algorithm [ alg : cppa ] to the image of size , cf .figure [ subfig : vesuv - orig ] , with and .this reduces the noise while keeping all significant plateaus , ascents and descents , cf .figure [ subfig : vesuv - denoised ] .the left zoom illustrates how the plateau in the bottom left of the data is smoothened but kept in its main elevation shown in blue . in the zoom on the rightall major parts except the noise are kept .we notice just a little smoothening due to the linearization introduced by . in the bottom left of this detailsome of the fringes are eliminated , and a small plateau is build instead , shown in cyan .the computation time for the whole image using iterations as stopping criterion was 86.6 sec and 11.1 sec for each of the details of size .in this paper we considered functionals having regularizers with second order absolute cyclic differences for -valued data .their definition required a proper notion of higher order differences of cyclic data generalizing the corresponding concept in euclidian spaces .we derived a cppa for the minimization of our functionals and gave the explicit expressions for the appearing proximal mappings .we proved convergence of the cppa under certain conditions . to the best of our knowledgethis is the first algorithm dealing with higher order tv - type minimization for -valued data .we demonstrated the denoising capabilities of our model on synthetic as well as on real - world data .future work includes the application of our higher order methods for cyclic data to other imaging tasks such as segmentation , inpainting or deblurring . for deblurring, the usually underlying linear convolution kernel has to be replaced by a nonlinear construction based on intrinsic ( also called karcher ) means .this leads to the task of solving the new associated inverse problem .further , we intend to investigate other couplings of first and second order derivatives similar to infimal convolutions or gtv for euclidean data .finally , we want to set up higher order tv - like methods for more general manifolds , e.g. higher dimensional spheres . here, we do not believe that it is possible to derive explicit expressions for the involved proximal mappings at least not for riemannian manifolds of nonzero sectional curvature .instead , we plan to resort to iterative techniques .d. p. bertsekas .incremental gradient , subgradient , and proximal methods for convex optimization : a survey .technical report lids - p-2848 , laboratory for information and decision systems , mit , cambridge , ma , 2010 .p. grohs , h. hardering , and o. sander .optimal a priori discretization error bounds for geodesic finite elements . technical report 2013 - 16 , seminar for applied mathematics , eth zrich , switzerland , 2013 .
in many image and signal processing applications , such as interferometric synthetic aperture radar ( sar ) or color image restoration in hsv or lch spaces , the data take values on the one - dimensional sphere . although the minimization of total variation ( tv ) regularized functionals is among the most popular methods for edge - preserving image restoration , such methods were only very recently applied to cyclic structures . however , as for euclidean data , tv regularized variational methods suffer from the so - called staircasing effect . this effect can be avoided by incorporating higher order derivatives into the functional . this is the first paper which uses higher order differences of cyclic data in regularization terms of energy functionals for image restoration . we introduce absolute higher order differences for -valued data in a sound way which is independent of the chosen representation system on the circle . our absolute cyclic first order difference is just the geodesic distance between points . similar to the geodesic distance , the absolute cyclic second order differences take values only in [ 0 , \pi ] . we extend the cyclic variational tv approach by our new cyclic second order differences . to minimize the corresponding functional we apply a cyclic proximal point method which was recently successfully proposed for hadamard manifolds . choosing appropriate cycles , this algorithm can be implemented in an efficient way . the main steps require the evaluation of proximal mappings of our cyclic differences , for which we provide analytical expressions . under certain conditions we prove the convergence of our algorithm . various numerical examples with artificial as well as real - world data demonstrate the advantageous performance of our algorithm .
due to the growing demand in data traffic , large improvements in the spectral efficiency are required .network densification has been identified as a possible way to achieve the desired spectral efficiency gains .this approach consists of deploying a large number of low powered base stations ( bss ) known as small cells . with the addition of small cell bss ,the overall system is known as a heterogeneous cellular network ( hetnet ) .co - channel deployment of small cell bss results in high intercell interference if their operation is not coordinated .interference coordination techniques such as intercell interference coordination ( icic ) has been extensively studied for multi - tier hetnet scenarios .icic relies on orthogonalizing time and frequency resources allocated to the macrocell and the small cell users .orthogonalization in time is achieved by switching off the relevant subframes belonging to the macrocell thereby reducing inter - tier interference to the small cell bss .orthogonalization in frequency can be achieved with fractional frequency reuse where the users in the inner part of the cells are scheduled on the same frequency resources in contrast to the users at the cell edge whom are scheduled on available orthogonal resources . distributed andjoint power control strategies for dominant interference supression in hetnets is discussed in .the performance of multiple antenna ( i.e. , mimo ) hetnets using the above mentioned techniques is analyzed in and .the effects of random orthogonal beamforming with maximum rate scheduling for mimo hetnets is studied in .the effects of imperfect channel state information ( csi ) with limited feedback mimo is investigated in for a two - tier hetnet .in addition to orthogonalization , interference coordination can also be achieved by means of transmit beamforming at the bss. however , there seems to be limited literature on transmit beamforming techniques to coordinate interference in hetnets .transmit beamforming techniques have been well explored in the multiuser ( mu ) mimo literature to mitigate or reduce the effects of intracell interference .performance superiority at low signal - to - noise - ratio ( snr ) of the leakage based beamforming technique compared to zero - forcing beamforming ( zfbf ) is shown in .with zfbf , complete mu intracell interference cancellation takes place if perfect csi is present at the bs and the number of transmit antennas exceeds the total number of receive antennas . however , leakage based beamforming focuses on maximizing the desired signal - to - leakage - noise - ratio ( slnr ) without any restrictions on the number of transmit antennas .the focus of this paper is on the performance gains of a two - tier hetnet with active interference coordination .intracell and intercell interference is coordinated by deploying leakage based beamformers at the macrocell and microcell bss .we summarize the contributions of this paper as follows : * we evaluate the performance gains of full coordination and macro - only coordination techniques relative to no coordination for two - tier hetnets .the impact of imperfect csi on the performance of these coordination techniques is also investigated .* we demonstrate the effect of network densification with varying degrees of bs coordination on the mean per - user signal - to - interference - plus - noise - ratio ( sinr ) and compare the simulated mean per - user sinr results with the analytical approximations over a wide range of snr . 
the mean per - user sinr decreases with an increasing microcell count .however , we show that coordination substantially reduces the rate of sinr decrease . *we show that in the absence of coordination , network densification does not provide any gain in the sum rate , whereas with coordination , a linear increase in the sum rate is observed . _notation : _ we use the symbols and to denote a matrix and a vector , respectively . , , , denote the conjugate transpose , the inverse and the trace of the matrix , respectively . and stand for the vector and scalar norms , respectively . ] . denotes the complex gaussian i.i.d .intercell interfering channel vector from bs to user located in cell .that is , . and are used to denote the desired and intercell interfering channels , respectively , regardless of the originating bs type ; i.e. , can represent the intercell interfering link from the macrocell bs for a particular user placed in a microcell.] from to simplify the notation .] is the additive white gaussian noise at receiver having an independent complex gaussian distribution with variance .finally , is defined as , refers to the total effective radiated transmit power ( erp ) from bs .naturally , the erp of the macrocell bs is higher than the microcell bss . is a reference distance of meter ( m ) for far field transmit antennas , is the distance to mobile user from the bs , is the pathloss exponent for urban macro ( uma ) or urban micro ( umi ) depending on the transmitting bs and is the correlated shadow fading value with a standard deviation , obtained from the gudmundson model with a decorrelation distance of m. snr with respect to bs and user is defined as , where is the receiver noise variance at user . from ( [ rs ] ) ,the sinr at user being served by bs can be expressed as the leakage based technique to generate beamforming vectors is as described in , where the main idea is to maximize the desired signal power relative to the noise and total interference powers caused to other users ( leakage power ) .the slnr for user served by the bs is defined as for single - stream transmission ( where each user is equipped with a single receive antenna ) , the leakage based beamforming vector desired for user being served by bs is given by the normalized version of the such that .the structure of ( [ slnr - bf ] ) remains unchanged regardless of the coordination strategy .however , the composition of depends on the coordination strategy considered , as described in section iii . for the simple case of no coordination \ ] ] is the concatenated channel of all users being served by bs apart from user . assuming the distribution of intracell and intercell interference terms in ( [ sinr ] ) is identical to the distribution of noise, the mean sum rate for cell can be expressed as =\mathbb{e}\bigg[\sum\limits_{k=1}^{k_{n}}\log_{2}(1+\gamma_{n , k})\bigg].\ ] ] the mean sum rate over cells can then be expressed as =\mathbb{e}\bigg[\sum\limits_{\substack{j=1}}^{n}\sum\limits_{k=1}^{k_{j}}\log_{2}(1+\gamma_{j , k})\bigg].\ ] ] from ( [ sinr ] ) , the mean per - user sinr can be expressed as ] is extremely cumbersome .instead , we consider an approximation motivated by the work in , which allows us to express the mean per - user sinr as [ msinr ] & [ _ n , k ] + & . the statistical expectations in both the numerator and the denominator of ( [ msinr ] )can be evaluated further .an approach to derive the closed - form approximation of ( [ msinr ] ) is presented in the appendix . 
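for the single - stream case above , the slnr - maximising beamformer has a simple closed form ( the dominant generalised eigenvector of a rank - one numerator ) ; the sketch below follows that standard construction , with the scaling of the noise - loading term left as an assumption since the exact expression is garbled in the text .

```python
import numpy as np

def slnr_beamformer(h, H_tilde, noise_var):
    """leakage-based beamforming vector for a single-antenna user:
    w is proportional to (noise_var*I + H_tilde H_tilde^H)^{-1} h,
    normalised to unit norm.  h is the (M_t,) desired channel and
    H_tilde the (M_t, L) concatenation of leakage channels, whose
    composition depends on the chosen coordination strategy."""
    M_t = h.shape[0]
    A = noise_var * np.eye(M_t) + H_tilde @ H_tilde.conj().T
    w = np.linalg.solve(A, h)
    return w / np.linalg.norm(w)

# toy example: 4 transmit antennas, 3 co-scheduled users to protect
rng = np.random.default_rng(0)
h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2.0)
H_tilde = (rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))) / np.sqrt(2.0)
w = slnr_beamformer(h, H_tilde, noise_var=0.1)
print(np.abs(h.conj() @ w) ** 2, np.linalg.norm(H_tilde.conj().T @ w) ** 2)
```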
on the other hand , ( [ msinr ] )can be rewritten in its equivalent trace form as [ msinr2 ] & [ _ n , k ] + & , where and . the expression in ( [ msinr2 ] )is used to approximate the mean per - cell sum rate over a wide range of snr and the mean per - user sinr over a large number of channel realizations as specified in section iv .it is idealistic to assume perfect csi at all times to generate the leakage based beamforming vectors .thus , we consider channel imperfections via channel estimation errors as mentioned in .the imperfect channel at bs of user after introducing channel estimation errors is given by here , controls the level of csi imperfection . results in perfect csi and models complete uncertainty . is a complex gaussian error vector with a statistically identical structure to .it is shown in that can be used to determine the impact of several factors on imperfect csi and can be a function of the length of the estimation pilot sequence , doppler frequency and snr .the concatenated channel and the leakage based beamforming vector for user in cell can be expressed as ( [ slnr - bf ] ) and ( [ concchannel ] ) when replacing with and with , respectively .the sinr with imperfect csi can be expressed as in ( [ sinr ] ) when replacing with , with and with , respectively . as the leakage based beamforming vectorsare designed with imperfect csi , the sinr expressed in ( [ sinr ] ) will contain channel estimation errors .[ tab : tab1 ] in this section , we describe the bs coordination strategies considered .* _ no coordination _ in this case , each bs coordinates the desired and intracell interfering links locally .that is , the bss only consider maximizing the slnr of users belonging to its own coverage area .the concatenated channel used to compute the leakage based beamforming vector weights for user in cell is given in ( [ concchannel ] ) .we treat this strategy as the baseline case . *_ full coordination _ in this case , we assume that each bs has knowledge of its own users desired channels and all intracell and intercell interfering channels. the channel information may be exchanged by exploiting the intercell orthogonal reference signals via the backhaul interface . with the use of the fully acquired csi for each desired and interfering link ,downlink leakage based beamformers can be designed to minimize the leakage power within the cell as well as to the other cells .the concatenated channel used to compute the leakage based beamforming vector weights for user in cell can be expressed as + [ fcconc ] _n , k=[_n,1, ,_n , k-1,_n , k+1, ,_n , k_n , + _ n,1, ,_n , n ] .+ here denotes the concatenated intercell interfering channels transmitted from bs to all users in cell , given by + [ iciconc ] _n , m=[_m,1,_m,2,_m,3 ,_m , k_m ] . * _ macro - only coordination _ in this case , we assume that the macrocell bs has knowledge of the intercell interfering channels from itself to all microcell users .the macrocell bs uses this information to coordinate transmission to its own users , as well as to the users located in each microcell , respectively .the concatenated channel used to compute the leakage based beamforming weight vectors for user in cell can be expressed as ( [ fcconc ] ) and ( [ concchannel ] ) if is the macrocell and microcell bs , respectively . 
* _ no inter - tier interference _ this is an ideal case , where we assume that no cross - tier interference exists .this means that users in a particular tier only experience intra - tier interference .coordination is however present within each cell regardless of the tier . in computing the leakage based beamforming weight vector for user in cell , the concatenated channel will be given by ( [ concchannel ] ) if bs is the macrocell bs .otherwise , for a microcell bs it is given as + [ niticonc ] _n , k=[_n,1, ,_n , k-1,_n , k+1, ,_n , k_n , + _ n,1, ,_n , n-1 ] , + where refer to microcell bs indices .table [ tab : tab1 ] summarizes the different bs coordination strategies with the respective structures for .we consider a two - tier hetnet system comprising of a single macrocell and two microcells ( unless otherwise stated ) .we carry out monte - carlo simulations to evaluate the system performance over channel realizations .the location of the macrocell bs was fixed at the origin of the circular coverage area with radius .the locations of the microcell bss inside the macrocell coverage area were uniformly generated subject to a spacing constraint .the minimum distance between two microcells was fixed to twice the radius of the microcell , i.e. , , such that there is no overlap between successive microcells . in table[ tab : tab2 ] , we specify the remainder of the simulation parameters and their corresponding values . .simulation parameters and values [ cols="^,^ " , ] [ tab : tab2 ] from ( [ sr1 ] ) vs. macrocell snr [ db ] for perfect and imperfect csi where .the squares denote the approximated mean per - cell sum rates computed with ( [ msinr2 ] ) . ] from ( [ sr1 ] ) vs. macrocell snr [ db ] for perfect and imperfect csi where .the squares denote the approximated mean per - cell sum rates computed with ( [ msinr2 ] ) . ] from ( [ sr1 ] ) vs. macrocell snr [ db ] for perfect and imperfect csi where .the squares denote the approximated mean per - cell sum rates computed with ( [ msinr2 ] ) . ] from ( [ sr1 ] ) vs. macrocell snr [ db ] for perfect and imperfect csi where .the squares denote the approximated mean per - cell sum rates computed with ( [ msinr2 ] ) . ]performance vs. number of microcells at snr=10 db with perfect csi for full , no and macro only coordination strategies .the approximated mean per - user sinrs are computed with ( [ msinr2 ] ) . ] performance from ( [ sr2 ] ) vs. number of microcells at snr=10 db with perfect csi for full , no and macro only coordination strategies . ] per macrocell from ( [ sr2 ] ) vs. number of microcells at snr=10 db with perfect csi for full , no and macro only coordination strategies . ][ nc ] shows the mean per - cell sum rate performance given by ( [ sr1 ] ) vs. macrocell snr with no coordination in the hetnet .we consider perfect and imperfect csi at the bss . in the high snr regime, inter - tier interference causes the mean sum rates to saturate for macrocell and microcells , respectively .the dominant factor contributing to the poor mean sum rate performance of microcell users is the large inter - tier interference from the macro bs resulting from its high transmit power .this behaviour is a result of the uncoordinated nature of the hetnet . with imperfect csi, we again consider the mean sum rate performance with , where further degradation in the macrocell and microcell rates can be observed . 
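the channel - estimation - error model and the per - user sinr used in these simulations can be sketched as follows ; the square - root weighting of the error term and the interface of the sinr helper are our assumptions ( the corresponding expressions above are garbled ) , so the snippet is illustrative rather than a reproduction of the simulation code .

```python
import numpy as np

rng = np.random.default_rng(1)

def imperfect_csi(h, beta):
    """gaussian channel-estimation-error model: beta = 0 returns the true
    channel (perfect csi), beta = 1 returns an independent error vector
    (complete uncertainty).  the square-root weighting is an assumption."""
    e = (rng.standard_normal(h.shape) + 1j * rng.standard_normal(h.shape)) / np.sqrt(2.0)
    return np.sqrt(1.0 - beta) * h + np.sqrt(beta) * e

def per_user_sinr(h_k, w_k, p_k, interferers, noise_var):
    """per-user sinr following the structure of eq. (sinr): `interferers`
    is a list of (power, channel, beamformer) triples collecting the
    intracell and intercell interfering links seen by the user."""
    desired = p_k * np.abs(h_k.conj() @ w_k) ** 2
    leakage = sum(p * np.abs(h.conj() @ w) ** 2 for p, h, w in interferers)
    return desired / (noise_var + leakage)
```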
the approximated mean per - cell sum rates based on ( [ msinr2 ] ) are shown to closely match the simulated responses .the variation between the simulated and analytical sinr responses can be justified from the fact that the approximation in ( [ msinr2 ] ) becomes less tight with increasing snr .the uncoordinated network performance can be compared to the case where the hetnet is fully coordinated .fig .[ fc ] demonstrates the mean per - cell sum rate performance given by ( [ sr1 ] ) vs. macrocell snr for perfect and imperfect csi .two major trends can be observed from the result .first is the near increase in the microcell rates over the entire snr range relative to the baseline case ( fig .[ nc ] ) .secondly , microcell to microcell interference has a marginal impact on the macrocell user rates due to their low transmit powers .this is demonstrated by comparing fig .[ fc ] to fig .[ nc ] .as the macrocell bs is the dominant source of interference to the microcell users , we consider the case where coordination takes place at the macrocell bs only .fig .[ moc ] demonstrates the mean per - cell sum rate performance given by ( [ sr1 ] ) vs. macrocell snr for the macro - only coordination strategy .both the macro and microcell rates are found to be approximately equivalent to the full coordination case , observed by comparing fig .[ fc ] and fig .[ moc ] .this suggests that if we can coordinate the transmission to minimize the most dominant source of interference , we are able to achieve near full coordination performance . moreover , this strategy significantly reduces the backhaul overheads by eliminating the need to equip the microcell bss with out - of - cell csi .fig .[ niti ] depicts the mean per - cell sum rate performance given by ( [ sr1 ] ) vs. macrocell snr of the no inter - tier interference coordination strategy .due to zero cross - tier interference , this strategy results in superior mean per - cell sum rate performance in comparison with the other coordination strategies .it is worth comparing fig .[ niti ] to fig .[ fc ] , and noting that the mean sum rate performance of full coordination approaches the performance of no inter - tier interference .this demonstrates the value of bs coordination in a hetnet . the effect of increasing the microcell density is shown in fig .[ mpusinr ] , where we plot the mean per - user sinr as a function of the number of microcells .we observe that the mean per - user sinr decreases linearly with an increasing number of microcells .when the number of microcells is less than 5 , there is a marginal difference between the macro - only coordination and full coordination mean per - user sinrs .this suggests that at low microcell density , it is advantageous to avoid paying the high price of backhaul overheads for full coordination performance .when there are more than 5 microcells , the gap between full coordination and macro - only coordination techniques starts to increase .
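the degradation caused by imperfect csi can likewise be reproduced in a stripped - down monte - carlo experiment . the sketch below considers a single cell only , designs the slnr beamformers on the estimated channels ĥ = √(1−τ²) h + τ e and evaluates the rates on the true channels ; all parameter values ( n = 4 , k = 3 , τ = 0.3 , snr = 20 db ) are placeholders and not the values of table [ tab : tab2 ] .

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, tau = 4, 3, 0.3                      # antennas, single-antenna users, CSI error level
sigma2 = 10 ** (-20.0 / 10)                # noise power for a 20 dB SNR

def slnr_bf(h, H_leak, s2):
    w = np.linalg.solve(s2 * np.eye(h.size) + H_leak.conj().T @ H_leak, h.conj())
    return w / np.linalg.norm(w)

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

rates = {"perfect": [], "imperfect": []}
for _ in range(2000):
    H = crandn(K, N)                                       # true channels, one row per user
    H_hat = np.sqrt(1 - tau**2) * H + tau * crandn(K, N)   # channel estimation errors
    for label, H_used in (("perfect", H), ("imperfect", H_hat)):
        W = np.column_stack([slnr_bf(H_used[k], np.delete(H_used, k, axis=0), sigma2)
                             for k in range(K)])
        G = np.abs(H @ W) ** 2                             # |h_k w_j|^2 on the TRUE channels
        sig = np.diag(G)
        sinr = sig / (sigma2 + G.sum(axis=1) - sig)
        rates[label].append(np.log2(1 + sinr).sum())

for label, r in rates.items():
    print(f"mean per-cell sum rate, {label:9s} CSI: {np.mean(r):.2f} bit/s/Hz")
```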
approximately , a db difference in the mean per - user sinr is seen with microcells in the system .the difference in the slopes of the various strategies demonstrates the impact of bs coordination in a hetnet with network densification .thus , coordination arrests the rate of decay of the mean per - user sinr in a hetnet .in addition to the above , the result demonstrates the validity of the approximated mean per - user sinr in ( [ msinr2 ] ) .these are shown to closely match the simulated mean per - user sinr performance for all the coordination techniques .fig .[ mrp ] shows the microcell sum rate performance as defined in ( [ sr2 ] ) at the mean , and percentiles with respect to the number of microcells at a snr of 10 db . with full coordination , the microcell sum rate increases linearly with the number of microcells , as the majority of the interference is suppressed by the leakage based beamformers .a similar trend can be observed for the macro - only coordination case ; however , the microcell sum rate performance gains are lower compared to the full coordination case as the number of microcells increases .the no coordination case suffers from strong macro and other microcell interference , resulting in a saturated sum rate at higher numbers of microcells .we now study the effect of deploying multiple macrocell bss on the microcell sum rate performance . for comparison purposes , we consider scenarios with both single and three overlapping macrocells with inter - site distances of km . in both cases , a maximum of microcell bss are randomly dropped at the edge of the macrocell at a radius of , such that the minimum distance between successive microcell bss is .fig .[ mrp2 ] shows the mean microcell sum rate as a function of the number of microcells for both the single and three macrocell bss cases at a snr of 10 db .it is seen that the sum rate of the single macrocell bs case is significantly higher than that of the three overlapping macrocell bs case .this is due to the higher aggregate intercell interference resulting from other macrocells and from microcells located within these macrocells .compared to fig .[ mrp ] , where the microcells are randomly placed anywhere within the macrocell coverage area , the no coordination performance benefits the most from the microcells being deployed at the edge of the macrocell .this can be seen from the mean sum rate , as it shows a linear growth up to 7 microcells in comparison with 3 microcells .we also observe that the improvement in mean sum rate with cell edge deployment of microcells is higher for the no coordination strategy . at 10 microcells , the increase in the mean sum rate for the full coordination strategy is approximately 3.6 bps / hz , while the increase with no coordination is about 10 bps / hz .in this paper , we demonstrate the rate gains provided by bs coordination in hetnets . with bs coordination , the sum rate is seen to increase linearly and the mean per - user sinr decreases linearly with the number of microcells .however , the rate of mean per - user sinr degradation is reduced significantly with increased degrees of coordination at the bss in the hetnet . at a low density of microcells , macro - only coordination performs close to full coordination .however , this is not the case with a higher density of microcells , where increasing amounts of interference from the microcells are added .in addition to the above , the impact of multiple macrocells is also investigated .
here ,degradation in the mean microcell sum rate is observed for all the respective coordination strategies in comparison to the case where only one macrocell is present .using an eigenvalue decomposition , ( [ app1 ] ) can be rewritten as =\mathbb{e}\bigg[\big|\mathbf{h}_{n , k}(\mathbf{x\lambda{}x}^{*}+\sigma_{k}^{2}\mathbf{i})^{-1}\mathbf{h}_{n , k}^{*}\big|^{2}\bigg]\\ & \hspace{68pt}=\mathbb{e}\bigg[\big|\boldsymbol{\delta}_{n , k}(\mathbf{\lambda}+\sigma_{k}^{2}\mathbf{i})^{-1}\boldsymbol{\delta}_{n , k}^{*}\big|^{2}\bigg],\end{aligned}\ ] ] where has the same statistics as as is a unitary matrix .hence , =\mathbb{e}\bigg[\big|\sum\limits_{i=1}^{z_{n}}(\mathbf{\lambda}_{ii}+\sigma_{k}^{2})^{-1}|\boldsymbol{\delta}_{n , k , i}|^{2}\big|^{2}\bigg],\ ] ] where is the element of .since is a zero mean complex gaussian random variable with variance , it follows that is an exponential random variable with mean . using the standard properties of the exponential random variable ,can be expressed as .\ ] ] a similar approach can be taken to evaluate the mean per - user intracell and intercell interference powers . further averaging over the eigenvalues , in ( [ app4 ] ) is possible as the density of eigenvalues is known .however , due to space limitations we leave this approach for future work .1 cisco , cisco visual networking index : global mobile data traffic forcast update , 20112016 " , white paper , feb .a. ghosh , n. mangalvedhe , r. ratasuk , b. mondal , m. cudak , e. vistosky , t.a .thomas , j.g .andrews , p. xia , h.s .jo , h.s .dhillon and t.d .novlan , heterogeneous cellular networks : from theory to practice " , _ ieee commun . mag .50 , no . 6 , pp .5464 , jun . 2012 .a. alexiou , wireless world 2020 " , _ ieee veh .technol . mag ._ , vol . 9 , no . 1 , pp .46 - 53 , mar .2014 . c. kosta , b. hunt , a.u .quddus and r. tafazolli , on interference avoidance through inter - cell interference coordination ( icic ) based on ofdma mobile systems " , _ ieee commun .surveys tuts .3 , pp . 973995 , aug . 2012 .d. lopez - perez , i. guvenc , g.d.l .roche , m. kountouris , t. quek and j. zhang , enhanced intercell interference coordination challenges in heterogeneous networks " , _ ieee commun . mag .2230 , jun . 2011 .g. boudreau , j. panicker , n. guo , r. chang , n. wang and s. vrzic , interference coordination and cancellation for 4 g networks " , _ ieee commun . mag .4 , pp . 7481 , apr . 2009 . e. hossain , m. rasti , h. tabassum and a. abdelnasser , evolution toward 5 g multi - tier cellular wireless networks : an interference management perspective " , _ ieee wireless commun .3 , pp . 118127 , jun . 2014 .dhillon , m. kountouris and j.g .andrews , downlink mimo hetnets : modelling , ordering results and performance analysis " , _ ieee trans .wireless commun .10 , pp . 52085222 , oct .dhillon , m. kountouris and j.g .andrews , downlink coverage probability in mimo hetnets " , _ in proc .ieee 46th conference on signals , systems and computers ( asilomar ) _ , pp .683687 , nov .s. park , w. seo , y. kim , s. lim and d. hong , beam subset selection strategy for interference reduction in two - tier femtocell networks " , _ ieee trans .wireless commun ._ , vol . 9 , no . 11 , pp . 34403449 , octs. akoum , m. kountouris and r.w .heath , on imperfect csi for the downlink of a two - tier network " , _ ieee intern .symp . on info .theory ( isit ) _ , pp .553337 , july 2011 .w. liu , s. han and c. 
yang , hybrid cooperative transmission in heterogeneous networks " , _ in proc .ieee 23rd conference on personal , indoor and mobile radio communications ( pimrc ) _ , pp .921925 , sep .m. hong , r - y .sun , h. baligh and z - q .luo , joint base station clustering and beamformer design for partial coordinated transmission in heterogeneous networks " , _ submitted to ieee j. on sel .areas commun ._ , nov . 2012 .available online : arxiv.org/pdf/1203.6390 .t. yoo and a. goldsmith , on the optimality of multiantenna broadcast scheduling using zero - forcing beamforming " , _ ieee j. sel .areas in commun .3 , pp . 528541 , mar .peel , b.m hochwald and a.l .swindlehurst , a vector perturbation technique for near capacity multiantenna multiuser communication - part i : channel inversion and regularization " , _ ieee trans .53 , no.1 , pp . 195202 , jan .m. sadek , a. tarighat and a. sayed , a leakage - based precoding scheme for downlink multi - user mimo channels " , _ ieee trans .wireless commun ._ , vol . 6 , no .5 , pp . 17111721 , may 2007 . m. gudmundson , correlation model for shadow fading in mobile radio systems " , _ electronics letters _23 , pp . 21452146 , nov .l. yu , w. liu and r. langley , sinr analysis of the subtraction - based smi beamformer " , _ ieee trans . signal process ._ , vol.58 , no.11 , pp .59265932 , nov .h. suraweera , p.j .smith and m. shafi , capacity limits and performance analysis of cognitive radio with imperfect channel knowledge " , _ ieee trans .technol . _ ,4 , pp . 18111822 , may 2010 .a. ahn , r.w .heath , performance analysis of maximum ratio combining with imperfect channel estimation in presence of cochannel interferences " , _ ieee trans .wireless commun ._ , vol . 8 , no .3 , pp . 10801085 , mar .n. lee and w. shin , adaptive feedback scheme on k - cell miso interfering broadcast channel with limited feedback " , _ ieee trans . wireless .2 , pp . 401406 , feb . 2011 .
in this paper we demonstrate the rate gains achieved by two - tier heterogeneous cellular networks ( hetnets ) with varying degrees of coordination between macrocell and microcell base stations ( bss ) . we show that without the presence of coordination , network densification does not provide any gain in the sum rate and rapidly decreases the mean per - user signal - to - interference - plus - noise - ratio ( sinr ) . our results show that coordination reduces the rate of sinr decay with increasing numbers of microcell bss in the system . validity of the analytically approximated mean per - user sinr over a wide range of signal - to - noise - ratio ( snr ) is demonstrated via comparison with the simulated results .
throughout this paper , we consider a unique server addressing two parallel queues numbered and , respectively .incoming jobs enter either queue and require random service times ; the server then processes jobs according to the so - called shortest queue first ( sqf ) policy. specifically , let ( resp . ) denote the workload in queue ( resp .queue ) at a given time , including the remaining amount of work of the job possibly in service ; the server then proceeds as follows : * queue ( resp .queue ) is served if , and ( resp . if and ) ; * if only one of the queues is empty , the non empty queue is served ;* if both queues are empty , the server remains idle until the next job arrival .in contrast to fixed priority disciplines where the server favors queues in some predefined order remaining unchanged in time ( e.g. , classical preemptive or non - preemptive head - of - line priority schemes ) , the sqf policy enables the server to dynamically serve queues according to their current state .the performance analysis of such a queueing discipline is motivated by the so - called sqf packet scheduling policy recently proposed to improve the quality of internet access on high speed communication links . as discussed in , sqf policy is designed to serve the shortest queue , i.e. , the queue with the least number of waiting packets ; in case of buffer overflow , packets are dropped from the longest queue .thanks to this simple policy , the scheduler consequently prioritizes constant bit rate flows associated with delay - sensitive applications such as voice and audio / video streaming with intrinsic rate constraints ; priority is thus implicitly given to smooth flows over data traffic associated with bulk transfers that sense network bandwidth by filling buffers and sending packets in bursts .in this paper , we consider the fluid version of the sqf discipline . instead of packets ( i.e. , individual jobs ), we deal with the workload ( i.e. , the amount of fluid in each queue ) . since the fluid sqf policy considers the shortest queue in volume , that is , in terms of workload , its performance is quantitatively described by the variations of variables and . to simplify the analysis, we here suppose that the buffer capacity for both queues and is infinite .moreover , we assume that incoming jobs enter either queue according to a poisson process ; in view of the above application context , one can argue that such poisson arrivals can model traffic where sources have peak rates significantly higher than that of the output link ; such processes can hardly represent , however , the traffic variations of locally constant bit rate flows .this poisson assumption , although limited in this respect , is nevertheless envisaged here in view of its mathematical tractability and as a first step towards the consideration of more complicated arrival patterns .the above framework enables us to define the pair representing the workloads in the stationary regime in each queue as a continuous - state markov process in . 
in the following , we determine the probability distribution of the couple by studying its laplace transform .the problem can then essentially be formulated as follows .[ prob1 ] given the domain and analytic functions , , , , and in , determine two bivariate laplace transforms , and two univariate laplace transforms , , analytic in and such that equations for some analytic function , together hold for in .note that each condition or with brings the latter equations respectively to to the best knowledge of the authors , the mathematical analysis of the sqf policy has not been addressed in the queueing literature .some comparable queueing disciplines have nevertheless been studied : * the _ longest queue first _ ( lqf )symmetric policy is considered in , where the author studies the stationary distribution of the number of waiting jobs , in each queue ; reducing the analysis to a boundary value problem on the unit circle , an integral formula is provided for the generating function of the pair ; * the _ join the shortest 2-server queue _ ( jsq ) , where an arriving customer joins the shortest queue if the number of waiting jobs in queues are unequal , is analyzed in .the bivariate generating function for the number of waiting jobs is then determined as a meromorphic function in the whole complex plane , whose associated poles and residues are calculated recursively .while the above quoted studies address the stationary distribution of the number of jobs in each queue , we here consider the real - valued process of workload components whose stationary analysis requires the definition of its infinitesimal generator on the relevant functional space .besides , the laplace transform of the distribution of proves to be meromorphic not on the entire complex plane , but on the plane cut along some algebraic singularities ( while the solution for jsq exhibits polar singularities only ) ; as a both quantitative and qualitative consequence , the decay rate of the stationary distribution at infinity for sqf may change according to the system load from that defined by the smallest polar singularity to that defined by the smallest algebraic singularity .the organization of the paper is as follows . in section[ ma ] , a markovian analysis provides the basic equations for the stationary distribution of the coupled queues ; the functional equations verified by the relevant laplace transforms are further derived in section [ ltd ] . in section [ expi ] , we specialize the discussion to the so - called symmetric exponential case where arrival rates are identical , and where service distribution are both exponential with identical mean ; the functional equations are then specified and shown to involve a key cubic equation . 
specifically ,* problem [ prob1 ] * for the symmetric case is shown to reduce to the following .[ prob2 ] solve the functional equation for function , where given functions , and are related to one branch of a key cubic polynomial equation .for real , the solution is written in terms of a series involving all iterates for .the analytic extension of solution to some domain of the complex plane is further studied in section [ sqfqueue ] ; this enables us to derive the empty queue probability along with the tail behavior of the workload distribution in each queue for the symmetric case .the latter is then compared to that of the associated preemptive head of line ( hol ) policy .concluding remarks are finally presented in section [ cl ] .the proofs for basic functional equations as well as some technical results are deferred to the appendix for better readability .as described in the introduction , we assume that incoming jobs consecutively enter queue ( resp .queue ) according to a poisson process with mean arrival rate ( resp . ) .their respective service times are independent and identically distributed ( i.i.d . ) with probability distribution , ( resp . , ) and mean ( resp . mean ) .let ( resp . ) denote the mean load of queue ( resp .queue ) and denote the total load of the system . since the system is work conserving , its stability condition is and we assume it to hold in the rest of this paper . in this section ,we first specify the evolution equations for the system and further derive its infinitesimal generator .first consider the total workload of the union of queues and .for any work - conserving service discipline ( such as sqf ) , the distribution of is independent of that discipline and equals that of the global single queue .the aggregate arrival process is poisson with rate and the i.i.d .service times have the averaged distribution with mean .the stationary probability for the server to be in idle state , in particular , equals let ( resp . ) be the number of job arrivals within time interval at queue ( resp .queue ) ; if ( resp . ) is the service time of the -th job arriving at queue ( resp . ) , the total work brought within into queue ( resp . ) equals ( resp . . denoting by ( resp . ) the workload in queue ( resp . ) at time , define indicator functions and by respectively . with the above notation, the sqf policy governs workloads and according to the evolution equations for and some initial conditions , .this defines the pair , , as a markov process with state space ( see figure [ fig1 ] for sample paths of process ) . 
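a crude numerical check of these evolution equations can be obtained by discretizing them in time . the sketch below uses a small time step , approximates the poisson arrivals by bernoulli trials with exponential service times , serves the smallest non - empty queue at unit rate , and compares the empirical probability of an empty system with the work - conserving prediction 1 − ρ ; the step size , horizon and parameter values are arbitrary illustration choices .

```python
import random
random.seed(0)

lam1, lam2, mean_svc = 0.35, 0.35, 1.0     # arrival rates and mean (exponential) service time
rho = (lam1 + lam2) * mean_svc             # total load; stability requires rho < 1
dt, horizon = 0.01, 50_000.0

u1 = u2 = 0.0                              # workloads of queues 1 and 2
idle_time, t = 0.0, 0.0
while t < horizon:
    # Poisson arrivals approximated by one Bernoulli trial per step and per queue
    if random.random() < lam1 * dt:
        u1 += random.expovariate(1.0 / mean_svc)
    if random.random() < lam2 * dt:
        u2 += random.expovariate(1.0 / mean_svc)
    # SQF: serve the non-empty queue with the smallest workload at unit rate
    if u1 > 0.0 and (u2 == 0.0 or u1 <= u2):
        u1 = max(u1 - dt, 0.0)
    elif u2 > 0.0:
        u2 = max(u2 - dt, 0.0)
    else:
        idle_time += dt
    t += dt

print("empirical P(U1 = U2 = 0):", idle_time / horizon)
print("work-conserving value 1 - rho:", 1.0 - rho)
```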
as a first result , integrating each equation ( [ path ] ) over interval ] ( all intervening poisson processes have rates lower than ) and for drift rates ( when non zero , the service rate is the constant ) .once generator is determined , the stationary distribution of is known ( see or ) to satisfy the prerequisites of section [ ma ] , we now study integral equation .since the problem is linear in unknown distribution , it is tractable through laplace transform techniques .let and its closure .assumptions in section [ ig ] , for the existence of regular densities and with respective support and ( see equations ( [ cones12])-([axes12 ] ) ) enable us to define their laplace transforms , by for , where ; using the expectation operator , definitions ( [ ff ] ) equivalently read , \ ; \ ; g_1(s_1 ) = \e\big[e^{-s_1u_1}\ind_{\{0=u_2<u_1\}}\big].\ ] ] the laplace transforms and of regular densities and with respective support and ( see equations ( [ cones12])-([axes12 ] ) ) are similarly defined by for ; equivalent definitions can be similarly written in terms of the expectation operator . expression ( [ dphi ] ) for distribution and the above definitions then enable to define the laplace transform of the pair by for .finally , let ( resp . ) denote the laplace transform of service time ( resp . ) at queue ( resp .queue ) for ( resp . ) ; set in addition and * a ) * transforms , and , together satisfy for , where and . *b ) * transforms and ( resp . , ) satisfy for , with \\ - \lambda_2\mathbb{e } \left [ e^{-s_1u_1-s_2u_2}\ind_{\{0 \leq u_2 < u_1\}}e^{-s_2\mathcal{t}_2}\ind_{\{\mathcal{t}_2 > u_1-u_2\ } } \right ] .\label{h}\end{gathered}\ ] ] * c ) * constants and satisfy relation .[ resol ] * a ) * fix .the test function , , belongs to and has derivatives , . besides, we have hence = ( b_1(s_1)-1)\theta(\mathbf{u}) ] .applying proposition [ generator ] , formula ( [ gen1 ] ) for then yields with defined in ( [ ker ] ) . integrating that expression of over closed quarter plane with respect to distribution and using assumptions *a.1 * -*a.2 * , relation then gives with and ; using ( [ ldef ] ) finally provides ( [ fonct ] ) .* b ) * as detailed in appendix [ a2 ] , there exists a family of functions with , such that , _ and _ . for given , ,the function defined by , , therefore belongs to and satisfies pointwise in with ( note that ) .apply then formula ( [ gen1 ] ) to regularized test function and integrate this expression over against distribution to define in view of ( [ stationary ] ) , we have and , provided that has a finite limit as , we must have .the detailed calculation of that limit ( depending on the pair ) is performed in appendix [ a2 ] and condition is shown to reduce to first equation ( [ fonctcomp ] ) . exchanging indices 1 and 2 provides second equation ( [ fonctcomp ] ) , after noting that changes into .* c ) * adding equations ( [ fonctcomp ] ) gives ( [ fonct ] ) if and only if holds .computing by letting in ( [ fonct ] ) readily gives with .identity ( [ pk ] ) is obviously pollaczek - khintchin formula for the transform of the total workload in the global queue , with i.i.d .service times having distribution defined by ( [ db ] ) . [ coroltech ] let be defined by ( [ h ] ) .transform satisfies for such that .similarly , transform satisfies for such that .[ c.2 ] function is finite for any given ; if , the product is therefore zero . as second equation ( [ fonctcomp ] ) then implies ( [ fonctg1 ] ) .relation ( [ fonctg2 ] ) is similarly derived . 
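for completeness , the pollaczek - khintchine identity invoked for ( [ pk ] ) can be written out in its usual textbook form ; the symbols below ( total rate λ = λ₁ + λ₂ , per - queue service - time transforms β₁ , β₂ and their traffic - weighted mixture β as in ( [ db ] ) , total load ρ ) are notation introduced here , since the original symbols are not legible in this excerpt :
\[
\mathbb{E}\bigl[e^{-s(u_1+u_2)}\bigr]
\;=\;\frac{(1-\rho)\,s}{\,s-\lambda\bigl(1-\beta(s)\bigr)\,},
\qquad
\lambda=\lambda_1+\lambda_2,\qquad
\beta(s)=\frac{\lambda_1}{\lambda}\,\beta_1(s)+\frac{\lambda_2}{\lambda}\,\beta_2(s).
\]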
in this section ,we first compare the sqf system with the hol queue , where one queue has head of line ( hol ) priority over the other ; such a comparison then enables us to extend the analyticity domain of laplace transforms , and , .+ let , , denote the workload in queue when the other queue has hol priority ; similarly , let denote the workload in queue when this queue has hol priority over the other . finally , given two real random variables and , said to dominate in the strong order sense ( for short , ) if and only if for any positive non - decreasing measurable function .workload verifies for all .[ domstoch ] we clearly have almost surely for all , where is defined by .equation ( [ path ] ) consequently entails that pathwise , which implies the strong stochastic domination .similarly , we have almost surely for all and ( [ path ] ) entails pathwise , hence the strong stochastic domination .assume that random variable has an analytic laplace transform in the domain for some real .laplace transform can be analytically extended to domain and transform can be analytically extended to .similarly , transform can be analytically extended to and can be analytically extended to .[ extensions ] assume first that and are real with ; given , we have ; using the domination property of proposition [ domstoch ] and the previous inequality , definition ( [ ff ] ) of on entails \leq \mathbb{e}\big[e^{-(s_1+s_2)\overline{u}_2}\big];\ ] ] we then deduce that can be analytically continued to any point verifying and . assuming now that and , domination property yields and definition ( [ ff ] ) of on entails in turn \leq \mathbb{e}\big[e^{- s_2 \overline{u}_2}\big];\ ] ] can therefore be analytically continued to any point verifying and . we conclude that can be analytically continued to domain , as claimed .writing definition ( [ ffb ] ) of as ] and non positive for ] . with the convention , we can define analytic or meromorphic extensions of these functions in the complex plane as follows .function ( resp . ) can be analytically ( resp .meromorphically ) extended to the cut plane ] , whereas function is well - defined for \zeta^-,\zeta^+[ ] .let us then define functions and by respectively . by construction ,function is a meromorphic extension of in ] .+ for notation simplicity , we will still denote by and their respective analytic continuation and defined above . consider now equation , whose unique non - zero solution is . as , it is easily verified that solution is associated with branch if and with branch if .define then ( note that for all ] and and two complex conjugate roots and .+ * b ) * algebraic functions , and are analytic on the cut plane ] and ] , formulas ( [ cardan ] ) enable us to analytically continue function to the cut plane ] and function to the cut plane ] .function is increasing while function reaches its minimum at some point ; is decreasing on interval \zeta^+,s^*[ ] .recall from proposition [ roots - r ] that entails ; conversely , we have for . 
function is thus defined and regular for \eta_1,+\infty[ ] and therefore tends to 0 as .the finite sum in ( [ quotientbis ] ) thus converges as .formula ( [ gsym ] ) for eventually follows from the latter expansion inserted into second equation ( [ msymequ ] ) .we now specify the smallest singularity of laplace transform ; to this end , we first deal with the analyticity domain of auxiliary function .recall by definition ( [ msym ] ) that is known to be analytic at least in the half - plane , where is defined by .function can be analytically continued to the half - plane ( with ) defined by * in case , where we set ; * in case , where is the largest real root of discriminant .[ extendm ] the proof of proposition [ extendm ] is detailed in appendix [ a4 ] .we now turn to transform and determine its singularities with smallest module .recall by corollary [ extensions ] that has no singularity in .[ domaing ] the singularity with smallest module of transform is * for , a simple pole at with leading term with ; * for , an algebraic singularity at with leading term at first order in , where factor is given by \ ] ] with constants , and where , are given in ( [ zeta1 + - ] ) . [ gsing ]consider again the two following cases : + * a ) * if , write the 1st equation ( [ msymequ ] ) as ; \label{gsymbis}\ ] ] as , we have while .proposition [ extendm ] then ensures that is analytic at since for .as has no singularity for , we conclude from expression ( [ gsymbis ] ) that has a simple pole at with residue \ ] ] where .differentiating formula for at , we further calculate ; residue in leading term ( [ resgsym1 ] ) then follows ; + * b ) * if , let so that and where .proposition [ extendm ] then ensures that is analytic at since .we conclude from expression ( [ gsymbis ] ) and the latter discussion that is not a singularity of .+ by definition of , where is factorized as with , we obtain where and with constant . by expression ( [ gsymbis ] ) for , we then obtain \nonumber \\ & = g(\zeta^+ ) + r^+(s-\zeta^+)^{1/2 } + ... \nonumber\end{aligned}\ ] ] since with defined in ( [ resgsym2 ] ) . expansion ( [ resgsym2 ] ) then follows with associated factor ; we conclude that the singularity with smallest module of is , an algebraic singularity with order 1 .the results obtained in the previous section enable us to give a closed - form expression for the empty queue probability in terms of auxiliary function only .probability is given by with given by series expansion ( [ mseries ] ) .[ g0sym ] apply relation ( [ gsymbis ] ) for with ; as , we then derive that hence ; \label{g(0)bis}\ ] ] differentiating formula for at gives so that the first term inside brackets in ( [ g(0)bis ] ) reduces to .now , applying ( [ fonctmsym ] ) to value ( with corresponding pair and ) shows that the right - hand side of ( [ g(0)bis ] ) also equals , as claimed . 
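the series - of - iterates representation used above can be illustrated numerically . since the explicit form of the functional equation is not legible in this excerpt , the sketch below simply assumes an affine equation of the generic type m(s) = a(s) + b(s) m(h(s)) with placeholder functions a , b , h chosen so that the iterates of h converge and the partial products of b vanish ; it evaluates the partial sums and checks them against the functional equation itself .

```python
import numpy as np

# placeholder ingredients of an affine functional equation  M(s) = a(s) + b(s) M(h(s))
def a(s): return 1.0 / (1.0 + s)
def b(s): return 0.5 * np.exp(-s)
def h(s): return 0.3 * s + 0.1            # contraction; its iterates tend to 1/7

def m_series(s, n_terms=60):
    """Partial sum of M(s) = sum_n a(h^(n)(s)) * prod_{j<n} b(h^(j)(s))."""
    total, weight, x = 0.0, 1.0, s
    for _ in range(n_terms):
        total += weight * a(x)
        weight *= b(x)
        x = h(x)
    return total

s0 = 0.4
lhs = m_series(s0)
rhs = a(s0) + b(s0) * m_series(h(s0))     # the equation must hold up to truncation error
print("M(s0)                 :", lhs)
print("a(s0) + b(s0) M(h(s0)):", rhs)
```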
by ( [ p0 ] ) and ( [ g(0 ) ] ), we derive the probability that either queue or is empty .we depict in figure [ figg0 ] the variations of in terms of load when fixing ( for comparison , the black dashed line represents the empty queue probability for the unique queue aggregating all jobs from either class or ) .the numerical results show that decreases to a positive limit , approximately , when tends to 1 ; this can be interpreted by saying that , while the global system is unstable and sees excursions of either variable or to large values , one of the queues remains less than the other for a large period of time and has therefore a positive probability to be emptied by the server .furthermore , the red dashed line depicts the empty queue probability if the server were to apply a preemptive hol policy with highest priority given to queue ; following lower bound ( [ stochlowup ] ) , we have .we further notice that for , the positive limit of derived above for sqf is close enough to the maximal limit of .the above observations consequently show that the sqf policy compares favorably to the optimal hol policy by guaranteeing a non vanishing empty queue probability for each traffic class at high load .we finally derive asymptotics for the distribution of workload or in either queue , i.e. , the estimates of tail probabilities for large queue content .we shall invoke the following tauberian theorem relating the singularities of a laplace transform to the asymptotic behavior of its inverse ( * ? ? ?* theorem 25.2 , p.237 ) .let be a laplace transform and be its singularity with smallest module , with as for and ( replace by if is finite ) .the laplace inverse of is then estimated by for , where denotes euler s function .[ tauberian ] note that the fact that is finite or not does not change the estimate of inverse at infinity . before using that theorem for the tail behavior of either or , we first state some simple bounds for their distribution tail . the global workload is identical to that in an queue with arrival rate and service rate .the complementary distribution function of is therefore given by for all , with ; the distribution tail of workload or therefore decreases at least exponentially fast at infinity .following upper bound ( [ stochlowup ] ) relating to variable corresponding to a hol service policy with highest priority given to queue , we further have for all .the laplace transform of is given by equation and is meromorphic in the cut plane ] , theorem [ asymptoticssym ] consequently provides the same exponential trend as that of upper bound ( [ asympthol ] ) for hol ; as a matter of fact , a large value of entails that queue behaves as if queue , with smaller workload , had a hol priority .the stationary analysis of two coupled queues addressed by a unique server running the sqf discipline has been generally considered for poisson arrival processes and general service time distributions ; required functional equations for the derivation of the stationary distribution for the coupled workload process have been derived .specializing the resolution of such equations to both exponentially distributed service times and the so - called `` symmetric case '' , all quantities of interest have been obtained by solving a single functional equation . 
the solution for that equation has been given , in particular , as a series expansion involving all consecutive iterates of an algebraic function related to a branch of some cubic equation .it must be noted that the curve represented by that cubic equation in the plane is singular ; in fact , whereas `` most '' cubic curves are regular ( i.e. , without multiple points ) , it can be easily checked that cubic has a double point at infinity . inequivalent geometric terms , cubic can be identified with a sphere when seen as a surface in , whereas most cubic curves are identified with a torus .this fact can be considered as an essential underlying feature characterizing the complexity of the present problem ; such geometric statements will be enlightened for solving the general asymmetric case in .an extended analyticity domain for solution has been determined as the half - plane , thus enabling to determine the singularity of laplace transform with smallest module .it could be also of interest to compare such extended domain to the maximal convergence domain of series expansion ( [ mseries ] ) ( recall the convergence of that series has been stated in theorem [ resolsym ] for real only ) ; in fact , the analyticity domain may not coincide with the validity domain for such a series representation .the discrete holomorphic dynamical system defined by the iterates , , definitely plays a central role for such a comparison . as an alternative approach to that of section [ sqfqueue ] , function may also be derived through a riemann - hilbert boundary value problem ; hints for such an approach can be summarized as follows .we successively note that * there exists \zeta^-,\zeta^+[ ] , we note that for with .equations then enable us to deduce the condition the above riemann - hilbert problem for function is , however , valid on open path only and not on the whole closed contour , defined as the image by functions of closed segment ] , where ramification points , are determined as the real negative roots of discriminant . as ,function is , in particular , analytic in the half - plane ; * by definition ( [ cubiceq ] ) , we may have only if , that is , or or ; in the case , we have and in the case , we conclude that we can not have if ; * by corollary [ extensions ] , transform is analytic on where if and if .* a ) * assume first that . in the plane , the diagonal intersects the curve at ( see fig . [ courbexi1 ] ) .further , we easily verify that for and condition ( [ conditionaa ] ) is therefore fulfilled in this first case .we then conclude that function is analytic for , and thus for ( recall by definition ( [ msym ] ) that is the sum of two non - negative laplace transforms ) .* b ) * assume now that ( see fig . [ courbexi2 ] ) .we have shown above that we can not have , which would otherwise imply .we thus necessarily have , which entails that for and condition ( [ conditionaa ] ) is therefore fulfilled in this second case .we then conclude that function is analytic for , hence for .
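the role played by the discriminant of the cubic in locating the branch points can also be visualized numerically . the coefficients in the sketch below are purely illustrative placeholders ( the actual cubic ( [ cubiceq ] ) is not reproduced in this excerpt ) ; the sketch locates the real zeros of the discriminant on a grid and shows how the root pattern changes from three real roots to one real root and a complex - conjugate pair across those points .

```python
import numpy as np

# illustrative s-dependent cubic  x^3 - 3x + s = 0  (placeholder, not eq. [cubiceq])
def coeffs(s):
    return np.array([1.0, 0.0, -3.0, s])

def discriminant(a, b, c, d):
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

s_grid = np.linspace(-4.0, 4.0, 4000)
disc = np.array([discriminant(*coeffs(s)) for s in s_grid])
# ramification points = real zeros of the discriminant, detected here by sign changes
branch_pts = s_grid[:-1][np.sign(disc[:-1]) != np.sign(disc[1:])]
print("approximate zeros of the discriminant:", np.round(branch_pts, 3))

for s in (-2.5, 0.0, 2.5):     # below, between and above the two branch points
    r = np.roots(coeffs(s))
    n_real = int(np.sum(np.abs(r.imag) < 1e-9))
    print(f"s = {s:+.1f}: {n_real} real root(s), roots = {np.round(r, 3)}")
```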
we analyze the so - called shortest queue first ( sqf ) queueing discipline whereby a unique server addresses queues in parallel by serving at any time that queue with the smallest workload . considering a stationary system composed of two parallel queues and assuming poisson arrivals and general service time distributions , we first establish the functional equations satisfied by the laplace transforms of the workloads in each queue . we further specialize these equations to the so - called `` symmetric case '' , with same arrival rates and identical exponential service time distributions at each queue ; we then obtain a functional equation for unknown function , where given functions , and are related to one branch of a cubic polynomial equation . we study the analyticity domain of function and express it by a series expansion involving all iterates of function . this allows us to determine empty queue probabilities along with the tail of the workload distribution in each queue . this tail appears to be identical to that of the head - of - line preemptive priority system , which is the key feature desired for the sqf discipline .
populations of self - sustained oscillators can exhibit various synchronization phenomena .for example , it is well known that a limit - cycle oscillator can exhibit phase locking to a periodic external forcing ; this phenomenon is called the forced synchronization .recently , it was also found that uncoupled identical limit - cycle oscillators subject to weak common noise can exhibit in - phase synchronization ; this remarkable phenomenon is called the common - noise - induced synchronization . in general , each oscillatory dynamics is described by a stable limit - cycle solution to an ordinary differential equation , and the phase description method for ordinary limit - cycle oscillators has played an essential role in the theoretical analysis of the synchronization phenomena . on the basis of the phase description ,optimization methods for the dynamical properties of limit - cycle oscillators have also been developed for forced synchronization and common - noise - induced synchronization .synchronization phenomena of spatiotemporal rhythms described by partial differential equations , such as reaction - diffusion equations and fluid equations , have also attracted considerable attention ( see also refs . for the spatiotemporal pattern formation ) .examples of earlier studies include the following . in reaction - diffusion systems ,synchronization between two locally coupled domains of excitable media exhibiting spiral waves has been experimentally investigated using the photosensitive belousov - zhabotinsky reaction .in fluid systems , synchronization in both periodic and chaotic regimes has been experimentally investigated using a periodically forced rotating fluid annulus and a pair of thermally coupled rotating fluid annuli .of particular interest in this paper is the experimental study on generalized synchronization of spatiotemporal chaos in a liquid crystal spatial light modulator ; this experimental synchronization can be considered as common - noise - induced synchronization of spatiotemporal chaos .however , detailed theoretical analysis of these synchronization phenomena has not been performed even for the case in which the spatiotemporal rhythms are described by stable limit - cycle solutions to partial differential equations , because a phase description method for partial differential equations has not been fully developed yet . in this paper , we theoretically analyze common - noise - induced phase synchronization between uncoupled identical hele - shaw cells exhibiting oscillatory convection ; the oscillatory convection is described by a stable limit - cycle solution to a partial differential equation .a hele - shaw cell is a rectangular cavity in which the gap between two vertical walls is much smaller than the other two spatial dimensions , and the fluid in the cavity exhibits oscillatory convection under appropriate parameter conditions ( see refs . and also references therein ) . in ref . , we recently formulated a theory for the phase description of oscillatory convection in the hele - shaw cell and analyzed the mutual synchronization between a pair of coupled systems of oscillatory hele - shaw convection ; the theory can be considered as an extension of our phase description method for stable limit - cycle solutions to nonlinear fokker - planck equations ( see also ref . for the phase description of spatiotemporal rhythms in reaction - diffusion equations ) . 
using the phase description method for oscillatory convection, we here demonstrate that uncoupled systems of oscillatory hele - shaw convection can be in - phase synchronized by applying weak common noise .furthermore , we develop a method for obtaining the optimal spatial pattern of the common noise to achieve synchronization .the theoretical results are validated by direct numerical simulations of the oscillatory hele - shaw convection .this paper is organized as follows . in sec .[ sec:2 ] , we briefly review our phase description method for oscillatory convection in the hele - shaw cell . in sec .[ sec:3 ] , we theoretically analyze common - noise - induced phase synchronization of the oscillatory convection . in sec .[ sec:4 ] , we confirm our theoretical results by numerical analysis of the oscillatory convection .concluding remarks are given in sec .[ sec:5 ] .in this section , for the sake of readability and being self - contained , we review governing equations for oscillatory convection in the hele - shaw cell and our phase description method for the oscillatory convection with consideration of its application to common - noise - induced synchronization . more details and other applications of the phase description method are given in ref . .the dynamics of the temperature field in the hele - shaw cell is described by the following dimensionless form ( see ref . and also references therein ) : the laplacian and jacobian are respectively given by the stream function is determined from the temperature field as where the rayleigh number is denoted by .the system is defined in the unit square : ] .the boundary conditions for the temperature field are given by where the temperature at the bottom ( ) is higher than that at the top ( ) .the stream function satisfies the dirichlet zero boundary condition on both and , i.e. , to simplify the boundary conditions in eq .( [ eq : bcty ] ) , we consider the convective component of the temperature field as follows : inserting eq .( [ eq : t_x ] ) into eqs .( [ eq : t])([eq : p_t ] ) , we derive the following equation for the convective component : where the stream function is determined by applying eq .( [ eq : t_x ] ) to eqs .( [ eq : bctx])([eq : bcty ] ) , we obtain the following boundary conditions for the convective component : that is , the convective component satisfies the neumann zero boundary condition on and the dirichlet zero boundary condition on .it should be noted that this system does not possess translational or rotational symmetry owing to the boundary conditions given by eqs .( [ eq : bcpx])([eq : bcpy])([eq : bcxx])([eq : bcxy ] ) . the dependence of the hele - shaw convection on the rayleigh number is well known , and the existence of stable limit - cycle solutions to eq .( [ eq : x ] ) is also well established ( see ref . and also references therein ) . in general , a stable limit - cycle solution to eq .( [ eq : x ] ) , which represents oscillatory convection in the hele - shaw cell , can be described by the phase and natural frequency are denoted by and , respectively .the limit - cycle solution possesses the following -periodicity in : . inserting eq .( [ eq : x_x0 ] ) into eqs .( [ eq : x])([eq : p_x ] ) , we find that the limit - cycle solution satisfies where the stream function is determined by from eq .( [ eq : t_x ] ) , the corresponding temperature field is given by ( e.g. 
, see fig .[ fig:2 ] in sec .[ sec:4 ] ) let represent a small disturbance added to the limit - cycle solution , and consider a slightly perturbed solution equation ( [ eq : x ] ) is then linearized with respect to as follows : as in the limit - cycle solution , the function satisfies the neumann zero boundary condition on and the dirichlet zero boundary condition on .note that is time - periodic through .therefore , eq . ( [ eq : linear ] ) is a floquet - type system with a periodic linear operator . defining the inner product of two functions as \!\!\bigr ] } } = \frac{1}{2\pi } \int_0^{2\pi } d\theta \int_0 ^ 1 dx \int_0 ^ 1 dy \ , u^\ast(x , y , \theta ) u(x , y , \theta ) , \label{eq : inner}\end{aligned}\ ] ] we introduce the adjoint operator of the linear operator by \!\!\bigr ] } } = { \ensuremath{\bigl[\!\!\bigl [ { \cal l}^\ast(x , y , \theta ) u^\ast(x , y , \theta ) , \ ,u(x , y , \theta ) \bigr]\!\!\bigr]}}.\label{eq : operator}\end{aligned}\ ] ] as in , the function also satisfies the neumann zero boundary condition on and the dirichlet zero boundary condition on .details of the derivation of the adjoint operator are given in ref . . in the following subsection, we utilize the floquet eigenfunctions associated with the zero eigenvalue , i.e. , we note that the right zero eigenfunction can be chosen as which is confirmed by differentiating eq .( [ eq : x0 ] ) with respect to . using the inner product of eq .( [ eq : inner ] ) with the right zero eigenfunction of eq .( [ eq : u0 ] ) , the left zero eigenfunction is normalized as \!\!\bigr ] } } = \frac{1}{2\pi } \int_0^{2\pi } d\theta \int_0 ^ 1 dx \int_0 ^ 1 dy \ , u_0^\ast(x , y , \theta ) u_0(x , y , \theta ) = 1.\end{aligned}\ ] ] here , we can show that the following equation holds ( see also refs . ) : = 0.\end{aligned}\ ] ] therefore , the following normalization condition is satisfied independently for each as follows : we now consider oscillatory hele - shaw convection with a weak perturbation applied to the temperature field described by the following equation : the weak perturbation is denoted by . 
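before the perturbation is added , it may help to recall the finite - dimensional analogue of the zero eigenfunctions and of the normalization condition above . for the stuart - landau oscillator the asymptotic phase is known in closed form , and the short sketch below ( an illustration independent of the hele - shaw computation , with parameter values chosen arbitrarily ) checks numerically that Θ(r,φ) = φ − b ln r advances at the constant rate Ω = ω₀ − b from any initial condition , so that its gradient evaluated on the limit cycle plays the role of the phase sensitivity function and satisfies the normalization ∇Θ · F = Ω at every point of the cycle .

```python
import numpy as np
from scipy.integrate import solve_ivp

omega0, b = 2.0, 1.0
Omega = omega0 - b                         # frequency on the limit cycle r = 1

def rhs(t, y):                             # Stuart-Landau oscillator in polar coordinates
    r, phi = y
    return [r * (1.0 - r**2), omega0 - b * r**2]

def Theta(r, phi):                         # asymptotic phase
    return phi - b * np.log(r)

for r0 in (0.2, 1.0, 3.0):
    sol = solve_ivp(rhs, (0.0, 10.0), [r0, 0.0], dense_output=True, rtol=1e-10, atol=1e-12)
    t = np.linspace(0.0, 10.0, 400)
    r, phi = sol.sol(t)
    rate = np.diff(Theta(r, phi)) / np.diff(t)
    print(f"r0 = {r0}: dTheta/dt in [{rate.min():.6f}, {rate.max():.6f}]  (Omega = {Omega})")

# normalization on the cycle: grad Theta = (-b, 1), F = (0, Omega), so grad Theta . F = Omega
print("normalization check:", np.dot([-b, 1.0], [0.0, Omega]), "=", Omega)
```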
inserting eq .( [ eq : t_x ] ) into eq .( [ eq : t_p ] ) , we obtain the following equation for the convective component : using the idea of the phase reduction , we can derive a phase equation from the perturbed equation ( [ eq : x_p ] ) .namely , we project the dynamics of the perturbed equation ( [ eq : x_p ] ) onto the unperturbed solution as \nonumber \\ & = \int_0 ^ 1 dx \int_0 ^ 1 dy \ , u_0^\ast(x , y , \theta ) \left [ \nabla^2 x + j(\psi , x ) - \frac{\partial \psi}{\partial x } + \epsilon p(x , y , t ) \right ] \nonumber \\ & \simeq \int_0 ^ 1 dx \int_0 ^ 1 dy \ , u_0^\ast(x , y , \theta ) \left [ \nabla^2 x_0 + j(\psi_0 , x_0 ) - \frac{\partial \psi_0}{\partial x } + \epsilon p(x , y , t ) \right ] \nonumber \\ & = \int_0 ^ 1 dx \int_0 ^ 1 dy \ , u_0^\ast(x , y , \theta ) \left [ \omega \frac{\partial}{\partial \theta } x_0(x , y , \theta ) + \epsilon p(x , y , t ) \right ] \nonumber \\ & = \int_0 ^ 1 dx \int_0 ^ 1 dy \ , u_0^\ast(x , y , \theta ) \ , \biggl [ \omega \ , u_0(x , y , \theta ) + \epsilon p(x , y , t ) \biggr ] \nonumber \\ & = \omega + \epsilon \int_0 ^ 1 dx \int_0 ^ 1 dy \ , u_0^\ast(x , y , \theta ) p(x , y , t),\end{aligned}\ ] ] where we approximated by the unperturbed limit - cycle solution .therefore , the phase equation describing the oscillatory hele - shaw convection with a weak perturbation is approximately obtained in the following form : where the _ phase sensitivity function _ is defined as ( e.g. , see fig .[ fig:2 ] in sec .[ sec:4 ] ) here , we note that the phase sensitivity function satisfies the neumann zero boundary condition on and the dirichlet zero boundary condition on , i.e. , as mentioned in ref . , eq . ( [ eq : theta_p ] ) is a generalization of the phase equation for a perturbed limit - cycle oscillator described by a finite - dimensional dynamical system ( see refs .however , reflecting the aspects of an infinite - dimensional dynamical system , the phase sensitivity function of the oscillatory hele - shaw convection possesses infinitely many components that are continuously parameterized by the two variables , and . in this paper , we further consider the case that the perturbation is described by a product of two functions as follows : that is , the space - dependence and time - dependence of the perturbation are separated . in this case , the phase equation ( [ eq : theta_p ] ) can be written in the following form : where the _ effective phase sensitivity function _ is given by ( e.g. 
, see fig .[ fig:5 ] in sec .[ sec:4 ] ) we note that the form of eq .( [ eq : theta_q ] ) is essentially the same as that of the phase equation for a perturbed limit - cycle oscillator described by a finite - dimensional dynamical system ( see refs .we also note that the effective phase sensitivity function can also be considered as the collective phase sensitivity function in the context of the collective phase description of coupled individual dynamical elements exhibiting macroscopic rhythms .in this section , using the phase description method in sec .[ sec:2 ] , we analytically investigate common - noise - induced synchronization between uncoupled systems of oscillatory hele - shaw convection .in particular , we theoretically determine the optimal spatial pattern of the common noise for achieving the noise - induced synchronization .we consider uncoupled systems of oscillatory hele - shaw convection subject to weak common noise described by the following equation for : where the weak common noise is denoted by .inserting eq .( [ eq : t_x ] ) into eq .( [ eq : t_xi ] ) for each , we obtain the following equation for the convective component : as in eq .( [ eq : p_x ] ) , the stream function of each system is determined by the common noise is assumed to be white gaussian noise , the statistics of which are given by here , we assume that the unperturbed oscillatory hele - shaw convection is a stable limit cycle and that the noise intensity is sufficiently weak . then , as in eq .( [ eq : theta_q ] ) , we can derive a phase equation from eq .( [ eq : x_xi ] ) as follows ) can be slightly different from the natural frequency given in eq .( [ eq : x_x0 ] ) ; however , this point is not essential in this paper because eq .( [ eq : lambda ] ) is independent of the value of the frequency .the theory of stochastic phase reduction for ordinary limit - cycle oscillators has been intensively investigated in refs . , but extensions to partial differential equations have not been developed yet . ] : where the effective phase sensitivity function is given by eq .( [ eq : zeta ] ) .once the phase equation ( [ eq : theta_xi ] ) is obtained , the lyapunov exponent characterizing the common - noise - induced synchronization can be derived using the argument by teramae and tanaka . from eqs .( [ eq : xi])([eq : theta_xi ] ) , the lyapunov exponent , which quantifies the exponential growth rate of small phase differences between the two systems , can be written in the following form : ^ 2 \leq 0 .\label{eq : lambda}\end{aligned}\ ] ] here , we used the following abbreviation : .equation ( [ eq : lambda ] ) represents that uncoupled systems of oscillatory hele - shaw convection can be in - phase synchronized when driven by the weak common noise , as long as the phase reduction approximation is valid . in the following two subsections ,we develop a method for obtaining the optimal spatial pattern of the common noise to achieve the noise - induced synchronization of the oscillatory convection .considering the boundary conditions of , eqs .( [ eq : bczx])([eq : bczy ] ) , we introduce the following spectral transformation : for and . 
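the structure of the lyapunov exponent formula above can also be checked directly at the level of the reduced phase equation . the sketch below does not use the effective phase sensitivity computed for the hele - shaw cell ; it assumes a simple placeholder ζ(θ) = sin θ + 0.5 sin 2θ , a noise intensity ε = 0.3 and the convention λ = −(ε²/2)⟨ζ′(θ)²⟩_θ ( the teramae - tanaka form ; the exact prefactor convention in the text may differ ) , and verifies with a heun ( stratonovich ) scheme that two phase oscillators driven by the same white noise contract at roughly this rate .

```python
import numpy as np

rng = np.random.default_rng(2)
eps, omega = 0.3, 1.0
zeta  = lambda th: np.sin(th) + 0.5 * np.sin(2.0 * th)   # placeholder effective sensitivity
dzeta = lambda th: np.cos(th) + np.cos(2.0 * th)

# theoretical exponent  lambda = -(eps^2 / 2) * (1/2pi) * integral of zeta'(theta)^2
th = np.linspace(0.0, 2.0 * np.pi, 4001)
lam_theory = -0.5 * eps**2 * np.trapz(dzeta(th)**2, th) / (2.0 * np.pi)

# two copies of the phase equation driven by the SAME noise (Heun / Stratonovich scheme)
dt, n_steps, n_real = 0.01, 20_000, 200
th1 = rng.uniform(0.0, 2.0 * np.pi, n_real)
th2 = th1 + 1e-3
for _ in range(n_steps):
    dW = rng.standard_normal(n_real) * np.sqrt(dt)       # common noise increments
    for x in (th1, th2):
        incr = omega * dt + eps * zeta(x) * dW           # predictor increment
        pred = x + incr
        x += 0.5 * (incr + omega * dt + eps * zeta(pred) * dW)

lam_sim = np.mean(np.log(np.abs(th2 - th1) / 1e-3)) / (n_steps * dt)
print("lambda (theory)    :", lam_theory)
print("lambda (simulation):", lam_sim)
```

with these placeholder choices ⟨ζ′²⟩ = 1 , so the quadrature gives λ ≈ −0.045 , and the simulated contraction rate of the common - noise - driven phase difference should agree with it within statistical error .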
the corresponding spectral decomposition of is given by by inserting eq .( [ eq : z_cossin ] ) into eq .( [ eq : zeta ] ) , the effective phase sensitivity function can be written in the following form : where the spectral transformation of is defined as the corresponding spectral decomposition of is given by for the sake of convenience in the calculation below , we rewrite the double sum in eq .( [ eq : zeta_double ] ) by the following single series : in eq .( [ eq : zeta_single ] ) , we introduced one - dimensional representations , and , where the mapping between and is bijective .accordingly , we obtain the following quantity : ^ 2 = \sum_{n=0}^\infty \sum_{m=0}^\infty s_n s_m q_n'(\theta ) q_m'(\theta ) , \label{eq : dzeta_squared}\end{aligned}\ ] ] where . from eqs .( [ eq : lambda])([eq : dzeta_squared ] ) , the lyapunov exponent normalized by the noise intensity , , can be written in the following form : ^ 2 = \sum_{n=0}^\infty \sum_{m=0}^\infty k_{nm } s_n s_m , \label{eq : lambda_k}\end{aligned}\ ] ] where each element of the symmetric matrix is given by by defining an infinite - dimensional column vector , eq .( [ eq : lambda_k ] ) can also be written as which is a quadratic form . using the spectral representation of the normalized lyapunov exponent , eq .( [ eq : lyapunov ] ) , we seek the optimal spatial pattern of the common noise for the synchronization . as a constraint, we introduce the following condition : that is , the total power of the spatial pattern is fixed at unity . under this constraint condition, we consider the maximization of eq .( [ eq : lyapunov ] ) . for this purpose, we define the lagrangian as where the lagrange multiplier is denoted by . setting the derivative of the lagrangian to be zero ,we can obtain the following equations : \frac{\partial f}{\partial \lambda } & = - \left ( \sum_{n=0}^\infty s_n^2 - 1 \right ) = 0,\end{aligned}\ ] ] which are equivalent to the eigenvalue problem described by these eigenvectors and the corresponding eigenvalues because the matrix , which is defined in eq .( [ eq : k ] ) , is symmetric , the eigenvalues are real numbers .consequently , under the constraint condition given by eq .( [ eq : unity ] ) , the optimal vector that maximizes eq .( [ eq : lambda ] ) coincides with the eigenvector associated with the largest eigenvalue , i.e. , therefore , the optimal spatial pattern can be written in the following form : where the coefficients in the double series correspond to the elements of the optimal vector associated with . from eq .( [ eq : lyapunov ] ) , the lyapunov exponent is then given by finally , we note that this optimization method can also be considered as the principal component analysis of the phase - derivative of the phase sensitivity function , .in this section , to illustrate the theory developed in sec .[ sec:3 ] , we numerically investigate common - noise - induced synchronization between uncoupled hele - shaw cells exhibiting oscillatory convection . the numerical simulation method is summarized in ref .modes for the dirichlet zero boundary condition and a cosine expansion with modes for the neumann zero boundary condition .the fourth - order runge - kutta method with integrating factor using a time step ( mainly , ) and the heun method with integrating factor using a time step were applied for the deterministic and stochastic ( langevin - type ) equations , respectively . 
] .considering the boundary conditions of the convective component , eqs .( [ eq : bcxx])([eq : bcxy ] ) , we introduce the following spectral transformation : for and . the corresponding spectral decomposition of the convective component is given by in visualizing the limit - cycle orbit in the infinite - dimensional state space , we project the limit - cycle solution onto the - plane as the initial values were prepared so that the system exhibits single cellular oscillatory convection .the rayleigh number was fixed at , which gives the natural frequency , i.e. , the oscillation period .figure [ fig:1 ] shows the limit - cycle orbit of the oscillatory convection projected onto the - plane , obtained from direct numerical simulations of the dynamical equation ( [ eq : x ] ) .snapshots of the limit - cycle solution and other associated functions , and , are shown in fig .[ fig:2 ] , where the phase variable is discretized using grid points .we note that fig .[ fig:1 ] and fig .[ fig:2 ] are essentially reproductions of our previous results given in ref .details of the numerical method for obtaining the phase sensitivity function are given in refs . ( see also refs . ) . as seen in fig .[ fig:2 ] , the phase sensitivity function is spatially localized .namely , the absolute values of the phase sensitivity function in the top - right and bottom - left corner regions of the system are much larger than those in the other regions ; this fact reflects the dynamics of the spatial pattern of the convective component . as mentioned in ref . , the phase sensitivity function in this case possesses the following symmetry . for each , the limit - cycle solution and the phase sensitivity function , shown in fig .[ fig:2 ] , are anti - symmetric with respect to the center of the system , i.e. , where and .therefore , for a spatial pattern that is symmetric with respect to the center of the system , the corresponding effective phase sensitivity function becomes zero , i.e. , that is , such symmetric perturbations do not affect the phase of the oscillatory convection .the optimal spatial pattern is obtained as the best combination of single - mode spatial patterns , i.e. , eq .( [ eq : aopt ] ) .thus , we first consider the following single - mode spatial pattern : then , the effective phase sensitivity function is given by the following single spectral component : from eq .( [ eq : lambda ] ) , the lyapunov exponent for the single - mode spatial pattern can be written in the following form : ^ 2,\end{aligned}\ ] ] where .figure [ fig:3](a ) shows the normalized lyapunov exponent for single - mode spatial patterns , i.e. , .owing to the anti - symmetry of the phase sensitivity function , given in eq .( [ eq : anti - symmetric ] ) , the normalized lyapunov exponent exhibits a checkerboard pattern , namely , when the sum of and , i.e. , , is an odd number .the maximum of is located at ; under the condition of , the maximum of is located at .the single - mode spatial patterns , , , and , are shown in figs .[ fig:4](b)(c)(d ) , respectively .we note that and are anti - symmetric with respect to the center of the system , whereas is symmetric .these spatial patterns are used in the numerical simulations performed below .we now consider the optimal spatial pattern .figure [ fig:3](b ) shows the spectral components of the optimal spatial pattern , i.e. , , obtained by the optimization method developed in sec .[ subsec:3c ] ; figure [ fig:4](a ) shows the corresponding optimal spatial pattern , i.e. 
, , given by eq .( [ eq : aopt ] ) . as seen in fig .[ fig:3 ] , when the normalized lyapunov exponent for a single - mode spatial pattern , , is large , the absolute value of the optimal spectral components , , is also large . as seen in fig .[ fig:4](a ) , the optimal spatial pattern is similar to the snapshots of the phase sensitivity function shown in fig .[ fig:2 ] .in fact , as mentioned in sec .[ subsec:3c ] , the optimal spatial pattern corresponds to the first principal component of . reflecting the anti - symmetry of the phase sensitivity function , eq .( [ eq : anti - symmetric ] ) , the optimal spatial pattern is also anti - symmetric with respect to the center of the system .figure [ fig:5 ] shows the effective phase sensitivity functions for the spatial patterns shown in fig .[ fig:4 ] . when the normalized lyapunov exponent is large , the amplitude of the corresponding effective phase sensitivity function is also large .for the spatial pattern , which is symmetric with respect to the center of the system , the effective phase sensitivity function becomes zero , , as shown in eq .( [ eq : zeta_s ] ) . to confirm the theoretical results shown in fig .[ fig:5 ] , we obtain the effective phase sensitivity function by direct numerical simulations of eq .( [ eq : x_p ] ) with eq .( [ eq : p_aq ] ) as follows : we measure the phase response of the oscillatory convection by applying a weak impulsive perturbation with the spatial pattern to the limit - cycle solution with the phase ; then , normalizing the phase response curve by the weak impulse intensity , we obtain the effective phase sensitivity function .the effective phase sensitivity function obtained by direct numerical simulations with impulse intensity are compared with the theoretical curves in fig .[ fig:6 ] .the simulation results agree quantitatively with the theory .therefore , the phase response curve normalized by the impulse intensity converges to the effective phase sensitivity function as decreases . as shown in fig .[ fig:6](d ) , when the impulsive perturbation is not weak , the dependence of the phase response curve on the impulse intensity becomes nonlinear . in general , when the impulsive perturbation is not weak , the phase response curve is not equal to zero , even though the effective phase sensitivity function is equal to zero , .we also note that the linear dependence region of the phase response curve on the impulse is generally dependent on the spatial pattern of the impulse . ] . in this subsection , we demonstrate the common - noise - induced synchronization between uncoupled hele - shaw cells exhibiting oscillatory convection by direct numerical simulations of the stochastic ( langevin - type ) partial differential equation ( [ eq : x_xi ] ) .theoretical values of both the lyapunov exponents for several spatial patterns with the common noise intensity and the corresponding relaxation time toward the synchronized state are summarized in table [ table:1 ] .figure [ fig:7 ] shows the time evolution of the phase differences when the common noise intensity is .the initial phase values are for .figure [ fig:8 ] shows the time evolution of , which corresponds to fig .[ fig:7 ] .the relaxation times estimated from the simulation results agree reasonably well with the theory ( d ) should be constant because the effective phase sensitivity function is equal to zero , , for this case . 
as shown in fig .[ fig:6](d ) , when the perturbation is not sufficiently weak , the phase response curve is not equal to zero ; this higher order effect causes the slight variations shown in fig .[ fig:7](d ) . ] . as seen in fig .[ fig:7 ] and fig .[ fig:8 ] , the relaxation time for the optimal spatial pattern is actually much smaller than those for the single - mode spatial patterns . for the cases of single - mode patterns ,the relaxation time for the single - mode spatial pattern is also smaller than those for the other single - mode spatial patterns , and .we also note that the time evolution of both and for is significantly different from that for in spite of the similarity between the two spatial patterns of the neighboring modes ; this difference results from the difference of symmetry with respect to the center , as shown in eq .( [ eq : zeta_s ] ) .figure [ fig:9 ] shows a quantitative comparison of the lyapunov exponents between direct numerical simulations and the theory for the case of the optimal spatial pattern .the initial phase values are for , i.e. , the initial phase difference is .the results of direct numerical simulations are averaged over samples for different noise realizations .the simulation results quantitatively agree with the theory .figure [ fig:10 ] shows the global stability of the common - noise - induced synchronization of oscillatory convection for the case of the optimal spatial pattern ; namely , it shows that the synchronization is eventually achieved from arbitrary initial phase differences , i.e. , $ ] .although the lyapunov exponent based on the linearization of eq .( [ eq : theta_xi ] ) quantifies only the local stability of a small phase difference , as long as the phase reduction approximation is valid , this global stability holds true for any spatial pattern with a non - zero lyapunov exponent , namely , the lyapunov exponent is negative , , as found from eq .( [ eq : lambda ] ) .the global stability can be proved by the theory developed in ref . , i.e. , by analyzing the fokker - planck equation equivalent to the langevin - type phase equation ( [ eq : theta_xi ] ) ; in addition , the effect of the independent noise can also be included .our investigations in this paper are summarized as follows . in sec .[ sec:2 ] , we briefly reviewed our phase description method for oscillatory convection in the hele - shaw cell with consideration of its application to common - noise - induced synchronization . in sec .[ sec:3 ] , we analytically investigated common - noise - induced synchronization of oscillatory convection using the phase description method .in particular , we theoretically determined the optimal spatial pattern of the common noise for the oscillatory hele - shaw convection . in sec .[ sec:4 ] , we numerically investigated common - noise - induced synchronization of oscillatory convection ; the direct numerical simulation successfully confirmed the theoretical predictions .the key quantity of the theory developed in this paper is the phase sensitivity function .thus , we describe an experimental procedure to obtain the phase sensitivity function . as in eq .( [ eq : z_cossin ] ) , the phase sensitivity function can be decomposed into the spectral components , which are the effective phase sensitivity functions for the single - mode spatial patterns as shown in eq . ( [ eq : zjk ] ) . in a manner similar to the direct numerical simulations yielding fig. 
[ fig:6 ] , the effective phase sensitivity function for each single - mode spatial pattern can also be experimentally measured .therefore , in general , the phase sensitivity function can be constructed from a sufficiently large set of such .once the phase sensitivity function is obtained , the optimization method for common - noise - induced synchronization can also be applied in experiments .finally , we remark that not only the phase description method for spatiotemporal rhythms but also the optimization method for common - noise - induced synchronization have broad applicability ; these methods are not restricted to the oscillatory hele - shaw convection analyzed in this paper .for example , the combination of these methods can be applied to common - noise - induced phase synchronization of spatiotemporal rhythms in reaction - diffusion systems of excitable and/or heterogeneous media .furthermore , as mentioned above , also in experimental systems , such as the photosensitive belousov - zhabotinsky reaction and the liquid crystal spatial light modulator , the optimization method for common - noise - induced synchronization could be applied .y.k . is grateful to members of both the earth evolution modeling research team and the nonlinear dynamics and its application research team at ifree / jamstec for fruitful comments .is also grateful for financial support by jsps kakenhi grant number 25800222 .h.n . is grateful for financial support by jsps kakenhi grant numbers 25540108 and 22684020 , crest kokubu project of jst , and first aihara project of jsps .e. m. izhikevich , _ dynamical systems in neuroscience : the geometry of excitability and bursting _( mit press , cambridge , ma , 2007 ) . g. b. ermentrout and d. h. terman , _ mathematical foundations of neuroscience _ ( springer , new york , 2010 ) .h. nakao , t. yanagita , and y. kawamura , procedia iutam * 5 * , 227 ( 2012 ) .y. kawamura , h. nakao , k. arai , h. kori , and y. kuramoto , phys .* 101 * , 024101 ( 2008 ) . [ arxiv:0807.1285 ] + y. kawamura , physica d * 270 * , 20 ( 2014 ) . [ arxiv:1312.7054 ]
We investigate common-noise-induced phase synchronization between uncoupled identical Hele-Shaw cells exhibiting oscillatory convection. Using the phase description method for oscillatory convection, we demonstrate that the uncoupled systems of oscillatory Hele-Shaw convection can exhibit in-phase synchronization when driven by weak common noise. We derive the Lyapunov exponent determining the relaxation time for the synchronization, and develop a method for obtaining the optimal spatial pattern of the common noise to achieve synchronization. The theoretical results are confirmed by direct numerical simulations.
road traffic prediction plays an important role in intelligent transport systems by providing the required real - time information for traffic management and congestion control , as well as the long - term traffic trend for transport infrastructure planning .road traffic predictions can be broadly classified into short - term traffic predictions and long - term traffic forecasts .short - term prediction is essential for the development of efficient traffic management and control systems , while long - term prediction is mainly useful for road design and transport infrastructure planning .there are two major categories of techniques for road traffic prediction : those based on non - parametric models and those based on parametric models .non - parametric model based techniques , such as k - nearest neighbors ( knn ) model and artificial neural networks ( ann ) , are inherently robust and valid under very weak assumptions , while parametric model based techniques , such as auto - regressive integrated moving average ( arima ) model and its variants , allows to integrate knowledge of the underlying traffic process in the form of traffic models that can then be used for traffic prediction .both categories of techniques have been widely used and in this paper , we consider parametric model based techniques , particularly starima ( space - time autoregressive integrated moving average)-based techniques . as for the estimation of parameters and coefficients in starima model , overfitting easilyoccurs which makes the predictive performance poor as it overreacts to minor fluctuations in the training data .furthermore , the same model and hence the same correlation structure is used for traffic prediction at different time of the day , which is counter - intuitive and may not be accurate . to elaborate , consider an artificial example of two traffic stations and on a highway , where traffic station is at the down stream direction of .intuitively , the correlation between the traffic observed at and the traffic observed at will peak at a time lag corresponding to the time required to travel from to because at that time lag , the ( approximately ) same set of vehicles that have passed now have reached . obviously , the time required to travel from to depends on the traffic speed , which varies with the time of the day , e.g. peak hours and off - peak hours .accordingly , the time lag corresponding to the peak correlation between the traffic at and the traffic at should also vary with time of the day and , to be more specific , should approximately equal to the distance between and divided by the mean speed of vehicles between and .therefore , in designing the starima model for traffic prediction , the aforementioned time - varying lags should be taken into account for accurate traffic prediction . 
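To make this intuition concrete before turning to the measured data, the short sketch below builds a synthetic pair of flow series in which the downstream series is a delayed, noisy copy of the upstream one, and recovers the travel-time lag as the argument of the peak of the normalized cross-correlation. All names and values are purely illustrative and are not taken from the measurement campaign analyzed in the next section.

```python
import numpy as np

def ccf(u, y, max_lag):
    """Normalized cross-correlation r(k) between u_t and y_{t+k}, for k = 0..max_lag."""
    u = (u - u.mean()) / u.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(max_lag + 1)
    r = np.array([np.mean(u[:len(u) - k] * y[k:]) if k > 0 else np.mean(u * y) for k in lags])
    return lags, r

rng = np.random.default_rng(1)
upstream = rng.poisson(30, size=1000).astype(float)            # flow at the upstream station
true_lag = 3                                                   # travel time, in time slots
downstream = np.roll(upstream, true_lag) + rng.normal(0.0, 1.0, size=1000)

lags, r = ccf(upstream, downstream, max_lag=10)
print("lag of peak correlation:", lags[np.argmax(r)])          # expected: 3
```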
] ] to validate the aforementioned intuition , we analyze the cross - correlation function ( ccf ) of traffic flow data at two traffic stations ( stations 6 and 3 ) , denoted as , from i-80 highway ( more details of data are discussed in section [ sub : data_collection ] ) with the formulation : }{\sigma_{uu}\sigma{}_{yy}}\label{eq : corr_u_y}\ ] ] where and are the traffic flow data collected in time slots from the two traffic stations , is the temporal order in the range of \subset\mathbf{n} ] where is the length of one temporal lag .note that depend on the spatial order in the starima model and can often be measured by loop detectors .furthermore , the advance in telecommunication and electronic technology also brings a number of new techniques that allows us to estimate the travel time , e.g. via smartphones .indeed , the observation discussed in the introduction section suggests another novel way to estimate travel time : we can infer travel time from the correlation of the observed traffic . in order to validate the above discussion, we further analyze the results in fig.[fig : ccf_3_6_different_time_period ] by using the f average speed information at stations 3 and 6 , which is collected in the same day as the traffic flow data used in fig.[fig : ccf_3_6_different_time_period ] .specifically , the average speed from station 3 to station 6 between 6:30 am and 8:30 am is 44.45 feet / second .the average speed is 67.05 feet / second between 19 pm and 24 pm .the maximal temporal lag during these two time periods is respectively 3 and 2 with 30 seconds in each temporal lag . as ,given during two time periods along with the corresponding best temporal lag and , we are able to obtain the following equation according to the theoretical analysis above : substituting the data into formulation , it is easy to find .this result agrees with our theoretical analysis and verifies our speculation that temporal is a function of the variation of average speed in section [ sec : introduction ] .an easy way to classify speed data is by dividing into peak time and off - peak periods .after that , the temporal lag can be calculated using , where is the average speed in time period \{peak , off - peak}. however an empirical classification is often prone to error and inaccuracy .recall the analysis in section [ sec : related_work ] , the evaluation of the average speed is sensitive to the time range selected for peak or off - peak period .it is obvious that the speed is not always fast even during off - peak period from fig .[ fig : speed_flow_data ] .therefore , in this paper an isodata based speed data classification algorithm is developed to deliver an accurate classification . 
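Returning to the numbers quoted above, the relation can be checked directly: the station spacing estimated as average speed multiplied by temporal lag and slot length should be (approximately) the same in both time periods. The few lines below perform this arithmetic; the variable names are ours, and only the speeds, lags and 30-second slot length reported above are used.

```python
dt = 30.0                      # seconds per temporal lag

v_peak, tau_peak = 44.45, 3    # 6:30-8:30 am, feet/second and lags
v_off,  tau_off  = 67.05, 2    # 19-24 pm

d_peak = v_peak * tau_peak * dt
d_off  = v_off  * tau_off  * dt
print(d_peak, d_off)           # ~4000.5 ft vs ~4023 ft: both estimate the same station
                               # spacing, so the temporal lag scales as 1/(average speed)
```

With this consistency in hand, we return to the ISODATA-based classification algorithm just introduced.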
using this algorithm , we firstly classify the speed data collected in each time slot into different clusters .after that , the time period clusters are confirmed based on the time slots contained in different speed clusters .* input : * , ,,,,, * return : * , ****( , ,,,, ) [ algo - isodata ] [ algo - for - start-1 ] , and [ algo - if-1 ] [algo - min - start ] [algo - min - end ] [ algo - endif-1 ] [ algo - for - end-1 ] assuming there is a set of speed data in which is the speed in time slot .the purpose here is to confirm a set of speed clusters , denoted as , where with cluster center and .based on , we can obtain another set of time period clusters , denoted as , in which .in addition , let be a set of continuous time slots , termed as a time range and defined as follows : where is the number of time slots contained in , is a threshold defined as the minimal number of time slots included in a time period .the speed data classification algorithm is given in algorithm [ alg : speed - data - classification ] .in line [ algo - isodata ] , the isodata algorithm is implemented to get speed clusters .the time period clusters are obtained from line [ algo - for - start-1 ] to [ algo - for - end-1 ] .it is worth mentioning that a decision is made to decide whether belongs to by comparing its capacity and threshold ( from line [ algo - if-1 ] to [ algo - for - end-1 ] ) .if does not belong to , line [ algo - min - start ] and [ algo - min - end ] are implemented to allocate each to other by the operation . is defined as the absolute difference between speed recorded in time slot and the average speed calculated during time period , which is presented in . according to the speed and time period clusters obtained from section [ sub : classification_speed ] , we propose a modified starima model , denoted as .the definitions of parameters , and in this model are the same as the original starima model , except that the temporal lag will vary with the spatial order and the average speed in different time periods .more precisely , given a time period , is defined as follows : in [ eq : starima_tv_model ] , is a vector in which each element represents the temporal lag between two station and with the spatial order . is calculated by where is the distance between these two stations and is the average speed in time period which is equal to .note that when , the " `` 0th order neighbor' of a station is itself such that the temporal lag is evaluated with the pacf used in arima .based on the data collection introduced in section [ sec : data_method ] , we utilize the speed and traffic flow data at stations 3 , 4 , 5 and 6 . at each station , there are 2880 data recorded in one day and the length of one time slot is 30 seconds . in order to eliminate noise in the data , we make a " `` smooth' operation by calculating the mean traffic flow every data points and regarding it as one data point . the experimental results are divided into two parts . in the first part , we provide the speed and time period clusters classified by our proposed algorithm . for the speed data , we choose . 
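Before presenting those results, a much-simplified stand-in for Algorithm 1 is sketched below to make the classification step concrete: plain one-dimensional k-means replaces full ISODATA (which additionally splits and merges clusters), and runs of time slots shorter than the minimum length are simply absorbed into the preceding period, a cruder rule than the per-slot re-allocation by speed difference used in the algorithm. Parameter values and data are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_periods(speed, n_clusters=2, min_len=60):
    """Cluster per-slot speeds, then absorb runs shorter than `min_len` slots
    into the preceding period (a crude stand-in for the re-allocation step)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0) \
        .fit(np.asarray(speed, dtype=float).reshape(-1, 1)).labels_
    out = labels.copy()
    start = 0
    for t in range(1, len(out) + 1):
        if t == len(out) or out[t] != out[start]:
            if t - start < min_len and start > 0:
                out[start:t] = out[start - 1]   # run too short: merge into previous period
            start = t
    return out

# toy day of 720 two-minute slots: slow speeds during a morning-peak window
speed = np.full(720, 67.0)
speed[180:300] = 44.0
periods = classify_periods(speed, min_len=60)   # minimum period length = 120 minutes
print(np.unique(periods[180:300]), np.unique(periods[400:]))
```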
in the second part, we present the forecast results of traffic flow in different time periods and stations using model .we choose to calculate the mean traffic flow using original traffic flow data within 2 minutes .firstly , the configuration of input parameters of algorithm is given in table [ tab : the - configuration ] in which the speed data is the results after the smooth operation on the speed data collected from four stations . with this setting ,the smallest length of time range is 120 minutes .the speed and time period clusters classified by algorithm [ algo - isodata ] is presented in table [ tab : speed - time - cluster ] ..the input parameters in algorithm [ algo - isodata][tab : the - configuration ] [ cols="^,^,^,^",options="header " , ] in table [ tab : mse_mape_diff_algo_one_day ] , we compare the mape / mse of one day using , chaos and at four stations . except station 3 , all the mape / mse achieved by our model are better than those of the other two models . furthermore , in table [tab : the - mape / mse - of - two - periods - s6 ] , we present the mape / mse of the forecast results of station 6 using these three models , in which the forecast time ranges are 9 - 10am and 23 - 24pm . it can be seen that the performance of our model is almost coincident with the true data .and chaos comes to the second in the prediction during 23 - 24pm .motivated by the observation that the correlation between traffic at different traffic stations is time - varying and the time lag corresponding to the maximum correlation approximately equals to the distance between two traffic stations divided by the speed of vehicles between them , in this paper , we developed a modified starima model with time - varying lags for short - term traffic flow prediction .experimental results using real traffic data collected on a highway showed that the developed starima - based model with time - varying lags has superior accuracy compared with its counterpart developed using the traditional cross - correlation function and without employing time - varying lags . in an urban environment, the correlation between traffic tends to be much more intricate .it is part of our future work plan to develop prediction technique for urban roads that incorporates the knowledge of the underlying road topology .p. dellacqua , f. bellotti , r. berta , and a. de gloria , " time - aware multivariate nearest neighbor regression methods for traffic flow prediction , _ ieee trans .intelligent transportation systems _ , vol .16 , no . 6 , pp .33933402 , 2015 .kim , j .- s .et al . _ , " urban traffic flow prediction system using a multifactor pattern recognition model , _ ieee trans .intelligent transportation systems _ , vol . 16 , no . 5 , pp . 27442755 , 2015 .e. i. vlahogianni , m. g. karlaftis , and j. c. golias , " short - term traffic forecasting : where we are and where we re going , _ transportation research part c : emerging technologies _ , vol .43 , pp . 319 , 2014 .m. lippi , m. bertini , and p. frasconi , " short - term traffic flow forecasting : an experimental comparison of time - series analysis and supervised learning , _ ieee trans .intelligent transportation systems _ , vol . 14 , no . 2 , pp . 871882 , 2013 .l. song , " improved intelligent method for traffic flow prediction based on artificial neural networks and ant colony optimization . _ journal of convergence information technology _ , vol . 7 , no . 8 , 2012 . b. l. smith , b. m. williams , and r. k. 
oswald , " comparison of parametric and nonparametric models for traffic flow forecasting , _ transportation research part c : emerging technologies _ ,10 , no . 4 , pp . 303321 , 2002 .x. min , j. hu , q. chen , t. zhang , and y. zhang , " short - term traffic flow forecasting of urban network based on dynamic starima model , in _ 2009 ieee int . conf .intelligent transportation systems _ , pp . 16 .a. stathopoulos and m. g. karlaftis , " a multivariate state space approach for urban traffic flow modeling and prediction , _ transportation research part c : emerging technologies _ , vol .11 , no . 2 , pp . 121135 , 2003 .g. mao and b. d. anderson , " graph theoretic models and tools for the analysis of dynamic wireless multihop networks , in _ 2009 ieee wireless communications and networking conference_.1em plus 0.5em minus 0.4emieee , 2009 , pp . 16 .a. a. kannan , b. fidan , and g. mao , " robust distributed sensor network localization based on analysis of flip ambiguities , in _ ieee globecom 2008 - 2008 ieee global telecommunications conference_.1em plus 0.5em minus 0.4emieee , 2008 , pp . 16 .
Based on the observation that the correlation between observed traffic at two measurement points or traffic stations may be time-varying, attributable to the time-varying speed which subsequently causes variations in the time required to travel between the two points, in this paper we develop a modified space-time autoregressive integrated moving average (STARIMA) model with time-varying lags for short-term traffic flow prediction. In particular, the temporal lags in the modified STARIMA change with the time-varying speed at different times of the day or, equivalently, with the (time-varying) time required to travel between two measurement points. Firstly, a technique is developed to evaluate the temporal lag in the STARIMA model, where the temporal lag is formulated as a function of the spatial lag (spatial distance) and the average speed. Secondly, an unsupervised classification algorithm based on the ISODATA algorithm is designed to classify different time periods of the day according to the variation of the speed; the classification helps to determine the appropriate time lag to use in the STARIMA model. Finally, a STARIMA-based model with time-varying lags is developed for short-term traffic prediction. Experimental results using real traffic data show that the developed STARIMA-based model with time-varying lags has superior accuracy compared with its counterpart developed using the traditional cross-correlation function and without employing time-varying lags.
invasion of influenza virus into a host s upper respiratory tract leads to infection of healthy epithelial cells and subsequent production of progeny virions .infection also triggers a variety of immune responses . in the early stage of infection a temporary non - specific response ( innate immunity ) contributes to the rapid control of viral growth while in the late stage of infection , the adaptive immune response dominates viral clearance .the early immune response involves production of antiviral cytokines and cells , e.g. type 1 interferon ( ifn ) and natural killer cells ( nk cells ) , and is independent of virus type . in the special case of a first infection in a naive host ,the adaptive immune response , mediated by the differentiation of naive t cells and b cells and subsequent production of virus - specific t cells and antibodies , leads to not only a prolonged killing of infected cells and virus but also the formation of memory cells which can generate a rapid immune response to secondary infection with the same virus . t cells , which form a major component of adaptive immunity , play an important role in efficient viral clearance .however , available evidence suggests they are unable to clear virus in the absence of antibodies except in hosts with a very high level of pre - existing naive or memory t cells .some studies indicate that depletion of t cells could decrease the viral clearance rate and thus prolong the duration of infection .furthermore , a recent study of human a(h7n9 ) hospitalized patients has implicated the number of effector t cells as an important driver of the duration of infection .this diverse experimental and clinical data , sourced from a number of host - species , indicates that timely activation and elevation of t cell levels may play a major role in the rapid and successful clearance of influenza virus from the host .these observations motivate our modeling study of the role of t cells in influenza virus clearance .viral dynamics models have been extensively applied to the investigation of the antiviral mechanisms of t cell immunity against a range of pathogens , with major contributions for chronic infections such as hiv / siv , htlv - i and chronic lcmv . however , for acute infections such as measles and influenza , highly dynamical interactions between the viral load and the immune response occur within a very short time window , presenting new challenges for the development of models incorporating t cell immunity .existing influenza viral dynamics models , introduced to study specific aspects of influenza infection , are limited in their ability to capture all major aspects of the natural history of infection , hindering their use in studying the role of t cells in viral clearance .some models show a severe depletion of target cells ( i.e healthy epithelial cells susceptible to viral infection ) after viral infection . depletion may be due to either infection or immune - mediated protection .either way , these models are arguably incompatible with recent evidence that the host is susceptible to re - infection with a second strain of influenza a short period following primary exposure .furthermore , as reviewed by dobrovolny _ , target cell depletion in these models strongly limits viral expansion so that virus can be effectively controlled or cleared at early stage of infection even in the absence of adaptive immunity , which contradicts the experimental finding that influenza virus remains elevated in the absence of adaptive immune response . 
while a few models do avoid target cell depletion , they either assume immediate replenishment of target cells or a slow rate of virus invasion into target cells resulting in a much delayed peak of virus titer at day 5 post - infection ( rather than the observed peak at day 2 ) .moreover , models with missing or unspecified major immune components , e.g. no innate immunity , no antibodies or unspecified adaptive immunity , also indicate the need for further model development . for an in - depth review of the current virus dynamics literature on influenza ,we refer the reader to the excellent article by dobrovolny __ .t cells and b cells to produce effector t cells and antibodies , responsible for final clearance of virus . ] in this paper , we construct a within - host model of influenza viral dynamics in naive ( i.e. previously unexposed ) hosts that incorporates the major components of both innate and adaptive immunity and use it to investigate the role of t cells in influenza viral clearance . the model is calibrated against a set of published murine data from miao _ et al ._ and is then validated through demonstration of its ability to qualitatively reproduce a range of published data from immune - knockout experiments . using the model, we find that the recovery time defined to be the time when virus titer first drops below a chosen threshold in the ( deterministic ) model is negatively correlated with the level of effector t cells in an approximately exponential manner . to the best of our knowledge ,this relationship , with support in both h3n2-infected mice and h7n9-infected humans , has not been previously identified .the exponential relationship between t cell level and recovery time is shown to be remarkably robust to variation in a number of key parameters , such as viral production rate , ifn production rate , delay of effector t cell production and the level of antibodies . moreover , using the model , we predict that people with a lower level of naive t cells may receive significantly more benefit from induction of additional effector t cells . such production , arising from immunological memory , may be established through either previous viral infection or t cell - based vaccines .the model of primary viral infection is a coupled system of ordinary and delay differential equations , consisting of three major components ( see fig . 1 for a schematic diagram ) .13 describe the process of infection of target cells by influenza virus and are a major component in almost all models of virus dynamics in the literature .4 and 5 model ifn - mediated innate immunity .thirdly , adaptive immunity including t cells and b cell - produced antibodies for killing infected cells and neutralizing influenza virus respectively are described by eqs . in further detail , eq .1 indicates that the change in viral load ( ) is controlled by four factors : the production term ( ) in which virions are produced by infected cells ( ) at a rate ; the viral natural decay / clearance ( ) with a decay rate of ; the viral neutralisation terms ( and ) by antibodies ( both a short - lived antibody response driven by , e.g. igm , and a longer - lived antibody response driven by , e.g. igg and iga ) , and a consumption term ( ) due to binding to and infection of target cells ( ) . in eq . 
2 , the term models logistic regrowth of the target cell pool .both target cells ( ) and resistant cells ( , those protected due to ifn - induced antiviral effect ) can produce new target cells , with a net growth rate proportional to the severity of infection , ( i.e. the fraction of dead cells ) . is the initial number of target cells and the maximum value for the target cell pool .target cells ( ) are consumed by virus ( ) due to binding ( ) , the same process as .note that and have different measurement units due to different units for viral load ( ) and infected cells ( ) .as already mentioned , the innate response may trigger target cells ( ) to become resistant ( ) to virus , at rate .resistant cells lose protection at a rate .this process also governs the evolution of virus - resistant cells ( ) in eq .5 . eq .3 describes the change of infected cells ( ) .they increase due to the infection of target cells by virus ( ) and die at a ( basal ) rate .two components of the immune response increase the rate of killing of infected cells .ifn - activated nk cells kill infected cells at a rate .effector t cells ( ) produced through differentiation from naive t cells in eq .6 kill at a rate . of noteour previous work has demonstrated that models of the innate response containing only ifn - induced resistance for target cells ( state ; eq .5 ) , while able to maintain a population of healthy uninfected cells , still control viral kinetics through target cell depletion , and therefore can not reproduce viral re - exposure data .given our interest in analysing a model that prevents target cell depletion , inclusion of ifn - activated nk cells ( term ) is an essential part of the model construction .4 models the innate response , as mediated by ifn ( ) .ifn is produced by infected cells at a rate and decays at a rate .6 models stimulation of naive t cells ( ) into the proliferation / differentiation process by virus at a rate ) , where is the maximum stimulation rate and indicates the viral load ( ) at which half of the stimulation rate is achieved .note that this formulation does not capture the process of antigen presentation and t cell activation , but rather is a simple way to establish the essential coupling between the viral load and the rate of t cell activation in the model . in eq .7 , the production of effector t cells ( ) is assumed to be an `` advection flux '' induced by a delayed virus - stimulation of naive t cells ( the first term on the righthand side of eq .the delayed variables , and , equal zero when .the introduction of the delay is to phenomenologically model the delay induced by both naive t cell proliferation / differentiation and effector t cell migration and localization to the site of infection for antiviral action .the delay also captures the experimental finding that naive t cells continue to differentiate into effector t cells in the absence of ongoing antigenic stimulation .the multiplication factor indicates the number of effector t cells produced from one naive t cell , where is the average effector t cell production rate over the delay period .the exponential form of the multiplication factor is derived based on the assumption that cell differentiation and proliferation follows a first - order advection reaction equation .effector t cells decay at a rate .similar to t cells , eqs . 
8 and 9model the proliferation / differentiation of naive b cells , stimulated by virus presentation at rate .stimulation subsequently leads to production of plasma b cells ( ) after a delay .the multiplication factor indicates the number of plasma b cells produced from one naive b cell , where is the production rate .plasma b cells secrete antibodies , which exhibit two types of profiles in terms of experimental observation : a short - lived profile ( e.g. igm lasting from about day 5 to day 20 post - infection ) and a longer - lived profile ( e.g. igg and iga lasting weeks to months ) .these two antibody responses are modeled by eqs .10 and 11 wherein different rates of production ( and ) and consumption ( and ) are assumed .the model contains 11 equations and 30 parameters ( see table 1 ) .this represents a serious challenge in terms of parameter estimation , and clearly prevents a straightforward application of standard statistical techniques . to reduce uncertainty ,a number of parameters were taken directly from the literature , as per the citations in table . 1 .the rest were estimated ( as indicated in table 1 ) by calibrating the model against the published data from miao _ et al ._ who measured viral titer , t cell counts and igm and igg antibodies in laboratory mice ( exhibiting a full immune response ) over time during primary influenza h3n2 virus infection ( see for a detailed description of the experiment ) .the approach to estimating the parameters based on miao _ et al . _s data is provided in the _ supplementary material _ and the estimated parameter values are given in table 1 .note that the data were presented in scatter plots in the original paper , while we presented the data here in mean sd at each data collection time point for a direct comparison with our mean - field mathematical model . for model simulation ,the initial condition is set to be unless otherwise specified . the initial target cell number ( )was estimated by petrie _we estimate that of order 100 cells ( resident in the spleen ) are able to respond to viral infection ( ) ( personal communication , n. lagruta , monash university , australia ) .note that 100 naive t cells might underestimate the actual number of naive precursors that could respond to all the epitopes contained within the virus but does not qualitatively alter the model dynamics and predictions ( see _ results _ where the naive t cell number is varied between 0 to 200 ) . in the absence of further data, we also use this value for the initial naive b cell number ( ) , but again this choice does not qualitatively alter the model predictions . the numerical method and code ( implemented in matlab , version r2014b , the mathworks , natick , ma ) for solving the model are provided in the _ supplementary material_. ) . note that due to the limit of detection for the viral load ( occurring after 10 days post - infection as seen in viral load data ), the last three data points in the upper - left panel were not taken into consideration for model fitting . ]clinical influenza a(h7n9 ) patient data was used to test our model predictions on the relationship between t cell number and recovery time .the data was collected from 12 surviving patients infected with h7n9 virus during the first wave of infection in china in 2013 ( raw data is provided in _dataset s1 _ ; see the paper of wang _ et al . 
_ for details of data collection ; this study was reviewed and approved by the shaphc ethics committee ) .note that the clinical data were scarce for some patients .for those patients , we have assumed that the available data are representative of the unobserved values in the neighboring time period .for each patient , we took the average t cell number in peripheral blood mononuclear cells ( pbmc ) for the period from day 8 to day 22 ( or the recovery day if it comes earlier ) post - admission as a measure of the effector t cell level .this period was chosen _ a priori _ as it roughly matches the duration of the t cell profile and clinical samples were frequently collected in this period .the average t cell count was given by the ratio of the total area under the data points ( using trapezoidal integration ) to the number of days from day 8 to day 22 ( or the recovery day if it comes earlier ) .for those patients for which samples at days 8 and/or 22 were missing we specified the average t cell level at the missing time point to be equal to the value from the nearest sampled time available .) , effector t cells ( ) , short - lived antibody response ( ) and long - lived antibody response ( ) have been shown in fig . 2 .] we first analyze the model behavior in order to establish a clear understanding of the model dynamics .fig . 2 shows solutions ( time - series ) for the model compartments ( viral load , t cells and igm and igg antibody ) calibrated against the murine data from .solutions for the remaining model compartments are shown in fig .the model ( with both innate and adaptive components active ) prevents the depletion of target cells ( see fig .3 wherein over 50% of target cells remain during infection ) and results in a minor loss of just 1020% of healthy epithelial cells ( i.e. the sum of target cells ( ) and virus - resistant cells ( ) ; see supplementary fig .the primary driver for the maintenance of the target cell pool during acute viral infection is a timely activation of the innate immune response , and in particular the natural killer cells ( supplementary fig .since we have previously shown that the target - cell limited model ( even with the resistant cell compartment ) is unable to reproduce observations from heterologous re - exposure experiments , our model improves upon previous models where viral clearance was only achieved through depletion of target cells ( a typical solution shown in supplementary fig .importantly , our result is distinguished from that of saenz et . , wherein the healthy cell population was similarly maintained , but primarily through induction of the virus - resistant state , thereby rendering that model incapable of capturing re - infection behavior as established in animal models .the modeled viral dynamics exhibits three phases , each dominated by the involvement of different elements of the immune responses ( fig . 4 ) .immediately following infection ( 02 days post - infection ) and prior to the activation of the innate ( and adaptive ) immune responses , virus undergoes a rapid exponential growth ( fig .4a ) . in the second phase ( 25 days post - infection ), the innate immune response successfully limits viral growth ( fig .4a ) . in the third phase ( 46 days post - infection ) , adaptive immunity ( antibodies and t cells )is activated and viral load decreases rapidly , achieving clearance .4b and 4c demonstrate the dominance of the different immune mechanisms at different phases . 
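As a brief aside, before examining fig. 4b in detail, the patient-level summary statistic described above (the trapezoidal average of the measured CD8+ T cell counts between day 8 and day 22, or the recovery day if earlier) can be computed with a few lines of numpy. The sampling days and counts below are made-up placeholders rather than patient data, and clamping to the nearest available sample at the window edges is a slight simplification of the rule stated above.

```python
import numpy as np

def average_t_cell(days, counts, start=8.0, end=22.0, recovery_day=None):
    """Trapezoidal average of T cell counts between `start` and min(`end`, recovery day)."""
    if recovery_day is not None:
        end = min(end, recovery_day)
    days = np.asarray(days, dtype=float)
    counts = np.asarray(counts, dtype=float)
    inside = (days > start) & (days < end)
    grid = np.concatenate(([start], days[inside], [end]))
    vals = np.interp(grid, days, counts)   # clamps to the nearest sample outside the data range
    area = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))
    return area / (end - start)

# illustrative (made-up) sampling days and CD8+ T cell counts for one patient
print(average_t_cell([9, 12, 16, 20], [150, 420, 380, 260], recovery_day=19.0))
```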
in fig .4b models with and without immunity are indistinguishable until day 2 ( shaded region ) , before diverging dramatically when the innate and then adaptive immune responses influence the dynamics . in fig . 4c, models with and without an adaptive response only diverge at around day 4 as the adaptive response becomes active .we have further shown that this three - phase property is a robust feature of the model , emergent from its mathematical structure and not a property of fine tuning of parameters ( see supplementary fig .importantly , it clearly dissects the periods and effect of innate immunity , extending on previous studies of viral infection phases where the innate immune response was either ambiguous or ignored . and in the model ) .the trajectories overlap prior to the activation of the innate response , before diverging due to target cell depletion .the shaded region highlights the first phase ( exponential growth ) . in panel( c ) , the dashed line shows viral kinetics in the absence of adaptive immunity ( by letting ; innate immunity remains active ) .the trajectories overlap prior to the activation of the adaptive response .the shaded region highlights the second phase ( innate response ) .note : changes in model parameters shifts where the three phases occur , but does not alter the underlying three - phase structure .i.e. existence of the three phases is robust to variation in parameters ( see _ supplementary material _ and supplementary fig .s3 in particular ) . ] as reviewed by dobrovolny _ , a number of _ in vivo _ studies have been performed to dissect the contributions of t cells and antibodies .we use the findings of these studies to validate our model , by testing how well it is able to reproduce the experimental findings ( without any further adjustment to parameters ) .although the determination of the role of t cells is often hindered by co - inhibition of both t cells and the long - lived antibody response ( e.g. using nude mice ) , it is consistently observed that antibodies play a dominant role in final viral clearance while t cells are primarily responsible for the timely killing of infected cells and so indirectly contribute to an increased rate of removal of free virus towards the end of infection .furthermore , experimental data demonstrate that a long - lived antibody response is crucial for achieving complete viral clearance , while short - lived antibodies are only capable of driving a transient decrease in viral load .we find that our model ( with parameters calibrated against miao _s data ) is able to reproduce these observations : * virus can rebound in the absence of long - lived antibody response ( see fig . 5 and supplementary fig .both the t cell response and short - lived antibody response only facilitate a faster viral clearance , and are incapable of achieving clearance in the absence of long - lived antibody response ( see fig . 
5 and supplementary fig .a lower level of t cells ( modulated by a decreased level of initial naive t cells , ) significantly prolongs the viral clearance ( see supplementary fig .in addition , the model also predicts a rapid depletion of naive t cells after primary infection ( see fig .3 ) , which represents a full recruitment of naive t cell precursors .this result may be associated with the experimental evidence suggesting a strong correlation between the naive t cell precursor frequencies and effector t cell magnitudes for different pmhc - specific t cell populations .note that in fig .5 no adjustments to the model ( e.g. to the vertical scale ) were made ; its behavior is completely determined by the calibration to the aforementioned murine data and so these findings represent a ( successful ) prediction of the model .t- , iga- , igg- " was modeled by letting and . external igm " ( in addition to the igm produced by plasma cells ) was modeled by adding a new term to eq . 1 where follows a piecewise function for , for , for and for .( b ) data is from the paper of iwasaki _ et al .the data indicates that the long - lasting iga response , but not the long - lasting igg response or the short - lasting igm response , is necessary for successful viral clearance . no long - lived antibody response " was modeled by letting .note that miao _only measured igm and igg , but not iga . as such , our model s long - lived antibody response was calibrated against igg kinetics ( see fig . 2 ) .therefore , we emphasise that we can only investigate the relative contributions of short - lived and long - lived antibodies . ] in summary , we have demonstrated that our model with parameters calibrated against murine data exhibits three important phases characterized by the involvement of various immune responses .advancing on previous models , our model does not rely on target cell depletion , and successfully reproduces a multitude of behavior from knockout experiments where particular components of the adaptive immune response were removed .this provides us with some confidence that each of the major components of the immune response has been captured adequately by our model , allowing us to now make predictions on the effect of the cellular adaptive response on viral clearance .having established that our model is ( from a structural point of view ) biologically plausible and that our parameterization is capable of reproducing varied experimental data under different immune conditions ( i.e. knockout experiments ) , we now study how the cellular adaptive response influences viral kinetics in detail .we focus on the key clinical outcome of recovery time , defined in the model as the time when viral titer first falls below , the minimum value detected in relevant experiments ( e.g. fig . 2 ) .t cells plays an important role in determining recovery time .recovery time is defined to be the time when viral load falls to 1 .panel ( a ) shows that the average effector t cell number over days 620 is linearly related to the naive t cell number ( i.e. 
) .panel ( b ) shows that the recovery time is approximately exponentially related to the initial naive t cell number .combined , these results give panel ( c ) wherein an approximately exponential relationship is observed between the average t cell number and recovery time , both of which are experimentally measurable .note that the exponential / linear fits shown in the figures are not generated by the viral dynamics model but are used to indicate the trends ( evident visually ) in the model s behavior .panel ( d ) shows that varying the delay ( in a similar way to that shown in fig .s5 in the _ supplementary material _ ) , rather than the naive t cell number , does not alter the exponential relationship . in panel( d ) , the crosses represent the results of varying and the empty circles are the same as those in panel ( c ) for comparison . ]time series of the viral load show that the recovery time decreases as the initial naive t cell number ( ) increases ( supplementary fig .s4 ) . with that in mind, we now examine how recovery time is associated with the clinically relevant measure of effector t cell level during viral infection . with an increasing initial level of naive t cells , the average level of effector t cells over days 620 increases linearly ( fig .6a ) , while the recovery time decreases in an approximately exponential manner ( fig .combining these two effects gives rise to an approximately exponential relation between the level of effector t cells and recovery time ( fig .note that the exponential / linear fits shown in the figures are simply to aid in interpretation of the results .they are not generated by the viral dynamics model . if varying the delay for naive t cell activation and differentiation , , while keeping the naive t cell number fixed ( at the default value of 100 ) , we find that the average level of effector t cells is exponentially related to the delay , while the recovery time is dependent on the delay in a piecewise linear manner ( see supplementary fig .s5 ) . nevertheless , the combination still leads to an approximately exponential relationship between the level of effector t cells and recovery time ( supplementary fig .s6 ) , which is almost identical to that of varying naive t cells ( fig .we also examine the sensitivity of the exponential relationship to other model parameters generally accepted to be important in influencing the major components of the system , such as the viral production rate , ifn production rate and naive b cell number .we find that the exponential relationship is robust to significant variation in all of these parameters ( see supplementary figs .s6 and s7 and fig . 9 ) .these results suggest that a higher level of effector t cells is critical for early recovery , consistent with experimental findings .t cell number and recovery time .the x - axis is the average level of functional effector t cells ( i.e. cells ) over the period from day 8 to day 22 ( or the recovery day if it comes earlier ) .spearman s rank correlation test indicates a significant negative correlation between the average t cell numbers and recovery time ( ) . excluding one of the patients ( no .dataset s1 _ ; discussed in the _ discussion _ ) , all other data points ( solid dots ) are fitted by an offset exponential function , indicating that the best achievable recovery time for individuals with a high t cell response is approximately 17.5356 . 
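A minimal sketch of the offset-exponential fit referred to in the caption is given below; the (T cell level, recovery time) pairs are illustrative placeholders rather than the patient data, and the parameterization recovery_time = T_min + A exp(-k E) is our reading of the "offset exponential function" described above.

```python
import numpy as np
from scipy.optimize import curve_fit

def offset_exp(E, T_min, A, k):
    return T_min + A * np.exp(-k * E)

E = np.array([50, 120, 200, 350, 500, 800, 1200], dtype=float)   # average CD8+ level (placeholder)
T_rec = np.array([34, 29, 25, 21, 19, 18, 17.6])                 # recovery day (placeholder)

popt, _ = curve_fit(offset_exp, E, T_rec, p0=(17.0, 20.0, 0.005), maxfev=10000)
print("estimated floor (best achievable recovery time):", popt[0])
```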
] finally , and perhaps surprisingly given our model has been calibrated purely on data from the mouse , a strikingly similar relationship as shown in fig .6c is found in clinical data from influenza a(h7n9 ) virus - infected patients ( fig .7 ) . excluding one patient ( no .dataset s1 _ ; the exclusion is considered further in the _ discussion _ ) , average cells and recovery time are negatively correlated ( spearman s , ) and well captured by an exponential fit with an estimated offset ( see fig . 7 caption for details ) .the exponential relationship ( observed in both model and data ) has features of a rapid decay for relatively low / intermediate levels of effector t cells and a strong saturation for relatively high t levels , implying that even with a very high level of naive t cells , recovery time can not be reduced below a certain value ( in this case , estimated to be approximately 17 days ) .of course , the exponential relationship ( i.e. the scale of t cell level or recovery time ) , is only a qualitative one , as we have no way to determine the scaling between different x - axis measurement units , nor adjust for particular host and/or viral factors that differ between the two experiments ( i.e. h3n2-infection in the mouse versus h7n9-infeciton in humans ) .in addition to naive t cells , memory t cells ( established through previous viral infection ) may also significantly affect recovery time due to both their rapid activation upon antigen stimulus and faster replication rate . to study the role of memory t cells, we must first extend our model .as we are only concerned with how the presence of memory t cells influences the dynamics , as opposed to the development of the memory response itself , the model is modified in a straightforward manner through addition of two additional equations which describe memory t cell ( ) proliferation / differentiation : accordingly the term in eq . 3is modified to .the full model and details on the choice of the additional parameters are provided in the _supplementary material_. note that the model component , , may include different populations of memory t cells , including those directly specific to the virus and those stimulated by a different virus but which provide cross - protection .t cells on viral clearance .recovery time is defined to be the time when the viral load falls to 1 .panel ( a ) demonstrates that varying the number of memory t cells ( ) reduces the recovery time for any naive t cell number ( i.e. ) .note that saturation is observed for where the recovery time is about 6 days , independent of the naive cell numbers .panel ( b ) demonstrates how the presence of pre - existing memory t cells ( solid dots ) leads to a shorter recovery time when compared to the case where no memory t cells are established ( open circles ) .note the time scale difference in panels ( a ) and ( b ) .this simulation is based on the assumption that the level of pre - existing memory t cells is assumed to be either 1% or 5% ( as indicated in the legend ) of the maximum effector t cell number due to primary viral infection .the memory cell number ( which is not shown in this figure ) is about 30 time as many as the naive cell number shown in the figure , i.e. 30 naive cells result in about 900 memory cells before re - infection . 
]8a shows how the pre - existing memory t cell number ( ) changes the exponential relationship between naive t cells and recovery time .importantly , as the number of memory t cells increases , the recovery time decreases for any level of naive t cells and the exponential relationship remains .the extent of reduction in the recovery time for a relatively low level of naive t cells is greater than that for a relatively high level of naive t cells .this suggests that people with a lower level of naive t cells may benefit more through induction of memory t cells , emphasizing the potential importance for taking prior population immunity into consideration when designing t cell - based vaccines .the above result is based on the assumption that the initial memory t cell number upon re - infection is independent of the number of naive t cells available during the previous infection .however , it has also been found that the stationary level of memory t cells is usually maintained at about 510% of the maximum antigen - specific t cell number during primary viral infection .this indicates that people with a low naive t cell number may also develop a low level of memory t cells following infection . in consequence, such individuals may be relatively more susceptible to viral re - infection .this alternative and arguably more realistic relationship between the numbers of naive and memory t cells is simulated in fig .8b where memory t cell levels are set to 5% of the maximum of the effector t cell level .results suggest that , upon viral re - infection , pre - existing memory t cells are able to significantly improve recovery time except for in hosts with a very low level of naive t cells ( fig .this is in accordance with the assumption that a smaller naive pool leads to a smaller memory pool and in turn a weaker shortening in recovery time .although the model suggests that the failure of memory t cells to protect the host is unlikely to be observed ( because of the approximately 30 fold increase in the size of the memory pool relative to the naive pool ) , the failure range may be increased if the memory pool size is much smaller ( modulated by , say , changing 5% to 1% in the model ) .therefore , for people with a low naive t cell number , the level of memory t cells may be insufficient and prior infection may provide very limited benefit , further emphasizing the opportunity for novel vaccines that are able to induce a strong memory t cell response to improve clinical outcomes . t cell number and recovery time .recovery time is defined to be the time when viral load falls to 1 .different antibody levels are simulated by varying the initial number of naive b cells ( i.e. at ) . ]antibodies appear at a similar time as effector t cells during influenza viral infection and may enhance the reduction in the recovery time in addition to t cells . by varying the naive b cell number ( as a convenient , but by no means unique , way to influence antibody level ) , we find that increasing the antibody level shortens the recovery time regardless of the initial naive t cell number , leaving the exponential relation largely intact ( fig . 
9 ) .a slight saturation occurs for the case in which levels of both naive b cells and t cells are low .moreover , variation in naive b cell number also results in a wider variation in recovery time for a lower naive t cell level , suggesting that people with a lower level of naive t cells may , once again , receive a more significant benefit ( in terms of recovery time ) through effective induction of an antibody response via vaccination .in this paper , we have studied the role of t cells in clearing influenza virus from the host using a viral dynamics model .the model was calibrated on a set of published murine data from miao _ et al ._ and has been further shown to be able to reproduce a range of published data from experiments with different immune components knocked out . by avoiding target - cell depletion ,our model is also compatible with re - infection data , providing a strong platform on which to examine the role of t cells in determining recovery time from infection .our primary finding is that the time of recovery from influenza viral infection is negatively correlated with the level of effector t cells in an approximately exponential manner .this robust property of infection has been identified from the model when calibrated against influenza a(h3n2 ) infection data in mice , but also observed in clinical case series of influenza a(h7n9 ) infection in humans ( fig .7 ) .our findings , in conjunction with conclusions on the potential role for a t cell vaccine that stimulates and/or boosts the memory response , suggest new directions for research in both non - human species and further studies in humans on the association between t cell levels and clinical outcomes .further research , including detailed statistical fitting of our model to an extensive panel of infection data ( as yet unavailable ) from human and non - human species , is required to establish the generality of these relationships and provide quantitative insights for specific viruses in relevant hosts .the non - linear relationship between effector t cell level and recovery time may be useful in clinical treatment .the saturated property of the relation implies that a linear increase in the effector t cell level may result in diminishing incremental improvements in patient recovery times .with evidence of a possible age - dependent loss of naive t cells , our model results imply that boosting the t cell response via t cell vaccination may be particularly useful for those with insufficient naive t cells .the population - level consequences of such boosting strategies , while beyond the scope of this work , have previously been considered by the authors .we also investigated the effect of memory t cell level on viral clearance and found unsurprisingly that a high pre - existing level of memory t cells was always beneficial .however , our results suggest that pre - existing memory t cells may be particularly beneficial for certain groups of people . for example , if the memory t cell number induced by viral infection or vaccination is assumed to be relatively constant for everyone , people with less naive t cells would benefit more upon viral re - infection ( see fig . 8a ) . 
on the other hand ,if assuming pre - existing memory t cell number is positively correlated with the number of naive t cells ( simulated in fig .8b ) , people with more naive t cells would benefit more upon viral re - infection .emerging evidence suggests that the relationship between the level of memory t cells and naive precursor frequencies is likely to be deeply complicated . in that context ,our model predictions emphasize the importance for further research in this area , and the necessity to take prior population immunity into consideration when designing t cell vaccines .we modeled both short - lived and long - lived antibody responses .experimental data and model predictions consistently show that the short - lived antibody response results in a temporary reduction in virus level whereas the long - lived antibody response is responsible for complete viral clearance ( fig .we emphasize here that although the model is able to capture the observed short - lived and long - lived antibody responses ( in order to study the virus - immune response interactions ) , it is not designed to investigate the mechanisms inducing different antibody responses .the observed difference in antibody decay profile may be a result of many factors including the life times of different antibody - secreting cell types , different antibody life times and antibody consumption through neutralizing free virions .detailed study of these phenomena requires a more detailed model and associated data for parameter estimation and model validation , and is thus left for future work .similarly , t cells are also known to perform a variety of functions in the development of immunity , such as facilitation of the formation and optimal recall of t cells or even direct killing of infected cells during viral infection .their depletion due to , say , hiv infection has also been associated with more severe clinical outcomes following influenza infection .some of the major functions of t cells may be considered to be implicitly modeled through relevant parameters such as the rate of recall of memory t cells ( modeled by the delay ) in our extended model which includes memory t cells .however , a detailed viral dynamics study of the role of t cells in influenza infection , including in hiv infected patients with depleted t cells , remains an open and important challenge . in a recent theoretical study, it was found that spatial heterogeneity in the t cell distribution may influence viral clearance .resident t cells in the lungs have a more direct and significant effect on timely viral clearance than do naive and memory pools resident in lymph nodes .although this factor has been partially taken into consideration in our model by introducing a delay for naive / memory t cells , lack of explicit modeling of the spatial dynamics limits a direct application of our model to investigate these spatial effects . 
finally , as noted in the results , one of the influenza a(h7n9)-infected patients ( patient a79 )was not included in our analysis of the clinical data ( fig .although our model suggests some possibilities for the source of variation due to possible variation in parameter values , large variations in recovery time are only expected to occur for relatively low levels of naive t cells , nominally incompatible with this patient s moderate t cell response but a relatively long recovery time .however , we note that t cell counts for this patient were only collected at days 10 and 23 , and that the count at day 10 was particularly low and much lower than that at day 23 ( see _ dataset s1 _ ) .we suspect that delayed , rather than weakened , production ( to at least day 10 ) of the t cell response in this patient substantially contributed to the observed delay in recovery .further investigation of this patient s clinical course and clinical samples is currently being undertaken .100 taubenberger , j. k. , and morens , d. m. ( 2008 ) .the pathology of influenza virus infections ._ annu rev pathol . _ * 3 , * 499522 .kreijtz , j. h. _ et al . _immune responses to influenza virus infection ._ virus res . _* 162 , * 1930 .goodbourn , s. _ et al ._ ( 2000 ) .interferons : cell signalling , immune modulation , antiviral response and virus countermeasures . _ j gen virol . _* 81 , * 23412364 .sadler , a. j. , and williams , b. r. g. ( 2008 ) . interferon - inducible antiviral effectors ._ nat rev immunol ._ * 8 , * 559568 .biron , c. a. _ et al . _natural killer cells in antiviral defense : function and regulation by innate cytokines ._ annu rev immunol ._ * 17 , * 189220 .jost , s. , and altfeld , m. ( 2013 ) .control of human viral infections by natural killer cells ._ annu rev immunol . _ * 31 , * 163194 .iwasaki , a. , and pillai , p. s. ( 2014 ) .innate immunity to influenza virus infection ._ nat rev immunol . _ * 14 , * 315328 .murali - krishna , k. _ et al . _counting antigen - specific cd8 t cells : a reevaluation of bystander activation during viral infection . _ immunity ._ * 8(2 ) , * 177187 . wherry , e. j. , and ahmed , r. ( 2004 ) .t - cell differentiation during viral infection ._ j virol . _ * 78(11 ) , * 55355545 . la gruta , n. l. , and turner , s. j. ( 2014 ) .t cell mediated immunity to influenza : mechanisms of viral control ._ trends immunol . _ * 35(8 ) , * 396402 .zhang , n. , and bevan , m. j. ( 2011 ) . t cells : foot soldiers of the immune system. _ immunity . _ * 35(2 ) , * 161168 .iwasaki , t. , and nozima , t. ( 1977 ) .the roles of interferon and neutralizing antibodies and thymus dependence of interferon and antibody production ._ j immunol . _* 118 , * 256263 .fang , m. , and sigal , l. j. ( 2005 ) .antibodies and t cells are complementary and essential for natural resistance to a highly lethal cytopathic virus ._ j immunol ._ * 175 , * 68296836 .graham , m. b. , and braciale , t. j. ( 1997 ) . resistance to and recovery from lethal influenza virus infection in b lymphocyte - deficient mice ._ j exp med ._ * 186 , * 20632068 .moskophidis , d. , and kioussis , d. ( 1998 ) .contribution of virus - specific cytotoxic t cells to virus clearance or pathologic manifestations of influenza virus infection in a t cell receptor transgenic mouse model . _ j exp med . _ * 188(2 ) , * 223232 .valkenburg sa ._ et al . _. 
protective efficacy of cross - reactive t cells recognising mutant viral epitopes depends on peptide - mhc - i structural interactions and t cell activation threshold ._ plos pathog . _ * 6(8 ) , * e1001039 .doi:10.1371/journal.ppat.1001039 .yap , k. , and ada , g. ( 1978 ) .cytotoxic t cells in the lungs of mice infected with an influenza a virus ._ scand j immunol . _ * 7(1 ) , * 7380 .wells , m. a. _ et al . _recovery from a viral respiratory infection : i. influenza pneumonia in normal and t - deficient mice . _j immunol . _* 126 , * 10361041 .hou , s. _ et al . _delayed clearance of sendai virus in mice lacking class i mhc - restricted t cells ._ j immunol . _ * 149 , * 13191325. bender , b. s. _ et al . _transgenic mice lacking class i major histocompatibility complex - restricted t cells have delayed viral clearance and increased mortality after influenza virus challenge . _j exp med ._ * 175 , * 11431145 .wang , z. _ et al . _recovery from severe h7n9 disease is associated with diverse response mechanisms driven by t - cells ._ nat commun ._ * 6*,7783 ; 10.1038/ncomms7833 .perelson , a. s. _ et al . _hiv-1 dynamics in vivo : virion clearance rate , infected cell life - span , and viral generation time . _ science . _* 217 , * 15821586 .perelson , a. s. ( 2002 ) .modelling viral and immune system dynamics ._ nat rev immunol . _* 2 , * 2836 .antia , r. _ et al . _models of responses : 1 . what is the antigen - independent proliferation program ._ j theor biol . _* 221 , * 585598 .chao , d. l. _ et al . _( 2004 ) . a stochastic model of cytotoxic t cell responses ._ j theor biol . _ * 228 , * 227240 .hraba , t. , and doleal , j. ( 1995 ) .mathematical modelling of hiv infection therapy ._ int j immunopharmac . _ * 17(6 ) , * : 523526 .de boer , r. j. ( 2007 ) . understanding the failure of t - cell vaccination against simian / human immunodeficiency virus ._ j virol . _ * 81(6 ) , * 28382848 .lim , a. g. , and maini , p. k. ( 2014 ) .htlv - i infection : a dynamic struggle between viral persistence and host immunity ._ j theor biol . _* 352 , * 92108 .althaus , c. l. _ et al . _dynamics of t cell responses during acute and chronic lymphocytic choriomeningitis virus infection ._ j immunol . _ * 179(5 ) , * 29442951 .le , d. _ et al . _mathematical modeling provides kinetic details of the human immune response to vaccination ._ front cell infect microbiol . _* 4 , * 177 ; 10.3389/fcimb.2014.00177 .heffernan , j. m. , and keeling , m. j. ( 2008 ) .an in - host model of acute infection : measles as a case study ._ theor popul biol . _ * 73 , * 134147. bocharov , g. a. , and romanyukha , a. a. ( 1994 ) .mathematical model of antiviral immune response iii .influenza a virus infection ._ j theor biol . _ * 167 , * 323360. chang , d. b. , and young , c. s. ( 2007 ) .simple scaling laws for influenza a rise time , duration , and severity ._ j theor biol . _ * 246 , * 621635 .hancioglu , b. _ et al . _( 2007 ) . a dynamical model of human immune response to influenza a virus infection ._ j theor biol ._ * 246 , * 7086 .handel , a. , and antia , r. ( 2008 ) .a simple mathematical model helps to explain the immunodominance of cd8 t cells in influenza a virus infections . __ * 82(16 ) , * 77687772 .lee , h. y. _ et al . _simulation and prediction of the adaptive immune response to influenza a virus infection ._ j virol . _ * 83(14 ) , * 71517165 .saenz , r. a. _ et al . _dynamics of influenza virus infection and pathology ._ j virol . _* 84 , * 39743983 .miao , h. _ et al . _( 2010 ) . 
quantifying the early immune response and adaptive immune response kinetics in mice infected with influenza a virus . _j virol . _ * 84 , * 66876698 .dobrovolny , h. m. _ et al . _( 2013 ) . assessing mathematical models of influenza infections using features of the immune response . _plos one . _ * 8(2 ) , * e0057088 . doi:10.1371/journal.pone.0057088 .reperant , l. a. _ et al . _the immune response and within - host emergence of pandemic influenza virus ._ lancet . _* 384 , * 20772081 .crauste , f. _ et al . _( 2015 ) . predicting pathogen - specific cd8 t cell immune responses from a modeling approach . _j theor biol . _ * 374 , * 6682 .zarnitsyna , v. i. _ et al . _. mathematical model reveals the role of memory cd8 t cell populations in recall responses to influenza ._ front immunol . _ * 7 , * 165 ; 10.3389/fimmu.2016.00165 .laurie , k. l. _ et al . _the time - interval between infections and viral hierarchies are determinants of viral interference following influenza virus infection in a ferret model . _ j infect dis . _ * 212(11 ) , * 17011710 .kris , r. _ et al . _passive serum antibody causes temporary recovery from influenza virus infection of the nose , trachea and lung of nude mice ._ immunol . _* 63 , * 349353 .pawelek , k. a. _ et al . _modeling within - host dynamics of influenza virus infection including immune responses ._ plos comput biol ._ * 8(6 ) , * e1002588 .doi:10.1371/journal.pcbi.1002588 .cao , p. _ et al . _innate immunity and the inter - exposure interval determine the dynamics of secondary influenza virus infection and explain observed viral hierarchies ._ plos comput biol . _ * 11(8 ) , * e1004334 .doi:10.1371/journal.pcbi.1004334 .baccam , p. _ et al . _kinetics of influenza a virus infection in humans ._ j virol ._ * 80 , * 75907599. hwang , i. _ et al . _. activation mechanisms of natural killer cells during influenza virus infection ._ plos one . _ * 7(12 ) , * e51858 .doi:10.1371/journal.pone.0051858 .kaech , s. m. , and ahmed , r. ( 2001 ) .memory t cell differentiation : initial antigen encounter triggers a developmental program in nave cells ._ nat immunol . _* 2 , * 415422 .cerwenka , a. _ et al . _naive , effector , and memory cd8 t cells in protection against pulmonary influenza virus infection : homing properties rather than initial frequencies are crucial ._ j immunol . _ * 163 , * 55355543 .lawrence , c. w. _ et al . _frequency , specificity , and sites of expansion of t cells during primary pulmonary influenza virus infection ._ j immunol . _ * 174 , * 53325340 .van stipdonk , m. j. _ et al . _nave ctls require a single brief period of antigenic stimulation for clonal expansion and differentiation ._ nat immunol . _* 2 , * 423429 .petrie , s. m. _ et al . _( 2013 ) . reducing uncertainty in within - host parameter estimates of influenza infection by measuring both infectious and total viral load . _plos one . _ * 8(5 ) : * e0064098 .doi:10.1371/journal.pone.0064098 .smith , a. m. _ et al . _an accurate two - phase approximate solution to an acute viral infection model ._ j math biol . _ * 60 , * 711726. neff - laford , h. d. _ et al . _fewer ctl , not enhanced nk cells , are sufficient for viral clearance from the lungs of immunocompromised mice ._ cellul immunol . _ * 226 , * 5464 .jenkins , m. k. , and moon , j. j. ( 2012 ) .the role of naive t cell precursor frequency and recruitment in dictating immune response magnitude ._ j immunol . _ * 188 , * 41354140 .marois , i. _ et al . 
_the administration of oseltamivir results in reduced effector and memory t cell responses to influenza and affects protective immunity ._ faseb j. _ * 29 , * pii : fj.14 - 260687 .asano , m. s. , and ahmed , r. ( 1996 ) .cd8 t cell memory in b cell - deficient mice ._ j exp med ._ * 183 , * 21652174 .veiga - fernandes , h. _ et al ._ ( 2000 ) .response of naive and memory t cells to antigen stimulation _ in vivo_. _ nat immunol . _ * 1(1 ) , * 4753 .badovinac , v. p. , and harty , j. t. ( 2006 ) .programming , demarcating , and manipulating t - cell memory ._ immunol rev . _ * 211 , * 6780 .dispirito , j. r. , and shen , h. ( 2010 ) .quick to remember , slow to forget : rapid recall responses of memory t cells ._ cell res . _* 20 , * 1323 .regner , m. ( 2001 ) .cross - reactivity in t - cell antigen recognition ._ immunol cell biol . _ * 79(2 ) , * 91100 .sewell , a. k. ( 2012 ) .why must t cells be cross - reactive ?_ nat rev immunol . _ * 12(9 ) , * 669677 .bolton , k. j. _ et al . _prior population immunity reduces the expected impact of ctl - inducing vaccines for pandemic influenza control . _plos one . _ * 10(3 ) , * e0120138 . doi:10.1371/journal.pone.0120138. harty , j. t. , and badovinac , v. p. ( 2008 ) . shaping and reshaping t - cell memory ._ nat rev immunol ._ * 8 , * 107119 .hou , s. _ et al . _virus - specific t - cell memory determined by clonal burst . _* 369 , * 652654 .lazuardi , l. _ et al . _age - related loss of nave t cells and dysregulation of t - cell / b - cell interactions in human lymph nodes ._ immunol ._ * 114(1 ) , * 3743 . cicin - sain , l. _ et al . _ ( 2007 ) .dramatic increase in naive t cell turnover is linked to loss of naive t cells from old primates ._ proc natl acad sci usa . _ * 104(50 ) , * 1996019965 .cicin - sain , l. _ et al . _loss of naive t cells and repertoire constriction predict poor response to vaccination in old primates ._ j immunol . _ * 184 , * 67396745 .la gruta , n. l. _ et al . _primary ctl response magnitude in mice is determined by the extent of naive t cell recruitment and subsequent clonal expansion ._ j clin invest ._ * 120(6 ) , * 18851894 .thomas , p. g. _ et al . _( 2012 ) . ecological analysis of antigen - specific ctl repertoires defines the relationship between naive and immune t - cell populations ._ proc natl acad sci usa . _ * 110(5 ) , * 18391844 .tscharke , d. c. _ et al . _sizing up the key determinants of the t cell response ._ nat rev immunol . _ * 15(11 ) , * 705716 . ho , f. _ et al . _. distinct short - lived and long - lived antibody - producing cell populations . _ eur j immunol . _ * 16(10 ) , * 12971301 .nutt , s. l. _ et al . _( 2015 ) . the generation of antibody - secreting plasma cells ._ nat rev immunol . _ * 15(3 ) , * 160171 .vieira , p. , and rajewsky , k. ( 1988 ) .the half - lives of serum immunoglobulins in adult mice ._ eur j immunol . _ * 18(2 ) , * 313316 .riberdy , j. m. _ et al ._ ( 2000 ) . diminished primary and secondary influenza virus - specific t - cell responses in cd4-depleted mice ._ j virol . _ * 74 , * 97629765 .laidlaw , b. j. _ et al . _ t cell help guides formation of lung - resident memory t cells during influenza viral infection ._ immunity . _ * 41 , * 633645 . cohen , c. _ et al . _ ( 2015 ) .mortality amongst patients with influenza - associated severe acute respiratory illness , south africa , 2009 - 2013 ._ plos one . _ * 10(3 ) , * e0118884 . 
doi:10.1371/journal.pone.0118884 .pc is supported by an australian government national health and medical research council ( nhmrc ) funded centre for research excellence in infectious diseases modelling to inform public health policy .zw is supported by an nhmrc australia china exchange fellowship .awcy is supported by an australian postgraduate award .jm and kk are support by nhmrc career development and senior researcher fellowships respectively . jmm is supported by an australian research council future fellowship .the h7n9 clinical study was supported by national natural science foundation of china ( nfsc ) grants 81471556 , 81470094 and 81430030 .the authors would like to thank members of the modelling and simulation unit in the centre for epidemiology and biostatistics and the school of mathematics and statistics , university of melbourne , for helpful advice on the study .we have no competing interests ..model parameter values obtained by fitting the model to experimental data . ] and ] and ] can be ignored .some parameters are obtained from the literature and the rest are obtained by fitting the model to experimental data in the paper of miao _ et al . _ , except which is of minor importance when considering a single infection and is thus fixed to reduce uncertainty .[ cols="<,<,<,<",options="header " , ]the delay in activation of t cells and b cells directly results in the delay of production of effector t cells and antibody - producing cells . by a time - shift transform ,7 in the main text becomes where and starts from . as defined in the main text , for any negative time , i.e. for .moreover , for .thus , the equation is trivial for . therefore ,for , we replace back to and eq .s1 becomes where .similarly , eqs . 9 - 11 in the main text can be changed to in this way , the model presented in the main text is equivalent to the following model : for variables whose independent variable is not explicitly specified , they are all functions of , i.e. reads .this model avoids negative time and is solved by the following steps : * firstly choosing a time step size and using it to discretise the time domain to be .the results in the main text are generated using ( day ) , the choice of which is based on the result that further decreasing does not improve the solution ( results not shown ) . *given initial condition + , + we solve the model iteratively .for iteration from to , , and in eqs .s6 and s8 are already known and thus treated as parameters .the system becomes an ode system and can be easily solved by using a built - in ode solver _ ode15s _ in matlab(r2014b ) with default settings .all the variables at time are then updated for use in the next iteration . .... 
clear
dt = 0.1;                 % time step size
time = 0:dt:100;

% parameters
t0 = 7e+7;   gt = 0.8;     pv = 210;     deltav = 5.0;
beta = 5e-7; betap = 3e-8;
deltai = 2;  kappan = 2.5; kappae = 5e-5;
phi = 0.33;  rho = 2.6;    pf = 1e-5;    deltaf = 2;
betacn = 1;  betabn = 0.03;
kappas = 0.8; kappal = 0.4;
pc = 1.2;    pb = 0.52;    deltae = 0.57; deltap = 0.5;
ps = 12;     pl = 4;       deltas = 2;    deltal = 0.015;
tauc = 6;    taub = 4;     hc = 1e+4;     hb = 1e+4;

% index indicating when delayed process starts
indc = round(tauc/dt+1);  % for tauc
indb = round(taub/dt+1);  % for taub

% variable vectors and initial conditions
v = zeros(1,length(time)); v(1) = 1e+4;
t = v; t(1) = 7e+7;
i = t; i(1) = 0;          % infected cells (the loop index below is k, not i)
r = i;
f = i;
cn = 100*ones(1,length(time));
bn = 100*ones(1,length(time));
e  = zeros(1,indc+length(time));
p  = i;
as = zeros(1,indb+length(time));
al = as;
init = [v(1),t(1),i(1),r(1),f(1),cn(1),e(1),bn(1),p(1),as(1),al(1)]';
options = odeset('RelTol',1e-3,'AbsTol',1e-6);

for k = 2:length(time)
    % legacy syntax: the arguments after "options" are passed through to odemodel;
    % e(k), al(k), as(k) are the time-shifted (delayed) values entering this step
    [~,y] = ode15s(@odemodel,[0 dt],init,options, ...
        e(k),al(k),as(k),tauc,taub,phi,rho,deltaf,gt,pf,pv,beta,betap, ...
        kappan,deltav,deltai,betacn,betabn,kappae,kappas,pl,ps,deltal, ...
        deltas,deltap,deltae,pc,pb,kappal,hc,hb,t0);
    v(k) = y(end,1); t(k) = y(end,2); i(k) = y(end,3); r(k) = y(end,4);
    f(k) = y(end,5); cn(k) = y(end,6); bn(k) = y(end,8); p(k) = y(end,9);
    e(indc+k)  = y(end,7);
    as(indb+k) = y(end,10);
    al(indb+k) = y(end,11);
    init = y(end,:)';     % initial condition for next iteration
end
....

....
function ynew = odemodel(~,y,e,al,as,tauc,taub,phi,rho,deltaf,gt,pf,pv, ...
    beta,betap,kappan,deltav,deltai,betacn,betabn,kappae,kappas,pl,ps, ...
    deltal,deltas,deltap,deltae,pc,pb,kappal,hc,hb,t0)
% v : viral load
% t : target cells
% i : infected cells
% r : resistant cells
% f : ifn
% cn : naive cd8+ t cells
% e : effector cd8+ t cells
% bn : naive b cells
% p : plasma b cells
% as : short-lived antibodies
% al : long-lived antibodies
% y = [v, t, i, r, f, cn, e, bn, p, as, al]
ynew = zeros(11,1);
ynew(1)  = pv*y(3)-deltav*y(1)-kappas*y(1)*as-kappal*y(1)*al-beta*y(1)*y(2);
ynew(2)  = gt*(y(2)+y(4))*(1-(y(2)+y(3)+y(4))/t0)-betap*y(1)*y(2)+rho*y(4)-phi*y(2)*y(5);
ynew(3)  = betap*y(1)*y(2)-deltai*y(3)-kappan*y(3)*y(5)-kappae*y(3)*e;
ynew(4)  = phi*y(2)*y(5)-rho*y(4);
ynew(5)  = pf*y(3)-deltaf*y(5);
ynew(6)  = -betacn*y(1)/(y(1)+hc)*y(6);
ynew(7)  = betacn*y(1)/(y(1)+hc)*y(6)*exp(pc*tauc)-deltae*y(7);
ynew(8)  = -betabn*y(1)/(y(1)+hb)*y(8);
ynew(9)  = betabn*y(1)/(y(1)+hb)*y(8)*exp(pb*taub)-deltap*y(9);
ynew(10) = ps*y(9)-deltas*y(10);
ynew(11) = pl*y(9)-deltal*y(11);
....
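The script above produces the quantities that were fitted to the data of Miao et al. A minimal plotting sketch is given below; the pairing of the short-lived and long-lived antibody compartments with the IgM and IgG measurements, respectively, is our reading of the fitting description in the next section, and the floor of 0.1 applied to the viral load is only there to keep the logarithmic plot well defined.

....
% minimal sketch: visualise the solution of the script above against the four
% observables fitted in the main text (viral load, effector t cells, igm, igg).
figure;
subplot(2,2,1); semilogy(time, max(v,1e-1));
xlabel('days p.i.'); ylabel('viral load');
subplot(2,2,2); plot(time, e(indc+1:indc+length(time)));
xlabel('days p.i.'); ylabel('effector cd8+ t cells');
subplot(2,2,3); plot(time, as(indb+1:indb+length(time)));
xlabel('days p.i.'); ylabel('short-lived antibody (igm, assumed)');
subplot(2,2,4); plot(time, al(indb+1:indb+length(time)));
xlabel('days p.i.'); ylabel('long-lived antibody (igg, assumed)');
....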
2 in the main text ) .note that the data were presented in scatter plots in the original paper , while we presented the data in mean sd at each data collection time point ( as shown in fig . 2 in the main text ) andfit our mean - field mathematical model to the means .* we first manually determined a set of the 18 parameters which produced a model solution that reasonably matched the experimental data shown in fig . 2 in the main text .in detail , the main criteria include : 1 ) the viral load starts from about , reaches a peak of about at 12 days p.i ., then declines rapidly from about 5 days p.i .( note that the last three data points were not considered due to the limit of detection ) ; 2 ) the t cell count starts to increase rapidly at about 6 days p.i ., reaches a peak of about at 810 days p.i .and returns back to zero after about 20 days p.i . ;3 ) the igm level starts to increase rapidly at 45 days p.i ., reaches a peak of about 200300pg / ml at about 10 days p.i . and returns back to baseline after about 20 days p.i .; 4 ) the igg level starts to increase rapidly at 45 days p.i . , reaches a peak of about 8001000 pg / ml at about 20 days p.i . and decays slowly .given the high - dimensionality of the parameter space and limited experimental data , the procedure was essential in allowing us to identify a candidate parameter set which was not far from generating a local minimum in the following optimisation process .* the candidate parameter set was then used as an initial estimate for optimization using matlab s built - in function _ fmincon _ with default settings .the target of optimization was to minimize the least - squares error ( lse ) : where the four lse components for viral load , effector t cells , igm and igg were given respectively by ^ 2 , \tag{s18}\ ] ] ^ 2 , \tag{s19}\ ] ] ^ 2 , \tag{s20}\ ] ] ^ 2 .\tag{s21}\ ] ] , , and indicate the model solution for variables , , and evaluated at time respectively . , , and indicate the associated data . is the index of the time point and the may differ for the four components . , , and were used to scale the errors to the same order of magnitude . due to the fact that data were not collected at a fixed frequency (i.e. the time interval between adjacent data points is not constant ) , the errors at different time points were assigned different weights based on the length of time intervals between adjacent points .for example , if viral load was measured at time , the weight function for interior points was given by and for boundary points by this error weighting was used to weaken the domination of dense data points on model fits .it is evident that a lot of measurements were done within the first 10 days post - infection but only a few were performed after day 20 post - infection , in particular for igg data ( see fig . 2 in the main text ) .we found that using equally weighted lses led to a model fit that manifestly failed to capture those sparse data points which we believe are equally , if not more important from a more biological perspective , in determining the igg kinetics ( see fig .s8 , compared with fig . 2 in the main text ) .* the parameter constraints when using _ fmincon _ were set to ] , ] , ] , ] , ] , ] , ] , ] , ] . 
* after obtaining a locally optimized solution for the candidate parameter set , we then checked the solution generated and evaluated its biological plausibility ( based on the criteria mentioned above ) .this step was essential as given the over - specification of the model ( in a statistical sense ) , it was possible for good fitting solutions to be identified by matlab s optimization algorithm , which were nonetheless biologically implausible .for example , oscillatory solutions for quantities such as igg , while providing a good - fit " to data , were not deemed acceptable on biological grounds ( see fig .s9 for such an example ) .if the optimised solution failed our ( qualitative ) evaluation , we returned to the first step and redetermined a new set of parameters as new initial estimates to be optimized using matlab .this entire process was repeated to arrive at the default parameter set shown in table 1 in the main text . to guarantee that the default parameter set was a good choice , we further randomly generated 10,000 sets of parameter samples near the default parameter set ( within % from the default values ) and used them as initial estimates with _fmincon _ to search for locally optimized solutions . of these, 31 generated a better lse but all failed to meet the criteria mentioned above ( results not shown ) .2 in the main text shows how the model reproduces the key dynamic behavior shown in the data .we also show in the _ results _ section in the main text , that the model behavior is robust to perturbation of model parameters and that model predictions are reasonably consistent with a range of other experimental data , demonstrating the plausibility , if not uniqueness ( of course ) , of the parameter set .we emphasize that , although the default parameter set is successful in reproducing a multitude of experimental observations ( e.g. full immune , knockout , re - infection ) as presented in the main text , we by no means claim that this parameter set is unique .it remains an open and challenging problem to reliably identify a biologically plausible and statistically identifiable solution for what is a highly complex system , where we are severely limited by available experimental data . incorporating memory t cells into the model in the main text ,we only make two changes .the first is adding two equations to describe the memory t cell ( ) proliferation / differentiation , similar to eqs .6 and 7 in the main text then we change the term in eq . 
3 in the main text to .hence , similar to the approach mentioned above that moving the delayed term from viral load to effector cells , we write down the model in an equivalent form , , \tag{s28}\\ \frac{df}{dt } & = p_fi-\delta_ff , \tag{s29}\\ \frac{dr}{dt } & = \phi ft-\rho r , \tag{s30}\\ \frac{dc_{n}}{dt } & = -\beta_{cn}(\frac{v}{v+h_c})c_{n } , \tag{s31}\\ \frac{de(t+\tau_c)}{dt } & = \beta_{cn}(\frac{v}{v+h_c})c_{n}e^{(p_c\tau_c ) } - \delta_ee(t+\tau_c ) , \tag{s32}\\ \frac{db_n}{dt } & = -\beta_{bn}(\frac{v}{v+h_b})b_{n } , \tag{s33 } \\\frac{dp(t+\tau_b)}{dt } & = \beta_{bn}(\frac{v}{v+h_b})b_{n}e^{(p_b\tau_b ) } - \delta_pp(t+\tau_b ) , \tag{s34}\\ \frac{da_s(t+\tau_b)}{dt } & = p_sp(t+\tau_b)-\delta_sa_s(t+\tau_b ) , \tag{s35 } \\\frac{da_l(t+\tau_b)}{dt } & = p_lp(t+\tau_b)-\delta_la_l(t+\tau_b ) .\tag{s36}\\ \frac{dc_{m}}{dt } & = -\beta_{cm}(\frac{v}{v+h_{cm}})c_{m } , \tag{s37}\\ \frac{de_m(t+\tau_{cm})}{dt } & = \beta_{cm}(\frac{v}{v+h_{cm}})c_{m}e^{(p_{cm}\tau_{cm } ) } - \delta_ee_m(t+\tau_{cm } ) .\tag{s38}\end{aligned}\ ] ] for variables whose independent variables are not explicitly specified , they are all functions of , i.e. reads .memory t cells show a shorter delay and faster proliferation than naive t cells .the shortened delay may be caused by a shortened lag time to the first division and/or a reduced delay for effector cells migrating from the lymphatic compartment to the lung .the former reduction is about 15 hours and the latter is less than about 12 hours .thus , we choose ( days ) , correspond to a one day reduction in the delay time compared to the delay of naive cells ( ) .memory t cells show a higher division rate and a lower loss rate than naive t cells , based on which the net production rate of effector cells for memory t cells is estimated to be about 1.5 times of that for naive cells .thus , we choose ( ) . in the absence of data , we assume and .the initial number of memory t cells is varied as specified in the main text or figures .we assume that the effector t cells produced by either naive or memory t cells are functionally identical ( i.e. then have the same decay rate and killing rate ) . note that we do not model the process of differentiation of effector t cells into memory cells but use a memory cell pool as an initial condition to simulate viral re - infection . 
....
clear
dt = 0.1;                 % time step size
time = 0:dt:100;

% parameters
t0 = 7e+7;   gt = 0.8;     pv = 210;     deltav = 5.0;
beta = 5e-7; betap = 3e-8;
deltai = 2;  kappan = 2.5; kappae = 5e-5;
phi = 0.33;  rho = 2.6;    pf = 1e-5;    deltaf = 2;
betacn = 1;  betabn = 0.03;
kappas = 0.8; kappal = 0.4;
pc = 1.2;    pb = 0.52;    deltae = 0.57; deltap = 0.5;
ps = 12;     pl = 4;       deltas = 2;    deltal = 0.015;
tauc = 6;    taub = 4;     hc = 1e+4;     hb = 1e+4;
betacm = 1;  pcm = 1.8;    taucm = 5;

% index indicating when delayed process starts
indc  = round(tauc/dt+1);   % for tauc
indcm = round(taucm/dt+1);  % for taucm
indb  = round(taub/dt+1);   % for taub

% variable vectors and initial conditions
v = zeros(1,length(time)); v(1) = 1e+1;
t = v; t(1) = 7e+7;
i = t; i(1) = 0;            % infected cells (the loop index below is k, not i)
r = i;
f = i;
cn = 100*ones(1,length(time));
bn = 100*ones(1,length(time));
e  = zeros(1,indc+length(time));
p  = i;
as = zeros(1,indb+length(time));
al = as;
cm = 5000*ones(1,length(time));
em = zeros(1,indcm+length(time));
init = [v(1),t(1),i(1),r(1),f(1),cn(1),e(1),bn(1),p(1),as(1),al(1),cm(1),em(1)]';
options = odeset('RelTol',1e-3,'AbsTol',1e-6);

for k = 2:length(time)
    % legacy syntax: arguments after "options" are passed through to the ode function
    [~,y] = ode15s(@odemodel_new_with_memory,[0 dt],init,options, ...
        e(k),al(k),as(k),em(k),tauc,taub,phi,rho,deltaf,gt,pf,pv,beta,betap, ...
        kappan,deltav,deltai,betacn,betabn,kappae,kappas,pl,ps,deltal, ...
        deltas,deltap,deltae,pc,pb,kappal,hc,hb,t0,betacm,pcm,taucm);
    v(k) = y(end,1); t(k) = y(end,2); i(k) = y(end,3); r(k) = y(end,4);
    f(k) = y(end,5); cn(k) = y(end,6); bn(k) = y(end,8); p(k) = y(end,9);
    e(indc+k)   = y(end,7);
    as(indb+k)  = y(end,10);
    al(indb+k)  = y(end,11);
    cm(k)       = y(end,12);
    em(indcm+k) = y(end,13);
    init = y(end,:)';     % initial condition for next iteration
end
....

....
function ynew = odemodel_new_with_memory(~,y,e,al,as,em,tauc,taub,phi,rho, ...
    deltaf,gt,pf,pv,beta,betap,kappan,deltav,deltai,betacn,betabn,kappae, ...
    kappas,pl,ps,deltal,deltas,deltap,deltae,pc,pb,kappal,hc,hb,t0, ...
    betacm,pcm,taucm)
% v : viral load
% t : target cells
% i : infected cells
% r : resistant cells
% f : ifn
% cn : naive cd8+ t cells
% e : effector cd8+ t cells
% bn : naive b cells
% p : plasma b cells
% as : short-lived antibodies
% al : long-lived antibodies
% cm : memory cd8+ t cells
% em : effector cd8+ t cells produced from memory cells
% y = [v, t, i, r, f, cn, e, bn, p, as, al, cm, em]
ynew = zeros(13,1);
ynew(1)  = pv*y(3)-deltav*y(1)-kappas*y(1)*as-kappal*y(1)*al-beta*y(1)*y(2);
ynew(2)  = gt*(y(2)+y(4))*(1-(y(2)+y(3)+y(4))/t0)-betap*y(1)*y(2)+rho*y(4)-phi*y(2)*y(5);
ynew(3)  = betap*y(1)*y(2)-deltai*y(3)-kappan*y(3)*y(5)-kappae*y(3)*(e+em);
ynew(4)  = phi*y(2)*y(5)-rho*y(4);
ynew(5)  = pf*y(3)-deltaf*y(5);
ynew(6)  = -betacn*y(1)./(y(1)+hc)*y(6);
ynew(7)  = betacn*y(1)./(y(1)+hc)*y(6)*exp(pc*tauc)-deltae*y(7);
ynew(8)  = -betabn*y(1)./(y(1)+hb)*y(8);
ynew(9)  = betabn*y(1)./(y(1)+hb)*y(8)*exp(pb*taub)-deltap*y(9);
ynew(10) = ps*y(9)-deltas*y(10);
ynew(11) = pl*y(9)-deltal*y(11);
ynew(12) = -betacm*y(1)./(y(1)+hc)*y(12);
ynew(13) = betacm*y(1)./(y(1)+hc)*y(12)*exp(pcm*taucm)-deltae*y(13);
....
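For the scenario of Fig. 8b the pre-existing memory pool is tied to the primary infection rather than fixed. A minimal sketch of that step is shown below; it assumes the primary-infection script (the first code block of this supplement) has just been run, so that its effector time course e is still in the workspace, and it uses the 5% factor quoted in the main text (the fixed value 5000 in the re-infection script above would then be replaced accordingly).

....
% minimal sketch: initialise the memory pool from a primary infection (fig. 8b)
e_peak  = max(e);           % peak effector cd8+ t cell number in the primary run
cm_init = 0.05*e_peak;      % pre-existing memory = 5% of the primary peak
% in the re-infection script above, use
%   cm = cm_init*ones(1,length(time));
% in place of the fixed value 5000, before the time-stepping loop.
....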
myriad experiments have identified an important role for t cell response mechanisms in determining recovery from influenza a virus infection . animal models of influenza infection further implicate multiple elements of the immune response in defining the dynamical characteristics of viral infection . to date , influenza virus models , while capturing particular aspects of the natural infection history , have been unable to reproduce the full gamut of observed viral kinetic behaviour in a single coherent framework . here , we introduce a mathematical model of influenza viral dynamics incorporating all major immune components ( innate , humoral and cellular ) and explore its properties with a particular emphasis on the role of cellular immunity . calibrated against a range of murine data , our model is capable of recapitulating observed viral kinetics from a multitude of experiments . importantly , the model predicts a robust exponential relationship between the level of effector t cells and recovery time , whereby recovery time rapidly decreases to a fixed minimum recovery time with an increasing level of effector t cells . we find support for this relationship in recent clinical data from influenza a(h7n9 ) hospitalised patients . the exponential relationship implies that people with a lower level of naive t cells may receive significantly more benefit from induction of additional effector t cells arising from immunological memory , itself established through either previous viral infection or t cell - based vaccines .
continuous - variable quantum - key distribution ( cvqkd ) , as an unconditionally secure communication scheme between two legitimate parties alice and bob , has achieved advanced improvements in theoretical analysis and experimental implementation in recent years .practical implementation systems , such as fiber - based gaussian - modulated and discrete - modulated coherent - state protocol qkd systems over tens of kilometers , have been demonstrated in a few groups .the unconditional security of such systems with prepare - and - measure ( pm ) implementation has been confirmed by the security analysis of the equivalent entanglement - based ( eb ) scheme .however , the traditional security analysis of the eb scheme of cvqkd just includes the signal beam and not the local oscillator ( lo ) , which is an auxiliary light beam used as a reference to define the phase of the signal state and is necessary for balanced homodyne detection .this will leave some security loopholes for eve because lo is also unfortunately within eve s manipulating domain .the necessity of monitoring lo intensity for the security proofs in discrete qkd protocols embedded in continuous variables has been discussed .moreover , in , the excess noise caused by imperfect subtraction of balanced homodyne detector ( bhd ) in the presence of lo intensity fluctuations has been noted and quantified with a formulation .however , in the practical implementation of cvqkd , shot noise scaling with lo power measured before keys distribution is still assumed to keep constant if the fluctuations of lo intensity are small . and in this circumstance , pulses with large fluctuation are just discarded as shown in . unfortunately , this will give eve some advantages in exploiting the fluctuation of lo intensity . in this paper , we first describe bob s measurements under this fluctuation of lo intensity , and propose an attacking scheme exploiting this fluctuation .we consider the security of practical cvqkd implementation under this attack and calculate the secret key rate with and without bob monitoring the lo for reverse and direct reconciliation protocol .and then , we give a qualitative analysis about the effect of this lo intensity fluctuation on the secret key rate alice and bob hold .we find that the fluctuation of lo could compromise the secret keys severely if bob does not scale his measurements with the instantaneous lo intensity values .finally , we briefly discuss the accurate monitoring of lo intensity to confirm the security of the practical implementation of cvqkd .generally , in practical systems of cvqkd , the local oscillator intensity is always monitored by splitting a small part with a beam splitter , and pulses with large lo intensity fluctuation are discarded too . however , even with such monitoring , we do not yet clearly understand how fluctuation , in particular small fluctuation , affects the secret key rate . to confirm that the secret key rate obtained by alice and bob is unconditionally secure , in what follows , we will analyze the effects of this fluctuation on the secret key rate only , and do not consider the imperfect measurement of bhd due to incomplete subtraction of it in the presence of lo intensity fluctuations , which has been discussed in . 
ideally , with a strong lo , a perfect pulsed bhd measuring a weak signalwhose encodings are will output the results , where _ k _ is a proportional constant of bhd , is the amplitude of lo , is the relative phase between the signal and lo except for the signal s initial modulation phase . so scaling with lo power or shot noise ,the results can be recast as with in eq .( [ eq : x0 ] ) is 0 or . herethe quadratures and are defined as and , where is the quadrature of the vacuum state .however , in a practical system , the lo intensity fluctuates in time during key distribution . with a proportional coefficient ,practical lo intensity can be described as , where is the initial amplitude of lo used by normalization and its value is calibrated before key distribution by alice and bob . if we do not monitor lo or quantify its fluctuation ,especially just let the outputs of bhd scale with the initial intensity or power of lo , the outputs then read unfortunately , this fluctuation will open a loophole for eve , as we will see in the following sections . in conventional security analysis , like the eb scheme equivalent to the usual pm implementation depicted in fig .[ fig:1](a ) , lo is not taken into consideration and its intensity is assumed to keep unchanged .however , in practical implementation , eve could intercept not only the signal beam but also the lo , and she can replace the quantum channel between alice and bob with her own perfect quantum channel as shown in figs .[ fig:1](b ) and [ fig:1](c ) . in so doing ,eve s attack can be partially hidden by reducing the intensity of lo with a variable attenuator simulating the fluctuation without changing lo s phase , and such an attack can be called a lo intensity attack ( loia ) . in the following analysis, we will see that , in the parameter - estimation procedure between alice and bob , channel excess noise introduced by eve can be reduced arbitrarily , even to its being null , just by tuning the lo transmission .consequently , alice and bob would underestimate eve s intercepted information and eve could get partial secret keys that alice and bob hold without being found under this attack .figure [ fig:1](b ) describes the loia , which consists of attacking the signal beam with a general gaussian collective attack and attacking the lo beam with an intensity attenuation by a non - changing phase attenuator * a * , such as a beam splitter whose transmission is variable .this signal - beam gaussian collective attack consists of three steps : eve interacts her ancilla modes with the signal mode by a unitary operation * u * for each pulse and stores them in her quantum memory , then she makes an optimal collective measurement after alice and bob s classical communication . figure [ fig:1](c )is one practical loia with * u * being a beam - splitter transformation .its signal attack is also called an entangling cloner attack , which was presented first by grosshan and improved by weedbrook . in appendix[ sec : security ] , we will demonstrate that with this entangling cloner , eve can get the same amount of information as that shown in fig .[ fig:1](b ) .we analyze a practical cvqkd system with homodyne protocol to demonstrate the effect of loia on the secret key rate , and for simplicity we do not give the results of the heterodyne protocol , which is analogous to the homodyne protocol . 
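Returning to the scaling of the homodyne output described at the beginning of this section, the relation below (written in our own notation, as a reconstruction of the argument rather than a copy of the original expressions) summarises how an un-monitored LO fluctuation enters the normalised quadrature. If the balanced homodyne output is proportional to k|\alpha_{LO}|\hat{x}_\theta, and Bob normalises by the calibrated amplitude |\alpha_{LO,0}| while the instantaneous LO intensity is \eta|\alpha_{LO,0}|^2, the quadrature he records is

x_B^{w} \;=\; \frac{k\,|\alpha_{LO}|}{k\,|\alpha_{LO,0}|}\,\hat{x}_\theta \;=\; \sqrt{\eta}\,\hat{x}_\theta\,, \qquad \mathrm{Var}\!\left(x_B^{w}\right) \;=\; \eta\,V_B\,,

i.e. every measured quadrature is compressed by \sqrt{\eta} and every measured variance by \eta, which is exactly the \eta-scaled covariance matrix used in the next subsection.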
in the usual pm implementation , alice prepares a series of coherent states centered on with each pulse , and then she sends them to bob through a quantum channel which might be intercepted by eve . here , and , respectively , satisfy a gaussian distribution independently with the same variance and zero mean . this initial mode prepared by alicecan be described as , and is the quadrature variable . here describes the quadrature of the vacuum mode .note that we denote an operator with a hat , while without a hat the same variable corresponds to the classical variable after measurement .so the overall variance of the initial mode prepared by alice is .when this mode comes to bob , bob will get a mode , where describes eve s mode introduced through the quantum channel whose quadrature variance is .bob randomly selects a quadrature to measure , and if eve attenuates the lo intensity during the key distribution , but bob s outputs still scale with the initial lo intensity , just as in eq .( [ eq : x0 ] ) , he will get the measurement where is ( or equivalently ) .however , if bob monitors lo and also scales with the instantaneous intensity value of lo with each pulse , he will get without any loss of course .note that , for computation simplicity , hereafter we assume the variable transmission rate ( or attenuation rate ) of each pulse of lo is the same without loss of generality .thus the variance of bob s measurements and conditional variance on alice s encodings with and without monitoring ( in what follows , without monitoring specially indicates that bob s measurement is obtained just by scaling with the initial lo intensity instead of monitoring instantaneous values , and vice versa ) can be given by \;,\label{eq : vbm}\\ v_{b|a}&=t+(1-t)n\ ; , \\v_{b|a}^w&=\eta\left[t+(1-t)n\right]\;,\end{aligned}\ ] ] where the superscript indicates without monitoring " and all variances are in shot - noise units ; the conditional variance is defined as hence , the covariance matrix of alice s and bob s modes can be obtained as \mathbb{i } \end{pmatrix}\label{eq : rab},\end{split } \\ & \gamma_{ab}^w=\begin{pmatrix } v\mathbb{i } & \sqrt{\eta t(v^2 - 1)}\sigma_z\\ \sqrt{\eta t(v^2 - 1)}\sigma_z & \eta[tv+(1-t)n]\mathbb{i } \end{pmatrix},\end{aligned}\ ] ] where is the pauli matrix and is a unit matrix . from eqs .( [ eq : vb ] ) and ( [ eq : vbm ] ) we can derive that the channel transmission and excess noise are , with monitoring , and , without monitoring .hence , by attenuating the lo intensity as fig . [ fig:1 ] shows , to make , eve could arbitrarily reduce to zero , thus she will get the largest amount of information permitted by physics . in the following numerical simulationwe always make , namely , .thus , the covariance matrix and eve s introducing noise [ in an entangling cloner it is eve s epr state s variance as fig .[ fig:1](c ) shows ] should be selected to be to estimate the secret key rate , without loss of generality , we first analyze the reverse reconciliation then consider the direct reconciliation . 
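The consequence for Bob's parameter estimation can be checked numerically. The sketch below is our own illustration (not the simulation code used for the figures); it works in shot-noise units with the convention V_B = T*V + (1-T) + T*eps for the excess noise eps referred to the channel input, and it shows that choosing eta = 1/(1 + T*eps) makes the excess noise inferred without instantaneous LO scaling vanish.

....
% minimal sketch (our own illustration): channel parameters bob infers when eve
% attenuates the lo by eta and bob keeps normalising by the calibrated lo power.
% shot-noise units; convention V_B = T*V + (1-T) + T*eps.
T    = 0.4;                    % true channel transmission (roughly 20 km of fibre)
eps0 = 0.05;                   % true excess noise referred to the channel input
V    = 21;                     % alice's variance, V = V_A + 1
N    = 1 + T*eps0/(1-T);       % entangling-cloner epr variance reproducing eps0
eta  = 1/(1 + T*eps0);         % lo attenuation chosen by eve

VB_w = eta*(T*V + (1-T)*N);    % variance bob measures without instantaneous scaling
c_w  = sqrt(eta*T*(V^2-1));    % correlation with alice he measures

T_est   = c_w^2/(V^2-1);                          % inferred transmission (= eta*T)
eps_est = (VB_w - T_est*V - (1-T_est))/T_est;     % inferred excess noise (-> 0)
fprintf('inferred T = %.3f (true %.3f), inferred excess noise = %.1e (true %.2f)\n', ...
        T_est, T, eps_est, eps0);
....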
from alice andbob s points of view , the secret key rate for reverse reconciliation with monitoring or not are given , respectively , by where the mutual information between alice and bob with and without monitoring are the same , and that is this is because bob s measurements in these two cases are just different with a coefficient , and they correspond with each other one by one , so they are equivalent according to the data - processing theorem .however , the mutual information between eve and bob given by the holevo bound in these two cases is not identical . as the previous analysis showed , in bob s point of view , channel transmission and the excess noise estimation are different .but from eve s point of view , they are identical according to the data - processing theorem because she estimates bob s measurements in these two cases just by multiplying a coefficient .we ll calculate the real information intercepted by eve first .it can be given by where is the von neumann entropy .for a gaussian state , this entropy can be calculated by the symplectic eigenvalues of the covariance matrix characterizing . to calculate eve s information, first eve s system can purify permitted by quantum physics , so that .second , after bob s projective measurement , the system is pure , so that .designating , , and , the symplectic eigenvalues of are given by where and .similarly , the entropy is determined by the symplectic eigenvalue of the covariance matrix , namely , where and stands for the moore - penrose inverse of a matrix .then , and the holevo bound reads where . however , eve s information estimated by bob without monitoring is given by by substituting eqs .( [ eq : xbe ] ) and ( [ eq : xbem ] ) into eqs .( [ eq : krr ] ) and ( [ eq : krrm ] ) , respectively , the secret key rate with and without bob s monitoring can be obtained .however , the secret key rate in eq .( [ eq : krrm ] ) without monitoring is unsecured in evidence .eve s interception of partial information from is not detected , in other words , alice and bob underestimate eve s information without realizing it .actually , the real or unconditionally secure secret key rate , which we called a truly secret key rate , should be available by replacing in eq .( [ eq : krrm ] ) with eq .( [ eq : xbe ] ) .note that it is identical with the monitoring secret key rate in eq .( [ eq : krr ] ) due to eq .( [ eq : iab ] ) .we investigate the secret key rate bob measured without monitoring and the true one or equivalently monitoring one for reverse reconciliation under eve attacking the intensity of lo during key distribution . as fig .[ fig:2 ] shows , with various values of transmission of lo that can be controlled by eve , the truly secret key rate alice and bob actually share decreases rapidly over long distances or small channel transmissions .( color online ) reverse reconciliation pseudosecret key rate and the truly secret one vs channel transmission under loia .solid lines are secret key rate estimated by bob without monitoring lo intensity and dashed lines are the truly secret ones .colored lines correspond to the lo transmissions as labeled . herealice s modulation variance .] 
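For completeness, the reverse-reconciliation quantities can be evaluated directly from the covariance-matrix entries given above. The sketch below is our own implementation of the standard Gaussian CVQKD formulas (symplectic eigenvalues and the usual g function), not a transcription of the code behind the paper's figures; reconciliation efficiency is taken to be 1 and the numerical values are illustrative only.

....
% minimal sketch (our own implementation of the standard gaussian formulas):
% reverse-reconciliation rate with and without scaling by the instantaneous lo
% power. homodyne detection, unit reconciliation efficiency, shot-noise units.
g = @(lam) (lam+1)/2.*log2((lam+1)/2) - (lam-1)/2.*log2(max((lam-1)/2, eps));

V = 21; T = 0.4; eps0 = 0.05;          % example values, not those of the figures
N   = 1 + T*eps0/(1-T);                % cloner variance reproducing eps0
eta = 1/(1 + T*eps0);                  % eve's lo attenuation (see previous sketch)

% covariance entries gamma_AB = [a I, c sigma_z; c sigma_z, b I]
a = V;  b = T*V + (1-T)*N;  c = sqrt(T*(V^2-1));
bw = eta*b;  cw = sqrt(eta)*c;         % what bob reconstructs without monitoring

IAB   = @(a,b,c) 0.5*log2( b./(b - c.^2/(a+1)) );   % V_{B|A} = b - c^2/(a+1) = T+(1-T)N
lam12 = @(a,b,c) sqrt(( (a^2+b^2-2*c^2) + [1 -1]*sqrt((a^2+b^2-2*c^2)^2 - 4*(a*b-c^2)^2) )/2);
lam3  = @(a,b,c) sqrt(a*(a - c^2/b));               % after bob's homodyne measurement
chiBE = @(a,b,c) sum(g(lam12(a,b,c))) - g(lam3(a,b,c));

K_true   = IAB(a,b,c)   - chiBE(a,b,c);     % unconditionally secure rate
K_pseudo = IAB(a,bw,cw) - chiBE(a,bw,cw);   % what alice and bob believe they share
fprintf('pseudo rate %.3f, true rate %.3f bits/pulse (eve gains %.3f)\n', ...
        K_pseudo, K_true, K_pseudo - K_true);
....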
additionally , because the mutual information between alice and bob with and without monitoring is identical as eq.([eq : iab ] ) shows , subtracting eq .( [ eq : krr ] ) from eq .( [ eq : krrm ] ) we can estimate eve s intercepted information which is plotted in fig .[ fig:3 ] .we find that eve could get partial or full secret keys which alice and bob hold by controlling the different transmissions of lo . taking a 20-km transmission distance as an example , surprisingly , just with lo intensity fluctuation or attenuating rate 0.08 , eve is able to obtain the full secret keys for reverse reconciliation without bob s monitoring the lo .we now calculate the secret key rate for direct reconciliation , which is a little more complicated , and we investigate the effect of lo intensity attack by eve on cvqkd .the secret key rate estimated by bob with and without lo monitoring is given , respectively , by note that we have already calculated and in eq.([eq : iab ] ) and they are identical for direct and reverse reconciliation .for eve we have where has been already computed in the previous section , and using the fact that after alice s projective measurement on modes and obtaining in the eb scheme shown in fig .[ fig:1](a ) , the system is pure . to calculate , we have to compute the symplectic eigenvalues of covariance matrix , which is obtained by where and can be read in the decomposition of the matrix which is available by elementary transformation of the matrix [ see fig . [ fig:1](a ) ] where is a unit matrix .it is obtained by applying a homodyne detection on mode a after mixing and with a balanced beam - splitter transformation .the matrix and actually is in eq .( [ eq : rab ] ) , is a unit matrix .so we can get and the symplectic eigenvalues of it where and .the holevo bound then reads substituting eqs .( [ eq : xae ] ) and ( [ eq : xaem ] ) into eqs .( [ eq : kdr ] ) and ( [ eq : kdrm ] ) , respectively , the secret key rates in these two cases are obtained . in fig .[ fig:4 ] , we plotted them for channel transmission with various values of and find that the difference between the pseudosecret key rate with bob not monitoring lo and the truly secret one is still increasing with the channel transmission becoming smaller . for eve , when bob does not monitor lo , she will get the partial or total secret key rate without being found by subtracting eq .( [ eq : kdr ] ) from eq .( [ eq : kdrm ] ) when she reduces the intensity of lo . in fig. 
[ fig:5 ] , we plotted the pseudosecret key rate for direct reconciliation and the mutual information overestimated by alice and bob .we find that for short distance communication ( less 15 km or 3 db limit ) , a small fluctuation of lo intensity could still hide eve s attack partially or totally .note that in the above estimation we assume each pulse s transmission rate ( or attenuation rate ) is identical .however , when is different for each pulse ( eve simulates the fluctuation of lo to hide her dramatic attack on lo ) , eve still could intercept as much as or even more secret key rates than above for reverse and direct reconciliation , as long as the largest value of among all pulses ( or approximately most pulse transmission rates ) is smaller than the above constant value .our analysis shows that reverse reconciliation is more sensitive than direct reconciliation about the fluctuation of lo intensity , and , even with a small attenuation of lo intensity , eve can get full secret keys but not be found .this is consistent with the fact that channel excess noise has a more severe impact on reverse reconciliation than on direct reconciliation . of course , when the intensity of lo fluctuates above the initial calibrated value ( i.e. , ) , eve could not get any secret keys , but alice and bob would overestimate eve s intercepted information due to the overestimation of channel excess noise . however , when lo fluctuates around the initial calibrated value , how to quantify eve s information is still an open question , because the distribution of the fluctuation of lo ( or ) is not a normal distribution and unclear for alice and bob due to eve s arbitrary manipulation .but in this circumstance , eve still could intercept partial secret keys if she increases the channel excess noise of one part of the signal pulses when she controls and decreases it for the other part when controlling , i.e. , making the overall estimated excess noise by alice and bob lower than the real one . remarkably , lo intensity fluctuation opens a loophole for eve to attack the practical system , especially in the case of communication with low channel transmission or over long distance .consequently , in the practical implementation of cvqkd , we must monitor the lo fluctuation carefully and in particular scale the measurements with instantaneous intensity values of lo .alternatively , we can also scale with the lowest intensity value of lo if the fluctuations are very small , but it will estimate the secret key rate pessimistically thus leading to the reduction of the efficiency of the key distribution .however , we can not use the average intensity value of lo to normalize the measurements as most current implementations do , because it still could overestimate the secret key rate for alice and bob .additionally , for reverse reconciliation communication over long distance , very small fluctuation of lo might compromise the secret key rate completely , which presents a big challenge for accurately monitoring lo intensity .finally , we point out that in this paper we do not consider the imperfections of bhd such as detection efficiency , electronic noise , and incomplete subtraction , which may make lo intensity fluctuation have a more severe impact on estimating the secret key rate for alice and bob . 
in conclusion, we have analyzed the effect of lo intensity fluctuation on the secret key rate estimation of alice and bob for reverse and direct reconciliation .incredibly , bob s estimation of the secret key rate will be compromised severely without monitoring lo or if his measurements do not scale with lo instantaneous intensity values even with monitoring but just discard large fluctuation pulses like in .furthermore , we have shown that eve could hide her attack partially by reducing the intensity of lo and even could steal the total secret keys alice and bob share without being found by a small attenuation of lo intensity , especially for reverse reconciliation .finally , we have also briefly discussed the monitoring of lo and pointed out that it would be a challenge for highly accurate monitoring .this work is supported by the national natural science foundation of china , grant no .is supported by the program for new century excellent talents .c.m . is supported by the hunan provincial innovation foundation for postgraduates .acknowledge support from nudt under grant no . kxk130201 .in this appendix , we calculate the holevo bound obtained by eve for direct and reverse reconciliation using weedbrook s entangling cloner model , and then give the secret key rate shared by alice and bob under loia .we begin the analysis by calculating the von neumann entropy of eve s intercepting state first . as fig .[ fig:1](c ) shows , the entangling cloner consists of eve replacing the gaussian quantum channel between alice and bob with a beam splitter of transmission and an epr pair of variance .half of the epr pair mode is mixed with alice s mode in the beam splitter and is sent to bob to match the noise of the real channel by tuning n. the other half mode is kept by eve to reduce the uncertainty on one output of the beam splitter , the mode , which can be read as where is the quadrature of mode .thus , the variance of mode is given by and the conditional variance can be calculated as , using eq .( [ eq : cvar ] ) , hence , eve s covariance matrix can be obtained as where and the notation stands for a matrix with the arguments on the diagonal elements and zeros everywhere else .the symplectic eigenvalues of this covariance matrix are given by where , and .hence , the von neumann entropy of eve s state is given by for the direct reconciliation protocol of cvqkd , the holevo bound between eve and alice is given by eq .( [ eq : xae0 ] ) , where has been calculated by eq .( [ eq : se ] ) . can be obtained by the conditional covariance matrix and its symplectic eigenvalues are given by where , and .thus , the conditional entropy is substituting eqs .( [ eq : se ] ) and ( [ eq : sea ] ) into eq .( [ eq : xae0 ] ) , we can get the mutual information between alice and eve , under loia , bob s estimation of the holevo bound without monitoring lo intensity then reads , using eq .( [ eq : xae1 ] ) , with eqs .( [ eq : xae1 ] ) and ( [ eq : xaem1 ] ) , the secret key rates in eqs .( [ eq : kdr ] ) and ( [ eq : kdrm ] ) then can be calculated respectively , and the calculation numerically demonstrates that they are perfectly consistent with the fig . [ fig:4 ] .the calculation of the holevo bound between eve and bob for reverse reconciliation is a bit more complicated . 
using eq .( [ eq : xbe0 ] ) , we only need to calculate the conditional entropy , which is determined by the symplectic eigenvalues of the covariance matrix , where and , .then , can be recast as where + , , + and + .+ hence , its symplectic eigenvalues are given by where , , and is the determinant of a matrix .so , we get the conditional entropy and then the holevo bound consequently , without monitoring lo intensity , alice and bob will give eve the holevo bound substituting eqs .( [ eq : xbe1 ] ) and ( [ eq : xbem1 ] ) into eqs .( [ eq : krr ] ) and ( [ eq : krrm ] ) , respectively , the secret key rates with and without bob s monitoring can be obtained , and for channel transmission with various values of , they are numerically demonstrated to be perfectly consistent with fig .[ fig:2 ] , too .hence , it also indirectly confirms that either for direct or reverse reconciliation , the entangling cloner could reach the holevo bound against the optimal gaussian collective attack . in this paper ,lo intensity fluctuation indicates the deviation of each pulse s intensity from the initial calibrated value during the key distribution .it does not mean the quantum fluctuation of each pulse itself , because lo is a strong classical beam whose quantum fluctuation is very small relative to itself and can be neglected .
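As a complement to the closed-form expressions used in this appendix, the symplectic eigenvalues entering the Holevo bounds can also be obtained numerically from any covariance matrix, using the standard fact that they are the moduli of the eigenvalues of i*Omega*sigma. The sketch below is a generic utility of this kind; the variable names are illustrative and it is not the authors' code. For a pure state, such as the EPR state in the example, the entropy correctly evaluates to zero.

```python
import numpy as np

def symplectic_eigenvalues(sigma):
    """Symplectic eigenvalues of an N-mode Gaussian covariance matrix sigma
    (2N x 2N, quadrature ordering x1, p1, x2, p2, ...): the moduli of the
    eigenvalues of i*Omega*sigma, each of which occurs twice."""
    n = sigma.shape[0] // 2
    omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    nu = np.sort(np.abs(np.linalg.eigvals(1j * omega @ sigma)))
    return nu[::2]                      # keep one copy of each doubly degenerate value

def g(x):
    """Von Neumann entropy (bits) contributed by one symplectic eigenvalue."""
    if x <= 1.0:
        return 0.0
    return ((x + 1) / 2) * np.log2((x + 1) / 2) - ((x - 1) / 2) * np.log2((x - 1) / 2)

def von_neumann_entropy(sigma):
    return sum(g(nu) for nu in symplectic_eigenvalues(sigma))

# Example: a two-mode squeezed vacuum (EPR) state of variance V is pure,
# so both symplectic eigenvalues equal 1 and the entropy vanishes.
V = 3.0
c = np.sqrt(V**2 - 1.0)
Z = np.diag([1.0, -1.0])
sigma_epr = np.block([[V * np.eye(2), c * Z], [c * Z, V * np.eye(2)]])
print(symplectic_eigenvalues(sigma_epr), von_neumann_entropy(sigma_epr))
```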
we consider the security of a practical continuous-variable quantum key distribution implementation with the local oscillator (lo) fluctuating in time, which opens a loophole for eve to intercept the secret key. we show that eve can simulate this fluctuation to hide her gaussian collective attack by reducing the intensity of the lo. numerical simulations demonstrate that, if bob does not monitor the lo intensity and does not scale his measurements with the instantaneous intensity values of the lo, the secret key rate will be severely compromised.
oscillatory integrals play an important role in the theory of pseudodifferential operators .they are also a useful tool in mathematical physics , in particular in quantum field theory , where they are used to give meaning to formal fourier integrals in the sense of distributions . for phase functions which are homogeneous of order one, this also leads to a characterization of the wave front set of the resulting distribution , as it is known to be contained in the manifold of stationary phase of the phase function . in these applications ,the restriction to phase functions that are homogeneous of order one is often obstructive . in many cases ,this restriction can be overcome by shifting a part of the would - be phase function to the symbol , cf .example [ ex : delta+ ] below .however , such a shift is not always possible , for instance if the would - be phase function contains terms of order greater than one .such phase functions are present in the twisted convolutions that occur in quantum field theory on moyal space , cf .examples [ ex : ncqft ] and [ ex : ncqftb ] below . up to now, a rigorous definition of these twisted convolution integrals could be given only in special cases and in such a way that the information on the wave front set is lost .thus , it is highly desirable to generalize the notion of oscillatory integrals to encompass also phase functions that are not homogeneous of order one .such generalizations were proposed by several authors . however , to the best of our knowledge , the wave front sets of the resulting distributions were not considered , except for one very special case .we comment on these settings below , cf .remark [ rem : asadafujiwara ] .it is shown here that the restriction to phase functions that are homogeneous of order one can indeed be weakened , without losing information on the wave front set .the generalization introduced here not only allows for inhomogeneous phase functions , but also for phase functions that are symbols of any positive order .however , one has to impose a condition that generalizes the usual nondegeneracy requirement .it is also shown that the wave front sets of the distributions thus obtained are contained in a set that generalizes the notion of the manifold of stationary phase .we conclude with a discussion of some applications . throughout , we use the following notation : for an open set , means that is a compact subset of . stands for . for a subset , stands for the projection on the first component denotes the times continuously differentiable functions supported on and the set of elements of with compact support in .the dual space of is denoted by and .the pairing of a distribution and a test function is denoted by .the dot stands for the scalar product on . denotes the angle between two vectors .as usual , cf . , we define a symbol as follows : let be an open set .a function is called a _ symbol of order _if for each and multiindices , we have the set of all such functions , equipped with these seminorms will be denoted by .furthermore , we denote and . for simplicity , we restrict ourselves to these symbols . 
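The displayed estimate in the definition of a symbol was lost in extraction. Presumably it is the standard Hörmander-type bound recalled below; this is an assumption based on the surrounding text, with the usual notation ($K \Subset X$, multi-indices $\alpha, \beta$, order $m$):

```latex
\left|\partial_x^{\alpha}\partial_\theta^{\beta} a(x,\theta)\right|
  \le C_{K,\alpha,\beta}\,(1+|\theta|)^{\,m-|\beta|},
  \qquad x\in K,\ \theta\in\mathbb{R}^{s}.
```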
the generalization to the symbols is straightforward .one then has to restrict to , , where is the order of the phase function introduced below .also the generalization to asymptotic symbols as discussed in is straightforward .the following proposition is a straightforward consequence of the definition of : [ prop : cont ] the maps , and the multiplication are continuous .the following proposition is proven in ( * ? ? ?1.7 ) : [ prop : dense ] if , then is dense in for the topology of .now we introduce our new definition of a phase function .[ def : phase ] a _ phase function _ of order on is a function such that 1 . is a symbol of order .[ enum : phase ] for each there are positive such that [ rem : phase ] condition [ enum : phase ] generalizes the usual nondegeneracy requirement and ensures that oscillates rapidly enough for large .in particular it means that is not a symbol of order less than .it also means that one can choose such that is well - defined and a symbol of order . here can be chosen such that is compact for each .[ rem : asadafujiwara ] our definition of a phase function is a generalization of a definition introduced by hrmander ( * ? ? ?2.3 ) in the context of pseudodifferential operators .he considered phase functions of order 1 ( in the nomenclature introduced above ) and characterized the singular support of the resulting distribution , but not its wave front set .our characterization of the singular support ( cf .corollary [ cor : m ] ) coincides with the one given by hrmander ( * ? ? ?inhomogeneous phase functions were also considered by asada and fujiwara in the context of pseudodifferential operators on . in their setting , , andthere must be a positive constant such that furthermore , all the entries of this matrix ( and their derivatives ) are required to be bounded .thus , the phase function is asymptotically at least of order 1 and at most of order 2 .the admissible amplitudes are at most of order 0 . the wave front set of such operators on not considered by asada and fujiwara .the same applies to the works of boulkhemair and ruzhansky and sugimoto , who work in a similar context .coriasco considered a special case of hrmander s framework , where again , and with , a subset of the symbols of order 1 .furthermore , he imposed growth conditions on that are more restrictive than condition [ enum : phase ] .the resulting operators on can then be extended to operators on .if a further condition analogous to is imposed , then also the wave front set , which is there defined via -microregularity , can be characterized ( at least implicitly , by the change of the wave front set under the action of the operator ) .[ prop : diff ] if is a phase function of order and and there is a such that , then and the map is continuous . for have so that is continuous. differentiation gives the expression in curly brackets is a symbol of order . with the same argument as before onecan thus differentiate times .we formulate the main theorem of this section analogously to ( * ? ? ? * thm .the proof is a straightforward generalization of the proof given there .[ thm : osc ] let be a phase function of order on .then there is a unique way of defining for such that coincides with when for some and such that , for all , the map is continuous .moreover , if and , then the map is continuous . 
to prove this , we need the following lemma : [ lemma : v ] let be a phase function of order on .then there exist , and such that for the differential operator with adjoint we have furthermore , is a continuous map from to .we choose as in definition [ def : phase ] and as in remark [ rem : phase ] .we set then we have it is easy to see that , and are symbols in the required way .the last statement follows from proposition [ prop : cont ] .the uniqueness is a consequence of proposition [ prop : dense ] . for and , we have , with as in lemma [ lemma : v ] , { \mathrm{d}}^nx { \mathrm{d}}^s\theta,\end{aligned}\ ] ] for any and thus \rvert } { \mathrm{d}}^nx { \mathrm{d}}^s\theta.\ ] ] now the multiplication is continuous .thus , if , then $ ] is a symbol of order and in particular we have , for each , \rvert } ( 1+{\lvert \theta \rvert})^{p\mu - m } \leq f_{p , k}(a ) \sum_{{\lvert \alpha \rvert } \leq p } \sup_{x \in k } { \lvert d^\alpha f \rvert},\ ] ] where is a seminorm on . for , we may thus choose such that and define { \mathrm{d}}^nx { \mathrm{d}}^s\theta.\ ] ] as the sum on the r.h.s . of is a seminorm on , and due to the continuity properties discussed above , the map is continuous . for , we have by .thus , we can unambiguously define .we may now further characterize the distributions that result from a generalized oscillatory integral .[ def : sp ] let be a phase function of order on .we define we call the _ asymptotic manifold of stationary phase_. by definition , it is conic .[ lemma : sp ] is a closed conic subset of . is a closed subset of . from the definition of it follows that if , then for all , so is conic .we now show that is open in .let be such that there are positive such that we set .by taylor s theorem we have where and fulfillthe bounds here are chosen such that , .furthermore , we restrict to . then we may use that is a symbol of order to conclude that there are positive constants , which are bounded for , for which holds . as the zeroth order term in the taylor expansion grows faster than for a fixed positive constant , we can make so small that grows faster than for some positive .thus , is closed in . in order to prove the closedness of , we first note that if , then by the above there is a neighborhood of that does not intersect .thus , it suffices to show that for , there is a neighborhood that does not intersect . by condition [ enum :phase ] of definition [ def : phase ] , there must be positive constants such that by the same argument as above , such a bound holds true in a conic neighborhood of . by the definition of , there are posititve such that we now want to show that one can choose a conic neighborhood of , contained in , such that an analogous bound holds , i.e. , there are positive so that by the above construction , grows as in . the deviations that occur by varying and also scale as , as is a symbol of order .recalling and again using taylor s theorem , one shows that by making small enough , one still retains an inequality of the form . by suitably restricting in , we can ensure that for we have whenever .then no new direction for which we would have to check the bound can appear while varying in .given , it is clear that we can also take a conic neighborhood of , by tilting it by angles less than .choosing gives a neighborhood of that does not intersect .[ prop : m ] if the support of the symbol does not intersect , then is smooth .we choose a neighborhood of whose closure does not intersect . 
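For orientation, the classical statement that the definition of the asymptotic manifold of stationary phase and the wave front set theorem below generalize reads as follows, for a phase function that is homogeneous of degree one in $\theta$ and nondegenerate. This is the textbook result, recalled for comparison only, not the generalized definition itself:

```latex
\operatorname{WF}\big(I_\varphi(a)\big)\subset
  \big\{\,(x,\partial_x\varphi(x,\theta)) \;:\; \theta\neq 0,\ \partial_\theta\varphi(x,\theta)=0 \,\big\},
```

so that the singular support is contained in the projection of the manifold of stationary phase onto the base.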
we choose a smooth function that is equal to one in a neighborhood of and vanishes on .we choose another smooth function on with support in which is identical to one whenever for . by definition of , the set is bounded , so we can choose such that is compact for each . then we define by the definition of and , we have and . with these definitions , we have here we used that and have nonoverlapping supports . as and differentiates only w.r.t . , we thus have for arbitrary integer . as maps symbols of order to symbols of order , is smooth by proposition [ prop : diff ] .[ cor : m ] the singular support of is contained in .[ thm : sp ] the wave front set of is contained in .[ lemma : sp1 ] let , .then there is a conic neighborhood of , a conic neighborhood of and positive constants such that furthermore , there is a positive such that condition is fulfilled for by condition [ enum : phase ] of definition [ def : phase ] . that it is also fulfilled in a neighborhood of be shown analogously to the proof of the closedness of in lemma [ lemma : sp ] .condition is fulfilled for .that it is also fulfilled in a neighborhood of can again be shown as in lemma [ lemma : sp ] . in order to prove the last statement , we note that by we have where has length and lies on the cone with angle around ( see the figure , where is denoted by ) . for the distance of and have the bound ( see the dashed lines in the figure ) using , we then obtain the above statement . by corollary [ cor : m ], it suffices to consider points .let .due to proposition [ prop : m ] , we may assume that is supported in an arbitrarily small closed conic neighborhood of .we thus need to show that there is a , identically one near and a conic neighborhood of such that for each there is a seminorm on such that for all supported in . as in the proof of theorem [ thm : osc ] , it suffices to construct such a bound for and then make use of proposition [ prop : dense ] .let be as in lemma [ lemma : sp1 ] .choose that is identically one near and whose support is contained in .choose a that is identical to one on .now we set then for we have now we choose , identical to one near and with support in . we also choose which is identically one on .we consider . then by proposition[ prop : diff ] , the second term yields a smooth function , so that the above bound is fulfilled .it remains to consider { \mathrm{d}}^n x { \mathrm{d}}^s \theta \rvert } \\ & \leq \int { \lvert v_k^p[\psi(x ) ( 1-\xi(\theta ) ) a(x , \theta ) ] \rvert } { \mathrm{d}}^n x { \mathrm{d}}^s \theta.\end{aligned}\ ] ] here we used that is identically one on the support of . by lemma [ lemma : sp1 ] , one now has \rvert } \leq c_{m , p}(a ) ( 1+{\lvert \theta \rvert})^m ( { \lvert \theta \rvert}^\mu + { \lvert k \rvert})^{-p},\ ] ] where is a seminorm on .by using one can make the integral convergent and assure by choosing large enough .[ ex : delta+ ] we consider the two - point function of a free massive scalar relativistic field , where one has and with here , we use the notation , with . note that is not homogeneous . in this problemis circumvented by using as phase function and multiplying the symbol with the function .it is then no longer a symbol ( as it is not smooth in ) , so one has to allow for so - called asymptotic symbols .furthermore , one has to show that the multiplication with such a term does not spoil the fall - off properties , in particular that differentiation w.r.t . lowers the order . in the present approach, this is not necessary . 
is a phase function in the sense of definition [ def : phase ] and therefore , by theorem [ thm : osc ] , it defines an oscillatory integral for every symbol . in order to find the wave front set ,we compute it is easy to see that its modulus is bounded from below by a positive constant unless or and .thus , we have furthermore , for large , this behaves as where the remainder term scales as .thus , only in the directions the bound on the angle of and can not be fulfilled .hence , we obtain the well - known result . in that convention ,the sign of the zeroth component in the cotangent bundle has to be reversed . ] a variant of this example is obtained by considering phase functions of the form with a positive function that is a symbol of order .such expressions occur for example in quantum field theory on the moyal plane with hyperbolic signature and signify a distortion of the dispersion relations , cf .the above trick to put into the symbol still works , but then the symbol will be of type , where is original type of the symbol .it is straightforward to check that still defines a phase function of order 1 in the sense defined here , and that its stationary phase is as above .if the function in is a symbol of order , of corresponds to solutions of the hyperbolic wave equation .the modification suggested here means that the underlying pde is no longer hyperbolic . ]then the shift of to the symbol is not possible , as this would no longer give a symbol of type .thus , a treatment of in the context of phase functions that are homogeneous of order 1 is not possible .however , one can still interpret as a phase function of order and easily computes [ ex : ncqft ] in quantum field theory on the moyal plane of even dimension with euclidean signature , one frequently finds phase funtions of the form here is some real antisymmetric matrix of maximal rank .the above is clearly a symbol of order 2 , and we have as has rank , condition [ enum : phase ] of definition [ def : phase ] is fulfilled . from the aboveit follows that and thus also , so that the resulting distributions are smooth .we note that up to now such integrals could only be treated in the so - called adiabatic limit .but then one loses the information about the singular behaviour in position space , contrary to the present case , where the wave front set is known completely .[ ex : ncqftb ] in quantum field theory on the moyal plane with hyperbolic signature , one frequently finds phase functions of the form where with as in and as in example [ ex : ncqft ] .the above is a symbol of order 2 , but it is not a phase function as defined here , as can most easily be seen in the case . then with one obtains if the signs of and coincide , then the above derivatives tend to a constant as a function of , so that condition [ enum : phase ] of definition [ def : phase ] is not fulfilled .the rigourous treatment of such integrals is an open problem , which we plan to address in future work .it is a pleasure to thank dorothea bahns for helpful discussions and her detailed comments on the manuscript .i would also like to thank ingo witt for valuable comments .this work was supported by the german research foundation ( deutsche forschungsgemeinschaft ( dfg ) ) through the institutional strategy of the university of gttingen .d. bahns , s. doplicher , k. fredenhagen and g. piacitelli , _ field theory on noncommutative spacetimes : quasiplanar wick products _ , phys .d * 71 * , 025022 ( 2005 ) . c. dscher and j. 
zahn, _dispersion relations in the noncommutative $\phi^3$ and wess-zumino model in the yang-feldman formalism_, annales henri poincaré 10, 35 (2009). r. wulkenhaar, _field theories on deformed spaces_, j. geom. phys. 56, 108 (2006). c. döscher, _yang-feldman formalism on noncommutative minkowski space_, ph.d. thesis, hamburg (2006).
a generalized notion of oscillatory integrals that allows for inhomogeneous phase functions of arbitrary positive order is introduced . the wave front set of the resulting distributions is characterized in a way that generalizes the well - known result for phase functions that are homogeneous of order one .
a recent trend in computer networks and cloud computing is to virtualize network functions , in order to provide higher scalability , reducing maintenance costs , and increasing reliability of network services .virtual network functions as a service ( vnfaas ) is currently under attentive study by telecommunications and cloud stakeholders , as a promising business and technical direction consisting of providing network functions ( i.e. , firewall , intrusion detection , caching , gateways ... ) as a service instead of delivering standalone network appliances . while legacy network services are usually implemented by means of highly reliable hardware specifically built for a single purpose middlebox , vnfaas moves such services to a virtualized environment , named _nfv infrastructure ( nfvi ) _ and based on commercial - off - the - shelf hardware .services implementing network functions are called _ virtual network functions ( vnfs)_. one of the open issues for nfvi design is indeed to guarantee high levels of vnf availability , i.e. , the probability that the network function is working at a given time .in other words , a higher availability corresponds to a smaller downtime of the system , and it is required to satisfy stringent _service level agreements ( sla)_. failures may result in a temporary unavailability of the services , but while in other contexts it may be tolerable , in nfvi network outages are not acceptable , since the failure of a single vnf can induce the failure of all the overlying services . to achieve high availability , backup vnfs can be placed into the nfvi , acting as replicas of the running vnfs , so that when the latter fail , the load is rerouted to the former . however , not all vnfs are equal ones : the software implementing a network function of the server where a vnf is running may be more prone to errors than others , influencing the availability of the overall infrastructure .also , client requests may be routed via different network paths , with different availability performance . therefore to guarantee high levels of availabilityit is important not only to increase the number of replicas placed on an nfvi , but it is also crucial to select where they are placed and which requests they serve . in this context , we study and model the _ high availability virtual network function placement ( ha - vnfp ) _ problem , that is the problem of placing vnfs on an nfvi in order to serve a given set of clients requests guaranteeing high availability .our contribution consist of : a quantitative probabilistic model to measure the expected availability of vnf placement ; a proof that the problem is -hard and that it belongs to the class of nonlinear optimization problems ; a linear mathematical programming formulation that can be solved to optimality for instances of limited size ; a _ variable neighborhood search ( vns ) _heuristic for both online and offline planning ; an extensive simulation campaign , and algorithm integration in a decision support system ( dss ) tool .the paper is organized as follows : in [ sec : description ] we present the ha - vnfp , and in [ sec : literature ] we briefly describe previous works on vm / vnf placement in cloud / nfvi systems . in [ sec : modelling ] we formally describe the optimization problem and propose a linearization of the problem and a mathematical programming formulation that can solve it to optimality . 
in [ sec : algorithms ] we describe our heuristic methodologies , which are then tested in an extensive simulation campaign in [ sec : simulation ] .we briefly conclude in [ sec : conclusion ] .we consider an nfvi with several geo - distributed datacenters or _ clusters _ ( see [ figure : nfv - abstract - infrastructure ] ) .each cluster consists of a set of heterogeneous servers with limited available computing resources .several instances of the same vnf type can be placed on the nfvi but on different servers .each vnf instance can be assigned to a single server allocating some of its computing resources .indeed each server has a limited amount of computing resources that can not be exceeded .a network connects together all servers of the nfvi : we suppose that the communication links inside a cluster are significantly more reliable than those between servers in different clusters .an access network with multiple access points connects servers to clients .links connecting access points to clusters can differ in the availability level , depending on the type of the connection or the distance from the cluster . in this article, we assume that the total amount of resources and network capacities are sufficient to manage the expected client requests at any time .however assignment decisions may artificially produce congestion over the servers .we analyze how to find assignments providing a trade - off between nfvi availability and system congestion .we are given an estimation of the expected client vnf requests , each characterized by a computing resource demand .an assigned request consumes part of the resources reserved by a vnf instance . indeed the consumed resources must not exceed the reserved ones .requests can be assigned using two different policies : _ demand load balancing _ and_ split demand load balancing_. in the former , a client request is always fully assigned to a single server , while in the latter it may be split among different ones . splitting a request also splits proportionally its demand of computing resources .indeed , when a demand is split it relies on the availabilities of many vnf instances , decreasing the expected availability of the service , but increasing the chance of finding a feasible assignment in case of congestion .we suppose a multi - failure environment in which vnfs , servers , clusters , and networks may fail together .our aim is to improve the vnf availability by replicating instances on the nfvi .we distinguish between _ master _ and _ slave _ vnfs : the former are active vnf instances , while the latter are idle until masters fail .an example of vnf placement is depicted in [ figure : nfv - abstract - placement ] .each master may be _ protected _ by many slaves - we assume in this article that a slave can protect only a master , must be placed on a different server , and must allocate at least the same amount of computing resources of its master .each master periodically saves and sends its state to its slaves , e.g. using technologies such as the one presented in , in such a way that the latter has always an updated state and can consistently restore the computation in case of failure of the former .we suppose that if a master is unreachable , a searching process is started in order to find a slave that can complete the computation .if the searching process fails and all slaves are unreachable , then the service is considered unavailable .a representation of vnf protection is in [ figure : nfv - dp ] . 
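To keep the algorithmic sketches in the following sections short, a minimal data model of the entities just described (clusters, servers, VNF instances with master/slave roles, and client requests) is assumed below. All class and attribute names are hypothetical and are not taken from the paper or from any existing simulator.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Server:
    name: str
    cluster: str
    capacity: float                      # aggregate computing resources
    used: float = 0.0

    def residual(self) -> float:
        return self.capacity - self.used

@dataclass
class VnfInstance:
    vnf_type: str
    server: Server
    reserved: float                      # resources reserved by this instance
    is_master: bool = True
    slaves: List["VnfInstance"] = field(default_factory=list)   # populated on masters

@dataclass
class Request:
    vnf_type: str
    access_points: List[str]
    demand: float
    # fragments: (master instance, fraction of the demand assigned to it)
    assignment: List[tuple] = field(default_factory=list)
```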
and , while slave are running on and .each link between vnf instances represents the connection between a master and its slave . ]even if vm and vnf resource placement in cloud systems is a recent area of research ( see for a high - level comprehensive study ) , however there already exists orchestrators that are driven by optimization algorithms for the placement , such as .we now present few works in literature studying the optimization problems that arises in this context . [[ placement - of - virtual - machines ] ] placement of virtual machines + + + + + + + + + + + + + + + + + + + + + + + + + + + + + studies the problem of placing vms in datacenters minimizing the average latency of vm - to - vm communications .such a problem is -hard and falls into the category of _ quadratic assignment problems_. the authors provide a polynomial time heuristic algorithm solving the problem in a _`` divide et impera '' _ fashion . in authors deal with the problem of placing vms in geo - distributed clouds minimizing the inter - vm communication delays .they decompose the problem in subproblems that they solve heuristically .they also prove that , under certain conditions , one of the subproblems can be solved to optimality in polynomial time . studies the vm placement problem minimizing the maximum ratio of the demand and the capacity across all cuts in the network , in order to absorb unpredictable traffic burst .the authors provide two different heuristics to solve the problem in reasonable computing time .[ [ placement - of - virtual - network - functions ] ] placement of virtual network functions + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + applies nfv to lte mobile core gateways proposing the problem of placing vnfs in datacenters satisfying all client requests and latency constraints while minimizing the overall network load .instead , in the objective function requires to minimize the total system cost , comprising the setup and link costs . introduces the vnf orchestration problem of placing vnfs and routing client requests through a chain of vnfs . the authors minimize the setup costs while satisfying all client demands .they propose both an ilp and a heuristic to solve such problem .also considers the vnf orchestration problem with vnf switching piece - wise linear latency function and bit - rate compression and decompression operations .two different objective functions are studied : one minimizing costs and one balancing the network usage .[ [ placement - with - protection ] ] placement with protection + + + + + + + + + + + + + + + + + + + + + + + + + in vms are placed with a protection guaranteeing -resiliency , that is at least slaves for each vm .the authors propose an integer formulation that they solve by means of constraint programming . in the recovery problem of a cloud systemis considered where slaves are usually turned off to reduce energy consumption but can be turned on in advance to reduce the recovery time .the authors propose a bicriteria approximation algorithm and a greedy heuristic . in authors solve a problem where links connecting datacenters may fail , and a star connection between vms must be found minimizing the probability of failure .the authors propose an exact and a greedy algorithms to solve both small and large instances , respectively . within disaster - resilient vm placement, proposes a protection scheme in which for each master a slave is selected on a different datacenter , enforcing also path protection . 
in authors solve the problem of placing slaves for a given set of master vms without exceeding neither servers nor link capacities .their heuristic approaches decompose the problems in two parts : the first allocating slaves , and the second defining protection relationships . in a recent work , the authors model the vm availability by means of a probabilistic approach and solve the placement problem over a set of servers by means of a nonlinear mathematical formulation and greedy heuristics . this is the only work offering an estimation of the availability of the system .however , it considers only the availability of the servers , while in our problem we address a more generic scenario : when datacenters are geo - distributed , a client request shall be assigned to the closest datacenter , since longer connections may have a higher failure rate .therefore , the source of the client requests may affect the placement of the vnfs on the nfvi , and must be taken into account in the optimization process and in the estimation of the availability .in the following we propose a formal definition to the ha - vnfp and a mathematical programming formulation .[ [ clusters - and - servers ] ] clusters and servers + + + + + + + + + + + + + + + + + + + + we are given the set of clusters and the set of servers .each server belongs to a cluster , and we define as the set of servers of cluster .we represent the usual distinct types of computing resources ( cpu , ram , ... ) of server by the same global amount of available resources .[ [ virtual - network - functions ] ] virtual network functions + + + + + + + + + + + + + + + + + + + + + + + + + a set of vnf types is given .each vnf instance runs on a single server .each server can host multiple vnf instances , but at most one master for each type .[ [ networks ] ] networks + + + + + + + + an inter - cluster network allows synchronization between clusters , while an access network connects clusters to a set of access points .we are given sets and of logical links connecting clusters , and logical links connecting cluster to access point , respectively . [ [ clients - requests ] ] clients requests + + + + + + + + + + + + + + + + a set of clients requests is given .each request is a tuple of the requested vnf type , a subset of available access points , and the resources demand . [ [ availability ] ] availability + + + + + + + + + + + + taking into account explicit availability in nfvi design becomes necessary to ensure slas .we suppose that the availabilities of each component ( server , cluster , vnf , link ) are given ( see [ tab : data ] ) , each corresponding to the probability that a component is working . [ [ objective - function ] ] objective function + + + + + + + + + + + + + + + + + + all clients requests must be assigned to servers maximizing the availability of the system , we measure as the minimum availability among all requests .c|l|c|l & set of clusters & & set of servers + & set of servers in cluster & & set of vnf types + & set of requests & & set of access points + & set of synchro .links & & set of access links + & set of request access points & + & capacity of server & & demand of request + & cluster of server & & vnf of request + & availability of vnf & & availability of server + & availability of synchro . 
link & & availability of access link + & availability of cluster & + concerning the assignment of requests , we can prove that : [ theo : feasibility - split ] when demand split is allowed and , ha - vnfp has always a feasible solution that can be found in polynomial time .in fact since the requests can be split among servers , the feasibility of an instance can be found applying a next - fit greedy algorithm for the bin packing problem with item fragmentation ( bppif ) : servers can be seen as bins , while requests as items that must be packed into bins .the algorithm iteratively pack items to an open bin . when there is not enough residual capacity , the item is split , the bin is filled and closed , and a new bin is open packing the rest of the item .when requests can be split , such algorithm produces a feasible solution for the ha - vnfp : if a request is assigned to a server , then a master vnf serving such a request is allocated on that server too .the next - fit algorithm runs in and therefore a feasible solution can be found in polynomial time .[ theo : feasibility - nosplit ] the feasibility of a ha - vnfp instance without demand split is a -hard problem .indeed we can see again the feasibility problem as a bin packing problem ( bpp ) .however , without split each item must be fully packed into a single bin .therefore , finding a feasible solution is equivalent to the feasibility of a bpp , which is -hard , and it directly follows that : [ theo : complexity - nosplit ] the ha - vnfp without demand split is -hard .that is , for unsplittable demands , it is -hard finding both a feasible solution and the optimum solution .it is less straightforward to also prove that : [ theo : complexity - split ] the ha - vnfp with demand split is -hard .in fact , let us suppose a simple instance where all components ( servers , clusters , links , ... ) are equal ones and where , which means that there will be no slaves in our placement .the problem can be seen again as a bppif in which the objective is to minimize the number of splits of the item that is split the most : in fact , every time a request is split , the availability of the system decreases . in such scenariosthe best solution is the one in which no request is split at all - however , if we could solve such a problem in polynomial time , then we could solve also the feasibility problem of a bpp in polynomial time , which instead is -hard . therefore , since we can reduce a feasibility problem of bpp to an instance of bppif , and the latter to an instance of ha - vnfp , the ha - vnfp with split is -hard . in the following we propose a mathematical programming formulation of ha - vnfp starting from the definition of the set of the solutions : a _ request assignment _ is a pair indicating the subset of servers running either the master or the slaves of a vnf instance , and the server where the master is placed .we also define as the set of all request assignments . an _ assignment configuration _ ( see [ fig : assignment - configurations ] ) is a set of all request assignments for all the fragments of a request .we define as the set of all assignment configurations , that is , where request is split and assigned to two different master vnfs on servers and . both masters have slaves : the master vnf on server has slaves on servers and , while the one on server has slaves on servers and . 
][ [ availability - computation ] ] availability computation + + + + + + + + + + + + + + + + + + + + + + + + we compute the nfvi availability for a request by means of a probabilistic approach .given a cluster and a set of access points , is the function computing the probability that at least one of the access links is working : given a vnf and a set of servers , is the function computing the probability that at least one instance of vnf is working : given a request and a request assignment , is the function computing the probability that at least one of the instances of is working : \end{gathered}\ ] ] when a request is split , we compute its availability as the probability that all of its parts succeed : we remark that such formula is nonlinear and produces a integer nonlinear programming formulation which can not be solved by common integer solvers like cplex .therefore we propose a mip linearization of such nonlinear formulation in which for each assignment configuration we have a binary variable stating if such configuration is selected in the solution . [ [ variables ] ] variables + + + + + + + + + the following variables are needed : [ [ model ] ] model + + + + + ha - vnfp can be modeled as follows : constraints and ensure that each request is fully assigned and selects an assignment configuration , respectively . constraints and set the allocated resources of masters and slaves , respectively .constraints ensure that servers capacities are not exceeded .constraints impose that at most one assignment configuration is selected for each request .constraints compute the minimum availability .our formulation can model both the ha - vnfp with and without split : in fact by simply setting for each configuration we forbid configurations splitting a request .solving ha - vnfp as a mip using an integer solver works only for small nfvi , since the number of variables is exponential w.r.t the size of the instances .therefore we propose two different heuristic approaches for ha - vnfp : the first is an adaptation of well - known greedy policies for the bpp that will serve as comparison , while the second is a _ variable neighborhood search _ heuristic using different algorithmic operators to explore the neighborhood of a starting point .most of the heuristics for the placement of vms or vnfs are based on a greedy approach , and bpp heuristics are often exploited to obtain suitable algorithms for the placement , such as in .we also exploit bpp heuristics to obtain three different greedy approaches for the ha - vnfp : _ best availability _ , _ best fit _ , and _ first fit _ greedy heuristics .the algorithm , reported in [ algo : greedy ] , starts from an empty initial placement and for each request it looks for a server having enough residual capacity to satisfy the demand .if such a server is found , then the request is assigned to it , otherwise the algorithm fails without finding a feasible solution .however , we can observe that : when and split is allowed , our greedy heuristic always finds a feasible solution .in fact we can always split a request between two servers , as stated also in [ theo : feasibility - split ] .the selection of the server is performed by the procedure which discards the servers without sufficient resources to satisfy demand , and selects a server depending on the chosen policy : * _ best fit _ : the server whose capacity best fits the demand ; * _ first fit _ :the first server found ; * _ best availability _ : the server with the highest availability. 
while the first three policies are well - know for the bpp , the fourth one is designed for the ha - vnfp .master vnfs are placed during the assignment of the requests .then , in a similar way , the algorithm places additional slaves : for each master the algorithm looks for a server having enough capacity for a slave still using procedure . after a server is found, the slave is placed .such a procedure is repeated until no additional slave is placed . assignment of requests create vnf if it does not exists in assign request to server in demand is decreased infeasible add slaves servers of without and its slaves create slave of vnf on server in the _ variable neighborhood search ( vns ) _ is a meta - heuristic that systematically changes the neighborhood within the local search algorithm , in order to escape from local optima . in other words, it starts from an initial solution , applies a local search algorithm until it improves , and then changes the type of local search algorithm applied to change the neighborhood .our vns algorithm explores different neighborhoods and it is initialized with several starting points , each obtained using a different greedy algorithm . , , apply to * break * the main logic of our vns algorithm is sketched in [ algo : vns ] : we generate starting points by using the greedy heuristics of [ sec : greedy ] and we explore their neighborhood for a placement improving the availability .if no improvement can be found , the algorithm switches the neighborhood .indeed applying local search is time expensive , but we can observe that a max - min objective function divides the requests in two sets : a set of requests having an availability equal to the objective function and another set having a better availability .we refer to the former as the set of the _ worst requests _ , since they are the ones whose improvement will also improve the availability of the entire solution . to reduce the computing time and focus our algorithm we found to be profitable to restrict the explored neighborhood to the worst requests only .also , after applying each operator we look for new slaves , as in the greedy procedure [ algo : greedy ] . given two feasible placements , we say that one is improving if it has a higher availability or if it has the same availability but fewer worst requests . in the followingwe describe the neighborhoods of our vns .[ [ vnfs - swap ] ] vnfs swap + + + + + + + + + the first neighborhood consists of swapping vnfs ( see [ fig : vnf - swap - n ] ) : given a vnf , we swap it with a subset of vnfs deployed on a different server . if the placement is improved , then we store the result as the best local change .in general our operator is but we found profitable to set an upper bound of to the cardinality of the set of swapped vnfs , obtaining a operator . 
and are swapped .if a vnf is a master , then all its assigned requests are redirected to the new server.,title="fig : " ] and are swapped .if a vnf is a master , then all its assigned requests are redirected to the new server.,title="fig : " ] and are swapped .if a vnf is a master , then all its assigned requests are redirected to the new server.,title="fig : " ] [ [ slave - vnfs - swap ] ] slave vnfs swap + + + + + + + + + + + + + + + we explore the neighborhood where a slave vnf is removed to free resources for an additional slave of a different master vnf ( see [ fig : b - swap - n ] ) .the complexity of this operator is .[ [ requests - swap ] ] requests swap + + + + + + + + + + + + + we also explore the neighborhood where requests are swapped ( see [ fig : r - swap - n ] ) : given a request we consider a subset of requests assigned to a different server and then swap the former with the latter . similarly to the swap of vnfs , the complexity of this operator is .however , by setting an upper bound of to the cardinality of the swapped requests set we obtain a operator . and are swapped changing the respective servers . when swapping a request , a new vnf instance is created if none existed on the new server ., title="fig : " ] and are swapped changing the respective servers . when swapping a request , a new vnf instanceis created if none existed on the new server ., title="fig : " ] and are swapped changing the respective servers . when swapping a request , a new vnf instanceis created if none existed on the new server ., title="fig : " ] [ [ request - move ] ] request move + + + + + + + + + + + + in the last exploration we consider the neighbors where a request is simply moved to a different server ( see [ fig : r - move - n ] ) .the complexity of this operator is .is moved and assigned to a different server ., title="fig : " ] is moved and assigned to a different server ., title="fig : " ] is moved and assigned to a different server ., title="fig : " ] in principle , even if all the operators polynomial time our vns algorithm is not .however , an upper bound to the number of iterations can be set , obtaining a heuristic .also , in the following we show that our vns requires small computing time for nfvi of limited size and it can be parameterized to end within a time limit , making it suitable for both online and offline planning .we evaluate empirically the quality of our methodologies : the greedy heuristic using four different policies ( best fit , first fit , and best availability ) , the vns algorithm , and the mathematical programming formulation as a mip .however we could run our mip only on small instances with or servers and requests . 
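The overall control flow of the VNS just described can be sketched as below. The neighbourhood operators (VNF swap, slave VNF swap, request swap, request move) are passed in as callables restricted to the worst requests and returning an improving neighbour or None; `evaluate` returns the pair (minimum availability, minus the number of worst requests), so that Python's tuple comparison matches the improvement criterion stated above. This is an illustrative skeleton, not the authors' implementation.

```python
import time

def vns(initial_placements, neighbourhoods, evaluate, max_iters=1000, time_limit=None):
    """Variable neighbourhood search over several greedy starting points."""
    start = time.time()
    best = max(initial_placements, key=evaluate)
    for placement in initial_placements:
        current, k, iters = placement, 0, 0
        while k < len(neighbourhoods) and iters < max_iters:
            if time_limit is not None and time.time() - start > time_limit:
                break
            neighbour = neighbourhoods[k](current)   # local search in neighbourhood k
            if neighbour is not None and evaluate(neighbour) > evaluate(current):
                current, k = neighbour, 0             # improvement: restart from first neighbourhood
                # (after each improving move, additional slaves are placed greedily)
            else:
                k += 1                                # no improvement: switch neighbourhood
            iters += 1
        if evaluate(current) > evaluate(best):
            best = current
    return best
```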
in our frameworkwe first run the algorithms using the demand load balancing policy , and allowing split only if the former fails to assign all the requests .all methodologies are implemented in c++ , while cplex 12.6.3 is used to solve the mip .the simulations have been conducted on intel xeon x5690 at 3.47 ghz .we also produced a graphical dss tool integrating the vns and the greed algorithms ( in python ) working on arbitrary 2-hop topologies and made it available in .we generated a random dataset consisting of instances that differ for the number of requests , total amount of computing resources , and availabilities of the network components .we set the number of vnf types provided by our nfvi to .we assumed an nfvi with clusters ( ) and access points ( ) .each request has a random demand ] .the availabilities of all the components of our nfvi are selected between as in .we generated instances for each combination of : * number of requests ; * number of access points for each request .the number of servers depends on the number of requests , the total amount of the demands , and the random capacities : we generated a set of servers such that their capacities are enough to serve all the demands .note that such condition guarantees the feasibility only when splitting requests is allowed .servers are randomly distributed among all the clusters , in such a way that for each pair of clusters and we have under these conditions we obtained instances with around servers when and servers when .we first evaluate the quality of our vns heuristic against the solutions obtained by the mip solver and the greedy heuristics . in order to studyhow the nfvi behaves on different levels of congestion , we let its computing resources to grow from an initial value of , to , with a step of .due to the exponential number of the variables of our formulation , the mip solver could handle only small instances with requests .all tests have been performed setting a time limit of two hours each and all the algorithms managed to assign all requests without splits . in [ fig : test - time - random ] we show the average computing time of the algorithms : while the mip hits several times the time limit , computing times are negligible for all the heuristics , and vns can be considered as a good alternative for online orchestration when the set of the requests is small .the optimization problem seems harder when the amount of computing resources is scarce : in fact , the average computing time of the mip is closer to the time limit when the overall capacity is less than . instead , with higher quantities of resources the mip always find the optimal solution within the time limit . in [ fig : test - availability-50 ] we show that the results of our vns heuristic are close to the mip ones , while there is a significant gap between the latters and the greedy heuristic ones . in fact , both the mip and the vns succeed in finding solutions with an availability of three nines even with scarce resources .eventually all the algorithms reach an high level of availability when computing resources are doubled . 
in [ fig : availability-50-aps ] we show the variation of the availability when the number of access points for each request increases : in [ fig : availability-50 - 11 ] , [ fig : availability-50 - 22 ] , and [ fig : availability-50 - 33 ] we report the average availability when requests can be routed to the nfvi using , , and access points , respectively .the path protection obtained by using more than one access point substantially increases the level of the availability .however , having more than access points does not provide additional benefits .[ fig : availability-100-vns ] in a second round of experiments , we evaluated how our vns algorithm behave when scaling up the number of requests .however , when the number of servers increases its is not possible to use the mip .therefore , in the following analysis we compare the results of our vns algorithm to the greedy ones only .in addition , since our vns algorithm has not polynomial time complexity , we include in the comparison the results obtained by setting a time limit of at the exploration of each starting solution. we can first observe in [ fig : test - time - big ] that the computing time of the vns algorithm grows exponentially when the number of requests increases . indeed setting a time limitreduces the overall computing time , which is always less than a minute . in [ fig : test - availability - big ]we show that on average there is always a substantial gap between the results obtained by our vns algorithms and the greedy heuristics .we can also observe that on average the vns time limit does not penalize substantially the results .therefore our vns algorithm with time limit can reduce computing times with minimal loss in availability . from [ fig : availability-100 ] to [ fig : availability-500 ] we show the results on instances having from to requests individually .we can observe that the greedy heuristics progressively loose in quality of the solutions , and the gap with the vns algorithms increases with the number of the requests . finally in [ fig : availability - vns ]we show only the results concerning the vns algorithm without time limit and how it behaves when both the number of requests and the overall capacity increase .we can observe that the curves are similar and the quality of the solutions provided by our algorithm is not affected by the increasing of the size of the instances .our vns algorithm always provides placements with an availability of three nines even when resources are scarce , and it always reaches four nines when the capacity is doubled .we defined and modeled the ha - vnfp , that is the problem of placing vnfs in nfvi guaranteeing high availability .we provided a quantitative model based on probabilistic approaches to offer an estimation of the availability of a nfvi .we proved that the arising nonlinear optimization problem is -hard and we modelled it by means of a linear formulation with an exponential number of variables .however , to solve instances with a realistic size we designed both efficient and effective heuristics . 
by extensive simulations ,we show that our vns algorithm finds solution close to the mip ones , but in computing times smaller of orders of magnitude .we highlighted the substantial gap between the availability obtained using classic greedy policies , and the one obtained with a more advanced vns algorithm , when the nfvi is congested .our vns algorithm showed to be a good compromise to solve ha - vnfp in reasonable computing time , proving to be a good alternative for both online and offline planning .we integrated our ha - vnfp algorithms in a graphical simulator made available with tutorial videos in .this article is based upon work from cost action ca15127 ( `` resilient communication services protecting end - user applications from disaster - based failures - recodis '' ) supported by cost ( european cooperation in science and technology ) .this work was funded by the anr reflexion ( contract nb : anr-14-ce28 - 0019 ) and the fed4pmr projects .
virtual network functions as a service (vnfaas) is currently under close study by telecommunications and cloud stakeholders as a promising business and technical direction, in which network functions are provided as a service on a cloud (the nfv infrastructure, nfvi) instead of being delivered as standalone network appliances, so as to improve scalability and reduce maintenance costs. the nfvi hosting the vnfs underpins all the services and applications running on top of it, so a high level of availability must be guaranteed. the availability of a vnfaas depends on the failure rates of its individual components, namely the servers, the virtualization software, and the communication network. the proper assignment of the virtual machines implementing network functions to nfvi servers, and their protection, is therefore essential to guarantee high availability. we model the high availability virtual network function placement (ha-vnfp) problem as that of finding the best assignment of virtual machines to servers while guaranteeing protection by replication. we propose a probabilistic approach to measure the actual availability of the system and design both efficient and effective algorithms that can be used by stakeholders for both online and offline planning.
in recent years simulations of babcock - leighton type flux - transport ( hereafter blft ) solar dynamos in both 2d and 3d demonstrated the crucial role the sun s global meridional circulation plays in determining solar cycle properties .time variations in speed and profile of meridional circulation have profound influence on solar cycle length and amplitude .the recent unusually long minimum between cycles 23 and 24 has been explained by implementing two plausible changes in meridional circulation , ( i ) by implementing the change from a two - celled profile in latitude in cycles 22 to a one - celled profile in cycle 23 , and ( ii ) by performing a vast number of simulations by introducing a flow - speed change with time during the declining phase of each cycle . accurately knowing the speed and profile variations of the meridional circulation would greatly improve prediction of solar cycle features .the meridional circulation has been observed in the photosphere and inside the upper convection zone in the latitude range from the equator to in each hemisphere .however , the speed , pattern and time variations of the circulation at high latitudes and in the deeper convection zone are not known from observations yet . theoretical models of meridional circulation provide some knowledge , but the flow patterns derived from model outputs vary from model to model , primarily because of our lack of knowledge of viscosity and density profiles and thermodynamics in the solar interior , which are essential ingredients in such models . as differential rotation does not change much with time compared to meridional circulation , in this first studywe focus on time variation of meridional flow - speed , using a set - up similar to that used previously .since the meridional circulation is a specified parameter in kinematic blft dynamos and the dynamo solutions depend sensitively on the spatio - temporal patterns of this circulation , we ask the question : can we infer the meridional circulation ( in both space and time ) from observations of the magnetic field ?the purpose of this paper is to describe an ensemble kalman filter ( enkf ) data assimilation in a 2d blft solar dynamo model for reconstructing meridional flow - speed as a function of time for several solar cycles .a subsequent paper will investigate the reconstruction of spatio - temporal patterns of meridional circulation in the solar convection zone .data assimilation approaches have been in use for several decades in atmospheric and oceanic models , but such approaches have been implemented in solar and geodynamo models only recently . introduced a variational data assimilation system into an - type solar dynamo model to reconstruct the -effect using synthetic data .very recently applied a variational data assimilation method to estimate errors of the reconstructed system states in a stratified convection model .a detailed discussion of data assimilation in the context of the geodynamo can be found in . in a sequential data assimilation framework ,a set of dynamical variables at a particular time defines a `` model state '' , which is the time - varying flow speed in the context of the present paper .scalar functions of these state variables that can also be observed using certain instruments are called `` observation variables '' , which are magnetic fields here .more detailed terminology for identifying data assimilation components with solar physics variables is given in 2 . 
in brief , the goal of sequential data assimilation is to constrain the model state at each time - step to obtain model - generated observation variables that are as close to the real observations as possible .the basic framework is based on statistical multidimensional regression analysis , a well - developed method that has been applied in atmospheric and oceanic studies ( see for details ) .the enkf sequential data assimilation framework also allows adding model parameters to the set of model states and estimating values of these parameters that are most consistent with the observations .it is a common practice to perform an `` observation system simulation experiment '' ( osse ) in order to validate and calibrate the assimilation framework for a particular model .an osse generates synthetic observations from the same numerical model that is used in the assimilation . in this casethe numerical model is a simple blft dynamo model containing only a weak nonlinearity in the -quenching term ; thus adding gaussian noise to model - outputs for producing synthetic observations works well . in a more realistic situation for a large system with highly nonlinear processes , such as in numerical weather prediction models, it may be necessary to use a non - gaussian ensemble filter ( see , e.g. ) .a few examples of predicting model parameters using sequential data assimilation techniques have been presented by and in the context of estimating neutral thermospheric composition , and most recently by for estimating thermospheric mass density .an enkf data assimilation framework has recently been applied to a 3d , convection - driven geodynamo model for full state estimation using surface poloidal magnetic fields as observations .we implement enkf sequential data assimilation to reconstruct time - variations in meridional flow - speed for several solar cycles , using poloidal and toroidal magnetic fields as observations .we note certain differences in our case compared to the cases described above , namely , unlike neutral thermospheric composition and thermospheric mass density , the meridional flow - speed is not governed by a deterministic equation .in order to describe the enkf data assimilation methodology , we first identify the data assimilation components with solar physics variables .a physical blft dynamo model , which generates magnetic field data for a given the meridional flow - speed , is called the `` forward operator '' .the time - varying meridional flow - speed at a given time is the `` model state '' and will be estimated by the enkf system .the enkf requires a prediction model that generates a forecast of the meridional flow - speed at a later time given the value at the current time . here, that prediction model simply adds a random draw from a gaussian distribution to the current meridional flow - speed , because the blft dynamo model we are using here is a kinematic dynamo model . in the future ,the results can be further refined by imposing additional physical conditions using a dynamical dynamo model .it is important not to confuse the prediction model with the blft dynamo model that acts as a forward operator for the data assimilation process .figure 1 schematically depicts the data assimilation framework considering an `` ensemble of three members '' of the prediction model state ( meridional flow - speeds ) .sequential assimilation can usually be described as a two - step procedure , a forecast followed by an analysis each time an observation is available . 
in figure 1 , near the label ( a ) denote three different realizations of initial flow - speeds , which are input to the prediction model .we generate `` prior '' estimates of the model state ( denoted by near the label ( b ) ) by using the prediction model to advance the estimates of the flow - speed from the initial time ( ) to the time ( ) at which the first observations of magnetic field are available . in this case , the prediction model just adds a different draw from the gaussian noise distribution to each initial ensemble estimate of the flow speed ( along the solid green arrows in figure 1 ) .the resulting ensemble of flow speeds is referred to as a prior ensemble estimate .the central equation for estimating the prior state is a stochastic equation given by , in which , is a function for generating normalized gaussian random numbers with unit amplitude and unit standard deviation , and the amplitude of the prediction model noise is governed by . thus the evolution of the system from label ( a ) to ( b ) through equation ( 1 ) , denoted by eq1 in figure 1 , constitutes the first step of the two - step procedure in sequential assimilation .the second step includes ( i ) producing observations from outputs of the forward operator and ( ii ) estimating posterior flow - speeds by employing regression among these observations , real observations ( synthetic in osse ) and prior flow - speeds .the forward operator ( blft dynamo in this case ) is denoted by along the solid black arrows , which uses the prior estimates of flow speed to produce a prior `` ensemble of observation estimates '' which are magnetic field outputs .three realizations of magnetic fields from the forward operator are denoted by , , in figure 1 .note that the statement , `` forward operator operating on three model states generates three prior observation estimates '' , is equivalent to the statement , `` blft dynamo running with three meridional flow speeds produces three sets of model outputs of magnetic fields '' . to elucidate the second step of assimilation, we describe the function of the `` filter '' . for real prediction using enkf data assimilation, we would have real data ( observations ) from instruments ; in our osse case it is synthetic , denoted by in figure 1 .synthetic observations are generated by applying the forward operator to a specified time series of meridional flow - speeds ( as shown in figure 2a ) .our goal is to apply an enkf to obtain an improved distribution of estimated flow - speeds ( i.e. `` posterior states '' , ) using the prior ensemble and the observation of magnetic field .the enkf ( black - dotted box in the diagram ) first compares the prior ensemble of observation - estimates to the actual observation and computes increments to the prior observation - estimates .these observation - increments are then regressed using the joint prior - ensemble distribution of flow - speed and magnetic field observations to compute increments for the prior - ensemble of flow - speeds .the enkf can also produce a posterior - ensemble of magnetic field observations ( ) , which can be used for diagnostic purposes .the posterior - ensemble distribution of flow - speeds is the best estimate of the flow - speed distribution at time given the available observations .mean flow - speed at time can be calculated by taking the average over all ensemble members . 
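To make the two-step procedure above concrete, here is a minimal sketch, with placeholder numbers, of one forecast/analysis cycle for a scalar model state (the flow speed) and a single scalar observation (one magnetic-field value). The forecast follows the random-walk prediction model of equation (1); the analysis uses the perturbed-observation form of the EnKF, which differs in detail from the ensemble adjustment filter in DART used in the paper, and the linear forward operator is a stand-in for the BLFT dynamo rather than the model itself.

```python
# One forecast/analysis cycle for a scalar flow speed and one magnetic-field
# observation, sketched with a placeholder forward operator and hypothetical numbers.
import numpy as np

rng = np.random.default_rng(0)


def forecast(flow_speeds, sigma):
    """Equation (1): advance each ensemble member by a Gaussian random increment."""
    return flow_speeds + sigma * rng.standard_normal(flow_speeds.shape)


def analysis(prior_state, prior_obs, y_obs, obs_error_var):
    """Regress observation increments onto the state (perturbed-observation EnKF)."""
    cov_xy = np.cov(prior_state, prior_obs)[0, 1]        # state/observation covariance
    gain = cov_xy / (np.var(prior_obs, ddof=1) + obs_error_var)
    perturbed = y_obs + rng.normal(0.0, np.sqrt(obs_error_var), size=prior_state.size)
    return prior_state + gain * (perturbed - prior_obs)  # posterior ensemble


if __name__ == "__main__":
    forward = lambda v: 0.8 * v                           # placeholder forward operator
    truth = 15.0                                          # hypothetical true flow speed
    obs_err = 0.3
    y_obs = forward(truth) + rng.normal(0.0, obs_err)     # synthetic observation
    ensemble = rng.normal(12.0, 1.5, size=16)             # 16-member initial ensemble
    ensemble = forecast(ensemble, sigma=1.0)              # prior at observation time
    ensemble = analysis(ensemble, forward(ensemble), y_obs, obs_err**2)
    print(f"posterior mean flow speed: {ensemble.mean():.2f} (truth {truth})")
```

The posterior mean moves from the prior ensemble mean towards the value implied by the observation, which is the behaviour sketched between labels (b) and (c) of figure 1.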
to proceed with the reconstruction of flow - speeds at the time of the next observation , , the reconstructed flow - speeds at time are used as the input to the prediction model ( equation ( 1 ) ) , and the same procedure described in the previous paragraphsis repeated .random gaussian noise through equation ( 1 ) prevents degeneration of ensemble .thus after many time steps , the entire time series of the ensemble distribution of flow - speeds can be constructed .time series of the mean flow - speeds can be calculated by taking the average over all ensemble members at each time .however , it may produce a better reconstruction in some cases if one ensemble member , which produced observation ( ) closest to real observation ( ) , is chosen . in order to perform an osse ,it is now necessary to define the `` true state '' flow - speed as a function of time .synthetic observations are generated at selected times by applying the forward operator ( blft dynamo model ) to the true state flow - speed at the appropriate time and adding on a random draw from a specified observational error distribution to simulate instrumental and other errors . as noted above, the forward operator is a kinematic blft dynamo model , described in detail in .the dynamo ingredients are a solar - like differential rotation , a single - celled meridional circulation , a babcock - leighton type surface -effect and a depth - dependent diffusivity ; the mathematical forms are prescribed in .dynamo equations , computation domain and boundary conditions are used as given in . at start the integration of the dynamo - dart system over the first assimilation - step by initializing the forward operator ( ftd model ) with a converged solution for a flow - speed of . in the subsequent assimilation - steps , the solution at the end of previous assimilation - step for each ensemble - member is used as initial condition .we construct a time - varying flow - speed for a span of 35 years ( see figure 2(a ) ) , guided by observations , which has a natural variation of 20 - 40% with respect to the mean flow considered here , i.e. 14 , as shown by the thin black line in figure 2(a ) .this specified time - varying flow - speed is referred to as the true model state .thus , keeping the spatial pattern of meridional circulation fixed , we consider here reconstructing the time - series of the scalar flow - speed . to generate synthetic magnetic field data , we incorporate the time - varying true meridional flow - speed in our blft dynamo , and simulate the time series of idealized magnetic fields in the entire computation domain .then we construct the time series of synthetic magnetic observations by adding synthetic observational error to the simulated idealized magnetic data . figure 2(b ) shows a single observation , which is created from the simulated poloidal field by extracting from the location , and .however , note that the dynamo simulation in grids can give us as many as 20202 synthetic magnetic observations of poloidal and toroidal fields .considering only one observation , as shown in figure 2(b ) , we perform assimilation runs with 16 ensemble members with an observational error of , which means an error of about the ideal magnetic field generated using the true meridional flow speed .to estimate the prior states of flow - speed , we use equation ( 1 ) , in which we set .if the meridional flow - speed varies up to ( i.e. 
for a mean flow - speed of ) during six months , the variation in 15 days can be .thus we chose so that it is large enough to capture the variation in flow - speed within our selected updating time - step of 15 days and also large enough to avoid ensemble collapse , but not so large as to produce unusual departures from cyclic behavior in a flux - transport dynamo .we show in figure 3 the reconstructed meridional flow - speed as a function of time ( panel ( a ) ) and the estimates of the observation computed from the flow speed ensemble after data assimilation ( panel ( b ) ) .we see in figure 3(a ) that the reconstruction is reasonably good except for the time windows between to 18 years and 33 to 35 years during which we find error in the reconstructed flow speed .observations indicate error in the measurement of meridional flow speed .the inset of figure 3(a ) reveals , for an initial guess , far - off from the true - state , the reconstructed states asymptotically converges towards the true - state .even though they oscillate around the truth with large amplitude , the oscillations damp with time .figure 3(b ) shows histograms for the normalized distribution of flow - speeds before ( in cyan ) and after ( in magenta ) the analysis stage , along with true - state ( blue ) , for the time instances of 5 , 6.9 , 10.1 and 27.5 years ( marked by vertical lines in figure 3(a ) ) during assimilation . for this case with 16 ensemble members we chose 10 bins for an optimal display .four time instances are chosen in such a way as to present the following four different phases of reconstructions : ( i ) the distribution of prior and posterior states has small overlap ( top left frame of figure 3(b ) ) , ( ii ) distribution is sharply peaked in one bin each for prior and posterior ( top right frame ) , ( iii ) distribution is broad and has significant overlap ( bottom left frame ) and ( iv ) distribution has no overlap at all ( bottom right frame ) .however , we can clearly see the successful performance of the enkf which reveals that in all cases the analysis phase brings the posterior distribution ( magenta bars ) closer to true - state ( blue ) .figure 3(c ) reveals that the assimilated magnetic observation ( blue solid curve ) is well - reproduced when compared with real observation ( red - dashed curve ) .this is not surprising , because it has already been noted that short - term small fluctuations in flow speed do not significantly influence the overall evolution of global magnetic fields generated by a babcock - leighton dynamo ( see , e.g. ) .we also plot the innovation ( ) for this observation ( black curve ) and the cumulative innovation ( orange curve ) .the innovation at the analysis - step ( ) is defined as the signed difference between real and reconstructed observations , whereas the cumulative innovation ( ) at analysis - step is the normalized sum of norm of innovation vectors over all the previous analysis - steps . and being small ,both have been ten - fold magnified to superimpose on observations .we see that , at three different time instances ( , 17 and 27 years ) , the innovation is relatively large ; this is because at the initial guess is far - off from the truth , and at and 27 years there are relatively sharp changes in flow - speed .the cumulative innovation asymptotes to zero as expected , implying no bias of the system . 
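As a hedged illustration of the diagnostics in figure 3(c), the small sketch below computes the signed innovation at each analysis step and a running-mean bias diagnostic built from it; the paper normalizes a sum of innovation norms, so the quantity used here is a closely related stand-in, and the series in the demo are synthetic.

```python
# Innovation and a running-mean bias diagnostic for a reconstructed observation
# series; the demo series are synthetic and the normalization is illustrative.
import numpy as np


def innovations(y_real, y_reconstructed):
    """Signed innovation d_k = y_real_k - y_reconstructed_k at each analysis step."""
    return np.asarray(y_real, dtype=float) - np.asarray(y_reconstructed, dtype=float)


def cumulative_innovation(innov):
    """Running mean of the signed innovations; it tends to zero when unbiased."""
    return np.cumsum(innov) / np.arange(1, innov.size + 1)


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    real = np.sin(np.linspace(0.0, 6.0, 50))          # stand-in observation series
    recon = real + rng.normal(0.0, 0.05, 50)          # hypothetical reconstruction
    d = innovations(real, recon)
    print(f"final cumulative innovation: {cumulative_innovation(d)[-1]:+.4f}")
```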
to investigate the possibility of further improvement in the quality of the enkf reconstruction , we examine the consequences of three important aspects of the enkf : variation in observational error , size of ensemble and number of observations . we perform convergence tests by estimating the error in the reconstructed state as functions of observational errors , size of ensemble and number of observations .we define the error as the root mean square of the difference between the reconstructed state and the true - state .the assimilation interval is chosen to be 15 days in the present case ; denoting every 15 days by the index , the true state and the reconstructed state at the assimilation step by and respectively , we define the error as , , in which is the total number of indices for the 15-days assimilation intervals during the entire time - span of 35 years .figure 4 shows the rms errors in reconstructed flow speed as functions of observational error ( figure 4(a ) ) , size of ensemble ( figure 4(b ) ) and number of observations ( figure 4(c ) ) .we see in figure 4 that the error decreases systematically and asymptotes for certain values of the observational errors ( 1% ) , size of ensemble ( 192 members ) and number of observations ( 180 ) . in the case of more than one observation, we include more poloidal field observations ( synthetic ) from various locations at and near the surface , and more toroidal field observations at and near the bottom of the convection zone .while we vary observational errors in panel ( a ) , we use observational error in panels ( b ) and ( c ) .panels ( a , b ) show convergence to typical hyperbolic patterns , as is often seen in numerical convergence tests ( see , e.g. figure 3 of dikpati ( 2012 ) ) . but panel ( c ) shows that the reconstruction can be improved systematically only when there are more than a certain number of observations ( four in our case ) . in all these experiments, we used the same enkf scheme .bias or systematic error in an osse reconstruction may arise primarily because of the following assumptions made in the assimilation system : ( i ) the evolution of the ensemble spread is linear , ( ii ) the ensemble is sufficiently large , ( iii ) the forward operators are linear . though these assumptions are roughly valid , they are not strictly true .but in general , the resulting systematic errors are small unless the assimilation is applied in a large , highly nonlinear system like a numerical weather prediction model . with the knowledge gained from the convergence experiments shown in figure 4, we consider a case with 192 ensemble members and 180 magnetic observations , consisting of equal numbers of poloidal and toroidal magnetic fields at various latitude and depth locations .each of these observations has an observational error of .we present the reconstructed flow - speed from this assimilation in figure 5 . the reconstructed flow - speed ( red curve )matches very well with the true state , and thus the true state plotted in blue is essentially hidden behind red and green curves . 
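For concreteness, a minimal implementation of the error metric defined above (the root mean square difference between the reconstructed and true flow speeds over all 15-day assimilation steps of the 35-year experiment) might look as follows; the time series in the demo are illustrative only, and the exact normalization used in the paper may differ slightly.

```python
# RMS reconstruction error over the assimilation steps, with illustrative series.
import numpy as np


def reconstruction_error(v_reconstructed, v_true):
    """RMS difference over the N assimilation steps."""
    diff = np.asarray(v_reconstructed, dtype=float) - np.asarray(v_true, dtype=float)
    return np.sqrt(np.mean(diff ** 2))


if __name__ == "__main__":
    t = np.linspace(0.0, 35.0, 852)                            # ~852 15-day steps in 35 years
    true_speed = 14.0 + 3.0 * np.sin(2.0 * np.pi * t / 11.0)   # illustrative true state
    recon = true_speed + np.random.default_rng(3).normal(0.0, 0.5, t.size)
    print(f"rms error = {reconstruction_error(recon, true_speed):.3f} (flow-speed units)")
```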
it is not realistic to expect an observational error of as small as 1% ; figure 5 presents here an illustrative example of one of the best possible reconstructions .in fact , several additional assimilation runs indicate that the reconstruction is still good if the observational error does not exceed 40% , and reasonably good when 90 out of 180 observations have up to errors .but the reconstruction fails when all observations have more than 40% error .what we have demonstrated so far is that the time - dependent amplitude of meridional circulation , having one flow - cell per hemisphere , can be reconstructed by implementing enkf data assimilation with synthetic data . in realitywe do not know from observational data whether they were produced by a dynamo operating with a single - celled flow in each hemisphere , or with a more complex flow profile , or with a combination of complex time - variations in all possible dynamo ingredients . in order to investigate the outcome from this methodwhen the assumption made about the flow profile is wrong , we carry out an experiment to reconstruct flow - speed assuming a one - cell flow , while using observations of magnetic fields produced from a flow pattern that has two cells in latitude in each hemisphere ( see for prescription of a two - celled flow ) .we obtain synthetic data for a case with two flow cells in latitude .using 192 ensemble members and 180 magnetic observations with 1% error in each of these observations , we estimate the time variation in flow - speed by assuming a single - celled flow , and plot in figure 6(a ) .we find that the reconstruction is relatively poor , as expected .however , with a closer look we can see that the reconstructed speed is trying to approach the true - state from a lower value for the first 12 years and from a higher value for the next 13 years .when the trend in the true - state reverses near the year 27 , the osse has greater difficulty in converging on it .figure 6(b ) shows the innovation for two typical observations of poloidal fields near the surface , at ( i.e. at low latitude near the equator ) and at ( at high latitude near the pole ) .much larger innovation at high latitude than at low latitude reflects the fact that the spatial pattern is more erroneous at polar latitudes .the cummulative innovation ( orange curve ) does no longer asymptote to zero ; this implies bias in the system , due to incorrect assumption of flow - pattern . in the future, we will extensively explore the reconstruction of spatial pattern of meridional circulation .we have demonstrated through osses that enkf data assimilation into a babcock - leighton flux - transport dynamo can successfully reconstruct time - varying meridional circulation speed for several cycles from observations of the magnetic field . to obtain the best reconstruction , we have fed the assimilation system of 192 ensemble members with 180 observations with 1% observational error .however , a reasonably good reconstruction can be obtained when all observations have up to error , or half of the observations have up to errors , but the rest of them have much smaller errors . 
noted that the response time of a babcock - leighton dynamo model to changes in meridional flow is months , so the relatively poor reconstruction with only one observation with 33% observational error and 16 ensemble members can be improved if this information about the dynamo model s response time to flow changes can be exploited .a forthcoming paper will investigate how to use this response time during assimilation . throughout this paper , we used an assimilation interval of 15-days .the reason for this choice is as follows .recently have done a very thorough assessment of the predictability of solar flux - transport dynamos .predictability of a model refers to the time it takes for two solutions that start from slightly different values of either initial conditions or input parameters to diverge from each other to the point that they forecast substantially different outcomes . found an -folding time of about 30 years .the flux - transport dynamo model we use here is physically very similar to theirs .therefore we judge that the -folding time for our model will be similar .it is clear that the time interval for updating the data in our osse s should be much less than the predictability limit of the model , but it should be long compared to the integration time step ( a few hours ) , and consistent with the time scale for changes in axisymmetric solar observations , which is one solar rotation .it should also be shorter than the `` response time '' of our dynamo model ( months , see ) to a sudden change in inputs .we have therefore chosen our updating interval to be 15 days ; we plan to test the sensitivity of assimilation to changes in the updating interval from 15 days to longer ( for instance , 30 days ) and shorter ( days ) .in this study we have demonstrated what it takes to reconstruct the amplitude variations with time of a one - celled meridional circulation of fixed profile with latitude and radius . in realitythe sun s meridional circulation may not be a one - celled pattern it may undergo changes both in profile and speed with time .we have demonstrated that the innovation in observation - forecast can be very large when the assumption about the spatial structure of the flow - pattern is incorrect , and it can be even larger where the departure in assumed spatial pattern from actual pattern in flow is larger .thus an obvious next step with our data assimilation system would be to attempt to reconstruct the spatio - temporal pattern of meridional flow .our ultimate goal is to perform assimilation runs from actual observations instead of synthetic data , in cases of reconstruction as well as future predictions . 
from this studywe can build confidence about the power of enkf data assimilation for reconstructing not only the flow speed but also the profile of meridional circulation in the entire convection zone of the sun in the future .we thank nancy collins and tim hoar for their invaluable help with assimilation tools and software .we extend our thanks to two reviewers for many helpful comments and constructive suggestions , which helped significantly improve the paper .the dart / dynamo assimilation runs have been performed on the yellowstone supercomputer of nwsc / ncar under project number p22104000 , and all assimilation tools used in this work are available to the public from .this work is partially supported by nasa grant nnx08aq34 g .dhrubaditya mitra was supported by the european research council under the astrodyn research project no .227952 and the swedish research council under grant 2011 - 542 and the hao visitor program .national center for atmospheric research is sponsored by the national science foundation . , j. l. , wyman , b. , zhang , s. & hoar , t. , assimilation of surface pressure observations using an ensemble filter in an idealized global atmospheric prediction system , j. atmos ., 62 , 2925 - 2938 , 2005 , a. , hulot , g. , jault , d. , kuang , w. , tangborn , a. , gillet , n. , canet , e. , aubert , j. & lhuillier , f. , an introduction to data assimilation and predictability in geomagnetism , space sci rev , 155 , 247 - 291 , 2010 , j. , bogart , r. s. , kosovichev , a. g. , duvall , t. l. , jr . ,hartlep , h. , detection of equatorward meridional flow and evidence of double - cell meridional circulation inside the sun , apj lett . , 774 , l29 , 1 - 6 , 2013
Accurate knowledge of the time variation in meridional flow speed and profile is crucial for estimating a solar cycle's features, which are ultimately responsible for space climate variations. However, no consensus has yet been reached between observations and theories about the Sun's meridional circulation pattern. By implementing an Ensemble Kalman Filter (EnKF) data assimilation in a Babcock-Leighton solar dynamo model using the Data Assimilation Research Testbed (DART) framework, we find that the best reconstruction of the time variation in meridional flow speed is obtained when ten or more observations are used with an updating time of 15 days and a observational error. Increasing the ensemble size from 16 to 160 improves the reconstruction. Comparison of the reconstructed flow speed with the "true state" reveals that EnKF data assimilation is very powerful for reconstructing meridional flow speeds and suggests that it can be implemented for reconstructing spatio-temporal patterns of the meridional circulation.
a rational difference equation is a nonlinear difference equation of the form where the initial conditions are such that the denominator never vanishes for any .+ consider the equation where all the parameters and the initial conditions and are arbitrary complex number . this second order rational difference equation eq.([equation : total - equationa ] )is studied when the parameters are real numbers and initial conditions are non - negative real numbers in . in this present articleit is an attempt to understand the same in the complex plane . + here , a very brief review of some well known results which will be useful in order to apprehend the behavior of solutions of the difference equation ( [ equation : total - equationa ] ) .let where be a continuously differentiable function .then for any pair of initial conditions , the difference equation with initial conditions + + then for any _ initial value _ , the difference equation ( 1 ) will have a unique solution .+ + a point is called _ * equilibrium point * _ of eq.([equation : introduction ] ) if the _ linearized equation _ of eq.([equation : introduction ] ) about the equilibrium is the linear difference equation where for and . the _ characteristic equation _ of eq.(2 ) is the equation the following are the briefings of the linearized stability criterions which are useful in determining the local stability character of the equilibrium of eq.([equation : introduction ] ) , .let be an equilibrium of the difference equation . * the equilibrium of eq .( 2 ) is called * locally stable * if for every , there exists a such that for every and with we have for all . * the equilibrium of eq .( 2 ) is called * locally stable * if it is locally stable and if there exist a such that for every and with we have . *the equilibrium of eq .( 2 ) is called * global attractor * if for every and , we have . *the equilibrium of equation eq .( 2 ) is called * globally asymptotically stable / fit * is stable and is a global attractor . *the equilibrium of eq .( 2 ) is called * unstable * if it is not stable . *the equilibrium of eq .( 2 ) is called * source or repeller * if there exists such that for every and with we have .clearly a source is an unstable equilibrium .* result 1.1 : ( clark s theorem ) * the sufficient condition for the asymptotic stability of the difference equation ( 1 ) is following difference equation is considered to be studied here . where all the parameters are complex number and the initial conditions and are arbitrary complex numbers . + we will consider three different cases of the eq.([equation : introduction ] ) which are as follows : + by the change of variables , , the difference equation reduced to the difference equation where and . by the change of variables , , the difference equation reduced to the difference equation where and . by the change of variables , , the difference equation reduced to the difference equation where and .+ without any loss of generality , we shall now onward focus only on the three difference equations ( 6 ) , ( 7 ) and ( 8) .in this section we establish the local stability character of the equilibria of eq.([equation : total - equationa ] ) in three difference cases as stated in the section 2 .the equilibrium points of eq.(6 ) are the solutions of the quadratic equation eq.(6 ) has the two equilibria points and respectively. 
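A hedged numerical companion to the analysis that follows is sketched below: the first function iterates a generic second-order rational difference equation with complex parameters from two complex initial conditions, and the second checks Clark's sufficient condition for local asymptotic stability of an equilibrium. The generic map, its parameters, and the linearization coefficients are illustrative assumptions; they are not claimed to be equations (6)-(8) or their exact linearizations.

```python
# Iterating a generic second-order rational map z_{n+1} = (a + b*z_n)/(c + z_{n-1})
# with complex parameters, and checking Clark's condition |p| + |q| < 1 for a
# linearization z_{n+1} = p*z_n + q*z_{n-1} about an equilibrium. All values are
# hypothetical stand-ins used only for illustration.
import numpy as np


def iterate_rational(a, b, c, z0, z1, n_steps):
    """Iterate z_{n+1} = (a + b*z_n) / (c + z_{n-1}), guarding the denominator."""
    orbit = [complex(z0), complex(z1)]
    for _ in range(n_steps):
        denom = c + orbit[-2]
        if denom == 0:
            raise ZeroDivisionError("denominator vanished; the orbit leaves the domain")
        orbit.append((a + b * orbit[-1]) / denom)
    return orbit


def clark_stable(p, q):
    """Clark's sufficient condition for local asymptotic stability."""
    return abs(p) + abs(q) < 1.0


if __name__ == "__main__":
    orbit = iterate_rational(a=0.5 + 0.2j, b=1.0 - 0.3j, c=1.0 + 0.0j,
                             z0=0.1 + 0.1j, z1=0.2 - 0.1j, n_steps=5)
    print("first orbit points:", [f"{z:.3f}" for z in orbit])
    p, q = 0.4 + 0.2j, -0.3 + 0.1j        # hypothetical linearization coefficients
    roots = np.roots([1.0, -p, -q])       # roots of lambda^2 - p*lambda - q = 0
    print("Clark condition satisfied:", clark_stable(p, q))
    print("characteristic roots inside unit disk:", all(abs(r) < 1 for r in roots))
```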
the linearized equation of the rational difference equation(6 ) with respect to the equilibrium point is with associated characteristic equation the following result gives the local asymptotic stability of the equilibrium of the eq .the equilibriums of eq.(6 ) is + + locally asymptotically stable if the zeros of the characteristic equation ( 10 ) has two zeros which are and . therefore by _clark s theorem _ , the equilibrium is _ locally asymptotically stable _ if the sum of the modulus of two coefficients is less than .therefore the condition of the polynomial ( 10 ) reduces to .the linearized equation of the rational difference equation ( 6 ) with respect to the equilibrium point is with associated characteristic equation the equilibriums of eq.(6 ) is + + locally asymptotically stable if proof the theorem follows from _clark s theorem _ of local asymptotic stability of the equilibriums .the condition for the local asymptotic stability reduces to .here is an example case for the local asymptotic stability of the equilibriums .+ for and the equilibriums are and . for the equilibrium , the coefficients of the characteristic polynomial ( 10 ) are and with same modulus .therefore the condition as stated in the _ theorem 3.1 _ does not hold . therefore the equilibrium is _ unstable_. + for the equilibrium , the coefficients of the characteristic polynomial ( 12 ) are and with same modulus . therefore the condition as stated in the _ theorem 3.2 _ is hold good .therefore the equilibrium is _ locally asymptotically stable_. + it is seen that in the case of real positive parameters and initials values , the positive equilibrium of the difference equation ( 6 ) is globally asymptotically stable . but the result is not holding well in the complex set up .the equilibrium points of eq.(7 ) are the solutions of the quadratic equation the eq.(7 ) has only the zero equilibrium .the linearized equation of the rational difference equation(7 ) with respect to the zero equilibrium is with associated characteristic equation the following result gives the local asymptotic stability of the zero equilibrium of the eq .the zero equilibriums of the eq .( 7 ) is non - hyperbolic .the zeros of the characteristic equation ( 14 ) has two zeros which are and .therefore by definition , the zero equilibrium is non - hyperbolic as the modulus of one zero is .it is nice to note that in the case of real positive parameters and initials values , the zero equilibrium of the difference equation ( 7 ) is globally asymptotically stable for the parameter .but in the case of complex , the zero equilibrium is non - hyperbolic as we have seen the previous theorem .the equilibrium points of eq.(8 ) are the solutions of the quadratic equation the eq.(8 ) has two equilibriums which are and .the linearized equation of the rational difference equation(8 ) with respect to the zero equilibrium is with associated characteristic equation the following result gives the local asymptotic stability of the zero equilibrium of the eq .the zero equilibriums of the eq .( 8) is locally asymptotically stable if and repeller if .the zeros of the characteristic equation ( 15 ) has two zeros which are and .therefore by definition , the zero equilibrium is locally asymptotically stable if and unstable ( repeller ) if .hence the required is followed .the linearized equation of the rational difference equation(8 ) with respect to the equilibrium is with associated characteristic equation the following result gives the local asymptotic stability of the 
equilibrium of the eq .the zero equilibriums of the eq .( 8) is locally asymptotically stable if the equilibrium of the characteristic equation ( 18 ) would be _ locally asymptotically stable _ if the sum of the modulus of the coefficients of the characteristic equation ( 18 ) is less than 1 .that is by _clark s theorem _, , that is here is an example case for the local asymptotic stability of the equilibriums .+ for ) and ( ) the equilibriums are and . for the equilibrium , the coefficients of the characteristic polynomial ( 18 ) are and with modulus and respectively .therefore the condition as stated in the _ theorem 3.5 _ hold good . therefore the equilibrium is _ locally asymptotically stable_. + in the case of real positive parameters and intimal values of the difference equation ( 8) , the positive equilibrium is locally asymptotically stable if and but in the complex set , it is encountered through the example above is that the equilibrium is locally asymptotically stable even though and .in this section we would like to explore the boundedness of the solutions of the three difference equations ( 6 ) , ( 7 ) and ( 8) .+ now we would like to try to find open ball such that if and then for all .in other words , if the initial values and belong to then the solution generated by the difference equations would essentially be within the open ball . [result : positive - local - stability2 ] for the difference equation ( 6 ) , for every , if and then provided let be a solution of the equation eq.(6 ) .let be any arbitrary real number .consider .we need to find out an such that for all .it is follows from the eq.(6 ) that for any , using triangle inequality for in order to ensure that , ( assuming ) it is needed to be that is . therefore the required is followed .[ result : positive - local - stability2 ] for the difference equation ( 7 ) , for every , if and then provided let be a solution of the equation eq.(7 ) .let be any arbitrary real number .consider .we need to find out an such that for all .it is follows from the eq.(7 ) , that for any , using triangle inequality for therefore , .we need to be less than .therefore , is followed .[ result : positive - local - stability2 ] for the difference equation ( 8) , for every , if and then and provided let be a solution of the equation eq.(8 ) .let be any arbitrary real number and .consider .we need to find out an such that for all .it is follows from the eq.(8 ) , that for any , using triangle inequality for therefore , in order to ensure that , it is needed to be that is therefore the required is followed .a solution of a difference equation is said to be _ globally periodic _ of period if for any given initial conditions .solution is said to be _ periodic with prime period _ if p is the smallest positive integer having this property . + we shall first look for the prime period two solutions of the three difference equations ( 6 ) , ( 7 ) and ( 8) and their corresponding local stability analysis .let , be a prime period two solution of the difference equation . 
then and .this two equations lead to the set of solutions ( prime period two ) except the equilibriums as and .+ let , be a prime period two solution of the equation ( 6 ) .we set then the equivalent form of the difference equation ( 6 ) is let t be the map on to itself defined by then is a fixed point of , the second iterate of .+ where and .clearly the two cycle is locally asymptotically stable when the eigenvalues of the jacobian matrix , evaluated at lie inside the unit disk .+ we have , where and + and + + now , set in particular for the prime period solution , + + , we shall see the local asymptotic stability for some example cases of parameters and .the general form of and would be very complected .consider the prime period two solution of the difference equation ( 6 ) , corresponding two the parameters and .+ + in this case , and .therefore , by the linear stability theorem ( ) the prime period solution is _ locally asymptotically stable_. let , be a prime period two solution of the difference equation . then and .this two equations lead to the set of solutions ( prime period two ) except the equilibriums as .+ let , be a prime period two solution of the equation ( 7 ) .we set then the equivalent form of the difference equation ( 7 ) is let t be the map on to itself defined by then is a fixed point of , the second iterate of .+ where and .clearly the two cycle is locally asymptotically stable when the eigenvalues of the jacobian matrix , evaluated at lie inside the unit disk .+ we have , where and + and + + now , set in particular for the prime period solution , + + , we shall see the local asymptotic stability for some example cases of parameters and .the general form of and would very complected .consider the prime period two solution of the difference equation ( 7 ) , corresponding two the parameters and .+ + in this case , and .therefore , by the linear stability theorem ( ) the prime period solution is _ locally asymptotically stable_. let , be a prime period two solution of the difference equation . then and .this two equations lead to the set of solutions ( prime period two ) except the equilibriums as .+ let , be a prime period two solution of the equation ( 8) .we set then the equivalent form of the difference equation ( 8) is let t be the map on to itself defined by then is a fixed point of , the second iterate of .+ where and .clearly the two cycle is locally asymptotically stable when the eigenvalues of the jacobian matrix , evaluated at lie inside the unit disk .+ we have , where and + and + + now , set for the prime period solution , , and . 
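Before the stability conclusion stated next, the eigenvalue test being applied can also be carried out numerically, as in the following sketch: it builds the map T(u, v) = (v, f(v, u)) for a second-order recurrence, differentiates its second iterate by finite differences at a candidate period-two point, and checks whether both eigenvalues of the complex Jacobian lie inside the unit disk. The recurrence f and the cycle point below are hypothetical stand-ins, not the paper's equation (8) and its exact period-two solution.

```python
# Numerical eigenvalue test for a period-two cycle of z_{n+1} = f(z_n, z_{n-1}):
# finite-difference Jacobian of the second iterate T(T(u, v)) at a candidate point.
import numpy as np


def second_iterate_jacobian(f, u, v, h=1e-7):
    """Finite-difference Jacobian of the second iterate of T(u, v) = (v, f(v, u))."""
    def T2(a, b):
        mid = (b, f(b, a))
        return np.array((mid[1], f(mid[1], mid[0])), dtype=complex)
    base = T2(u, v)
    col_u = (T2(u + h, v) - base) / h
    col_v = (T2(u, v + h) - base) / h
    return np.column_stack([col_u, col_v])


if __name__ == "__main__":
    f = lambda zn, znm1: (1.0 + 0.5j + zn) / (1.0 - 0.2j + znm1)   # hypothetical map
    phi, psi = 0.3 + 0.1j, -0.2 + 0.4j                             # hypothetical cycle point
    eigvals = np.linalg.eigvals(second_iterate_jacobian(f, phi, psi))
    print("eigenvalues of the second-iterate Jacobian:", np.round(eigvals, 4))
    print("locally asymptotically stable cycle:", all(abs(l) < 1 for l in eigvals))
```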
therefore , by the linear stability theorem ( )the prime period solution is _ locally asymptotically stable _ if and only if .it turns out that .in other words , the condition reduces to and which is same condition as it was for the real set up .this is something which is absolutely new feature of the dynamics of the difference equation ( 1 ) which did not arise in the real set up of the same difference equation .computationally we have encountered some chaotic solution of the difference equation ( 8) for some parameter values which are given in the following table .+ the method of lyapunov characteristic exponents serves as a useful tool to quantify chaos .specifically lyapunav exponents measure the rates of convergence or divergence of nearby trajectories .negative lyapunov exponents indicate convergence , while positive lyapunov exponents demonstrate divergence and chaos .the magnitude of the lyapunov exponent is an indicator of the time scale on which chaotic behavior can be predicted or transients decay for the positive and negative exponent cases respectively . in this present study, the largest lyapunov exponent is calculated for a given solution of finite length numerically .+ from computational evidence , it is arguable that for complex parameters and which are stated in the following table the solutions are chaotic for every initial values .* interval of lyapunav exponent * + , & + , & + , & + , & + the chaotic trajectory plots including corresponding complex plots are given the following fig .1 . in the fig .1 , for each of the four cases ten different initial values are taken and plotted in the left and in the right corresponding complex plots are given . from the fig .1 , it is evident that for the four different cases the basin of the chaotic attractor is neighbourhood of the centre of complex plane .does the difference equation have higher order periodic cycle ?if so , what is the highest periodic cycle ?find out the set of all parameters and for which the difference equation ( 8) has chaotic solutions .find out the subset of the of all possible initial values and for which the solutions of the difference equation are chaotic for any complex parameters and .does the neighbourhood of is global chaotic attractor ? if not , are there any other chaotic attractors ?in continuation of the present work the study of the difference equation where , , , , and are all convergent sequence of complex numbers and converges to , , , , and respectively is indeed would be very interesting and that we would like to pursue further .also the most generalization of the present rational difference equation is where and are delay terms and it demands similar analysis which we plan to pursue in near future .the author thanks _ dr . pallab basu _ for discussions and suggestions .saber n elaydi , henrique oliveira , jos manuel ferreira and joo f alves , _ discrete dyanmics and difference equations , proceedings of the twelfth international conference on difference equations and applications _ , world scientific press , 2007 .s. atawna , r. abu - saris , e. s. ismail , and i. hashim , stability of nonhyperbolic equilibrium solution of second order nonlinear rational difference equation , _ journal of difference equations _volume 2015 ( 2015 ) , article i d 486985 .e. camouzis , e. chatterjee , g. ladas and e. p. quinn , on third order rational difference equations open problems and conjectures , _ journal of difference equations and applications _ , 10(2004 ) , 1119 1127 .grove , y. 
kostrov and s.w .schultz , on riccati difference equations with complex coefficients , _ proceedings of the 14th international conference on difference equations and applications , istanbul , turkey _ , march 2009 .
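Referring back to the chaotic solutions and Lyapunov exponents discussed above, the following sketch estimates the largest Lyapunov exponent of a second-order complex recurrence by the standard two-orbit renormalization (Benettin) method; the map, parameters, and initial values are illustrative assumptions only and are not the tabulated chaotic cases.

```python
# Two-orbit (Benettin) estimate of the largest Lyapunov exponent of a second-order
# complex recurrence z_{n+1} = f(z_n, z_{n-1}); all parameter values are hypothetical.
import numpy as np


def largest_lyapunov(f, z0, z1, n_steps=3000, n_discard=200, d0=1e-8):
    state = np.array([z0, z1], dtype=complex)
    shadow = state + np.array([d0, 0.0], dtype=complex)
    total, count = 0.0, 0
    for n in range(n_steps):
        state = np.array([state[1], f(state[1], state[0])])
        shadow = np.array([shadow[1], f(shadow[1], shadow[0])])
        d = np.linalg.norm(shadow - state)
        if not np.isfinite(d) or d == 0.0:
            break                                     # orbit left the domain
        if n >= n_discard:                            # ignore the initial transient
            total += np.log(d / d0)
            count += 1
        shadow = state + (shadow - state) * (d0 / d)  # renormalize the separation
    return total / count if count else float("nan")


if __name__ == "__main__":
    # Hypothetical parameters and initial values; whether the orbit is bounded,
    # periodic, or chaotic depends entirely on this choice.
    f = lambda zn, znm1: (0.5 + 1.1j) * zn / (1.0 + znm1)
    lam = largest_lyapunov(f, 0.1 + 0.1j, 0.2 - 0.1j)
    print(f"estimated largest Lyapunov exponent: {lam:.4f} (positive suggests chaos)")
```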
The dynamics of the second order rational difference equation with complex parameters and arbitrary complex initial conditions is investigated. In the complex setting, the local asymptotic stability and boundedness of this difference equation are studied in detail. Using computations, we exhibit several interesting characteristics of the solutions of this equation that do not arise when the same equation is considered with positive real parameters and initial conditions. The existence of chaotic solutions of the difference equation, also shown in this article, is an entirely new feature of the complex setting. Some of these observations lead us to pose several open problems regarding chaotic and higher order periodic solutions and the global asymptotic convergence of this equation.